August 2024 was packed with news, most of which revolved around AI. In this news digest, you’ll discover the key shifts happening in major companies like Google, OpenAI, and Apple. We’ll explore whether AI integration in apps is on the rise or if there are forces actively resisting it. Plus, you’ll get the latest updates on AI regulation and the debates shaping the future of this powerful technology. Ready to dive in?
Key shifts in leadership at OpenAI, Google, and Apple
Wow, wow, wow. Big shifts at OpenAI (again?). This time, John Schulman, a co-founder of OpenAI, announced on X that he is leaving the company to join rival AI startup Anthropic, led by Daniela Amodei and Dario Amodei, where he aims to focus on AI alignment research. But that’s not all. Greg Brockman, OpenAI’s president and co-founder, is also taking an extended leave until the end of the year to “relax and recharge.” On top of that, Peter Deng, a product leader at OpenAI, exited the company earlier this year. With Schulman’s departure, only three of the original 11 OpenAI founders remain. Schulman emphasized that his decision was personal and not due to any dissatisfaction with OpenAI’s support for AI safety research.
Meanwhile, OpenAI has appointed Zico Kolter, a Carnegie Mellon professor specializing in AI safety, to its board of directors. His expertise is seen as crucial for sharpening OpenAI’s focus on safety, especially after the recent departure of several key figures from the company’s safety team. Kolter will also join the Safety and Security Committee, which oversees safety measures for all OpenAI projects, and his addition is expected to bolster the company’s efforts to ensure that AI development benefits humanity.
Interesting things are happening at Google, too. Character.AI co-founder and CEO Noam Shazeer is returning to Google, where he previously led the team that developed the LaMDA language model until his departure in October 2021. Shazeer, along with some Character.AI employees, will join Google DeepMind, while Character.AI’s general counsel, Dominic Perella, will serve as interim CEO. Google has also signed a non-exclusive agreement with Character.AI to continue using its technology, providing increased funding for the startup’s future growth.
Apple, for its part, announced that Chief Financial Officer Luca Maestri will step down on January 1, 2025. He will be succeeded by Kevan Parekh, Apple’s current VP of Financial Planning and Analysis, who has been with the company for 11 years. Maestri, who has served as CFO since 2014, will transition to a different role within Apple, reporting directly to CEO Tim Cook and continuing to lead the corporate services team.
AI integration is growing in apps
In the era of AI, more and more big companies are making AI part of their core operations, and in August several tech giants rolled out new features to enhance user experiences across their platforms. Amazon-owned Audible is testing “Maven,” an AI-powered search tool designed to help users find audiobooks more efficiently through personalized recommendations based on natural-language queries. The move is part of Audible’s broader exploration of AI, which also includes curated collections and AI-generated review summaries.
Similarly, Automattic, the company behind WordPress.com, has introduced “Write Brief with AI,” a tool aimed at helping bloggers write more clearly and concisely. This tool, currently in beta, simplifies content by suggesting shorter sentences and improving readability, complementing an AI writing assistant released by Automattic last year.
Reddit is also embracing AI with a soon-to-be-tested feature that will provide AI-generated summaries at the top of search result pages, making it easier for users to discover content and connect with new communities. This feature leverages both first-party and third-party AI technology and is set to launch later this year.
Meanwhile, Amazon continues to expand its AI capabilities with the release of Titan Image Generator v2 on its AWS Bedrock platform. This upgraded model offers enhanced features like image editing, background removal, and the ability to generate variations using reference images.
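For readers who like to tinker, here is a minimal sketch of what a text-to-image request to the new model on Bedrock might look like, assuming the boto3 SDK. The model ID, payload fields, and response shape follow AWS’s published Titan image conventions, but treat them as assumptions rather than a definitive recipe.

```python
import base64
import json

import boto3  # AWS SDK for Python; Bedrock access must be enabled on the account

# Bedrock runtime client (region is an assumption; pick one where the model is available)
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Text-to-image request; "amazon.titan-image-generator-v2:0" is the expected v2 model ID
payload = {
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {"text": "a watercolor lighthouse at dusk"},
    "imageGenerationConfig": {
        "numberOfImages": 1,
        "width": 1024,
        "height": 1024,
        "cfgScale": 8.0,
    },
}

response = client.invoke_model(
    modelId="amazon.titan-image-generator-v2:0",
    body=json.dumps(payload),
)

# The response body is a JSON stream containing base64-encoded images
result = json.loads(response["body"].read())
with open("lighthouse.png", "wb") as f:
    f.write(base64.b64decode(result["images"][0]))
```

The editing features mentioned above, such as background removal, would presumably follow the same invoke_model pattern with a different taskType in the payload.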
In the audio space, Amazon Music has rolled out “Topics,” a new AI-powered feature that helps users browse podcast episodes by specific topics identified within the content. This feature, available on iOS and Android in the U.S., uses AI to analyze podcast transcripts and descriptions, supported by human curation.
Google, meanwhile, has launched the fourth-generation Nest Learning Thermostat, its first major update to the device in nearly a decade. The new model boasts a sleeker design, a larger customizable display, and AI-driven “micro-adjustments” that learn from user habits to optimize energy savings. The updated thermostat is available for pre-order and will hit the market on August 20.
Finally, Google introduced new AI-powered features with the launch of its Pixel 9 series, enhancing photo editing and image generation. The new tools include advanced editing options like auto framing and AI-generated backgrounds, as well as new apps for managing and searching screenshots and creating AI-powered images on the device.
Procreate rejects AI: the start of a new trend?
However, not every company wants to follow the AI-integration trend. Procreate, the popular art app, is taking a stand by keeping artificial intelligence out of its products. The company’s CEO says that AI stifles creativity and makes art less authentic, since AI-generated creations are derived from existing works. Procreate’s commitment to remaining AI-free might signal a new movement in which originality and the human touch are increasingly valued in a world dominated by AI.
Can AI scientists go rogue? Sakana AI’s latest innovation sparks alarming concerns
What is one of the biggest fears people have about AI? Probably that it might eventually become so intelligent that it will rebel and take over humanity. But does it even need to be super-intelligent to pose such a threat?
Sakana AI, a Tokyo-based startup, has developed a groundbreaking AI model called the “AI Scientist.” Unlike typical AI models that assist with specific tasks, this one can independently generate research ideas, conduct experiments, analyze results, and even write complete research papers – all with minimal human intervention. The AI Scientist is designed to mimic the creative, iterative process of scientific discovery, which typically requires human intuition and reasoning.
The excitement and the concern both stem from the same thing: the AI Scientist works almost entirely on its own, and that unprecedented level of autonomy raises significant questions about the accuracy, reliability, and ethical implications of letting an AI conduct research without human supervision. So, does an AI model really need to be super-intelligent to go out of control?
Why everyone’s talking about SB 1047: the debate over California’s AI Bill
California’s Senate Bill 1047 (SB 1047), also known as the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” is a legislative effort designed to regulate the development and deployment of advanced AI systems, particularly those requiring substantial training resources. The bill, introduced by Senator Scott Wiener, mandates rigorous safety protocols, including third-party audits and a “kill switch” to deactivate models if they pose a critical risk. It also requires developers to report safety incidents and ensure third parties cannot misuse their models.
The bill targets large-scale AI models, such as those developed by OpenAI and Google, that could be misused for catastrophic purposes like creating weapons or launching massive cyberattacks.
Supporters of SB 1047, including AI pioneers Geoffrey Hinton and Yoshua Bengio, Elon Musk, and the Center for AI Safety, argue that the bill is crucial for preventing future AI-related disasters. They stress the importance of proactive regulation to avoid repeating past mistakes in tech policy, such as the delayed responses to social media and data privacy. These advocates believe that by implementing stringent safety measures now, the bill can protect the public and ensure the responsible progression of AI development, safeguarding the industry’s future.
Opponents of the bill, such as venture capital firm a16z (Andreessen Horowitz), argue that the bill’s thresholds are arbitrary and could harm startups by stifling innovation as AI technology becomes more expensive. The Chamber of Progress, a tech trade group representing companies like Google, Apple, and Amazon, claims that the bill would restrict free speech and push innovation out of California. Congressman Ro Khanna, representing Silicon Valley, warns that the bill could be ineffective, overly punitive towards small businesses, and detrimental to California’s innovation culture.
As SB 1047 advances through the California State Assembly, it has ignited a broader debate about AI regulation in the U.S., with potential implications for other states considering similar legislation.