Welcome to this week’s edition of the news digest, where we cover the latest news across industries such as AI and the social sector, from the APA’s first-ever health advisory on social media use for kids to the exciting new updates Google announced at I/O. So, buckle up and prepare for an informative and thrilling ride.
Social
APA issues its first-ever health advisory on social media use for kids: Guidelines for safe and responsible online behavior
The American Psychological Association (APA) has published a set of recommendations to safeguard children from the possible risks of social media in its first-ever health advisory on the topic. The APA does not condemn social media; rather, it emphasizes that social media is neither intrinsically beneficial nor harmful to children and should be used cautiously. The advisory criticizes algorithms that steer young users toward potentially harmful content, calls for regular screening of children for “problematic social media use,” and urges parents not to let social media interfere with their children’s sleep and physical activity. The APA also suggests that young users limit the time they spend comparing themselves to others on social media applications, “especially around beauty- or appearance-related content.”
The release of the American Psychological Association’s health advisory is promising news for parents concerned about the potential harms of social media on their children’s mental health. As a parent, do you think the APA’s guidelines will be helpful in protecting children from those harms?
AI
OpenAI peels back the layers of language models with tool to explain their inner workings
Why do language models sometimes invent facts out of whole cloth? How does the lack of transparency in language models impact their reliability and accuracy? How do large language models like OpenAI’s ChatGPT actually work?
OpenAI is developing a tool to increase the interpretability of large language models (LLMs), such as its own ChatGPT. The tool uses a language model to figure out what the components of other, architecturally simpler LLMs are doing. It breaks a model down into its individual neurons, runs text sequences through the model being evaluated to find the neurons that activate most strongly, and then uses GPT-4 to generate a natural-language explanation of what each neuron is doing, along with a score for how well that explanation matches the neuron’s actual behavior. While the tool has a long way to go before it’s practical, it could one day be used to improve an LLM’s performance and cut down on bias or toxicity.
“We hope that this will open up a promising avenue to address interpretability in an automated way that others can build on and contribute to. The hope is that we have good explanations of not just what neurons are responding to but overall, the behavior of these models — what kinds of circuits they’re computing and how certain neurons affect other neurons,” shares Wu of OpenAI.
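For the curious, here is a minimal Python sketch of that explain-and-score loop. The data shapes, prompt wording, and the simple correlation-based score are our own illustrative assumptions rather than OpenAI’s actual implementation, and the `ask_gpt4` helper is a stand-in you would wire up to a real GPT-4 API call.

```python
# Sketch of the "explain neurons with a bigger model" loop described above.
# Helper names, prompts, and the scoring shortcut are illustrative assumptions.
import statistics
from dataclasses import dataclass


@dataclass
class NeuronRecord:
    layer: int
    index: int
    top_snippets: list[str]        # text sequences on which this neuron fired strongly
    top_activations: list[float]   # the corresponding activation strengths


def ask_gpt4(prompt: str) -> str:
    """Placeholder for a call to GPT-4 (e.g. via the OpenAI chat API)."""
    return "fires on tokens related to dates and years"   # canned answer for the sketch


def explain_neuron(record: NeuronRecord) -> str:
    """Ask the explainer model to summarise what the neuron responds to."""
    examples = "\n".join(
        f"{act:.2f}\t{snippet}"
        for act, snippet in zip(record.top_activations, record.top_snippets)
    )
    prompt = (
        "Below are text snippets and how strongly one neuron in a small language "
        "model activated on them. In one sentence, what is this neuron detecting?\n"
        f"{examples}"
    )
    return ask_gpt4(prompt)


def score_explanation(record: NeuronRecord, simulated: list[float]) -> float:
    """Crude stand-in for scoring: correlation between the neuron's real activations
    and activations simulated from the explanation alone."""
    real = record.top_activations
    mean_r, mean_s = statistics.fmean(real), statistics.fmean(simulated)
    cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(real, simulated))
    norm = (sum((r - mean_r) ** 2 for r in real)
            * sum((s - mean_s) ** 2 for s in simulated)) ** 0.5
    return cov / norm if norm else 0.0


record = NeuronRecord(
    layer=3, index=417,
    top_snippets=["In 1998 the treaty", "by March 2021,", "the year 1847 saw"],
    top_activations=[4.1, 3.8, 3.5],
)
explanation = explain_neuron(record)
print(explanation, score_explanation(record, simulated=[4.0, 3.6, 3.7]))
```

In OpenAI’s published setup, the explainer model also simulates the neuron’s activations given only the explanation, and the score reflects how closely those simulated activations track the real ones.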
Anthropic unveils ‘Constitutional AI’ as the future of model development
It looks like Anthropic, a startup with a new approach to training text-generating AI systems, is making waves in the AI community.
Anthropic, led by Dario and Daniela Amodei, is a startup that aims to raise $5 billion over four years to develop robust text-generating AI systems like OpenAI’s ChatGPT. The company has developed a technique, called “constitutional AI,” for imbuing systems with “values” defined by a “constitution,” with the aim of making the behavior of those systems easier to understand and adjust as needed.
The technique gives a text-generating AI model a set of principles to use when judging the text it generates. These principles guide the model toward the behaviors they describe, such as being “nontoxic” and “helpful.” Anthropic claims this is superior to the method used to train systems such as ChatGPT, which relies on human contractors comparing two responses from a model and selecting the one they feel is better according to some principle.
Anthropic says its approach is less biased and more transparent. The principles used by Anthropic come from various sources, including the U.N. Declaration of Human Rights, Apple’s terms of service, and values identified by AI labs like Google DeepMind. Anthropic says it plans to explore ways to produce a constitution more democratically and offer customizable constitutions for specific use cases.
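To make the idea concrete, here is a minimal Python sketch of the critique-and-revise loop at the heart of constitutional AI. The principles, prompt wording, and `generate()` placeholder are illustrative assumptions on our part; Anthropic’s actual pipeline also layers a reinforcement-learning-from-AI-feedback stage on top of this supervised step.

```python
# Minimal sketch of the critique-and-revision step behind "constitutional AI".
# Principles, prompts, and generate() are placeholders, not Anthropic's real pipeline.

PRINCIPLES = [
    "Choose the response that is least likely to be harmful or toxic.",
    "Choose the response that is most helpful, honest, and harmless.",
]


def generate(prompt: str) -> str:
    """Placeholder for a call to the text-generating model being trained."""
    return f"[model output for: {prompt[:40]}...]"


def constitutional_revision(user_prompt: str) -> str:
    """Draft a response, then have the model critique and revise it against each principle."""
    response = generate(user_prompt)
    for principle in PRINCIPLES:
        critique = generate(
            f"Principle: {principle}\nResponse: {response}\n"
            "Point out any way the response violates the principle."
        )
        response = generate(
            "Rewrite the response so it better satisfies the principle.\n"
            f"Principle: {principle}\nCritique: {critique}\nOriginal: {response}"
        )
    # The revised (prompt, response) pairs become fine-tuning data, replacing much of
    # the human preference labelling used to train systems like ChatGPT.
    return response


print(constitutional_revision("Explain how to pick a strong password."))
```

The design point is that the model critiques itself against written principles, so changing the system’s behavior means editing the constitution rather than re-collecting human preference labels.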
Everything you need to know about Google I/O 2023
On May 10th, Google held its annual I/O conference keynote, marking the beginning of the developer conference season. Google I/O keynote day is always a thrilling event for developers, packed with rapid-fire announcements that unveil the company’s latest products and features. Here are six of the most important announcements from the keynote in our easy-to-read list. So, let’s dive into the latest and greatest from Google I/O 2023:
- Google Maps announced “Immersive View for Routes,” a feature that displays all route information in one place, such as traffic simulations, bike lanes, and parking. It’s rolling out in 15 cities and is available on Android and iOS. Google also launched an Aerial View API and Photorealistic 3D Tiles for developers to create immersive map experiences.
- Google Search has introduced new features to improve image search transparency and credibility, including an “About This Image” function and labeling of AI-generated images. Google is also experimenting with an AI-powered conversational mode that suggests next steps and provides information related to search queries. A new “Perspectives” filter has also been introduced to give users a broader range of content sources.
- Google Photos is launching a new AI-powered editing tool called Magic Editor that allows for complex edits like repositioning subjects and filling gaps in photos. It will be available on select Pixel devices, but it’s unclear whether it will be free or part of a subscription.
- Google has launched PaLM 2, its latest large language model (LLM), which will power its updated chat tool, Bard. PaLM 2 features improved support for writing and debugging code and is better at common-sense reasoning, mathematics, and logic. The model was trained on more than 100 languages and 20 programming languages, including JavaScript and Python. Google has also launched Codey, a specialized model for coding and debugging.
- Google has launched new AI capabilities for its Workspace productivity suite, such as automatic table and image generation in Sheets, Slides, and Meet. Users can describe what they want, and Sheets will provide personalized templates. Slides and Meet can create custom backgrounds, and Google Docs now has smart chips for locations and status. Google plans to add a chat interface to Docs and automatically add speaker notes to Slides. These features are branded “Duet AI” and are integrated into Google’s Workspace and Cloud services.
- Google has introduced a new translation service called the “Universal Translator,” which can translate videos into different languages while synchronizing the speaker’s lips with the translated speech. Advances in AI make this possible: the experimental service takes an input video, transcribes the speech, translates it, regenerates the speech in the new language, and then edits the video so that the speaker’s lips match the new audio. The service was demonstrated on an online course lecture originally recorded in English.
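Google hasn’t published an API for Universal Translator, but the pipeline it described maps naturally onto four stages. The Python sketch below is purely illustrative: every function is a hypothetical placeholder for the corresponding speech-recognition, translation, speech-synthesis, and lip-syncing component.

```python
# Sketch of the video-dubbing pipeline described above: transcribe, translate,
# re-synthesize speech, then re-time the video so lips match the new audio.
# All functions are hypothetical placeholders; Google has not released a public API.

def transcribe(video_path: str) -> str:
    """Stand-in for automatic speech recognition on the source video."""
    return "Welcome to this lecture on photosynthesis."


def translate(text: str, target_lang: str) -> str:
    """Stand-in for machine translation into the target language."""
    return f"[{target_lang} translation of] {text}"


def synthesize_speech(text: str, speaker_ref: str) -> bytes:
    """Stand-in for text-to-speech that mimics the original speaker's voice."""
    return b"\x00" * 16


def lip_sync(video_path: str, dubbed_audio: bytes) -> str:
    """Stand-in for editing the video so lip movements match the dubbed audio."""
    return video_path.replace(".mp4", "_dubbed.mp4")


def universal_translate(video_path: str, target_lang: str) -> str:
    transcript = transcribe(video_path)
    translated = translate(transcript, target_lang)
    audio = synthesize_speech(translated, speaker_ref=video_path)
    return lip_sync(video_path, audio)


print(universal_translate("lecture_english.mp4", target_lang="es"))
```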