Daniela Amodei, the President and co-founder of Anthropic, is an inspiring figure in the field of artificial intelligence. Her dedication to building reliable, interpretable, and steerable AI systems has led her on a remarkable journey marked by numerous achievements and accolades. 

Before co-founding Anthropic, she worked as a Risk Manager at Stripe, overseeing core operations, user policy, and underwriting. She later became the VP of Safety and Policy at OpenAI, where she played a pivotal role in ensuring the safety and ethical use of AI technologies. In 2021, she teamed up with a group of like-minded people to create Anthropic, an AI safety and research company that is pushing the boundaries of what is possible with artificial intelligence.

Daniela’s story is one of perseverance, passion, and purpose, and it serves as a testament to what can be achieved when we dare to dream big and follow our hearts.

The Inspiring Career of Daniela Amodei

Daniela Amodei at the Cerebral Valley AI Summit

Family

Daniela Amodei is known for her close bond with her brother, Dario Amodei, a bond that extends into their professional lives, as the two have worked together extensively. Dario Amodei, co-founder and CEO of Anthropic, holds a degree in physics from Stanford and a Ph.D. in biophysics from Princeton and has a rich background in research and AI.

According to his LinkedIn profile, Dario has an impressive resume, including previous roles as VP of Research at OpenAI and Senior Research Scientist at Google. He worked at OpenAI from 2016 to 2020, where he oversaw the creation of the company’s GPT-2 and GPT-3 language models.

The siblings both worked at OpenAI and share a common conviction about the importance of AI safety. In their view, their company’s mission is to create AI systems that are controllable, understandable, and trustworthy. They aim to achieve this by training large generative models and conducting safety research, with the ultimate goal of ensuring that these models are safe and aligned with human values, making them more beneficial to society.

Education

Outstanding achievements and accolades mark Daniela Amodei’s career. She completed her Bachelor of Arts in English Literature, Politics, and Music at the prestigious University of California, Santa Cruz, graduating with top honors and awards including summa cum laude, University Honors, College Honors, Literature Department Honors, and the Dean’s List.

Daniela’s passion for music also earned her a partial-tuition scholarship, and her exceptional talent as a soloist helped her win the 2008 Concerto competition. She was also selected as the winner of the Senior Thesis Colloquium, showcasing her academic abilities. These accomplishments testify to Daniela’s dedication and commitment to excellence and foreshadow the remarkable career that followed.

Career path

According to her LinkedIn profile, Daniela Amodei started her career in 2010, working in business development at the IRIS Center at the University of Maryland, College Park. She worked there from January 2010 to August 2011, one year and eight months, managing grant processes and recruiting key personnel for international development projects related to poverty assessment, monitoring and evaluation, conflict mitigation, microfinance, and direct cash transfers.

After her tenure at the IRIS Center, she joined Conservation Through Public Health as a fellow from September 2011 to January 2012. While there, she worked with the CEO to develop a strategic grant-making plan and delivered development training to senior staff. She also co-led training for over 50 community health workers at rural field sites in Kampala, Uganda.

She then spent ten months on the Matt Cartwright for Congress campaign, first as Deputy Field Director from February to May 2012. In that role, she recruited over 80 volunteers and personally made 11,000 voter calls in key districts, efforts that contributed to defeating a 20-year incumbent by 12 points in a nationally covered primary. She then served as Field Director from June to November 2012, creating and executing a field strategy that helped win the general election by 21 points and leading a team of field organizers and volunteers throughout the campaign.

From January to May 2013, she worked in the Washington, D.C. metro area for the U.S. House of Representatives, managing all scheduling for Congressman Matt Cartwright’s office. She was also responsible for recruiting interns and contributing to articles and press releases, including a piece published in U.S. News & World Report.

Having demonstrated strong managerial skills, Daniela joined Stripe in 2013, when it was still a young company.

Career at Stripe & OpenAI

When she joined Stripe, her career took off like a rocket. Starting as a solo recruiter, she helped grow the team from 45 to 300 people and quickly became Lead Technical Recruiter. With a close rate of over 75%, she hired 92 engineers across 11 teams, working closely with the CTO, VP of Engineering, founders, and team leads to develop and execute the company’s technical recruiting strategy.

But she didn’t stop there. As a Risk Program Manager, she analyzed over 7,000 cases of potential fraud, credit, and policy violations with an impressive 90% quality rate and a 97% customer satisfaction rate. Perhaps this was the moment Daniela realized how much safety mattered to her, a conviction that would later shape a product built with safety as a priority.

As a Risk Manager for core operations, user policy, and underwriting, she led three teams totaling 26 people and achieved a 72% decrease in loss rate from its peak, the lowest rate in company history. Working cross-functionally with machine learning, data science, engineering, legal, finance, and vendor management, she launched Risk’s first vendor program, developed and iterated on fraud, credit, and policy features for models, and partnered with machine learning teams to identify and remove thousands of violating users, driving violation rates down by more than 60%. Her leadership and expertise improved the company’s customer satisfaction rating, volume, and policies, setting industry standards and making Stripe a safer and better place for everyone.

After leaving Stripe, Daniela joined OpenAI, where she held several roles over two years and three months. As an Engineering Manager, she led two technical teams, managing the manager of one technical safety team while working closely with researchers, technical leaders, and engineers. She then took on the role of VP of People, overseeing recruitment, people programs, DEI, learning and development, and the incubation of a new business operations team, among other responsibilities. Later, as VP of Safety and Policy, she supervised the technical Safety and Policy functions and managed the Business Operations team.

Building safe AI for a better world

After departing from OpenAI, Dario Amodei and his sister Daniela embarked on a new entrepreneurial journey by founding their own company, Anthropic, in 2021. Inspired by their passion for artificial intelligence and driven by their vision of making it safer for the world, they attracted at least nine other talented people from OpenAI who shared their vision and enthusiasm.

“It’s a set of people who have known each other for a long time and have been aware of thinking in arguments about AI safety and have worked on them over the years,” Daniela and Dario Amodei shared in an interview with the Future of Life Institute.

How is Anthropic different from OpenAI?

Have you ever thought about how an AI system generates rhyming couplets? What thought process does it employ to accomplish this task? Which parameters can be adjusted to make the generated output more or less romantic or sad, or to limit its use of specific diction?

Anthropic is addressing precisely this issue: AI models, despite their immense potential, remain largely opaque and difficult to understand. One such model is OpenAI’s ChatGPT, built on the GPT series of language models developed during the Amodeis’ time at OpenAI. It is a remarkably adaptable language system that can generate persuasive text on almost any topic and in a wide range of styles, yet much is still unknown about how it functions and how it arrives at its outputs. This is why the Amodei siblings decided to work on improving our understanding of these AI systems.

Thus, seven former OpenAI employees who shared a common vision founded Anthropic. Their focus, then and now, is advancing AI research with an emphasis on safety and alignment, aiming to address the ethical challenges of AI development.

“Anthropic’s goal is to make the fundamental research advances that will let us build more capable, general, and reliable AI systems, then deploy these systems in a way that benefits people,” says Dario Amodei, the CEO of Anthropic.

What does Anthropic company do?

Now that we know Anthropic’s main goal is safety and understanding how large language models operate, let’s briefly review the company’s journey from its founding in 2021 to the present. Here’s a look at its history and developments:

2021

2021 marked the inception of Anthropic. The company received a substantial investment of $124 million from an impressive list of investors. The round was led by Jaan Tallinn, one of the co-founders of Skype, and included prominent figures such as James McClave, Dustin Moskovitz, Eric Schmidt, and the Center for Emerging Risk Research, among others. PitchBook, a platform that monitors private investment data, later reported that the company had raised a total of $704 million at a valuation of $4 billion.

2022

In April 2022, Anthropic announced it had secured $580 million in funding. During the summer, Anthropic completed the training of the first version of its AI, Claude. However, the company chose not to release it immediately, citing the need for further internal safety testing and a commitment to preventing a potentially hazardous race to develop increasingly powerful AI systems.

2023

In March 2023, Anthropic introduced the first version of Claude, an advanced AI language model designed to compete with leading counterparts. Claude was created using “constitutional AI,” a technique based on 10 guiding principles with a strong emphasis on safety and alignment, ensuring its responses were accurate, ethical, and beneficial to users. Capable of understanding and generating human-like text, Claude could assist with various tasks such as drafting documents, answering questions, and providing creative content, all while adhering to stringent safety protocols to minimize potential risks.
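
For readers curious about what “constitutional AI” looks like in practice, below is a minimal, purely illustrative Python sketch of its self-critique loop: the model drafts an answer, critiques the draft against each principle, and rewrites it accordingly. The generate function and the two principles are hypothetical placeholders for this sketch, not Anthropic’s actual constitution or API.

```python
# Minimal sketch of the "critique and revise" phase of constitutional AI.
# `generate` is a stand-in for any text-generation call; the principles
# below are illustrative placeholders, not Anthropic's actual constitution.
from typing import Callable

PRINCIPLES = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that could encourage dangerous or illegal activity.",
]

def constitutional_revision(prompt: str, generate: Callable[[str], str]) -> str:
    # 1. Draft an initial answer to the user's prompt.
    answer = generate(prompt)
    # 2. For each principle, ask the model to critique its own draft,
    #    then rewrite the draft to address that critique.
    for principle in PRINCIPLES:
        critique = generate(
            f"Critique this response according to the principle below.\n"
            f"Principle: {principle}\nResponse: {answer}"
        )
        answer = generate(
            f"Rewrite the response so it addresses the critique.\n"
            f"Critique: {critique}\nOriginal response: {answer}"
        )
    # 3. The revised answers can then serve as training data, so the final
    #    model learns to follow the principles without needing this loop.
    return answer
```

In the published technique, this supervised critique-and-revise step is followed by a reinforcement learning phase that uses AI-generated preference labels, but the loop above captures the core idea of a model checking its own output against written principles.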

Meanwhile, Anthropic’s funding kept growing. In May 2023, the company raised $450 million in a Series C round led by Spark Capital, with contributions from Google, Salesforce Ventures, Sound Ventures, and Zoom Ventures, followed by $100 million from South Korean telecom giant SK Telecom in August. Then, in September 2023, Amazon committed to investing up to $4 billion in Anthropic, with an initial investment of $1.25 billion for a minority stake and an option to increase its total investment to $4 billion.

In July 2023, Anthropic introduced Claude 2, building on the strengths of the original Claude while addressing some of its limitations. Claude 2 excelled in areas such as creativity, humor, and adherence to prompts through its “constitutional AI” approach. It could answer trivia questions more accurately, tell more nuanced jokes, and provide context-aware responses more effectively than its predecessor. Despite these enhancements, Claude 2 still faced challenges such as mathematical errors, programming limitations, and issues with factual accuracy, much like its predecessor and other AI systems such as ChatGPT.

2024

2024 started strong for Anthropic, as Amazon completed its planned $4 billion investment by adding $2.75 billion to the initial $1.25 billion it had invested in September 2023.

In May 2024, Anthropic welcomed two significant additions to its team. Mike Krieger, co-founder of Instagram, joined as the company’s first-ever Chief Product Officer. At the same time, Jan Leike, an AI researcher who had recently left OpenAI over AI safety concerns, joined Anthropic to lead a new team dedicated to advancing AI alignment and safety. (It looks like he definitely joined like-minded people.)

Earlier, in March 2024, Anthropic had launched the Claude 3 family of AI models: Claude 3 Opus, Sonnet, and Haiku. These models are designed to give enterprises a range of options based on power, speed, and cost: Opus for complex reasoning, Sonnet for versatility and speed, and Haiku for rapid responses. Compared with their predecessors, the Claude 3 models are designed to be about twice as likely to answer questions correctly, with a focus on minimizing the chances of generating incorrect information.

These models emphasize safety and reliability with constitutional AI, making them ideal for enterprise use in sectors such as finance and healthcare. Anthropic continues to prioritize ethical AI development and supports a diverse, multi-model approach to meet different business requirements in the AI landscape.

“Why we did this model family is because we wanted to give enterprise businesses as much choice as possible to really be able to toggle between what is the most important kind of element for their business or even in particular for their use cases.

“We anticipate some of our customers may use, you know, multiple models just for different applications. Many of the businesses that we build really require a deep amount of trust with the customers that they’re ultimately building for. And that’s really been kind of a guiding factor for us as we have been training these models for our customers,” says Daniela Amodei in a recent interview with Bloomberg Technology.

Bottom line

Daniela Amodei stands out as a visionary leader in artificial intelligence, driven by a deep commitment to the safety and ethical development of AI technologies. As the co-founder and President of Anthropic, she has spearheaded efforts to create AI systems that are not only advanced but also interpretable and steerable, ensuring they align with human values and societal benefits.

Her leadership style is characterized by a strategic focus on building trust and fostering collaboration within her teams and with external partners. Daniela’s dedication to AI safety is not just a professional mission but a personal passion, reflected in her meticulous approach to research and development at Anthropic.

Throughout her career, Daniela has driven meaningful progress in AI. Her work is not just about technological advancement but also about ensuring these advancements are implemented in ways that are safe, ethical, and beneficial for all.