Mungomash LLC
Claude Leadership

2021 – 2026

Anthropic Leadership and Governance

Who founded Anthropic, how the company is governed, and how the funding arc shaped Claude. The 2021 OpenAI exodus, the seven cofounders, the Long-Term Benefit Trust that sits above the board, the multibillion-dollar Google and Amazon commitments, and the senior team additions that have shaped the company since.

Sibling pages: Claude Versions · Claude Lawsuits.

The seven cofounders — January 2021

Anthropic was incorporated by a group of senior researchers and engineers who had just left OpenAI over stated disagreements about the direction of safety and commercialization. Seven of the cofounders are publicly identified across Anthropic's own statements and contemporaneous reporting; several other early employees joined within weeks and are sometimes counted as cofounders depending on the source.

Dario Amodei
CEO
Prior · VP Research, OpenAI

Led OpenAI's research organization through the GPT-2 / GPT-3 era. Now Anthropic's primary public voice through Senate testimony, the “Machines of Loving Grace” essay (October 2024), and major podcast appearances.

Daniela Amodei
President
Prior · VP Safety & Policy, OpenAI

Runs the company day-to-day — people, finance, operations, and the commercial side of the business. Previously led safety and policy at OpenAI; before that, communications and operations at Stripe.

Tom Brown
Cofounder, technical staff
Prior · Lead author, GPT-3 paper

First author on the “Language Models are Few-Shot Learners” paper that introduced GPT-3. The technical lead on much of Anthropic's pre-training infrastructure.

Sam McCandlish
Cofounder, technical staff
Prior · Research, OpenAI

Coauthor on the original “Scaling Laws for Neural Language Models” paper with Jared Kaplan. Continues to shape the research roadmap on capability scaling and evaluations.

Jared Kaplan
Chief Science Officer
Prior · Johns Hopkins / OpenAI

Theoretical physicist by training; first author on the “Scaling Laws” paper that established the empirical relationship between compute, data, and language-model loss that the entire frontier-model industry now plans against.

Jack Clark
Cofounder, Policy
Prior · Policy Director, OpenAI

Author of the long-running “Import AI” newsletter. Handles much of Anthropic's policy-side communication — congressional testimony, AI-safety-institute liaison, public-comment letters.

Chris Olah
Cofounder, Interpretability
Prior · OpenAI / Google Brain / Distill

Leads the interpretability team. The team's published work — mechanistic interpretability, sparse autoencoders, the “Towards Monosemanticity” line — is the most-cited piece of Anthropic research after the Scaling Laws / Constitutional AI papers.

Other early employees frequently named alongside the seven include Tom Henighan, Andy Jones, Nick Joseph, and Ben Mann. Anthropic's own framing is variable on the exact “cofounder” line; the seven above are the consistently-named core.

Timeline event kinds

Founding — incorporation, formation milestones
Governance — LTBT, board, structural changes
Funding — investment rounds and strategic commitments
Leadership — senior hires and departures

Anthropic chronological timeline

Aug 2025
Governance
$1.5B Bartz settlement
Largest copyright settlement in U.S. history
Anthropic settled Bartz v. Anthropic for $1.5 billion in installments through September 2027. The settlement is a corporate-governance milestone alongside the legal one — it set a multi-year cash schedule the company has to plan around.

For per-case detail — plaintiffs, the Alsup ruling, the four-installment payment schedule, the claim mechanics — see the dedicated Bartz row on the Claude Lawsuits page.

Why it shows up here: the $1.5B obligation through 2027 is large enough relative to Anthropic's run-rate revenue that it partly shapes the cash structure of subsequent funding rounds. Coverage of the settlement and of the funding arc routinely cite each other.

2024 – 2025
Funding
Tens-of-billions valuation rounds
Lightspeed, Menlo, Salesforce Ventures, Fidelity
Anthropic raised additional rounds across 2024 and 2025 valuing the company in the tens of billions of dollars. Lightspeed and Menlo Ventures led the most-reported rounds; valuation figures have moved fast and are sometimes reported before formal close.

The most-cited round in this window was the March 2025 Series E — reported by Bloomberg, NYT, and others as roughly $3.5 billion at a $61.5 billion post-money valuation, led by Lightspeed Venture Partners with Bessemer, Cisco Investments, D1 Capital, Fidelity, General Catalyst, Jane Street, MGX, Salesforce Ventures, and Wellington participating.

Subsequent rounds during 2025 and into 2026 have been reported at progressively higher valuations as the broader frontier-model fundraising market has run ahead of revenue. Where the page cites a valuation, the source is named — figures from leaks ahead of formal close are flagged as “reportedly.”

May 2024
Leadership
Mike Krieger joins as CPO
Cofounder, Instagram · founder, Artifact
First Chief Product Officer in the company's history. The hire signaled a sharper investment in consumer-facing surfaces — Claude.ai, Projects, Artifacts.

Mike Krieger cofounded Instagram with Kevin Systrom in 2010, sold it to Facebook in 2012, and ran engineering there until 2018. After Instagram he cofounded Artifact, a personalized news-reader product, which shut down in early 2024.

His arrival at Anthropic in May 2024 (announced in Anthropic's May 15 post) marked the company's first dedicated product leader. The Claude 3.5 Sonnet release in June 2024 with Artifacts — a side-panel rendering surface for code and documents — landed weeks after he started, and the consumer product cadence accelerated noticeably afterward.

May 2024
Leadership
Jan Leike joins from OpenAI
Co-led OpenAI superalignment with Ilya Sutskever
Leike resigned from OpenAI publicly citing concerns about safety culture, then announced his Anthropic move within days. The OpenAI superalignment team was dissolved in the same window.

Leike's departure post on X stated that “safety culture and processes have taken a backseat to shiny products” at OpenAI. He announced his Anthropic move on May 28, 2024 with the framing “join me if you share these values.”

The move echoed the original 2021 founding pattern — safety-focused researchers leaving OpenAI for Anthropic over stated process disagreements — on a smaller scale.

Mar 2024
Funding
Amazon expands commitment to $4B
Total commitment expanded to a reported $8B during 2024
Amazon followed the initial $1.25B tranche of its September 2023 commitment with an additional $2.75B announced in March 2024, completing the full $4B. Later 2024 reporting put the cumulative total at $8B as the relationship deepened around AWS Trainium chips.

Amazon's March 27, 2024 announcement confirmed the total at $4 billion. Subsequent reporting in The Information, NYT, and WSJ later in 2024 placed the cumulative commitment at $8 billion as the partnership extended around AWS Trainium2 training chips and AWS as a primary cloud.

The two-cloud financial structure (Google + Amazon) is a recurring background fact in coverage of Anthropic.

Sep 2023
Governance
Long-Term Benefit Trust formed
Trustees gain phased authority to elect directors
Anthropic announced the LTBT, a financially-disinterested trust whose trustees gain power to elect a majority of the company's board over time. The structure is unique among frontier-AI companies and is described in detail below.

The LTBT was announced in Anthropic's September 19, 2023 post — the most detailed public description of the structure to date. The post lays out the financially-disinterested-trustees principle, the phased-authority schedule based on aggregate equity invested in the company, and the powers the trustees gain over board composition.

For a fuller treatment, see The Long-Term Benefit Trust below.

Sep 2023
Funding
Amazon commits up to $4B
Initial $1.25B with right to expand to $4B
Amazon announced an initial $1.25 billion investment with the right to expand to $4 billion. AWS became a primary cloud partner; the partnership extended over time around AWS Trainium training chips.

Amazon's September 25, 2023 announcement set the initial $1.25 billion figure and the option to expand to $4 billion. The expansion ran through tranches across late 2023 and 2024 (see the 2024 row) and ultimately to a reported $8 billion over the relationship.

2023
Funding
Google strategic investment
Initial $300M (early 2023), expanded to a reported $2B (Oct 2023)
Google's strategic investment came in two announced tranches during 2023: an initial $300 million round, then a multibillion-dollar expansion later in the year. Anthropic adopted Google Cloud as a primary partner alongside AWS.

Reporting in the FT, WSJ, and Reuters placed Google's initial Anthropic investment at roughly $300 million in February 2023, with reported terms including a Google Cloud commitment from Anthropic. A subsequent expansion in October 2023 was reported at up to $2 billion.

The Google relationship operates in parallel with the Amazon relationship; Anthropic uses both clouds and trains across both Trainium and TPU hardware.

May 2023
Funding
Series C — ~$450M
Spark Capital lead
Series C raised approximately $450 million led by Spark Capital, with Google, Salesforce Ventures, Sound Ventures, and Zoom Ventures participating. Reported valuation around $4.1 billion.

The round was announced shortly after Claude 1's broader release. Anthropic's Series C announcement framed it as funding the next generation of frontier models — the round directly preceded the Claude 2 launch in July 2023.

Apr 2022
Funding
Series B — ~$580M
FTX / Alameda lead · later sold by bankruptcy estate
Series B raised approximately $580 million in April 2022 led by Sam Bankman-Fried's FTX and Alameda Research, with Jaan Tallinn and the Center for Emerging Risk Research participating. The FTX bankruptcy estate later sold the stake.

The November 2022 FTX collapse left the FTX bankruptcy estate as one of Anthropic's largest single shareholders. The estate sold the stake in two tranches in 2024 for a combined ~$1.3 billion — a substantial recovery for FTX creditors.

The episode is one of the few places where Anthropic's cap table was reshaped by a third-party event rather than its own choices.

May 2021
Funding
Series A — ~$124M
Jaan Tallinn / Dustin Moskovitz lead
First outside round — approximately $124 million led by Jaan Tallinn (Skype cofounder) and Dustin Moskovitz (Asana, Open Philanthropy), with the Center for Emerging Risk Research and Eric Schmidt participating.

Anthropic operated as a research-only company for roughly its first two years; the first commercial Claude product did not ship until March 2023. The Series A investor list skews heavily toward the AI-safety and effective-altruism community — a deliberate choice by the founders to align early capital with the company's stated mission.

Jan 2021
Founding
Anthropic incorporated
Anthropic, PBC · Delaware public benefit corporation
Anthropic was incorporated as a Delaware public benefit corporation by Dario and Daniela Amodei alongside the cofounders listed above, after their departures from OpenAI in late 2020 / early 2021.

Anthropic chose the public benefit corporation form, which obligates directors to balance shareholder interests against the company's stated public benefit. This is a different choice from OpenAI's nonprofit-with-capped-profit-subsidiary structure; xAI later adopted the same Delaware PBC form, but without anything like the LTBT above its board — see Governance comparison below.

The OpenAI departures that led to the founding came over stated disagreements about safety culture and the direction of commercialization, particularly around the Microsoft partnership announced in mid-2019.

Background

The 2021 founding

Anthropic was incorporated in early 2021 by a group that had just left OpenAI. The departures clustered in late 2020 and early 2021 and centered on Dario Amodei, then OpenAI's VP of Research, and Daniela Amodei, then its VP of Safety and Policy. The Amodeis brought with them several of the most-published research leads at OpenAI — Tom Brown, Sam McCandlish, Jared Kaplan, Jack Clark, Chris Olah — along with several other senior engineers and researchers.

The stated reason at the time was a disagreement about direction and safety culture, surfacing publicly through Dario Amodei's later interviews and through Anthropic's mission framing. The 2019 OpenAI partnership with Microsoft, the conversion of OpenAI from a nonprofit to a capped-profit structure, and the commercialization arc through GPT-3 were the proximate context.

The founding pattern repeated, smaller-scale, in 2024: when OpenAI's superalignment team was dissolved, several senior alignment researchers including Jan Leike moved to Anthropic. The original founding and the 2024 moves are sometimes treated together as a single multi-year reshuffle of safety-focused researchers from OpenAI to Anthropic.

The Long-Term Benefit Trust

Anthropic's Long-Term Benefit Trust (LTBT) is the corporate-governance feature that most distinguishes the company from its frontier-AI peers. Announced in Anthropic's September 19, 2023 blog post, the LTBT is an independent body of trustees, financially disinterested in the company, that gains the power to elect a majority of Anthropic's board of directors over time.

The mechanics, as described by Anthropic. The LTBT trustees are appointed by the trust itself with input from Anthropic's stockholders. Trustees are required to be independent of the company — they hold no Anthropic equity or other financial stake. The trust's powers ramp in tranches tied to the cumulative equity invested in the company: as Anthropic crosses successive funding milestones, the LTBT gains the right to elect successively more board seats, eventually a majority. The trust's stated purpose is to hold Anthropic accountable to its mission — the “safe, beneficial AI” framing in the company's charter — on a longer horizon than ordinary equity holders typically operate on.

Powers and limits. The LTBT can elect (and remove) directors, subject to its own appointment process; it does not run the company day-to-day, set strategy, or control product decisions. The board it elects, like any board, hires and supervises the CEO and signs off on the major strategic decisions. The LTBT's lever is therefore indirect but durable — it shapes the composition of the body that holds the CEO accountable, on a timescale measured in funding rounds rather than quarters.
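The ramp mechanic described above can be sketched in a few lines. Anthropic has not published the actual milestone schedule, so the dollar thresholds and seat counts below are hypothetical placeholders; only the shape — trustee-electable seats increasing with cumulative equity raised, eventually reaching a majority — is taken from Anthropic's description.

```python
# Sketch of the LTBT phased-authority mechanic. The milestone figures are
# HYPOTHETICAL — Anthropic has not disclosed the real schedule — but the
# shape matches the company's description: as cumulative equity invested
# crosses successive thresholds, the trust may elect more board seats.

HYPOTHETICAL_MILESTONES = [
    # (cumulative equity raised in USD, board seats the LTBT may elect)
    (1_000_000_000, 1),
    (3_000_000_000, 2),
    (5_000_000_000, 3),  # a majority of a hypothetical five-seat board
]

def ltbt_electable_seats(cumulative_equity_usd: int) -> int:
    """Return how many board seats the trust may elect at a funding level."""
    seats = 0
    for threshold, n_seats in HYPOTHETICAL_MILESTONES:
        if cumulative_equity_usd >= threshold:
            seats = n_seats
    return seats
```

The notable property is monotonicity: crossing a funding milestone only ever increases trustee authority, which is what makes the lever "durable" in the sense described above.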

Trustees publicly identified to date include Jason Matheny (CEO of RAND Corporation; former director of IARPA and OSTP AI policy lead), Kanika Bahl (CEO of Evidence Action), and Paul Christiano (founder of the Alignment Research Center; later head of AI safety at the U.S. AI Safety Institute, with attendant questions about ongoing trustee status). Anthropic has not always disclosed the trustee roster comprehensively; the named members above are those identified in Anthropic's own announcements and in contemporaneous coverage in NYT, Bloomberg, and The Information. Treat the list as authoritative for who has been publicly named, not necessarily as the complete current roster.

How it compares. The structure is unusual. OpenAI's nonprofit / capped-profit / for-profit arrangement places fiduciary duties at the nonprofit level but couples them tightly to the operating company through governance overlap — a tension that surfaced visibly in the November 2023 board episode. xAI is a Delaware PBC with no equivalent independent body. Google DeepMind is a wholly-owned Alphabet subsidiary subject to ordinary corporate-board governance. The LTBT is the only structure of its kind among the major frontier-model labs as of early 2026. Whether it actually constrains the company in any specific decision will be empirically testable only over a longer horizon.

The funding arc

Anthropic's funding history shows up in two distinct shapes: traditional venture rounds, and strategic-cloud-partner commitments from Google and Amazon that dwarf them.

The venture arc — Series A in 2021 (~$124M, Tallinn / Moskovitz), Series B in 2022 (~$580M, FTX / Alameda), Series C in 2023 (~$450M, Spark Capital), and the tens-of-billions-valuation rounds across 2024 and 2025 — raised cumulative billions of equity capital. The 2025 Series E at a reported $61.5 billion post-money was the most-cited single round; subsequent 2025 / 2026 rounds at higher valuations have been reported but in some cases ahead of formal close.

The strategic-cloud arc — Google's investment beginning at $300M in early 2023 and reportedly expanding to $2B+ later that year, and Amazon's commitment beginning at $1.25B in September 2023 and ramping through tranches to a reported $4B and ultimately $8B — is denominated at a different scale. Both commitments are accompanied by cloud-partner relationships (Google Cloud and AWS, including AWS Trainium training silicon), and both reshape the cap table in ways an ordinary venture round would not.

Two practical consequences. Anthropic's cap table is unusually concentrated among a small number of strategic and venture investors, which has historically simplified governance. And the cash position generated by the strategic commitments funds frontier-model training at a scale that traditional venture capital alone could not sustain. Both shape, indirectly, what Anthropic can plan for over a multi-year horizon.
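The "different scale" claim above is quick to check with back-of-envelope arithmetic using only figures already cited on this page (approximate, and in the strategic-cloud case "reported" ceilings rather than confirmed closed amounts):

```python
# Back-of-envelope comparison of the two funding shapes, using the
# approximate round figures reported on this page (USD millions).
# Strategic-cloud totals are reported ceilings, not confirmed closes.

venture_rounds = {
    "Series A (May 2021)": 124,
    "Series B (Apr 2022)": 580,
    "Series C (May 2023)": 450,
    "Series E (Mar 2025, reported)": 3_500,
}

strategic_commitments = {
    "Google (2023, reported)": 2_000,
    "Amazon (2023-2024, reported)": 8_000,
}

venture_total = sum(venture_rounds.values())        # ~4,654
strategic_total = sum(strategic_commitments.values())  # ~10,000

print(f"Venture rounds:        ~${venture_total:,}M")
print(f"Strategic commitments: ~${strategic_total:,}M")
print(f"Strategic / venture:   {strategic_total / venture_total:.1f}x")
```

Even counting only the listed rounds and the reported ceilings, the strategic-cloud commitments come out at roughly twice the cumulative venture capital — which is the sense in which they "dwarf" the venture arc.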

Senior team additions

The most notable post-founding hires have all clustered in 2024. Mike Krieger joined as Chief Product Officer in May 2024 — the company's first dedicated product leader, and a hire that signaled a sharper investment in consumer-facing surfaces (Claude.ai, Projects, Artifacts).

Jan Leike joined the same month from OpenAI's superalignment team after the team was dissolved. John Schulman, an OpenAI cofounder and the original lead on RLHF and on ChatGPT itself, joined briefly in August 2024 and later moved on. Several other senior alignment-focused researchers from OpenAI joined in the same window. The pattern echoed the original 2021 founding exodus on a smaller scale.

Departures from Anthropic at the senior level have been less prominent than the additions, in part because the company is younger and its senior team turns over less than OpenAI's has. Where senior departures have occurred, they have been individually low-noise — not the multi-week public episodes that have characterized OpenAI's leadership turnover.

Governance comparison — Anthropic vs. OpenAI vs. xAI vs. DeepMind

The four major frontier-model labs sit on four meaningfully different governance shapes. The differences are load-bearing for how each company can be expected to behave under stress.

Anthropic, PBC is a Delaware public benefit corporation with the Long-Term Benefit Trust above the board. The PBC form obligates directors to balance shareholder returns against the company's stated public benefit; the LTBT layer adds independent, financially-disinterested control over board composition that ramps with cumulative equity raised. The structure is designed to make the company harder to deflect from its stated mission as the financial stakes grow.

OpenAI is a nonprofit (OpenAI, Inc.) that controls a capped-profit subsidiary (OpenAI LP) and, more recently, an active conversion to a for-profit operating entity. The November 2023 board episode — in which the nonprofit board fired CEO Sam Altman and reversed within days under pressure from employees, investors, and Microsoft — tested the structure's ability to act as a brake on the operating entity. The for-profit conversion fight, including Musk v. Altman, is in part a fight over how much of that braking authority survives. (See the ChatGPT versions page for the November 2023 episode.)

xAI is a Delaware public benefit corporation under Elon Musk's control. Musk holds the operating control directly; there is no independent body equivalent to the LTBT. The PBC form imposes the same balance-of-interests duty on directors that Anthropic's does, but the absence of an independent control layer means the practical accountability is to Musk himself.

Google DeepMind is a wholly-owned subsidiary of Alphabet, governed by ordinary corporate-board mechanics. Mission framing and ethics review run through internal Alphabet processes rather than an independent external structure. The lab's incentives ultimately answer to Alphabet's public-company shareholders.

The public voice

Anthropic has cultivated a more concentrated public voice than its peers. Dario Amodei handles most of the high-profile external communication — Senate testimony, the “Machines of Loving Grace” essay (October 2024), the major podcast appearances. Jack Clark writes the long-running “Import AI” newsletter and handles much of the policy-side communication.

Compared to OpenAI's pattern of multiple high-visibility executive voices (Altman, Brockman, Murati, Sutskever, Schulman, and others have all had distinct public profiles at different points), Anthropic's pattern has been deliberately narrow. The narrowness reduces the noise floor around the company's communicated positions; it also concentrates risk in the founder pair.

Read these primary sources

Most of the page's content is paraphrased from the URLs below. They are the authoritative places to read what Anthropic has said in its own voice and what the company's investors have disclosed.

Anthropic's own announcements

Founding statements, governance explainer, funding announcements, senior-hire announcements.

# LTBT explainer — the most detailed public description of the structure
https://www.anthropic.com/news/the-long-term-benefit-trust

# All Anthropic announcements — funding, leadership, policy
https://www.anthropic.com/news

# Mike Krieger as CPO — May 2024
https://www.anthropic.com/news/mike-krieger-cpo

Investor disclosures

Amazon and Google disclose their Anthropic relationships in 10-K filings and press announcements.

# Amazon — original September 2023 commitment
https://www.aboutamazon.com/news/company-news/amazon-aws-anthropic-ai

# Amazon — March 2024 expansion to $4B
https://www.aboutamazon.com/news/company-news/amazon-anthropic-ai-investment

# SEC EDGAR — search "AMZN" or "GOOGL" 10-K filings
https://www.sec.gov/edgar/searchedgar/companysearch

Long-form reporting

For dates the announcements alone don't cleanly establish — valuations, trustee identities, hiring details.

# Bloomberg, NYT, FT, WSJ, Reuters, The Information — archive search
https://www.bloomberg.com/         # search: "Anthropic"
https://www.nytimes.com/           # search: "Anthropic"
https://www.theinformation.com/    # subscription, deepest cap-table reporting

# Dario Amodei's personal essays — long-form policy / mission framing
https://darioamodei.com/

# Jack Clark's newsletter — weekly AI policy + technical roundup
https://importai.substack.com/