2021 – 2026
Anthropic Leadership and Governance
Who founded Anthropic, how the company is governed, and how the funding arc shaped Claude. The 2021 OpenAI exodus, the seven cofounders, the Long-Term Benefit Trust that sits above the board, the multibillion-dollar Google and Amazon commitments, and the senior team additions that have shaped the company since.
Sibling pages: Claude Versions · Claude Lawsuits.
Background
The 2021 founding
Anthropic was incorporated in early 2021 by a group that had just left OpenAI. The departures clustered in late 2020 and early 2021 and centered on Dario Amodei, then OpenAI's VP of Research, and Daniela Amodei, then its VP of Safety and Policy. The Amodeis brought with them several of OpenAI's most prominent research leads — Tom Brown, Sam McCandlish, Jared Kaplan, Jack Clark, and Chris Olah — along with several other senior engineers and researchers.
The stated reason at the time was a disagreement about direction and safety culture, surfacing publicly through Dario Amodei's later interviews and through Anthropic's mission framing. The 2019 OpenAI partnership with Microsoft, the conversion of OpenAI from a nonprofit to a capped-profit structure, and the commercialization arc through GPT-3 were the proximate context.
The founding pattern repeated on a smaller scale in 2024: when OpenAI's superalignment team was dissolved, several senior alignment researchers, including Jan Leike, moved to Anthropic. The original founding and the 2024 moves are sometimes treated together as a single multi-year reshuffle of safety-focused researchers from OpenAI to Anthropic.
The Long-Term Benefit Trust
Anthropic's Long-Term Benefit Trust (LTBT) is the corporate-governance feature that most distinguishes the company from its frontier-AI peers. Announced in Anthropic's September 19, 2023 blog post, the LTBT is an independent body of trustees, financially disinterested in the company, that gains the power to elect a majority of Anthropic's board of directors over time.
The mechanics, as described by Anthropic. The LTBT trustees are appointed by the trust itself with input from Anthropic's stockholders. Trustees are required to be independent of the company — they hold no Anthropic equity or other financial stake. The trust's powers ramp in tranches tied to the cumulative equity invested in the company: as Anthropic crosses successive funding milestones, the LTBT gains the right to elect successively more board seats, eventually a majority. The trust's stated purpose is to hold Anthropic accountable to its mission — the “safe, beneficial AI” framing in the company's charter — on a longer horizon than ordinary equity holders typically operate on.
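The tranche mechanism described above can be sketched as a simple threshold function. This is an illustrative sketch only: Anthropic has not published the exact funding milestones, seat counts, or board size, so every number below is an invented placeholder, not the actual schedule.

```python
# Hypothetical sketch of the LTBT's ramping board power. The milestone
# amounts and seat counts are invented for illustration; the real
# schedule has not been fully disclosed.

def ltbt_board_seats(cumulative_equity_raised_usd: float, board_size: int = 5) -> int:
    """Return how many board seats the trust may elect once cumulative
    equity investment crosses successive (hypothetical) milestones."""
    # (milestone in USD, seats unlocked) -- placeholder values
    tranches = [
        (1e9, 1),   # first milestone: one seat
        (3e9, 2),   # second milestone: two seats
        (5e9, 3),   # third milestone: a majority of a five-seat board
    ]
    seats = 0
    for milestone, unlocked in tranches:
        if cumulative_equity_raised_usd >= milestone:
            seats = unlocked
    return min(seats, board_size)

# As more capital comes in, the trust's share of board control ramps:
print(ltbt_board_seats(5e8))    # 0 -- below the first milestone
print(ltbt_board_seats(3.5e9))  # 2 -- past the second milestone
print(ltbt_board_seats(6e9))    # 3 of 5 -- a majority
```

The point of the sketch is the shape, not the numbers: control over board composition is keyed to cumulative capital raised, so the trust's influence grows precisely as the financial stakes do.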
Powers and limits. The LTBT can elect (and remove) directors, subject to its own appointment process; it does not run the company day-to-day, set strategy, or control product decisions. The board it elects, like any board, hires and supervises the CEO and signs off on the major strategic decisions. The LTBT's lever is therefore indirect but durable — it shapes the composition of the body that holds the CEO accountable, on a timescale measured in funding rounds rather than quarters.
Trustees publicly identified to date include Jason Matheny (CEO of RAND Corporation; former director of IARPA and OSTP AI policy lead), Kanika Bahl (CEO of Evidence Action), and Paul Christiano (founder of the Alignment Research Center; later head of AI safety at the U.S. AI Safety Institute, with attendant questions about ongoing trustee status). Anthropic has not always disclosed the trustee roster comprehensively; the named members above are those identified in Anthropic's own announcements and in contemporaneous coverage in NYT, Bloomberg, and The Information. Treat the list as authoritative for who has been publicly named, not necessarily as the complete current roster.
How it compares. The structure is unusual. OpenAI's nonprofit / capped-profit / for-profit arrangement places fiduciary duties at the nonprofit level but couples them tightly to the operating company through governance overlap — a tension that surfaced visibly in the November 2023 board episode. xAI is a Delaware PBC with no equivalent independent body. Google DeepMind is a wholly-owned Alphabet subsidiary subject to ordinary corporate-board governance. The LTBT is the only structure of its kind among the major frontier-model labs as of early 2026. Whether it actually constrains the company in any specific decision will be empirically testable only over a longer horizon.
The funding arc
Anthropic's funding history shows up in two distinct shapes: traditional venture rounds, and strategic-cloud-partner commitments from Google and Amazon that dwarf them.
The venture arc — Series A in 2021 (~$124M, Tallinn / Moskovitz), Series B in 2022 (~$580M, FTX / Alameda), Series C in 2023 (~$450M, Spark Capital), and the tens-of-billions-valuation rounds across 2024 and 2025 — raised billions of dollars in cumulative equity capital. The 2025 Series E at a reported $61.5 billion post-money was the most-cited single round; subsequent 2025 / 2026 rounds at higher valuations have been reported, in some cases ahead of formal close.
The strategic-cloud arc — Google's investment beginning at $300M in early 2023 and reportedly expanding to $2B+ later that year, and Amazon's commitment beginning at $1.25B in September 2023 and ramping through tranches to a reported $4B and ultimately $8B — is denominated at a different scale. Both commitments are accompanied by cloud-partner relationships (Google Cloud and AWS, including AWS Trainium training silicon), and both reshape the cap table in ways an ordinary venture round would not.
Two practical consequences. Anthropic's cap table is unusually concentrated among a small number of strategic and venture investors, which has historically simplified governance. And the cash position generated by the strategic commitments funds frontier-model training at a scale that traditional venture capital alone could not sustain. Both shape, indirectly, what Anthropic can plan for over a multi-year horizon.
Senior team additions
The most notable post-founding hires have all clustered in 2024. Mike Krieger joined as Chief Product Officer in May 2024 — the company's first dedicated product leader, and a hire that signaled a sharper investment in consumer-facing surfaces (Claude.ai, Projects, Artifacts).
Jan Leike joined the same month from OpenAI's superalignment team after that team was dissolved. John Schulman, an OpenAI cofounder and a central figure in the development of RLHF and of ChatGPT itself, joined in August 2024 but departed in early 2025. Several other senior alignment-focused researchers from OpenAI joined in the same window. The pattern echoed the original 2021 founding exodus on a smaller scale.
Departures from Anthropic at the senior level have been less prominent than the additions, in part because the company is younger and its senior team has turned over less than OpenAI's. Where senior departures have occurred, they have been quiet, individual affairs — not the multi-week public episodes that have characterized OpenAI's leadership turnover.
Governance comparison — Anthropic vs. OpenAI vs. xAI vs. DeepMind
The four major frontier-model labs sit on four meaningfully different governance shapes. The differences are load-bearing for how each company can be expected to behave under stress.
Anthropic, PBC is a Delaware public benefit corporation with the Long-Term Benefit Trust above the board. The PBC form obligates directors to balance shareholder returns against the company's stated public benefit; the LTBT layer adds independent, financially disinterested control over board composition that ramps with cumulative equity raised. The structure is designed to make the company harder to deflect from its stated mission as the financial stakes grow.
OpenAI is a nonprofit (OpenAI, Inc.) that controls a capped-profit subsidiary (OpenAI LP) and, more recently, has pursued conversion to a for-profit operating entity. The November 2023 board episode — in which the nonprofit board fired CEO Sam Altman and reversed within days under pressure from employees, investors, and Microsoft — tested the structure's ability to act as a brake on the operating entity. The for-profit conversion fight, including Musk v. Altman, is in part a fight over how much of that braking authority survives. (See the ChatGPT versions page for the November 2023 episode.)
xAI is a Delaware public benefit corporation under Elon Musk's control. Musk holds the operating control directly; there is no independent body equivalent to the LTBT. The PBC form imposes the same balance-of-interests duty on directors that Anthropic's does, but the absence of an independent control layer means the practical accountability is to Musk himself.
Google DeepMind is a wholly-owned subsidiary of Alphabet, governed by ordinary corporate-board mechanics. Mission framing and ethics review run through internal Alphabet processes rather than an independent external structure. The lab's incentives ultimately answer to Alphabet's public-company shareholders.
The public voice
Anthropic has cultivated a more concentrated public voice than its peers. Dario Amodei handles most of the high-profile external communication — Senate testimony, the “Machines of Loving Grace” essay (October 2024), the major podcast appearances. Jack Clark writes the long-running “Import AI” newsletter and handles much of the policy-side communication.
Compared to OpenAI's pattern of multiple high-visibility executive voices (Altman, Brockman, Murati, Sutskever, Schulman, and others have all had distinct public profiles at different points), Anthropic's pattern has been deliberately narrow. The narrowness reduces the noise floor around the company's communicated positions; it also concentrates risk in the founder pair.