8 families · 2 US public · 3 US private · 2 Chinese · 1 EU

Frontier AI Model Families

Every frontier-AI model family at a glance — Anthropic Claude, OpenAI ChatGPT, Google Gemini, xAI Grok, Meta Llama, DeepSeek, Mistral, Alibaba Qwen. Each row lists the lab, the current model, its ship date, and the total version count. Click into a family for the full lineage. Inclusion criteria are stated below.


Frontier AI model families table

ChatGPT (GPT · o-series)
OpenAI · US private
GPT-5.5 · Apr 23, 2026

OpenAI's flagship line, from GPT-1 (June 2018) through GPT-5.5 (April 2026). The August 2025 GPT-5 release introduced a unified router that picks between fast-response and reasoning models within a single endpoint; the 5.x cadence has been roughly six weeks per release through 5.5. The o-series reasoning models (o1, o3, o4) have been merged into the main line under GPT-5.
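
A minimal sketch of what the single-endpoint routing looks like from the caller's side, assuming the OpenAI Python SDK's Responses API and its reasoning-effort hint; the gpt-5.5 identifier is the one quoted above, and whether 5.5 exposes exactly these parameters is an assumption.

```python
# Sketch only: one model ID, one endpoint; the router decides fast vs. reasoning.
# "gpt-5.5" is the identifier from this page; the reasoning-effort hint follows
# the GPT-5-era Responses API and is assumed to carry over to 5.x.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

quick = client.responses.create(
    model="gpt-5.5",
    input="Rewrite this sentence more concisely: ...",
    reasoning={"effort": "minimal"},  # nudge the router toward the fast path
)

deep = client.responses.create(
    model="gpt-5.5",
    input="Plan a migration from a monolith to three services, with rollback steps.",
    reasoning={"effort": "high"},     # nudge the router toward the reasoning path
)

print(quick.output_text)
print(deep.output_text)
```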

Claude (Opus · Sonnet · Haiku)
Anthropic · US private
Claude Opus 4.7 · Apr 16, 2026

Anthropic's flagship line, from Claude 1 (March 2023) through Opus 4.7 (April 2026). The three-tier structure (Opus for the largest model, Sonnet for the balanced mid-tier, Haiku for the small-and-fast tier) was introduced with Claude 3 in March 2024. Constitutional AI, Anthropic's safety-training approach, has been part of the line since Claude 1.

  • Lab: Anthropic — San Francisco, California, USA. Privately held; began IPO prep in late 2025.
  • Current model: Claude Opus 4.7, shipped April 16, 2026. API: claude-opus-4-7 (see the call sketch below).
  • First model: Claude 1, March 14, 2023. Total versions: 20.
  • Pages on this site: the Claude versions page (full lineage) and the Anthropic lab profile.
  • Primary sources: Anthropic Models · Anthropic News
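
A minimal call sketch for the API identifier above, assuming the Anthropic Python SDK's standard Messages API; the claude-opus-4-7 model ID is taken from this row, not verified against live documentation.

```python
# Sketch only: standard Anthropic Messages API call with the model ID quoted above.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-opus-4-7",  # identifier from this page's Claude row
    max_tokens=512,
    messages=[{"role": "user", "content": "In one sentence, what is Constitutional AI?"}],
)
print(message.content[0].text)
```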

DeepSeek (V-series · R-series)
DeepSeek · Chinese
DeepSeek-V4-Pro · Apr 24, 2026

Hangzhou-based open-weights frontier-LLM lab. Released DeepSeek-V3 (December 2024) and DeepSeek-R1 (January 2025) at training-cost levels far below US-frontier peers, sparking a global re-evaluation of frontier-model economics. V4-Pro is the current flagship: a 1.6-trillion-parameter Mixture-of-Experts model with 49B active parameters per token and a 1M-token context window, MIT-licensed.
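
A back-of-the-envelope on those figures, to make the sparse-activation economics concrete; the forward-pass estimate uses the usual rough approximation of 2 FLOPs per active parameter per token and ignores attention cost.

```python
# Rough arithmetic on the V4-Pro figures quoted above; illustration only.
total_params = 1.6e12   # 1.6T parameters in the full Mixture-of-Experts
active_params = 49e9    # ~49B parameters activated per token

print(f"active fraction: {active_params / total_params:.1%}")         # ~3.1%

# Forward-pass FLOPs per token ~ 2 x active parameters, so versus a
# hypothetical dense model of the same total parameter count:
print(f"~{total_params / active_params:.0f}x fewer FLOPs per token")  # ~33x
```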

  • Lab: DeepSeek (Hangzhou DeepSeek Artificial Intelligence Basic Technology Research) — Hangzhou, China. Privately held subsidiary of High-Flyer Capital Management.
  • Current model: DeepSeek-V4-Pro, shipped April 24, 2026. HuggingFace: deepseek-ai/DeepSeek-V4-Pro.
  • First model: DeepSeek-LLM, November 27, 2023. Total versions: 18.
  • Pages on this site: the DeepSeek versions page (full lineage) and the DeepSeek lab profile.
  • Primary sources: DeepSeek API Docs · deepseek-ai on HuggingFace

Gemini (Pro · Flash · Flash-Lite · Nano)
Alphabet (Google) · US public
Gemini 3.1 Pro · Feb 19, 2026

Google's flagship line, from Bard (March 2023) and Gemini 1.0 (December 2023) through Gemini 3.1 Pro (February 2026). Largest integration surface of any frontier line — Search AI Overviews, Workspace (Gmail, Docs, Slides, Meet, Drive), Pixel + Android (on-device Nano), Chrome Built-in AI. The 2.5 line (March 2025) was the first reasoning-as-default frontier model.

  • Lab: Google DeepMind, inside Alphabet (NASDAQ: GOOGL) — Mountain View, California, USA.
  • Current model: Gemini 3.1 Pro, shipped February 19, 2026. API: gemini-3.1-pro-preview (see the call sketch below). ARC-AGI-2 jumped from 31.1% to 77.1% with this release.
  • First model: Bard (LaMDA-based), March 21, 2023. Total versions: 21.
  • Pages on this site: the Gemini versions page (full lineage) and the Google DeepMind lab profile.
  • Primary sources: Gemini API Docs · Google DeepMind Blog
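
A minimal call sketch for the API identifier above, assuming the google-genai Python SDK; the gemini-3.1-pro-preview model ID is taken from this row, not verified against live documentation.

```python
# Sketch only: standard google-genai client call with the model ID quoted above.
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY / GOOGLE_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-3.1-pro-preview",  # identifier from this page's Gemini row
    contents="In one sentence, what is an AI Overview in Google Search?",
)
print(response.text)
```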

Grok (Chat · Heavy · Multi-agent)
xAI · US private
Grok 4.20 · Mar 10, 2026

xAI's flagship line, from Grok 1 (November 2023) through Grok 4.20 (March 2026). Multi-agent collaboration as a first-class API mode (introduced with 4.20). 2M-token context window in agent modes. xAI itself became a SpaceX subsidiary via the February 2026 SpaceX-xAI merger; X integration is the original distribution surface.

  • Lab: xAI — San Francisco Bay Area, California, USA. Privately held; SpaceX subsidiary as of February 2026.
  • Current model: Grok 4.20, shipped March 10, 2026. API: grok-4.20 / grok-4.20-multi-agent (see the call sketch below).
  • First model: Grok 1, November 4, 2023. Total versions: 13 (including point-releases through 4.20).
  • Pages on this site: the Grok versions page (full lineage) and the xAI lab profile.
  • Primary sources: xAI Models · xAI News
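
A minimal call sketch, assuming xAI's OpenAI-compatible endpoint at https://api.x.ai/v1; the model IDs are the ones quoted above, and whatever extra parameters the multi-agent mode takes are not specified on this page.

```python
# Sketch only: the OpenAI SDK pointed at xAI's OpenAI-compatible endpoint.
import os
from openai import OpenAI

client = OpenAI(base_url="https://api.x.ai/v1", api_key=os.environ["XAI_API_KEY"])

resp = client.chat.completions.create(
    model="grok-4.20",  # or "grok-4.20-multi-agent", per the identifiers quoted above
    messages=[{"role": "user", "content": "One sentence on what a launch window is."}],
)
print(resp.choices[0].message.content)
```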

Llama (open-weights → closed)
Meta · US public
Muse Spark · Apr 8, 2026

Meta's frontier line, from LLaMA 1 (February 2023, originally research-only) through Llama 4 (April 2025) and into the post-Llama Muse Spark line (April 2026, the first release from the new Meta Superintelligence Labs). Llama 1–4 were the canonical open-weights frontier line; Muse Spark is the company's pivot to closed-weights, API-only frontier models, ending Meta's open-weights frontier-AI era. The family is in transition; this row tracks the active frontier line under the Meta umbrella.

  • Lab: Meta Superintelligence Labs (MSL), inside Meta Platforms (NASDAQ: META) — Menlo Park, California, USA. Formed in 2025 alongside Meta's multibillion-dollar stake in Scale AI.
  • Current model: Muse Spark, shipped April 8, 2026. Closed-weights, API-only — no HuggingFace release.
  • First model: LLaMA 1, February 24, 2023 (research-only release). Total versions: 15.
  • Pages on this site: the Llama versions page (full lineage) and the Meta lab profile.
  • Primary sources: llama.com · Meta AI Blog

Mistral (Mistral · Mixtral · Magistral)
Mistral AI · EU
Mistral Small 4 · Mar 16, 2026

Paris-based open-weights AI lab, from Mistral 7B (September 2023) through Mistral Small 4 (March 2026). The Mistral / Mixtral / Magistral naming splits the line into dense models (Mistral), Mixture-of-Experts models (Mixtral), and reasoning models (Magistral). The largest version count on the roster (23) reflects the lab's high release cadence and its tier proliferation.

  • Lab: Mistral AI — Paris, France. Privately held. EU-based; the only non-US, non-Chinese lab on the roster.
  • Current model: Mistral Small 4, shipped March 16, 2026. HuggingFace: mistralai/Mistral-Small-4. Apache 2.0 (hosted-API call sketched below).
  • First model: Mistral 7B, September 27, 2023. Total versions: 23.
  • Pages on this site: the Mistral versions page (full lineage) and the Mistral AI lab profile.
  • Primary sources: Mistral Docs · mistralai on HuggingFace
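
A minimal call sketch against the hosted API, assuming the official mistralai Python SDK; "mistral-small-latest" is a standing alias in Mistral's API, and whether it resolves to Small 4 here is an assumption.

```python
# Sketch only: the official mistralai SDK against the hosted API.
# "mistral-small-latest" is Mistral's standing alias; mapping to Small 4 is assumed.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

res = client.chat.complete(
    model="mistral-small-latest",
    messages=[{"role": "user", "content": "One sentence on the Apache 2.0 license."}],
)
print(res.choices[0].message.content)
```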

Qwen (Qwen · Tongyi Qianwen)
Alibaba · Chinese
Qwen3.6-27B · Apr 22, 2026

Alibaba Cloud's Tongyi Lab open-weights line, from Qwen 1 (August 2023) through Qwen3.6-27B (April 2026). Apache 2.0 licensed. The 3.6 release introduced a hybrid of Gated DeltaNet and self-attention with “Thinking Preservation” reasoning and a 262K context window (extensible to 1M); Alibaba claims it beats its own 397B-parameter MoE model on coding benchmarks.

  • Lab: Tongyi Lab inside Alibaba Cloud, Alibaba Group (NYSE: BABA) — Hangzhou, China.
  • Current model: Qwen3.6-27B, shipped April 22, 2026. HuggingFace: Qwen/Qwen3.6-27B (see the loading sketch below). 27B dense, Apache 2.0.
  • First model: Qwen 1, August 3, 2023. Total versions: 19.
  • Pages on this site: the Qwen versions page (full lineage) and the Alibaba / Tongyi Lab profile.
  • Primary sources: Qwen Docs · Qwen on HuggingFace
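
A minimal loading sketch for the open-weights checkpoint, assuming the standard Hugging Face transformers flow; Qwen/Qwen3.6-27B is the repo ID quoted above, and a 27B dense model needs either a large GPU or quantization in practice.

```python
# Sketch only: load the open-weights checkpoint named above with transformers.
# device_map="auto" additionally requires the accelerate package.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Qwen/Qwen3.6-27B"  # repo ID from this page's Qwen row
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

inputs = tokenizer("One sentence on the Apache 2.0 license:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```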

Notable absences

Each entry pairs a name visitors might expect to see with a one-line reason it isn't here. None of the exclusions are arbitrary: each name fails one of the four inclusion criteria documented in the methodology below.

About this list

Inclusion criteria. Curated to exactly the model families that satisfy all four of: (1) has a Mungomash /ai/<family>/versions/ page on this site, (2) has shipped a generation-class flagship in the last 18 months, (3) is openly described as a frontier-LLM line by its lab, and (4) has a primary API surface visitors can call (first-party hosted API, OpenAI-compatible API, or open-weights with a HuggingFace release). Closed-internal models with no callable surface, pre-2022 lines, research previews, and non-LLM frontier lines (image, video, speech) are excluded by these criteria. See "Notable absences" above for the full list of considered-but-excluded names.
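
The same four tests, as a sketch; the field names and the example record are illustrative, not the site's actual tooling.

```python
# Illustrative only: the four inclusion criteria stated above, as a filter.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Candidate:
    family: str
    has_versions_page: bool        # (1) /ai/<family>/versions/ exists on this site
    last_flagship_date: date       # (2) generation-class flagship ship date
    lab_calls_it_frontier: bool    # (3) openly described as a frontier-LLM line
    has_callable_surface: bool     # (4) hosted API, OpenAI-compatible API, or open weights

def qualifies(c: Candidate, today: date) -> bool:
    shipped_recently = (today - c.last_flagship_date) <= timedelta(days=548)  # ~18 months
    return (c.has_versions_page and shipped_recently
            and c.lab_calls_it_frontier and c.has_callable_surface)

# Example row from this page: Claude passes all four as of the last update.
assert qualifies(Candidate("claude", True, date(2026, 4, 16), True, True), date(2026, 4, 30))
```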

Data sourcing. Each row's data — current model, current ship date, first model, total versions — is sourced from this site's per-family Versions pages, which are themselves cross-checked against each lab's primary documentation on every refresh. The "current model" is the most-recent flagship release on the family's Versions page; "total versions" is the row count on that page.
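
In other words, each row is a straight projection of its Versions page; a sketch with a hypothetical two-row excerpt:

```python
# Illustrative only: deriving a roster row from a family's Versions-page rows.
from datetime import date

versions_rows = [            # (model, ship_date) rows from /ai/<family>/versions/
    ("Claude 1", date(2023, 3, 14)),
    ("Claude Opus 4.7", date(2026, 4, 16)),
    # ... the real page has 20 rows; only two are shown here ...
]

total_versions = len(versions_rows)                            # row count -> "total versions"
current_model, current_date = max(versions_rows, key=lambda r: r[1])  # latest row -> "current model"
first_model, _first_date = min(versions_rows, key=lambda r: r[1])     # earliest row -> "first model"
print(current_model, current_date, first_model, total_versions)
```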

Relationship to other AI-section pages. This page is the entity-roster — one row per family. Several adjacent cross-family pages will live alongside it as horizontal slices that visualize a single specific axis (context windows, model lineage, pricing history, release cadence, training-data disclosures); each can link back to a family's row here for the entity-level context. The per-family /ai/<family>/versions/ pages remain canonical for the full lineage data.

Data freshness. The roster, every current-model name, every current-model ship date, every total-versions count, and every lab status are re-verified against the per-family Versions pages and each lab's primary documentation on every refresh of this page. If a lab has shipped a new flagship since the last refresh, the row is updated; if a privately-held lab has gone public or been acquired, the lab status is updated. Stale roster data on a frontier-LLM page drifts within weeks — the release cadence is fast.

What's intentionally excluded. Cross-family benchmark scores (noisy, change weekly, lab-published numbers are aggressive marketing). Live API pricing (scope of /ai/pricing-history/ when it ships). Context-window comparison (scope of /ai/context-windows/). Visual family-tree relationships between models (scope of /ai/model-lineage/). Editorial framing about which family is “best.”

Last updated: 2026-04-30. Looking for the per-family lineage? Each row above links to the family's /ai/<family>/versions/ page plus the lab's profile.