Mungomash LLC
Mistral Versions

2023 – 2026

Mistral Versions

Every Mistral release — from Mistral 7B (September 2023) through Mistral Large 3 (December 2025) and Mistral Small 4 (March 2026) — with HuggingFace ids, ship dates, family (Open / Research / Proprietary), license terms, and the major changes per version. Plus the April 2023 founding by Mensch, Lample, and Lacroix; the funding arc through ASML's September 2025 €1.3B Series C; the February 2024 Microsoft partnership and the EU regulatory friction it drew; the EU AI Act lobbying narrative; and the January 2025 Apache 2.0 turn at Mistral Small 3.

Family & status

Family (license tier)

Open — Apache 2.0; fully open weights, commercial use permitted without restriction
Research — Mistral Research License (MRL/MNPL); weights public, commercial use requires a separate license
Proprietary — closed-weights, API-only via la Plateforme

Status

Current — actively recommended; the latest in its product slot
Available — weights still served via HuggingFace and partner inference providers, or the API still live, but superseded by a newer release
Legacy — deprecated, retired from la Plateforme, or no longer recommended

Mistral version table

Model
Mistral Small 4
mistralai/Mistral-Small-4
Open
Current
Mar 16, 2026
First Mistral Small with MoE architecture. 119B total / 6B active. Apache 2.0. Headlined the March 2026 release wave.
  • Released March 16, 2026; the announcement is at mistral.ai/news/mistral-small-4.
  • First Mistral Small with Mixture-of-Experts — 119B total parameters with 6B active per token; the prior Mistral Small lineage (Small 3 / 3.1) was dense at 24B.
  • License: Apache 2.0, distributed via huggingface.co/mistralai and la Plateforme.
  • Headlined a March 2026 release wave that also included Voxtral TTS (the lab's first audio model, March 23, 2026), Forge, the Spaces CLI, and the NVIDIA partnership.
Model
Mistral OCR 3
mistral-ocr-2512 — closed-weights, API only
Proprietary
Current
Dec 17, 2025
Current OCR / structured-document model. Smaller and cheaper than the original Mistral OCR. Reported 74% win rate against the prior generation at $2 per 1,000 pages.
  • Released December 17, 2025; coverage in MarkTechPost.
  • Proprietary, closed-weights, API only via la Plateforme as mistral-ocr-2512. Targets enterprise document digitization with a 74% reported win rate against the prior OCR generation at $2 per 1,000 pages.
  • Replaces Mistral OCR (March 2025) on la Plateforme; the prior OCR row below carries Legacy.
Model
Devstral 2 + Devstral Small 2
mistralai/Devstral-Small-2-2512, mistralai/Devstral-2-2512
Open
Available
Dec 10, 2025
Open-weights coding-agent line. Devstral Small 2 is a 24B dense Apache 2.0 release claimed to beat Qwen 3 Coder Flash.
  • Released December 10, 2025; the announcement is at mistral.ai/news/devstral-2-vibe-cli; coverage in VentureBeat.
  • Devstral Small 2 is a 24B dense model, Apache 2.0, designed for laptop-scale agentic coding deployment; Mistral claimed it beats Qwen 3 Coder Flash on coding benchmarks.
  • Devstral 2 is the larger production-scale variant, also released under Apache 2.0.
  • Builds on the original Devstral Small (May 2025) that established the Devstral coding-agent lineage. Shipped alongside the Devstral Vibe CLI.
Model
Mistral Large 3
mistralai/Mistral-Large-3-2512
Open
Current
Dec 2, 2025
Open-weights frontier flagship. Granular MoE, 41B active / 675B total. 256K context. Apache 2.0. Multimodal, multilingual.
  • Released December 2, 2025 as the headline of the “Mistral 3” family relaunch; the announcement is at mistral.ai/news/mistral-3; docs page: docs.mistral.ai/models/mistral-large-3-25-12; coverage in TechCrunch and The Register.
  • Granular Mixture-of-Experts with 41B active parameters and 675B total parameters; 256K-token context window; natively multimodal (text + image) and multilingual across 40+ languages.
  • Mistral characterized the model as catching up to GPT-4o and Gemini 2 on broad benchmarks while remaining open-weights.
  • License: Apache 2.0. Both base and instruction-tuned variants released; a reasoning variant was announced as “coming soon” in the launch post.
  • Shipped alongside the Ministral 3 family (3B / 8B / 14B); together they form the “Mistral 3” family relaunch that re-committed the line to permissive open-source licensing across every scale.
Model
Ministral 3 family (3B / 8B / 14B)
mistralai/Ministral-{3B, 8B, 14B}-2512, base + instruct + reasoning variants
Open
Available
Dec 2, 2025
Three small dense models. 3B / 8B / 14B with base, instruct, and reasoning variants each. 14B reasoning hit ~85% on AIME ’25. Apache 2.0.
  • Released December 2, 2025 alongside Mistral Large 3 as the small-end of the “Mistral 3” family.
  • Three sizes — 3B, 8B, 14B; each ships in three flavors — base, instruction-tuned, and reasoning. Nine model variants total.
  • The 14B reasoning variant reached ~85% accuracy on AIME 2025 per Mistral's release post — state-of-the-art accuracy in the 14B-and-under weight class at launch.
  • License: Apache 2.0. Replaces the original Ministral 3B / 8B (October 2024) on la Plateforme as the recommended edge / small-server line.
Model
Magistral Small + Magistral Medium
mistralai/Magistral-Small-2506, magistral-medium-2506 (proprietary preview)
Open
Available
Jun 10, 2025
First Mistral reasoning models. Small is 24B Apache 2.0; Medium is proprietary preview. Multilingual chain-of-thought.
  • Released June 10, 2025; the announcement is at mistral.ai/news/magistral; paper: “Magistral” (arXiv 2506.10910); coverage in TechCrunch.
  • Magistral Small — 24B-parameter reasoning model, Apache 2.0, distributed via huggingface.co/mistralai.
  • Magistral Medium — the larger proprietary variant available via Le Chat and la Plateforme as magistral-medium-2506.
  • First Mistral reasoning models, fine-tuned for multi-step logic with traceable chain-of-thought in the user's language across French, Spanish, German, Italian, Arabic, Russian, English, and Simplified Chinese. Trained with Mistral's own scalable RL pipeline.
  • Filed under Open on this page because the Small variant is Apache 2.0 and the more widely-deployed of the two; the proprietary Medium variant is described above.
Model
Devstral Small
mistralai/Devstral-Small-2505
Open
Legacy
May 21, 2025
First Devstral. 24B dense, Apache 2.0. Coding-agent fine-tune of Mistral Small 3.1. Superseded by Devstral 2 in December 2025.
  • Released May 21, 2025; the announcement is at mistral.ai/news/devstral.
  • 24B-parameter dense coding-agent model, Apache 2.0, fine-tuned from Mistral Small 3.1 for software-engineering tasks (code generation, refactoring, multi-file edits).
  • Established the Devstral lineage; superseded by Devstral 2 + Devstral Small 2 in December 2025.
Model
Mistral Medium 3
mistral-medium-3-2505 — closed-weights, API only
Proprietary
Available
May 7, 2025
Mid-tier proprietary flagship. Reported 90%+ of Claude Sonnet 3.7 quality at $0.4 / $2 per M tokens. La Plateforme + hyperscalers.
  • Released May 7, 2025; the announcement is at mistral.ai/news/mistral-medium-3.
  • Mistral characterized the model as performing at or above 90% of Claude Sonnet 3.7 on broad benchmarks at $0.4 / M input tokens, $2 / M output tokens — a deliberate “medium is the new large” mid-tier positioning.
  • Proprietary, closed-weights. Available on Mistral La Plateforme, Amazon SageMaker, IBM watsonx, NVIDIA NIM, Azure AI Foundry, and Google Cloud Vertex AI.
Model
Mistral OCR
mistral-ocr-2503 — closed-weights, API only
Proprietary
Legacy
Mar 2025
First Mistral OCR. Document parsing into structured Markdown. Superseded by Mistral OCR 3 in December 2025.
  • Released March 2025 as mistral-ocr-2503; the original first-party OCR / structured-document model.
  • Proprietary, closed-weights, API only via la Plateforme. Targets PDF and image document parsing into structured Markdown with table / formula recovery.
  • Superseded by Mistral OCR 3 in December 2025; status is Legacy on the la-Plateforme-deprecation reading.
Model
Mistral Saba
mistral-saba-2502 — closed-weights, API only
Proprietary
Available
Feb 2025
Regional model trained for Middle East and South Asia. Arabic, Tamil, and South Asian language coverage. Proprietary.
  • Released February 2025 as mistral-saba-2502; targets Arabic, Tamil, and South Asian language coverage.
  • Proprietary, closed-weights, API only. Distributed primarily through la Plateforme and partner clouds in the target regions.
  • The first Mistral release explicitly positioned as a regional / language-specialist model rather than a global flagship.
Model
Mistral Small 3 (+ 3.1)
mistralai/Mistral-Small-{24B-Instruct-2501, 3.1-24B-Instruct-2503}
Open
Legacy
Jan 30, 2025
24B dense, Apache 2.0. Marked the lab's commitment to permissive licensing across all open releases — the start of the Mistral 3 era.
  • Released January 30, 2025; the announcement is at mistral.ai/news/mistral-small-3; Simon Willison's launch coverage.
  • 24B dense parameters, Apache 2.0; Mistral characterized it as competitive with Llama 3.3 70B and Qwen 32B at three-times-faster inference on the same hardware.
  • The load-bearing licensing decision for the Mistral story — the first Mistral release after Mistral Large 2 / Pixtral Large that returned the line to Apache 2.0 instead of the Mistral Research License. Established the pattern that the December 2025 “Mistral 3” family relaunch then formalized.
  • Mistral Small 3.1 followed in March 2025, adding multimodal vision capability and improved long-context handling, also under Apache 2.0. The two are folded into a single row here because 3.1 is an iterative refresh of the same lineage.
  • Superseded as the recommended Small by Mistral Small 4 in March 2026; status is Legacy on the “no longer the recommended Small” reading.

The Apache 2.0 turn — January 30, 2025. Above this line: every open Mistral release from Small 3 onward ships under Apache 2.0, and the “Mistral 3” family relaunch in December 2025 formalized the commitment across every scale (Mistral Large 3 included). Below: the bespoke Mistral Research License (MRL) era — Codestral 22B (May 2024), Mistral Large 2 (July 2024), and Pixtral Large (November 2024) shipped with weights public but commercial use carved out, requiring a separate Mistral Commercial License. Apache 2.0 open releases (Mistral 7B, the Mixtral pair, Mistral NeMo, the original Ministral, Pixtral 12B, Codestral Mamba, Mathstral) continued throughout the MRL era as parallel options.

Model
Pixtral Large
mistralai/Pixtral-Large-Instruct-2411
Research
Available
Nov 18, 2024
124B multimodal flagship built on Mistral Large 2. 128K context. Mistral Research License (research) + Mistral Commercial License (production).
  • Released November 18, 2024; HuggingFace card: mistralai/Pixtral-Large-Instruct-2411.
  • 124B-parameter multimodal model built on top of Mistral Large 2, with vision capabilities trained on top of the dense 123B language tower. 128K-token context window.
  • License: Mistral Research License (MRL) for research and educational use, Mistral Commercial License for production. Weights public on HuggingFace; commercial deployment requires a separate license from Mistral.
  • Last MRL-licensed Mistral release before the January 30, 2025 Apache 2.0 turn at Mistral Small 3.
Model
Ministral 3B / Ministral 8B
mistralai/Ministral-{3B, 8B}-Instruct-2410
Open
Legacy
Oct 16, 2024
First Ministral. Two edge-targeted dense sizes. Apache 2.0 (8B); Mistral Commercial License (3B). Superseded by Ministral 3 in December 2025.
  • Released October 16, 2024 on Mistral's first anniversary; HuggingFace cards on the mistralai org.
  • Two edge-targeted dense sizes — 3B and 8B — designed for on-device and laptop-scale deployment. The 8B used a sliding-window attention pattern for long-context efficiency.
  • License: 8B was Apache 2.0; 3B shipped under the Mistral Commercial License. Filed under Open on this page because the more widely-deployed 8B variant is Apache 2.0; 3B's commercial-license terms are noted in the row's expansion.
  • Superseded by the Ministral 3 family (3B / 8B / 14B, all Apache 2.0) in December 2025; status is Legacy.
Model
Pixtral 12B
mistralai/Pixtral-12B-2409
Open
Available
Sep 11, 2024
First Pixtral. 12B vision-language model with 400M-parameter vision encoder. Apache 2.0. Built on Mistral NeMo.
  • Released September 11, 2024; HuggingFace card: mistralai/Pixtral-12B-2409.
  • 12B-parameter vision-language model built on Mistral NeMo's 12B dense base, with a 400M-parameter vision encoder. Apache 2.0. Image inputs at variable aspect ratios and resolutions.
  • First Pixtral release; Pixtral Large (November 2024, MRL) is the larger sibling.
Model
Mistral Large 2
mistralai/Mistral-Large-Instruct-2407
Research
Legacy
Jul 24, 2024
123B dense flagship. 128K context. Improved reasoning, math, coding, and multilingual capability. The MRL-licensed flagship of 2024.
  • Released July 24, 2024; HuggingFace card: mistralai/Mistral-Large-Instruct-2407.
  • 123 billion dense parameters, 128K-token context window; substantial improvements in reasoning, math, coding, and multilingual capability over the original Mistral Large.
  • License: Mistral Research License (MRL) — weights public, commercial use requires a separate Mistral Commercial License. The flagship MRL release of 2024.
  • Available on Mistral La Plateforme, Azure AI Foundry, Amazon Bedrock, IBM watsonx, Google Cloud Vertex AI. Superseded by Mistral Large 3 in December 2025.
Model
Mistral NeMo
mistralai/Mistral-Nemo-Instruct-2407
Open
Available
Jul 18, 2024
12B dense, Apache 2.0. Built in collaboration with NVIDIA. 128K context. Tekken tokenizer (~30% more efficient on natural languages).
  • Released July 18, 2024 in collaboration with NVIDIA; HuggingFace card: mistralai/Mistral-Nemo-Instruct-2407.
  • 12B dense parameters, Apache 2.0. 128K-token context window — the lab's first 128K-context release.
  • Introduced the Tekken tokenizer, ~30% more efficient on natural languages and ~2× more efficient on Korean and Arabic than the prior Mistral tokenizer.
  • Optimized for FP8 inference on NVIDIA hardware; co-released as part of NVIDIA NIM. The Pixtral 12B vision model later in 2024 was built on this base.
Model
Codestral Mamba + Mathstral
mistralai/Mamba-Codestral-7B-v0.1, mistralai/mathstral-7B-v0.1
Open
Legacy
Jul 16, 2024
Specialized 7B pair. Codestral Mamba uses Mamba state-space architecture; Mathstral targets STEM reasoning. Both Apache 2.0.
  • Released July 16, 2024 as a pair of specialized 7B models, both Apache 2.0.
  • Codestral Mamba — the first Mistral release using Mamba state-space architecture instead of transformer attention, delivering linear-time inference for long-context coding workloads.
  • Mathstral — a 7B math-and-STEM-reasoning specialist released in collaboration with Project Numina.
  • Both superseded as practical recommendations by the general-purpose Mistral Small 3 and Mistral Large 3 instruct lines over 2025; status is Legacy.
Model
Codestral 22B
mistralai/Codestral-22B-v0.1
Research
Legacy
May 29, 2024
First Codestral. 22B dense coding model, 80+ languages. Mistral AI Non-Production License (MNPL). Debuted the bespoke MRL-style licensing.
  • Released May 29, 2024 as the first Codestral; HuggingFace card: mistralai/Codestral-22B-v0.1.
  • 22B dense coding model trained on 80+ programming languages; 32K-token context.
  • License: the bespoke Mistral AI Non-Production License (MNPL) — weights public on HuggingFace, but commercial / production use requires a separate Mistral Commercial License. The first Mistral release under the bespoke non-OSI licensing pattern that Mistral Large 2 (July 2024) and Pixtral Large (November 2024) later carried as the Mistral Research License (MRL).
  • Superseded as the practical coding recommendation by Devstral / Devstral 2 by mid-2025; status is Legacy.
Model
Mixtral 8x22B
mistralai/Mixtral-8x22B-Instruct-v0.1
Open
Legacy
Apr 17, 2024
Larger MoE successor to Mixtral 8x7B. 39B active / 141B total. 64K context. Native function calling. Apache 2.0.
  • Released April 17, 2024; the announcement is at mistral.ai/news/mixtral-8x22b.
  • Sparse Mixture-of-Experts with 39B active parameters out of 141B total; 64K-token context window. Native function-calling support.
  • License: Apache 2.0. Faster than any dense 70B model at the time of release while reportedly outperforming most open-weight models in its size range.
  • Final release in the original Mixtral lineage; subsequent flagship MoE work shipped under the Mistral Large naming and the Mistral 3 family. Status is Legacy.
Model
Mistral Large (original)
mistral-large-2402 — closed-weights, API only
Proprietary
Legacy
Feb 26, 2024
First Mistral Large. Closed-weights. Launched alongside the Microsoft Azure partnership and a $16M Microsoft investment.
  • Released February 26, 2024; closed-weights, API only via la Plateforme as mistral-large-2402.
  • Launched alongside the Microsoft Azure strategic partnership — Mistral Large became available on Azure AI, and Microsoft made a $16 million financial commitment as part of a multi-year deal. The partnership drew immediate French-government and EU-Commission scrutiny; the EU briefly opened (and then closed) an antitrust look at the deal. Euronews coverage.
  • Performance was characterized at launch as competitive with GPT-4 on broad benchmarks. Multilingual training across French, German, Spanish, Italian, English, with native function calling.
  • Superseded by Mistral Large 2 (July 2024) and Mistral Large 3 (December 2025); status is Legacy.

The Microsoft partnership and the proprietary tier — February 26, 2024. Above this line: Mistral runs a hybrid commercial strategy with a proprietary API tier (Mistral Large → Large 2 → Saba → OCR → Medium 3 → OCR 3) alongside open-weights releases under Apache 2.0 or the Mistral Research License. Below: the founding lineage — Mistral 7B, Mixtral 8x7B, and the original Mistral Medium — that established the lab's open-source identity in the five months between the September 2023 founding-model release and the Microsoft deal.

Model
Mixtral 8x7B
mistralai/Mixtral-8x7B-Instruct-v0.1
Open
Legacy
Dec 11, 2023
First Mistral MoE. 13B active / 47B total. 32K context. Apache 2.0. The model that mainstreamed sparse Mixture-of-Experts in open weights.
  • Released December 11, 2023 via a magnet link posted to X with no announcement post; the formal blog post followed at mistral.ai/news/mixtral-of-experts; paper at arXiv 2401.04088.
  • Sparse Mixture-of-Experts with ~13B active parameters out of 47B total (8 experts, top-2 routing per token); 32K-token context window.
  • License: Apache 2.0. Reported parity with GPT-3.5 / Llama 2 70B on broad benchmarks at substantially lower active-compute cost.
  • The model that mainstreamed sparse Mixture-of-Experts in the open-weights ecosystem and seeded the architecture pattern that DeepSeek-V2 / Llama 4 / Mistral Large 3 later refined at scale.
Model
Mistral Medium (original)
mistral-medium-2312 — closed-weights, API only
Proprietary
Legacy
Dec 11, 2023
First proprietary Mistral. GPT-3.5-class quality. Closed-weights. The first hint at the hybrid open-and-proprietary strategy.
  • Released December 11, 2023 alongside the Mixtral 8x7B announcement; closed-weights, API only via la Plateforme as mistral-medium-2312.
  • Characterized at launch as GPT-3.5-class on broad benchmarks. The lab's first proprietary release — the early signal that Mistral's commercial strategy would include a closed-weights tier alongside the open-weights lineage.
  • Superseded by Mistral Large (February 2024) and the Medium-3 / Magistral-Medium product line; status is Legacy.
Model
Mistral 7B
mistralai/Mistral-7B-v0.1
Open
Legacy
Sep 27, 2023
The founding model. 7.3B dense parameters. Grouped-Query + Sliding-Window Attention. 8K context. Apache 2.0. Released via magnet link.
  • Released September 27, 2023 via a magnet link posted to X; the formal announcement is at mistral.ai/news/announcing-mistral-7b; paper at arXiv 2310.06825.
  • 7.3 billion dense parameters; 8,192-token context window; introduced Grouped-Query Attention and Sliding-Window Attention for inference efficiency.
  • Reported to outperform Llama 2 13B across benchmarks and to match a hypothetical Llama 2 model more than 3× its size at launch — the result that established Mistral's reputation in the open-weights ecosystem.
  • License: Apache 2.0. The lab's first release, five months after the April 2023 founding. Architecturally and historically the foundation of every later Mistral release.

Click any row to expand. Each row has a stable id for sharing — e.g. /ai/mistral/versions/#mistral-large-3, #mistral-7b, #mixtral-8x7b, #magistral. Mistral news: mistral.ai/news; docs changelog: docs.mistral.ai/getting-started/changelog; HuggingFace org: huggingface.co/mistralai; legal center: legal.mistral.ai.

The April 2023 founding

Mistral AI was incorporated in Paris on April 28, 2023 by three French AI researchers in their early thirties. Arthur Mensch (CEO) had been a researcher at Google DeepMind, where he co-authored the Chinchilla scaling-law and RETRO work; Guillaume Lample (Chief Scientist) and Timothée Lacroix (CTO) had worked on the LLaMA paper at Meta. The three had attended École polytechnique together a decade earlier.

Mistral's seed round closed in June 2023 at €105 million ($117M USD) — an unusual scale for a four-week-old company, valuing the lab at roughly $260M pre-product. Lead investor was Lightspeed Venture Partners; the syndicate included Eric Schmidt, Xavier Niel (Iliad / Free), JCDecaux Holding, and Bpifrance (the French sovereign investment bank). The seed-round valuation drew sustained press coverage as a marker of European AI capital commitment.

The first model, Mistral 7B, shipped on September 27, 2023 — five months after incorporation. It was released via a magnet link posted to X, a deliberate stylistic choice that the lab repeated for Mixtral 8x7B in December 2023. The release-by-torrent pattern signaled an open-source identity that the subsequent licensing turn at the Microsoft partnership and the Mistral Research License would complicate.

The funding arc — through ASML's September 2025 Series C

After the June 2023 seed, Mistral raised a $415M Series A in December 2023 led by Andreessen Horowitz at a ~$2B valuation. A €600M (~$640M) round in June 2024 mixed equity and debt, led by General Catalyst, valued the lab at roughly $6B. The headline fundraise came on September 9, 2025: a Series C led by Dutch semiconductor-equipment maker ASML at €1.3 billion (~$1.5B) for an undisclosed minority stake, valuing Mistral at €11.7 billion (~$13.8B).

ASML's strategic position in EUV lithography for advanced semiconductor fabrication makes the investment substantively unusual: the deal is widely read as a Europe-on-Europe sovereignty bet, ASML buying a stake in the European frontier-AI lab most likely to consume the next-generation compute that ASML's lithography ultimately enables. Through April 2026, the Series C remains Mistral's largest single round and the largest European AI investment to date. The lab has not publicly disclosed gross revenue or operating margin.

The February 2024 Microsoft partnership

On February 26, 2024, Microsoft and Mistral jointly announced a multi-year strategic partnership coinciding with the launch of Mistral Large on Azure AI. The deal carried a $16 million financial commitment from Microsoft and made Mistral models available natively on Azure as a preferred-partner offering, alongside OpenAI's models.

The partnership drew immediate French-government and EU regulatory scrutiny. The European Commission opened a brief antitrust look at whether the Microsoft commitment constituted a notifiable concentration under EU merger control; the Commission ultimately concluded it did not. Critics in Euronews coverage framed the deal as compromising the “European AI sovereignty” positioning the French government had used to justify its support of Mistral, including in the EU AI Act lobbying campaign covered in the next section.

Mensch's response (in a February 2024 Time interview and subsequent media appearances) framed the Microsoft deal as a commercial-distribution arrangement that did not affect Mistral's research direction or its open-source releases. The empirical record bears this out partially — the Mixtral 8x22B Apache 2.0 release in April 2024 followed two months later, and the Mistral 3 family relaunch in December 2025 re-committed the line to permissive open-source licensing across every scale — though the Mistral Large proprietary tier on Azure has remained a continuous commercial product.

The EU AI Act lobbying narrative

Through late 2023 and into early 2024, Mistral mounted a sustained lobbying campaign against the European Parliament's tiered-foundation-model proposal in the EU AI Act, which would have imposed compliance and transparency obligations on general-purpose AI providers above defined capability thresholds. The campaign was led by Cédric O, France's former Secretary of State for Digital Affairs (2019–2022), who had joined Mistral as a strategic advisor and lobbyist after leaving government. Reporting at the time in Corporate Europe Observatory documented the privileged-access dynamic between Mistral and the highest levels of the French and German governments.

Mistral's stated position (in a November 2023 TechCrunch interview) was that the AI Act should regulate applications of AI rather than foundation models — that capability thresholds applied at the model-developer level would chill European open-source frontier-model development and entrench U.S. incumbents whose compliance teams were already large enough to absorb the regulatory cost. Critics countered that the “regulate applications, not models” framing aligned conveniently with the lobbying interests of the largest model developers themselves, which by 2025 included Mistral.

The final EU AI Act, agreed in December 2023 and entering force in August 2024, includes obligations on general-purpose AI providers but carves out broad exemptions for open-source models that do not pose “systemic risk.” The exemptions are widely credited — including by Corporate Europe Observatory's investigation — to the lobbying campaign Mistral led. The systemic-risk threshold, by contrast, captures the largest models and so applies to Mistral Large 2 / Mistral Large 3 onward; how the Code of Practice and the AI Office's compute-threshold methodology will treat each Mistral release is an ongoing compliance question for the lab.

The licensing turn — from MRL back to Apache 2.0

Mistral's licensing has evolved across three distinct conventions. The founding lineage — Mistral 7B (September 2023), Mixtral 8x7B (December 2023), Mixtral 8x22B (April 2024), Mistral NeMo (July 2024), Codestral Mamba and Mathstral (July 2024), Pixtral 12B (September 2024) — shipped under the standard Apache 2.0: weights public, commercial use unrestricted. In parallel, the proprietary tier (Mistral Medium December 2023, Mistral Large February 2024 onward) shipped closed-weights via la Plateforme.

The licensing innovation arrived in May 2024 with Codestral 22B, which introduced the bespoke Mistral AI Non-Production License (MNPL): weights public on HuggingFace, but commercial / production deployment requires a separate Mistral Commercial License. The MNPL pattern was generalized into the Mistral Research License (MRL), which was applied to Mistral Large 2 (July 2024) and Pixtral Large (November 2024). The MRL/MNPL pattern is the one tracked by the violet “Research” family pill on this page.

The licensing turn arrived on January 30, 2025 with Mistral Small 3, which returned to Apache 2.0. Every subsequent Mistral open release through April 2026 has shipped under Apache 2.0: Mistral Small 3.1, Devstral, Devstral 2 / Devstral Small 2, Magistral Small, Mistral Large 3, the Ministral 3 family (3B / 8B / 14B), and Mistral Small 4. The December 2025 “Mistral 3” family relaunch formalized the Apache 2.0 commitment across every scale — including, notably, the 675B-total-parameter Mistral Large 3 frontier flagship, which is the largest open-weights Mistral model to date and was explicitly framed at launch as a return to permissive open-source.

The proprietary tier remains active in parallel: Mistral Medium 3 (May 2025), Mistral Saba (February 2025), Mistral OCR / OCR 3, Magistral Medium, and Voxtral TTS (March 2026) all ship closed-weights via la Plateforme. The hybrid model — Apache 2.0 open releases for the public-facing flagship lineage, proprietary closed-weights for specialized commercial products — is the licensing equilibrium Mistral has settled into.

The 2026 release cadence and the NVIDIA partnership

Mistral's release rhythm accelerated through early 2026. In March 2026 alone, the lab shipped five major launches in roughly two weeks: Mistral Small 4 (March 16), Voxtral TTS (March 23, the lab's first audio model), Forge, the Spaces CLI, and a publicly-announced NVIDIA partnership for inference-stack optimization across the Mistral lineage on NVIDIA Blackwell hardware. (The Devstral Vibe CLI, sometimes grouped with this wave, shipped earlier, alongside Devstral 2 in December 2025.)

The cadence reflects the engineering capacity unlocked by the September 2025 ASML round and the Microsoft / NVIDIA / hyperscaler distribution deals collectively. Whether the pace is sustainable through 2026 is an open question: the prior release rhythm averaged roughly 6–10 model rows per year, and the March 2026 wave looks like an outlier rather than a new steady state.

Where to run Mistral

Mistral is widely deployed because most flagship releases are open-weights and the proprietary releases are available across every major hyperscaler. Inference paths through 2025–2026 break into four categories.

Mistral's own surfaces. Le Chat is the consumer chat product (free + paid tiers, iOS / Android / web). La Plateforme is the developer API endpoint (OpenAI-compatible).

Self-host from HuggingFace. Download from the mistralai org and run with vLLM, llama.cpp, Ollama, or NVIDIA TensorRT-LLM; a minimal vLLM sketch follows the last category below. The Apache 2.0 releases (most of the line, Mistral 7B through Mistral Small 4) self-host without commercial restriction; the MRL releases (Codestral 22B, Mistral Large 2, Pixtral Large) require a separate commercial license for production use.

Hyperscalers. Microsoft Azure AI Foundry (launch partner since February 2024), AWS Bedrock, Google Cloud Vertex AI, IBM watsonx, Oracle OCI, NVIDIA NIM (Mistral NeMo co-launch partner since July 2024). Most hyperscalers carry both the proprietary and the open-weights lineage; the per-model availability matrix lives at docs.mistral.ai.

Hosted-inference providers. Together AI, Fireworks, OpenRouter, Replicate, Groq. Most providers serve the Apache 2.0 lineage with similar latency / cost characteristics; the MRL-licensed weights (Mistral Large 2, Pixtral Large) typically require an additional commercial-license attestation from the provider.
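
A minimal self-host sketch using vLLM's offline Python API, as referenced above. The model id is taken from this page's identifier block; whether a given card loads out of the box depends on the installed vLLM version's support for that release, so treat the snippet as illustrative rather than guaranteed.

# Self-host sketch: run an Apache 2.0 Mistral card locally with vLLM.
# Assumes the installed vLLM version supports this card.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Devstral-Small-2-2512")
params = SamplingParams(temperature=0.7, max_tokens=256)

# chat() applies the model's own chat template before generating.
outputs = llm.chat(
    [{"role": "user", "content": "Refactor this loop into a list comprehension."}],
    sampling_params=params,
)
print(outputs[0].outputs[0].text)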

People who shaped Mistral

Arthur Mensch — co-founder and CEO. PhD from École normale supérieure / INRIA; researcher at Google DeepMind 2020–2023 working on RETRO and Chinchilla. The face of Mistral in EU and US policy debates; profiled in Time and the 20VC podcast.

Guillaume Lample — co-founder and Chief Scientist. PhD from Sorbonne / Facebook AI Research; co-author of the Meta LLaMA paper. Leads the model-architecture and pretraining work behind every Mistral release; a co-author of the Mistral 7B paper.

Timothée Lacroix — co-founder and CTO. PhD from École polytechnique / Facebook AI Research; co-author of the Meta LLaMA paper. Leads the engineering and infrastructure work, including the la Plateforme API, the inference stack, and the hyperscaler integrations.

Cédric O — strategic advisor and lobbyist. France's former Secretary of State for Digital Affairs (2019–2022); has led Mistral's EU AI Act lobbying campaign since 2023, with documented privileged access to the highest levels of French and German government per Corporate Europe Observatory's reporting. Not a corporate officer; Mistral's regulatory voice in Brussels.

The competitive landscape

Mistral occupies a distinctive position: the leading European frontier-AI lab, with a hybrid commercial strategy that puts open-weights flagships (Apache 2.0 from Mistral Small 3 onward) alongside a proprietary API tier on la Plateforme and the hyperscalers. The closest open-weights competitors are Meta's Llama (custom Llama Community License with the >700M-MAU carve-out, see Llama Versions), DeepSeek (Chinese, MIT-licensed for the V3 / R1 line and onward, see DeepSeek Versions), and Alibaba's Qwen (Apache-2.0-or-permissive across most releases — see Qwen Versions). The closed-weights frontier competitors — ChatGPT, Claude, Gemini, Grok — have all stayed closed-weights since their inception. Mistral's regulatory and political position in Europe is the variable that distinguishes the line from every other frontier lab; whether the EU AI Act's Code of Practice and the AI Office's enforcement methodology will treat European model providers more or less stringently than U.S. and Chinese ones remains the open structural question for the lab through 2026. This page does not attempt a benchmark roundup or a ranking.

Use Mistral

The browser cannot detect which Mistral model you've used — there's no fingerprint or header that exposes it. The block below carries the practical information instead: the current model identifiers, a copy-paste API call, the surfaces where Mistral is available, and the licensing summary.

Current model identifiers

HuggingFace ids on the mistralai org first; la Plateforme model strings below. Verify against docs.mistral.ai/getting-started/changelog and huggingface.co/mistralai for the freshest list.

# Open-weights flagship line (Apache 2.0)
mistralai/Mistral-Small-4
mistralai/Mistral-Large-3-2512
mistralai/Ministral-{3B, 8B, 14B}-2512        # base + instruct + reasoning

# Open-weights specialized (Apache 2.0)
mistralai/Devstral-{Small-2-2512, 2-2512}     # coding agents
mistralai/Magistral-Small-2506                 # reasoning
mistralai/Pixtral-12B-2409                     # vision

# la Plateforme model strings (open + proprietary)
mistral-large-3-2512        # open-weights, served on la Plateforme too
mistral-medium-3-2505       # proprietary
mistral-ocr-2512            # proprietary, OCR 3
magistral-medium-2506       # proprietary reasoning preview
voxtral-tts-2603            # proprietary audio

Quick API call (OpenAI-compatible)

la Plateforme is OpenAI-API-compatible — point any OpenAI SDK at the Mistral base URL with a Mistral API key. Replace the placeholder values before running.

$ curl https://api.mistral.ai/v1/chat/completions \
    -H "Authorization: Bearer $MISTRAL_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{
      "model":    "mistral-large-3-2512",
      "messages": [{ "role": "user", "content": "Hello, Mistral." }]
    }'
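
The same request through the OpenAI Python SDK, as a sketch. It assumes only what the paragraph above states: an OpenAI-compatible endpoint at api.mistral.ai/v1 and a Mistral API key in the environment.

import os
from openai import OpenAI

# Standard OpenAI client pointed at la Plateforme's compatible endpoint.
client = OpenAI(
    base_url="https://api.mistral.ai/v1",
    api_key=os.environ["MISTRAL_API_KEY"],
)

response = client.chat.completions.create(
    model="mistral-large-3-2512",
    messages=[{"role": "user", "content": "Hello, Mistral."}],
)
print(response.choices[0].message.content)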

Where to run Mistral

Four categories — Mistral's own surfaces, self-host from HuggingFace, hyperscalers, and hosted-inference providers. Pricing varies by provider; the open weights are the same across all of them.

# Mistral first-party
https://chat.mistral.ai/                    # Le Chat consumer chat
https://api.mistral.ai/                     # la Plateforme OpenAI-compatible API

# Self-host from HuggingFace
https://huggingface.co/mistralai            # every model card lives here
https://github.com/vllm-project/vllm        # production-grade throughput
https://github.com/ggerganov/llama.cpp      # CPU + GPU, edge-friendly
https://ollama.com/                         # single-binary, easiest entry

# Hyperscalers
Azure AI Foundry, AWS Bedrock, Google Cloud Vertex AI, IBM watsonx, Oracle OCI, NVIDIA NIM

# Hosted-inference providers
https://www.together.ai/
https://fireworks.ai/
https://openrouter.ai/
https://replicate.com/

Licensing

Three license tiers across the line. Read the legal.mistral.ai page for the relevant model before shipping at scale.

# Apache 2.0 — commercial use unrestricted
Mistral 7B, Mixtral 8x7B, Mixtral 8x22B
Mistral NeMo, Pixtral 12B, Codestral Mamba, Mathstral
Ministral 8B (3B is Mistral Commercial)
Mistral Small 3, 3.1, 4
Devstral, Devstral 2 / Devstral Small 2
Magistral Small
Mistral Large 3, Ministral 3 family

# Mistral Research License (MRL/MNPL) — commercial requires separate license
Codestral 22B (MNPL)
Mistral Large 2 (MRL)
Pixtral Large (MRL)

# Proprietary — closed-weights, API only via la Plateforme
Mistral Medium (original), Mistral Large (original)
Mistral Saba, Mistral OCR, Mistral OCR 3
Mistral Medium 3, Magistral Medium, Voxtral TTS

# Per-model AI-governance pages
https://legal.mistral.ai/
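
For automated checks before deployment, each HuggingFace card also exposes its license as machine-readable metadata. A sketch with the huggingface_hub client; it assumes the card's license tag is filled in, which is the convention on the mistralai org but not a guarantee.

from huggingface_hub import model_info

# HF model cards carry tags of the form "license:apache-2.0".
# Substitute the card you intend to deploy.
info = model_info("mistralai/Mistral-Small-4")
print([t for t in info.tags if t.startswith("license:")])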

Sources: mistral.ai/news; docs.mistral.ai changelog; legal.mistral.ai; huggingface.co/mistralai; research papers on arXiv (Mistral 7B, Mixtral, Magistral); contemporaneous reporting in NYT, FT, Bloomberg, Le Monde, Reuters, TechCrunch, The Information, Time, Euronews, VentureBeat, Corporate Europe Observatory. Last updated April 2026.

Mungomash LLC · More AI pages