2023 – 2026
Mistral Versions
Every Mistral release — Mistral 7B (September 2023) through Mistral Small 4 (March 2026) and Mistral Large 3 (December 2025) — with HuggingFace ids, ship dates, family (Open / Research / Proprietary), license terms, and the major changes per version. Plus the April 2023 founding by Mensch, Lample, and Lacroix, the funding arc through ASML's September 2025 €1.3B Series C, the February 2024 Microsoft partnership and the EU regulatory friction, the EU AI Act lobbying narrative, and the January 2025 Apache 2.0 turn at Mistral Small 3.
The April 2023 founding
Mistral AI was incorporated in Paris on April 28, 2023 by three French AI researchers in their early thirties. Arthur Mensch (CEO) had been a researcher at Google DeepMind, where he worked on the RETRO and Chinchilla projects; Guillaume Lample (Chief Scientist) and Timothée Lacroix (CTO) had worked on the LLaMA paper at Meta. The three had attended École polytechnique together a decade earlier.
Mistral's seed round closed in June 2023 at €105 million ($117M USD) — an unusual scale for a four-week-old company, valuing the lab at roughly $260M pre-product. Lead investor was Lightspeed Venture Partners; the syndicate included Eric Schmidt, Xavier Niel (Iliad / Free), JCDecaux Holding, and Bpifrance (the French sovereign investment bank). The seed-round valuation drew sustained press coverage as a marker of European AI capital commitment.
The first model, Mistral 7B, shipped on September 27, 2023 — five months after incorporation. It was released via a magnet link posted to X, a deliberate stylistic choice that the lab repeated for Mixtral 8x7B in December 2023. The release-by-torrent pattern signaled an open-source identity that the subsequent licensing turn at the Microsoft partnership and the Mistral Research License would complicate.
The funding arc — through ASML's September 2025 Series C
After the June 2023 seed, Mistral raised a $415M Series A in December 2023 led by Andreessen Horowitz at a ~$2B valuation. A €600M (~$640M) round in June 2024 mixed equity and debt, led by General Catalyst, valued the lab at roughly $6B. The headline fundraise came on September 9, 2025: a Series C led by Dutch semiconductor company ASML at €1.3 billion (~$1.5B) for an undisclosed minority stake, valuing Mistral at €11.7 billion (~$13.8B).
ASML's strategic position in EUV lithography for advanced semiconductor fabrication makes the investment substantively unusual: the deal is widely read as a Europe-on-Europe sovereignty bet, with ASML buying a stake in the European frontier-AI lab most likely to consume the next-generation compute that ASML's lithography ultimately enables. Through April 2026, the Series C remains Mistral's largest single round and the largest European AI investment to date. The lab has not publicly disclosed gross revenue or operating margin.
The February 2024 Microsoft partnership
On February 26, 2024, Microsoft and Mistral jointly announced a multi-year strategic partnership coinciding with the launch of Mistral Large on Azure AI. The deal carried a $16 million financial commitment from Microsoft and made Mistral models available natively on Azure as a preferred-partner offering, alongside OpenAI's models.
The partnership drew immediate scrutiny from the French government and EU regulators. The European Commission briefly examined whether the Microsoft commitment constituted a notifiable concentration under EU merger control and ultimately concluded it did not. Critics in Euronews coverage framed the deal as compromising the “European AI sovereignty” positioning the French government had used to justify its support of Mistral, including in the EU AI Act lobbying campaign covered in the next section.
Mensch's response (in a February 2024 Time interview and subsequent media appearances) framed the Microsoft deal as a commercial-distribution arrangement that did not affect Mistral's research direction or its open-source releases. The record partially bears this out: the Mixtral 8x22B Apache 2.0 release followed two months later, in April 2024, and the Mistral 3 family relaunch in December 2025 re-committed the line to permissive open-source licensing at every scale, though the proprietary Mistral Large tier on Azure has remained a continuous commercial product.
The EU AI Act lobbying narrative
Through late 2023 and into early 2024, Mistral mounted a sustained lobbying campaign against the European Parliament's tiered-foundation-model proposal in the EU AI Act, which would have imposed compliance and transparency obligations on general-purpose AI providers above defined capability thresholds. The campaign was led by Cédric O, France's former Secretary of State for Digital Affairs (2019–2022), who had joined Mistral as a strategic advisor and lobbyist after leaving government. Reporting at the time in Corporate Europe Observatory documented the privileged-access dynamic between Mistral and the highest levels of the French and German governments.
Mistral's stated position (in a November 2023 TechCrunch interview) was that the AI Act should regulate applications of AI rather than foundation models — that capability thresholds applied at the model-developer level would chill European open-source frontier-model development and entrench U.S. incumbents whose compliance teams were already large enough to absorb the regulatory cost. Critics countered that the “regulate applications, not models” framing aligned conveniently with the lobbying interests of the largest model developers themselves, which by 2025 included Mistral.
The final EU AI Act, agreed in December 2023 and in force since August 2024, imposes obligations on general-purpose AI providers but carves out broad exemptions for open-source models that do not pose “systemic risk.” The exemptions are widely credited, including by Corporate Europe Observatory's investigation, to the lobbying campaign Mistral led. The systemic-risk threshold, by contrast, captures the largest models and so applies from Mistral Large 2 and Mistral Large 3 onward; how the Code of Practice and the AI Office's compute-threshold methodology will treat each Mistral release is an ongoing compliance question for the lab.
The licensing turn — from MRL back to Apache 2.0
Mistral's licensing has evolved across three distinct conventions. The founding lineage — Mistral 7B (September 2023), Mixtral 8x7B (December 2023), Mixtral 8x22B (April 2024), Mistral NeMo (July 2024), Codestral Mamba and Mathstral (July 2024), Pixtral 12B (September 2024) — shipped under the standard Apache 2.0: weights public, commercial use unrestricted. In parallel, the proprietary tier (Mistral Medium December 2023, Mistral Large February 2024 onward) shipped closed-weights via la Plateforme.
The licensing innovation arrived in May 2024 with Codestral 22B, which introduced the bespoke Mistral AI Non-Production License (MNPL): weights public on HuggingFace, but commercial / production deployment requires a separate Mistral Commercial License. The MNPL pattern was generalized into the Mistral Research License (MRL), which was applied to Mistral Large 2 (July 2024) and Pixtral Large (November 2024). The MRL/MNPL pattern is the one tracked by the violet “Research” family pill on this page.
The licensing turn arrived on January 30, 2025 with Mistral Small 3, which returned to Apache 2.0. Every subsequent Mistral open release through April 2026 has shipped under Apache 2.0: Mistral Small 3.1, Devstral, Devstral 2 / Devstral Small 2, Magistral Small, Mistral Large 3, the Ministral 3 family (3B / 8B / 14B), and Mistral Small 4. The December 2025 “Mistral 3” family relaunch formalized the Apache 2.0 commitment across every scale — including, notably, the 675B-total-parameter Mistral Large 3 frontier flagship, which is the largest open-weights Mistral model to date and was explicitly framed at launch as a return to permissive open-source.
The proprietary tier remains active in parallel: Mistral Medium 3 (May 2025), Mistral Saba (February 2025), Mistral OCR / OCR 3, Magistral Medium, and Voxtral TTS (March 2026) all ship closed-weights via la Plateforme. The hybrid model — Apache 2.0 open releases for the public-facing flagship lineage, proprietary closed-weights for specialized commercial products — is the licensing equilibrium Mistral has settled into.
The 2026 release cadence and the NVIDIA partnership
Mistral's release rhythm accelerated through early 2026. In March 2026 alone, the lab shipped five major products in roughly two weeks: Mistral Small 4 (March 16), the Devstral Vibe CLI, Voxtral TTS (March 23, the lab's first audio model), Forge, and the Spaces CLI, alongside a publicly announced NVIDIA partnership for inference-stack optimization across the Mistral lineage on NVIDIA Blackwell hardware.
The cadence reflects the engineering capacity unlocked by the September 2025 ASML round and the Microsoft / NVIDIA / hyperscaler distribution deals collectively. Whether it is sustainable through 2026 is an open question: the prior release rhythm averaged roughly 6–10 model releases per year, and the March 2026 wave looks like an outlier rather than a new steady state.
Where to run Mistral
Mistral is widely deployed because most flagship releases are open-weights and the proprietary releases are available across every major hyperscaler. Inference paths through 2025–2026 break into four categories.
Mistral's own surfaces. Le Chat is the consumer chat product (free + paid tiers, iOS / Android / web). la Plateforme is the developer API endpoint (OpenAI-compatible).
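As a sketch of what "OpenAI-compatible" means in practice, the request to la Plateforme's chat endpoint has the same shape as an OpenAI chat-completion call. The base URL, endpoint path, and model id below are illustrative assumptions to verify against docs.mistral.ai; the snippet only builds the request rather than sending it.

```python
import json

# Hedged sketch: la Plateforme is described above as OpenAI-compatible, so a
# chat request is a plain JSON POST. The base URL, path, and model id are
# assumptions for illustration; confirm them at docs.mistral.ai.
BASE_URL = "https://api.mistral.ai/v1"

def build_chat_request(model: str, prompt: str, api_key: str) -> tuple[str, dict, bytes]:
    """Return (url, headers, body) for an OpenAI-style chat-completion call."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return url, headers, body

url, headers, body = build_chat_request("mistral-small-latest", "Bonjour!", "sk-...")
print(url)
```

The practical consequence of the compatibility claim is that any OpenAI-style client (for example, the openai Python SDK with its base URL overridden) should be able to target the same endpoint without code changes.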
Self-host from HuggingFace. Download from the mistralai org and run with vLLM, llama.cpp, Ollama, or NVIDIA TensorRT-LLM. The Apache 2.0 releases (everything from Mistral 7B through Mistral Small 4) self-host without commercial restriction; the MRL releases (Codestral 22B, Mistral Large 2, Pixtral Large) require a separate commercial license for production use.
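A minimal self-hosting sketch under stated assumptions: the repo id and flags below are illustrative (confirm exact names under huggingface.co/mistralai), and the actual serve commands are shown as comments because they start long-running servers.

```shell
# Hypothetical self-hosting sketch; the repo id is an assumption to verify.
MODEL="mistralai/Mistral-7B-Instruct-v0.3"

# Typical vLLM path (commented out: it starts a long-running server):
#   pip install vllm
#   vllm serve "$MODEL" --max-model-len 8192
#
# Typical Ollama path (model tag assumed):
#   ollama run mistral

echo "would serve: $MODEL"
```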
Hyperscalers. Microsoft Azure AI Foundry (Mistral's hyperscaler launch partner since the February 2024 deal), AWS Bedrock, Google Cloud Vertex AI, IBM watsonx, Oracle OCI, and NVIDIA NIM (the Mistral NeMo launch partner since July 2024). Most hyperscalers carry both the proprietary and the open-weights lineage; the per-model availability matrix lives at docs.mistral.ai.
Hosted-inference providers. Together AI, Fireworks, OpenRouter, Replicate, Groq. Most providers serve the Apache 2.0 lineage with similar latency / cost characteristics; the MRL-licensed weights (Mistral Large 2, Pixtral Large) typically require an additional commercial-license attestation from the provider.
People who shaped Mistral
Arthur Mensch — co-founder and CEO. PhD from École normale supérieure / INRIA; researcher at Google DeepMind 2020–2023 working on RETRO and Chinchilla. The face of Mistral in EU and US policy debates; profiled in Time and the 20VC podcast.
Guillaume Lample — co-founder and Chief Scientist. PhD from Sorbonne / Facebook AI Research; co-author of the Meta LLaMA paper. Leads the model-architecture and pretraining work behind every Mistral release; first author on the Mistral 7B paper.
Timothée Lacroix — co-founder and CTO. PhD from École polytechnique / Facebook AI Research; co-author of the Meta LLaMA paper. Leads the engineering and infrastructure work, including la Plateforme, the inference stack, and the hyperscaler integrations.
Cédric O — strategic advisor and lobbyist. France's former Secretary of State for Digital Affairs (2019–2022); has led Mistral's EU AI Act lobbying campaign since 2023, with documented privileged access to the highest levels of French and German government per Corporate Europe Observatory's reporting. Not a corporate officer; Mistral's regulatory voice in Brussels.
The competitive landscape
Mistral occupies a distinctive position: the leading European frontier-AI lab, with a hybrid commercial strategy that puts open-weights flagships (Apache 2.0 from Mistral Small 3 onward) alongside a proprietary API tier on la Plateforme and the hyperscalers. The closest open-weights competitors are Meta's Llama (custom Llama Community License with the >700M-MAU carve-out, see Llama Versions), DeepSeek (Chinese, MIT-licensed for the V3 / R1 line and onward, see DeepSeek Versions), and Alibaba's Qwen (Apache-2.0-or-permissive across most releases — see Qwen Versions). The closed-weights frontier competitors — ChatGPT, Claude, Gemini, Grok — have all stayed closed-weights since their inception. Mistral's regulatory and political position in Europe is the variable that distinguishes the line from every other frontier lab; whether the EU AI Act's Code of Practice and the AI Office's enforcement methodology will treat European model providers more or less stringently than U.S. and Chinese ones remains the open structural question for the lab through 2026. This page does not attempt a benchmark roundup or a ranking.
Sources:
mistral.ai/news;
docs.mistral.ai changelog;
legal.mistral.ai;
huggingface.co/mistralai;
research papers on arXiv (Mistral 7B, Mixtral, Magistral);
contemporaneous reporting in NYT, FT, Bloomberg, Le Monde, Reuters, TechCrunch, The Information, Time, Euronews, VentureBeat, Corporate Europe Observatory.
Last updated April 2026.
Mungomash LLC · More AI pages