Mistral’s first reasoning model, Magistral, launches with large and small Apache 2.0 version



European AI powerhouse Mistral today launched Magistral, a new family of large language models (LLMs) that marks the first from the company to enter the increasingly competitive space of “reasoning,” or models that take time to reflect on their thinking to catch errors and solve more complex tasks than basic text-based LLMs.

The announcement features a strategic dual release: a powerful, proprietary Magistral Medium for enterprise clients, and, notably, a 24-billion parameter open-source version, Magistral Small.

The latter release appears calculated to reinforce the company’s commitment to its foundational roots, following a period where it faced criticism for leaning into more closed, proprietary models such as its Medium 3 for enterprises, launched back in May 2025.

A return to open source roots

In a move that will undoubtedly be celebrated by developers and the wider AI community, Mistral is releasing Magistral Small under the permissive open source Apache 2.0 license.

This is a crucial detail. Unlike more restrictive licenses, Apache 2.0 allows anyone to freely use, modify, and distribute the model’s source code, even for commercial purposes.

This empowers startups and established companies alike to build and deploy their own applications on top of Mistral’s latest reasoning architecture without licensing fees or fear of vendor lock-in.

This open approach is particularly significant given the context. While Mistral built its reputation on powerful open models, its recent release of Medium 3 as a purely proprietary offering drew concern from some quarters of the open-source community, who worried the company was drifting towards a more closed ecosystem, similar to competitors like OpenAI.

The release of Magistral Small under such a permissive license serves as a powerful counter-narrative, reaffirming Mistral’s dedication to arming the open community with cutting-edge tools.

Competitive performance against formidable foes

Mistral isn’t just talking a big game; it came with receipts. The company released a suite of benchmarks pitting Magistral Medium against its own predecessor, Mistral Medium 3, and competitors from DeepSeek. The results show a model that is fiercely competitive in the reasoning arena.

On the AIME-24 mathematics benchmark, Magistral Medium scores an impressive 73.6% accuracy, neck-and-neck with its predecessor and significantly outperforming DeepSeek’s models. When using majority voting (a technique where the model generates multiple answers and the most common one is chosen), its performance on AIME-24 jumps to a staggering 90%.
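Majority voting (also called self-consistency) is straightforward to layer on top of any sampling-based model API. A minimal sketch, with a stubbed sampler standing in for repeated model calls:

```python
from collections import Counter
from itertools import cycle

def majority_vote(sample_answer, n_samples=16):
    """Draw n_samples candidate answers and return the most common one.

    sample_answer: a zero-argument callable that queries the model once
    (with nonzero temperature) and returns its final answer as a string.
    """
    votes = Counter(sample_answer() for _ in range(n_samples))
    answer, count = votes.most_common(1)[0]
    return answer, count / n_samples  # winning answer plus its vote share

# Stub standing in for a real model call: imagine 16 sampled solutions
# to an AIME problem, most of which converge on the same value.
samples = cycle(["204", "204", "197", "204"])
answer, share = majority_vote(lambda: next(samples), n_samples=16)
print(answer, share)  # 204 0.75
```

Replacing the stub with a real API call (one sampled completion per invocation) is all it takes to reproduce the technique behind the 90% figure.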


The new model also holds its own across other demanding tests, including GPQA Diamond, a graduate-level question-answering benchmark, and LiveCodeBench for coding challenges.

While DeepSeek-V3 shows strong performance on some benchmarks, Magistral Medium consistently proves itself to be a top-tier reasoning model, validating Mistral’s claims of its advanced capabilities.

Enterprise power

While Magistral Small caters to the open-source world, the benchmark-validated Magistral Medium is aimed squarely at the enterprise.

Accessible via Mistral’s Le Chat interface and La Plateforme API, it delivers the top-tier performance needed for mission-critical tasks.
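For teams wiring Magistral Medium into their own stack, La Plateforme exposes an HTTP chat-completions API. The sketch below only assembles the request rather than sending it; the endpoint path and the model identifier `magistral-medium-latest` are assumptions to verify against Mistral’s current API documentation:

```python
import json
import os

# Assumed endpoint path; confirm against La Plateforme's API reference.
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_request(prompt, model="magistral-medium-latest"):
    """Assemble (headers, payload) for a chat-completions call.

    The model name here is illustrative; check the models list on
    La Plateforme for the current Magistral identifier.
    """
    headers = {
        "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, payload

headers, payload = build_request("Prove that 2^10 > 10^3.")
print(json.dumps(payload, indent=2))
```

Sending the payload with any HTTP client (and a valid `MISTRAL_API_KEY`) completes the round trip.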

Mistral is making this model available on major cloud platforms, including Amazon SageMaker, with Azure AI, IBM WatsonX, and Google Cloud Marketplace to follow.

This dual-release strategy allows Mistral to have its cake and eat it too: fostering a vibrant ecosystem around its open models while monetizing its most powerful, performance-tested technology for corporate clients.

Cost comparison

When it comes to cost, Mistral is positioning Magistral Medium as a distinct, premium offering, even compared to its own models.

At $2 per million input tokens and $5 per million output tokens, it represents a significant price increase from the older Mistral Medium 3, which costs just $0.40 for input and $2 for output.
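Using the per-million-token prices above, a quick back-of-the-envelope comparison shows how the premium compounds per request, especially since a reasoning model’s chain-of-thought inflates output tokens:

```python
def request_cost(input_tokens, output_tokens, in_price, out_price):
    """Cost in dollars, given per-million-token prices."""
    return (input_tokens / 1_000_000) * in_price \
         + (output_tokens / 1_000_000) * out_price

# Prices from the article ($/M input, $/M output).
MAGISTRAL_MEDIUM = (2.00, 5.00)
MISTRAL_MEDIUM_3 = (0.40, 2.00)

# A reasoning-heavy request: 2k tokens in, 8k tokens out.
# Output pricing dominates the bill once the model "thinks" at length.
print(round(request_cost(2_000, 8_000, *MAGISTRAL_MEDIUM), 4))  # 0.044
print(round(request_cost(2_000, 8_000, *MISTRAL_MEDIUM_3), 4))  # 0.0168
```

Per request the numbers look small, but at scale the roughly 2.6x gap between the two Mistral models is a real budgeting consideration.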

However, when placed against its external rivals, Magistral Medium’s pricing strategy appears highly aggressive. Its input cost matches that of OpenAI’s latest model and sits within the range of Gemini 2.5 Pro, yet its $5 output price substantially undercuts both, which are priced at $8 and upwards of $10, respectively.

[Chart] Magistral API cost compared to other leading LLM reasoners. Credit: VentureBeat made with Google Gemini 2.5 Pro (Preview)

While it is considerably more expensive than specialized models like DeepSeek-Reasoner, it is an order of magnitude cheaper than Anthropic’s flagship Claude Opus 4, making it a compelling value proposition for customers seeking state-of-the-art reasoning without paying the absolute highest market prices.

Reasoning you can view, understand and use

Mistral is pushing three core advantages with the Magistral line: transparency, multilingualism, and speed.

Breaking away from the “black box” nature of many AI models, Magistral is designed to produce a traceable “chain-of-thought.” This allows users to follow the model’s logical path, a critical feature for high-stakes professional fields like law, finance, and healthcare, where conclusions must be verifiable.
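In practice, surfacing a traceable chain-of-thought means separating the model’s reasoning trace from its final answer. Assuming the trace arrives delimited by `<think>...</think>` tags (a common convention among reasoning models, not confirmed here as Magistral’s exact wire format), a minimal parser looks like:

```python
import re

THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(text):
    """Split a response into (reasoning_trace, final_answer).

    Assumes the trace is wrapped in <think> tags; delimiters vary by
    model and API, so verify against the provider's docs.
    """
    match = THINK_RE.search(text)
    if not match:
        return "", text.strip()  # no trace found: treat it all as answer
    trace = match.group(1).strip()
    answer = THINK_RE.sub("", text, count=1).strip()
    return trace, answer

raw = "<think>47 is prime; check divisors up to 7.</think>47 is prime."
trace, answer = split_reasoning(raw)
print(trace)   # the auditable reasoning path
print(answer)  # the final, user-facing conclusion
```

Logging the trace alongside the answer is what makes the verifiability pitch concrete for regulated fields like law and finance.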

Furthermore, these reasoning capabilities are global. Mistral emphasizes the model’s “multilingual dexterity,” highlighting high-fidelity performance in languages including French, Spanish, German, Italian, Arabic, Russian, and Simplified Chinese.

On the performance front, the company claims a major speed boost. A new “Think mode” and “Flash Answers” feature in Le Chat reportedly enables Magistral Medium to achieve up to 10 times the token throughput of competitors, facilitating real-time reasoning at a scale previously unseen.

From code gen to creative strategy and beyond

The applications for Magistral are vast. Mistral is targeting any use case that demands precision and structured thought, from financial modeling and legal analysis to software architecture and data engineering. The company even showcased the model’s ability to generate a one-shot physics simulation, demonstrating its grasp of complex systems.

But it’s not all business. Mistral also recommends the model as a “creative companion” for writing and storytelling, capable of producing work that is either highly coherent or, as the company puts it, “delightfully eccentric.”

With Magistral, Mistral AI is making a strategic play to not just compete, but lead in the next frontier of AI. By re-engaging its open-source base with a powerful, permissively licensed model while simultaneously pushing the envelope on enterprise-grade performance, the company is signaling that the future of reasoning AI will be both powerful and, in a meaningful way, open to all.



This article was curated by memoment.jp from the feed source: Venture Beat AI.

Original article: https://venturebeat.com/ai/mistrals-first-reasoning-model-magistral-launches-with-large-and-small-apache-2-0-version/

© All rights belong to the original publisher.