Mistral Medium 3.5
Mistral Medium 3.5 is a flagship model designed for instruction-following, reasoning, and coding tasks. It is a dense 128B-parameter model with a 256,000-token (256k) context window, enabling efficient performance in real-world applications.
Details
- Organization: Mistral AI
- Context: 256,000 tokens
- License: Modified MIT
- Hugging Face: https://huggingface.co/models/mistral-medium-3.5
- Website: https://mistral.ai
About this listing
Mistral Medium 3.5 is listed in the explainx.ai LLM directory. It is a flagship model designed for instruction-following, reasoning, and coding tasks: a dense 128B model with a 256k context window, enabling efficient performance in real-world applications. The listing is labeled open-weights / public artifacts, with the publisher field set to Mistral AI and the license recorded as Modified MIT. Structured FAQs below clarify source, weights, and benchmark data. Canonical URL: /llms/mistral-medium-3-5.
FAQ
- What is Mistral Medium 3.5?
- Mistral Medium 3.5 is a flagship model designed for instruction-following, reasoning, and coding tasks: a dense 128B model with a 256k context window, enabling efficient performance in real-world applications. It appears in the explainx.ai LLM marketplace as a discoverability aid. Reported specs on explainx.ai include type: language; scale: 128B; context window (listed): about 256,000 tokens. Links and license data should be verified with the publisher before production use.
- Who created or publishes Mistral Medium 3.5?
- On this listing, the organization or lab field is “Mistral AI” (sourced from the directory import or editor). That usually matches the publisher; confirm on the official model card or vendor site.
- Is Mistral Medium 3.5 open source or closed source?
- The listing is categorized as open-weights or publicly downloadable where the publisher allows it; the recorded license is “Modified MIT”. Closed or gated releases can still appear on Hugging Face—always read the license on the publisher’s page.
- Where can I download weights or find model files for Mistral Medium 3.5?
- This listing points to the Hugging Face model repo (https://huggingface.co/models/mistral-medium-3.5), where files and weight artifacts are typically hosted. explainx.ai does not host weights; download and license terms are set by the publisher on that site.
- What do Arena leaderboard numbers mean for Mistral Medium 3.5?
- This profile does not include Arena benchmark rows yet. You can still use organization, license, and outbound links to evaluate the model.
- Is explainx.ai the publisher of this model?
- No. explainx.ai hosts directory listings for discovery. The publisher is the organization or project behind the linked Hugging Face repo, API, or website. Pricing, safety, and terms are always set by that publisher.
- How does this page help AI search visibility?
- Structured FAQs, FAQPage JSON-LD, breadcrumbs, and answer-first copy follow SEO and GEO (Generative Engine Optimization) practices so search engines and citation-style assistants can summarize this listing accurately.
More on AI-visible pages: SEO + GEO on explainx.ai · Tools directory · Agent skills
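As an illustration of the FAQPage JSON-LD mentioned above, here is a minimal sketch with one question from this listing. The answer text is abbreviated, and the exact markup explainx.ai emits may differ:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is Mistral Medium 3.5?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "A flagship dense 128B model with a 256k context window, listed in the explainx.ai LLM directory."
      }
    }
  ]
}
```

Each additional FAQ entry above would become another `Question` object in the `mainEntity` array.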
Readme
Mistral Medium 3.5 is our first flagship merged model, available in public preview. It is a dense 128B model with a 256k context window, handling instruction-following, reasoning, and coding in a single set of weights. It performs strongly in real-world use, with self-hosting possible on as few as four GPUs. Reasoning effort is now configurable per request, so the same model can answer a quick chat reply or work through a complex agentic run. We trained the vision encoder from scratch to handle variable image sizes and aspect ratios. Mistral Medium 3.5 scores 77.6% on SWE-Bench Verified, ahead of Devstral 2 and models like Qwen3.5 397B A17B. It also has strong agentic capabilities and scores 91.4 on τ³-Telecom.
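The readme says reasoning effort is configurable per request. A minimal sketch of what such a request payload could look like, assuming a chat-completions-style API; the field name `reasoning_effort` and its allowed values are assumptions for illustration, not a documented Mistral API:

```python
# Hypothetical request builder. The "reasoning_effort" field and its
# values ("low", "medium", "high") are assumptions, not confirmed API.

def build_request(prompt: str, reasoning_effort: str = "medium") -> dict:
    """Build a chat-completion style payload for a single request."""
    if reasoning_effort not in {"low", "medium", "high"}:
        raise ValueError(f"unknown reasoning effort: {reasoning_effort!r}")
    return {
        "model": "mistral-medium-3.5",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": reasoning_effort,
    }

# A quick chat reply can use low effort; a complex agentic run, high.
quick = build_request("What is 2 + 2?", reasoning_effort="low")
deep = build_request("Plan a multi-step refactor.", reasoning_effort="high")
```

The point of a per-request knob is that one deployment serves both cheap, fast replies and expensive, deliberate runs without swapping models.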
Listing on explainx.ai. Information may change; verify with the publisher.