MiniMax-M2: Advanced AI for Coding and Agent Workflows
MiniMax-M2 is a powerful 230B-parameter MoE (Mixture of Experts) AI model designed specifically for coding and intelligent agent workflows. With its 204K-token context window and strong programming capabilities, it delivers enterprise-grade performance while staying cost-efficient. Released under the Apache 2.0 license, it is fully open-source and ready for commercial use.
What People Are Saying About MiniMax-M2
Hear what developers and AI experts are saying about MiniMax-M2 in YouTube videos and technical reviews

Minimax M2 (Fully Tested): I am switching to this. Better than Claude & GLM-4.6 on Long Running Task
With 230B total and 10B active parameters, I think it's a good fit for local business use. M2 vs. Qwen 3 235B: M2 wins.

New MiniMax M2! FREE and UNLIMITED AI agent 🚀 It works on its own FOR YOU (try it now!)
It helped me with a database I had been stuck on for several days, and it solved it in 5 minutes. Thank you.

RIP Deepseek. We have a new #1 open-source AI model
I need to sleep, it's 3 AM... but then I remember that AI never sleeps.
Performance Comparison with Leading AI Models
See how MiniMax-M2 stands against the world's most advanced AI models across key capabilities and performance metrics.
| Benchmarks | MiniMax-M2 | Claude Sonnet 4 | Claude Sonnet 4.5 | Gemini 2.5 Pro | GPT-5 (thinking) | GLM-4.6 | Kimi K2 0905 | DeepSeek-V3.2 |
|---|---|---|---|---|---|---|---|---|
| SWE-bench Verified | 69.4 | 72.7* | 77.2* | 63.8* | 74.9* | 68* | 69.2* | 67.8* |
| Multi-SWE-Bench | 36.2 | 35.7* | 44.3 | / | / | 30 | 33.5 | 30.6 |
| SWE-bench Multilingual | 56.5 | 56.9* | 68 | / | / | 53.8 | 55.9* | 57.9* |
| Terminal-Bench | 46.3 | 36.4* | 50* | 25.3* | 43.8* | 40.5* | 44.5* | 37.7* |
| ArtifactsBench | 66.8 | 57.3* | 61.5 | 57.7* | 73* | 59.8 | 54.2 | 55.8 |
| BrowseComp | 44 | 12.2 | 19.6 | 9.9 | 54.9* | 45.1* | 14.1 | 40.1* |
| BrowseComp-zh | 48.5 | 29.1 | 40.8 | 32.2 | 65 | 49.5 | 28.8 | 47.9* |
| GAIA (text only) | 75.7 | 68.3 | 71.2 | 60.2 | 76.4 | 71.9 | 60.2 | 63.5 |
| xbench-DeepSearch | 72 | 64.6 | 66 | 56 | 77.8 | 70 | 61 | 71 |
| HLE (w/ tools) | 31.8 | 20.3 | 24.5 | 28.4* | 35.2* | 30.4* | 26.9* | 27.2* |
| τ²-Bench | 77.2 | 65.5* | 84.7* | 59.2 | 80.1* | 75.9* | 70.3 | 66.7 |
| FinSearchComp-global | 65.5 | 42 | 60.8 | 42.6* | 63.9* | 29.2 | 29.5* | 26.2 |
| AgentCompany | 36 | 37 | 41 | 39.3* | / | 35 | 30 | 34 |
Performance benchmarks across different AI agent evaluation metrics
* indicates values directly from official technical reports/blogs | / indicates no data provided
Quick Start Guide
Deploy MiniMax-M2 on your infrastructure using SGLang. Simple 6-step setup for production-ready inference.
Hardware Requirements
Minimum setup for deploying MiniMax-M2:
- Recommended: 8x NVIDIA A100 80GB
- Alternative: 8x RTX 4090 24GB
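Before deploying, it can help to confirm that every GPU is visible and has enough memory. Below is a minimal pre-flight sketch, assuming PyTorch is installed; the thresholds simply mirror the recommended 8x A100 80GB setup and are not an official requirement check:

# Pre-flight check: confirm GPU count and per-device memory.
# Assumes PyTorch is installed; thresholds mirror the 8x A100 80GB recommendation.
import torch

REQUIRED_GPUS = 8
MIN_MEM_GB = 80

if not torch.cuda.is_available():
    raise SystemExit("CUDA not available - check drivers and the NVIDIA Container Toolkit.")

count = torch.cuda.device_count()
print(f"Visible GPUs: {count}")
for i in range(count):
    props = torch.cuda.get_device_properties(i)
    mem_gb = props.total_memory / 1024**3
    flag = "" if mem_gb >= MIN_MEM_GB else "  <-- below recommended memory"
    print(f"  GPU {i}: {props.name}, {mem_gb:.0f} GB{flag}")
if count < REQUIRED_GPUS:
    print(f"Warning: fewer than {REQUIRED_GPUS} GPUs visible; the model may not fit.")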
Prepare Environment
Install Docker and NVIDIA Container Toolkit for GPU support
# Install Docker
curl -fsSL https://get.docker.com | sh

# Install NVIDIA Container Toolkit
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
Pull Model
SGLang automatically downloads the model from Hugging Face - no manual git clone needed
💡 The model weights are downloaded automatically when you start the server in the next step. The first run may take a while depending on your network speed, since the full weights are large.
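If you prefer to warm the cache ahead of time, you can optionally pre-download the weights with huggingface_hub into the same ~/.cache/huggingface directory that the container mounts in the next step (assumes huggingface_hub is installed; SGLang will otherwise fetch the model on first launch):

# Optional: pre-download MiniMax-M2 weights into the local Hugging Face cache
# so the SGLang container finds them immediately on startup.
from huggingface_hub import snapshot_download

path = snapshot_download(repo_id="MiniMaxAI/MiniMax-M2")
print(f"Model cached at: {path}")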
Start Server
Launch SGLang server with one Docker command
docker run --gpus all \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path MiniMaxAI/MiniMax-M2 \
  --tp 8 \
  --host 0.0.0.0 \
  --port 30000
Note: --tp 8 shards the model across all eight GPUs with tensor parallelism; a 230B-parameter model will not fit on a single device, so adjust this value to match your GPU count.
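Loading the weights can take a while after the container starts. Here is a small readiness probe, assuming only the standard OpenAI-compatible /v1/models route and the requests package:

# Poll the OpenAI-compatible /v1/models route until the server responds.
import time
import requests

URL = "http://localhost:30000/v1/models"

for attempt in range(120):               # up to ~20 minutes
    try:
        r = requests.get(URL, timeout=5)
        if r.status_code == 200:
            print("Server is ready:", r.json())
            break
    except requests.ConnectionError:
        pass                             # server not accepting connections yet
    time.sleep(10)
else:
    raise SystemExit("Server did not become ready in time.")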
Verify Deployment
Test the API with a simple curl command
curl http://localhost:30000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "MiniMaxAI/MiniMax-M2",
"messages": [
{"role": "user", "content": "Write a quick sort function in Python"}
]
}'
Start Using
Use the OpenAI-compatible API with your favorite tools
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:30000/v1",
    api_key="EMPTY"
)

response = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)
print(response.choices[0].message.content)
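Since the endpoint is OpenAI-compatible, streaming also works through the standard client interface. A minimal sketch, continuing the snippet above (the prompt text is just an example):

# Streaming variant: print tokens as they arrive instead of waiting
# for the full completion. Uses the standard OpenAI client stream API.
stream = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2",
    messages=[{"role": "user", "content": "Explain MoE routing in two sentences."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()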
For the complete deployment guide, including SGLang and KTransformers:
View Full Deployment Documentation
Try MiniMax-M2 Live Demo
Experience MiniMax-M2's powerful code generation capabilities in real-time. Write prompts and watch the AI generate high-quality code instantly with intelligent understanding and context awareness.
Key Features of MiniMax-M2
Discover the powerful capabilities that make MiniMax-M2 the ideal choice for modern development workflows.
Mixture of Experts Architecture
Advanced MoE design with 230B total parameters and 10B active parameters, delivering maximum performance with minimal computational overhead for cost-effective AI solutions.
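To make the routing idea concrete, here is a toy top-k expert-routing sketch in plain NumPy (illustrative only; it is not MiniMax-M2's actual router). With 10B of 230B parameters active, each token touches only about 4% of the weights:

# Toy top-k MoE routing (illustrative, not MiniMax-M2's real router).
# Each token is processed by k of E experts, so only a fraction of
# the layer's parameters is active per token.
import numpy as np

rng = np.random.default_rng(0)
E, k, d = 8, 2, 16                        # experts, experts per token, hidden size
W_gate = rng.normal(size=(d, E))          # router weights
experts = rng.normal(size=(E, d, d))      # one tiny "FFN" matrix per expert

def moe_layer(x):
    logits = x @ W_gate                   # routing scores, shape (E,)
    top = np.argsort(logits)[-k:]         # indices of the k best experts
    gates = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over top-k
    # Weighted sum of the chosen experts' outputs; the other E-k experts never run.
    return sum(g * (x @ experts[e]) for g, e in zip(gates, top))

token = rng.normal(size=d)
print(moe_layer(token).shape)             # (16,) - same shape, ~k/E of the compute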
Ultra-Large Context Window
Industry-leading 204K token context window allows processing of entire codebases, complex documentation, and multi-file projects without losing important context.
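A practical consequence is that you can estimate whether an entire project fits in the window before sending it. A rough sketch, assuming the MiniMaxAI/MiniMax-M2 repo ships a loadable tokenizer and that my_project is your (hypothetical) source directory:

# Rough token-budget check against the ~204K-token context window.
from pathlib import Path
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("MiniMaxAI/MiniMax-M2", trust_remote_code=True)
CONTEXT_BUDGET = 204_000

total = 0
for f in Path("my_project").rglob("*.py"):        # hypothetical project directory
    total += len(tok.encode(f.read_text(errors="ignore")))
print(f"{total} tokens ({total / CONTEXT_BUDGET:.0%} of the context window)")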
Superior Coding Capabilities
Optimized for programming tasks including code generation, multi-file editing, compile-run-fix loops, debugging, and test validation with exceptional accuracy.
Intelligent Agent Workflows
Designed for complex agentic tasks with tool integration, seamless workflow automation, and the ability to handle multi-step problem-solving processes.
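Agentic use typically flows through the OpenAI-compatible tools interface. Here is a hedged sketch of one tool-call round trip; the get_weather function and its schema are invented for illustration, and server-side tool parsing depends on your SGLang configuration:

# Minimal tool-call round trip via the OpenAI-compatible API.
# `get_weather` and its schema are hypothetical examples.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)
msg = resp.choices[0].message
if msg.tool_calls:                        # the model may or may not decide to call
    call = msg.tool_calls[0]
    print(call.function.name, call.function.arguments)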
Open Source Freedom
Released under Apache 2.0 license, providing complete freedom for commercial use, modification, and distribution without licensing restrictions or fees.
Exceptional Performance Efficiency
Ranks #1 among open-source models worldwide while running at roughly 8% of the computational cost of similar-sized traditional models.
