
MiniMax Just Open Sourced MiniMax M2.7: A Self-Evolving Agent Model that Scores 56.22% on SWE-Pro and 57.0% on Terminal Bench 2

By Josh
April 12, 2026
in AI, Analytics and Automation

MiniMax has officially open-sourced MiniMax M2.7, making the model weights publicly available on Hugging Face. Originally announced on March 18, 2026, MiniMax M2.7 is MiniMax’s most capable open-source model to date, and the first of its models to actively participate in its own development cycle, a meaningful shift in how large language models are built and iterated.

What is MiniMax M2.7?

MiniMax M2.7 is part of MiniMax’s M2-series of Mixture-of-Experts (MoE) models. MoE is an architectural design in which only a subset of the total parameters is activated during any inference pass, which makes the model significantly faster and cheaper to serve than a dense model of similar output quality.
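
The routing idea behind MoE can be sketched in a few lines. Everything below is a toy stand-in (the expert functions, the router scores, and top_k=2 are illustrative assumptions, not MiniMax's actual architecture): a router scores all experts, but only the top-k are ever evaluated.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, experts, routers, top_k=2):
    """Route input x to the top_k highest-scoring experts only.

    `experts` and `routers` are plain toy functions standing in for
    the expert networks and the learned gating network.
    """
    gates = softmax([r(x) for r in routers])
    # Keep only the top_k experts; the others are never called,
    # which is where the inference savings come from.
    top = sorted(range(len(experts)), key=lambda i: gates[i], reverse=True)[:top_k]
    norm = sum(gates[i] for i in top)
    return sum(gates[i] / norm * experts[i](x) for i in top)

# Toy setup: 4 "experts"; the router prefers experts whose id is near x.
experts = [lambda x, k=k: x * (k + 1) for k in range(4)]
routers = [lambda x, k=k: -abs(x - k) for k in range(4)]

y = moe_forward(2.0, experts, routers, top_k=2)
```

With 4 experts and top_k=2, half the expert computation is skipped on every call, which is the whole point of sparse activation.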

MiniMax M2.7 is built around three core capability areas: professional software engineering, professional office work, and what MiniMax calls Agent Teams, i.e., native multi-agent collaboration. The model can build complex agent harnesses and complete highly elaborate productivity tasks, drawing on capabilities such as Agent Teams, complex Skills, and dynamic tool search.

SOTA Benchmark Performance: SWE-Pro and Terminal Bench 2

On SWE-Pro, which covers multiple programming languages, MiniMax M2.7 achieved a 56.22% accuracy rate, matching GPT-5.3-Codex. SWE-Pro tasks span log analysis, bug troubleshooting, code security review, and machine learning workflow debugging — much closer to the messy reality of production systems than standard algorithmic coding tests.

On Terminal Bench 2 (57.0%) and NL2Repo (39.8%), both of which demand a high degree of system-level comprehension, MiniMax M2.7 performs solidly. The model not only excels at code generation but also shows a deep understanding of the operational logic and collaborative dynamics of software systems.

On the repo-level code generation benchmark VIBE-Pro, MiniMax M2.7 scored 55.6%, nearly on par with Opus 4.6, meaning that requirements involving Web, Android, iOS, or simulation tasks can be handed directly to MiniMax M2.7 to complete. It also shows a strong advantage on benchmarks closer to real-world engineering scenarios: SWE Multilingual (76.5) and Multi SWE Bench (52.7).

Production Debugging: Under Three Minutes

When faced with production alerts, MiniMax M2.7 can correlate monitoring metrics with deployment timelines to perform causal reasoning, run statistical analysis on trace samples to propose precise hypotheses, proactively connect to databases to verify root causes, pinpoint missing index migration files in the code repository, and apply non-blocking index creation to stop the bleeding before submitting a merge request. The MiniMax team reports that on multiple occasions this workflow reduced recovery time for live production incidents to under three minutes. From observability analysis and database expertise to SRE-level decision-making, this positions MiniMax M2.7 as something beyond a code-generation model.
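
The first step of that chain, correlating an alert with the deployment timeline, can be sketched with stdlib datetimes. The service names and the 30-minute window below are invented for illustration; as an aside, on PostgreSQL the "non-blocking index creation" mentioned above corresponds to `CREATE INDEX CONCURRENTLY`.

```python
from datetime import datetime, timedelta

def suspect_deploys(alert_time, deploys, window_minutes=30):
    """Return deploys that landed within window_minutes before the alert,
    most recent first -- a crude stand-in for the metric/timeline
    correlation described in the article.
    """
    window = timedelta(minutes=window_minutes)
    hits = [(t, name) for t, name in deploys
            if timedelta(0) <= alert_time - t <= window]
    return [name for t, name in sorted(hits, reverse=True)]

deploys = [
    (datetime(2026, 3, 18, 9, 50), "api v142"),
    (datetime(2026, 3, 18, 10, 12), "orders-service v87"),  # dropped an index
    (datetime(2026, 3, 18, 8, 0),  "frontend v310"),
]
alert = datetime(2026, 3, 18, 10, 20)
print(suspect_deploys(alert, deploys))  # → ['orders-service v87', 'api v142']
```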

The Self-Evolution Architecture

To test the boundaries of autonomous improvement, MiniMax M2.7 was tasked with optimizing a model’s programming performance on an internal scaffold. It ran entirely autonomously, executing an iterative loop of ‘analyze failure trajectories → plan changes → modify scaffold code → run evaluations → compare results → decide to keep or revert changes’ for over 100 rounds. During this process, MiniMax M2.7 discovered effective optimizations on its own: systematically searching for the optimal combination of sampling parameters such as temperature, frequency penalty, and presence penalty; designing more specific workflow guidelines (such as automatically searching for the same bug pattern in other files after a fix); and adding loop detection to the scaffold’s agent loop. This achieved a 30% performance improvement on internal evaluation sets.
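
The keep-or-revert loop described above can be sketched as a simple hill climb. The `evaluate` and `mutate` functions here are toy stand-ins for the real scaffold edits and benchmark runs, and the "optimal" sampling parameters are invented for the example:

```python
import random

def evolve(evaluate, mutate, config, rounds=100, seed=0):
    """Iterate: mutate -> evaluate -> compare -> keep or revert."""
    rng = random.Random(seed)
    best, best_score = config, evaluate(config)
    for _ in range(rounds):
        candidate = mutate(best, rng)
        score = evaluate(candidate)
        if score > best_score:            # keep the change
            best, best_score = candidate, score
        # else: revert (i.e., discard the candidate)
    return best, best_score

# Toy objective: distance to an assumed optimum over sampling params.
OPT = {"temperature": 0.7, "frequency_penalty": 0.2}

def evaluate(cfg):
    return -sum((cfg[k] - OPT[k]) ** 2 for k in OPT)

def mutate(cfg, rng):
    k = rng.choice(list(cfg))
    return {**cfg, k: cfg[k] + rng.uniform(-0.1, 0.1)}

cfg, score = evolve(evaluate, mutate,
                    {"temperature": 1.0, "frequency_penalty": 0.0})
```

Because a change is kept only when the evaluation improves, the score is monotone over rounds, which is what makes 100+ unattended iterations safe to run.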

Within MiniMax’s own reinforcement learning team workflows, M2.7 is now capable of handling 30%–50% of the workflow end-to-end, with human researchers only interacting for critical decisions and discussions.

MLE Bench Lite: Testing Autonomous ML Experimentation

The MiniMax team also tested MiniMax M2.7 on MLE Bench Lite, OpenAI’s open-sourced suite of 22 machine learning competitions runnable on a single A30 GPU, covering virtually all stages of the ML workflow.

For this evaluation, the MiniMax team designed a simple three-component harness: short-term memory, self-feedback, and self-optimization. After each iteration round, the agent generates a short-term-memory markdown file, performs self-criticism on the current results, and sets optimization directions for the next round. Three trials were run, each with a 24-hour window for iterative evolution.
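
A minimal sketch of such a harness, with toy lambdas standing in for the real model calls (the scoring function and critique text are invented for illustration):

```python
def run_harness(attempt, critique, rounds=3):
    """Short-term memory / self-feedback / self-optimization loop:
    each round attempts the task, critiques the result, and appends
    a markdown note that informs the next round.
    """
    memory = []                       # short-term memory across rounds
    best = None
    for r in range(rounds):
        result = attempt(memory)      # try, informed by past notes
        note = critique(result)       # self-feedback on this round
        memory.append(f"## Round {r}\n- score: {result:.3f}\n- next: {note}")
        if best is None or result > best:
            best = result
    return best, "\n\n".join(memory)

# Toy stand-ins: each round's score improves as memory accumulates.
best, log = run_harness(
    attempt=lambda mem: 0.5 + 0.1 * len(mem),
    critique=lambda score: "raise learning rate" if score < 0.7 else "keep settings",
)
```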

The best run achieved 9 gold medals, 5 silver medals, and 1 bronze medal. The average medal rate across the three runs was 66.6%, a result second only to Opus-4.6 (75.7%) and GPT-5.4 (71.2%), tying with Gemini-3.1 (66.6%).

Professional Office Work and Finance

Beyond software engineering, MiniMax M2.7 targets professional office tasks. In the GDPval-AA evaluation, which measures domain expertise and task delivery capability across 45 models, MiniMax M2.7 achieved an ELO score of 1495, the highest among open-source models, behind only Opus 4.6, Sonnet 4.6, and GPT-5.4, and ahead of GPT-5.3.

On Toolathon, MiniMax M2.7 achieved an accuracy of 46.3%, reaching the global top tier. In MM Claw testing — an evaluation MiniMax built based on real-world usage patterns from the OpenClaw personal agent platform — MiniMax M2.7 maintained a 97% skill compliance rate across 40 complex skills (each exceeding 2,000 tokens) and achieved an overall accuracy of 62.7%, approaching Sonnet 4.6.

In finance, MiniMax M2.7 can autonomously read a company’s annual reports and earnings call transcripts, cross-reference multiple research reports, independently design assumptions and build a revenue forecast model, and produce a PPT and Word research report based on templates — understanding, making judgments, and producing output like a junior analyst.

Key Takeaways

  • MiniMax M2.7 is now officially open source, with weights available on Hugging Face, making a frontier-grade agentic model freely accessible for developers to deploy and build on.
  • MiniMax M2.7 achieves SOTA performance on real-world software engineering benchmarks, scoring 56.22% on SWE-Pro (matching GPT-5.3-Codex) and 57.0% on Terminal Bench 2 — tests that measure production-level reasoning, not just code generation.
  • MiniMax M2.7 is the first model to actively participate in its own development, running over 100 autonomous rounds of scaffold optimization and achieving a 30% performance improvement — an early, concrete example of AI-assisted AI development in practice.
  • The model is built for real agentic deployments, maintaining 97% skill adherence across 40 complex skills (each exceeding 2,000 tokens), supporting native Agent Teams with stable role boundaries, and handling 30–50% of MiniMax’s internal RL team workflows autonomously.
  • MiniMax M2.7 is the highest-ranked open-source model on GDPval-AA with an ELO score of 1495 across 45 models, demonstrating strong professional work capabilities spanning office document editing, financial analysis, and multi-round high-fidelity task delivery.

Check out the technical details and the model weights on Hugging Face.

