Mixture-of-recursions delivers 2x faster inference—Here’s how to implement it

By Josh
July 23, 2025
in Technology And Software

Researchers at KAIST AI and Mila have introduced a new Transformer architecture that makes large language models (LLMs) more memory- and compute-efficient. The architecture, called Mixture-of-Recursions (MoR), significantly improves model accuracy and delivers higher throughput compared with vanilla transformers, even when constrained by the same parameter count and compute budget.

The scaling challenges of LLMs

The impressive capabilities of today’s LLMs are directly tied to their ever-increasing size. But as these models scale, their memory footprints and computational requirements often become untenable, making both training and deployment challenging for organizations outside of hyperscale data centers. This has led to a search for more efficient designs.

Efforts to improve LLM efficiency have focused mainly on two methods: parameter sharing and adaptive computation. Parameter sharing techniques reduce the total number of unique parameters by reusing weights across different parts of the model, shrinking its memory footprint. For example, “layer tying” reuses a model’s weights across several layers. Adaptive computation methods adjust models so that they use only as much inference compute as a given input needs. For example, “early exiting” dynamically allocates compute by allowing the model to stop processing “simpler” tokens early in the network.
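
To make these two ideas concrete, here is a minimal PyTorch sketch, not taken from the paper: it combines layer tying (one shared block reused as every layer) with a simple early-exit rule. The module choices, the shared exit head, and the confidence threshold are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class TiedEarlyExitLM(nn.Module):
    """Toy model combining the two ideas: layer tying (one shared block
    reused as every layer) and early exiting (stop once the next-token
    prediction is confident). All sizes and thresholds are illustrative."""

    def __init__(self, d_model=256, n_heads=4, max_depth=12,
                 vocab_size=32000, exit_threshold=0.9):
        super().__init__()
        # Parameter sharing: one block stands in for max_depth layers.
        self.shared_block = nn.TransformerEncoderLayer(
            d_model, n_heads, batch_first=True)
        self.lm_head = nn.Linear(d_model, vocab_size)
        self.max_depth = max_depth
        self.exit_threshold = exit_threshold

    def forward(self, x: torch.Tensor):  # x: (batch, seq, d_model)
        for depth in range(1, self.max_depth + 1):
            x = self.shared_block(x)
            # Adaptive computation: check confidence at the last position.
            probs = self.lm_head(x[:, -1]).softmax(dim=-1)
            if probs.max().item() >= self.exit_threshold:
                break  # "simple" inputs exit early and save compute
        return probs, depth
```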

However, creating an architecture that effectively unifies both parameter efficiency and adaptive computation remains elusive.


How Mixture-of-Recursions works

Mixture-of-Recursions is a framework that combines parameter sharing with adaptive computation to tackle the high computational demands of LLMs. It builds on the concept of Recursive Transformers, models that apply a set of shared layers multiple times. Instead of a deep stack of unique layers, a Recursive Transformer partitions the model into a few “recursion blocks,” each with a shared pool of parameters. This design allows for more computation without increasing the model’s size.
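
In code, that recursive backbone might look like the following toy sketch (the block and recursion counts are assumptions, not the paper’s configuration):

```python
import torch.nn as nn

class RecursiveTransformer(nn.Module):
    """The backbone MoR builds on: a few shared 'recursion blocks',
    each reapplied several times, instead of a deep stack of unique
    layers. Block and recursion counts here are assumptions."""

    def __init__(self, d_model=256, n_heads=4, n_blocks=3, n_recursions=4):
        super().__init__()
        # 3 unique blocks x 4 recursions = effective depth 12, with
        # only 3 layers' worth of unique parameters.
        self.recursion_blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_blocks))
        self.n_recursions = n_recursions

    def forward(self, x):
        for block in self.recursion_blocks:
            for _ in range(self.n_recursions):
                x = block(x)  # same weights, repeated computation
        return x
```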

MoR enhances this recursive approach with two key components. The first is a lightweight router that intelligently assigns a specific recursion depth to each token. This concept is similar to the routing mechanism in Mixture-of-Experts (MoE) models, where a router directs tokens to specialized expert networks. In MoR, however, the “experts” are the different recursion depths, allowing the model to choose how much computation to apply to each token dynamically. It decides how many times a shared block of layers should be applied based on a token’s complexity, or its required “depth of thinking.” This directs computation only where it is most needed, avoiding wasted cycles on easy-to-process parts of the input.
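
A minimal sketch of this routing idea follows, assuming a simple argmax router and pass-through updates for inactive tokens; the paper’s actual routing and training scheme is more involved.

```python
import torch
import torch.nn as nn

class MoRLayer(nn.Module):
    """Hypothetical sketch of MoR-style routing: a lightweight linear
    router assigns each token a recursion depth, and only still-active
    tokens are updated at each recursion step."""

    def __init__(self, d_model=256, n_heads=4, max_recursions=3):
        super().__init__()
        self.shared_block = nn.TransformerEncoderLayer(
            d_model, n_heads, batch_first=True)
        # The router scores each token once; its argmax picks a depth.
        self.router = nn.Linear(d_model, max_recursions)
        self.max_recursions = max_recursions

    def forward(self, x):                        # x: (batch, seq, d_model)
        depths = self.router(x).argmax(-1) + 1   # per-token depth in 1..R
        for step in range(1, self.max_recursions + 1):
            active = depths >= step              # tokens still "thinking"
            if not active.any():
                break
            # For clarity the block runs on all tokens; a real kernel
            # would gather only the active ones to actually save compute.
            updated = self.shared_block(x)
            x = torch.where(active.unsqueeze(-1), updated, x)
        return x
```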

Figure: Mixture-of-Recursions (source: arXiv)

The second component is a more efficient key-value (KV) caching strategy. KV caching is a standard technique that stores information from previous tokens to speed up generation, but it becomes a memory bottleneck in recursive models. MoR introduces a “recursion-wise” KV caching mechanism that selectively stores and retrieves key-value pairs only for the tokens that are still active at a given recursion step. This targeted caching reduces memory traffic and improves throughput without needing complex, post-training modifications.
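
As a rough illustration, a recursion-wise cache could be organized as a per-step store, so tokens that have already exited never occupy memory at deeper steps. This is a hypothetical simplification, not the paper’s implementation:

```python
import torch

class RecursionWiseKVCache:
    """Toy recursion-wise KV cache: keys/values are stored per recursion
    step, and only for the tokens still active at that step."""

    def __init__(self, max_recursions: int):
        self.store = {step: [] for step in range(1, max_recursions + 1)}

    def append(self, step: int, positions, k, v):
        # positions: indices of the tokens routed through this step,
        # so inactive tokens never consume cache memory here.
        self.store[step].append((positions, k, v))

    def get(self, step: int):
        # Attention at a given recursion step only reads entries that
        # were written at that same step.
        entries = self.store[step]
        if not entries:
            return None, None, None
        pos = torch.cat([p for p, _, _ in entries])
        keys = torch.cat([k for _, k, _ in entries])
        vals = torch.cat([v for _, _, v in entries])
        return pos, keys, vals
```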

As the researchers state in their paper, “In essence, MoR enables models to efficiently adjust their thinking depth on a per-token basis, unifying parameter efficiency with adaptive computation.”

Figure: Different token routing and KV caching mechanisms for recursive transformers (source: arXiv)

MoR in action

To test their framework, the researchers trained MoR models ranging from 135 million to 1.7 billion parameters and compared them against vanilla and standard recursive baseline models on validation loss and few-shot accuracy benchmarks.

The results demonstrate significant gains. When given an equal training compute budget, an MoR model achieved higher average few-shot accuracy (43.1% vs. 42.3%) than a vanilla baseline despite using nearly 50% fewer parameters. When trained on the same amount of data, the MoR model reduced training time by 19% and cut peak memory usage by 25% compared to the vanilla model.

The MoR architecture also proves to be scalable. While it slightly underperformed the vanilla model at the smallest 135M parameter scale, the gap closed rapidly as the model size increased. For models with over 360M parameters, MoR matched or exceeded the performance of standard Transformers, especially on lower compute budgets. Furthermore, MoR’s design dramatically boosts inference throughput. One MoR configuration achieved a 2.06x speedup over the vanilla baseline. For a company operating at scale, this could translate into significant operational cost savings.

Sangmin Bae, co-author of the paper and a PhD student at KAIST, broke down the practical impact in an email to VentureBeat. “While it’s difficult to provide exact numbers, at a high level, reducing model parameter size and KV cache footprint means we can perform inference on many more samples simultaneously,” he said. “This translates to an increased number of tokens processed at once, and handling longer context windows becomes feasible.”

A practical path for enterprise adoption

While the paper’s results come from models trained from scratch, a key question for enterprises is how to adopt MoR without massive upfront investment. According to Bae, “uptraining” existing open-source models is a “definitely more cost-effective approach.” He noted that while training a new model is straightforward, an “uptraining approach could be more suitable and efficient until the scalability of MoR itself is fully validated.”

Adopting MoR also introduces new architectural “knobs” for developers, allowing them to fine-tune the balance between performance and efficiency. This trade-off will depend entirely on the application’s needs.

“For simpler tasks or scenarios, it may be beneficial to use models with more recursion steps, offering greater flexibility, and vice versa,” Bae explained. He stressed that the “optimal settings will highly depend on the specific deployment setting,” encouraging teams to explore the trade-offs based on the paper’s findings.

Looking ahead, the MoR framework is “modality-agnostic,” meaning its adaptive computation principles are not limited to text. This opens the door to significant efficiency gains in processing video, audio, and other complex data types.

“We’re very excited about its potential extension to multi-modality scenarios where efficiency gains are crucial,” Bae said.

By dynamically adjusting the processing depth for each segment of a video or audio stream, MoR could unlock even greater cost savings and performance improvements, bringing the power of large-scale AI to a wider range of enterprise applications. As the paper concludes, MoR offers “an effective path towards achieving large-model capabilities with significantly reduced computational and memory overhead.”
