Nvidia’s new technique cuts LLM reasoning costs by 8x without losing accuracy

by Josh
February 15, 2026
in Technology And Software



Researchers at Nvidia have developed a technique that can reduce the memory costs of large language model reasoning by up to eight times. Their technique, called dynamic memory sparsification (DMS), compresses the key-value (KV) cache, the temporary memory that LLMs generate and store as they process prompts and reason through problems and documents.

While researchers have proposed various methods to compress this cache before, most struggle to do so without degrading the model's intelligence. Nvidia's approach manages to discard much of the cache while maintaining (and in some cases improving) the model's reasoning capabilities.

Experiments show that DMS enables LLMs to "think" longer and explore more solutions without the usual penalty in speed or memory costs.

The bottleneck of reasoning

LLMs improve their performance on complex tasks by generating "chain-of-thought" tokens, essentially writing out their reasoning steps before arriving at a final answer. Inference-time scaling techniques leverage this by giving the model a larger budget to generate these thinking tokens or to explore multiple potential reasoning paths in parallel.

However, this improved reasoning comes with a significant computational cost. As the model generates more tokens, it builds up a KV cache.

For real-world applications, the KV cache is a major bottleneck. As the reasoning chain grows, the cache grows linearly, consuming vast amounts of memory on GPUs. This forces the hardware to spend more time reading data from memory than actually computing, which slows down generation and increases latency. It also caps the number of users a system can serve simultaneously, as running out of VRAM causes the system to crash or slow to a crawl.
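
To put rough numbers on that growth, the back-of-the-envelope sketch below computes the KV cache footprint of a decoder-only transformer. The model dimensions and serving load are illustrative assumptions for an 8B-class model, not figures from Nvidia's paper.

```python
# Rough KV cache sizing for a decoder-only transformer.
# All dimensions below are illustrative assumptions, not Nvidia's setup.

def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, batch_size, bytes_per_value=2):
    # Factor of 2 covers keys and values; fp16/bf16 stores 2 bytes per element.
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * batch_size * bytes_per_value

# Example: an 8B-class model (36 layers, 8 KV heads of dimension 128)
# serving 64 concurrent requests that each reason across 32k tokens.
total = kv_cache_bytes(num_layers=36, num_kv_heads=8, head_dim=128,
                       seq_len=32_768, batch_size=64)
print(f"{total / 1e9:.0f} GB")  # ~309 GB of KV cache alone, far beyond one GPU's VRAM
```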

Nvidia researchers frame this not just as a technical hurdle, but as a fundamental economic one for the enterprise.

"The question isn't just about hardware quantity; it's about whether your infrastructure is processing 100 reasoning threads or 800 threads for the same cost," Piotr Nawrot, Senior Deep Learning Engineer at Nvidia, told VentureBeat.

Previous attempts to solve this focused on heuristics-based approaches. These methods use rigid rules, such as a "sliding window" that only caches the most recent tokens and deletes the rest. While this reduces memory usage, it often forces the model to discard critical information required for solving the problem, degrading the accuracy of the output.
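
To see how blunt that rule is, the sketch below implements a sliding-window cache in a few lines: anything older than the window is discarded no matter how much the final answer depends on it. The class and names are illustrative only, not taken from Nvidia's code.

```python
from collections import deque

# A sliding-window KV cache keeps only the most recent `window_size` entries
# and silently drops everything older, regardless of importance.
class SlidingWindowKVCache:
    def __init__(self, window_size):
        self.window = deque(maxlen=window_size)  # old entries fall off automatically

    def append(self, key, value):
        self.window.append((key, value))

    def tokens(self):
        return [k for k, _ in self.window]

cache = SlidingWindowKVCache(window_size=4)
for t in range(10):
    cache.append(f"k{t}", f"v{t}")
print(cache.tokens())  # ['k6', 'k7', 'k8', 'k9'] -- tokens 0-5 are gone for good
```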

"Standard eviction methods attempt to select old and unused tokens for eviction using heuristics," the researchers said. "They simplify the problem, hoping that if they approximate the model's internal mechanics, the answer will remain correct."

Other solutions use paging to offload the unused parts of the KV cache to slower memory, but the constant swapping of data introduces latency overhead that makes real-time applications sluggish.

Dynamic memory sparsification

DMS takes a different approach by "retrofitting" existing LLMs to intelligently manage their own memory. Rather than applying a fixed rule for what to delete, DMS trains the model to identify which tokens are essential for future reasoning and which are disposable.

"It doesn't just guess importance; it learns a policy that explicitly preserves the model's final output distribution," Nawrot said.

The process transforms a standard, pre-trained LLM such as Llama 3 or Qwen 3 into a self-compressing model. Crucially, this does not require training the model from scratch, which would be prohibitively expensive. Instead, DMS repurposes existing neurons within the model’s attention layers to output a "keep" or "evict" signal for each token.
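
Conceptually, that signal can be pictured as a small learned gate over each token's hidden state, as in the sketch below. This is an illustration of the idea only: DMS repurposes neurons that already exist in the attention layers rather than adding a new module, and the module name, dimensions, and hard threshold here are assumptions. In practice the hard keep/evict decision would also need a differentiable relaxation during training, a detail omitted here.

```python
import torch

# Illustrative per-token keep/evict gate. A single linear head stands in for
# the repurposed attention neurons described in the article.
class EvictionGate(torch.nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.score = torch.nn.Linear(hidden_dim, 1)

    def forward(self, hidden_states):
        # One logit per token; above the threshold means "keep", below means "evict".
        logits = self.score(hidden_states).squeeze(-1)
        return torch.sigmoid(logits) > 0.5

gate = EvictionGate(hidden_dim=512)
hidden = torch.randn(1, 16, 512)        # (batch, seq_len, hidden_dim)
keep_mask = gate(hidden)                # (batch, seq_len) boolean mask
print(keep_mask.float().mean().item())  # fraction of tokens retained
```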

For teams worried about the complexity of retrofitting, the researchers noted that the process is designed to be lightweight. "To improve the efficiency of this process, the model's weights can be frozen, which makes the process similar to Low-Rank Adaptation (LoRA)," Nawrot said. This means a standard enterprise model like Qwen3-8B "can be retrofitted with DMS within hours on a single DGX H100."

One of the important parts of DMS is a mechanism called "delayed eviction." In standard sparsification, if a token is deemed unimportant, it is deleted immediately. This is risky because the model might need a split second to integrate that token's context into its current state.

DMS mitigates this by flagging a token for eviction but keeping it accessible for a short window of time (e.g., a few hundred steps). This delay allows the model to "extract" any remaining necessary information from the token and merge it into the current context before the token is wiped from the KV cache.

“The ‘delayed eviction’ mechanism is crucial because not all tokens are simply ‘important’ (keep forever) or ‘useless’ (delete immediately). Many fall in between — they carry some information, but not enough to justify occupying an entire slot in memory,” Nawrot said. “This is where the redundancy lies. By keeping these tokens in a local window for a short time before eviction, we allow the model to attend to them and redistribute their information into future tokens.”
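
A minimal sketch of that bookkeeping is shown below, under the assumption that a flagged token simply stays readable for a fixed number of additional steps before being dropped. The class, the `delay` parameter, and the step accounting are illustrative, not Nvidia's implementation.

```python
# Illustrative "delayed eviction": a token flagged for removal remains visible
# for `delay` more steps so later tokens can still pull information from it,
# and only then disappears from the cache.
class DelayedEvictionCache:
    def __init__(self, delay):
        self.delay = delay
        self.entries = []  # each entry: {"token": ..., "flagged_at": step or None}

    def append(self, step, token, evict):
        self.entries.append({"token": token, "flagged_at": step if evict else None})

    def visible(self, step):
        return [e["token"] for e in self.entries
                if e["flagged_at"] is None or step - e["flagged_at"] < self.delay]

    def prune(self, step):
        self.entries = [e for e in self.entries
                        if e["flagged_at"] is None or step - e["flagged_at"] < self.delay]

cache = DelayedEvictionCache(delay=3)
cache.append(step=0, token="t0", evict=True)    # flagged, but not removed yet
cache.append(step=1, token="t1", evict=False)
print(cache.visible(step=2))  # ['t0', 't1'] -- t0 is still readable
cache.prune(step=5)
print(cache.visible(step=5))  # ['t1'] -- t0 has finally been dropped
```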

The researchers found that this retrofitting process is highly efficient. They could equip a pre-trained LLM with DMS in just 1,000 training steps, a tiny fraction of the compute required for the original training. The resulting models use standard kernels and can drop directly into existing high-performance inference stacks without custom hardware or complex software rewriting.

DMS in action

To validate the technique, the researchers applied DMS to several reasoning models, including the Qwen-R1 series (distilled from DeepSeek R1) and Llama 3.2, and tested them on difficult benchmarks like AIME 24 (math), GPQA Diamond (science), and LiveCodeBench (coding).

The results show that DMS effectively moves the Pareto frontier, the optimal trade-off between cost and performance. On the AIME 24 math benchmark, a Qwen-R1 32B model equipped with DMS achieved a score 12.0 points higher than a standard model when constrained to the same memory bandwidth budget. By compressing the cache, the model could afford to "think" much deeper and wider than the standard model could for the same memory and compute budget.

Perhaps most surprisingly, DMS defied the common wisdom that compression hurts long-context understanding. In "needle-in-a-haystack" tests, which measure a model's ability to find a specific piece of information buried in a large document, DMS variants actually outperformed the standard models. By actively managing its memory rather than passively accumulating noise, the model maintained a cleaner, more useful context.

For enterprise infrastructure, the efficiency gains translate directly to throughput and hardware savings. Because the memory cache is significantly smaller, the GPU spends less time fetching data, reducing the wait time for users. In tests with the Qwen3-8B model, DMS matched the accuracy of the vanilla model while delivering up to 5x higher throughput. This means a single server can handle five times as many customer queries per second without a drop in quality.

The future of memory

Nvidia has released DMS as part of its Model Optimizer framework. Regarding how enterprises can get started with DMS, Nawrot emphasized that the barrier to entry is low. "The 'minimum viable infrastructure' is standard Hugging Face pipelines — no custom CUDA kernels are required," Nawrot said, noting that the code is fully compatible with standard FlashAttention. 

Looking ahead, the team views DMS as part of a larger shift where memory management becomes a distinct, intelligent layer of the AI stack. Nawrot also confirmed that DMS is "fully compatible" with newer architectures like the Multi-Head Latent Attention (MLA) used in DeepSeek’s models, suggesting that combining these approaches could yield even greater efficiency gains.

As enterprises move from simple chatbots to complex agentic systems that require extended reasoning, the cost of inference is becoming a primary concern. Techniques like DMS provide a path to scale these capabilities sustainably.

"We’ve barely scratched the surface of what is possible," Nawrot said, "and we expect inference-time scaling to further evolve."


