
NVIDIA Researchers Propose Reinforcement Learning Pretraining (RLP): Reinforcement as a Pretraining Objective for Building Reasoning During Pretraining

By Josh
October 14, 2025


NVIDIA AI has introduced Reinforcement Learning Pretraining (RLP), a training objective that injects reinforcement learning into the pretraining stage rather than deferring it to post-training. The core idea is simple and testable: treat a short chain-of-thought (CoT) as an action sampled before next-token prediction and reward it by the information gain it provides on the observed next token, measured against a no-think EMA baseline. This produces a verifier-free, dense, position-wise reward that can be applied to ordinary text streams at pretraining scale.

Paper: https://github.com/NVlabs/RLP/blob/main/pdf/RLP_Reinforcement_as_a_Pretraining_Objective.pdf

Mechanism: Information-Gain Rewards with an EMA Counterfactual

RLP uses a single network (shared parameters) to (1) sample a CoT policy π_θ(c_t ∣ x_{<t}) and then (2) score the next token p_θ(x_t ∣ x_{<t}, c_t). A slowly updated EMA teacher p_ϕ(x_t ∣ x_{<t}) provides a no-think counterfactual. The per-token reward is the log-likelihood ratio

r(c_t) = log p_θ(x_t ∣ x_{<t}, c_t) − log p_ϕ(x_t ∣ x_{<t}),

computed under teacher forcing. Training updates only the thought tokens using a clipped surrogate with per-token importance ratios and group-relative advantages (multiple sampled thoughts per context reduce variance). The objective maximizes expected information gain; theoretical results connect the expected reward to reductions in cross-entropy and bound it via marginalization over thoughts.
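For concreteness, here is a minimal PyTorch-style sketch of the reward computation under these definitions. It is an illustration, not the paper's code: the function names, the Hugging Face-style `.logits` interface, and the EMA decay value are assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):  # decay value is an assumption
    # The no-think teacher p_phi slowly tracks the student p_theta.
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(decay).add_(p_s, alpha=1.0 - decay)

def information_gain_reward(student, teacher, context, thought, next_token):
    """r(c_t) = log p_theta(x_t | x_{<t}, c_t) - log p_phi(x_t | x_{<t}).

    context: (B, T) token ids of x_{<t}; thought: (B, C) sampled CoT tokens c_t;
    next_token: (B,) the observed token x_t, scored under teacher forcing.
    """
    # CoT-conditioned log-prob of the observed next token (student).
    logits = student(torch.cat([context, thought], dim=-1)).logits[:, -1]
    logp_think = F.log_softmax(logits, dim=-1).gather(
        -1, next_token.unsqueeze(-1)).squeeze(-1)

    # No-think counterfactual from the frozen EMA teacher.
    with torch.no_grad():
        base_logits = teacher(context).logits[:, -1]
        logp_base = F.log_softmax(base_logits, dim=-1).gather(
            -1, next_token.unsqueeze(-1)).squeeze(-1)

    # Dense, verifier-free, position-wise reward: positive wherever
    # thinking improved prediction of the observed token.
    return logp_think - logp_base
```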

Why this matters technically: unlike prior “reinforcement pretraining” variants that rely on sparse, binary correctness signals or proxy filters, RLP’s dense, verifier-free reward attaches position-wise credit wherever thinking improves prediction, enabling updates at every token position in general web-scale corpora without external verifiers or curated answer keys.
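The update rule itself can be sketched the same way. The snippet below shows a GRPO-style group-relative advantage and a PPO-style clipped surrogate restricted to thought tokens, matching the description above; the clipping range and the exact group normalization are assumptions, not values from the paper.

```python
import torch

def group_relative_advantages(rewards):
    # rewards: (B, G) information-gain rewards for G sampled thoughts per
    # context. Normalizing within the group reduces variance (the GRPO-style
    # mean/std normalization here is an assumption about the exact estimator).
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True).clamp_min(1e-6)
    return (rewards - mean) / std

def clipped_thought_loss(logp_new, logp_old, advantage, thought_mask, eps=0.2):
    # logp_new / logp_old: (B, T) per-token log-probs of the sampled thought
    # under the current and behavior policies. advantage: (B, 1), broadcast
    # over thought tokens. thought_mask: (B, T), 1 only on thought tokens,
    # so the observed text tokens receive no gradient.
    ratio = (logp_new - logp_old).exp()  # per-token importance ratio
    unclipped = ratio * advantage
    clipped = ratio.clamp(1.0 - eps, 1.0 + eps) * advantage
    per_token = torch.min(unclipped, clipped)
    return -(per_token * thought_mask).sum() / thought_mask.sum().clamp_min(1.0)
```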

Understanding the Results

Qwen3-1.7B-Base: Pretraining with RLP improved the overall math+science average by ~19% vs the base model and ~17% vs compute-matched continuous pretraining (CPT). After identical post-training (SFT + RLVR) across all variants, the RLP-initialized model retained a ~7–8% relative advantage, with the largest gains on reasoning-heavy benchmarks (AIME25, MMLU-Pro).

Nemotron-Nano-12B v2: Applying RLP to a 12B hybrid Mamba-Transformer checkpoint raised the overall average from 42.81% to 61.32% and scientific reasoning by an absolute 23 points, even though the RLP run consumed ~200B fewer tokens (19.8T vs 20T total; RLP itself was applied for 250M tokens). This highlights both data efficiency and architecture-agnostic behavior.


RPT comparison: Under matched data and compute in Omni-MATH-style settings, RLP outperformed RPT (a prior reinforcement-pretraining method) on math, science, and overall averages, a gap attributed to RLP's continuous information-gain reward versus RPT's sparse binary signal and entropy-filtered tokens.


Positioning vs. Post-Training RL and Data Curation

Reinforcement Learning Pretraining (RLP) is orthogonal to post-training pipelines (SFT, RLVR) and shows compounding improvements after standard alignment. Because the reward is computed from model log-evidence rather than external verifiers, it scales to domain-agnostic corpora (web crawl, academic text, textbooks) and SFT-style reasoning corpora, avoiding the brittleness of narrow curated datasets. In compute-matched comparisons (including CPT with 35× more tokens to match FLOPs), RLP still led on overall averages, suggesting the improvements derive from objective design, not budget.

Key Takeaways

  • RLP makes reasoning a pretraining objective: sample a chain-of-thought before next-token prediction and reward it by information gain over a no-think EMA baseline.
  • Verifier-free, dense, position-wise signal: works on ordinary text streams without external graders, enabling scalable pretraining updates on every token.
  • Qwen3-1.7B results: +19% vs Base and +17% vs compute-matched CPT during pretraining; with identical SFT+RLVR, RLP retains ~7–8% gains (largest on AIME25, MMLU-Pro).
  • Nemotron-Nano-12B v2: overall average rises 42.81% → 61.32% (+18.51 points, ≈43% relative) and +23 points on scientific reasoning, using ~200B fewer NTP tokens.
  • Training details that matter: apply gradient updates only to the thought tokens with a clipped surrogate and group-relative advantages; more rollouts (≈16) and longer thoughts (≈2048 tokens) help; token-level KL anchoring offers no benefit (these settings are collected in the sketch below).
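For orientation, the knobs mentioned in these takeaways can be gathered into a small config sketch. Every field is hypothetical packaging of the numbers quoted above; the names and the EMA/clipping defaults are not from the paper's code.

```python
from dataclasses import dataclass

@dataclass
class RLPConfig:
    num_rollouts: int = 16        # sampled thoughts per context; ~16 reported to help
    max_thought_len: int = 2048   # longer thoughts reported to help
    ema_decay: float = 0.999      # assumption: the paper uses an EMA teacher, not this value
    clip_eps: float = 0.2         # assumption: standard PPO-style clipping range
    use_kl_anchor: bool = False   # token-level KL anchoring reportedly offers no benefit
```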

Conclusion

RLP reframes pretraining to directly reward “think-before-predict” behavior using a verifier-free, information-gain signal, yielding durable reasoning gains that persist through identical SFT+RLVR and extend across architectures (Qwen3-1.7B, Nemotron-Nano-12B v2). The method’s objective—contrasting CoT-conditioned likelihood against a no-think EMA baseline—integrates cleanly into large-scale pipelines without curated verifiers, making it a practical upgrade to next-token pretraining rather than a post-training add-on.

