
Thinking Machines challenges OpenAI's AI scaling strategy: 'First superintelligence will be a superhuman learner'

By Josh
October 25, 2025
Technology And Software



While the world's leading artificial intelligence companies race to build ever-larger models, betting billions that scale alone will unlock artificial general intelligence, a researcher at one of the industry's most secretive and valuable startups delivered a pointed challenge to that orthodoxy this week: The path forward isn't about training bigger — it's about learning better.

"I believe that the first superintelligence will be a superhuman learner," Rafael Rafailov, a reinforcement learning researcher at Thinking Machines Lab, told an audience at TED AI San Francisco on Tuesday. "It will be able to very efficiently figure out and adapt, propose its own theories, propose experiments, use the environment to verify that, get information, and iterate that process."

This breaks sharply with the approach pursued by OpenAI, Anthropic, Google DeepMind, and other leading laboratories, which have bet billions on scaling up model size, data, and compute to achieve increasingly sophisticated reasoning capabilities. Rafailov argues these companies have the strategy backwards: what's missing from today's most advanced AI systems isn't more scale — it's the ability to actually learn from experience.

"Learning is something an intelligent being does," Rafailov said, citing a quote he described as recently compelling. "Training is something that's being done to it."

The distinction cuts to the core of how AI systems improve — and whether the industry's current trajectory can deliver on its most ambitious promises. Rafailov's comments offer a rare window into the thinking at Thinking Machines Lab, the startup co-founded in February by former OpenAI chief technology officer Mira Murati that raised a record-breaking $2 billion in seed funding at a $12 billion valuation.

Why today's AI coding assistants forget everything they learned yesterday

To illustrate the problem with current AI systems, Rafailov offered a scenario familiar to anyone who has worked with today's most advanced coding assistants.

"If you use a coding agent, ask it to do something really difficult — to implement a feature, go read your code, try to understand your code, reason about your code, implement something, iterate — it might be successful," he explained. "And then come back the next day and ask it to implement the next feature, and it will do the same thing."

The issue, he argued, is that these systems don't internalize what they learn. "In a sense, for the models we have today, every day is their first day of the job," Rafailov said. "But an intelligent being should be able to internalize information. It should be able to adapt. It should be able to modify its behavior so every day it becomes better, every day it knows more, every day it works faster — the way a human you hire gets better at the job."

The duct tape problem: How current training methods teach AI to take shortcuts instead of solving problems

Rafailov pointed to a specific behavior in coding agents that reveals the deeper problem: their tendency to wrap uncertain code in try/except blocks — a programming construct that catches errors and allows a program to continue running.

"If you use coding agents, you might have observed a very annoying tendency of them to use try/except pass," he said. "And in general, that is basically just like duct tape to save the entire program from a single error."

Why do agents do this? "They do this because they understand that part of the code might not be right," Rafailov explained. "They understand there might be something wrong, that it might be risky. But under the limited constraint—they have a limited amount of time solving the problem, limited amount of interaction—they must only focus on their objective, which is implement this feature and solve this bug."

The result: "They're kicking the can down the road."

This behavior stems from training systems that optimize for immediate task completion. "The only thing that matters to our current generation is solving the task," he said. "And anything that's general, anything that's not related to just that one objective, is a waste of computation."

Why throwing more compute at AI won't create superintelligence, according to Thinking Machines researcher

Rafailov's most direct challenge to the industry came in his assertion that continued scaling won't be sufficient to reach AGI.

"I don't believe we're hitting any sort of saturation points," he clarified. "I think we're just at the beginning of the next paradigm—the scale of reinforcement learning, in which we move from teaching our models how to think, how to explore thinking space, into endowing them with the capability of general agents."

In other words, current approaches will produce increasingly capable systems that can interact with the world, browse the web, write code. "I believe a year or two from now, we'll look at our coding agents today, research agents or browsing agents, the way we look at summarization models or translation models from several years ago," he said.

But general agency, he argued, is not the same as general intelligence. "The much more interesting question is: Is that going to be AGI? And are we done — do we just need one more round of scaling, one more round of environments, one more round of RL, one more round of compute, and we're kind of done?"

His answer was unequivocal: "I don't believe this is the case. I believe that under our current paradigms, under any scale, we are not enough to deal with artificial general intelligence and artificial superintelligence. And I believe that under our current paradigms, our current models will lack one core capability, and that is learning."

Teaching AI like students, not calculators: The textbook approach to machine learning

To explain the alternative approach, Rafailov turned to an analogy from mathematics education.

"Think about how we train our current generation of reasoning models," he said. "We take a particular math problem, make it very hard, and try to solve it, rewarding the model for solving it. And that's it. Once that experience is done, the model submits a solution. Anything it discovers—any abstractions it learned, any theorems—we discard, and then we ask it to solve a new problem, and it has to come up with the same abstractions all over again."

That approach misunderstands how knowledge accumulates. "This is not how science or mathematics works," he said. "We build abstractions not necessarily because they solve our current problems, but because they're important. For example, we developed the field of topology to extend Euclidean geometry — not to solve a particular problem that Euclidean geometry couldn't handle, but because mathematicians and physicists understood these concepts were fundamentally important."

The solution: "Instead of giving our models a single problem, we might give them a textbook. Imagine a very advanced graduate-level textbook, and we ask our models to work through the first chapter, then the first exercise, the second exercise, the third, the fourth, then move to the second chapter, and so on—the way a real student might teach themselves a topic."

The objective would fundamentally change: "Instead of rewarding their success — how many problems they solved — we need to reward their progress, their ability to learn, and their ability to improve."
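As a toy illustration of that shift (invented names and scoring, not Thinking Machines' training code), the objective moves from counting solved problems in isolation to measuring how much the learner improves as it works through a curriculum:

```python
def current_objective(model_skill: float, problems: list[str]) -> float:
    """Roughly today's setup: reward each problem in isolation; whatever the
    model figured out along the way is discarded before the next problem."""
    reward = 0.0
    for _problem in problems:
        solved = model_skill > 0.5       # fresh attempt every time, no carry-over
        reward += 1.0 if solved else 0.0
    return reward

def textbook_objective(model_skill: float, chapters: list[list[str]]) -> float:
    """Sketch of a progress-based objective: work through chapters in order and
    reward improvement, not the raw number of exercises solved."""
    skill_at_start = model_skill
    for chapter in chapters:
        for _exercise in chapter:
            model_skill += 0.05          # stand-in for internalizing something
    return model_skill - skill_at_start  # reward = how much the learner improved

book = [["ex 1.1", "ex 1.2"], ["ex 2.1"]]
print(current_objective(0.4, [e for ch in book for e in ch]))  # 0.0: nothing solved
print(textbook_objective(0.4, book))                           # ~0.15: progress made
```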

This approach, known as "meta-learning" or "learning to learn," has precedents in earlier AI systems. "Just like the ideas of scaling test-time compute and search and test-time exploration played out in the domain of games first" — in systems like DeepMind's AlphaGo — "the same is true for meta learning. We know that these ideas do work at a small scale, but we need to adapt them to the scale and the capability of foundation models."

The missing ingredients for AI that truly learns aren't new architectures—they're better data and smarter objectives

When Rafailov addressed why current models lack this learning capability, he offered a surprisingly straightforward answer.

"Unfortunately, I think the answer is quite prosaic," he said. "I think we just don't have the right data, and we don't have the right objectives. I fundamentally believe a lot of the core architectural engineering design is in place."

Rather than arguing for entirely new model architectures, Rafailov suggested the path forward lies in redesigning the data distributions and reward structures used to train models.

"Learning, in of itself, is an algorithm," he explained. "It has inputs — the current state of the model. It has data and compute. You process it through some sort of structure, choose your favorite optimization algorithm, and you produce, hopefully, a stronger model."

The question: "If reasoning models are able to learn general reasoning algorithms, general search algorithms, and agent models are able to learn general agency, can the next generation of AI learn a learning algorithm itself?"

His answer: "I strongly believe that the answer to this question is yes."
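Stated as a signature (a paraphrase with placeholder types, not an API from the talk), learning maps a model, data, and compute to a stronger model; the meta-learning question is whether training can produce that function itself:

```python
from typing import Any, Callable, List

Model = Any   # placeholder types for illustration only
Data = Any

def learn(model: Model, data: Data, compute_budget: int) -> Model:
    """Learning as an algorithm: current model state in, stronger model out,
    using some optimizer and a fixed budget of data and compute."""
    ...  # stub: the body is whatever optimization procedure you favor

# "Learning to learn": the object being trained is the learning procedure itself.
LearningAlgorithm = Callable[[Model, Data, int], Model]

def meta_learn(environments: List[Data], compute_budget: int) -> LearningAlgorithm:
    """Sketch: large-scale training over diverse environments that, if the bet
    pays off, yields a general-purpose `learn` procedure as its output."""
    ...
```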

The technical approach would involve creating training environments where "learning, adaptation, exploration, and self-improvement, as well as generalization, are necessary for success."

"I believe that under enough computational resources and with broad enough coverage, general purpose learning algorithms can emerge from large scale training," Rafailov said. "The way we train our models to reason in general over just math and code, and potentially act in general domains, we might be able to teach them how to learn efficiently across many different applications."

Forget god-like reasoners: The first superintelligence will be a master student

This vision leads to a fundamentally different conception of what artificial superintelligence might look like.

"I believe that if this is possible, that's the final missing piece to achieve truly efficient general intelligence," Rafailov said. "Now imagine such an intelligence with the core objective of exploring, learning, acquiring information, self-improving, equipped with general agency capability—the ability to understand and explore the external world, the ability to use computers, ability to do research, ability to manage and control robots."

Such a system would constitute artificial superintelligence. But not the kind often imagined in science fiction.

"I believe that intelligence is not going to be a single god model that's a god-level reasoner or a god-level mathematical problem solver," Rafailov said. "I believe that the first superintelligence will be a superhuman learner, and it will be able to very efficiently figure out and adapt, propose its own theories, propose experiments, use the environment to verify that, get information, and iterate that process."

This vision stands in contrast to OpenAI's emphasis on building increasingly powerful reasoning systems, or Anthropic's focus on "constitutional AI." Instead, Thinking Machines Lab appears to be betting that the path to superintelligence runs through systems that can continuously improve themselves through interaction with their environment.

The $12 billion bet on learning over scaling faces formidable challenges

Rafailov's appearance comes at a complex moment for Thinking Machines Lab. The company has assembled an impressive team of approximately 30 researchers from OpenAI, Google, Meta, and other leading labs. But it suffered a setback in early October when Andrew Tulloch, a co-founder and machine learning expert, departed to return to Meta after that company launched what The Wall Street Journal called a "full-scale raid" on the startup, approaching more than a dozen employees with compensation packages ranging from $200 million to $1.5 billion over multiple years.

Despite these pressures, Rafailov's comments suggest the company remains committed to its differentiated technical approach. The company launched its first product, Tinker, an API for fine-tuning open-source language models, in October. But Rafailov's talk suggests Tinker is just the foundation for a much more ambitious research agenda focused on meta-learning and self-improving systems.

"This is not easy. This is going to be very difficult," Rafailov acknowledged. "We'll need a lot of breakthroughs in memory and engineering and data and optimization, but I think it's fundamentally possible."

He concluded with a play on words: "The world is not enough, but we need the right experiences, and we need the right type of rewards for learning."

The question for Thinking Machines Lab — and the broader AI industry — is whether this vision can be realized, and on what timeline. Rafailov notably did not offer specific predictions about when such systems might emerge.

In an industry where executives routinely make bold predictions about AGI arriving within years or even months, that restraint is notable. It suggests either unusual scientific humility — or an acknowledgment that Thinking Machines Lab is pursuing a much longer, harder path than its competitors.

For now, the most revealing detail may be what Rafailov didn't say during his TED AI presentation. No timeline for when superhuman learners might emerge. No prediction about when the technical breakthroughs would arrive. Just a conviction that the capability was "fundamentally possible" — and that without it, all the scaling in the world won't be enough.


