Google study shows LLMs abandon correct answers under pressure, threatening multi-turn AI systems

By Josh
July 16, 2025
in Technology And Software

A new study by researchers at Google DeepMind and University College London examines how large language models (LLMs) form, maintain and lose confidence in their answers. The findings reveal striking similarities between the cognitive biases of LLMs and those of humans, while also highlighting stark differences.

The study shows that LLMs can be overconfident in their own answers yet quickly lose that confidence and change their minds when presented with a counterargument, even when that counterargument is incorrect. Understanding the nuances of this behavior has direct consequences for how you build LLM applications, especially conversational interfaces that span several turns.

Testing confidence in LLMs

A critical factor in the safe deployment of LLMs is that their answers are accompanied by a reliable sense of confidence (the probability the model assigns to the answer token). While we know LLMs can produce these confidence scores, the extent to which they can use them to guide adaptive behavior is poorly characterized. There is also empirical evidence that LLMs can be overconfident in their initial answer yet highly sensitive to criticism, quickly becoming underconfident in that same choice.
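To make token-level confidence concrete, the sketch below reads it directly from a model's output distribution. This is a minimal illustration, assuming a Hugging Face causal LM; the model and prompt are stand-ins, not the study's actual setup:

    # Minimal sketch: an LLM's confidence in a binary choice, read from the
    # probabilities it assigns to the two answer tokens. "gpt2" is a small
    # stand-in; the study used much larger models.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    prompt = "Is the latitude of Paris closer to (A) 48N or (B) 41N? Answer:"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # next-token logits

    probs = torch.softmax(logits, dim=-1)
    id_a = tokenizer.encode(" A", add_special_tokens=False)[0]
    id_b = tokenizer.encode(" B", add_special_tokens=False)[0]

    # Normalize over the two options to get a confidence score for each answer.
    conf_a = (probs[id_a] / (probs[id_a] + probs[id_b])).item()
    print(f"P(A) = {conf_a:.2f}, P(B) = {1 - conf_a:.2f}")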

To investigate this, the researchers developed a controlled experiment to test how LLMs update their confidence and decide whether to change their answers when presented with external advice. In the experiment, an “answering LLM” was first given a binary-choice question, such as identifying the correct latitude for a city from two options. After making its initial choice, the LLM was given advice from a fictitious “advice LLM.” This advice came with an explicit accuracy rating (e.g., “This advice LLM is 70% accurate”) and would either agree with, oppose, or stay neutral on the answering LLM’s initial choice. Finally, the answering LLM was asked to make its final choice.
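The trial structure is simple enough to sketch in code. The following is a rough reconstruction under stated assumptions, not the paper's actual harness; query_llm() is a hypothetical helper that returns a (choice, confidence) pair for a prompt:

    # Sketch of one two-turn trial. query_llm() is a hypothetical helper
    # returning (choice, confidence) for a prompt.
    def run_trial(question, options, advice_direction, advice_accuracy=0.70,
                  show_initial_answer=True):
        # Turn 1: the answering LLM makes its initial binary choice.
        prompt_1 = f"{question}\nOptions: {options[0]} or {options[1]}\nAnswer:"
        initial_choice, initial_conf = query_llm(prompt_1)

        # Advice that agrees with, opposes, or is neutral on the initial
        # choice, tagged with an explicit accuracy rating as in the paper.
        if advice_direction == "agree":
            advised = initial_choice
        elif advice_direction == "oppose":
            advised = options[1] if initial_choice == options[0] else options[0]
        else:  # "neutral" -- used for the baseline condition
            advised = None
        advice = (f"An advice LLM that is {advice_accuracy:.0%} accurate "
                  + (f"recommends: {advised}." if advised
                     else "makes no recommendation."))

        # Turn 2: the initial answer is either shown to or hidden from the model.
        memory = (f"Your initial answer was: {initial_choice}.\n"
                  if show_initial_answer else "")
        final_choice, final_conf = query_llm(
            f"{question}\n{memory}{advice}\nFinal answer:")

        return {"advice_direction": advice_direction,
                "show_initial_answer": show_initial_answer,
                "changed": final_choice != initial_choice,
                "conf_shift": final_conf - initial_conf}

Stripping the initial answer from the second prompt when show_initial_answer is False is what lets the experiment isolate memory effects that cannot be removed in human subjects.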


Example test of confidence in LLMs (source: arXiv)

A key part of the experiment was controlling whether the LLM’s own initial answer was visible to it during the second, final decision. In some cases, it was shown, and in others, it was hidden. This unique setup, impossible to replicate with human participants who can’t simply forget their prior choices, allowed the researchers to isolate how memory of a past decision influences current confidence. 

A baseline condition, where the initial answer was hidden and the advice was neutral, established how much an LLM’s answer might change simply due to random variance in the model’s processing. The analysis focused on how the LLM’s confidence in its original choice changed between the first and second turn, providing a clear picture of how initial belief, or prior, affects a “change of mind” in the model.
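Given trial records like the ones produced above, the analysis step reduces to grouping trials by condition and comparing change-of-mind rates and confidence shifts. A minimal sketch, again assuming the hypothetical record format from the earlier snippet:

    # Sketch: aggregate change-of-mind rate and mean confidence shift per
    # condition, using the trial dicts returned by run_trial() above.
    from collections import defaultdict

    def summarize(trials):
        groups = defaultdict(list)
        for t in trials:
            groups[(t["advice_direction"], t["show_initial_answer"])].append(t)

        for (direction, visible), ts in sorted(groups.items()):
            change_rate = sum(t["changed"] for t in ts) / len(ts)
            mean_shift = sum(t["conf_shift"] for t in ts) / len(ts)
            print(f"advice={direction:<8} visible={visible!s:<5} "
                  f"change-of-mind rate={change_rate:.2f} "
                  f"mean confidence shift={mean_shift:+.3f}")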

Overconfidence and underconfidence

The researchers first examined how the visibility of the LLM’s own answer affected its tendency to change its answer. They observed that when the model could see its initial answer, it showed a reduced tendency to switch, compared to when the answer was hidden. This finding points to a specific cognitive bias. As the paper notes, “This effect – the tendency to stick with one’s initial choice to a greater extent when that choice was visible (as opposed to hidden) during the contemplation of final choice – is closely related to a phenomenon described in the study of human decision making, a choice-supportive bias.”

The study also confirmed that the models do integrate external advice. When faced with opposing advice, the LLM showed an increased tendency to change its mind, and a reduced tendency when the advice was supportive. “This finding demonstrates that the answering LLM appropriately integrates the direction of advice to modulate its change of mind rate,” the researchers write. However, they also discovered that the model is overly sensitive to contrary information and makes an outsized confidence update as a result.

Sensitivity of LLMs to different settings in confidence testing (source: arXiv)

Interestingly, this behavior is contrary to the confirmation bias often seen in humans, where people favor information that confirms their existing beliefs. The researchers found that LLMs “overweight opposing rather than supportive advice, both when the initial answer of the model was visible and hidden from the model.” One possible explanation is that training techniques like reinforcement learning from human feedback (RLHF) may encourage models to be overly deferential to user input, a phenomenon known as sycophancy (which remains a challenge for AI labs).

Implications for enterprise applications

This study confirms that AI systems are not the purely logical agents they are often perceived to be. They exhibit their own set of biases, some resembling human cognitive errors and others unique to themselves, which can make their behavior unpredictable in human terms. For enterprise applications, this means that in an extended conversation between a human and an AI agent, the most recent information could have a disproportionate impact on the LLM’s reasoning (especially if it is contradictory to the model’s initial answer), potentially causing it to discard an initially correct answer.

Fortunately, as the study also shows, we can manipulate an LLM’s memory to mitigate these unwanted biases in ways that are not possible with humans. Developers building multi-turn conversational agents can implement strategies to manage the AI’s context. For example, a long conversation can be periodically summarized, with key facts and decisions presented neutrally and stripped of which agent made which choice. This summary can then be used to initiate a new, condensed conversation, providing the model with a clean slate to reason from and helping to avoid the biases that can creep in during extended dialogues.
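As a rough illustration of that strategy, the sketch below periodically compresses a chat history into a neutral, attribution-free summary and reseeds the context with it. summarize_llm() is a hypothetical LLM call, and the turn threshold is an arbitrary choice:

    # Sketch of the mitigation described above: periodically compress the
    # dialogue into a neutral summary and restart the context from it.
    # summarize_llm() is a hypothetical LLM call.
    SUMMARY_EVERY_N_TURNS = 8  # arbitrary threshold; tune per application

    def maybe_condense(history):
        # history: list of {"role": ..., "content": ...} chat messages.
        if len(history) < SUMMARY_EVERY_N_TURNS:
            return history

        transcript = "\n".join(f"{m['role']}: {m['content']}" for m in history)
        summary = summarize_llm(
            "Summarize the key facts and decisions below as neutral bullet "
            "points. Do not attribute any statement to the user or the "
            "assistant.\n\n" + transcript
        )
        # Re-seed a fresh conversation with the attribution-free summary,
        # giving the model a clean slate to reason from.
        return [{"role": "system",
                 "content": "Context so far (neutral summary):\n" + summary}]

Because the summary carries no record of which agent asserted what, the model's choice-supportive bias and its oversensitivity to opposing input have nothing to anchor on.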

As LLMs become more integrated into enterprise workflows, understanding the nuances of their decision-making processes is no longer optional. Following foundational research like this enables developers to anticipate and correct for these inherent biases, leading to applications that are not just more capable, but also more robust and reliable.

