OpenAI and Anthropic researchers decry ‘reckless’ safety culture at Elon Musk’s xAI

By Josh
July 16, 2025

AI safety researchers from OpenAI, Anthropic, and other organizations are speaking out publicly against the “reckless” and “completely irresponsible” safety culture at xAI, the billion-dollar AI startup owned by Elon Musk.

The criticisms follow weeks of scandals at xAI that have overshadowed the company’s technological advances.

Last week, the company’s AI chatbot, Grok, spouted antisemitic comments and repeatedly called itself “MechaHitler.” Shortly after xAI took its chatbot offline to address the problem, it launched an increasingly capable frontier AI model, Grok 4, which TechCrunch and others found to consult Elon Musk’s personal politics for help answering hot-button issues. In the latest development, xAI launched AI companions that take the form of a hyper-sexualized anime girl and an overly aggressive panda.

Friendly joshing among employees of competing AI labs is fairly normal, but these researchers seem to be calling for increased attention to xAI’s safety practices, which they say are at odds with industry norms.

“I didn’t want to post on Grok safety since I work at a competitor, but it’s not about competition,” said Boaz Barak, a computer science professor currently on leave from Harvard to work on safety research at OpenAI, in a Tuesday post on X. “I appreciate the scientists and engineers @xai but the way safety was handled is completely irresponsible.”

Barak particularly takes issue with xAI’s decision to not publish system cards — industry standard reports that detail training methods and safety evaluations in a good faith effort to share information with the research community. As a result, Barak says it’s unclear what safety training was done on Grok 4.

OpenAI and Google have a spotty reputation themselves when it comes to promptly sharing system cards when unveiling new AI models. OpenAI decided not to publish a system card for GPT-4.1, claiming it was not a frontier model. Meanwhile, Google waited months after unveiling Gemini 2.5 Pro to publish a safety report. However, these companies historically publish safety reports for all frontier AI models before they enter full production.

Barak also notes that Grok’s AI companions “take the worst issues we currently have for emotional dependencies and tries to amplify them.” In recent years, there have been countless stories of vulnerable people developing concerning relationships with chatbots, and of AI’s over-agreeable answers pushing them over the edge.

Samuel Marks, an AI safety researcher with Anthropic, also took issue with xAI’s decision not to publish a safety report, calling the move “reckless.”

“Anthropic, OpenAI, and Google’s release practices have issues,” Marks wrote in a post on X. “But they at least do something, anything to assess safety pre-deployment and document findings. xAI does not.”

xAI launched Grok 4 without any documentation of their safety testing. This is reckless and breaks with industry best practices followed by other major AI labs.

If xAI is going to be a frontier AI developer, they should act like one. 🧵

— Samuel Marks (@saprmarks) July 13, 2025

The reality is that we don’t really know what xAI did to test Grok 4. In a widely shared post in the online forum LessWrong, one anonymous researcher claims that Grok 4 has no meaningful safety guardrails based on their testing.

Whether that’s true or not, the world seems to be finding out about Grok’s shortcomings in real time. Several of xAI’s safety issues have since gone viral, and the company claims to have addressed them with tweaks to Grok’s system prompt.

OpenAI, Anthropic, and xAI did not respond to TechCrunch’s request for comment.

Dan Hendrycks, a safety adviser for xAI and director of the Center for AI Safety, posted on X that the company did “dangerous capability evaluations” on Grok 4. However, the results of those evaluations have not been publicly shared.

“It concerns me when standard safety practices aren’t upheld across the AI industry, like publishing the results of dangerous capability evaluations,” said Steven Adler, an independent AI researcher who previously led safety teams at OpenAI, in a statement to TechCrunch. “Governments and the public deserve to know how AI companies are handling the risks of the very powerful systems they say they’re building.”

What’s interesting about xAI’s questionable safety practices is that Musk has long been one of the AI safety industry’s most notable advocates. The billionaire leader of xAI, Tesla, and SpaceX has warned many times about the potential for advanced AI systems to cause catastrophic outcomes for humans, and he’s praised an open approach to developing AI models.

And yet, AI researchers at competing labs claim xAI is veering from industry norms around safely releasing AI models. In doing so, Musk’s startup may be inadvertently making a strong case for state and federal lawmakers to set rules around publishing AI safety reports.

There are several attempts at the state level to do so. California state Sen. Scott Wiener is pushing a bill that would require leading AI labs — likely including xAI — to publish safety reports, while New York Gov. Kathy Hochul is currently considering a similar bill. Advocates of these bills note that most AI labs publish this type of information anyway — but evidently, not all of them do it consistently.

AI models have yet to cause truly catastrophic harms in the real world, such as deaths or billions of dollars in damages. However, many AI researchers say this could become a problem in the near future given the rapid progress of AI models and the billions of dollars Silicon Valley is investing to improve them further.

But even for skeptics of such catastrophic scenarios, there’s a strong case to suggest that Grok’s misbehavior makes the products it powers today significantly worse.

Grok spread antisemitism around the X platform this week, just a few weeks after the chatbot repeatedly brought up “white genocide” in conversations with users. Musk has indicated that Grok will be more deeply integrated into Tesla vehicles, and xAI is trying to sell its AI models to the Pentagon and other enterprises. It’s hard to imagine that people driving Musk’s cars, federal workers protecting the U.S., or enterprise employees automating tasks will be any more receptive to these misbehaviors than users on X.

Several researchers argue that AI safety and alignment testing not only ensures that the worst outcomes don’t happen, but also protects against near-term behavioral issues.

At the very least, Grok’s incidents tend to overshadow xAI’s rapid progress in developing frontier AI models that best OpenAI and Google’s technology, just a couple years after the startup was founded.




