Plus: Can you tell the difference between real and AI-generated voices? Treasury Secretary Bessent stays tight-lipped on a potential TikTok deal.
OpenAI CEO Sam Altman recently admitted he hasn’t had “a good night’s sleep” since the launch of ChatGPT, citing the moral weight of decisions that affect “hundreds of millions” of people worldwide.
In an interview last week, Altman addressed controversies around AI safety, ethical boundaries and responsibility for user harm, including an ongoing case in which parents allege ChatGPT contributed to their teenage son’s suicide.
Altman said OpenAI is exploring policies to intervene when minors express suicidal ideation through the platform, potentially contacting authorities if parents can’t be reached. But he also acknowledged the privacy and ethical trade-offs such a system would bring.
“The CEO said that out of the thousands of people who commit suicide each week, many of them could possibly have been talking to ChatGPT in the lead-up,” CNBC reports.
“They probably talked about [suicide], and we probably didn’t save their lives,” Altman said in the interview. “Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice about, hey, you need to get this help…This is a really hard problem. We have a lot of users now, and they come from very different life perspectives…”
He added that despite this, he has been “pleasantly surprised” with ChatGPT’s ability to “learn and apply a moral framework.”
Online, at least one X user put it this way: “When you build an AI dream team with blank checks, you risk creating a nightmare you can’t manage.”
Why it matters: Altman’s remarks highlight the reputational challenges facing companies in high-stakes, fast-moving industries – and not everyone will be receptive to his message, given the unknowns of AI.
However, Altman’s framing positions OpenAI as both vulnerable and conscientious: bound by responsibility in a field with little regulatory oversight, yet committed to transparency and solutions.
For communicators, the interview underscores several lessons, chief among them that transparency can help build credibility. By admitting to sleepless nights and uncertainty, Altman humanizes OpenAI and signals that his concerns are sincere.
Altman also emphasizes that accountability matters, admitting he doesn’t have all the answers while making clear he understands the sensitivity of the moral issues at stake.
Altman’s communication avoids both defensiveness and false certainty.
He is candid in his approach and honest about what he doesn’t know. That mix of vulnerability and accountability offers a model for leaders navigating controversial, high-impact issues with the right tone.
Editor’s Top Reads:
- The Wall Street Journal is putting its audience to the test to see whether readers can distinguish real human voices from AI-generated ones. WSJ worked with David Falkenstein from corporate security firm IOActive “to clone a few Wall Street Journal colleagues. (Falkenstein) pulled down bits of…publicly available social-media and podcast audio, clips just 10 to 30 seconds in length,” WSJ reports. “He used OpenAudio…to make our voices say some pretty crazy things.” The test is five clips ranging from 12 to 14 seconds, and they’re admittedly pretty convincing (I got 3 out of 5). If this quiz shows businesses anything, it’s that they must be prepared and establish internal protocols for verifying voice content, especially for urgent or sensitive announcements. Voice is now another piece of identity that can be imitated: someone impersonating a spokesperson or executive could cause reputational damage, inflict financial losses or spread misinformation. Take the quiz and let us know how you did.
- In the days after Charlie Kirk’s death, some companies are grappling with backlash over viral posts their employees shared on social accounts that appear to celebrate the death. Microsoft, Delta Air Lines and Office Depot are among the companies that have released statements condemning the actions, with Office Depot saying it terminated the employees in question after a review. Office Depot apologized to its customers and said the actions were “unacceptable and insensitive.” Microsoft said it was investigating the incidents and also publicly addressed the controversy: “Comments celebrating violence against anyone are unacceptable and do not align with our values,” Business Insider reports. In the age of virality, response times are compressed; when controversies arise, delayed action tends to amplify negative perception. The companies in this case acted quickly, investigating, making public statements and terminating employees where deemed necessary. Companies must stand by their values or risk alienating stakeholders, losing trust or being thrust inadvertently into the spotlight. This also serves as a reminder that it’s not a bad idea to have a social media policy that balances employees’ personal views with upholding company standards.
- Wednesday marks the deadline for a TikTok ban, a date that has already been pushed back three times this year, but will the latest stop date prove to be another false alarm? At this point, businesses are less concerned about the app disappearing in the U.S. and more concerned about what’s actually going on. Treasury Secretary Scott Bessent told reporters today, “We have a framework for a TikTok deal,” per the New York Times. AppLovin, Oracle and Blackstone were previously floated as buyer contenders. Bessent didn’t offer further details, saying the negotiations were “private.” For the last nine months, government leaders have been ambiguous in their messaging over the app, saying enough is enough at some points, then creating an official White House TikTok page at others. While saying there’s a framework in place is progress from what we’ve seen before, the message doesn’t do much to convince anyone that there’s a solid plan or that any real progress is being made. Companies that aren’t clear in their communications can’t reassure stakeholders. When companies cannot clearly and simply explain their message, they risk losing trust and credibility, or not being taken seriously at all.
Courtney Blackann is a communications reporter. Connect with her on LinkedIn or email her at courtneyb@ragan.com.