
Plus: Microsoft emphasizes safety, human connection for new Copilot features; Trump’s ballroom donors just became public.
A group of tech leaders, academics and political figures, including Apple cofounder Steve Wozniak and former White House strategist Steve Bannon, just signed a public letter warning against the creation of “superintelligent” AI.
The open letter, published by the Future of Life Institute and titled “Statement on Superintelligence,” calls for a global pause on developing AI systems that could outsmart humans until there’s solid proof they can be kept safe and under control. It has more than 32,000 signatures.
The signers range from influential voices like AI pioneers Geoffrey Hinton and Yoshua Bengio to cultural personalities and religious figures across multiple ideologies, Futurism reports.
The statement provides context, stating: “Innovative AI tools may bring unprecedented health and prosperity. However, alongside tools, many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks.
“This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction.”
It also urges policymakers to give the public more say in shaping how AI evolves.
FLI cofounder Anthony Aguirre said, “Many people want powerful AI tools for science, medicine, productivity, and other benefits. But the path AI corporations are taking, of racing toward smarter-than-human AI that is designed to replace people, is wildly out of step with what the public wants, scientists think is safe, or religious leaders feel is right. Nobody developing these AI systems has been asking humanity if this is OK. We did — and they think it’s unacceptable.”
Why it matters: When high-profile figures publicly call for limits in a bipartisan effort, it puts pressure on tech leaders to respond and to show how they’re addressing safety, ethics and oversight.
If companies don’t weigh in now, they risk being seen as evasive or reckless. They’ll need to be more transparent than ever in their approach.
Instead of just showing what they’re building, they’ll have to explain why and how it benefits society without crossing ethical lines. In other words, the burden of proof shifts from “should AI be regulated?” to “what are you doing to make sure it’s safe?”
For brands in the AI space, this means reinforcing credibility through transparency, publicly available safety research and clear messaging.
Those that can show accountability early on will likely earn more trust, while those that dismiss growing concerns risk becoming the next headline for all the wrong reasons.
Editor’s Top Reads:
- And on that note, Microsoft is rolling out new updates for its Copilot AI, including an optional, sassy-toned chat agent and more ways to engage with other people through a “groups” setting that allows up to 32 people to collaborate and interact. But the company is placing emphasis on strengthening the safety of its AI, not just furthering its capabilities. Where Meta recently introduced age restrictions, Microsoft AI CEO Mustafa Suleyman said his company’s focus is on “safety for all,” meaning safeguards will apply to everyone. In a recent interview with CNN, Suleyman said: “We are creating AIs that are emotionally intelligent, that are kind and supportive, but that are fundamentally trustworthy. I want to make an AI that you trust your kids to use, and that means it needs to be boundaried and safe.” Suleyman also explained how Microsoft is leaning into human-to-human connection, which he called “a very significant tonal shift to other things that are happening in the industry at the moment.” When a CEO publicly acknowledges concerns about safe usage of AI and outlines concrete updates or policies, it builds trust with parents, regulators and the wider public. This kind of messaging helps Microsoft frame itself as proactively working on ethics and oversight, which not only mitigates both regulatory and reputational risk but also strengthens the brand’s credibility.
- President Donald Trump released a list of donors who are helping pay for his $300 million ballroom at the White House, which required demolishing the East Wing. The donor list includes major tech and defense companies like Apple, Amazon, Google, Meta, Microsoft and Lockheed Martin, along with several crypto players. “Lockheed Martin is grateful for the opportunity to help bring the President’s vision to reality and make this addition to the People’s House, a powerful symbol of the American ideals we work to defend every day,” a spokesperson told CNBC. By putting their names on such a politically charged project, these companies curry favor with the president but also risk being seen as enabling a divisive initiative. While they might earn some applause, other customers, employees and stakeholders may push back, demanding explanations or distancing themselves altogether. Lockheed Martin did say the donation mirrors the “American ideals” it values, but neglected to say what those ideals are. Going forward, these companies will need clear messaging about why they gave and how it aligns with their values or interests if they want to continue being seen as trustworthy and credible.
- Incoming Target CEO Michael Fiddelke announced in a memo to employees that he’d be cutting 1,800 jobs, or about 8% of the company’s corporate workforce. CNBC reports that Fiddelke said, “The truth is, the complexity we’ve created over time has been holding us back. Too many layers and overlapping work have slowed decisions, making it harder to bring ideas to life. … Decisions that affect our team are the most significant ones we make, and we never make them lightly. I know the real impact this has on our team, and it will be difficult. And, it’s a necessary step in building the future of Target and enabling the progress and growth we all want to see.” He went on to share a list of next steps and goals while assuring affected employees they’d receive severance packages. While this is the retailer’s largest round of layoffs in a decade, and hardly happy news, Fiddelke is speaking plainly and honestly about the cuts. He’s being open and transparent about why it’s necessary and not holding back on how this will bring the company closer to its business goals. Change is rarely smooth, and leaders occasionally have to make tough decisions, but Fiddelke’s comms here strike the right balance of honesty and compassion.
Courtney Blackann is a communications reporter. Connect with her on LinkedIn or email her at courtneyb@ragan.com.