
Plus: Local newsroom faces criticism over AI use; U.S. Coast Guard clarifies policy after softening language on hate symbols.
Sky Sports has apologized and taken down its TikTok channel Halo days after it launched following criticism over what its audience said were sexist undertones.
Sky said the channel would offer “sports content through a female lens,” covering all kinds of sports and celebrating women athletes, but in a fun, trend-led way. The reality was videos rendered in pastels and pinks with overlays of sparkly text.
People quickly criticized the tone and style, with many viewers calling the content condescending, patronizing and sexist. Sky Sports pulled the account after just three days, CNN reports.
Many felt it leaned too heavily into pink aesthetics, “hot girl walk” references and stereotypical tropes, which they argued reduced the experience of being a female sports fan to superficial trends.
After removing the channel, Sky Sports apologized, saying they’d listened to the feedback, admitted they didn’t get it right and promised to learn from the mistake.
The statement, posted on TikTok, reads: “Our intention for Halo was to create a space alongside our existing channel for new, young, female fans. We’ve listened. We didn’t get it right. As a result, we’re stopping all activity on this account. We’re learning and remain as committed as ever to creating spaces where fans feel included and inspired.”
Why it matters: This could have gone a different way for Sky Sports had their messaging resonated with their intended audience. But the channel leaned too heavily on stereotypes instead of genuinely understanding what female sports fans actually want.
A beta test with feedback could have highlighted the tone and content issues before going public, avoiding a crisis altogether.
That being said, Sky Sports acted quickly to shut down Halo once backlash became clear. They didn’t try to defend the channel or argue with critics. Acting fast helped prevent further reputational damage.
Their statement also admitted they “didn’t get it right.” Owning the error is far more effective than deflecting blame, which builds credibility with audiences.
For PR pros, it’s best to know your audience deeply, test your messaging and act quickly and transparently when things go wrong.
Editor’s Top Reads:
- A group of reporters at Suncoast Searchlight, a small nonprofit newsroom in Florida, asked their board of directors to investigate their editor-in-chief after discovering what they believe was undisclosed use of AI in editing processes. Nieman Lab reports they claim the editor, Emily Le Coz, added quotes that were never said and inserted incorrect facts, including a reference to a law that doesn’t exist. They told the board that the situation had damaged their trust and asked for an internal audit, a formal AI policy and a commitment that AI wouldn’t be used in editing without transparency. One of the four full-time reporters who raised the issue was fired the day after the board received the complaint. Editors cited performance issues, but the reporter believes it was retaliation for sharing her concerns. The board responded with a public statement saying they had spoken with the editors and still had “full confidence in the ethics and integrity of the editorial leadership, the editing process and in the accuracy and trustworthiness of our reporting.” They also said they had “found no evidence that any published content includes inaccuracies,” but committed to creating a newsroom AI policy and continuing their internal review. Undisclosed AI use, or even the appearance of it, especially when it leads to errors or fabricated material, undermines trust internally and externally. PR pros would be rightfully hesitant to work with an organization accused of inaccuracies and false quotes in published content. For brands working with local media, who’s editing, how stories are produced and whether AI is influencing the storyline could affect partnerships and credibility, and pose a major reputational risk to their own organizations if left unchecked.
- The U.S. Coast Guard faced blowback this week after drafting a new policy that softened how it described hate symbols, like swastikas and nooses. Instead of clearly calling them examples of “hate incidents,” the draft labeled them “potentially divisive” and left it up to local commanders to decide what counted as a problem, Newsweek reports. That wording triggered immediate backlash from lawmakers, advocacy groups and the public, who said it looked like the Coast Guard was weakening protections and downplaying the seriousness of hateful language. After the criticism, the Coast Guard reversed course, clarified the policy and explicitly stated that hate symbols, including swastikas and nooses, are still banned. “Symbols such as swastikas, nooses and other extremist or racist imagery violate our core values and are treated with the seriousness they warrant under current policy,” Admiral Kevin Lunday, acting commandant of the Coast Guard, told the outlet. This incident is a reminder that wording matters just as much as policy. The way the initial language was written created confusion, looked like a step backward and damaged trust. When dealing with sensitive issues, especially symbols tied to racism and violence, any ambiguity can spark outrage fast. Clear, direct language is essential to avoid misunderstandings and maintain credibility, especially in moments of high public scrutiny.
- Meta recently launched a new Content Protection tool to stop people from reposting a creator’s Reels without credit. It uses the same detection technology behind Rights Manager to automatically find full or partial copies of a creator’s Reels across Facebook and Instagram. When it spots a match, users can choose to track it, block it or release the claim. Tracking lets the repost stay up while giving the user performance insights and an optional “original by” attribution link. Blocking hides the repost across Meta’s platforms, and releasing simply drops the claim. Creators can also build “allow-lists” for trusted accounts and dispute wrongful ownership claims. The tool is rolling out first to creators in Facebook’s monetization program and is designed to make original content easier to protect, control and credit. PR pros could find this tool useful because it helps protect owned content, maintain brand integrity and control the narrative in a world where videos are constantly reposted, remixed or taken out of context.
Courtney Blackann is a communications reporter. Connect with her on LinkedIn or email her at courtneyb@ragan.com.
The post The Scoop: Sky Sports removes TikTok page and apologizes after ‘condescending’ launch appeared first on PR Daily.
