Brands that succeed treat AI as a sidekick, not a substitute.
Depending on how it’s used, AI can enhance social strategy or become a lesson in what not to do.
As AI continues to reshape the way people create and consume online content, Gabrielle K. Too-A-Foo, social media and brand strategy lead at PwC, says brands face both tremendous opportunity and heightened responsibility.
“AI can help scale content, streamline processes so you can be more efficient with your time, and it can also be a brainstorming tool,” Too-A-Foo said. “But the challenge comes if people start to use it as an outsourcing of their own thought process.”
She stressed that while there are significant benefits, brands can’t afford to treat AI as autopilot.
“The onus is really on the person creating the content to make sure that what’s being produced is accurate, precise and has the right brand tone,” she said. “If you’re putting information into a tool that’s going to spread at scale, what’s going to happen (when you get it wrong)? It’s going to be misinformation at scale. It’s going to be the wrong branding voice at scale.”
Social editors and specialists should weigh accuracy, ethical implications, tone and purpose when deploying AI in their strategy. Too-A-Foo outlined both concerns and practical steps to help brands strike the right balance.
Here’s what she said to consider.
Ethics and guardrails
Part of the concern comes down to ethics. Too-A-Foo argued that companies need to bake ethical standards into AI use from the start, rather than bolt them on later. That includes defining what kind of language the brand uses and what is acceptable.
She cited PwC’s own guidelines against absolute promises as an example: “We don’t speak in absolutes, like ‘we will always make sure that our clients are happy.’ Instead, we say, ‘we’ll try our best to ensure client success.’ Those kinds of parameters need to be set upfront in your prompts.”
Brands can train their AI tools, including ChatGPT, Sprinklr and Canva, to reflect their ethical guidelines and avoid misleading or false promises, she said. Even so, outputs should be checked and verified by a human before anything is posted on social media.
Accountability in usage
Responsibility ultimately rests with people, not the technology, she said. “If content goes out via an AI tool under your brand, you’re the one responsible. God forbid you get into a car accident, we don’t shake our fists at the car. It’s the person driving it.”
With regulation still catching up, brands should expect evolving safety measures around AI and make choices that protect both their audiences and the brand. That means disclosing AI use when necessary and ensuring a human reviews the information before it’s published. And when brands do make a mistake, they should take accountability, she said.
“Other countries are already moving forward with AI regulations, and even some states are introducing restrictions,” Too-A-Foo said. “At some point, we might be in an environment like we saw with social media, where if you don’t follow privacy standards, you can face hefty fines.”
Authenticity reduces ‘sameness’
Beyond compliance, anything produced with AI still needs to feel genuine, particularly to younger audiences who grew up on social media and can readily suss out AI use.
“Gen Z has a very close connection to technology, and they’re quick to tell what’s real versus what’s fake,” she said. “In social media communities, being authentic is key. It’s almost like sharks smelling blood in the water. They can tell when something isn’t authentic, and they’re just not interested.”
That raises the risk of what she called “creative fatigue,” when audiences disengage because content starts to sound the same. Avoid this at all costs, she said.
“We’ve already seen it with sentence structures and emoji-heavy styles that scream ‘AI-generated,’” Too-A-Foo said. “If everybody’s doing the same thing, how is that going to resonate with an audience that craves authentic content?”
Equity and inclusivity
One of the biggest blind spots in brands’ use of AI is failing to treat equity and inclusivity as core responsibilities.
“Depending on the people who create these platforms, it’s easy for certain demographics to get marginalized,” Too-A-Foo said. “When we’re creating tools and content, it’s important to think about all the different types of users, from neurodivergent audiences to different races and genders. Equity is key, even down to your brand tone.”
Consider who the audience is in social strategy and train AI tools to reflect this when drafting ideas and content, she said. This will be critical to ensure you aren’t alienating a specific group of people and that the message resonates.
AI should also be thought of as a companion, not a replacement, Too-A-Foo said. Humans should always be at the center of social strategy.
“AI can be a sidekick, but it doesn’t replace the need for people to make sure the human element is included,” she said. “We should be good stewards of this powerful machinery. It can scale information fast, but if it’s wrong, it will scale misinformation just as fast.”
For brands, that means approaching AI with caution. “Start from the beginning with clear parameters, think about your end user and make sure your process is inclusive,” Too-A-Foo said. “If you do that, AI can be an incredible tool. But it’s only as good as the humans guiding it.”
Join Ragan’s Social Media Certificate Course on Oct. 1, 8 and 15 from 1 p.m. to 3 p.m. ET and learn more from Too-A-Foo here.
Courtney Blackann is a communications reporter. Connect with her on LinkedIn or email her at courtneyb@ragan.com.