
Insights from a legal expert.
Many companies have AI policies, or guidelines that spell out how the technology should be used, but fewer have clear decision-making structures to back them up. That gap is what AI governance is meant to address.
As Paavana Kumar, partner at Davis+Gilbert LLP, said during Ragan’s AI Horizons Conference, most organizations are already using AI, but very few are prepared to explain or defend how they’re using it. Failing to do so could result in reputational damage, credibility concerns or legal ramifications.
“Most companies, if they’re sophisticated, have an AI policy in place. What is a little bit more rare is to have a really robust governance strategy in place,” she said.
To avoid making mistakes that could damage the brand, comms teams can take these measures to protect their organizations right now.
- Move from AI policy to AI governance: Document AI use cases and how they’re governed. Kumar urged teams to map where AI is being used, what tools are approved and how each use case ties back to the brand’s policy so they can respond quickly if asked.
- Treat AI like an existing legal risk: Apply the same legal review standards you’re already using to AI. Teams should always review AI-generated copy, images and claims for copyright, publicity rights and bias before anything goes live, Kumar said. “A lot of these legal risks, they’re not new. They are existing, familiar legal issues coming up in new ways.” Teams should also continually review performance claims and avoid overstating what AI tools can do, she said. “The FTC has basically said there is no AI exception to those laws.”
- Build transparency into external AI use: Disclose when consumers are interacting with AI, every time, Kumar said. That means clearly labeling chatbots and AI-generated content so consumers aren’t misled.
- Build clarity into language: Be specific about tools, data and approval thresholds within your organization. Kumar recommended clearly stating what data is allowed to be accessed, what’s off-limits, which tools are approved and when human oversight is required. “A good AI policy will define boundaries, it will assign responsibility,” Kumar said.
- Create expectations for partners: Update agreements with partners or vendors to address AI use directly, she said. “Chances are that (another company’s) risk tolerances are different from yours.” Kumar said it’s best to require enterprise licenses when possible, define AI use in contracts and demand “representations, warranties and indemnities.”
- Treat record-keeping as protection: Keep clear records of policies, tools and all decisions, even if they seem insignificant, Kumar said. She emphasized documenting how AI is used and governed because it’s the first thing regulators will ask to see. “Record-keeping is boring until it’s not,” Kumar said.
To learn more about AI governance, view this presentation and more at Ragan Training.
Courtney Blackann is a communications reporter. Connect with her on LinkedIn or email her at courtneyb@ragan.com.
The post How to strengthen your AI governance appeared first on PR Daily.