Shadow AI may be a hot topic, but it’s hardly a new phenomenon. As an IT executive for Hewlett-Packard, TriNet, and now Zendesk, I have decades of experience tackling this issue, just under a different name: shadow IT. And though the tools have changed, the story hasn’t, which means the risks, consequences, and solutions remain very much the same.
What does stand out is the rate at which these outside AI tools are being adopted, particularly within CX teams. Part of this is how easy these tools are to access, and part is how well they perform. Either way, as more and more customer service agents bring their own AI tools to work, CX leaders now find themselves directly responsible for safeguarding customer trust and, ultimately, the larger business.
Short-term gains, long-term risks
Nearly half of the customer service agents we surveyed for our CX trends research admitted to using unauthorized AI tools in the workplace, and their reasons for doing so are hard to ignore.
Agents say AI helps them work more efficiently and deliver better service. It gives them more control over their day-to-day workloads and reduces stress. And for most, the upside, even if risky, far outweighs the potential consequences of getting caught.
“It makes me a better employee, makes me more efficient,” one agent told us. “It would be a lot harder to do my job if I didn’t have these tools, so why wouldn’t I continue to use them?”
“It makes it easier, basically, for me to do my work,” said another. “It gives me all the information I need to better answer customer questions.”
These aren’t fringe cases. More than 90% of agents using shadow AI say they’re doing so regularly. And the impact has been immense. Agents estimate it’s saving them over 2.5 hours every single day. Across a five-day week, that adds up to more than 12 hours, the equivalent of gaining an extra day and a half of working time.
Here’s what this tells me:
First, what’s happening here isn’t rebellion. Agents are being resourceful because the tools they’ve been given aren’t keeping up. That energy can be incredibly powerful if harnessed correctly, but outside of official company systems and channels, it creates risks to security, consistency, and long-term scalability.
Second, we’re entering a new phase where AI can act on agents’ behalf. This is a future we’re excited about, but only if it’s within a managed environment with the right guardrails in place. Without guardrails, unsanctioned AI tools could soon be reaching into company systems and performing actions that undermine leaders’ ability to ensure the integrity or security of their data.
At Zendesk, we view every customer interaction as a data point to help us train, refine, and evolve our AI. It’s how we improve the quality of suggestions, surface knowledge needs, and sharpen our capabilities. But none of that is possible when agents step outside core systems and those insights vanish into tools beyond our managed ecosystem.
Make no mistake, even the occasional use of shadow AI can be problematic. What starts as a well-meaning workaround can quietly scale into a much larger issue: an agent pastes sensitive data into a public LLM or an unsanctioned plugin starts pulling data from core systems without proper oversight. Before you know it, you’re dealing with security breaches, compliance violations, and operational issues that no one saw coming.
These risks grow even more serious in regulated industries like healthcare and finance, two sectors where shadow AI use has surged by more than 230% in just the past year. And yet, one of the biggest risks of all may not be what shadow AI introduces, but what it prevents companies from fully realizing.
The real missed opportunity? What AI could be doing
CX leaders focused on preventing shadow AI may be forgetting why it exists in the first place: It helps agents deliver faster, better customer service. And while AI may offer sizable benefits when used in isolation, these gains are only a fraction of what’s possible when it’s integrated across the organization.
Take Rue Gilt Groupe as an example. Since integrating AI into their customer service operation, they’ve seen:
- A 15–20% drop in repeat contact rates, thanks to customers getting the correct answers the first time around
- A 1-point increase in “above and beyond” service ratings
Results like these aren’t possible with one-off tools. Only when AI is plugged into your entire operation can it help teams work smarter and more efficiently. Integrated AI learns from every interaction, helps maintain consistency, and delivers measurably better outcomes over time.
Another big part of Rue Gilt Groupe’s success? Putting agents at the center of the process from the very beginning.
According to Maria Vargas, Vice President of Customer Service, her team is resolving issues faster and providing more detailed responses. And it all started with really trying to understand agent workflows and needs.
“If you don’t bring agents into the design process, into the discussions around AI implementation, you’re going to end up missing the mark,” said Vargas. “Get their feedback, have them test it, and then use that input to drive how you implement AI; otherwise, they may find their own way to tools that better fit their needs.”
So, what can CX leaders do to stay ahead of shadow AI while still encouraging innovation? It starts with partnership, not policing.
4 ways to promote innovation that’s good for all
While CX leaders can’t ignore the rise of shadow AI, solutions should aim to empower, not restrict. Far too often, I’ve seen leaders mistake control for leadership or overlook perspectives from their front-line people when considering new tools and technologies. This only stifles innovation and ignores the realities on the ground. Involving front-line employees in exploring use cases and trialing tools will naturally create champions and help ensure that selected tools meet both employee and company needs.
Agents are seeking out these tools in record numbers because what they have in-house isn’t keeping pace with the demands of their work. By partnering with them to clearly understand their day-to-day challenges, leaders can close this gap and find innovative tools that meet both productivity needs and security standards.
Here’s where to start:
1. Bring agents into the process.
The first step is ensuring agents are part of the conversation, not just the end users of new tools.
Most agents we spoke with were not aware of the security and compliance risks of using shadow AI, and many said their manager knew they were doing so. That’s a problem. To be successful, CX leaders must have buy-in at all levels of the organization. Start by making sure that everyone understands why using shadow AI is not in the best interest of customers or the company. Then, begin an open dialogue to understand where current tools are falling short. Form small teams to explore possible options and make tool recommendations to fill gaps.
2. Promote opportunities for experimentation with tools.
Once the foundation is established, it’s time to give teams space to test and explore, with the right safeguards in place.
Experimentation without structure can get messy, making it harder to control which pilots are approved for use, track who is experimenting, and ensure that feedback and results are documented. Even with the best intentions, this can quickly become a free-for-all that risks security and privacy breaches, duplicated efforts, and a general lack of accountability across teams.
At Zendesk, we’ve been very open to experimentation and have worked hard to harness the enthusiasm and willingness of our people to participate, so long as there are ground rules in place. This includes cross-functional governance for all new pilot programs, preventing siloed experimentation and allowing us to prioritize use cases that bring the most immediate and high-value benefit.
By creating controlled spaces where people can engage with new tools, CX leaders can better understand the real-world advantages they bring within a managed, secure framework. This is especially important for use cases involving customer data. As you evaluate options, prioritize high-impact use cases and consider how you can safely harness, scale, and amplify benefits.
3. Create a review board to help guide teams.
Of course, experimentation needs structure, and one way to provide it is through thoughtful oversight.
One critical step for us has been creating a review board to help oversee and guide this process. This includes hearing ideas, ensuring sound thinking, and then seeing what patterns emerge as people experiment.
From 100 suggestions, you may find 5 to 10 great options for your company that enhance productivity while keeping the necessary safeguards in place.
4. Continue to test and innovate.
Finally, innovation has to be a continuous, evolving effort.
It’s important that leaders not think of this as a one-and-done process. Continue to promote experimentation within the organization to ensure that teams have the latest and greatest tools to perform at the highest level.
Leadership’s cue to act
Shadow AI’s surging popularity shows that agents see real value in these tools. But they shouldn’t attempt to innovate alone. With business-critical issues like data security, compliance, and customer trust on the line, the responsibility falls to CX leaders to find integrated AI solutions that meet employee needs and company standards.
It’s not a question of whether your teams will adopt AI. There’s a good chance they already have. The real question is: Will you lead them through this transformation, or be left behind and put your company at risk?