Executive Summary. Jeff Fettes argues that the real challenge in customer experience AI is not building smarter models but defining clear operational boundaries for what AI agents are allowed to do.
Customer experience operations are emerging as a proving ground for enterprise AI. Yet many initiatives stall when pilot projects meet the complexity of real-world operations. In this conversation, Laivly CEO Jeff Fettes draws on decades of experience running large-scale contact centers to explain why the next phase of CX AI will depend less on model capability and more on operational clarity. He discusses the importance of defining clear boundaries for AI agents, the economics of automation at scale, and why enterprises must treat AI as a continuously supervised operational system rather than a one-time deployment.
AITJ: Jeff, you have said the real competitive advantage in CX AI will not come from model quality, but from clarity around what an agent is allowed to do. What does operational clarity actually look like inside a large enterprise?
Operational clarity starts with a very clear definition of what we want AI to do and what we want people to do. A lot of failed deployments stem from complexity and a lack of ownership over edge cases, and in customer experience environments, from a limited understanding of how contact centers actually operate.
What we recommend to clients is a written document we call an agent charter. In that charter, we define very carefully what the AI agent is allowed to do and what it should never do.
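Editor's note: the charter itself is a written document, but its decision logic can be sketched in code. Below is a minimal sketch in Python; every name in it (the fields, the actions, the default-escalate rule) is a hypothetical illustration, not Laivly's actual format.

```python
# Hypothetical sketch of an "agent charter" as a machine-readable policy.
# All field and action names are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentCharter:
    allowed_actions: frozenset    # what the AI agent may do on its own
    forbidden_actions: frozenset  # what it must never do

    def decide(self, action: str) -> str:
        if action in self.forbidden_actions:
            return "refuse"
        if action in self.allowed_actions:
            return "proceed"
        # Anything not explicitly allowed defaults to a human.
        return "escalate_to_human"

charter = AgentCharter(
    allowed_actions=frozenset({"answer_faq", "check_order_status"}),
    forbidden_actions=frozenset({"close_account", "issue_large_refund"}),
)

print(charter.decide("check_order_status"))      # proceed
print(charter.decide("change_billing_address"))  # escalate_to_human
```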
Importantly, those decisions are not made purely on what is technologically possible. The technology is now so powerful that you can technically build almost anything. The more relevant question becomes whether you should.
So we work with clients to answer questions like: does this align with your brand culture? Are customers expecting to be served this way? Will they accept it?
Because success ultimately depends on user acceptance. For example, if someone calls a support line expecting a human and immediately encounters an agentic voice without warning, they may simply hang up. Understanding expectations and defining scope early is critical.
That clarity around responsibilities and success criteria is what allows organizations to deploy AI agents at scale.
Editor’s note: Many CX AI pilots fail once deployed at scale because edge cases and operational economics change the equation.
Operational clarity starts with a very clear definition of what we want AI to do and what we want people to do.
Jeff Fettes
AITJ: Why do so many AI agents perform well in pilots but struggle once deployed at scale in real production environments?
A big part of it comes down to the complexity of real operations and the impact of edge cases.
Before founding Laivly, I spent about 25 years running contact centers for some of the world's largest brands. What looks simple from a technology perspective becomes extremely complex when you are dealing with thousands of employees and millions of customer interactions.
In pilots, companies try to anticipate edge cases. But in production, a single rare edge case can escalate dramatically. One unusual interaction might end up reaching the CEO or creating a serious customer issue.
Another reason pilots fail is simple math.
Many AI vendors promise something like 30 to 35 percent automation. That sounds great. But if the system does not properly address the remaining 65 to 70 percent of interactions, you create a hidden cost.
All interactions still pass through the automation layer first. That means the majority that ultimately require human handling now carry extra processing cost without producing additional value.
So you end up adding cost to the majority of interactions to automate the minority. In many cases, the financial impact becomes a wash.
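Editor's note: that math is easy to reproduce. In the sketch below, the volume and dollar figures are illustrative assumptions; only the rough containment split echoes the interview. At these particular numbers, the savings from automation are exactly cancelled by the cost of routing everything through the AI layer.

```python
# Back-of-the-envelope economics of a deflection-first deployment.
# Volume and dollar figures are illustrative assumptions.
volume = 1_000_000    # customer interactions per year
human_cost = 5.00     # cost per human-handled interaction ($)
ai_layer_cost = 1.75  # cost per interaction passing through the AI layer ($)
containment = 0.35    # share fully resolved by automation

baseline = volume * human_cost

# Every interaction pays the AI-layer cost; the uncontained share
# still pays for human handling on top of it.
with_automation = volume * ai_layer_cost + volume * (1 - containment) * human_cost

print(f"Baseline:        ${baseline:,.0f}")         # $5,000,000
print(f"With automation: ${with_automation:,.0f}")  # $5,000,000 -- a wash
```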
That is why we advise clients to design solutions that address the full end-to-end experience, not only a narrow automation use case.
AITJ: What is fundamentally different about moving from answering questions to taking actions in customer service workflows?
The main difference is risk.
When AI delivers answers in plain language, there are already risks you need to manage. But once the system starts taking actions and interacting with backend systems, the risk profile increases significantly.
You need to think carefully about access controls, system integrations, and monitoring.
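Editor's note: a common pattern for those controls is an explicit allowlist plus an audit log in front of every backend call. A minimal sketch follows; the action names and risk tiers are hypothetical, not a specific vendor's API.

```python
# Hypothetical guardrail: every backend action is checked against an
# allowlist and logged before it executes; unknown actions escalate.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent_audit")

PERMITTED = {"lookup_order": "read", "resend_receipt": "low_risk_write"}

def execute_action(action: str, params: dict) -> str:
    risk_tier = PERMITTED.get(action)
    if risk_tier is None:
        log.warning("blocked action=%s params=%s", action, params)
        return "escalated_to_human"
    log.info("executing action=%s risk=%s params=%s", action, risk_tier, params)
    # ... call the real backend system here ...
    return "done"

execute_action("lookup_order", {"order_id": "A123"})  # logged and executed
execute_action("issue_refund", {"amount": 500})       # blocked -> escalated
```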
Another common mistake is assuming that once an AI system is deployed, it can run indefinitely without oversight. Organizations sometimes treat AI deployments as projects with a beginning, middle, and end.
But that is not how contact centers operate.
With human agents, you constantly run quality assurance, calibration sessions, and performance reviews. The same principle applies to AI agents. They need ongoing tuning and monitoring.
Businesses change constantly. New products launch. Websites evolve. Customer behavior shifts. Those changes introduce new scenarios that your AI agents must adapt to.
Operational AI therefore requires continuous supervision and refinement, not a one-time implementation.
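Editor's note: one way to carry contact-center QA over to AI agents is to route a random sample of AI-handled interactions to human reviewers, just as human agents are sampled for calibration. A minimal sketch; the 2 percent sample rate and the review queue are assumptions for illustration.

```python
# Hypothetical QA sampling for AI-handled interactions, mirroring
# the calibration reviews used for human agents.
import random

QA_SAMPLE_RATE = 0.02  # review ~2% of AI-handled interactions (assumed)
review_queue: list[tuple[str, str]] = []

def maybe_queue_for_qa(interaction_id: str, transcript: str) -> None:
    """Randomly sample AI-handled interactions for human review."""
    if random.random() < QA_SAMPLE_RATE:
        review_queue.append((interaction_id, transcript))

for i in range(1_000):
    maybe_queue_for_qa(f"conv-{i}", "sample transcript")

print(f"{len(review_queue)} interactions queued for human QA review")
```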
AITJ: Where do deflection-first strategies tend to break down in enterprise environments?
One of the most visible failures comes from the math I mentioned earlier.
If you route 100 percent of your customer volume through an automation layer to capture a 30 percent containment rate, you risk adding friction and cost for the majority of customers who still need human support.
Another issue is the language around “deflection”. As someone who has spent an entire career in customer service, I dislike that term.
Marketing teams spend millions trying to get customers to engage with a company. The last thing you want to communicate internally is that your goal is to deflect them.
A better concept is containment. The objective is not to push customers away, but to resolve their issue in the most effective way possible. Sometimes that means automation. Sometimes that means human support.
The strategy should focus on solving the customer problem efficiently, not avoiding the interaction.
AITJ: Based on your experience running contact centers, do customers react differently to AI versus human agents across industries?
Absolutely. Customer expectations vary widely depending on the demographic and the industry.
For example, companies in video games or software often serve younger, digitally native customers. Many of those users actively prefer self-service. They might spend 45 minutes researching a solution rather than speaking to a human agent, even if a phone call could solve the issue in five minutes.
In those environments, automation and AI-driven experiences are often welcomed.
Other sectors are very different. Healthcare is a good example. When someone is dealing with a sensitive issue, they often expect a human interaction. Even if AI systems are technically secure, speaking to a machine can feel less trustworthy.
In those cases, the best use of AI may be behind the scenes. AI can assist human agents, improve workflows, or provide recommendations without being visible to the customer.
Each organization needs to understand the expectations and culture of its user base before deciding how to deploy automation.
AITJ: What changes organizationally when a company moves from experimenting with AI to operating with AI at scale? And what governance structures become necessary?
Traditionally, software deployments were treated as projects.
Companies would spend months planning a transformation initiative, then another year implementing it. Once everything was rolled out and stabilized, the project team would hand the system over to operations and move on.
AI does not work that way.
AI systems become part of the daily operation of the business. They handle large volumes of interactions and must be continuously monitored and improved.
That means organizations need dedicated roles responsible for managing and evolving those systems. Either companies build those capabilities internally or they work closely with external partners on an ongoing basis.
Governance is another major shift.
Large enterprises now increasingly have formal documentation defining acceptable AI usage. These documents outline which models can be used, how they can access data, and what types of automation are allowed.
Interestingly, governance was initially what slowed AI adoption in large enterprises.
Smaller companies were faster to experiment because they were more comfortable taking risks. Large organizations needed time to develop legal frameworks, internal policies, and board-level approval processes.
Over the past six months, that governance infrastructure has started to solidify. Today it is increasingly common for a Fortune 500 company to provide its AI governance documentation at the very beginning of a project.
That shift is enabling much faster progress toward real deployments.
AITJ: What are the most common misconceptions executives have about replacing frontline agents with AI?
The biggest misconception is focusing on what technology can do rather than what it should do.
Executives often approach AI by asking questions like: can we automate this? Can we eliminate that? Can we deflect these interactions?
Those questions focus on capability rather than outcome.
The more important question is whether automation improves the experience for customers and aligns with the organization's culture and brand.
Just because something is technically possible does not mean it should be implemented.
AITJ: Looking ahead 12 to 24 months, what will separate companies that successfully operationalize AI agents from those that remain stuck in perpetual pilots?
Many of the barriers slowing adoption are already being solved. The remaining challenge is operational expertise.
The companies that succeed will be the ones that invest in people who can connect technology and business operations. They need individuals who understand both the operational realities of customer experience and the technical capabilities of AI systems. Those hybrid roles are becoming extremely valuable and difficult to hire.
Successful companies are also focusing on simple, transparent use cases.
When a customer interacts with an AI system, it should be obvious that they are speaking to an AI agent. The system should clearly communicate what it can and cannot do. That transparency helps customers interact with it effectively.
For internal tools such as agent assist, the technology should resemble tools employees already know how to use. If the interface feels familiar, adoption increases quickly.
High adoption leads to stronger outcomes and better ROI. That is why organizations that focus on clear use cases and operational alignment are now starting to turn their pilots into real infrastructure.