Clearing the Air: Debunking the Sensationalism of Air Canada’s AI Mishap
Air Canada made a mistake, and its aging AI made a mistake, yet the media and "AI experts" lost their minds, ignoring decades of research and well-established, objective measures of workplace performance. Let's nosedive in.
In February 2024, a Canadian court ordered Air Canada to compensate a passenger who was given inaccurate information about the company's bereavement fare policy by the airline's AI-powered chatbot. The passenger had already traveled to attend his grandmother's funeral, and he was told by the chatbot that he could retroactively apply for a discounted fare. However, Air Canada's actual policy did not allow for such retroactive refunds. When the passenger tried to claim the refund and was denied, he took the airline to court and won.
The case has reignited a familiar narrative: AI is unaccountable, unreliable, and harmful to consumers. With each high-profile AI misstep, critics are quick to argue that the technology is more trouble than it's worth, painting a dystopian picture of a future where we're all at the mercy of rogue algorithms.
In the Air Canada example, media reports and experts were quick to characterize the AI as "rogue" and "lying," painting a picture of a malevolent entity deliberately misleading customers. But the media overlooked a range of crucial questions that could have reframed the narrative. No one seemed to ask, for instance, whether customers have actually experienced better or worse service since the AI rollout, despite the potential for chatbots to provide more consistent support during the wild swings in demand that airlines face.
There was little exploration of how the technology might be changing the economics of customer service at Air Canada, whether by reducing costs and freeing up resources or by proving an expensive misadventure. Even on the question of accuracy, Responsible AI experts and the media fixated on the chatbot's isolated error without contextualizing it within the company's overall performance or the rapid advancement of conversational AI systems since the incident. That, of course, is irresponsible. By ignoring these vital lines of inquiry, the coverage missed an opportunity to paint an objective picture of both the current realities and future potential of AI in customer service.
Air Canada - Owning the failure
There's no denying that Air Canada mishandled the situation with the bereaved passenger. The airline made a series of missteps, from the chatbot providing inaccurate information to the company's clumsy attempts to deflect responsibility. In court, Air Canada attempted to argue that the chatbot was a separate legal entity, a claim that one Canadian tribunal judge called "a remarkable submission." The court found that the chatbot is “still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website.”
It serves as a cautionary tale – not just about understanding the benefits and limitations of AI, but about the actions a company takes and the response it can expect from the public and media. What could have been an $800 credit for a grieving customer turned into a legal burden and a media feeding frenzy. As AI becomes more integrated into daily life, incidents like this are likely to occur. This particular case, a first in Canada, highlights the potential liability of companies for their chatbots' actions, and sets a precedent that may influence similar cases.
Experts and Media - A hundred years of context ignored
At its core, the customer service process is a queuing system with several quantitative and qualitative measurements well-defined by decades of research in academia and industry. Queuing has been studied intensely for over a hundred years. Quality management principles were established and advocated after World War II by the likes of W. Edwards Deming, Joseph Juran, Genichi Taguchi, and others. These ideas rebuilt Japan, revitalized the US auto industry, and formed the core of virtually all manufacturing execution for almost 50 years by establishing statistical methods for measuring processes and by emphasizing the need for organizations to take ownership of failures and strive for continuous improvement.
In the context of Air Canada's AI mishap, insights from queueing theory could shed light on how the chatbot's response time, accuracy, and need for human escalation contribute to overall service quality. Queueing models help identify bottlenecks and inefficiencies in customer service systems, offering guidance on resource allocation and process redesign to improve performance.
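To make that concrete, here is a minimal sketch of one classic queueing result, the Erlang C formula for an M/M/c queue, applied to a hypothetical contact center. The arrival rates, service rates, and deflection percentage below are illustrative assumptions, not Air Canada or Klarna figures; the point is only that chatbot deflection and staffing levels feed into measurable wait times.

```python
from math import factorial

def erlang_c_wait(arrival_rate, service_rate, servers):
    """Mean queueing delay (in the same time units as the rates)
    for an M/M/c queue, via the Erlang C formula."""
    a = arrival_rate / service_rate  # offered load in Erlangs
    if a >= servers:
        raise ValueError("Unstable system: offered load >= number of servers")
    # Erlang C: probability an arriving customer has to wait
    top = (a ** servers / factorial(servers)) * (servers / (servers - a))
    bottom = sum(a ** k / factorial(k) for k in range(servers)) + top
    p_wait = top / bottom
    # Mean wait = P(wait) / (spare service capacity)
    return p_wait / (servers * service_rate - arrival_rate)

# Hypothetical numbers: 100 contacts/hour, each human agent resolves 6/hour
print(f"20 agents, no bot: {erlang_c_wait(100, 6, 20) * 60:.1f} min avg wait")

# If a chatbot deflects 70% of contacts, 30/hour reach a smaller human team
print(f"6 agents after 70% deflection: {erlang_c_wait(30, 6, 6) * 60:.1f} min avg wait")
```

The same model can be run in reverse: fix a target wait time and solve for the staffing (or deflection rate) needed to hit it, which is exactly the kind of quantitative framing the Air Canada coverage never attempted.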
Any logical assessment of chatbot performance and AI failures should at least take into account some of the basic principles, approaches, and measurements that have guided the field, and ALL process management, for a hundred years… and yet, too often they're completely ignored by AI doomers, experts, and the media.
Klarna - Reframing the conversation
One week after the Air Canada news, Klarna, a popular Swedish-based “buy now, pay later” service, shared eye-popping AI results from 2.3 million digital conversations with customers. By the metrics that matter most to customers, the AI was not just holding its own against human agents — it was quietly excelling.
Take the all-important measure of customer satisfaction. One might expect that the empathy and flexibility of human representatives would be tough for a machine to match. But Klarna's data told a different story: customers were just as happy with the support they received from AI chatbots as they were with flesh-and-blood agents.
Even more impressively, the AI was able to resolve customer issues over five times faster than its human counterparts. In a world where we all expect instant gratification, that speediness is a huge advantage. And it wasn't just about racing through interactions — the AI also had 25% fewer repeat queries, suggesting that it was getting to the heart of customer needs more effectively on the first try.
But perhaps the most remarkable thing about Klarna's AI success story is its sheer scope. This isn't a small-scale pilot project or a niche application. The company's chatbots are running 24/7 across 23 global markets, handling conversations in 35 different languages. That's a staggering level of coverage and accessibility, the kind that would be incredibly difficult and expensive to replicate with human staffing alone.
Klarna is not alone. AI systems now being deployed in customer service, from GPT-powered chatbots to voice assistants with human-like natural language understanding, represent a quantum leap forward. These technologies have the potential to deliver faster, more accurate, and more personalized support than ever before, freeing up human agents to focus on higher-stakes, emotionally complex interactions.
Technological Turbulence Should Be Expected
The reality is that no transformative technology has ever evolved without stumbles along the way. The key is not to let isolated missteps derail the broader effort, but rather to extract lessons, refine implementation, and keep sight of the larger objective. We didn't ground all planes after the first crash or abandon the internet when early websites sometimes failed to load. We learned, improved, and kept reaching for the skies and the digital frontier.
The same perspective is needed in the conversation around AI-powered customer service. Yes, chatbots — like people — will sometimes get things wrong, and there will be frustrating edge cases that test even the most advanced systems. But if implemented thoughtfully and with clear human goals, these tools can enhance the service experience for customers and employees alike. That potential shouldn't be discounted or derailed by the inevitable but mild turbulence during the journey.