The Air Canada Chatbot Case: A Call for Rigorous Oversight in AI Deployment
- Katy Kelly

- May 24
A seemingly minor customer service dispute escalated into a significant legal precedent when Air Canada was held liable for misinformation provided by its AI-powered chatbot. The case serves as a critical reminder for businesses deploying public-facing Large Language Model (LLM) products: innovation in AI must be paired with rigorous testing and continuous risk monitoring to avoid substantial liabilities.
In November 2022, a passenger grieving the loss of his grandmother consulted Air Canada's website chatbot regarding bereavement fares. The chatbot erroneously informed him that bereavement fares could be applied for retroactively, a statement directly contradicting the airline's official policy detailed on another page of the same website. The passenger, relying on the chatbot's advice, captured a screenshot of the exchange. When Air Canada later denied the retroactive refund, citing its official policy, he pursued the matter before British Columbia's small claims body, the Civil Resolution Tribunal, which issued its decision in February 2024.
Air Canada's defense centered on the argument that the chatbot had provided a link to the correct bereavement policy, implying the passenger had the means to verify the information. The Tribunal rejected this argument, questioning why a customer should be expected to cross-reference information provided by an official company representative, in this case the chatbot, against another page of the same website.
The Tribunal Member, Christopher C. Rivers, found Air Canada responsible for negligent misrepresentation. In a noteworthy statement, he rebuffed Air Canada's implicit suggestion that the chatbot was a separate entity responsible for its own actions: "While a chatbot has an interactive component, it is still just a part of Air Canada's website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot." The Tribunal ruled that Air Canada did not take reasonable care to ensure its chatbot was accurate and awarded the passenger approximately $812 in damages and fees.
This incident highlights the phenomenon of "AI hallucinations," in which LLMs generate plausible-sounding but incorrect information. While Air Canada had been investing in AI since 2019 to improve customer experience and operational efficiency, this case demonstrates the pitfalls of deploying such systems without adequate safeguards. The relatively small financial loss in this instance is overshadowed by the reputational damage and, more importantly, by the legal precedent set regarding corporate liability for AI-generated information.
The Air Canada case underscores that AI tools, despite their sophistication, are not infallible. For businesses integrating LLMs into customer-facing roles, the implications are clear:
- Accountability: Companies are likely to be held responsible for the information and actions of their AI systems, just as they are for human employees.
- Accuracy: Ensuring the accuracy of information provided by AI is paramount. This requires meticulous training, ongoing updates, and robust verification processes.
- Rigorous testing and monitoring: Before and after deployment, AI systems must undergo thorough testing in diverse scenarios. Continuous monitoring for errors, "hallucinations," and deviations from policy is crucial.
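The testing-and-monitoring point above can be made concrete with a small regression harness that replays known policy questions against the chatbot and flags answers that contradict official policy. The sketch below is illustrative only: the function name `get_chatbot_reply`, the test cases, and the phrase lists are hypothetical placeholders, not anything from Air Canada's systems, and real deployments would use more robust checks than substring matching.

```python
# Minimal sketch of a policy-consistency regression check for a
# customer-facing chatbot. All names here (POLICY_TESTS,
# get_chatbot_reply) are hypothetical illustrations.

POLICY_TESTS = [
    {
        "question": "Can I apply for a bereavement fare after my flight?",
        # Claims that would contradict official policy if present.
        "forbidden_phrases": ["retroactive"],
        # Wording a correct answer should contain.
        "required_phrases": ["before travel"],
    },
]


def check_reply(reply: str, case: dict) -> list[str]:
    """Return a list of policy violations found in one chatbot reply."""
    reply_lower = reply.lower()
    violations = []
    for phrase in case["forbidden_phrases"]:
        if phrase.lower() in reply_lower:
            violations.append(f"forbidden claim present: {phrase!r}")
    for phrase in case["required_phrases"]:
        if phrase.lower() not in reply_lower:
            violations.append(f"required statement missing: {phrase!r}")
    return violations


def run_policy_suite(get_chatbot_reply) -> dict:
    """Run every test case; map each failing question to its violations."""
    report = {}
    for case in POLICY_TESTS:
        reply = get_chatbot_reply(case["question"])
        violations = check_reply(reply, case)
        if violations:
            report[case["question"]] = violations
    return report
```

A suite like this can run in CI before each model or prompt update, and the same checks can sample live traffic after deployment, turning "continuous monitoring" from a slogan into an automated gate.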
The Air Canada chatbot lawsuit does not signal a retreat from AI innovation. Instead, it serves as a compelling case study on the importance of a diligent and responsible approach. As businesses increasingly leverage AI to enhance customer interaction and streamline operations, investing in comprehensive testing, validation, and risk management strategies is not just advisable, but essential to harnessing AI's benefits while mitigating potential legal and reputational damage.
Conformance AI ensures your AI deployments remain safe, trustworthy, and aligned with your organizational values.



