Imagine a future where machines not only execute tasks but set company policy. That may sound like a dystopian narrative, but it is not fiction: it happened recently at Air Canada. The airline's chatbot, designed to assist customers, ended up inventing a company policy that Air Canada was then required to honour. The case offers a fascinating look at the role of machine learning in customer service.
A chatbot is a software application designed to simulate human conversation, often used for customer service. In Air Canada's case, the chatbot's purpose was to assist customers while reducing the workload of human customer service staff. Chatbots of this kind are typically built on machine learning, an artificial intelligence (AI) technique that lets them learn from past interactions and improve over time.
While AI technologies like chatbots are often celebrated for their ability to streamline processes and improve customer service, they can also make mistakes, much like their human counterparts. In this case, the chatbot invented a refund policy that the company did not actually have.
The chatbot was programmed to answer customers' questions about the airline's fares and policies. A passenger who had recently lost a family member asked it about bereavement fares, and the chatbot told him he could buy a full-price ticket and apply for the discounted bereavement rate retroactively, within 90 days of purchase. Air Canada's actual policy did not allow retroactive bereavement claims, and the airline later argued that the chatbot's answer was a mistake.
The Legal Implications
This incident sparked a legal battle. In February 2024, British Columbia's Civil Resolution Tribunal ruled that Air Canada was legally bound to honour the commitment made by the chatbot. Even though the airline argued that the chatbot had made an error, the ruling emphasizes consumer protection over corporate convenience.
The tribunal determined that Air Canada is responsible for all of the information on its website, whether it comes from a static page or a chatbot, and it rejected the airline's argument that the chatbot was a separate legal entity responsible for its own actions. A company cannot disregard a commitment made by its AI systems simply because it had not anticipated the scenario.
This incident has far-reaching implications for companies that use, or are considering, automated chatbots in their customer service operations. With this precedent, it is clear that chatbots, like human customer service representatives, can inadvertently commit a company to real obligations.
Most importantly, this case is a reminder for organizations to pay close attention to the training and monitoring of their AI systems, so that a chatbot cannot quietly create a financial or legal burden for the company.
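As one illustration of what such monitoring could look like, here is a minimal Python sketch, purely hypothetical, that flags chatbot replies containing commitment-like language and logs them for human review. The pattern list, function name, and log path are all assumptions made for the example, not anything Air Canada has described:

```python
import datetime
import json
import re

# Hypothetical audit hook: flag any chatbot reply that sounds like a commitment
# (refunds, entitlements, guarantees) and log it so humans can review it before
# it hardens into de facto policy. Patterns and log path are illustrative only.
COMMITMENT_PATTERNS = re.compile(r"\b(refund|entitled|guarantee|we will pay)\b", re.IGNORECASE)

def log_if_commitment(user_msg: str, bot_reply: str, path: str = "chatbot_audit.jsonl") -> bool:
    """Append flagged exchanges to a JSONL audit log; return True if flagged."""
    if not COMMITMENT_PATTERNS.search(bot_reply):
        return False
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user_msg,
        "reply": bot_reply,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return True
```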
This case highlights how AI technologies like chatbots create new challenges for businesses, particularly around accountability. Specifically, who or what is accountable when an AI system makes a mistake?
Typically, if a human customer service representative makes a mistake, the company can hold that employee accountable and act accordingly. When an AI system such as a chatbot makes a mistake, however, the problem is more complex.
In this case, the chatbot was presumably designed to learn from its interactions and improve its responses over time. The fact that it was capable of inventing a company policy, however, calls into question the thoroughness of its programming and suggests a lack of safeguards against exactly this kind of incident.
The accidental creation of a company policy by a chatbot thus raises questions about the effectiveness of the AI system's design and the extent of human oversight of such systems.
Improving AI System Development
In the wake of this incident, developers and businesses using AI systems must ensure reliable human oversight of these technologies. It is crucial to have checks and balances in place to prevent a chatbot from making unintended commitments or inventing policy.
Robust AI systems need fail-safes and must allow for human intervention whenever required. During development, the focus should be on constraining the system's outputs so that it does not go beyond its vetted data and prescribed purpose to make policy-level statements, as sketched below.
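To make that concrete, here is a minimal sketch of such a constraint in Python: the bot may only answer policy questions from a store of vetted snippets and fails closed on anything else. All the names and the sample policy text here (POLICY_SNIPPETS, answer_policy_question) are hypothetical illustrations, not Air Canada's actual system:

```python
# Hypothetical "grounded answers only" guardrail: the bot can repeat vetted
# policy text verbatim, but it can never improvise a policy of its own.
POLICY_SNIPPETS = {
    "refund": "Refund requests must be submitted before travel. (Vetted policy text.)",
    "baggage": "One carry-on bag is included with every fare. (Vetted policy text.)",
}

FALLBACK = "I can't answer that reliably. Let me connect you with a human agent."

def answer_policy_question(topic: str) -> str:
    """Return vetted policy text for a known topic; fail closed otherwise."""
    # Fail closed: an unknown policy topic is a human's job, not the bot's.
    return POLICY_SNIPPETS.get(topic.lower(), FALLBACK)

print(answer_policy_question("refund"))       # vetted answer
print(answer_policy_question("bereavement"))  # falls back to a human handoff
```

The key design choice is failing closed: when the bot cannot ground an answer in approved text, it hands off rather than generating one.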
Moreover, companies should bring legally sensitive queries explicitly within their AI systems' scope, along with vetted responses. For instance, they can integrate a feature that escalates the conversation to a human customer service agent when a delicate issue comes up.
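A simple version of such an escalation gate might look like the following sketch; the topic list, labels, and routing function are illustrative assumptions rather than any real product's API:

```python
# Hypothetical escalation gate: route legally sensitive messages to a human
# agent before the bot has a chance to invent a commitment.
ESCALATION_TOPICS = ("refund", "compensation", "cancellation", "bereavement", "legal")

def route_message(user_message: str) -> str:
    """Return 'human_agent' for sensitive topics, 'chatbot' for everything else."""
    text = user_message.lower()
    if any(topic in text for topic in ESCALATION_TOPICS):
        return "human_agent"
    return "chatbot"

assert route_message("Can I get a refund for my cancelled flight?") == "human_agent"
assert route_message("What time does boarding start?") == "chatbot"
```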
This extra care in developing and maintaining AI systems can prevent complications like those Air Canada encountered and ensure sound interactions between the organization, its AI, and its customers.
Impact on Customers and Corporate Image
Beyond the financial implications, a scenario like this can also harm a company's corporate image. Customers trust brands that fulfill their commitments and are transparent in their customer service operations. Conflicting messages, by contrast, confuse and frustrate the customer base, eroding faith in the organization and potentially damaging its reputation.
For Air Canada, the fallout from the chatbot's mistake isn't just financial; it has also strained the airline's image and its relationship with customers. Whether the airline liked it or not, the chatbot's response became the company's official stance, leaving it little choice but to honour the commitment it had inadvertently made.
Going forward, companies need to handle AI applications more carefully, especially when it comes to customer interactions. They should seek to provide clear and consistent communications to customers, regardless of whether the message comes from a human representative or an AI system.
As the incident with Air Canada demonstrates, companies need to manage their AI and machine learning systems carefully. With proper safeguards and oversight, these technologies can continue to enrich the customer experience while minimizing the risk of unintended consequences.