Delivery company DPD has embarrassed itself with an AI customer service chatbot that insults its own company.
DPD customer Ashley Beauchamp complained on X about DPD's "AI robot thing", saying the customer service chatbot was "completely useless" at answering his questions.
But it did write a nice rhyme about how bad DPD is as a delivery company, and Beauchamp, a UK-based musician, even managed to coax a few profanities out of it.
DPD has been operating a customer service chatbot since 2019. According to the BBC, the mishaps Beauchamp triggered were caused by a new "AI element," which has since been disabled and is now being updated. In 2019, language models capable of generating free-form answers in chatbots were not yet common.
DPD chatbot mishap is probably a failed LLM experiment
The responses suggest that DPD was experimenting with a large language model that was not sufficiently aligned.
The course of the conversation makes clear that Beauchamp did not even need special prompts to trick the chatbot. As with ChatGPT, he could simply ask for the answers that were embarrassing for DPD. This suggests that DPD built a large language model into the bot without any safeguards.
A US car dealership recently experienced a similar incident when its generative chatbot, built on OpenAI's GPT models, offered cars for as little as one US dollar.
Both incidents highlight the risks of using chatbots in customer service, especially when the bot is poorly configured, insufficiently tested, and given too much freedom to answer any query. A word-prediction system does not have the natural understanding of service and its limitations that a human would have.
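To make the missing safeguards concrete, here is a minimal sketch of the kind of setup such a bot could use: a restrictive system prompt that limits the assistant to delivery topics, plus a check on the generated reply before it reaches the customer. It assumes an OpenAI-style chat API; the model name, prompts, and blocklist are illustrative and have nothing to do with DPD's actual system.

```python
# Illustrative guardrail sketch (not DPD's implementation): constrain the
# model's scope via the system prompt, then check the output before sending.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a customer service assistant for a parcel delivery company. "
    "Only answer questions about parcel tracking, delivery times, and claims. "
    "Never use profanity, never criticise the company, and never write poems, "
    "jokes, or other creative content. If a request is out of scope, reply: "
    "'Sorry, I can only help with delivery-related questions.'"
)

FALLBACK = "Sorry, I can only help with delivery-related questions."
BLOCKED_TERMS = {"useless", "worst delivery"}  # crude example blocklist


def answer_customer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # keep answers conservative
    )
    reply = response.choices[0].message.content or ""

    # Post-generation check: fall back to a canned answer if the reply is
    # flagged by the moderation endpoint or matches the example blocklist.
    moderation = client.moderations.create(input=reply)
    if moderation.results[0].flagged or any(
        term in reply.lower() for term in BLOCKED_TERMS
    ):
        return FALLBACK
    return reply
```

Even a basic two-layer setup like this (scope restriction on the way in, content check on the way out) would have made it much harder to coax poems and profanity out of a customer service bot.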