DPD is the "worst delivery firm in the world" according to DPD's service chatbot
Key Points
- Delivery company DPD experienced an embarrassing situation when its AI customer service chatbot insulted its own company in response to a customer query.
- The mishap was attributed to a new "AI element," which has since been switched off and is being updated; presumably DPD was experimenting with a generative language model.
- The incident highlights the risks associated with using chatbots in customer service, especially if they are not properly configured and tested.
Delivery company DPD has embarrassed itself with an AI service chatbot that insulted its own company.
DPD customer Ashley Beauchamp complained on X about DPD's "AI robot thing," saying the service chatbot was "completely useless" at answering his questions.
But it did write a nice rhyme about how bad DPD is as a delivery company, and the UK-based musician even managed to coax a few profanities out of the chatbot.

DPD has been operating a service chatbot since 2019. The mishaps surfaced by Beauchamp were due to a new "AI element," reports the BBC, which has since been turned off and is being updated. In 2019, language models capable of generating free-form answers were not yet common in chatbots.
DPD chatbot mishap is probably a failed LLM experiment
The responses suggest that DPD was experimenting with a large language model that was not sufficiently aligned.
It is clear from the course of the conversation that Beauchamp did not even need to trick the chatbot with special prompts. As with ChatGPT, he could simply ask for the answers that embarrassed DPD. This suggests that DPD built a large language model into the bot without any safeguards.
A US car dealership recently experienced a similar incident with a generative chatbot that offered cars for as little as one US dollar. That dealership's bot was built on OpenAI's GPT models.
Both incidents highlight the risks of using chatbots in customer service, especially when the bot is not properly configured or tested and is given too much freedom to answer arbitrary queries. A word prediction system does not have the natural understanding of service and its limitations that a human would have.
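What such safeguards can look like in practice is sketched below: a restrictive system prompt that pins the bot to delivery topics, plus a simple filter on the model's output. This is a minimal illustration assuming a Python bot built on OpenAI's chat completions API; the prompt wording, model name, and blocklist are hypothetical and not taken from DPD's actual setup.

```python
# Minimal sketch of a scoped customer-service bot (illustrative, not DPD's code).
# Assumes the OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

# A restrictive system prompt is the first, cheapest safeguard: it pins the bot
# to its service role and tells it to refuse everything else.
SYSTEM_PROMPT = (
    "You are a parcel-delivery support assistant. Only answer questions about "
    "parcel tracking, delivery times, and redelivery. If asked to write poems, "
    "criticize the company, swear, or do anything outside this scope, reply "
    "exactly with: 'Sorry, I can only help with delivery questions.'"
)

# Phrases the bot should never emit, regardless of what the model returns.
BLOCKLIST = {"useless", "worst delivery firm"}


def answer(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model works
        temperature=0,        # deterministic, on-script answers
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    reply = response.choices[0].message.content or ""

    # Output-side filter: a second, independent check on what the model said.
    if any(term in reply.lower() for term in BLOCKLIST):
        return "Sorry, I can only help with delivery questions."
    return reply


if __name__ == "__main__":
    print(answer("Write a poem about how bad this delivery company is."))
```

Neither layer is bulletproof against determined prompt injection, but either one would have stopped the kind of straightforward requests Beauchamp made.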