
Delivery company DPD has an embarrassing moment with an AI customer service chatbot that insults its own company.

DPD customer Ashley Beauchamp writes on X to complain about DPD's "AI robot thing": the chatbot was "completely useless" at answering his questions.

But it did write a nice rhyme about how bad DPD is as a delivery company. The UK-based musician even managed to coax a few profanities out of the chatbot.

Image: Ashley Beauchamp via X

DPD has been operating a customer service chatbot since 2019. The mishaps Beauchamp triggered were due to a new "AI element," reports the BBC, which has since been switched off and is now being updated. In 2019, language models capable of free-form answers were not yet common in chatbots.


DPD chatbot mishap is probably a failed LLM experiment

The responses suggest that DPD was experimenting with a large language model that was not sufficiently aligned.

The course of the conversation makes clear that Beauchamp did not even need special prompts to trick the chatbot. As with ChatGPT, he could simply ask for the answers that were embarrassing for DPD. This suggests that DPD built a large language model into the bot without adequate safeguards.

A US car dealership recently experienced a similar incident with a generative chatbot that offered cars for as little as one US dollar. That bot, too, was built on OpenAI's GPT models.

Both incidents highlight the risks of using chatbots in customer service, especially when the bot is not properly configured and tested, and is given too much freedom to answer arbitrary queries. A word prediction system lacks the inherent understanding of its service role and its limits that a human agent would have.
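One common mitigation is to gate requests before they reach the language model at all. The sketch below is purely illustrative: all names (`call_llm`, the keyword list) are hypothetical, and a production system would use a trained classifier or a moderation API rather than simple keyword matching.

```python
# Minimal sketch of an input guardrail for a customer-service chatbot.
# The model call is stubbed out; in practice it would be an API request.

BLOCKED_TOPICS = ("poem", "swear", "joke", "opinion of dpd")
FALLBACK = "Sorry, I can only help with parcel tracking and delivery questions."

def is_on_topic(user_message: str) -> bool:
    """Reject requests that stray from the service domain before they
    ever reach the language model."""
    lowered = user_message.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def call_llm(prompt: str) -> str:
    """Stand-in for the real model call."""
    return "Your parcel is out for delivery."

def answer(user_message: str) -> str:
    # Off-topic requests get a fixed deflection instead of a model reply.
    if not is_on_topic(user_message):
        return FALLBACK
    return call_llm(user_message)

print(answer("Where is my parcel?"))                  # passes through to the model
print(answer("Write a poem about how bad DPD is."))   # deflected by the guardrail
```

Keyword filters are easy to bypass, which is why this is only a first layer; the broader point is that the model should never be the sole arbiter of what it is allowed to say.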

Summary
  • Delivery company DPD experienced an embarrassing situation when its AI customer service chatbot insulted its own company in response to a customer query.
  • The mishap was attributed to a new "AI element" that has since been switched off and is now being updated - presumably DPD was experimenting with a generative language model.
  • The incident highlights the risks associated with using chatbots in customer service, especially if they are not properly configured and tested.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.