Chatbots are often used for customer service. A recent example from a US car dealership shows that this is not without risk.
Some people on X talk about buying cars at ridiculous prices from a ChatGPT website assistant at a Chevrolet dealership in Watsonville, California.
X user Colin Fraser was able to negotiate a 2020 Chevrolet Trax LT down from $18,633 to $17,300. He did this by pretending to be a manager at the dealership, who told the chatbot what deal to offer.
With a little renegotiation, Fraser got the price down to $17,300, plus some nice bonuses: a personalized design, a VIP test drive with a restaurant visit, a custom car cover with his initials on it, and a luxury weekend at a well-known resort. The chatbot offered to close the deal right in the chat.
A car for a dollar
User Chris Bakke went a step further, lowering the price of a 2024 Chevy Tahoe to one dollar. The chatbot even confirmed in the chat that this was a legally binding offer that could not be withdrawn. Bakke had simply put this answer in the chatbot's mouth, also known as the chat window, along with a request to agree to all the customer's statements.
The dealership's team apparently noticed the incident and has since implemented a new guardrail. But even that could be circumvented by a user pretending to be OpenAI CEO Sam Altman and providing answers to the chatbot in that role. The chat on the dealership's website is currently disabled.
The story illustrates the pitfalls of using chatbots in customer service if the bot is not properly configured and tested and has too much freedom of speech - i.e., responds to everything.
Word prediction systems have no natural understanding of customer service and where the boundaries might lie. In addition, there are numerous examples of LLM-based chatbots being completely thrown off balance by simple prompt hacks, also known as prompt injection.