Content
summary Summary

Large language models bring a new dimension to customer service automation - but also new risks.

Unlike previous chatbots, large language model chatbots do not follow a predefined path. Rather, they follow a general direction that is much more focused on the user's request and allows for a degree of flexibility in response.

This is an opportunity, of course, because it makes automated services seem more personal. But it's also a risk if users don't follow the rules.

AI yummy may hurt your tummy

This is what happened to the New Zealand supermarket chain PAK'nSAVE, which offers the recipe bot "Savey Meal-Bot" based on GPT-3.5.

Ad
Ad

The bot generates creative recipe ideas based on the input of at least three food items. The basic idea of the bot is that you type in what you have in your fridge and get a recipe to go with it.

Twitter user Liam Hehir came up with the idea of asking the bot what you can make with water, bleach, and ammonia. The bot's answer: an "aromatic water mix". The recipe the bot generated is suitable for making deadly chlorine gas.

 

Other users copied Hehir's chatbot attack, creating even more absurd recipes with deadly ingredients like ant poison or just plain gross dishes like the "Mysterious Meat Stew" with 500 grams of human flesh.

Recommendation

The supermarket chain quickly responded by disabling the option to manually enter food items. Instead, shoppers now select ingredients from a predefined list. Ammonia and human flesh are not on that list.

Scaling risks

Without over-dramatizing the incident, it just goes to show that even well-designed and well-tested chatbots like GPT 3.5 can still give dangerous advice. Enterprises face new risks when they scale the randomness of large language models to thousands or even millions of users. In addition, these users may attempt to manipulate the system using only natural language. Comprehensive red teaming for intended and unintended dangerous interactions is essential.

Interestingly, OpenAI's regular ChatGPT with GPT-3.5 blocks a request for a recipe with water, bleach, and ammonia, citing a possible health hazard from toxic gases.

Ad
Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.
Ad
Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.
Support our independent, free-access reporting. Any contribution helps and secures our future. Support now:
Bank transfer
Summary
  • New Zealand supermarket chain PAK'nSAVE recently unveiled its GPT-3.5-based recipe bot, "Savey Meal-Bot," which generates creative recipe ideas from three food items entered.
  • One Twitter user tricked the bot by entering water, bleach, and ammonia, and the bot generated a recipe for deadly chlorine gas. Other users followed suit and generated similar poison recipes.
  • PAK'nSAVE disabled individual food input and introduced a predefined list of ingredients to prevent dangerous or inappropriate combinations from being generated.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.
Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.