
Google's parent company, Alphabet, is warning employees not to share personal or business information with AI chatbots. This includes the company's own chatbot, Bard.


The report could not have come at a worse time for Google: while the EU is in talks with the company about privacy before Bard is allowed to launch in member states, news has surfaced that parent company Alphabet is warning its own employees about AI chatbots.

People familiar with the matter told Reuters, which broke the story. According to the report, employees are forbidden from entering confidential material into the chatbot, a policy the company has since confirmed. Relatedly, Google has added a note to its privacy policy asking users not to feed Bard sensitive or confidential information.

Google aims to be "transparent about the limitations of its technology"

According to Reuters' sources, Alphabet is also telling employees not to use code generated by Bard in production (a capability Bard gained with its latest update). In a statement, Alphabet said that while Bard can make unwanted code suggestions, it can still help programmers. According to Reuters, Google "wanted to be transparent about the limitations of its technology."


Like OpenAI with ChatGPT, Google uses the data users enter into Bard to further train its AI models. Only after political pressure did OpenAI introduce an opt-out, and it comes at the cost of convenience: past chats are deleted immediately.

Google isn't the only company to issue such warnings about chatbots; Samsung recently banned the use of ChatGPT and Bard after discovering that employees had been entering sensitive lines of code into the chatbot.

Those who do not comply with the ban could be fired, according to an internal memo. Companies worry that data entered into chatbots could leak, or that third parties such as OpenAI and its partners could gain insight into it, for example when preparing data for AI training.

The fact that even Alphabet is now warning its employees about chatbots, including its own, shows that even the creators of the new tools are unsure how trustworthy these systems are.

Summary
  • Alphabet has prohibited its employees from entering sensitive data into AI chatbots, including Bard.
  • In addition, employees are told not to use code generated by Bard in production.
  • The timing is bad from Google's point of view: the EU is holding up Bard's launch in European countries over privacy concerns, and the authorities may feel vindicated.
Jonathan works as a freelance tech journalist for THE DECODER, focusing on AI tools and how GenAI can be used in everyday work.