According to new research from Robust Intelligence, Nvidia's NeMo framework, designed to make chatbots more secure, could be manipulated to bypass guardrails using prompt injection attacks.

In one test scenario, the researchers instructed the Nvidia system to swap the letter "I" for "J," causing the system to expose personally identifiable information. Nvidia says it has since fixed one of the causes of the problem, but Robust Intelligence advises customers to avoid the software product. You can read a detailed description of Robust Intelligence's findings on their blog.

Ad
Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.
Ad
Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.
Support our independent, free-access reporting. Any contribution helps and secures our future. Support now:
Bank transfer
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.
Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.