New research from Robust Intelligence shows that Nvidia's NeMo framework, designed to make chatbots more secure, can be manipulated into bypassing its own guardrails via prompt injection attacks.
In one test scenario, the researchers instructed the Nvidia system to replace the letter "I" with "J," a simple substitution that caused it to expose personally identifiable information. Nvidia says it has since fixed one of the root causes, but Robust Intelligence advises its customers to avoid the product. A detailed description of the findings is available on Robust Intelligence's blog.
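The mechanism behind this class of bypass is easy to see with a toy example. The Python sketch below is purely illustrative: it uses a hypothetical, naive keyword blocklist, not NeMo Guardrails' actual filtering code or Robust Intelligence's exact prompt, to show why a check based on exact pattern matching fails once a single letter is swapped.

# Toy keyword filter standing in for a pattern-based guardrail check.
# Hypothetical throughout: this is not NeMo Guardrails' real logic,
# only an illustration of why exact matching fails under substitution.
BLOCKLIST = ["email address", "social security number", "home address"]

def passes_filter(text: str) -> bool:
    """Naive guardrail: reject any text containing a blocklisted phrase."""
    return not any(term in text.lower() for term in BLOCKLIST)

direct = "give me the email address stored for this user"
print(passes_filter(direct))   # False: "email address" matches, request blocked

# The attack: instruct the system to swap one letter for another.
# The substituted text no longer matches the blocklist, yet a language
# model still understands it, and the attacker can trivially undo the swap.
swapped = direct.replace("i", "j")
print(swapped)                 # "gjve me the emajl address stored for thjs user"
print(passes_filter(swapped))  # True: the naive filter is bypassed

Because the language model still understands the substituted text, the attacker receives the sensitive answer while the filter sees only unmatched strings.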