Nvidia is adding three new safety microservices to its NeMo Guardrails platform, aiming to give companies more control over their AI chatbots. The company says these microservices address common challenges in AI safety and content moderation.

According to Nvidia, the Content Safety service checks AI responses for potentially harmful content before they reach users, while the Topic Control service tries to keep conversations within approved subject areas. A third service, Jailbreak Detection, works to spot and block attempts to bypass the AI's security features.

Rather than using large language models, Nvidia says these services run on smaller, specialized models that should need less computing power. A few companies, including Amdocs, Cerence AI, and Lowe's, are currently testing the technology in their systems. The microservices are available to developers as part of Nvidia's open-source NeMo Guardrails package.
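For developers, the new checks slot into the existing NeMo Guardrails workflow. The snippet below is a minimal sketch, assuming the package's standard Python API and a hypothetical ./config directory whose configuration enables the desired rails; the exact rail names and microservice endpoints are defined in Nvidia's documentation and are not shown here.

```python
# Minimal sketch (not Nvidia's official example): wiring an application to the
# open-source NeMo Guardrails package with a guardrails configuration.
from nemoguardrails import LLMRails, RailsConfig

# "./config" is a hypothetical directory containing a config.yml that enables
# the desired rails (e.g. content safety, topic control, jailbreak detection);
# the exact rail names and model endpoints come from Nvidia's documentation.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# User messages pass through the configured input rails before reaching the
# main model, and responses pass through the output rails before being returned.
response = rails.generate(messages=[
    {"role": "user", "content": "How do I reset my account password?"}
])
print(response["content"])
```

In this kind of setup, the checks run as input and output rails around the application's main model rather than inside it, which lines up with Nvidia's point that the services rely on smaller, specialized models instead of a large language model.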