Nvidia is adding three new safety features to its NeMo Guardrails platform, aiming to give companies more control over their AI chatbots. The company says these microservices address common challenges in AI safety and content moderation.

According to Nvidia, the Content Safety service checks AI responses for potentially harmful content before they reach users, while the Topic Control service tries to keep conversations within approved subject areas. A third service, Jailbreak Detection, works to spot and block attempts to bypass the AI's security features.

Rather than using large language models, Nvidia says these services run on smaller, specialized models that should need less computing power. A few companies, including Amdocs, Cerence AI, and Lowe's, are currently testing the technology in their systems. The microservices are available to developers as part of Nvidia's open-source NeMo Guardrails package.
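In the open-source NeMo Guardrails package, rails like these are wired up declaratively in a `config.yml` rather than in application code. A minimal sketch of that pattern, using the documented built-in self-check flows as stand-ins (the exact flow and model identifiers for the new microservices are assumptions here, not confirmed names):

```yaml
# config.yml — minimal NeMo Guardrails sketch.
# Flow names below use the built-in self-check rails as illustrative
# stand-ins; the microservice-backed flows would be registered similarly.
models:
  - type: main
    engine: openai        # the application's primary LLM (illustrative)
    model: gpt-4o-mini

rails:
  input:
    flows:
      - self check input   # screen user prompts before they reach the LLM
  output:
    flows:
      - self check output  # screen model responses before users see them
```

A config directory like this is then loaded in Python with `RailsConfig.from_path(...)` and passed to `LLMRails`, which runs the input and output flows around every model call.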
MIXED-NEWS.com