Nvidia is adding three new safety microservices to its NeMo Guardrails platform, aiming to give companies more control over their AI chatbots. The company says the services address common challenges in AI safety and content moderation. According to Nvidia, the Content Safety service checks AI responses for potentially harmful content before they reach users, while the Topic Control service aims to keep conversations within approved subject areas. A third service, Jailbreak Detection, works to spot and block attempts to bypass the AI's security features. Rather than relying on large language models, Nvidia says the services run on smaller, specialized models that should need less computing power. Several companies, including Amdocs, Cerence AI, and Lowe's, are currently testing the technology in their systems. The microservices are available to developers as part of Nvidia's open-source NeMo Guardrails toolkit.
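For developers already using the open-source toolkit, the basic pattern looks roughly like the sketch below: a rails configuration is loaded and wrapped around the chat model, so input checks run before a message reaches the model and output checks run before the answer reaches the user. The `./config` directory and the example prompt are illustrative assumptions; the specific flow names that attach the new Content Safety, Topic Control, and Jailbreak Detection microservices are not shown and would follow Nvidia's documentation.

```python
# Minimal sketch of calling the open-source NeMo Guardrails toolkit from Python.
# The ./config directory (assumed) would hold a config.yml that enables the
# desired input/output rails; the exact rails for the new microservices are
# configured there rather than in this code.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# Input rails screen the user message before it reaches the model, and
# output rails screen the model's answer before it reaches the user.
response = rails.generate(messages=[
    {"role": "user", "content": "How do I reset my account password?"}
])
print(response["content"])
```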
