Nvidia is adding three new safety features to its NeMo Guardrails platform, aiming to give companies more control over their AI chatbots. The company says these microservices address common challenges in AI safety and content moderation. According to Nvidia, the Content Safety service checks AI responses for potentially harmful content before they reach users, while the Topic Control service tries to keep conversations within approved subject areas. A third service, Jailbreak Detection, works to spot and block attempts to bypass the AI's security features. Rather than using large language models, Nvidia says these services run on smaller, specialized models that should need less computing power. A few companies, including Amdocs, Cerence AI, and Lowe's, are currently testing the technology in their systems. The microservices are available to developers as part of Nvidia's open-source NeMo Guardrails package.
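The division of labor between the three services can be pictured as a pipeline of lightweight checks wrapped around the chatbot: one rail screens the incoming prompt, two more screen the outgoing answer. The sketch below illustrates that flow only; every function name and rule in it is a hypothetical stand-in, not Nvidia's actual NeMo Guardrails API, and real deployments would call small classifier models instead of keyword rules.

```python
# Conceptual sketch of a guardrails pipeline like the one Nvidia describes.
# All names and rules here are illustrative placeholders, not NeMo Guardrails code.

def jailbreak_detection(prompt: str) -> bool:
    """Stand-in for a jailbreak detector: spots attempts to bypass safety rules."""
    return "ignore previous instructions" not in prompt.lower()

def content_safety_check(reply: str) -> bool:
    """Stand-in for a small safety classifier: flags harmful content."""
    blocked_terms = {"build a weapon"}  # placeholder rule, not a real model
    return not any(term in reply.lower() for term in blocked_terms)

def topic_control_check(reply: str, allowed_topics: set[str]) -> bool:
    """Stand-in for a topic classifier: keeps replies within approved subjects."""
    return any(topic in reply.lower() for topic in allowed_topics)

def guarded_reply(prompt: str, model_reply: str, allowed_topics: set[str]) -> str:
    # Input rail: screen the user prompt before it reaches the model.
    if not jailbreak_detection(prompt):
        return "Request blocked."
    # Output rails: screen the model's answer before it reaches the user.
    if not content_safety_check(model_reply):
        return "Response withheld."
    if not topic_control_check(model_reply, allowed_topics):
        return "Sorry, I can only discuss approved topics."
    return model_reply
```

Because each check is a small, specialized model rather than a full LLM, the rails add little latency and can run alongside the main chatbot.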

A new open-source voice model called Kokoro just landed on Hugging Face, and early tests show it can generate voices that rival commercial services like ElevenLabs. The model packs 82 million parameters under the hood and currently ranks first in the TTS Spaces Arena. Kokoro was trained on less than 100 hours of audio and supports only American and British English for now, with a choice of 10 different voices. While the model shows promise, it has its limitations: unlike some commercial alternatives, it can't clone voices, and there are no plans yet to add support for other languages. For developers interested in using Kokoro, the inference code is available under an MIT license, while the model itself uses an Apache 2.0 license.
