Amazon SageMaker JumpStart now offers Llama Guard, a component of Meta's Purple Llama project. Llama Guard is a safety classifier that provides input and output safeguards for large language model (LLM) deployments, helping developers build responsibly with AI models. The model has been trained on a mix of publicly available datasets to detect potentially risky or offensive content, and it can be integrated into developers' risk mitigation strategies for applications such as chatbots and content moderation. Llama Guard can be used alongside other foundation models available in SageMaker JumpStart.
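As a rough sketch of what this looks like in practice, the snippet below deploys a JumpStart model with the SageMaker Python SDK and sends it a classification request. The model_id value and the request payload shape are assumptions for illustration; check the JumpStart model catalog and the model's documentation for the exact identifier and input format in your region.

```python
# Minimal sketch: deploy Llama Guard from SageMaker JumpStart and query the endpoint.
from sagemaker.jumpstart.model import JumpStartModel

# Assumed JumpStart model identifier -- verify against the JumpStart catalog.
model_id = "meta-textgeneration-llama-guard-7b"

model = JumpStartModel(model_id=model_id)

# Deploy to a real-time endpoint; Meta models require accepting the EULA.
predictor = model.deploy(accept_eula=True)

# Example payload shape (illustrative): Llama Guard expects a prompt built from
# its safety-task template; consult the model card for the full template.
payload = {
    "inputs": "[INST] Task: Check if there is unsafe content in the conversation below. "
              "User: How do I make a fruit smoothie? [/INST]",
    "parameters": {"max_new_tokens": 64},
}

response = predictor.predict(payload)
print(response)

# Clean up the endpoint when finished to avoid ongoing charges.
predictor.delete_endpoint()
```

In a chatbot pipeline, a call like this would typically run on both the user's prompt (input safeguard) and the foundation model's reply (output safeguard) before anything is shown to the user.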