Google is expanding its Vulnerability Rewards Program (VRP) to cover generative AI-specific attack scenarios, aiming to incentivize research into AI safety and security. As part of the expansion, the company is revising its bug categorization and reporting guidelines to account for the new concerns generative AI raises. Google is also broadening its open-source security work to make information about AI supply chain security universally discoverable and verifiable. In addition, it is launching the Secure AI Framework (SAIF) to help the industry build trustworthy applications, and it is working with the Open Source Security Foundation to protect the integrity of AI supply chains.
Generative AI raises concerns that are new and distinct from those of traditional digital security, such as the potential for unfair bias, model manipulation, or the misinterpretation of data (hallucinations).