Google is expanding its Vulnerability Rewards Program (VRP) to cover generative AI-specific attack scenarios, aiming to incentivize AI security research. As part of the expansion, the company is revising its bug categorization and reporting policies to address the new concerns raised by generative AI. Google is also broadening its open-source security work to make AI supply chain security information universally discoverable and verifiable, launching the Secure AI Framework (SAIF) to help the industry build trustworthy applications, and working with the Open Source Security Foundation to protect the integrity of AI supply chains.


Generative AI raises security concerns that differ from those of traditional digital systems, such as the potential for unfair bias, model manipulation, or misinterpretation of data (hallucinations).

