
In comments to the National Telecommunications and Information Administration (NTIA), Google weighs in on the risks and benefits of open-source AI models and calls for their responsible use.

The US government's National Telecommunications and Information Administration (NTIA) has launched a public consultation on foundation AI models with widely available weights.

Google, like Meta before it, has now weighed in, emphasizing that "open" and "freely available" describe a spectrum rather than a binary choice for or against open source.

Access to AI systems is better described as different degrees of openness, with the risk profile depending on the chosen form of publication. Google, for example, releases its "Gemma" models with openly available weights but restricts their use through licensing terms. Other companies, such as Meta, make their models available to selected researchers before releasing them in full.


OpenAI also tends to release its open models in stages. Malicious actors, however, are unlikely to be deterred by possible license violations.

Open-source is irreversible

Google warns that freely available models are difficult to control and can increase the risk of abuse. Once the weights are public, it is almost impossible to restrict access. Vulnerabilities could also be more easily exploited by attackers.

At the same time, Google emphasizes the many benefits of open AI models: they enable innovation, promote competition, and facilitate access to AI technology, especially in emerging markets. Open models are also useful for security research, as experts can test them extensively. Meta makes a similar argument.

To mitigate risks, Google recommends rigorous internal review processes, extensive testing for potential misuse, and the provision of tools for safe use.

The company also advocates close collaboration between government, industry, and civil society to jointly develop standards and guidelines.


According to Google, the NTIA should promote initiatives that support responsible access to AI capabilities. This includes building a global research infrastructure for AI and investing in the security of open-source software ecosystems that support many AI applications. Balanced regulation that encourages innovation while minimizing risk is critical.

Summary
  • In comments submitted to the National Telecommunications and Information Administration (NTIA), Google weighs in on the pros and cons of open-source AI models and calls for their responsible use.
  • According to Google, access to AI systems can be described as a spectrum of different degrees of openness, with the risk profile depending on the chosen form of publication. Freely available models are difficult to control and increase the risk of misuse.
  • At the same time, Google emphasizes the benefits of open AI models for innovation, competition, and access to AI technology. To minimize risk, the company recommends rigorous internal review processes, testing for potential misuse, the provision of security tools, and close collaboration between government, industry, and civil society.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.