
Technology companies and politicians never tire of saying that artificial intelligence must be deployed safely and responsibly because of its far-reaching implications. But when the technology enters society in other ways, it remains lip service. The most recent example is the leak of Meta's latest language model.


In February 2019, OpenAI unveiled its large language model GPT-2 and announced that it would release it only in stages because of the high risk of misuse. Putting the model in the hands of anyone, unchecked, would be too dangerous, the company said at the time, because it could be used to generate disinformation at scale.

OpenAI may have been right with this thesis, and it has managed the rollout of its technology successfully so far. The large language models GPT-3 and ChatGPT are accessible only via an API and a web interface, not directly. This allows OpenAI to control how the models are used and, among other things, to apply filters against violent or sexual content. Anyone who uses ChatGPT knows the numerous disclaimers.
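To illustrate what this kind of API gating makes possible, here is a minimal sketch that checks a prompt against OpenAI's public moderation endpoint before forwarding it to a model. OpenAI's actual internal filtering pipeline is not public, so the gating logic shown here is an assumption for illustration only, not how the company really does it.

```python
# Minimal sketch of provider-side gating behind an API, assuming the public
# OpenAI REST endpoints (/v1/moderations and /v1/completions). OpenAI's real
# internal filtering is not public; this only illustrates the principle.
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def answer_if_allowed(prompt: str) -> str:
    # Step 1: run the prompt through the moderation endpoint.
    mod = requests.post(
        "https://api.openai.com/v1/moderations",
        headers=HEADERS,
        json={"input": prompt},
    ).json()
    if mod["results"][0]["flagged"]:
        return "Request refused by content filter."

    # Step 2: only then forward the prompt to the language model.
    completion = requests.post(
        "https://api.openai.com/v1/completions",
        headers=HEADERS,
        json={"model": "text-davinci-003", "prompt": prompt, "max_tokens": 200},
    ).json()
    return completion["choices"][0]["text"]
```

Because every request passes through infrastructure the provider controls, filters like this can be tightened, logged, or revoked at any time. With leaked model weights, no such checkpoint exists.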

Meta's LLaMA language model leaked on 4chan

Almost exactly four years after OpenAI's concerns about GPT-2, Meta's LLaMA language model has leaked. According to scientific benchmarks, it matches and in some cases outperforms OpenAI's powerful GPT-3, even though it is more compact and faster. LLaMA is even said to be on par with Google's mighty PaLM model, which has not yet been released, partly for security reasons.


The LLaMA leak reportedly originated on the notorious online forum 4chan, an Internet hotbed of hate speech, sexism, and conspiracy theories, and the opposite of the environment in which tech companies want their language models to be used.

But the leak itself may not be the big story here. According to numerous comments on Reddit, Meta granted access to LLaMA without much scrutiny. And while they are not yet at the level of commercial models, there are numerous efforts to make large language models available as open source.

In the aftermath of the LLaMA leak, we'll now find out if OpenAI's fears about misuse of large language models were justified. The question is whether we will be able to detect that misuse - or whether we will simply suffer the consequences.

The hardware barriers to running the largest and most powerful LLaMA models are still significant, and the costs are beyond the reach of the average hobbyist. But organizations could fund the operation.
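For a rough sense of those barriers, the back-of-envelope sketch below estimates the memory needed just to hold LLaMA's weights in 16-bit precision. The parameter counts are the publicly reported model sizes; activations, the KV cache, and other runtime overhead are ignored.

```python
# Back-of-envelope estimate of memory needed to hold LLaMA weights in fp16
# (2 bytes per parameter). Runtime overhead (activations, KV cache, framework
# buffers) comes on top and is ignored here.
BYTES_PER_PARAM_FP16 = 2

llama_sizes_billion = {"7B": 7, "13B": 13, "33B": 33, "65B": 65}

for name, billions in llama_sizes_billion.items():
    gigabytes = billions * 1e9 * BYTES_PER_PARAM_FP16 / 1e9
    print(f"LLaMA {name}: ~{gigabytes:.0f} GB of memory for the weights alone")
```

By this rough estimate, the 65-billion-parameter variant alone needs on the order of 130 GB of memory in half precision, far beyond a single consumer GPU.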

Is it possible to deploy AI responsibly?

Of course, the commitment of OpenAI, Google, and others to the responsible use of AI is a key factor in securing their "license to operate," i.e., social acceptance of their existence. Or perhaps we can simply trust that these companies, and the people who work there, take their role seriously and make a sincere effort to deploy AI technology responsibly.


Still, after the LLaMA leak, one has to wonder: does it make a difference? It seems that once the technology is out there, it will find its way into the wild - and with it, all the potential risks.

The LLaMA leak is one example. The training and distribution of large image models such as Stable Diffusion, with all their unresolved copyright issues, or the spread of deepfake technology, which is also used for political manipulation and misogyny, are others.

Of course, we don't know what's coming and what will be possible with AI one day. We may be at the peak of AI hype - or we may just be seeing the tip of the iceberg. That's why organizations need to continue to think about and plan for AI safety and responsibility.

At this point, however, those who want to conclude that the responsible deployment of AI has already failed may have a good case.

Summary
  • Meta recently released LLaMA, a very capable large language model, to the research community.
  • However, the language model, including its weights, leaked via the Internet forum 4chan. It can now be used for any purpose, without oversight.
  • In light of developments such as copyright issues with Stable Diffusion and the misuse of deepfakes, it appears that the much-vaunted "safe deployment" of AI is more of a pipe dream.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.