
A Stanford University study shows that AI developers have disclosed significantly more information about their models over the past six months. However, there is still considerable room for improvement.


AI companies have become more transparent about their large language models over the past six months, according to a recent Stanford University study. The Foundation Model Transparency Index (FMTI) ranks companies based on 100 transparency indicators covering topics such as the data, computing power, capabilities, and risks of the models.

In October 2023, the average score of the evaluated companies was only 37 out of 100. The follow-up study published in May 2024 shows an increase to an average of 58 points.

Compared to October 2023, all eight companies evaluated in both studies have improved. Some improved dramatically, such as AI21 Labs, which went from 25 to 75 points. Others, like OpenAI, improved only slightly, from 48 to 49 points.


Overall, this is an improvement, but still far from genuine transparency, especially because developers are most forthcoming about their models' capabilities and the documentation for downstream applications. Sensitive issues such as training data, model trustworthiness, and the models' impact remain opaque.

The study also reveals a systemic lack of transparency in certain areas where nearly all developers withhold information, such as the copyright status of training data and how the models are used across countries and industries.

Image: Stanford CRFM

At least one company scored a point on 96 of the 100 indicators, and more than one company did so on 89 of them. According to the researchers, major progress would be possible if every company matched the most transparent one in each category.

For the current study, 14 developers submitted transparency reports: the eight from the previous study, including OpenAI and Anthropic, plus six new ones such as IBM. On average, they disclosed information on 16 indicators that had not previously been publicly available.

A detailed evaluation of the individual models assessed is available on an interactive website.

Summary
  • AI developers have become significantly more transparent about their large language models over the past six months, according to a Stanford University study. The Foundation Model Transparency Index (FMTI) ranks companies based on 100 transparency indicators.
  • Compared to October 2023, all eight companies evaluated in both studies improved, some dramatically, like AI21 Labs from 25 to 75 points, others only slightly, like OpenAI from 48 to 49 points. Overall, the companies scored an average of 58 out of a possible 100 points in May 2024.
  • Developers are most transparent about the capabilities of their models and documentation for downstream applications. However, issues such as training data, trustworthiness, and model impact remain opaque. The study also reveals a systemic lack of transparency in areas such as the copyright status of training data.