Since the beta launch of DALL-E 2, early users have generated more than three million images. OpenAI draws its first conclusions.
As with the text AI GPT-3, OpenAI is taking a cautious approach to launching its image AI system DALL-E 2. The company's biggest concern may be that the system will repeatedly generate images that violate social conventions or even the law.
How will the system behave when hundreds of thousands of people generate tens of millions of images? That is difficult to predict.
Even when DALL-E 2 was first introduced, OpenAI disclosed its weaknesses, such as the system's tendency to reproduce common gender stereotypes, especially in stock-photo-style images: flight attendants, for example, are depicted as female, while judges are male.
DALL-E 2 generates largely compliant images
After the first three million images, OpenAI draws an initial conclusion about DALL-E 2's compliance with its own content policy. The system flagged 0.05 percent of generated images as potential violations of the content guidelines. Of those flagged images, 30 percent were confirmed by human reviewers as actual violations, which led to the accounts involved being blocked.
OpenAI will continue to refrain from generating photorealistic faces. This is an effective way to limit potential harm, the company writes, adding that it will keep working on biases in the AI system that stem from its training data.
In its content guidelines, OpenAI prohibits the generation of sexual content, extreme violence, negative stereotypes, depictions of criminal acts, and many other motifs.
OpenAI remains cautious
So 450 out of three million generated images (30 percent of the roughly 1,500 images flagged) violate OpenAI's content guidelines. That sounds like a small share, but it could still lead to a flood of negative impressions of the image AI if the system is deployed at scale.
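The figures reported by OpenAI are consistent with each other; a quick sketch of the arithmetic, with the numbers taken directly from the article:

```python
# Reproduce OpenAI's reported DALL-E 2 moderation figures.
total_images = 3_000_000    # images generated during the beta
flagged_rate = 0.0005       # 0.05 percent flagged as potential violations
confirmed_rate = 0.30       # 30 percent of flagged images confirmed by reviewers

flagged = total_images * flagged_rate      # images flagged by the system
confirmed = flagged * confirmed_rate       # confirmed violations

print(round(flagged), round(confirmed))    # → 1500 450
```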
OpenAI continues to act cautiously: the company wants to keep learning from real-world use, but admits new users only in small batches of 1,000 per week. All beta testers must also agree to the content guidelines.
“We hope to increase the rate at which we onboard new users as we learn more and gain confidence in our safety system,” OpenAI writes. A larger rollout of DALL-E 2 is expected to take place this summer.
Who is responsible – the artists or the art machine?
As with the text AI GPT-3, where drastic violations of OpenAI's guidelines have occurred, and with even more powerful generative AI systems on the horizon, one essential question remains unanswered: who bears the responsibility, the tool's manufacturer or its users? The same question arises in other AI contexts, such as military AI systems or autonomous driving.
OpenAI proactively takes responsibility through its self-imposed content policies and their strict enforcement. Ultimately, however, this puts the company in the role of defining the boundaries of morality, art, freedom of expression, and good taste across cultures. That is hardly the core competency of technology companies, nor is it their job.