A new survey has exposed a significant disconnect between the German public and one of Europe's most important tech regulations.
According to a representative survey of 1,001 people conducted by the polling firm Forsa on behalf of the German technical inspection and certification organization TÜV Association, 68% of Germans lack confidence in their government's AI policy: 45% have little confidence and 23% have no confidence at all in official efforts to control AI risks.
The survey asked two questions: "How much trust do you have that government policies in Germany and Europe will limit potential negative effects of AI technologies through laws and regulations?" and "Have you ever heard of the European AI regulation (EU AI Act)?"
Most Germans unaware of EU AI Act
Perhaps most striking is that 72% of respondents had never heard of the EU AI Act, the central piece of European legislation designed to create a legal framework for safe and trustworthy AI development.
This suggests that the low level of trust in European and German policymakers' ability to regulate AI is not based on close engagement with the policy's actual content. Rather, the findings reflect a general distrust of political decision-making more than specific concerns about AI policy.
The gap is all the more remarkable given that few EU initiatives in recent years have received as much international attention as the AI Act, which aims to regulate a technology that some say could transform entire societies. Yet the law is neither widely known at home, nor are policymakers credited with any particular AI expertise, revealing a divide between politics and large segments of society.
The TÜV Association calls for swift implementation of the AI Act in Germany: "People's concerns about the government's ability to act make it clear that the European AI Act must be implemented quickly now, despite the current government crisis," says Dr. Joachim Bühler, CEO of the TÜV Association.
The EU AI Act sorts AI applications into four risk categories, from minimal to unacceptable risk, with stricter requirements for higher-risk uses. It also imposes new transparency rules on general-purpose AI models such as the one behind ChatGPT.