A new ISO standard aims to provide an overarching framework for the responsible development of AI.
The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have approved a new international standard, ISO/IEC 42001. This standard is designed to help organizations develop and use AI systems responsibly.
ISO/IEC 42001 is the world's first standard for AI management systems and is intended to provide useful guidance in a rapidly evolving technology area. It addresses various challenges posed by AI, such as ethical considerations, transparency and continuous learning. For organizations, the standard is intended to provide a structured way to balance the risks and opportunities associated with AI.
The standard is aimed at companies that offer or use AI-based products or services. It is designed for all AI systems and is intended to be applicable across a wide range of application areas and contexts. ISO offers a reading sample on its website; the full text costs 187 Swiss francs.
ISO has already developed several AI standards, including ISO/IEC 22989, which defines AI terminology; ISO/IEC 23053, which provides a framework for AI and machine learning systems; and ISO/IEC 23894, which offers guidelines for AI-related risk management. ISO/IEC 42001 now supplies the overarching governance framework on top of these.
AWS adopts the ISO standard
Amazon's cloud arm, Amazon Web Services (AWS), has already committed to the new ISO/IEC 42001 standard. AWS has contributed to the standard's development since 2021, laying the groundwork well before its final release.
AWS regards trust in AI as critical and sees the integration of standards that promote AI governance, such as ISO/IEC 42001, as a way to build public trust.
International standards are an important tool to help organizations translate domestic regulatory requirements into compliance mechanisms, including engineering practices, that are largely globally interoperable. Effective standards help reduce confusion about what AI is and what responsible AI entails, and help focus the industry on the reduction of potential harms. AWS is working in a community of diverse international stakeholders to improve emerging AI standards on a variety of topics, including risk management, data quality, bias, and transparency.
Swami Sivasubramanian, VP Data and Machine Learning at AWS
Regulatory needs grow as AI advances
With the increasing use of AI in all areas, there is growing concern about the potential harm caused by the uncontrolled use of AI systems. Large language models, for example, can be misused for criminal purposes if their developers do not build in appropriate guardrails.
The trend in the AI regulatory landscape is to make responsible AI a legal requirement. Organizations can therefore either integrate responsible AI practices into their systems now or wait for new regulations to force them to do so.
The EU AI Act, first proposed by the European Commission in 2021 and passed this year, aims to regulate AI in Europe and could also influence legislation outside the EU.
The Act takes a risk-based approach and is intended to apply to all actors developing or operating an AI system in the EU. Together with ISO/IEC 42001, the legislation signals a broader cultural shift toward responsible AI in the tech sector and beyond.