
Why we need responsible AI, the harms of unchecked AI adoption, and how responsible AI practices benefit companies and customers.


AI is being adopted rapidly across all sectors, with the global AI market forecast to grow by 19.6% a year and reach $500 billion in 2023. This proliferation of use cases and the normalization of AI in everyday life and industry have been closely followed by regulations and initiatives designed to protect consumers from the harm that unchecked adoption and use of AI systems can cause.

The most obvious of these harms is the financial and reputational damage caused by a system failure in a critical business process, such as when Knight Capital Group's trading algorithm malfunctioned, costing the company $440m in a single day of trading. Errors like these can be highly damaging to organizations, and as a result such flaws are generally weeded out through the rigorous testing that has become a fundamental part of development as automated systems have matured.

However, there is a less obvious and arguably more insidious source of harm: bias. It too can cause financial and reputational damage, though more often through regulatory fines and public backlash once the bias in a system is revealed. Bias in automated systems is subtle precisely because it generally does not manifest as a catastrophic failure of a critical business system, such as a rogue trading algorithm.


Instead, it is often the users of these systems who are affected, over an extended period. Several high-profile scandals have occurred in recent years in which people's lives were negatively affected by automated systems ostensibly working as intended, ranging from CV-filtering systems that favored men to recidivism systems that penalized Black convicts when ranking their likelihood of reoffending.

Events like these demonstrate the need for legislative bodies to regulate AI systems, but they also raise the question of individual companies' responsibility, both to avoid harming the users of their systems and to serve their own interest in avoiding financial and reputational damage. This has driven a growing trend within the tech industry towards responsible AI.

What is responsible AI?

Responsible AI is about safeguarding against potential damage from AI systems and ensuring they are used ethically, safely, and securely. More specifically, it typically encompasses five best practices: data governance; stakeholder communication; engagement and collaboration at board level; compliance with relevant regulation; and taking steps towards external assurance through third-party audits.

What these practices have in common is that they require transparency and explainability in AI systems, which in turn ensures that those systems can be held accountable for their decisions.

Responsible AI signaling a cultural shift in the tech industry

The move towards responsible AI also signals a broader cultural shift in tech and other industries. Consumer confidence in companies is at an all-time low, with only 30% considering companies trustworthy. This is mirrored in attitudes towards AI, with one survey finding that only 28% of respondents were willing to trust AI systems.


As a result, building trust with customers and stakeholders has become a fundamental goal for many organizations. By willingly committing to responsible AI practices, companies can demonstrate an ethical standard and earn that trust.

Regulation is mirroring responsible AI aims

2022 was a big year for AI generally, with automation coming to the forefront of public consciousness. Generative AI tools such as OpenAI's ChatGPT captured people's imagination with their potential, but almost simultaneously sparked disputes over their ethical use.

Less prominent to most people was the flurry of AI regulation that accompanied it: the general approach on the EU AI Act adopted in December, the publication of the United States (US) AI Bill of Rights in October, the UK's AI Regulation Policy Paper in July, and the enforcement of China's Algorithmic Recommendation Management Provisions in March. Together, these set a strong precedent for what is to come.

In 2023, the groundwork will be laid for the EU AI Act to take effect within the next two years, prompting organizations to establish risk management frameworks. In the United States, the focus will be on regulatory bodies and case law leading the way in targeting companies that proliferate algorithmic discrimination or intentionally use flawed data and dark patterns.


The common thread running through all of the proposed legislation is a focus on transparency and explainability, often in the form of third-party assurance: in essence, responsible AI practices.

The choice facing companies

With the regulatory landscape moving towards responsible AI becoming a legal requirement, organizations face a choice: adopt responsible AI practices into how they manage their AI systems pre-emptively, or wait until incoming regulations force them to do so.

Beyond the previously mentioned benefits of trust and an ethical reputation, responsible AI brings concrete business benefits. Firstly, it enables the cataloging of all automated systems used across a business.

Secondly, it provides a defense against legal claims alleging negligence or malicious intent when an automated system causes adverse outcomes. Apple experienced this with its Apple Card, which reportedly gave a man a much higher credit limit than his wife despite her having a higher credit score. However, Goldman Sachs, the card's provider, was able to justify why the model reached the decision it did thanks to a responsible approach to its AI systems, which allowed it to make the decision transparent and cleared the company of illegal activity.

Joe Davenport

Joe Davenport is an editor and digital marketing executive at Holistic AI, focusing on the governance and ethical use of artificial intelligence.
