Microsoft is disbanding its AI ethics and social responsibility team after cutting 10,000 jobs. However, the company says it will continue to invest in responsible AI.
Casey Newton's Platformer newsletter first reported that Microsoft is disbanding the ethics and social responsibility team in its AI division. The layoffs are part of a job-cutting program that runs through the end of March, in which Microsoft is laying off about 10,000 employees across all divisions.
Microsoft says it will continue to invest in responsible AI
Microsoft still maintains an Office of Responsible AI and says it will continue to invest in responsible AI despite the layoffs. According to an official statement, the number of people working on ethical and social issues, both on relevant product teams and in the Office of Responsible AI, has increased over the past six years. These individuals, along with all Microsoft employees, are responsible for implementing the company's AI principles.
A former member of the ethics team disputed this assessment: "Our job was to show them and to create rules in areas where there were none." The Office of Responsible AI, the former member said, does not fill that void.
The AI ethics team had as many as 30 members in 2020. Last October, it was downsized to seven as part of a reorganization, with the other members reassigned to positions elsewhere in the company.
In a meeting following the reorganization, Microsoft's corporate vice president of AI, John Montgomery, reportedly told team members that CTO Kevin Scott and CEO Satya Nadella were applying "very, very high pressure" to roll out OpenAI's AI models to consumers "at a very high speed."
One team member asked Montgomery to reconsider the decision, arguing that the team was essential to Microsoft's social responsibility. He refused, citing pressure from the executive suite.
The stated goal of the reorganization was to shift responsibilities to the product teams, not to disband the team. In practice, however, the remaining seven members had little internal leverage to push through their plans. On March 6, Montgomery informed them of the team's disbanding via a Zoom call.
ChatGPT in Bing search highlights Microsoft's approach to responsible AI
Microsoft is using the hype around ChatGPT to gain market share in the search business and put pressure on Google's margins. This, in turn, could give Microsoft advantages in other business areas, such as the cloud business.
Within weeks, Microsoft had integrated a ChatGPT variant into Bing search. It drew criticism in part because "Sydney," the codename of the Bing bot, generated false information and quotes, initially gave confusing answers, and engaged users in strange conversations.
Microsoft had to make improvements and limit the length of conversations to at least curb the bot's emotional outbursts. Still, the system continues to produce incorrect information. Microsoft knew about these problems long before the official launch of its chatbot search, but decided to launch anyway.
The ChatGPT API integration offered by OpenAI has also drawn criticism, for example when Snapchat's ChatGPT-based bot advised an underage girl on going on a trip with a 31-year-old man. One company complained about inquiries from angry customers because ChatGPT recommended a product it did not carry.
The AI race is totally out of control. Here’s what Snap’s AI told @aza when he signed up as a 13 year old girl.
- How to lie to her parents about a trip with a 31 yo man
- How to make losing her virginity on her 13th bday special (candles and music)
Our kids are not a test lab. pic.twitter.com/uIycuGEHmc
— Tristan Harris (@tristanharris) March 10, 2023
The tension in deploying large AI language models is that, on the one hand, these systems have problems that can only be solved through iterative practice, scaling, and human feedback. On the other hand, a ruthless race for market share could develop, with companies accepting individual or societal harm under the guise of that feedback process.
Microsoft's current actions and the leak of Meta's large language model LLaMA raise questions about the safe and responsible deployment of commercial AI systems by technology companies.