Why is AI a focus at Google? High-ranking representatives of the Internet search giant explain in a new paper. The timing of the publication is more interesting than its content.
While OpenAI makes its products available as early as possible and iteratively develops them based on user feedback, Google has been reluctant to market its AI technology directly to end users.
That could change this year: Deepmind's potential ChatGPT competitor Sparrow is about to enter beta testing, according to Deepmind CEO Demis Hassabis.
At the same time, high-ranking representatives from Alphabet, Google, and Deepmind issued a detailed statement. The paper, titled "Why we focus on AI (and to what end)," is signed by Hassabis, Google CEO Sundar Pichai, Google Brain lead Jeff Dean, Google VP of Engineering Marian Croak, and SVP of Technology and Society James Manyika.
Google warns of AI risks and acknowledges its own responsibility
"It is an exciting time in the development of AI," they write. "Our approach to developing and harnessing the potential of AI is grounded in our founding mission—to organize the world’s information and make it universally accessible and useful—and it is shaped by our commitment to improve the lives of as many people as possible. It is our view that AI is now, and more than ever, critical to delivering on that mission and commitment."
Beneath the flowery talk about what Google will use AI for and how it plans to go about it, one message is clear: progress, but not at any price. The potential impact of AI is too far-reaching to take lightly, Google says.
"We understand that AI, as a still-emerging technology, poses various and evolving complexities and risks. Our development and use of AI must address these risks. That’s why we as a company consider it an imperative to pursue AI responsibly."
According to Google, the risks of AI include:
- It does not work as intended
- It relies on data that is not used appropriately and responsibly
- It is used in unsafe ways
- It is misused or used in harmful ways by its developers or users
- It creates or reinforces negative societal biases
- It creates or exacerbates cybersecurity risks
- It creates or exacerbates information risks
- It creates the impression of having capabilities that it does not in fact have
- It creates or exacerbates inequality or other socioeconomic harm
Two of the paper's five chapters are devoted to "responsible AI"; in them, Google emphasizes the need for collaboration among the various drivers of innovation.
Google responds to market speculation
Google has made similar remarks about the everyday use of artificial intelligence in the past. The nine-page document summarizes them, but the authors offer no new insights.
But the timing is interesting: given recent events, such as Google's alleged "code red" over ChatGPT's success and Sam Altman's promise to develop a safe GPT-4, Google's paper can be read as an explanation of why the company has not yet responded to or pre-empted ChatGPT's success.
The document could also serve to reassure investors. After all, OpenAI is backed by Big Tech rival Microsoft, which is rumored to be reinventing its Bing search engine with ChatGPT. With the paper, Google's management signals attention, ambition, and foresight.
An AI roadmap for Google Search is not mentioned in the paper. Search comes up only once, although Google has been building language models into Internet search for years, for auto-completion, for example. According to the paper, Google also uses AI in Maps, Photos, Workspace, and Android smartphones in general.
Beyond safety concerns, AI could also significantly disrupt Google's existing search business model. This, too, could delay Google's AI ambitions as the company runs into the so-called "innovator's dilemma." Business models are not addressed in the paper.
Google positioned itself as an AI-centric company in May 2018. At the time, it said that AI would be at the heart of every Google product in the future. The research department was renamed Google AI.