AI researcher Melanie Mitchell shows why the road to artificial general intelligence is difficult and how AI research could evolve into a real science.

Since AI research began in the 1950s, the field has been marked by optimistic predictions, big investments - and subsequent disappointment. AI researcher Melanie Mitchell warns that this cycle of "AI spring" and "AI winter" may not be over yet. Despite rapid developments and breakthroughs thanks to neural networks, we are still waiting for self-driving cars, multifunctional household robots, and artificial conversation partners.

Mitchell sees one reason for this cycle in our still limited understanding of the nature and complexity of intelligence. In her paper "Why AI is harder than you think," she identifies four fallacies that continue to shape AI research and its communication - and argues that avoiding them would foster real progress.

Fallacy 1: Narrow intelligence is on a continuum with general intelligence

AI advances on a particular task, such as reading or writing, are often described as a first step toward a more general form of artificial intelligence, Mitchell writes. The chess computer Deep Blue was hailed as the first step of an AI revolution, OpenAI's text generator GPT-3 as a step toward general intelligence. Numerous other examples show the same optimism.

The American philosopher Hubert Dreyfus was an early critic of the grand promises of AI research. He spoke of the "first-step fallacy": the claim that, ever since AI research began, we have been moving along a continuum toward general AI, "so that any improvement in our programs, no matter how trivial, counts as progress."

This assumption, he argued, is like claiming that the first monkey to climb a tree was making progress toward landing on the moon. The "unexpected obstacle" on the assumed continuum of AI progress, Mitchell quotes Dreyfus, has always been the problem of common sense.

Fallacy 2: Easy things are easy and hard things are hard

Soon after AI research began, it became clear that creating artificial intelligence was "harder than anticipated," as John McCarthy, a founding father of the field, conceded. AI legend Marvin Minsky, another co-founder, summed up the situation in one sentence: "Easy things are hard."

Things humans do every day without thinking - moving around in the world, holding conversations, walking down a busy sidewalk - turned out to be the most difficult challenges for machines. In contrast, supposedly hard tasks such as logical reasoning, playing chess or Go, or translating sentences into hundreds of languages proved comparatively easy.

AI is harder than we think because we are largely unaware of the complexity of our own thought processes, Mitchell says. Humans are so naturally capable in the areas of perception and motor skills that they make the difficult look easy.

"In general, we are least aware of what our minds do best," Minsky said.

Fallacy 3: The lure of wishful mnemonics

People often describe animals or machines in terms that refer to human cognitive, conative, and affective processes. In 1976, computer scientist Drew McDermott coined the term "wishful mnemonics" for this practice.

"A major source of simpletonism in AI programs is the use of terms such as 'UNDERSTAND' or 'GOAL' to refer to programs and data structures. If a researcher calls the main loop of his program 'UNDERSTAND,' he is merely committing circular reasoning. But he could also be misleading a lot of people, most likely himself. What he should do instead is refer to this main loop as 'G0034' and see if he can convince himself or anyone else that G0034 implements approaches to understanding," Mitchell quotes.

Even today, it is common to talk about AI in this way, the AI researcher writes. Neural networks are inspired by the brain, she says, but they still work completely differently. And machines "learn," but cannot apply what they have learned in other contexts the way humans can.

Companies like IBM, however, advertise that their products read, understand, or see, she writes. Benchmarks claim to test question answering, reading comprehension, or natural language understanding. At best, such labels mislead the public. At worst, they subconsciously shape how AI researchers think about their own systems and how close they believe those systems are to human intelligence.

Fallacy 4: All intelligence is in the brain

Without a brain, there is not much going on in the human mind - that much is clear. But can intelligence exist without a body? Is a pickled brain in a jar as intelligent as one in a human skull?

According to Mitchell, mainstream AI research answers with a resounding "yes." The mind is understood as a kind of computer that takes in, stores, processes, and outputs information. The body plays no major role; at least in theory, the brain is completely detachable from the rest of the body.

Almost all current AI systems therefore have no body - with rare exceptions in robotics - and their interactions with an environment are severely limited.

A thesis from cognitive science known as "embodiment" challenges the notion that biological intelligence has nothing to do with the body. Briefly summarized, the embodiment thesis says that our cognition requires a body and presupposes physical interactions with the world.

Cognition, for instance, draws on motor skills. A classic example is counting on fingers: children learn to represent the concept of a number by moving the corresponding number of fingers. Over time, the overt movement fades away.

The embodiment thesis assumes, however, that the brain continues to use these motor programs as a representation of numbers even when the movement itself no longer occurs. Without a body, these motor programs would not exist - and neither would this way of representing numbers.

Human intelligence, according to the embodiment thesis, is a highly physically integrated system. It builds on attributes such as emotions, desires, a sense of self and autonomy, and common sense. It is not yet clear whether intelligence can be separated from these attributes at all, Mitchell writes.

Is AI more alchemy than science?

In 1892, psychologist William James said of the psychology of his day, "It is not a science; it is merely the hope of a science."

For Mitchell, this is also a fitting characterization of current AI research. We need a more precise vocabulary for the capabilities of machines, she writes, and a better understanding of intelligence and how it manifests itself in different systems in nature.

AI researchers need to collaborate much more closely with the other sciences that study intelligence, Mitchell urges. Otherwise, AI research will remain a kind of alchemy. Only then, she argues, could questions like the following be answered:

  • How can we assess actual progress toward "general" or "human-level" AI, or the difficulty of a particular domain for AI compared to humans?
  • How should we describe the actual capabilities of AI systems without deceiving ourselves and others with wishful thinking?
  • To what extent can the various dimensions of human cognition (including cognitive biases, emotions, goals, and embodiment) exist separately?
  • How can we improve our intuitions about what intelligence is?

To make true advances in AI, and especially to understand why such advances are harder than they appear, AI research must "move from alchemy to developing a scientific understanding of intelligence," Mitchell concludes.
