
Sorry DeepMind, you haven’t explained human intelligence

By Paul Thagard, Distinguished Professor Emeritus of Philosophy, University of Waterloo

KEY POINTS

  • DeepMind researchers claim that reward maximization is sufficient for intelligence.
  • The failures of behaviorist psychology show the limits of reward learning.
  • Human intelligence requires the constrained generation of creative representations.

DeepMind is an artificial intelligence company, owned by Google's parent company Alphabet, that has achieved remarkable success in playing games such as Go and in solving hard problems such as predicting protein folding. This success derives from powerful computational methods that combine deep learning in many-layered neural networks with reinforcement learning, which rewards actions that accomplish goals. In a recent article published in a top AI journal, four DeepMind authors claim that this approach generalizes to full general intelligence (Silver et al., 2021). However, they underestimate the creativity that operates in human language and problem solving through the constrained generation of representations.

Silver and his co-authors defend the following claim (p. 4):

Hypothesis (Reward-is-Enough). Intelligence, and its associated abilities, can be understood as subserving the maximisation of reward by an agent acting in its environment.

Defense of this hypothesis consists of arguments that reward is enough for knowledge, learning, perception, social intelligence, language, generalization, imitation, and general intelligence.

The philosopher George Santayana warned that those who cannot remember the past are condemned to repeat it. Reward learning was emphasized by behaviorist psychologists such as Edward Thorndike and B. F. Skinner and served to explain many kinds of animal actions. Thorndike’s law of effect states: “responses that produce a satisfying effect in a particular situation become more likely to occur again in that situation, and responses that produce a discomforting effect become less likely to occur again in that situation.” This law is true as far as it goes, but its limitations became clear by the 1960s when behaviorism was supplanted by cognitive psychology.
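The law of effect can be made concrete with a toy sketch. The agent, actions, and update rule below are illustrative assumptions of mine, not Thorndike's experiments or DeepMind's algorithms: responses followed by a satisfying effect are strengthened, and responses followed by a discomforting effect are weakened.

```python
import random

def choose(weights):
    """Pick a response with probability proportional to its strength."""
    actions, w = zip(*weights.items())
    return random.choices(actions, weights=w)[0]

def law_of_effect(weights, action, satisfying, step=0.5):
    """Strengthen a rewarded response, weaken a punished one (floored at 0.1)."""
    delta = step if satisfying else -step
    weights[action] = max(0.1, weights[action] + delta)

# Two candidate responses start out equally likely.
weights = {"press_lever": 1.0, "ignore_lever": 1.0}
for _ in range(200):
    action = choose(weights)
    # Hypothetical environment: only lever-pressing produces a satisfying effect.
    law_of_effect(weights, action, satisfying=(action == "press_lever"))
```

After a few hundred trials the rewarded response dominates the agent's behavior, which is exactly as far as the law of effect goes: it shapes the frequency of existing responses but generates no new representations.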

B. F. Skinner tried to explain human language on behaviorist principles of stimulus, response, and reward, but Noam Chomsky (1959) aggressively criticized this attempt. Chomsky's own theory of language (1972), based on internal rules and representations, showed that language goes far beyond reinforcement learning. He emphasized the creativity of language: the capability of human speakers to generate an unlimited number of novel sentences.

For example, here is a sentence that probably has never been uttered before: “Watermelons are the Beethoven of gastronomy.” It was not randomly generated but rather was produced in accord with syntactic, semantic, and pragmatic constraints internalized in my brain. Moreover, it can be embedded recursively to generate more complex sentences such as “I think that you do not believe that watermelons are the Beethoven of gastronomy.” A representation is recursive if it can embed other representations, including ones of its own kind, and is creative if it is novel, valuable, and surprising.
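Recursive embedding of this kind can be sketched with a tiny grammar. The rules below are an illustrative assumption, not a model of actual linguistic competence; the point is only that a sentence rule that calls itself yields unboundedly many sentences from finite constraints.

```python
# A sentence rule that can embed sentences inside sentences, as in
# "I think that you do not believe that S".

def sentence(depth):
    """Generate a sentence; recursion lets sentences contain sentences."""
    base = "watermelons are the Beethoven of gastronomy"
    if depth == 0:
        return base
    # Finite set of embedding frames, alternated by depth for variety.
    frames = ["I think that {}", "you do not believe that {}"]
    return frames[depth % 2].format(sentence(depth - 1))

print(sentence(0))
print(sentence(2))
```

With `depth=2` this produces the doubly embedded sentence from the paragraph above; increasing `depth` yields ever more complex sentences with no upper bound, which is what reward learning over a fixed response repertoire cannot do.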

The constrained generation of recursive representations occurs in many human contexts.

  • Social intelligence: We understand other minds by empathizing with them (a kind of analogy) and by forming hypotheses about their non-observable mental states such as beliefs and emotions.
  • Science: Scientists do not merely describe what happens in the world but also generate representations of non-observable causes such as forces and atoms that explain the observations.
  • Mathematics: To make sense of empirical and conceptual puzzles, mathematicians generate abstract concepts such as transfinite sets and non-Euclidean geometry.
  • Music: Composers like Mozart and McCartney create unprecedented patterns of rhythms and tones that satisfy cultural constraints but transcend them in surprising ways.
  • Art: Painters like Picasso produce novel styles such as cubism.

None of these advances could have been produced merely by deep learning from examples, reinforcement learning, or random generation of mental structures. The space of possible representations is far too large to be explored by random combination; creativity depends on mechanisms that generate concepts, rules, images, analogies, and emotions that are constrained but not determined by the current context. To sustain the claim that reward is enough, DeepMind researchers would need to show that their algorithms can generate representations for analogies, causal connections, and hidden causes.

The capacity for recursive representation seems to have evolved in human brains only in the past 100,000 years and does not operate in other animals (Corballis 2011, Thagard, 2021). The creative ability to combine representations into rich, novel ones enabled the development of complex language, art, religion, mathematics, technology, and science, taking our species far beyond the animal world of reinforcement learning. Reward drives much of human behavior, but so does creativity.

©Paul Thagard

Paul Thagard is a philosopher, cognitive scientist, and author of many interdisciplinary books. He is Distinguished Professor Emeritus of Philosophy at the University of Waterloo and a Fellow of the Royal Society of Canada, the Cognitive Science Society, and the Association for Psychological Science. The Canada Council awarded him a Molson Prize (2007) and a Killam Prize (2013). Paul Thagard’s Treatise on Mind and Society was published by Oxford University Press in February, 2019.

References

Chomsky, N. (1959). A review of B. F. Skinner’s Verbal Behavior. Language, 35(1), 26-58.

Chomsky, N. (1972). Language and mind (2 ed.). New York: Harcourt Brace Jovanovich.

Corballis, M. C. (2011). The recursive mind: The origins of human language, thought, and civilization. Princeton: Princeton University Press.

Silver, D., Singh, S., Precup, D., & Sutton, R. S. (2021). Reward is enough. Artificial Intelligence, 299, 103535.

Thagard, P. (2021). Bots and beasts: What makes machines, animals, and people smart? Cambridge, MA: MIT Press.

Note: The views expressed in this article are the author’s, and not the position of Intellectual Dose, or iDose (its online publication). This article is republished from Psychology Today with permission.