
“What is General Intelligence?” is a short introductory article on the theory of “real AI”: Artificial Intelligence that is seriously intended to match human intelligence in generality. If you are reading an offline version, further material on general intelligence can be found at the Singularity Institute website at “http://singinst.org/seedAI/”.

The Size of a Real Mind: Beyond the Physicist’s Paradigm
The human brain contains somewhere around 100 billion neurons and 100 trillion synapses. That’s the hardware. The software of the human brain is the result of millions of years of evolution and contains perhaps tens of thousands of complex functional adaptations. The brain itself is not a uniform lump but a highly modular supersystem; each of the two hemispheres of the cerebral cortex is divided into 52 areas, most of which can be further subdivided into five or six maps. The evolutionarily ancient subcortical structures are more modular still.

The development of physics over the past few centuries – at least, the dramatic part – has been characterized by the discovery of simple equations that neatly account for complex phenomena. In physics, the task is finding the single insight that explains everything. Newton took a single assumption (masses attract each other with a force proportional to the product of the masses divided by the square of the distance between them) and churned through some calculus to show that the same force which pulls an apple toward the ground also explains why planets move in elliptical orbits.
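
In modern notation, with G the gravitational constant and r the distance between the two masses, the inverse-square law Newton worked from is:

    F = G \, \frac{m_1 m_2}{r^2}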

For a long time, AI has been conducted under the Physicist’s Paradigm: the assumption that intelligence itself can be described by some single underlying insight that will, in one sweeping explanation, account for all the functionality of the human mind.

This has led to a tendency to pile far too much workload onto each new principle as it is discovered. Back in the heyday of neural networks, every press release trumpeted the phrase “the same parallel architecture found in the human brain”. (This is open to question; most neural networks are nothing remotely like biological neural networks.) “Parallel neurons” may be the underlying architecture used by the human brain; it is also, however, the underlying architecture used by an earthworm’s brain. To explain human functionality, the invocation of massively parallel neural networks is necessary but not remotely sufficient. Human intelligence is the result of literally hundreds of millions of years of adaptation on top of the simple fact of neural networks.
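
As a point of comparison, here is a minimal sketch (in Python, invented for illustration rather than taken from any particular system) of the kind of “neuron” most connectionist networks actually use: a weighted sum of inputs passed through a squashing function, with none of the dendritic structure, spike timing, or chemistry of the biological original.

    # Minimal sketch of the artificial "neuron" used in most connectionist
    # systems: a weighted sum of inputs passed through a squashing function.
    # This is an illustration, not a model of any biological neuron.
    import math

    def artificial_neuron(inputs, weights, bias):
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-activation))   # logistic squashing

    # Example: one unit with three inputs.
    print(artificial_neuron([0.2, 0.9, 0.1], [0.5, -0.3, 0.8], bias=0.1))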

Another frequently abused concept in AI is “emergence”. Emergence has at least two definitions. In the first sense, “emergence” refers to phenomena that arise on a higher level of a system as the outcome of low-level interaction rules. In the second sense, “emergence” refers to high-level phenomena that arise within a system without requiring design – neither design by humans, nor design by evolution.

All brain phenomena are definitely “emergent” in the first sense, just as neurons themselves are “emergent” from molecules. This does not mean, however, that the complex, intricate functionality of the brain arises without design. It does not mean that implementing neurons is sufficient to give rise to the complex, layered feature-extraction of the human visual supersystem. The idea that general intelligence does not require complex adaptation is as out of touch with current evolutionary thought as the idea that the brain is an indistinct lump is out of touch with functional microanatomy.

General intelligence is not simple. It is a theory composed of many distinct subtheories; there’s no single insight that fits on a T-shirt, is exalted above all others, and can be used as a convenient label for the theory. That is progress. Most AI projects that sought to explain the whole of cognition implemented a single underlying process with the complexity of one module in a modern-day code library. Even those AI projects that claimed to reject the “one big idea” paradigm generally tried to pile on massive amounts of unconnected data or computing elements, without any attempt to design complex interacting subsystems, in the hope that intelligence would spontaneously emerge. This may have been worth a shot at the time, but in retrospect, it was not a realistic expectation.

Our Model of General Intelligence
Different schools of AI are distinguished by different underlying substrates – different “mind stuffs”, one might say. Classical AI consists of “predicate calculus” or “propositional logic”, plus directly coded procedures intended to imitate human formal logic. Connectionist AI consists of neurons implemented on the token level, with each neuron in the input and output layers having a programmer-determined interpretation, plus some number of intervening layers, with the overall network being trained by an external algorithm to perform perceptual tasks. (Biologically realistic connectionist AI uses more neurologically accurate network structures and more complex neurons.) Agent-based AI consists of hundreds or thousands of humanly-written pieces of code which do whatever the programmer wants, with interactions ranging from handing around data structures to tampering with each other’s behaviors. Of the three, agent-based AI has been the most fruitful so far; it’s the only mandate broad enough for pieces of real cognition to occasionally sneak in.

Classical AI is a two-layer system: a bottom “token level” of logical propositions and predicates, and formal logic operating on those tokens. Connectionist AI revolted against the two-layer system and replaced it with a one-layer system composed of nothing but neurons.
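
To make the two-layer picture concrete, here is a toy sketch (the tokens and rules are invented purely for illustration) of a token level plus a formal-logic layer that forward-chains over it:

    # Toy sketch of a classical two-layer system: a "token level" of symbols,
    # plus a formal-logic layer (simple forward chaining) operating on them.
    # The tokens and rules are invented for illustration only.
    tokens = {"socrates_is_human"}
    rules = [
        ({"socrates_is_human"}, "socrates_is_mortal"),
        ({"socrates_is_mortal"}, "socrates_will_die"),
    ]

    changed = True
    while changed:                    # chain until no rule can fire
        changed = False
        for premises, conclusion in rules:
            if premises <= tokens and conclusion not in tokens:
                tokens.add(conclusion)
                changed = True

    print(tokens)

The connectionist one-layer alternative replaces both the hand-written tokens and the hand-written rules with nothing but trained units like the artificial neuron sketched earlier.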

If we were to describe our model of intelligence in terms of multiple layers, each layer built on top of the underlying layer, this would be the list:

  • At the bottom level is source code and data structures.
  • The next layer is sensory modalities. In humans, the archetypal examples of sensory modalities are sight, sound, touch, taste, smell; implemented by the visual cortex, auditory cortex, sensorimotor cortex, and so on. (AIs would probably have a different set of sensory modalities – see seed AI.)
  • In biological brains, sensory modalities come the closest to being “hardwired”; they generally involve clearly defined stages of information-processing and feature-extraction, sometimes with individual neurons playing clearly defined roles. Thus, sensory modalities are some of the best candidates for processes that can safely be directly coded by the programmers without rendering the system crystalline and fragile.
  • The visual cortex doesn’t know about butterflies; it knows about edge-detection. Hardwiring knowledge of butterflies makes the system fragile, if such knowledge is considered “knowledge” at all; hardwiring edge detection is something that the human brain seems to get away with.
  • The next layer is the concept level. Concepts (also sometimes known as “symbols”) are abstracted from our experiences; they describe some quality common to a group of experiences. Furthermore, this abstracted quality can then be applied to a mental image, altering it. For example, having abstracted the concept of “redness”, we can take a mental image of a non-red object (for example, grass) and imagine “red grass” (see the sketch after this list). Concepts are patterns within sensory modalities: complex, flexible, reusable patterns that have been abstracted and stored.
  • The next layer is thoughts, built from structures of concepts. By applying a series of concepts to a single target, it becomes possible to build up complex mental images within the “workspace” provided by one or more sensory modalities. The archetypal example of a thought is a human “sentence” – an arrangement of concepts, invoked by their symbolic tags, structured by the constraints of syntax, which constructs a complex mental image that can be used in reasoning.
  • Finally, it is sequences of thoughts that implement deliberate intelligence – explanation and prediction, planning and design.
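
Here is a toy sketch of how these layers might stack as data structures. The class names, fields, and methods are hypothetical, chosen only to illustrate the relationships between layers; they are not a proposed design.

    # Hypothetical sketch of the layered model as data structures; everything
    # here is invented for illustration and is far simpler than a real design.
    from dataclasses import dataclass

    @dataclass
    class Modality:
        """Sensory modality: turns raw input into low-level features."""
        name: str
        def extract_features(self, raw_input):
            return {"source": self.name, "features_of": raw_input}

    @dataclass
    class Concept:
        """A reusable pattern abstracted from experiences within modalities."""
        label: str
        pattern: dict
        def apply_to(self, mental_image):
            # Impose the abstracted quality on an existing mental image.
            return {**mental_image, **self.pattern}

    @dataclass
    class Thought:
        """A structure of concepts applied to a target mental image."""
        concepts: list
        def imagine(self, target_image):
            image = dict(target_image)
            for concept in self.concepts:
                image = concept.apply_to(image)
            return image

    # Deliberation would be sequences of thoughts; here is a single thought
    # that applies the abstracted "redness" concept to an image of grass.
    red = Concept("red", {"color": "red"})
    grass = {"object": "grass", "color": "green"}
    print(Thought([red]).imagine(grass))    # mental image of "red grass"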

The previous description is not a complete recipe for AI. The observation that a human is made up of molecules, cells, tissues, organs, and a body with arms and legs will not enable you to build a human body. There is still the question of which molecules; which tissues; which organs; how to implement each design as it is called for; how all the components on any one level work together; and, finally, what it is that humans do with their arms and legs.

Having a broad grasp of the levels of description of the human body is certainly necessary – it is knowledge that we would expect any biologist to have – but it is not sufficient. As much as anything else, AI has suffered from the idea that its role is to explain human intelligence rather than reverse-engineer it. This confuses the task of the physicist, who must explain a skyscraper by reference to molecular properties, with the task of the engineer who actually builds the skyscraper. A five-layer model does not explain intelligence; it is simply the very first step towards the list of design requirements needed to start designing intelligence.

What are some of the challenges faced on each level, then?

  • On the source code level: Ensuring that the code doesn’t break, or that the code is error-tolerant, or that errors in the code are tolerated by the higher levels of organization. In general, writing good code. (Ideally, for seed AI, this includes writing code that the AI can understand, or tweak, as early as possible.)
  • On the modality level: Describing low-level, mid-level, and high-level features of the underlying pixels or “models” for each sensory modality. Ensuring that higher-level features are “reversible” (can be translated into lower-level features) such that a remembered feature can be reconstructed, or an abstracted feature applied to a novel mental image. Processing salient or central parts of an image at higher resolution (modality-level focus of attention). Formation, storage, association, recall, and reconstruction of experiential memories.
  • On the concept level: Describing conditions that trigger concept formation on some experiential base. Formation and storage of the concept. Conditions that trigger (associate to) a stored concept for use in perception or mental-image manipulation. Concept reconstruction. Concept combination. Application of a concept to a pre-existing mental image. Formation of concepts across multiple sensory modalities. (A toy sketch of concept formation and matching follows this list.)
  • On the thought level: Assembling thoughts from structures of concepts. Describing conditions that trigger the use of a thought to describe a mental image (complex perception using concept structures). Using thoughts to manipulate mental images and build up complex mental images. Formation and storage of declarative knowledge. Retrieval and application of declarative knowledge as a constraint upon problem-solving. Detecting when declarative knowledge is relevant to a mental image. Distinguishing beliefs from what-if scenarios and what-if scenarios from expectations.
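
To make the concept-level items slightly more concrete, here is a toy sketch of abstracting a quality shared by a group of experiences and later checking whether the stored concept applies to a new mental image. The mechanism is invented purely for illustration; it is nothing like a sufficient account of concept formation.

    # Toy sketch of concept formation and triggering; invented for illustration.
    def form_concept(experiences):
        """Abstract the features shared by every experience in the group."""
        shared = dict(experiences[0])
        for exp in experiences[1:]:
            shared = {k: v for k, v in shared.items() if exp.get(k) == v}
        return shared

    def concept_matches(concept, mental_image):
        """Trigger condition: does the stored concept describe this image?"""
        return all(mental_image.get(k) == v for k, v in concept.items())

    # Experiences of several red things yield an abstracted "redness" concept.
    experiences = [
        {"shape": "round", "color": "red", "size": "small"},   # an apple
        {"shape": "long",  "color": "red", "size": "large"},   # a fire engine
    ]
    redness = form_concept(experiences)    # -> {"color": "red"}
    print(concept_matches(redness, {"shape": "flat", "color": "red"}))   # True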
