Intelligence has been defined in different ways, including the capacities for abstract thought, understanding, communication, reasoning, learning, planning, emotional intelligence, and problem solving. Intelligence is most widely studied in humans, but it has also been observed in animals and plants. Artificial intelligence is the intelligence of machines, or the simulation of intelligence in machines. Numerous definitions of and hypotheses about intelligence have been proposed since before the twentieth century, with no consensus reached by scholars. Within the discipline of psychology, various approaches to human intelligence have been adopted. The psychometric approach is especially familiar to the general public, as well as being the most researched and by far the most widely used in practical settings.

 

History of the term

Intelligence derives from the Latin verb intelligere, which in turn derives from inter-legere, meaning to “pick out” or discern. A form of this verb, intellectus, became the medieval technical term for understanding and a translation for the Greek philosophical term nous. This term, however, was strongly linked to the metaphysical and cosmological theories of teleological scholasticism, including theories of the immortality of the soul and the concept of the Active Intellect (also known as the Active Intelligence). This entire approach to the study of nature was strongly rejected by early modern philosophers such as Francis Bacon, Thomas Hobbes, John Locke, and David Hume, all of whom preferred the word “understanding” in their English philosophical works. Hobbes, for example, in his Latin De Corpore, used “intellectus intelligit” (translated in the English version as “the understanding understandeth”) as a typical example of a logical absurdity. The term “intelligence” has therefore become less common in English-language philosophy, but it was later taken up (without the scholastic theories it once implied) in more contemporary psychology.

 

Definitions

How to define intelligence is controversial. Groups of scientists have stated the following:

from “Mainstream Science on Intelligence” (1994), an editorial statement by fifty-two researchers:

A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—“catching on,” “making sense” of things, or “figuring out” what to do.

from “Intelligence: Knowns and Unknowns” (1995), a report published by the Board of Scientific Affairs of the American Psychological Association:

Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought. Although these individual differences can be substantial, they are never entirely consistent: a given person’s intellectual performance will vary on different occasions, in different domains, as judged by different criteria. Concepts of “intelligence” are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions.

Besides the foregoing definitions, individual psychology and learning researchers have also defined intelligence as follows:

  • Alfred Binet – Judgment, otherwise called “good sense,” “practical sense,” “initiative,” the faculty of adapting one’s self to circumstances … auto-critique.
  • David Wechsler – The aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment.
  • Lloyd Humphreys – “…the resultant of the process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills.”
  • Cyril Burt – Innate general cognitive ability
  • Howard Gardner – To my mind, a human intellectual competence must entail a set of skills of problem solving — enabling the individual to resolve genuine problems or difficulties that he or she encounters and, when appropriate, to create an effective product — and must also entail the potential for finding or creating problems — and thereby laying the groundwork for the acquisition of new knowledge.
  • Linda Gottfredson – The ability to deal with cognitive complexity.
  • Sternberg & Salter – Goal-directed adaptive behavior.
  • Reuven Feuerstein – The theory of Structural Cognitive Modifiability describes intelligence as “the unique propensity of human beings to change or modify the structure of their cognitive functioning to adapt to the changing demands of a life situation.”

What is considered intelligent varies with culture. For example, when asked to sort, the Kpelle people take a functional approach. A Kpelle participant stated “the knife goes with the orange because it cuts it.” When asked how a fool would sort, they sorted linguistically, putting the knife with other implements and the orange with other foods, which is the style considered intelligent in other cultures.

 

Human intelligence

Psychometrics

The approach to understanding intelligence with the most supporters and published research over the longest period of time is based on psychometric testing. It is also by far the most widely used in practical settings. Intelligence quotient (IQ) tests include the Stanford-Binet, Raven’s Progressive Matrices, the Wechsler Adult Intelligence Scale and the Kaufman Assessment Battery for Children. There are also psychometric tests that are not intended to measure intelligence itself but some closely related construct such as scholastic aptitude. In the United States examples include the SSAT, the SAT, the ACT, the GRE, the MCAT, the LSAT, and the GMAT.

Intelligence tests are widely used in educational, business, and military settings because of their efficacy in predicting behavior. IQ and g (discussed in the next section) are correlated with many important social outcomes: individuals with low IQs are more likely to be divorced, to have a child out of marriage, to be incarcerated, and to need long-term welfare support, while high IQ is associated with more years of education, higher-status jobs, and higher income. Intelligence is significantly correlated with successful training and performance outcomes, and IQ/g is the single best predictor of successful job performance.

 

General intelligence factor or g

There are many different kinds of IQ tests using a wide variety of test tasks. Some tests consist of a single type of task; others rely on a broad collection of tasks with different contents (visual-spatial, verbal, numerical) that call on different cognitive processes (e.g., reasoning, memory, rapid decisions, visual comparisons, spatial imagery, reading, and retrieval of general knowledge). Early in the 20th century, the psychologist Charles Spearman carried out the first formal factor analysis of correlations between various test tasks. He found a trend for all such tests to correlate positively with each other, which is called a positive manifold. Spearman found that a single common factor explained the positive correlations among tests. Spearman named it g, for “general intelligence factor”. He interpreted it as the core of human intelligence that, to a greater or lesser degree, influences success in all cognitive tasks and thereby creates the positive manifold. This interpretation of g as a common cause of test performance is still dominant in psychometrics. An alternative interpretation was recently advanced by van der Maas and colleagues. Their mutualism model assumes that intelligence depends on several independent mechanisms, none of which influences performance on all cognitive tests. These mechanisms support each other so that efficient operation of one of them makes efficient operation of the others more likely, thereby creating the positive manifold.
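
The logic of Spearman's analysis can be made concrete with a short simulation. The following is a minimal sketch, not drawn from the source: it assumes five hypothetical tests with made-up loadings on a single latent factor, and uses a principal-factor extraction (the leading eigenvector of the correlation matrix) as a simple stand-in for Spearman's original method.

```python
# Minimal illustration (assumed values, not real test data): simulate scores
# that share one common factor g, verify the all-positive correlation matrix
# (the "positive manifold"), and recover the loadings from a single factor.
import numpy as np

rng = np.random.default_rng(0)
n_people = 5000
loadings = np.array([0.8, 0.7, 0.6, 0.5, 0.4])  # hypothetical g-loadings

g = rng.standard_normal(n_people)               # latent general factor
noise = rng.standard_normal((n_people, 5))
scores = g[:, None] * loadings + noise * np.sqrt(1 - loadings**2)

R = np.corrcoef(scores, rowvar=False)           # 5 x 5 correlation matrix
off_diag = R[~np.eye(5, dtype=bool)]
print("positive manifold:", (off_diag > 0).all())

# First principal factor: eigenvector of R with the largest eigenvalue.
eigvals, eigvecs = np.linalg.eigh(R)            # eigenvalues in ascending order
estimated = np.abs(eigvecs[:, -1]) * np.sqrt(eigvals[-1])
print("estimated loadings:", np.round(estimated, 2))  # close to the true values
```

Under this single-factor model the estimated loadings track the assumed ones; these per-test loadings are what the next paragraph calls g-loadings.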

IQ tasks and tests can be ranked by how highly they load on the g factor. Tests with high g-loadings are those that correlate highly with most other tests. One comprehensive study investigating the correlations between a large collection of tests and tasks found that Raven’s Progressive Matrices have a particularly high correlation with most other tests and tasks. The Raven’s is a test of inductive reasoning with abstract visual material. It consists of a series of problems, sorted approximately by increasing difficulty. Each problem presents a 3 × 3 matrix of abstract designs with one empty cell; the matrix is constructed according to a rule, and the person must infer the rule to determine which of eight alternatives fits the empty cell. Because of their high correlation with other tests, Raven’s Progressive Matrices are generally acknowledged as a good indicator of general intelligence. This is problematic, however, because there are substantial gender differences on the Raven’s that are not found when g is measured directly by computing the general factor from a broad collection of tests.

 

Historical psychometric theories

Several different theories of intelligence have historically been important. They often emphasized multiple factors rather than a single general factor such as g.

Cattell-Horn-Carroll theory

Many recent broad IQ tests have been greatly influenced by the Cattell-Horn-Carroll theory, which is argued to reflect much of what is known about intelligence from research. The theory uses a hierarchy of factors with g at the top; under it are 10 broad abilities, which are in turn subdivided into 70 narrow abilities (a minimal sketch of this hierarchy as a data structure follows the list below). The broad abilities are:

  • Fluid Intelligence (Gf): includes the broad ability to reason, form concepts, and solve problems using unfamiliar information or novel procedures.
  • Crystallized Intelligence (Gc): includes the breadth and depth of a person’s acquired knowledge, the ability to communicate one’s knowledge, and the ability to reason using previously learned experiences or procedures.
  • Quantitative Reasoning (Gq): the ability to comprehend quantitative concepts and relationships and to manipulate numerical symbols.
  • Reading & Writing Ability (Grw): includes basic reading and writing skills.
  • Short-Term Memory (Gsm): is the ability to apprehend and hold information in immediate awareness and then use it within a few seconds.
  • Long-Term Storage and Retrieval (Glr): is the ability to store information and fluently retrieve it later in the process of thinking.
  • Visual Processing (Gv): is the ability to perceive, analyze, synthesize, and think with visual patterns, including the ability to store and recall visual representations.
  • Auditory Processing (Ga): is the ability to analyze, synthesize, and discriminate auditory stimuli, including the ability to process and discriminate speech sounds that may be presented under distorted conditions.
  • Processing Speed (Gs): is the ability to perform automatic cognitive tasks, particularly when measured under pressure to maintain focused attention.
  • Decision/Reaction Time/Speed (Gt): reflects the immediacy with which an individual can react to stimuli or a task (typically measured in seconds or fractions of seconds; not to be confused with Gs, which is typically measured in intervals of 2–3 minutes). See Mental chronometry.
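
As mentioned above, the hierarchy can be written down as a simple nested data structure. This is a minimal illustrative sketch, not an established dataset or library; the roughly 70 narrow abilities are left as empty placeholders.

```python
# Minimal sketch of the CHC hierarchy: g at the top, the ten broad abilities
# beneath it. The ~70 narrow abilities are omitted; empty lists are placeholders.
CHC_HIERARCHY = {
    "g": {
        "Gf":  {"name": "Fluid Intelligence", "narrow": []},
        "Gc":  {"name": "Crystallized Intelligence", "narrow": []},
        "Gq":  {"name": "Quantitative Reasoning", "narrow": []},
        "Grw": {"name": "Reading & Writing Ability", "narrow": []},
        "Gsm": {"name": "Short-Term Memory", "narrow": []},
        "Glr": {"name": "Long-Term Storage and Retrieval", "narrow": []},
        "Gv":  {"name": "Visual Processing", "narrow": []},
        "Ga":  {"name": "Auditory Processing", "narrow": []},
        "Gs":  {"name": "Processing Speed", "narrow": []},
        "Gt":  {"name": "Decision/Reaction Time/Speed", "narrow": []},
    }
}

# Example: list the broad-ability codes that sit directly under g.
print(list(CHC_HIERARCHY["g"]))  # ['Gf', 'Gc', 'Gq', ..., 'Gt']
```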

Modern tests do not necessarily measure all of these broad abilities. For example, Gq and Grw may be seen as measures of school achievement rather than of IQ, and Gt may be difficult to measure without special equipment. g was earlier often subdivided into only Gf and Gc, which were thought to correspond to the Nonverbal or Performance subtests and the Verbal subtests in earlier versions of the popular Wechsler IQ test. More recent research has shown the situation to be more complex.

 

Controversies

While not necessarily disputes about the psychometric approach itself, there are several controversies regarding the results of psychometric research. Examples include the relative roles of genetics and environment, the causes of average group differences, and the Flynn effect.

One criticism has been directed at early research methods such as craniometry. A reply has been that drawing conclusions from early intelligence research is like condemning the auto industry by criticizing the performance of the Model T.

Several critics, such as Stephen Jay Gould, have been critical of g, seeing it as a statistical artifact and arguing that IQ tests instead measure a number of unrelated abilities. The American Psychological Association’s report “Intelligence: Knowns and Unknowns” stated that IQ tests do correlate with one another and that the view that g is a statistical artifact is a minority one.

 

Other theories

There are critics of IQ who do not dispute the stability of IQ test scores or the fact that they predict certain forms of achievement rather effectively. They argue, however, that to base a concept of intelligence on IQ test scores alone is to ignore many important aspects of mental ability.

On the other hand, Linda S. Gottfredson (2006) has argued that the results of thousands of studies support the importance of IQ for school and job performance (see also the work of Schmidt & Hunter, 2004). IQ also predicts or correlates with numerous other life outcomes. In contrast, empirical support for non-g intelligences is lacking or very poor. She argued that, despite this, the idea of multiple non-g intelligences is attractive to many because it suggests that everyone can be smart in some way.

 

Multiple intelligences

Howard Gardner’s theory of multiple intelligences is based on studies not only of normal children and adults but also of gifted individuals (including so-called “savants”), of persons who have suffered brain damage, of experts and virtuosos, and of individuals from diverse cultures. This led Gardner to break intelligence down into at least eight different components: logical, linguistic, spatial, musical, kinesthetic, interpersonal, intrapersonal, naturalist, and existential intelligences. He argues that psychometric tests address only linguistic and logical intelligence plus some aspects of spatial intelligence. A major criticism of Gardner’s theory is that it has never been tested, or subjected to peer review, by Gardner or anyone else, and indeed that it is unfalsifiable. Others (e.g. Locke, 2005) have suggested that recognizing many specific forms of intelligence (specific aptitude theory) implies a political, rather than scientific, agenda, intended to appreciate the uniqueness in all individuals rather than to recognize potentially true and meaningful differences in individual capacities. Schmidt and Hunter (2004) suggest that the predictive validity of specific aptitudes over and above that of general mental ability, or “g”, has not received empirical support.

In his book Multiple Intelligences: The Theory in Practice, Gardner briefly describes the seven intelligences he originally introduced. He begins with linguistic and logical intelligence because he believed society has placed these two subjects on a pedestal; Gardner himself, however, holds that all of the intelligences he identified are of equal standing. Note: at the time Multiple Intelligences: The Theory in Practice was published, the naturalist and existential intelligences had not yet been added.

  • Linguistic Intelligence: The kind of ability exhibited in its fullest form, perhaps, by poets.
  • Logical-Mathematical Intelligence: Logical and mathematical ability, as well as scientific ability. Gardner believed that Jean Piaget may have thought he was studying all of intelligence, but in truth Piaget was focusing only on logical-mathematical intelligence.
  • Spatial Intelligence: The ability to form a mental model of a spatial world and to be able to maneuver and operate using that model.
  • Musical Intelligence: Leonard Bernstein had lots of it; Mozart, presumably, had even more.
  • Bodily-Kinesthetic Intelligence: The ability to solve problems or to fashion products using one’s whole body, or parts of the body; exemplified by dancers, athletes, surgeons, craftspeople, etc.
  • Interpersonal Intelligence: The ability to understand people. People strong in interpersonal intelligence are often teachers, politicians, clinicians, religious leaders, etc.
  • Intrapersonal Intelligence: A correlative ability, turned inward. It is a capacity to form an accurate, veridical model of oneself and to be able to use that model to operate effectively in life.

 

Triarchic theory of intelligence

Robert Sternberg proposed the triarchic theory of intelligence to provide a more comprehensive description of intellectual competence than traditional differential or cognitive theories of human ability. The triarchic theory describes three fundamental aspects of intelligence. Analytic intelligence comprises the mental processes through which intelligence is expressed. Creative intelligence is necessary when an individual is confronted with a challenge that is nearly, but not entirely, novel or when an individual is engaged in automatizing the performance of a task. Practical intelligence is bound in a sociocultural milieu and involves adaptation to, selection of, and shaping of the environment to maximize fit in the context. The triarchic theory does not argue against the validity of a general intelligence factor; instead, the theory posits that general intelligence is part of analytic intelligence, and only by considering all three aspects of intelligence can the full range of intellectual functioning be fully understood.

More recently, the triarchic theory has been updated and renamed the Theory of Successful Intelligence by Sternberg. Intelligence is defined as an individual’s assessment of success in life by the individual’s own (idiographic) standards and within the individual’s sociocultural context. Success is achieved by using combinations of analytical, creative, and practical intelligence. The three aspects of intelligence are referred to as processing skills. The processing skills are applied to the pursuit of success through what were the three elements of practical intelligence: adapting to, shaping of, and selecting of one’s environments. The mechanisms that employ the processing skills to achieve success include utilizing one’s strengths and compensating or correcting for one’s weaknesses. Sternberg’s theories and research on intelligence remain contentious within the scientific community.

 

Piaget’s theory and Neo-Piagetian theories

In Piaget’s theory of cognitive development, the focus is not on mental abilities but rather on a child’s mental models of the world. As a child develops, increasingly accurate models of the world are formed, enabling the child to interact with the world better. One example is object permanence, in which the child develops a model in which objects continue to exist even when they cannot be seen, heard, or touched.

Piaget’s theory described four main stages and many sub-stages of development. Degree of progress through these stages is correlated with, but not identical to, psychometric IQ.

Neo-Piagetian theories of cognitive development expand Piaget’s theory in various ways: by also considering psychometric-like factors such as processing speed and working memory, “hypercognitive” factors like self-monitoring, additional stages, and greater consideration of how progress may vary across domains such as the spatial or the social.

Piaget’s theory has been criticized because the age at which a new model of the world, such as object permanence, appears depends on how the testing is done (see the article on object permanence). More generally, the theory may be very difficult to test empirically because of the difficulty of proving or disproving that a mental model is the explanation for the results of the testing.

 

Emotional intelligence

Emotional intelligence is a claimed ability, capacity, or skill, or a self-perceived ability, to identify, assess, and control the emotions of oneself, of others, and of groups. Different models have been proposed for the definition of emotional intelligence, and there is disagreement about how the term should be used. The concept is controversial (Locke, 2005), with some seeing it as a skill or a facet of personality rather than a form of intelligence, and its predictive ability, especially after controlling for the effects of IQ and the Big Five personality traits, is disputed.

 

Latent inhibition

Latent inhibition is a technical term used in classical conditioning: a stimulus that has had no significance in the past takes longer to acquire meaning (as a signal) than a new stimulus. It is “a measure of reduced learning about a stimulus to which there has been prior exposure without any consequence.” One is practicing latent inhibition when one tries to ignore an ongoing sound (like an air conditioner) or tune out the conversation of others.

This tendency to disregard or even inhibit formation of memory, by preventing associative learning of observed stimuli, is an unconscious response and is assumed to prevent sensory overload and cognitive overload. Latent inhibition is observed in many species, and is believed to be an integral part of learning, enabling an organism to interact successfully in an environment (e.g., social).

Latent inhibition (LI) is demonstrated when a previously unattended stimulus is less effective in a new learning situation than a novel stimulus. The term “latent inhibition” dates back to Lubow and Moore (1959). The LI effect is “latent” in that it is not exhibited in the stimulus pre-exposure phase, but rather in the subsequent test phase. “Inhibition”, here, simply connotes that the effect is expressed in terms of relatively poor learning. The LI effect is extremely robust, appearing in all mammalian species that have been tested and across many different learning paradigms, thereby suggesting some adaptive advantages, such as protecting the organism from associating irrelevant stimuli with other, more important, events.

 

Theories

The LI effect has received a number of theoretical interpretations. One class of theory holds that inconsequential stimulus pre-exposure results in reduced associability for that stimulus. The loss of associability has been attributed to a variety of mechanisms that reduce attention, which then must be reacquired in order for learning to proceed normally. Alternatively, it has been proposed that LI results from retrieval failure rather than acquisition failure. On this view, following stimulus pre-exposure, the acquisition of the new association to the old stimulus proceeds normally; however, in the test stage, two associations (the stimulus-no consequence association from the pre-exposure stage and the stimulus-consequence association of the acquisition stage) are retrieved and compete for expression. The group not pre-exposed to the stimulus performs better than the pre-exposed group because, for the non-pre-exposed group, only the second association is available to be retrieved.

 

Variation

LI is affected by many factors, one of the most important of which is context. In virtually all LI studies, the context remains the same in the stimulus pre-exposure and test phases. However, if the context is changed from the pre-exposure to the test phase, LI is severely attenuated. The context-dependency of LI plays a major role in all current theories of LI, and in particular in their applications to schizophrenia, where it has been proposed that the relationship between the pre-exposed stimulus and the context breaks down: context no longer sets the occasion for the expression of the stimulus-no consequence association. Consequently, working memory is inundated with experimentally familiar but phenomenally novel stimuli, each competing for the limited resources required for efficient information processing. This description fits well with the positive symptoms of schizophrenia, particularly high distractibility, as well as with research findings.

 

Physiology

The assumption that the attentional process that produces LI in normal subjects is dysfunctional in schizophrenia patients has stimulated considerable research with humans as well as with rats and mice. Much data indicates that dopamine agonists and antagonists modulate LI in rats and in normal humans. Dopamine agonists, such as amphetamine, abolish LI, while dopamine antagonists, such as haloperidol and other antipsychotic drugs, produce a super-LI effect. In addition, manipulations of putative dopamine pathways in the brain have the expected effects on LI: hippocampal and septal lesions interfere with the development of LI, as do lesions in selective portions of the nucleus accumbens. With human subjects, there is evidence that acute, non-medicated schizophrenics show reduced LI compared to chronic, medicated schizophrenics and to healthy subjects, while there is no difference in the amount of LI between the latter two groups. Finally, symptomatically normal subjects who score high on self-report questionnaires that measure psychotic-proneness or schizotypality also exhibit reduced LI compared to those who score low on those scales.

In addition to illustrating a fundamental strategy of information processing and providing a useful tool for examining attentional dysfunctions in pathological groups, the LI procedure has been used to screen for drugs that can ameliorate schizophrenia symptoms. LI has also been used to explain why certain therapies, such as alcohol aversion treatments, are not as effective as might be expected. On the other hand, LI procedures may be useful in counteracting some of the undesirable side effects that frequently accompany radiation therapy and chemotherapy for cancer, such as food aversion. LI research has also suggested techniques that may be efficacious in the prophylactic treatment of certain fears and phobias. Of popular interest, several studies have attempted to relate LI to creativity.

In summary, the basic LI phenomenon represents some output of a selective attention process that results in learning to ignore irrelevant stimuli. It has become an important tool for understanding information processing in general, as well as attentional dysfunctions in schizophrenia, and it has implications for a variety of practical problems.

 

Pathology

Low latent inhibition

Most people are able to ignore the constant stream of incoming stimuli, but this capability is reduced in those with low latent inhibition, which appears to make a person more distractible. It is hypothesized that a low level of latent inhibition can cause either psychosis, a high level of creative achievement, or both, usually depending on the individual’s intelligence.

Those of above average intelligence are thought to be capable of processing this stream effectively, enabling their creativity. Those with less than average intelligence, on the other hand, are less able to cope, and as a result are more likely to suffer from mental illness and sensory overload.

High levels of the neurotransmitter dopamine (or its agonists) in the brain have been shown to decrease latent inhibition. Certain dysfunctions of the neurotransmitters glutamate, serotonin and acetylcholine have also been implicated.

Low latent inhibition is not a mental disorder but an observed personality trait, and a description of how an individual absorbs and assimilates data or stimuli. Furthermore, it does not necessarily lead to mental disorder or creative achievement—this is, like many other factors of life, a case of environmental and predispositional influences, whether these be positive (e.g., education) or negative (e.g., abuse) in nature.

There is at least some evidence to suggest that cannabis use can reduce one’s latent inhibition, contributing to greater distractibility and sensory overload.

 

Evolution of intelligence

Our hominid and human ancestors evolved large and complex brains exhibiting an ever-increasing intelligence through a long evolutionary process. Many different explanations have been proposed.

 

Improving intelligence

Eugenics is a social philosophy which advocates the improvement of human hereditary traits through various forms of intervention. Conscious efforts to influence intelligence raise ethical issues. Eugenics has variously been regarded as meritorious or deplorable in different periods of history, falling greatly into disrepute after the defeat of Nazi Germany in World War II.

Neuroethics considers the ethical, legal and social implications of neuroscience, and deals with issues such as the difference between treating a human neurological disease and enhancing the human brain, and how wealth impacts access to neurotechnology. Neuroethical issues interact with the ethics of human genetic engineering.

Because intelligence appears to be at least partly dependent on brain structure and on the genes shaping brain development, it has been proposed that genetic engineering could be used to enhance intelligence, a process sometimes called biological uplift in science fiction. Experiments on genetically modified mice have demonstrated superior learning and memory ability in various behavioral tasks.

Transhumanist theorists study the possibilities and consequences of developing and using techniques to enhance human abilities and aptitudes, and of individuals ameliorating what they regard as undesirable and unnecessary aspects of the human condition.

The evolution of human intelligence refers to a set of theories that attempt to explain how human intelligence has evolved. The question is closely tied to the evolution of the human brain, and to the emergence of human language.

The timeline of human evolution spans some 7 million years, from the separation of the Pan genus until the emergence of behavioral modernity by 50,000 years ago. Of this timeline, the first 3 million years concern Sahelanthropus, the following 2 million concern Australopithecus, while the final 2 million span the history of actual human species (the Paleolithic).

Many traits of human intelligence, such as empathy, theory of mind, mourning, ritual, and the use of symbols and tools, are already apparent in great apes, although with less sophistication than in humans.

 

History

Hominidae

The great apes show considerable abilities for cognition and empathy.

Chimpanzees make tools and use them to acquire foods and for social displays; they have sophisticated hunting strategies requiring cooperation, influence and rank; they are status conscious, manipulative and capable of deception; they can learn to use symbols and understand aspects of human language including some relational syntax, concepts of number and numerical sequence.

In one study, young chimpanzees outperformed human college students in tasks requiring remembering numbers. Chimpanzees are capable of empathy, having been observed to feed turtles in the wild, and show curiosity in wildlife (such as pythons).

 

Hominina

Around 10 million years ago, the Earth’s climate entered a cooler and drier phase, which led eventually to the ice ages beginning some 2.6 million years ago. One consequence of this was that the north African tropical forest began to retreat, being replaced first by open grasslands and eventually by desert (the modern Sahara). This forced tree-dwelling animals to adapt to their new environment or die out. As their environment changed from continuous forest to patches of forest separated by expanses of grassland, some primates adapted to a partly or fully ground-dwelling life. Here they were exposed to predators, such as the big cats, from whom they had previously been safe.

Some Hominina (Australopithecines) adapted to this challenge by adopting bipedalism: walking on their hind legs. This gave their eyes greater elevation and the ability to see approaching danger further off. It also freed the forelimbs (arms) from the task of walking and made the hands available for tasks such as gathering food. At some point the bipedal primates developed handedness, giving them the ability to pick up sticks, bones and stones and use them as weapons, or as tools for tasks such as killing smaller animals, cracking nuts, or cutting up carcasses. In other words, these primates developed the use of primitive technology. Bipedal tool-using primates form the Hominina subtribe, of which the earliest species, such as Sahelanthropus tchadensis, date to about 7 to 5 million years ago.

From about 5 million years ago, the Hominin brain began to develop rapidly in both size and differentiation of function.

It has been shown that great ape cooperation and communication are severely impeded by their competitiveness, and thus that the apes would revolutionize their culture-bearing ability if they could shrug off that competitiveness. It is also well known that even early hominins lacked the large, sharp canine teeth that apes use as a threat signal, suggesting that prehumans simply had no use for threat signals; in other words, they had already transcended ape competitiveness and thus developed superior cooperation and communication.

 

Homo

By 2.4 million years ago Homo habilis had appeared in East Africa: the first known human species, and the first known to make stone tools.

The use of tools conferred a crucial evolutionary advantage and required a larger and more sophisticated brain to coordinate the fine hand movements required for the task. The evolution of a larger brain created a problem for early humans, however. A larger brain requires a larger skull, and thus requires the female to have a wider birth canal for the newborn’s larger skull to pass through. But if the female’s birth canal grew too wide, her pelvis would be so wide that she would lose the ability to run, still a necessary skill in the dangerous world of 2 million years ago.

The solution to this was to give birth at an early stage of fetal development, before the skull grew too large to pass through the birth canal. This adaptation enabled the human brain to continue to grow, but it imposed a new discipline. The need to care for helpless infants for long periods of time forced humans to become less mobile. Human bands increasingly stayed in one place for long periods, so that females could care for infants, while males hunted food and fought with other bands that competed for food sources. As a result, humans became even more dependent on tool-making to compete with other animals and other humans, and relied less on body size and strength.

About 200,000 years ago, Europe and the Middle East were colonized by Neanderthals, who became extinct by 20,000 years ago, following the appearance of modern humans in the region from about 40,000 years ago.

 

Homo sapiens

Around 200,000 years ago, Homo sapiens first appeared in East Africa. It is unclear to what extent these early modern humans had developed language, music, religion, etc. They spread throughout Africa over the following 50,000 years or so.

According to proponents of the Toba catastrophe theory, the climate in non-tropical regions of the earth experienced a sudden freezing about 70,000 years ago, because of a huge explosion of the Toba volcano that filled the atmosphere with volcanic ash for several years. This reduced the human population to less than 10,000 breeding pairs in equatorial Africa, from which all modern humans are descended. Being unprepared for the sudden change in climate, the survivors were those intelligent enough to invent new tools and ways of keeping warm and finding new sources of food (for example, adapting to ocean fishing based on prior fishing skills used in lakes and streams that became frozen).

Around 100,000–80,000 years ago, three main lines of Homo sapiens diverged: bearers of mitochondrial haplogroup L1 (mtDNA) / A (Y-DNA) colonized Southern Africa (the ancestors of the Khoisan/Capoid peoples), bearers of haplogroup L2 (mtDNA) / B (Y-DNA) settled Central and West Africa (the ancestors of Niger-Congo and Nilo-Saharan speaking peoples), while the bearers of haplogroup L3 remained in East Africa.

The “Great Leap Forward” leading to full behavioral modernity set in only after this separation. Rapidly increasing sophistication in tool-making and behaviour is apparent from about 80,000 years ago, and the migration out of Africa followed toward the very end of the Middle Paleolithic, some 60,000 years ago. Fully modern behaviour, including figurative art, music, self-ornamentation, trade, and burial rites, is evident by 30,000 years ago. The oldest unequivocal examples of prehistoric art date to this period, the Aurignacian and Gravettian periods of prehistoric Europe, such as the Venus figurines and cave painting (Chauvet Cave) and the earliest musical instruments (the bone pipe of Geissenklösterle, Germany, dated to about 36,000 years ago).

 

Models

Social brain hypothesis

The model was proposed by Robin Dunbar, who argues that human intelligence did not evolve primarily as a means of solving ecological problems but rather as a means of surviving in large and complex social groups. Some of the behaviors associated with living in large groups include reciprocal altruism, deception, and coalition formation. These group dynamics relate to theory of mind, the ability to understand the thoughts and emotions of others, though Dunbar himself admits in the same book that it is not the flocking itself that causes intelligence to evolve (as shown by ruminants).

Dunbar argues that when the size of a social group increases, the number of different relationships in the group may increase by orders of magnitude. Chimpanzees live in groups of about 50 individuals, whereas humans typically have a social circle of about 150 people, now referred to as Dunbar’s number. According to the social brain hypothesis, when hominids started living in large groups, selection favored greater intelligence. As evidence, Dunbar cites a relationship between neocortex size and group size in various mammals. However, meerkats have far more social relationships than their small brain capacity would suggest. Another hypothesis is that it is actually intelligence that causes social relationships to become more complex, because intelligent individuals are more difficult to get to know.
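
The combinatorial point can be made concrete: in a group of n individuals there are n(n-1)/2 possible pairwise relationships, so the count grows roughly with the square of group size. A minimal sketch, using the group sizes quoted above:

```python
# Number of possible pairwise relationships (dyads) in a group of n individuals.
def pairwise_relationships(n: int) -> int:
    return n * (n - 1) // 2

print(pairwise_relationships(50))   # 1225 dyads in a chimpanzee-sized group
print(pairwise_relationships(150))  # 11175 dyads at Dunbar's number
```

Tripling group size from 50 to 150 thus multiplies the number of possible dyads roughly ninefold, before even counting triads and coalitions, which grow faster still.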

 

Sexual selection

This model was proposed by Geoffrey Miller, who argues that human intelligence is unnecessarily sophisticated for the needs of hunter-gatherers to survive. He argues that manifestations of intelligence such as language, music, and art are of no utilitarian value to the survival of ancient hominids. Rather, intelligence may have been a fitness indicator: hominids would have selected for intelligence as a proxy for healthy genes, and a positive feedback loop of runaway sexual selection would have led to the evolution of human intelligence in a relatively short period.

 

Ecological dominance-social competition model

A predominant model describing the evolution of human intelligence is ecological dominance-social competition (EDSC), explained by Mark V. Flinn, David C. Geary and Carol V. Ward based mainly on work by Richard D. Alexander. According to the model, human intelligence was able to evolve to significant levels because of the combination of increasing domination over habitat and the increasing importance of social interactions. As a result, the primary selective pressure for increasing human intelligence shifted from learning to master the natural world to competition for dominance among members or groups of its own species.

As advancement and survival within an increasingly complex social structure favored ever more advanced social skills, communication of concepts through increasingly complex language patterns ensued. Since competition had shifted bit by bit from controlling “nature” to influencing other humans, it became advantageous to outmaneuver other members of the group seeking leadership or acceptance by means of more advanced social skills. More social and communicative individuals would thus be favored by natural selection.

Common misconceptions about the model are that (1) it implies that transitional early humans with gradually evolving brains (such as Homo erectus) were ecologically dominant in the sense of no longer being subject to Darwin’s traditional hostile forces of nature such as predation (indeed, big cats ate them), and (2) that early humans were “competitive” but not “cooperative”. The EDSC model, however, posits that humans cooperate in order to effectively compete against other coalitions of humans. Hence it predicts increasing levels of cooperation, including systems of ethics and morality, and apparent reciprocity as shown by their care for their sick and disabled, e.g. the Dmanisi jaw from Georgia.

 

Intelligence as a resistance signal

Human intelligence developed to an extreme level that is not necessarily adaptive in an evolutionary sense. Firstly, larger-headed babies are more difficult to give birth to, and large brains are costly in terms of nutrient and oxygen requirements. Thus the direct adaptive benefit of human intelligence is questionable, at least in modern societies, while it is difficult to study in prehistoric societies. However, alleles coding for even larger human brains are spreading continuously even in modern societies. This suggests that cleverer humans may gain indirect selective benefits.

A recent study argues that human cleverness is simply selected within the context of sexual selection as an honest signal of genetic resistance against parasites and pathogens. The number of people living with cognitive abilities seriously damaged by childhood infections is high, estimated in the hundreds of millions. Even more people live with moderate cognitive impairments that are not classified as ‘diseases’ by medical standards but that may still make them be considered inferior mates by potential sexual partners. Pathogens currently playing a major role in this global challenge to human cognitive capabilities include viral infections like meningitis, protists like Toxoplasma and Plasmodium, and animal parasites like intestinal worms and schistosomes.

Thus, widespread, virulent, and archaic infections are greatly involved. Given this situation, our sexual preferences for clever partners increase the chance that our descendants will inherit the best resistance alleles. Just as some people search for mates based on their (perceived) bodily beauty, height, or social position (e.g. wealth or fame), or on psychological traits such as benevolence or confidence, people searching for clever partners are searching for signals of good resistance genes. Intelligence appears to be one of these signals. But the term intelligence, as used for humans, is ill-defined.

 

Group Selection and Evolvability

Group selection theory contends that organism characteristics that provide benefits to a group (clan, tribe, or larger population) can evolve despite individual disadvantages such as those cited above. The group benefits of intelligence (including language, the ability to communicate between individuals, the ability to teach others, and other cooperative aspects) have apparent utility in increasing the survival potential of a group.

Intelligence is one of a class of inherited characteristics whose utility depends on the acquisition of something (in this case, experience or information concerning the outside world) that can be retained indefinitely by an individual but not genetically transmitted to descendants. The ability of an organism to acquire such information and then transmit it non-genetically to descendants, who can then benefit from the experience of their parent without having to acquire it themselves, appears to be a major group advantage; it essentially multiplies the intelligence of an individual by allowing progressive group accumulation of experience.

Evolvability, another proposed modification to classical evolutionary theory, suggests a connection between a programmed limitation of organism life span and the evolution of intelligence. The suggestion is that without a limited life span, the acquired characteristic (experience) would tend to override the inherited characteristic (intelligence): an older and more experienced animal would tend to have an advantage over a younger, more intelligent but less experienced animal, thus interfering with the evolution of intelligence. This factor is ameliorated by an organism design that limits life span. See Evolution of ageing.

 

Nutritional Status

Higher cognitive functioning develops better in an environment with adequate nutrition; diets deficient in iron, zinc, protein, iodine, B vitamins, omega-3 fatty acids, magnesium, and other nutrients, whether in the mother during pregnancy or in the child during development, can result in lower intelligence. While these inputs did not have an effect on the evolution of intelligence, they do govern its expression. A higher intelligence could be a signal that an individual comes from and lives in a physical and social environment where nutrition levels are high, whereas a lower intelligence could imply that a child (and/or the child’s mother) comes from a physical and social environment where nutrition levels are low. Previc emphasizes the contribution of nutritional factors, especially meat and shellfish consumption, to elevations of dopaminergic activity in the brain, which may have been responsible for the evolution of human intelligence, since dopamine is crucial to working memory, cognitive shifting, abstract and distant concepts, and other hallmarks of advanced intelligence.

 

Flexible problem solving

The claim that such high intelligence “lacks survival value”, made by proponents of social intelligence and sexual selection explanations, invariably assumes a stable environment. If climate change is factored in, however, the evolution of human intelligence can be perfectly explained by flexible problem solving during those climate changes.

 

Factors associated with intelligence

A number of factors are known to correlate with IQ, but since correlation does not imply causation, the true relationship between these factors is uncertain unless other forms of evidence are also available. There are also group differences in IQ.

 

Animal and plant intelligence

Although humans have been the primary focus of intelligence researchers, scientists have also attempted to investigate animal intelligence, or more broadly, animal cognition. These researchers are interested in studying both mental ability in a particular species, and comparing abilities between species. They study various measures of problem solving, as well as mathematical and language abilities. Some challenges in this area are defining intelligence so that it means the same thing across species (e.g. comparing intelligence between literate humans and illiterate animals), and then operationalizing a measure that accurately compares mental ability across different species and contexts.

Wolfgang Köhler’s pioneering research on the intelligence of apes is a classic example of research in this area. Stanley Coren’s book, The Intelligence of Dogs is a notable popular book on the topic. Nonhuman animals particularly noted and studied for their intelligence include chimpanzees, bonobos (notably the language-using Kanzi) and other great apes, dolphins, elephants and to some extent parrots and ravens. Controversy exists over the extent to which these judgments of intelligence are accurate.

Cephalopod intelligence also provides important comparative study. Cephalopods appear to exhibit characteristics of significant intelligence, yet their nervous systems differ radically from those of most other notably intelligent life-forms (mammals and birds).

It has been argued that plants should also be classified as being intelligent based on their ability to sense the environment and adjust their morphology, physiology and phenotype accordingly.

 
