

Once upon a time, it was the size of your head that mattered. Then somebody had the bright idea of measuring people's intelligence by setting them little tests. Before long, assessing mental ability had become a self-justifying industry; self-justifying because, as sceptics were quick to point out, the definition of mental ability tended to vary depending on the nature of the tests. And soon the tests themselves were changing, adapting to the requirements of the times. So is it any wonder that, over a century later, there are more conflicting ideas than ever about the nature of intelligence? Is it a single entity or many? Is it fixed from birth or can it develop? And how, if at all, can it be measured in any meaningful way? We are, it seems, in the middle of an intellectual quagmire. So isn't it time to retrace our steps?


In the beginning, there was craniometry, the physical measuring of the human head. Inspired by Charles Darwin to rank all living creatures in order of merit, and at the same time determined to prove that this "soft science" was as solid as any Newtonian physics, 19th-century social scientists were set on quantifying the human mind. But although they satisfied themselves that whites had bigger brains than blacks, and that women were generally lacking in grey matter, the odd pea-brained genius always popped up to cast doubt on their findings. So when Alfred Binet, a researcher in psychology at the Sorbonne, developed a test to measure intellectual ability, his ideas found fertile ground. Binet, for most of his career an advocate of craniometry, had been commissioned by France's education minister to identify children who might benefit from special help, and in 1905 he produced a test that brought together a range of skills.

Three years later, he assigned an age level to each task, and used the test to determine what he called the "mental age" of participants. He insisted, however, that such numbers were merely shorthand, and were in no way intended to grade mental worth or to suggest that intelligence was a single, measurable entity fixed at birth. Such thinking, he said, would amount to "brutal pessimism", and such measurements would be "an illusion".


In 1912, the German psychologist William Stern was studying Binet test scores when he noticed that the variations in mental age increased proportionally with a student's chronological age. By dividing the mental age by the chronological age and multiplying the result by 100 to remove the decimal point, he arrived at a constant, which he called the "intelligence quotient".
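Stern's quotient is simple arithmetic, and a minimal sketch makes the idea concrete (the function name and rounding convention here are illustrative, not Stern's own notation):

```python
def ratio_iq(mental_age, chronological_age):
    """Stern's 1912 "intelligence quotient": mental age divided by
    chronological age, scaled by 100 to clear the decimal point."""
    return round(100 * mental_age / chronological_age)

# A ten-year-old performing at the level of a typical twelve-year-old:
# 12 / 10 * 100 gives a ratio IQ of 120.
print(ratio_iq(12, 10))
```

On this scheme a child whose mental and chronological ages coincide always scores 100, which is why 100 came to be treated as the "average" IQ.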

Paradoxically, Stern advocated the study of the total individual, and wrote: "There are persons who have a pretty high grade of general intelligence, but who manifest it much better in critical than in synthetic work; again, there are persons in whom the receptive activities of the intelligence are superior to the more spontaneous activities, and so on."

Yet, for all this, his lasting gift to posterity was to be that single, constant number, which we now know as "IQ".


Nowhere was there more interest in Binet's work than in the United States, a melting-pot nation that in the early years of the 20th century was anxiously engaged in defining itself. Gregor Mendel's recently rediscovered writings on heredity were all the rage - albeit in a simplified form that amounted to little more than the idea that two bad genes add up to one very small pea.

So when research psychologist Henry H Goddard came across Binet's articles, he put two and two together and made five. Goddard worked at the Vineland training school for feeble-minded girls and boys in New Jersey, where he had a captive sample through which to develop his theories.

Applying Stern's IQ grading system, he categorised his charges, in descending order, as "morons", "imbeciles" and "idiots". Then he applied a little cod Mendel and concluded that the nation's gene pool would be severely compromised if such defectives were permitted to reproduce.

Rather, he said, they should be rounded up and confined in institutions like Vineland. At Goddard's behest, immigrants arriving at Ellis Island were forced to sit mental tests, and those falling below a certain level were barred from the Land of the Free, regardless of what part factors such as confusion, cultural diversity or plain fear might have played in their poor performance.


If Binet's simple test and Stern's IQ scores could be used to weed out mental defectives, might it not be possible to test and rank the more able?

Lewis M Terman, a professor at Stanford University, tightened up and standardised the test, turning it into a rigid product that allowed no leeway for variable responses. Armed with his "Stanford-Binet", he set about reordering society. Those with high IQs should be leaders, he proclaimed, while barbers, for instance, could get by with an IQ of 85 at most. Terman got a shock when he ran his tests on "hobos", most of whom proved to have higher IQs than the average police officer, but he continued undeterred.

A job for the army in the First World War gave the budding IQ industry the break it needed. Robert M Yerkes, a psychologist at Harvard, persuaded the US Army to assess the aptitude of 1.75 million men by giving them intelligence tests. The results of this first mass screening provided testers with the body of statistics they coveted, even if those statistics were highly dubious (the "average" adult supposedly had a mental age of 13).

More alarming than such nonsense was the "discovery" that Slavs and darker Europeans were less intelligent than their fairer cousins, and that Africans verged on the moronic. By 1924, evidence acquired from the Stanford-Binet was being used to restrict immigration of the "genetically undesirable". At the same time, the test that won the war in Europe was being drafted into schools to wage war on ignorance.

And while Binet's original test was administered by a trained tester working with individual children, the package Yerkes was pushing could be dispensed en masse. "Scoring is unusually simple," said one advertisement for the Stanford-Binet, the whole process of awarding a number to a student taking just 30 minutes. The age of "brutal pessimism" had arrived.


Thanks in part to this relentless marketing of IQ tests (there were fortunes to be made in royalties alone), the public and the education establishments on both sides of the Atlantic quickly latched on to the idea of intelligence as a single entity, the quality of which - high, average or low - was given at birth and remained fixed for life. In post-war Britain, the findings of Cyril Burt, who had carried out research in the 1930s on twins raised apart, led to an intelligence test being incorporated into the new 11-plus exam (after Burt's death in 1971, it was suggested that he had invented much of his data). From the 1950s, the 11-plus was crucial in deciding whether children should be educated to GCE level or be consigned to the so-called secondary modern schools.

Critics of intelligence testing have long argued that tests are culturally biased and class loaded, and measure little but an individual's ability to perform well in intelligence tests (why else would prospective participants be encouraged to "practise" in advance?). But once the simplistic idea of intelligence had taken root, it proved extraordinarily resilient. Today, the BBC happily "tests the nation" by way of an "interactive IQ show", while Mensa, the club founded in 1946 for people "whose IQ is in the top 2 per cent of the population", boasts 100,000 members worldwide and 28,000 in the UK ("including 2,200 junior Mensans under the age of 16").

But it would be wrong to suppose that any such consensus is mirrored in academic circles. For among those whose job it is to devise theories of intelligence - to consider the nature of what is being tested - the argument has moved onwards and outwards. In the 19th century, Sir Francis Galton related intelligence to the keenness of the senses. Subsequent definitions have ranged from the ability to solve problems to the ability to learn, or to adapt to novel situations. A "nature or nurture" debate has long raged between those who believe intelligence is entirely inherited, and those who argue that environmental factors - anything from diet to education to hearing the music of Mozart - make a difference over time. Then there's that vexed question of the "single entity".


After studying a group of boys at an English prep school in the early years of the 20th century, Charles Spearman proposed the idea of something he called "g" - a "general level" of mental energy or brain power, with a consistent influence that could be detected over a range of tests. He went on to suggest that a minimum level of "g" might be a reasonable requirement for the right to vote, "and, above all, for the right to have offspring".

Today, the debate on intelligence still rages between those who believe in the paramount importance of Spearman's general factor of intelligence (without necessarily endorsing his views on social policy) and those who discern only a family of separate mental abilities at work.


The idea of many intelligences has been around in academic circles since the 1920s, and indeed has much in common with classical and medieval thinking.

In the 1960s, the Californian psychologist J P Guilford proposed a three-dimensional structure of intelligence and suggested the existence of 120 separate categories that defined the intellectual abilities that make up an individual's "intelligence". The New York psychologist David Wechsler also described intelligence as a multi-faceted "aggregate or global capacity", and even devised a battery of tests that are widely used by educational psychologists when assessing the strengths and weaknesses of individual children. But it wasn't until 1983, when the Harvard-based educational psychologist Howard Gardner published a book called Frames Of Mind (see resources), that the wider world began to scratch its head and take notice.

According to Gardner's "MI theory", humans exist in many contexts, which call for and nourish varied assemblies of intelligence. While acknowledging that "g" can accurately predict school success, he says this is hardly surprising given that most tests are of the paper and pencil variety, and test the linguistic and logical-mathematic abilities schools value. "While I do not question that some individuals may have the potential to excel in more than one sphere, I strongly challenge the notion of large general powers," he says.

"To my way of thinking, the mind has the potential to deal with several kinds of content, but an individual's facility with one content has little predictive power about his or her facility with other kinds of content." He cites at least seven types of ability or intelligence - linguistic, musical, logical-mathematical, spatial, bodily-kinaesthetic, interpersonal and intrapersonal and argues that each of these areas may develop independently. Individuals may have strengths and weaknesses, and intelligences can be fashioned and combined in many ways. More recently, in his book The Unschooled Mind: how children think and how schools should teach, Gardner has discussed a range of learning styles, and calls on teachers to jettison the "fast-food approach to education", and seek instead to accommodate all children, not just those who are suited to traditional ways of learning.


As a sixth-grader at a US elementary school, Robert Sternberg was made to resit an intelligence test with fifth-graders. "The absurdity of that situation helped me get over the test anxiety," he wrote later. And, inspired by that breakthrough, he subsequently devised his own "Sternberg test of mental ability", which he administered to classmates as part of a science project.

It was the start of a career devoted to the study of IQ testing and existing theories of intelligence. In 1985, he introduced his own "triarchic", or three-part theory of intelligence, based on his understanding of the relationship between intelligence and experience, the external world, and the internal world of the individual. One of his most important contributions has been the redefinition of intelligence to incorporate practical knowledge. "Real life is where intelligence operates, and not in the classroom," he says. "The true measure of success is not how well one does in school, but how well one does in life." Sternberg advocates a system known as dynamic assessment, which was pioneered in Russia and is currently being used in the London borough of Southwark to assess children with special needs. The system measures a child's ability to benefit from instruction by first showing the subject how to perform a given task.


The theories of Gardner and Sternberg have struck a chord in many UK schools, providing a rationale for teachers who have long believed the strictures of the national curriculum hinder their efforts to reach less academically gifted children. With teaching strategies aimed at individual types of intelligence now available off the shelf and by the bookful, and a range of tests designed to identify each individual's strengths and weaknesses, the possibilities for accommodating all children seem to extend in all directions. Yet while expressions such as "emotional intelligence" and "kinaesthetic learning" now trip off every tongue, British secondary schools are increasingly using tests not far removed from the old IQ tests to stream their Year 7 intake, monitor progress, identify those needing extra help and assess added value, while many selective schools continue to use intelligence tests to screen out potential underperformers.

Almost a million children a year now sit "cognitive ability tests" produced by nfer-Nelson, a company owned by Granada Learning, while a third of English secondaries use a rival form of baseline assessment devised at the University of Durham. Proponents of these tests point out that the uses to which they are being put - assessing the need for additional funding, for example - have more in common with the ideas of Alfred Binet than those of Lewis Terman, while opponents argue that they remain a poor indicator of potential for children from deprived backgrounds and ethnic minorities because they are as culturally and linguistically loaded as ever.

And, all the while, the world at large continues to speak of IQ and intelligence in terms that first became current in the years immediately following the First World War. At the height of last summer's A-level results crisis, one academic high-flier told the media what it felt like to be given unexpectedly low marks. "It made me think I wasn't as intelligent as I'd thought," she said. Perhaps she was right.

Main text: David Newnham. Illustration: Lasse Skarbovik. Additional research: Tracey Thomas. Next week: Pupil power.
