Correlation
The overlap between two measures. It runs from -1 (perfect negative association) through 0 (no association at all) to +1 (perfect agreement). For example, the more paperwork you do, the more you feel like throwing up or, in the case of a negative correlation, the more paperwork, the less time you have for a life. Imagine two overlapping circles: square the correlation and you get the amount the two measures have in common.
Thus 0.5 gives 25 per cent in common, while 0.7 gives 49 per cent. Height and weight probably correlate at about 0.7 or 0.8, whereas the correlation between A-level grade and degree class is nearer 0.3 or 0.4.
Data
Literally "things given" in Latin: the information a researcher collects.
May be "quantitative" (such as test scores) or "qualitative" (for example, how people describe their views in interview). Pedants like me use the plural "the data are ..."
Ethnography
Related to anthropological methods, in which researchers observed tribal rituals to see why members danced round trees or lit celebratory fires. A "positivist" researcher in an unruly classroom might count up disruptive acts and calculate which tactics reduced their number, whereas an "ethnographer" would want to know why pupils were dancing round the teacher and setting fire to the classroom.
Experimenter effects
Better known as "bias", conscious or unwitting. You might smile more at pupils you hope will do better on a test. William Labov, an American researcher, found that white interviewers obtained sparser responses from minority pupils than did interviewers from the same ethnic group.
Hawthorne effect
Researchers at the Hawthorne plant in Chicago carried out experiments to improve workers' conditions - better heating, lighting and so on. Every innovation increased productivity, so they concluded that it was the interest taken in the workforce that was effective. The Hawthorne effect can be partly countered in educational research by using an extra control group, which receives a novel programme.
Meta-analysis
Adding up several studies to calculate an "effect size" (such as working out an average correlation). Medicine uses meta-analysis to gross up research on, for example, aspirin preventing heart attacks. In education, there have been meta-analyses of issues such as the effects of different kinds of teachers' questions. You can do separate meta-analyses of older and newer studies, or large-scale and small-scale projects, to see if the effect size is different.
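The "adding up" can be sketched in Python. The study figures below are invented, and a real meta-analysis would use more refined weighting (for example, transforming the correlations first), but the idea of pooling effect sizes weighted by sample size looks like this:

```python
# A toy sketch of pooling effect sizes across studies,
# weighting each study's correlation by its sample size.
# All figures are invented for illustration.
studies = [   # (correlation found, pupils in study)
    (0.30, 50),
    (0.45, 200),
    (0.25, 120),
]

total_n = sum(n for _, n in studies)
pooled = sum(r * n for r, n in studies) / total_n
print(f"sample-size-weighted mean effect: {pooled:.2f}")
```

Splitting the list of studies (older versus newer, say) and pooling each half separately is how you would check whether the effect size differs between them.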
Null hypothesis
Proposing that there will be no difference (between the achievement of boys and girls, for example). Supposed to make you more objective than if you assumed that girls would beat the hell out of boys, or vice versa, but as capable of being fiddled as any other hypothesis.
Participant observer
A researcher who is also involved in what is being studied (for example, teachers researching their classroom or school). Needs consummate skill to avoid "proving" what a wonderful person, or flop, you are. Headteachers studying their own school need to be on their guard against being smug or masochistic. ("Woe is me, what a terrible school I run.")
Standard deviation
A measure of dispersal. Most people cluster around the middle, so on a standardised test of reading, for example, where the mean score is 100 and the standard deviation 15, about 68 per cent of candidates' scores are expected to fall between 85 and 115. Two standard deviations either side (70 to 130) ought to bring in more than 95 per cent of people.
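The 68 and 95 per cent figures can be checked with Python's standard library, assuming scores follow a normal distribution:

```python
# Checking the 68/95 rule for a standardised test with
# mean 100 and standard deviation 15, assuming normality.
from statistics import NormalDist

scores = NormalDist(mu=100, sigma=15)
within_one_sd = scores.cdf(115) - scores.cdf(85)   # one SD either side
within_two_sd = scores.cdf(130) - scores.cdf(70)   # two SDs either side
print(f"within 85-115: {within_one_sd:.1%}")
print(f"within 70-130: {within_two_sd:.1%}")
```

The exact values are about 68.3 per cent and 95.4 per cent, hence "about 68 per cent" and "more than 95 per cent" above.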
Significance
Tests of significance work out the likelihood of a result having occurred by chance. Usually one in 20 (the 5 per cent level) is taken as the minimum for quoting a result as "significant".
Validity and reliability
"Validity" is how far research measures what it claims to measure, while "reliability" is how stable the measure is: for example, whether different assessors would come up with the same result, or whether one half of a test would correlate with the other half. Testing five-year-olds' knowledge of science with a written paper would not be a valid test, as it would measure reading and writing more than science. Not that this will stop the Government giving one.
Variables
Things that vary, of course. Maths test scores, big toe length, attitude to school. "Continuous variables" are numerical measures like percentages and test scores, while "dichotomous variables" have two categories, for example, male/female, yes/no. "Categoric variables" are discrete categories such as blue/brown/green eyes. I'll stop now before I get blinded by the completely obvious.
Ted Wragg is emeritus professor of education at Exeter University