As part of their package of revision tips, teachers often recommend that students create their own test questions. Students also often do this naturally, as part of their own self-testing, whether they have been told to or not.
The benefits of the task appear obvious: generating questions encourages students to revisit the material, rework what they have learned and test themselves on it.
Research indicates that testing lies at the heart of information retention. For example, in their review of the research, Henry Roediger and Jeffrey Karpicke found that frequent testing can boost attainment at all levels of education (Roediger & Karpicke, 2006).
In part, testing appears to enhance retention because of the effort our cognitive architecture expends in recalling information: as a memory is accessed, the memory trace itself is strengthened (Pyc & Rawson, 2009). These findings aren’t particularly new and have their origins in the classic memory studies of the 1960s.
So, full steam ahead for this approach? Well, unfortunately, there hasn’t been a great deal of research investigating the effectiveness of student-generated questions as part of the testing and revision process.
Studies have found that getting students to generate their own questions fosters engagement with the learning material, helps students to focus on the main ideas and improves long-term retention (see Fiorella & Mayer, 2016, for a discussion of generative learning strategies).
But a more recent study led by Vincent Hoogerheide of Utrecht University (Hoogerheide et al., 2019) gives reason for doubt. Hoogerheide and his colleagues were interested in how well participants recalled information after being instructed either to reread a text or to generate test questions about its most relevant points.
After both groups had completed the task, they were given a multiple-choice test based on the most relevant elements of the text they had studied.
Unlike other studies of this kind, the time allowed was kept constant for both groups.
Overall, generating multiple-choice test questions led to lower test performance than restudying the material.
Generating multiple-choice questions was also more time-consuming and required more effort than simply restudying the material – participants in the rereading condition obtained better results for less effort.
As the authors of the paper point out, this result does seem counterintuitive, but this is often the case when seemingly common-sense views are put to the test.
It’s worth noting that the participants were university students, all of whom studied psychology. This isn’t unusual in such studies, but we still need to consider whether this sample is representative of the wider population and whether naive participants would have produced different results.
Such studies are also highly controlled and, therefore, take place in very artificial environments. It would be interesting to see if the study could be replicated in real-world settings, such as classrooms.
We also need to bear in mind that the volunteers hadn’t been given any training in how to set questions, which matters because students have been found to benefit from training programmes designed to develop self-questioning skills (García et al., 2014).
Does this then mean that teachers should abandon this and similar learning strategies?
Research from 2010 found that generating questions and answering comprehension questions were both more effective than rereading, although in that study time on task was not kept constant (Weinstein, McDermott, & Roediger, 2010). However, this study also found that generating questions was the most time-consuming strategy, leading the authors to suggest that question generation is useful mainly when no other alternative is available.
Having students generate their own questions may therefore need to be relegated to the position of an occasional activity rather than a core strategy, not just because of its questionable efficacy, but also because of the time it takes up.
Marc Smith is a chartered psychologist and teacher. He is the author of The Emotional Learner and Psychology in the Classroom (with Jonathan Firth). He tweets @marcxsmith
- Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4–58.
- Fiorella, L., & Mayer, R. E. (2016). Eight Ways to Promote Generative Learning. Educational Psychology Review, 28(4), 717–741.
- García, F., García, A., Berbén, A. B., Pichardo, M. C., & Justicia, F. (2014). The effects of question-generation training on metacognitive knowledge, self-regulation and learning approaches in science. Psicothema, 26(3), 385–390.
- Hoogerheide, V., Staal, J., Schaap, L., & van Gog, T. (2019). Effects of study intention and generating multiple choice questions on expository text retention. Learning and Instruction, 60, 191–198.
- Pyc, M. A., & Rawson, K. A. (2009). Testing the retrieval effort hypothesis: Does greater difficulty correctly recalling information lead to higher levels of memory? Journal of Memory and Language, 60(4), 437–447.
- Roediger, H. L., & Karpicke, J. D. (2006). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1(3), 181–210.
- Weinstein, Y., McDermott, K. B., & Roediger, H. L. (2010). A comparison of study strategies for passages: Rereading, answering questions, and generating questions. Journal of Experimental Psychology: Applied, 16(3), 308–316.