Visit any primary classroom and you are highly likely to encounter children using apps on tablets to support their learning. And pupils seem more than happy to be able to access this technology at school, as well as at home.
It is easy to see that apps are motivating and engaging, but how do we know the apps being used are educationally valuable and have a positive impact on learning?
Reading is a crucial life skill, and apps to support the development of reading skills are becoming more widespread as teachers begin to see the benefits of exploiting these new technologies.
At present, however, there is a lack of systematic guidance on how to choose these apps to ensure they are good quality, as well as to check they have a sound theoretical and research base.
This is a big problem: choosing an app can be overwhelming, and faced with the plethora available, teachers can feel like a child in a sweetshop.
This risks leaving parents and teachers with what Professor Jane Hurry describes as "the butterfly or smorgasbord approach": flitting from one app to another, not knowing which is best, or becoming overwhelmed by too much choice.
Reading apps: wrong choices
Ease of downloading, accessibility and low costs can lead to apps being used to support reading with little thought. Content can be variable and is not always evidence-based, yet there is little accountability, and users may not be aware of this.
This leads us to the question: how can we judge which apps we should be recommending to support reading?
As part of our work on the EU-funded iRead project, we are working to develop a diverse set of learning apps and teaching tools that include a personalised and adaptive literacy game (Navigo) and a Reader app (Amigo Reader).
Through this work we have created an evaluation framework tool, which can be used to evaluate the quality of reading apps and, more generally, to guide the design of feedback in learning games for children. We believe it could be a big help to teachers.
Picking apart the app
This tool encourages users to first identify which reading skill the app is focusing on (eg phonics, decoding, fluency, comprehension) and reflect on three defined categories.
- "Teaching concepts" involves asking questions about whether games introduce skills prior to gameplay and whether support for gameplay is included.
- "Where am I going?" involves analysing whether apps make learning objectives specific and success criteria clear.
- "How am I doing?" encourages users to reflect on whether feedback is "outcome-based" (eg a score) or "elaborative" (eg a hint).
Our tool places an emphasis on feedback design because feedback is recognised as a key pedagogical dimension of games, particularly in early learning. Yet there has been little research on how commercial reading games embody existing feedback theories (eg Hattie and Timperley, 2007).
The right type of feedback
This prior research suggests "elaborative feedback" can be more effective because it supports "deep learning", encouraging learners to understand and correct their errors through metacognition.
Our own research examining some widely used, existing commercial reading games revealed gaps in "elaborative feedback" use and inconsistencies in feedback design. These gaps were especially visible in how little support children were offered to recover from errors.
The iRead framework tool could be adopted across primary schools to help ensure the apps being used are suitable.
Given the increasing use of early learning games in classrooms, it is crucial that we begin to scrutinise their design systematically. In future, we hope to build on this work to support teachers in making more informed decisions about literacy apps.
Elisabeth Herbert, Dr Laura Benton and Dr Emma Sumner are part of the iRead project at UCL
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 731724. The article reflects only the authors' views and the agency is not responsible for any use that may be made of the information it contains.