The Formative Future: AI, standardised testing and student outcomes
I admit: it is not fun to sit for many hours to take a summative standardised test. But whether we like it or not, standardised testing has always had a clear purpose: to account for education outcomes at the district or state level, or to measure college readiness for university admissions.
There are good predictors of college success, such as GPA, but these and other measures lack the comparability that the ACT and SAT offer. Now, however, there are less intrusive alternatives to standardised testing.
The coronavirus pandemic paused testing, giving us a chance to reconsider the existing paradigm. What has emerged is the possibility of doing away with summative testing – tests that sum up a student’s knowledge at a point in time – replacing it with a formative learning approach – micro-quizzes and other measures of student progress that are integrated into the learning process. Artificial intelligence is at the heart of this new approach.
My kids took standardised tests in April and got the results back at the end of October. That may be useful for keeping states and school districts accountable. But for the students, it has no use at all, because you cannot learn from it. Academic measurement should inform learning so that we create better outcomes.
The key is to have a fast feedback loop, so that you discover what the student knows, and the teacher can redirect the student in real time.
AI-enabled formative learning is based on mastery learning, a concept that’s been around for half a century. The idea is that students must master material in stages, building a solid foundation before moving to the next stage. But it has never fully caught on because the fixed teacher-classroom paradigm doesn’t allow for the kind of flexibility required. Some students will naturally take longer and need more attention than others to reach their goals.
AI helps overcome this problem by taking on much of the work otherwise performed by teachers. An AI system can combine test or quiz questions with learning resources on a very granular level to make the learning process dynamic and personalised. As the system receives data from student performance, it makes predictions about the student, and picks learning resources based on that – a short video, or short piece of text.
This is very different from using a traditional textbook that you go through cover to cover always in the same way. Rather than one size fitting all students, AI-enabled mastery learning is very personalised. It adapts to the individual.
Once a student masters a specific learning objective, the student moves on to the next objective, with the system asking questions and offering learning resources for that target.
In this way, we can optimise a learning plan for the student. Suddenly, the assessment itself becomes formative. It’s not just a conclusive, one-test value judgment. It becomes part of the learning process during which we micro assess the learner, with low stakes, multiple times throughout a school semester or school year.
That’s an exciting proposition. We thought it would take a decade before people were ready to accept this kind of methodology. But Covid-19 accelerated everything. Because people haven’t been able to gather at a physical location and learning has become asynchronous anyway, we’re finding more people eager to adopt this mode of assessment.
AI can now predict a student’s test score with over 90% accuracy, identifying their strengths and weaknesses after about 10 minutes of interaction. We can predict which questions a student will get wrong before they even try to answer them. We can even predict when a student will get tired and disengage.
Suppose such a system tells a student early on that their grade at the end of the class will be a C, but also tells them what they should do to improve. If they follow the recommendations and see their predicted grade move to a C+, that becomes a motivational factor in their learning. Many, if not most, students will work to improve the predicted outcome. And when they reach the end of the course and meet the outcome the system predicted, that success becomes an incentive for further learning.
If students interact with such a system on a daily or weekly basis, and that system can predict students’ scores at any point in time and recommend the learning path they should follow for optimal results, it obviates the need for standardised testing.
A standardised test is a snapshot that people cram to get ready for. Once the picture is taken, there’s little follow-up. But if you have a system that’s continually evaluating the student, you don’t need that snapshot. You know at any point in time where the student is in their learning path and what their likely outcome will be.
Mastery learning tells the student, “Here’s where you are. Here’s what you’re weak at. And here’s what you need to do to improve.”
When you’re assessing for the sake of learning versus assessing for the sake of judging, it completely changes the process. And, frankly, it makes learning more enjoyable for the student.
If we, as a community, society, and nation agree that the learning process is valuable and that it says something about the student, we should be able to replace that final summative test score with mastery learning. AI enables that.
About the author: Marten Roorda is chief measurement and learning officer at AI and education company Riiid Labs. He was the chief executive of ACT from 2015 to 2020 and, before that, spent 13 years as CEO of Cito, an international educational measurement organisation based in the Netherlands.
The post The Formative Future: AI, standardised testing and student outcomes appeared first on The PIE News.