Providing useful, timely feedback on exam performance is a challenge for university instructors, especially when a class has upwards of 300 students. Getting students to heed that feedback is another challenge entirely.
Okan Bulut, a professor and researcher in the University of Alberta’s Faculty of Education, is hoping to change that with an automated interactive process he calls “next generation formative feedback.”
“You finish an exam, you go home, and your instructor just tells you that your score is 85 per cent. But the instructors often don’t tell you how and why you missed that 15 per cent,” Bulut says of conventional testing. “Everyone highlights the importance of feedback for improving student learning, but no one is sure how to make the feedback more effective. If you ask instructors, every single one will say feedback is important, but when it comes to applying feedback to our classrooms, unfortunately we fail.”
Meaningful feedback at the click of a button
With the help of an Insight Development Grant from the Social Sciences and Humanities Research Council of Canada, Bulut is developing a system for computer-based testing, marking and feedback that enables students to find out how they did on an exam—and how they can do better next time—almost instantly.
“We are creating interactive web-based reports. As soon as everybody finishes writing their exams, they will be able to get really detailed feedback,” Bulut says. “In that way the instructor doesn’t have to worry about providing written feedback to each student two or three weeks later, because the marking is automatically done and the feedback is automatically generated based on how the student responded to the test items.”
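The article does not describe the system's internals, but the process Bulut outlines, marking responses automatically and generating feedback from how a student answered each item, can be sketched roughly. The answer key, topic labels, and function below are purely hypothetical, assuming a simple multiple-choice exam:

```python
# Illustrative sketch only: a minimal automated-marking step that scores
# a student's responses and flags the topics behind each missed item.
# ANSWER_KEY and TOPIC_BY_ITEM are invented examples, not real exam data.

ANSWER_KEY = {"Q1": "B", "Q2": "D", "Q3": "A"}
TOPIC_BY_ITEM = {"Q1": "fractions", "Q2": "ratios", "Q3": "fractions"}

def mark_and_give_feedback(responses: dict) -> dict:
    """Score the responses and list topics the student should review."""
    correct = [q for q, ans in responses.items() if ANSWER_KEY.get(q) == ans]
    missed = [q for q in ANSWER_KEY if q not in correct]
    score = round(100 * len(correct) / len(ANSWER_KEY))
    review_topics = sorted({TOPIC_BY_ITEM[q] for q in missed})
    return {"score": score, "review_topics": review_topics}

result = mark_and_give_feedback({"Q1": "B", "Q2": "C", "Q3": "A"})
# e.g. {'score': 67, 'review_topics': ['ratios']}
```

In a real system this step would feed the interactive visual reports the article describes, rather than returning a plain dictionary.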
More importantly, Bulut says, the feedback will be presented in interactive visualizations that engage students in interpreting their results in more meaningful ways.
“The good thing about this system is that it’s really fast. The other thing is written feedback is usually ignored. The scientific evidence tells us visual representations of data are more convincing than written data,” says Bulut, whose area of expertise is educational assessment and technology in the Department of Educational Psychology.
“It’s like when a doctor tells you that you need to quit smoking but you probably question this verbal suggestion and think, ‘Why would I do that?’ Then the doctor shows you a chest X-ray and you realize just how bad it is. It is the same idea here—we show them how they did on the test and engage them in the interpretation of those visualizations, rather than just reading about their performance on the exam.”
Putting theory into practice
Bulut and his co-investigators are now designing the system and its visualizations. Testing will begin in the fall of 2017 in the Learning Assessment Centre, a computer assessment lab on the third floor of Education Centre North where more than 15,000 students took exams last year. Bulut adds that the automated process will do more than lighten the workload for instructors.
“Once the exam is over, the system will provide an overall summary to instructors so they can see in which areas their students are failing and in which areas they are doing better, so they can shape their instruction accordingly, providing more instructions on problematic areas to help with understanding,” Bulut says.
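The instructor-facing summary Bulut describes amounts to aggregating the whole class's item results by content area. A minimal sketch of that aggregation, with invented topic names and an assumed input shape (one list of `(topic, was_correct)` pairs per student):

```python
# Illustrative sketch only (not the actual system): roll up class results
# by topic so an instructor can spot weak areas at a glance.
from collections import defaultdict

def topic_summary(class_results: list) -> dict:
    """Return the percentage of correct answers per topic across a class.

    Each element of class_results is one student's list of
    (topic, was_correct) pairs.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for student in class_results:
        for topic, was_correct in student:
            total[topic] += 1
            correct[topic] += int(was_correct)
    return {t: round(100 * correct[t] / total[t]) for t in total}

summary = topic_summary([
    [("fractions", True), ("ratios", False)],
    [("fractions", True), ("ratios", True)],
])
# e.g. {'fractions': 100, 'ratios': 50}
```

A low percentage for a topic would signal the "problematic areas" where, as Bulut says, instructors can shape their teaching accordingly.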
Though the project focuses on improving feedback for post-secondary students, Bulut says the basic premise should apply to students at all levels, including K-12.
“There is no limitation on who can use it. It can be simpler, less technical, depending on the age group,” he says. “The idea remains the same—providing feedback to improve their learning and to engage them in the interpretation of their performance, rather than just telling them how they did.”