Most tests represent a snapshot of one moment in the trajectory of a student’s academic journey, extrapolating what the student has learned overall. There are plenty of ways educators are trying to supplement those tests with more nuanced, formative assessments. With the advent of game-based learning, educators have been investigating how data collected from video game play could provide insight into the way students think as they explore new concepts.
A report from the game developers, learning specialists and psychometricians involved with GlassLab’s project SimCityEDU finds that there’s great potential for games and assessment, but a lot of work to be done before games are used as primary assessment tools.
SimCityEDU is a game created to introduce environmental science to middle school students. Through game play, students are asked to conduct interviews, review research, interpret information they’ve gathered, draw conclusions, graph data, and make decisions about how to best protect the environment in a prototype city, and with each step, the game assesses every choice.
The goal is to move beyond tests that give little meaningful information about how or what students have learned, instead using data gathered from actions in a game to paint a picture of how knowledge grows. The report finds that game-based assessment is a promising new field, but that the work is still in its initial phases and that close collaboration among all parties is crucial: psychometricians, game designers, and learning specialists all need to work together to create the best learning experience. Just as game creators should work closely with teachers when building the games, rather than simply handing off finished products, psychometricians need to be part of the design and creation phases, too.
“The tests that we use today in school don’t test for things that are that useful,” said Anya Kamenetz, author of The Test, a forthcoming book on the future of assessment, who wrote the executive summary of the GlassLab report (and is a contributing writer to MindShift). “Accountability tests [measure] reading and math,” she said, but students need to know much more than those subjects in the world outside of school. Games can more accurately test those less tangible qualities by building higher-order thinking skills into the game play itself.
“The state of mind that game designers are chasing is similar to what teachers are trying to do, which is get students in the zone of proximal development,” Kamenetz said. That means finding the “sweet spot” where the material isn’t too easy or too hard. It should provide a challenging and thus engaging experience, but not be such a frustrating experience that students quit. Games reach this state naturally by preventing players from moving on until they’ve mastered the skills needed at lower levels.
That’s why psychometricians are so excited about game play as a lens into learning. Psychometrics is an old discipline that tries to quantify intellectual achievement. Its practitioners use statistics to figure out what people know and how they think, and they are often responsible for creating tests.
“Psychometrics has something to bring to the table,” Kamenetz said. But at the same time, game-based learning is challenging the psychometric field to move into the 21st century. The data analytics available to educators now are changing the landscape for psychometricians, who need to evolve a new set of tools to measure the principles important to the discipline — reliability, generalizability, comparability and validity — within a new context.
Games are proving to be a good place to start the delicate partnership between the diverse working styles of game developers, educators and psychometricians.
“There’s reading, there’s science involved, there’s strategy and systems thinking and there’s perseverance,” Kamenetz said about SimCityEDU. “You can’t artificially tease those apart, you have to look at them all together.” It’s that holistic view that is so novel. Current tests isolate subjects and skills, rather than testing how a student uses everything they know to explore, test out ideas, and gain experience.
“Clearly this isn’t the kind of test you can drop from the sky in a high-stakes context and expect people to perform,” Kamenetz said. Players respond differently to games for many reasons, including their level of experience with and interest in game play, not to mention whether they understand the academic concepts. But, Kamenetz argues, bubble tests require a certain familiarity with their format as well.
“The tests we have don’t do a good job of measuring what’s important to us and they don’t measure what they do measure very well,” Kamenetz said. “So we need a better test.” The game-based assessment work is promising, but still a long way off, and alternatives need to be developed in tandem.
“One thing that might stop this in its tracks is the ability to extract data from naturalistic interactions,” Kamenetz said. Tools are already being tested that eavesdrop on students in unstructured situations and can fairly accurately predict knowledge based on the words they use and the complexity of their thought. If this technology catches on, it would allow educators to remove the artificial game world from the assessment, making it a very real test.
Ultimately, the GlassLab report is a call to psychometricians to continue the work on game-based assessments. It questions how tests are currently built and the role psychometricians can play in changing them, more than it examines the impact of game-based assessment in the classroom. Predictive technologies and complex data analytics already exist in the commercial world, so why shouldn’t those techniques be applied to the important work of teaching?