Kids create stop motion videos in class. (Brad Flickinger/Flickr)

“They don’t live in Saskatoon!” a seventh-grade girl says vehemently. She’s working with her class to figure out where another mystery class is located somewhere else in the world. The two classes are competing to figure out the other’s location first. Students must work together to develop good yes or no questions to ask the other class, like the age-old car game “20 Questions.”

It looks like fun and students are certainly engaged in the project. This is a fairly typical use of technology in the classroom, featuring some of the elements technology evangelists talk about — like global connection and collaboration with peers. When this video was shown to a group of educators at the 2014 International Society for Technology in Education conference as part of a session on how to deepen technology use in the classroom, teachers were enthusiastic about the Mystery Skype Project.


“They were practicing important skills, asking questions, problem-solving,” said one teacher. “Everyone was engaged; they all had roles to play,” added another. Other educators were excited the activity had authentic, real-world applications and that it could help students build empathy with children in other parts of the world.

These are typical reactions to activities that use technology in the classroom, but they aren’t sufficient for Julie Graber, an instructional technology consultant for Prairie Lakes Educational Agency in Iowa. She and her colleagues are trying to codify specific traits that coaches can look for to determine if technology in the classroom is having the transformational impact that many hope it will.

“What we’re finding is that there’s really nothing that’s helpful for moving a system in terms of knowing where am I at and where am I trying to go,” Graber said during the ISTE session.

Many schools are using the SAMR (substitution, augmentation, modification, redefinition) framework to help guide technology integration. But Graber doesn’t find that model specific enough to guide educators through the process of improving their use of technology. “When we look at SAMR we find that it’s really difficult for leaders to figure out where they’re at and where they need to go,” Graber said.

The SAMR model anticipates that educators will gradually move through a process of transformation with their classroom technology as they become accustomed to teaching in new ways. It assumes that teachers will begin by substituting technology for other activities in the classroom, then move on to augmenting those activities, progress to modifying assignments around the specific functionalities technology offers, and finally redefine what tasks are possible in school because of the technology available to them.

Author, speaker and former teacher Alan November agrees with Graber that SAMR doesn’t provide enough concrete guidance. Many of his graduate students present technology projects that they define as a redefinition of learning — the highest level in the SAMR model — but November sees them as merely substitution. For example, one of his students presented on Leafsnap, an electronic field guide app that allows students to take a photo of a plant leaf and quickly learn about its biological traits.

“What did they just learn?” November asked a crowd of educators at ISTE 2014 in Atlanta. “How to take a picture. That’s what they learned.” While the Leafsnap app is cool, it doesn’t meet November’s criteria for using technology. “I think it’s really important to start with a framework of: Does technology add any value?” he said. He uses six questions to determine value, arguing that if the answer is “no” to any of the questions, the use of technology should be considered suspect.

  1. Did the assignment create capacity for critical thinking on the Web?
  2. Did the assignment teach students to develop new lines of inquiry?
  3. Are there opportunities to broaden the perspective of the conversation with authentic audiences from around the world?
  4. Is there an opportunity for students to publish (across various media) with an opportunity for continuous feedback?
  5. Is there an option for students to create a contribution (purposeful work)?
  6. Were students introduced to the best example in the world of the content or skill?

“I think these six elements separate what’s transformational from what I would call the $1,000 pencil,” November said. Instead of using Leafsnap, November would like to see teachers challenge students to think critically with a question like, “Which plants will die first when the effects of climate change begin to be felt?”

That question couldn’t have been answered by students before the Internet age, but now a question like that forces students to use the Internet to investigate a globally relevant topic and gives them the opportunity to add value to the conversation about climate change.


Dissatisfaction with the frameworks currently available to evaluate whether technology is transforming learning prompted Graber and her colleagues, including Scott McLeod, to try to develop a new set of questions to help move past obvious qualities like student engagement toward a deeper investigation of the pedagogy behind the activity.

Three of the most important traits they look at when evaluating a lesson are whether it is discipline specific, whether it promotes critical thinking, and whether technology is used in transformative ways.

  • Discipline specific: Are students learning discipline-specific and appropriate content and knowledge? If so, is student work focused around big important concepts in that discipline? Are students using discipline-specific practices, tools and technologies as part of the activity?

“If you can’t tell which discipline the lesson fits into, that’s a problem,” Graber said. The educators at ISTE returned to the Mystery Skype video example to practice Graber’s suggested evaluation tools. They evaluated only what they could see in the video, treating it like a single classroom visit, in which a coach or administrator gets only a tiny snapshot of a classroom.

To many teachers in the room, it wasn’t clear what discipline the activity focused on — while geography might be one guess, the skills discussed were not specific to that discipline, nor were the tools and processes focused on geography.

  • Critical thinking/Creativity/Initiative/Metacognition: Does the activity go beyond facts or previously provided ways of thinking? Do the students have the opportunity to design, create or in other ways add unique value? Do students have the opportunity to take initiative to go beyond the parameters of the given assignment? Do students have the opportunity to reflect on their planning, thinking, work and progress?

In the Mystery Skype example, many of these qualities don’t seem to exist. The video states students each had a task, but some tasks required more critical thinking or reflection than others. A few kids were getting good practice designing smart questions, but they weren’t going beyond the parameters of the project, creating anything unique, or reflecting on their work.

  • Use of technology: Is the technology a means, not the end? Does the technology add value so that students can do their work in better or different ways from what was possible before technology? Are digital technologies used meaningfully for learning tasks?

The group of ISTE educators came up with mixed answers on this item when evaluating the Mystery Skype activity. The activity wouldn’t have been possible without technology, but learning goals like effective questioning, collaboration and reflection on the process might have been better achieved for every student in the room without it.

This sample analysis is merely an example of how this evaluation framework could be used. The ultimate goal is to move from talking about liking or disliking an activity to a deeper evaluation of the lesson, providing useful and actionable feedback to the teacher.

Graber is clear that instructional coaches and administrators should not use this process unless they are committed to observation as a means for improvement. These conversations can happen within the context of a teacher’s personal goals for the year or within the frame of schoolwide goals, but they shouldn’t be used for formal evaluation.

“If we aren’t seeing what we want to be seeing, what is the system provided to change that?” Graber asked. “If there isn’t anything, then don’t use this.” She doesn’t think it’s fair to go into a classroom and point out problems with a teacher’s lesson if the district or school doesn’t have a plan for providing support to the teacher as she works to change and deepen her practice.

“Design professional development so it’s all about growth and descriptive feedback, not about evaluation,” Graber said. She suggests coaches find strong positive examples of lessons that elicit clear affirmatives to the questions above. Teachers need to see what good practice looks like before they can begin to model it themselves. Graber does not recommend dissecting a lesson like Mystery Skype with a group of teachers if one of them is featured in the video. It’s much better to begin these conversations by practicing with anonymous educators. Graber and McLeod have found this framework especially useful as professional development before a lesson. It can help start powerful conversations about technology use and could begin to move practice, too.

The goal of this framework is to push past the typical response to digital activities in the classroom that focus on student engagement and instead evaluate whether students are truly using higher-order thinking skills, learning core aspects of a specific discipline and using technology in the most powerful and transformative ways.


Katrina Schwartz

Katrina Schwartz is a journalist based in San Francisco. She's worked at KPCC public radio in LA and has reported on air and online for KQED since 2010. She's a staff writer for KQED's education blog MindShift.
