Games for learning research method overview: interviews

Note-taking for Research on Games and Simulations with Jan Plass


Magnusson, C., Rassmus-Gröhn, K., Tollmar, K., & Deaner, E. HaptiMap User Study.

Gomoll, K. & Nicol, A. (1990). User Observation: Guidelines for Apple Developers.

The Apple developer guidelines recommend incorporating user observation and feedback early and often in the software design process. Rather than receiving a set of requirements and then executing against them, developers should make prototypes quickly, test them regularly, and then iterate repeatedly. While these guidelines refer to observations of users as they interact directly with an application, they can also inform the process of interviewing users. 

How does this method work?

At a basic level, interviewing is a simple matter of asking stakeholders what they want and having them give you their requirements. Interviews can be directive, following a strict script toward a specific informational goal, or they can be open, going wherever the conversation takes them. The former approach is effective for gathering information about known issues, while the latter is valuable for blue-sky ideation where the parameters of the problem are still unknown. In either case, interviewers need to be careful to avoid leading questions and to keep from subtly communicating their own biases.

Who are the participants in this method?

Interview subjects should be representative of your end users and at least somewhat knowledgeable about the subject domain. They need not be familiar with your specific technological approach, but they should have enough general background to be able to imagine and suggest possible solutions.

Selecting subjects is more challenging when you are developing a truly new technology or experience, rather than an improvement on an existing one. In this case, interviewees should be able to articulate their needs and obstacles, without necessarily being able to request or conceive particular solutions.

Interview participants are usually drawn from a broad and uncontrolled population. If they are not developers or experts, participants can feel like they are being tested on their knowledge. It is crucial to set them at ease by helping them understand that you are assessing their needs, not their abilities.

What data is being collected?

Interviews are the emblematic qualitative research method. Closed questions with predetermined answer formats (e.g., yes or no) are easy to analyze, but open questions and conversational formats are more likely to yield the unexpected answers that are valuable in the early design stages. Data can take the form of notes, audio recordings, or video. Asking “how” questions prompts participants to give you concrete information on the way things work in their problem domain. Asking “why” questions prompts them to explain underlying thoughts and ideas. A successful interview will combine the two.

What kind of insights can this method provide?

Interviews are typically used to “trawl for requirements”, to determine the scope of the problem space and to elicit potential solutions. In experimental studies, researchers will ask each interviewee the same questions to create data that can be compared across interviews. When doing exploratory research, by contrast, researchers are free to ask questions in a more natural and conversational way, to invite interviewees to think creatively and to make unexpected connections.

Echevarria, J., Short, D., & Powers, K. (2006). School Reform and Standards-Based Education: A Model for English-Language Learners. The Journal of Educational Research, 99(4), 195–211.

The authors of this study of English language learners videotaped a group of teachers three times over the course of a school year. The researchers used the Sheltered Instruction Observation Protocol (SIOP) rating scale to analyze the lesson videos, which they supplemented with in-person classroom observation. They shared their analyses with the teachers and provided written feedback throughout the study, both for the teachers’ benefit and so the researchers could validate their interpretive approach. At the conclusion of the study, the authors analyzed the SIOP ratings to determine overall changes in teachers’ practice and the resulting changes in their effectiveness. A second group of teachers in the study was observed and videotaped, but did not receive ongoing feedback or training.

The authors collected qualitative data in the form of written classroom observations, online discussions, teacher evaluations and reflections, and ongoing journal entries. They used this data to elucidate which instructional practices resulted in differences between the intervention and comparison classrooms. They measured the effect on students’ academic literacy development over time using an expository writing assessment. This method was chosen because it resembles the tasks the students regularly perform in the classroom. Students wrote to the same prompt at the beginning and end of the study to create a baseline for comparison. The researchers were unable to use reading or writing test scores due to school policies. Nor were they able to draw on district-developed writing assessments, since those called for narrative rather than expository text, making them less comparable to tasks in the subject-area classes that were the object of the study.
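To make the pre/post comparison concrete, here is a minimal sketch of how gains on a repeated writing prompt might be summarized, assuming each student’s response is scored on a numeric rubric. The scores, the 0–6 scale, and the paired t-test are illustrative assumptions, not the analysis the authors report.

```python
from scipy import stats

# Hypothetical rubric scores (0-6) on the same expository prompt,
# collected at the beginning (pre) and end (post) of the study.
pre  = [2, 3, 2, 4, 3, 2, 3, 4, 2, 3]
post = [3, 4, 3, 4, 4, 3, 4, 5, 3, 4]

# Each student serves as their own baseline, so compare paired scores.
gains = [b - a for a, b in zip(pre, post)]
print(f"mean gain = {sum(gains) / len(gains):.2f}")

t, p = stats.ttest_rel(post, pre)  # paired t-test over the same students
print(f"t = {t:.2f}, p = {p:.3f}")
```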

Plass, J.L., Milne, C., Homer, B.D., Jordan, T., Schwartz, R.N., Hayward, E.O., Verkuilen, J., Ng, F., Wang, Y., & Barrientos, J. (2012). Investigating the Effectiveness of Computer Simulations for Chemistry Learning. Special Issue on Large-Scale Interventions in Science Education for Diverse Student Groups in Varied Educational Settings. Journal of Research in Science Teaching, 49(3), 394–419.

This study measured the effectiveness of interactive software to improve student learning outcomes in chemistry comprehension, chemistry transfer, and graphing skills. The researchers assessed chemistry knowledge with a test adapted from the state Regents examination. The test used multiple-choice comprehension questions designed to assess learning of concepts that were directly addressed in the instructional materials. The test also included open-ended transfer questions assessing whether students were able to apply the information they learned in new situations or problems.

Researchers observed each class session, recording student attendance and which topics were covered. At regular intervals, they noted what the class was engaged in at that moment, as well as the percentage of students who were on-task with the assigned activity versus off-task. The researchers also recorded videos of class sessions, but due to school policies, these videos could only focus on the instructors, not the students. Still, video observation gave a more accurate picture of how faithfully the teachers implemented the lesson plans than could have been obtained through self-reporting, direct classroom observation, or assessment of student work. The video data was coded numerically. The researchers arrived at this coding system through an iterative process, comparing codes assigned by multiple researchers to resolve discrepancies and to make codes more specific or general as needed. The video coding was also compared to the paper observation forms generated during classroom observations.
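Inter-coder comparisons of this kind are often summarized with an agreement statistic before discrepancies are discussed. The sketch below computes Cohen’s kappa for two coders’ segment-level codes; the segments, the codes, and the use of kappa are assumptions for illustration, not the procedure reported in the paper.

```python
from collections import Counter

def cohen_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders' codes for the same segments."""
    n = len(codes_a)
    # Observed agreement: proportion of segments given the same code by both coders.
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement by chance, from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(freq_a) | set(freq_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical numeric codes assigned to ten video segments by two researchers.
coder1 = [1, 2, 2, 3, 1, 1, 2, 3, 3, 2]
coder2 = [1, 2, 3, 3, 1, 2, 2, 3, 3, 2]
print(f"kappa = {cohen_kappa(coder1, coder2):.2f}")  # low values flag codes to revisit
```

Segments where the coders disagree can then be discussed to decide whether a code definition needs to be made more specific or more general.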

Gower, L., & McDowall, J. (2012). Interactive music video games and children’s musical development. British Journal of Music Education, 29(1), 91–105.

The authors assessed the effectiveness of music video games like Guitar Hero for fostering children’s musical development using a qualitative research design. Rather than assessing gains in specific skills as the result of playing specific games over some time interval, the researchers aimed to give an in-depth account of participants’ experiences with and attitudes toward the games. They used purposive sampling to identify a small group of teachers and students who could provide meaningful and representative in-depth responses, rather than collecting data from a broader pool.

The researchers conducted individual semi-structured interviews with each participant, addressing their musical backgrounds and their experiences with and views of interactive music games. Interviews with teachers also explored their teaching backgrounds, their knowledge of music technology, and their experience with and opinions of the games as educational tools. Researchers made audio recordings of the interviews and transcribed them. They then coded and analyzed the data using content analysis methods. Coded responses were sorted into categories corresponding to themes identified in the literature, and then further into subcategories corresponding to subthemes that emerged from the interview questions.
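As a loose illustration of the category-and-subcategory sorting described above, the snippet below tallies hypothetical coded excerpts by theme and subtheme; the participant labels, themes, and subthemes are placeholders, not the categories the authors report.

```python
from collections import Counter, defaultdict

# Hypothetical coded interview excerpts: (participant, theme, subtheme).
coded_responses = [
    ("teacher_1", "educational value", "note reading"),
    ("teacher_2", "educational value", "rhythm and timing"),
    ("student_1", "engagement", "motivation to practice"),
    ("student_2", "engagement", "motivation to practice"),
    ("student_3", "transfer to instruments", "interest in lessons"),
]

# Tally how often each theme appears and which subthemes cluster under it.
theme_counts = Counter(theme for _, theme, _ in coded_responses)
subthemes = defaultdict(Counter)
for _, theme, subtheme in coded_responses:
    subthemes[theme][subtheme] += 1

for theme, count in theme_counts.most_common():
    print(theme, count, dict(subthemes[theme]))
```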