Practice is essential for learning. However, for many interpersonal skills, novices often lack the opportunities and venues to practice repeatedly. Role-playing simulations offer a promising framework for practice-based professional training of complex communication skills in fields such as teaching. In this work, we introduce ELK (Eliciting Learner Knowledge), a role-playing simulation system that helps K-12 teachers develop effective questioning strategies to elicit learners' prior knowledge. We evaluate ELK with 75 pre-service teachers through a mixed-method study. We find that teachers demonstrate a modest increase in effective questioning strategies and develop sympathy towards students after using ELK for three rounds. We also implement a supplementary activity in ELK in which users evaluate transcripts generated from past role-play sessions. We demonstrate that evaluating conversation moves is as effective for learning as role-playing, without requiring the presence of a partner. We contribute design implications for role-play systems for communication strategy training.
While learning by teaching is a popular pedagogical technique, it is a learning phenomenon that is difficult to study due to variability in tutor-tutee pairings and learning environments. In this paper, we introduce the Curiosity Notebook, a web-based research infrastructure for studying learning by teaching via a teachable agent. We describe and provide rationale for the set of features essential to such a research infrastructure, outline how these features have evolved over two design iterations of the Curiosity Notebook, and, through two studies---a 4-week field study with 12 elementary school students interacting with a NAO robot and an hour-long online observational study with 41 university students interacting with an agent---demonstrate the utility of our platform for observing learning-by-teaching phenomena in diverse learning environments. Based on these findings, we conclude the paper by reflecting on our design evolution and envisioning future iterations of the Curiosity Notebook.
https://doi.org/10.1145/3479538
Receiving feedback on preliminary work allows content creators to gain insight and improve outcomes. However, evaluation apprehension can delay feedback seeking. In this paper, we operationalize goal setting theory for planning feedback goals. In an online experiment, participants (N=245) wrote an initial story after planning feedback goals (or not), submitted the story for feedback when desired, and revised the story based on the feedback received. To induce different levels of evaluation apprehension, participants were led to believe the feedback came from either a supervisor or a peer. We found that participants who wrote proximal feedback goals sought feedback when their stories were less developed and revised their stories more after receiving feedback, compared to participants who wrote distant goals. Participants who anticipated feedback from a supervisor, in addition to planning a feedback goal, revised their work more than participants in the other conditions. Planning feedback goals did not affect evaluation apprehension. These results indicate that instructors and tools should guide content creators to plan proximal feedback goals to encourage sharing early drafts of creative work, and should provide feedback from someone in a position of higher perceived power (e.g., a supervisor) to foster revision.
https://doi.org/10.1145/3449098
Peer assessment, as a form of collaborative learning, can engage students in active learning and improve their learning gains. However, current teaching platforms and programming environments provide little support for integrating peer assessment into in-class programming exercises. Through formative interviews with instructors of introductory programming courses, we identified challenges in conducting in-class programming exercises and adopting peer assessment. To address these challenges, we introduce PuzzleMe, a tool that helps CS instructors conduct engaging in-class programming exercises. PuzzleMe leverages peer assessment to support a collaboration model in which students provide timely feedback on peers' work. We propose two assessment techniques tailored to in-class programming exercises: live peer testing and live peer code review. Live peer testing can improve students' code robustness by allowing students to create and share lightweight tests with peers. Live peer code review can improve students' code understanding by intelligently grouping students to maximize meaningful code reviews. A two-week deployment study revealed that PuzzleMe encourages students to write useful test cases, identify code problems, correct misunderstandings, and learn a diverse set of problem-solving approaches from peers.
https://doi.org/10.1145/3479559
Peer review has been used in both online and offline classrooms to inspire creativity, gather feedback, and lessen instructor grading loads, especially for design-based tasks without definitive rubrics. To explore the nuances and quality of peer feedback, we developed UX Factor, a peer grading platform that aims to characterize the behavior of peer reviewers and the consistency of the ranking models used to aggregate their reviews. This system harnesses the power of pairwise comparisons to minimize bias and encourage context-driven analysis. We deployed UX Factor in a user interface course with 133 students and teaching assistants (TAs) across three individual design projects over a semester and found that the system was effective in eliciting high-quality feedback. We saw that raters have higher agreement than random preferences, and that with at least 15 ratings per submission, a simple average of ratings produced rankings that were consistent with both the raw ratings and other, more complex models. These rankings were robust to disagreeable raters and changing class sizes, demonstrating the potential of comparative peer review to match the quality of expert feedback at scale.
https://doi.org/10.1145/3479863
Shared gaze visualizations, in which the real-time gaze location of group members is shared with one another, have been increasingly studied over the last decade by HCI researchers due to their potential to facilitate communication and increase group performance. Studies involving peer collaborators find improved outcomes and better collaboration. Less is known, however, about how gaze sharing may aid learners and instructors. In our study, an instructor teaches a learner how to assemble and program a simple microcontroller, communicating either through a webcam feed (webcam condition), a field-of-view video feed (HMC condition), or a field-of-view video feed with a gaze location pointer (gaze condition). We find that learning gain is highest in the gaze condition, especially for low achievers. Moreover, we see that instructors predict learner post-scores more accurately, suggesting that gaze sharing helps instructors track the cognitive state of the learner. This effect was also most salient for low achievers. We find that in the HMC condition, which lacked only this single gaze pointer, many of the benefits for both learning and teaching were lost. The paper concludes with discussions of how gaze visualization may have supported learning and teaching, as well as the tool’s limitations and conditions for usefulness.
https://doi.org/10.1145/3449208
Discussion forums of instructional videos contain previously discussed questions and answers, which can resolve many points of confusion for learners. However, it is difficult for learners to find relevant content in a static discussion forum that is separated from the video. This paper introduces Adjacent Display of Relevant Discussion (ADRD): it places discussion in a panel adjacent to the video and dynamically updates the panel's content based on the current time of the video. In a between-subjects lab study (N=20), ADRD helped users resolve more points of confusion, skim and read more discussion comments, and engage more with the video.
Multi-touch spherical displays that enable groups of people to collaboratively interact are increasingly being used in informal learning environments such as museums. Prior research on large flatscreen displays has examined group collaboration patterns in museum settings to inform the design of group learning experiences around these displays. However, designing collaborative interfaces for multi-touch spherical displays remains challenging as we do not yet completely understand how visitor groups naturally collaborate around these displays in a naturalistic museum setting. The spherical form factor of the display affords new forms of collaboration: Unlike flatscreen displays, spherical displays do not have a definite front or center, thus intrinsically creating shared and private touch interaction areas on the display based on users’ viewing angles or physical arrangements. We conducted a 5-day long field study at a local science museum during which 571 visitors (370 adults and 201 children) in 211 groups interacted with a walk-up-and-use collaborative learning application showing global science data visualizations, on a multi-touch spherical display. We qualitatively analyzed groups’ natural collaboration patterns including their physical arrangements (F-formations), their collaboration profiles (e.g., turn-taker or independent), and the nature of group discussion around the display. Our results show that groups often engaged in both independent as well as closely collaborative group explorations when interacting around the sphere: physical spacing between group members around the sphere was strongly linked to the way groups collaborated. It was less common for group members to make and accept suggestions or coordinate touch interactions when they did not share the same field of view or touch interaction space around the sphere with each other. 
We discuss implications for supporting group collaboration in this context and inform the design of future walk-up-and-use multi-touch spherical display applications for public settings.
https://doi.org/10.1145/3476067