Technology-Powered Learning

Conference Name
CHI 2023
Olfactory Wearables for Mobile Targeted Memory Reactivation
Abstract

This paper investigates how a smartphone-controlled olfactory wearable might improve memory recall. We conducted a within-subjects experiment in which 32 participants completed memory tasks both with the device and without it (control). In the experimental condition, bursts of odor were released during visuo-spatial memory navigation tasks and replayed during sleep the following night in the subjects' homes. We found that, compared to control, memory performance improved when the scent wearable was used in memory tasks that involved walking in a physical space. Furthermore, participants recalled more objects and translations when re-exposed to the same scent during the recall test, in addition to during sleep. These effects were statistically significant and, in the object recall task, persisted for more than one week. This experiment demonstrates a potential practical application of olfactory interfaces that can interact with a user during wake as well as sleep to support memory.
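
The behaviour described here (timed scent bursts during the task, re-released overnight) can be pictured as a small scheduler. The Python sketch below is a minimal illustration only; the ScentDevice interface, burst length, and replay interval are invented for illustration and are not the authors' implementation.

    import time

    class ScentDevice:
        """Hypothetical stand-in for the smartphone-controlled scent wearable."""
        def release_burst(self, seconds: float) -> None:
            print(f"[device] odor burst for {seconds:.1f}s")
            time.sleep(seconds)

    def cue_during_task(device: ScentDevice, event_times_s: list[float],
                        burst_s: float = 2.0) -> None:
        """Release a scent burst at each scheduled event of the memory task."""
        start = time.monotonic()
        for t in event_times_s:
            delay = t - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)
            device.release_burst(burst_s)

    def replay_during_sleep(device: ScentDevice, duration_s: float,
                            interval_s: float = 60.0, burst_s: float = 2.0) -> None:
        """Re-release the same scent at fixed intervals overnight (reactivation)."""
        end = time.monotonic() + duration_s
        while time.monotonic() < end:
            device.release_burst(burst_s)
            time.sleep(interval_s)

    if __name__ == "__main__":
        device = ScentDevice()
        cue_during_task(device, event_times_s=[0.0, 5.0, 10.0])
        replay_during_sleep(device, duration_s=180.0)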

Authors
Judith Amores Fernandez
Microsoft, Cambridge, Massachusetts, United States
Nirmita Mehra
MIT, Cambridge, Massachusetts, United States
Bjoern Rasch
University of Freiburg, Freiburg, Switzerland
Pattie Maes
MIT Media Lab, Cambridge, Massachusetts, United States
Paper URL

https://doi.org/10.1145/3544548.3580892

FakeForward: Using Deepfake Technology for Feedforward Learning
Abstract

Videos are commonly used to support learning of new skills, to improve existing skills, and as a source of motivation for training. Video self-modelling (VSM) is a learning technique that improves performance and motivation by showing a user a video of themselves performing a skill at a level they have not yet achieved. Traditional VSM is very data- and labour-intensive: a lot of video footage needs to be collected and manually edited to create an effective self-modelling video. We address this by presenting FakeForward, a method that uses deepfakes to create self-modelling videos from videos of other people. FakeForward turns videos of better-performing people into effective, personalised training tools by replacing their face with the user’s. We investigate how FakeForward can be effectively applied and demonstrate its efficacy in rapidly improving performance in physical exercises, as well as confidence and perceived competence in public speaking.
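
At a high level, the method swaps the learner's face into footage of a better performer. The Python sketch below illustrates that pipeline shape only; Frame, swap_face, and make_self_model_video are hypothetical placeholders, not FakeForward's code or any particular deepfake library's API.

    from dataclasses import dataclass

    # Everything here is a hypothetical placeholder standing in for a real
    # trained face-swap model; it is not FakeForward's actual implementation.

    @dataclass
    class Frame:
        index: int
        pixels: bytes = b""

    def swap_face(frame: Frame, user_face: bytes) -> Frame:
        """Placeholder: a real system would run a face-swap model here."""
        return Frame(index=frame.index, pixels=user_face)

    def make_self_model_video(model_video: list[Frame],
                              user_face: bytes) -> list[Frame]:
        """Turn footage of a better performer into a self-modelling video
        by replacing the performer's face with the user's in every frame."""
        return [swap_face(f, user_face) for f in model_video]

    if __name__ == "__main__":
        performer_clip = [Frame(i) for i in range(3)]
        self_model = make_self_model_video(performer_clip, user_face=b"user")
        print(f"created {len(self_model)} self-modelled frames")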

Authors
Christopher Clarke
University of Bath, Bath, United Kingdom
Jingnan Xu
University of Bath, Bath, United Kingdom
Ye Zhu
University of Bath, Bath, United Kingdom
Karan Dharamshi
University of Bath, Bath, United Kingdom
Harry McGill
University of Bath, Bath, United Kingdom
Stephen Black
University of Bath, Bath, United Kingdom
Christof Lutteroth
University of Bath, Bath, United Kingdom
Paper URL

https://doi.org/10.1145/3544548.3581100

Understanding Personal Data Tracking and Sensemaking Practices for Self-Directed Learning in Non-classroom and Non-computer-based Contexts
Abstract

Self-directed learning is becoming a significant skill for learners. However, learners may face difficulties such as distraction and a lack of motivation. While self-tracking technologies have the potential to address these challenges, existing tools and systems have mainly focused on tracking computer-based learning data in classroom contexts. Little is known about how students track and make sense of their learning data from non-classroom learning activities, and which types of learning data are personally meaningful for learners. In this paper, we conducted a qualitative study with 24 users of Timing, a mobile learning-tracking application in China. Our findings indicated that users tracked a variety of qualitative learning data (e.g., videos, photos of learning materials, and emotions) and made sense of this data using different strategies, such as observing behavioral and contextual details in videos. We then provide implications for designing non-classroom and non-computer-based personal learning tracking tools.
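
The qualitative records the study describes (videos, photos of learning materials, emotions) suggest a simple log-entry structure. The Python sketch below is one illustrative way to model and summarize such data; the field names are invented for the example and are not Timing's actual data model.

    from dataclasses import dataclass, field
    from datetime import datetime
    from collections import Counter

    @dataclass
    class LearningEntry:
        """One tracked, non-classroom learning session (hypothetical schema)."""
        start: datetime
        minutes: int
        subject: str
        video_path: str | None = None      # self-recorded study video
        material_photos: list[str] = field(default_factory=list)
        emotion: str | None = None         # e.g. "focused", "frustrated"

    def summarize(entries: list[LearningEntry]) -> dict[str, int]:
        """Total tracked minutes per subject, one basic sensemaking view."""
        totals: Counter[str] = Counter()
        for e in entries:
            totals[e.subject] += e.minutes
        return dict(totals)

    if __name__ == "__main__":
        log = [
            LearningEntry(datetime(2023, 4, 1, 9), 50, "English", emotion="focused"),
            LearningEntry(datetime(2023, 4, 1, 20), 30, "Math", emotion="tired"),
        ]
        print(summarize(log))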

Award
Honorable Mention
Authors
Ethan Z. Rong
University of Toronto, Toronto, Ontario, Canada
Morgana Mo Zhou
City University of Hong Kong, Hong Kong, China
Ge Gao
University of Maryland, College Park, Maryland, United States
Zhicong Lu
City University of Hong Kong, Hong Kong, China
Paper URL

https://doi.org/10.1145/3544548.3581364

Dancing with the Avatars: Minimal Avatar Customisation Enhances Learning in a Psychomotor Task
Abstract

Virtual environments can support psychomotor learning by allowing learners to observe instructor avatars. Instructor avatars that look like the learner hold promise in enhancing learning; however, it is unclear whether this works for psychomotor tasks and how similar avatars need to be. We investigated 'minimal' customisation of instructor avatars, approximating a learner's appearance by matching only key visual features: gender, skin tone, and hair colour. These avatars can be created easily and avoid the problems of highly similar avatars. Using modern dancing as a skill to learn, we compared the effects of visually similar and dissimilar avatars, considering both learning on a screen (n=59) and in VR (n=38). Our results indicate that minimal avatar customisation leads to significantly more vivid visual imagery of the dance moves than dissimilar avatars. We analyse variables affecting interindividual differences, discuss the results in relation to theory, and derive design implications for psychomotor training in virtual environments.
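
Matching only gender, skin tone, and hair colour amounts to a lookup over a small library of pre-built avatars. The Python sketch below illustrates the idea under that assumption; the asset names and the nearest-match fallback are invented, not the study's avatar system.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Features:
        """The three visual features used for 'minimal' customisation."""
        gender: str
        skin_tone: str
        hair_colour: str

    # Hypothetical pre-built avatar library keyed by the three features.
    AVATAR_LIBRARY = {
        Features("female", "light", "brown"): "avatar_f_light_brown.glb",
        Features("female", "dark", "black"): "avatar_f_dark_black.glb",
        Features("male", "light", "blond"): "avatar_m_light_blond.glb",
    }

    def pick_instructor_avatar(learner: Features) -> str:
        """Exact match if available, otherwise the avatar sharing most features."""
        if learner in AVATAR_LIBRARY:
            return AVATAR_LIBRARY[learner]
        def overlap(candidate: Features) -> int:
            return sum([candidate.gender == learner.gender,
                        candidate.skin_tone == learner.skin_tone,
                        candidate.hair_colour == learner.hair_colour])
        return AVATAR_LIBRARY[max(AVATAR_LIBRARY, key=overlap)]

    if __name__ == "__main__":
        print(pick_instructor_avatar(Features("male", "light", "brown")))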

Authors
Isabel Sophie Fitton
University of Bath, Bath, United Kingdom
Christopher Clarke
University of Bath, Bath, United Kingdom
Jeremy Dalton
PwC, London, United Kingdom
Michael J. Proulx
University of Bath, Bath, United Kingdom
Christof Lutteroth
University of Bath, Bath, United Kingdom
Paper URL

https://doi.org/10.1145/3544548.3580944

Collaborative Online Learning with VR Video: Roles of Collaborative Tools and Shared Video Control
Abstract

Virtual Reality (VR) has noteworthy educational potential, providing immersive and collaborative environments. As an alternative, cost-effective way of delivering realistic environments in VR, the use of 360-degree videos in immersive VR (VR videos) has received growing attention. Although many studies have reported positive learning experiences with VR videos, little is known about how collaborative learning performs on VR video viewing systems. In this study, we implemented two collaborative VR video viewing modes that differ in how group video control is handled: synchronized, shared control (Sync mode) and non-synchronized, individual control (Non-sync mode), compared against a conventional VR video viewing setting (Basic mode). We conducted a within-subject study (N = 54) in a lab-simulated remote learning environment. Our results show that the collaborative VR video modes (Sync and Non-sync) improve users’ learning experiences and collaboration quality, especially with shared video control. Our findings provide directions for designing and employing collaborative VR video tools in online learning environments.
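
Shared video control ("Sync mode") essentially means one playback state broadcast to every viewer, whereas Non-sync mode keeps per-viewer state. The minimal Python sketch below illustrates the shared-state idea with an in-process stand-in for a networked VR system; all names are hypothetical.

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class PlaybackState:
        playing: bool = False
        position_s: float = 0.0

    class SyncSession:
        """Sync mode: any member's control action is applied to every member.
        In Non-sync mode, each viewer would instead keep a private state."""
        def __init__(self, members: list[str]):
            self.state = PlaybackState()
            self.members = members

        def control(self, member: str, *, playing: bool | None = None,
                    seek_s: float | None = None) -> None:
            if playing is not None:
                self.state = replace(self.state, playing=playing)
            if seek_s is not None:
                self.state = replace(self.state, position_s=seek_s)
            # Broadcast the shared state so all viewers stay in lockstep.
            for m in self.members:
                print(f"[{m}] at {self.state.position_s:.0f}s, "
                      f"{'playing' if self.state.playing else 'paused'} (by {member})")

    if __name__ == "__main__":
        session = SyncSession(["alice", "bob", "carol"])
        session.control("alice", playing=True)
        session.control("bob", seek_s=42.0)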

Authors
Qiao Jin
University of Minnesota, Minneapolis, Minnesota, United States
Yu Liu
University of Minnesota, Minneapolis, Minnesota, United States
Ruixuan Sun
University of Minnesota, Minneapolis, Minnesota, United States
Chen Chen
University of Minnesota, Minneapolis, Minnesota, United States
Puqi Zhou
George Mason University, Fairfax, Virginia, United States
Bo Han
George Mason University, Fairfax, Virginia, United States
Feng Qian
University of Minnesota, Minneapolis, Minnesota, United States
Svetlana Yarosh
University of Minnesota, Minneapolis, Minnesota, United States
Paper URL

https://doi.org/10.1145/3544548.3581395

“Moment to Moment”: A View From the Front Lines with Computing Ethics Teaching Assistants
Abstract

The HCI research community has long centered ethics in HCI research and practice. This interest has persisted as scholars highlight the need for more situated understandings and deeper integration of ethics into HCI. In parallel, HCI scholars and students have become increasingly involved in teaching computing ethics across many different university contexts, bringing in valuable perspectives informed by the connections between HCI and the socio-technical subject matter of computing ethics. Yet explicitly bringing these two threads together – examining the teaching of ethics through an HCI research lens – remains nascent. This paper integrates work in HCI and computing education to focus on the role and experience of computing ethics teaching assistants (CETAs), who are increasingly involved in ethics instruction and whose perspectives are predominantly missing in existing literature spanning HCI and computing education. Drawing on HCI theories and methods, our qualitative study of eleven CETAs at two American universities makes three contributions to the HCI literature. First, we build an understanding of who these TAs are with respect to the unique position of teaching computing ethics. Second, we characterize how CETAs’ teaching and learning is situated and shaped within different communities and institutional contexts. Finally, we suggest several implications for the design of ethics instruction within undergraduate computing programs. More broadly, our work can be viewed as a call to action, encouraging HCI scholars to play a more significant role in studying and designing the teaching and learning of computing ethics.

Authors
Cass Zegura
Unaffiliated, Decatur, Georgia, United States
Ben Rydal Shapiro
Georgia State University, Atlanta, Georgia, United States
Robert J. MacDonald
Georgia Institute of Technology, Atlanta, Georgia, United States
Jason Borenstein
Georgia Tech, Atlanta, Georgia, United States
Ellen Zegura
Georgia Tech, Atlanta, Georgia, United States
Paper URL

https://doi.org/10.1145/3544548.3581572
