This study session has ended. Thank you for participating.
This paper investigates how a smartphone-controlled olfactory wearable might improve memory recall. We conducted a within-subjects experiment with 32 participants, comparing use of the device against a no-device control. In the experimental condition, bursts of odor were released during visuo-spatial memory navigation tasks and replayed during sleep the following night in the participants' homes. We found that, compared to control, memory performance improved when the scent wearable was used in memory tasks that involved walking in a physical space. Furthermore, participants recalled more objects and translations when re-exposed to the same scent during the recall test, in addition to during sleep. These effects were statistically significant and, in the object recall task, persisted for more than one week. This experiment demonstrates a potential practical application of olfactory interfaces that can interact with a user during wake as well as sleep to support memory.
Videos are commonly used to support learning new skills, to improve existing skills, and as a source of motivation for training. Video self-modelling (VSM) is a learning technique that improves performance and motivation by showing a user a video of themselves performing a skill at a level they have not yet achieved. Traditional VSM is data- and labour-intensive: a large amount of video footage must be collected and manually edited to create an effective self-modelling video. We address this by presenting FakeForward -- a method that uses deepfakes to create self-modelling videos from videos of other people. FakeForward turns videos of better-performing people into effective, personalised training tools by replacing their face with the user’s. We investigate how FakeForward can be effectively applied and demonstrate its efficacy in rapidly improving performance in physical exercises, as well as confidence and perceived competence in public speaking.
Self-directed learning is becoming an increasingly important skill for learners. However, learners may face difficulties such as distraction and a lack of motivation. While self-tracking technologies have the potential to address these challenges, existing tools and systems have mainly focused on tracking computer-based learning data in classroom contexts. Little is known about how students track and make sense of their learning data from non-classroom learning activities, and which types of learning data are personally meaningful for learners. In this paper, we conducted a qualitative study with 24 users of Timing, a mobile learning tracking application in China. Our findings indicate that users tracked a variety of qualitative learning data (e.g., videos, photos of learning materials, and emotions) and made sense of this data using different strategies, such as observing behavioral and contextual details in videos. We conclude with implications for designing non-classroom and non-computer-based personal learning tracking tools.
Virtual environments can support psychomotor learning by allowing learners to observe instructor avatars. Instructor avatars that look like the learner hold promise for enhancing learning; however, it is unclear whether this works for psychomotor tasks and how similar avatars need to be. We investigated ‘minimal’ customisation of instructor avatars, approximating a learner’s appearance by matching only key visual features: gender, skin tone, and hair colour. These avatars can be created easily and avoid the problems of highly similar avatars. Using modern dancing as a skill to learn, we compared the effects of visually similar and dissimilar avatars, considering both learning on a screen (n=59) and in VR (n=38). Our results indicate that minimal avatar customisation leads to significantly more vivid visual imagery of the dance moves than dissimilar avatars. We analyse variables affecting interindividual differences, discuss the results in relation to theory, and derive design implications for psychomotor training in virtual environments.
Virtual Reality (VR) has noteworthy educational potential, providing immersive and collaborative environments. As an alternative, cost-effective way of delivering realistic environments in VR, the use of 360-degree videos in immersive VR (VR videos) has received growing attention. Although many studies report positive learning experiences with VR videos, little is known about how collaborative learning performs in VR video viewing systems. In this study, we implemented two collaborative VR video viewing modes based on how group video control is handled, synchronized or shared (Sync mode) and non-synchronized or individual (Non-sync mode) video control, and compared them against a conventional VR video viewing setting (Basic mode). We conducted a within-subjects study (N = 54) in a lab-simulated remote learning environment. Our results show that the collaborative VR video modes (Sync and Non-sync) improve users’ learning experiences and collaboration quality, especially with shared video control. Our findings provide directions for designing and employing collaborative VR video tools in online learning environments.
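The Sync/Non-sync distinction above comes down to whether playback state is shared by the whole group or held per viewer. A minimal sketch of that design choice, assuming an in-memory model (class and method names are illustrative, not from the paper):

```python
class Viewer:
    """Holds one viewer's playback state for a 360-degree video."""

    def __init__(self, name: str):
        self.name = name
        self.position = 0.0   # seconds into the video
        self.playing = False


class SyncSession:
    """Sync mode: any control action is mirrored to the whole group."""

    def __init__(self, viewers):
        self.viewers = viewers

    def seek(self, seconds: float):
        for v in self.viewers:
            v.position = seconds

    def play(self):
        for v in self.viewers:
            v.playing = True


class NonSyncSession:
    """Non-sync mode: each viewer controls only their own playback."""

    def __init__(self, viewers):
        self.viewers = viewers

    def seek(self, viewer: Viewer, seconds: float):
        viewer.position = seconds

    def play(self, viewer: Viewer):
        viewer.playing = True


# In Sync mode one seek moves everyone; in Non-sync mode it does not.
alice, bob = Viewer("alice"), Viewer("bob")
SyncSession([alice, bob]).seek(42.0)
assert alice.position == bob.position == 42.0

carol, dave = Viewer("carol"), Viewer("dave")
NonSyncSession([carol, dave]).seek(carol, 42.0)
assert carol.position == 42.0 and dave.position == 0.0
```

In a deployed system the Sync path would presumably broadcast control events over the network rather than mutate shared objects directly, but the shared-versus-individual state split is the same.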
The HCI research community has long centered ethics in HCI research and practice. This interest has persisted as scholars highlight the need for more situated understandings and deeper integration of ethics into HCI. In parallel, HCI scholars and students have become increasingly involved in teaching computing ethics across many different university contexts, bringing in valuable perspectives informed by the connections between HCI and the socio-technical subject matter of computing ethics. Yet explicitly bringing these two threads together – examining the teaching of ethics through an HCI research lens – remains nascent. This paper integrates work in HCI and computing education to focus on the role and experience of computing ethics teaching assistants (CETAs), who are increasingly involved in ethics instruction and whose perspectives are predominantly missing in existing literature spanning HCI and computing education. Drawing on HCI theories and methods, our qualitative study of eleven CETAs at two American universities makes three contributions to the HCI literature. First, we build an understanding of who these TAs are with respect to the unique position of teaching computing ethics. Second, we characterize how CETAs’ teaching and learning is situated and shaped within different communities and institutional contexts. Finally, we suggest several implications for the design of ethics instruction within undergraduate computing programs. More broadly, our work can be viewed as a call to action, encouraging HCI scholars to play a more significant role in studying and designing the teaching and learning of computing ethics.