Healthcare Training

Conference Name
CHI 2024
Looking Together ≠ Seeing the Same Thing: Understanding Surgeons' Visual Needs During Intra-operative Coordination and Instruction
Abstract

Shared gaze visualizations have been found to enhance collaboration and communication outcomes in diverse HCI subfields, including collaborative work and learning. Despite the importance of gaze in surgical operations, especially when a surgeon trainer and trainee need to coordinate their actions, research on using gaze to facilitate intra-operative coordination and instruction has been limited and shows mixed implications. We performed a field observation of 8 surgeries and an interview study with 14 surgeons to understand their visual needs during operations, informing ways to leverage and augment gaze to enhance intra-operative coordination and instruction. We found that trainees have varying needs for visual guidance, which are often unfulfilled by the trainers' instructions. It is critical for surgeons to control the timing of gaze-based visualizations and to interpret gaze data effectively. We suggest overlay technologies, e.g., gaze-based summaries and depth sensing, to augment raw gaze in support of surgical coordination and instruction.

Award
Honorable Mention
Authors
Vitaliy Popov
University of Michigan, Ann Arbor, Michigan, United States
Xinyue Chen
University of Michigan, Ann Arbor, Michigan, United States
Jingying Wang
University of Michigan, Ann Arbor, Michigan, United States
Michael Kemp
Michigan Medicine, Ann Arbor, Michigan, United States
Gurjit Sandhu
Michigan Medicine, Ann Arbor, Michigan, United States
Taylor Kantor
University of Michigan, Ann Arbor, Michigan, United States
Natalie Mateju
University of Michigan, Ann Arbor, Michigan, United States
Xu Wang
University of Michigan, Ann Arbor, Michigan, United States
Paper URL

https://doi.org/10.1145/3613904.3641929

Video
Surgment: Segmentation-enabled Semantic Search and Creation of Visual Question and Feedback to Support Video-Based Surgery Learning
Abstract

Videos are prominent learning materials to prepare surgical trainees before they enter the operating room (OR). In this work, we explore techniques to enrich the video-based surgery learning experience. We propose Surgment, a system that helps expert surgeons create exercises with feedback based on surgery recordings. Surgment is powered by a few-shot-learning-based pipeline (SegGPT+SAM) to segment surgery scenes, achieving an accuracy of 92%. The segmentation pipeline enables functionalities to create visual questions and feedback desired by surgeons from a formative study. Surgment enables surgeons to 1) retrieve frames of interest through sketches, and 2) design exercises that target specific anatomical components and offer visual feedback. In an evaluation study with 11 surgeons, participants applauded the search-by-sketch approach for identifying frames of interest and found the resulting image-based questions and feedback to be of high educational value.
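As a rough illustration of the prompt-based segmentation that a SegGPT+SAM pipeline builds on, the sketch below segments one anatomical structure in a single surgery frame using Meta's Segment Anything Model (SAM). It is a minimal sketch only: the checkpoint path, frame filename, and prompt coordinates are hypothetical, and the few-shot SegGPT stage that Surgment uses to propagate expert annotations is not reproduced here.

```python
# Minimal sketch: point-prompted segmentation of one surgery frame with SAM.
# Assumptions: the segment-anything package is installed and a SAM checkpoint
# has been downloaded; file names and coordinates below are hypothetical.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM backbone (checkpoint path is an assumption).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# Read a frame exported from a surgery recording (hypothetical file).
frame = cv2.cvtColor(cv2.imread("surgery_frame.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(frame)

# Prompt SAM with a single foreground point on the structure of interest.
# In a Surgment-like pipeline such prompts would come from the few-shot
# stage rather than from hard-coded coordinates.
point_coords = np.array([[512, 384]])  # (x, y) in pixels, hypothetical
point_labels = np.array([1])           # 1 = foreground point

masks, scores, _ = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,
)
best_mask = masks[scores.argmax()]  # boolean mask of the segmented structure
print("mask pixels:", int(best_mask.sum()))
```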

Authors
Jingying Wang
University of Michigan, Ann Arbor, Michigan, United States
Haoran Tang
University of Michigan, Ann Arbor, Michigan, United States
Taylor Kantor
University of Michigan, Ann Arbor, Michigan, United States
Tandis Soltani
University of Michigan, Ann Arbor, Michigan, United States
Vitaliy Popov
University of Michigan, Ann Arbor, Michigan, United States
Xu Wang
University of Michigan, Ann Arbor, Michigan, United States
Paper URL

https://doi.org/10.1145/3613904.3642587

Video
MR Microsurgical Suture Training System with Level-Appropriate Support
Abstract

The integration of advanced technologies in healthcare necessitates systems that accommodate the daily routines of medical practice. Neurosurgeons, in particular, require extensive long-term practice in microsurgical suturing, even within a busy clinical routine. In collaboration with neurosurgeons, this study developed a Mixed Reality system to support self-training in microscopic suturing. Based on the neurosurgeons' opinions, we implemented a level-appropriate microsurgical suture training system. For novices, the system offers shadow-matching training to support the practice of precise movements in the high-sensitivity environment of the microscope. For intermediates, it provides real-time feedback that allows users to practice attention to detail. For evaluation, we tested the novice system with students who had no medical background and the intermediate system with neurosurgery residents. The effectiveness of the system was demonstrated through the experimental results and subsequent discussion.

Authors
Yuka Tashiro
Tokyo Institute of Technology, Meguro, Tokyo, Japan
Shio Miyafuji
Tokyo Institute of Technology, Tokyo, Japan
Yusuke Kojima
Tokyo Institute of Technology, Meguro, Japan
Satoshi Kiyofuji
The University of Tokyo, Bunkyo, Tokyo, Japan
Taichi Kin
The University of Tokyo, Tokyo, Japan
Takeo Igarashi
The University of Tokyo, Tokyo, Japan
Hideki Koike
Tokyo Institute of Technology, Tokyo, Japan
Paper URL

https://doi.org/10.1145/3613904.3642324

Video
Facilitating Virtual Reality Integration in Medical Education: A Case Study of Acceptability and Learning Impact in Childbirth Delivery Training
Abstract

Advancements in Virtual Reality (VR) technology have opened new frontiers in medical education, igniting interest among medical educators in incorporating it into the mainstream curriculum to complement traditional training modalities such as manikin training. Despite the numerous VR simulators on the market, their uptake in medical education remains limited. This paper explores the acceptability and educational effectiveness of VR in the context of vaginal childbirth delivery training, with the simulator providing a walkthrough of the second and third stages of labour, and contrasts it with established manikin-based methods. We conducted a large-scale empirical study with 117 medical students, revealing a significant 24.9% improvement in knowledge scores when using VR compared to the manikin. However, VR received significantly lower self-reported feasibility scores in Confidence, Usability, Enjoyment, Feedback, and Presence, indicating low acceptance. The study provides critical insights into the relationship between technological innovation and educational impact, guiding the future integration of VR into medical training curricula.

Authors
Chang Liu
National University of Singapore, Singapore, Singapore
Felicia Fang-Yi Tan
National University of Singapore, Singapore, Singapore
Shengdong Zhao
National University of Singapore, Singapore, Singapore
Abhiram Kanneganti
National University Hospital, Singapore, Singapore
Gosavi Arundhati Tushar
National University of Singapore, Singapore, Singapore
Eng Tat Khoo
National University of Singapore, Singapore, Singapore
Paper URL

https://doi.org/10.1145/3613904.3642100

Video
"I'd be watching him contour till 10 o'clock at night'': Understanding Tensions between Teaching Methods and Learning Needs in Healthcare Apprenticeship
Abstract

Apprenticeship is the predominant method for transferring specialized medical skills, yet the dynamics between faculty and residents, including methods of feedback exchange, are under-explored. We specifically investigate contouring, the outlining of tumors in preparation for radiotherapy, a critical skill that, when performed poorly, severely degrades patient survival. Interviews and design-thinking workshops (N = 4 faculty, 6 residents) revealed a misalignment between teaching methods and residents, who desired timely, relevant, and diverse feedback. We further discuss the reasons: overlapping learning content and strategies to ease tensions between clinical and teaching duties, and a lack of support for the exchange of cognitive processes. A follow-up survey study (N = 67 practitioners from 31 countries), which contained annotation and sketching tasks, provided diverse perspectives on effective feedback elements. Lastly, we present sociotechnical implications for supporting faculty's teaching duties and learners' cognitive models, such as systematically leveraging senior learners to provide case-based guidance and supporting a two-way flow of cognitive information via in-situ video snippets.

Award
Honorable Mention
Authors
Matin Yarmand
University of California San Diego, La Jolla, California, United States
Chen Chen
University of California San Diego, La Jolla, California, United States
Kexin Cheng
UC San Diego, La Jolla, California, United States
James D. Murphy
University of California San Diego, La Jolla, California, United States
Nadir Weibel
UC San Diego, La Jolla, California, United States
Paper URL

https://doi.org/10.1145/3613904.3642453

Video