1. Bodily Signals

Conference name
UIST 2024
Empower Real-World BCIs with NIRS-X: An Adaptive Learning Framework that Harnesses Unlabeled Brain Signals
Abstract

Brain-Computer Interfaces (BCIs) using functional near-infrared spectroscopy (fNIRS) hold promise for future interactive user interfaces due to their ease of deployment and declining cost. However, they typically require a separate calibration process for each user and task, which can be burdensome. Machine learning helps, but faces a data scarcity problem. Due to inherent inter-user variations in physiological data, it has been typical to create a new annotated training dataset for every new task and user. To reduce dependence on such extensive data collection and labeling, we present an adaptive learning framework, NIRS-X, to harness more easily accessible unlabeled fNIRS data. NIRS-X includes two key components: NIRSiam and NIRSformer. We use the NIRSiam algorithm to extract generalized brain activity representations from unlabeled fNIRS data obtained from previous users and tasks, and then transfer that knowledge to new users and tasks. In conjunction, we design a neural network, NIRSformer, tailored for capturing both local and global, spatial and temporal relationships in multi-channel fNIRS brain input signals. By using unlabeled data from both a previously released fNIRS2MW visual n-back dataset and a newly collected fNIRS2MW audio n-back dataset, NIRS-X demonstrates its strong adaptation capability to new users and tasks. Results show comparable or superior performance to supervised methods, making NIRS-X promising for real-world fNIRS-based BCIs.
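The abstract does not spell out how NIRSiam pretrains on unlabeled signals, so the following is only a minimal sketch of the general Siamese self-supervised pattern it refers to (two augmented views of the same unlabeled window, a shared encoder, and a stop-gradient target). The encoder, augmentation, shapes, and hyperparameters below are assumptions for illustration, not the paper's NIRSiam or NIRSformer.

```python
# Illustrative sketch only: SimSiam-style pretraining on unlabeled multi-channel
# fNIRS windows, followed (not shown) by fine-tuning on a small labeled set.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Stand-in for the paper's NIRSformer: maps (batch, channels, time) to an embedding."""
    def __init__(self, channels=8, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, dim),
        )

    def forward(self, x):
        return self.net(x)

def augment(x):
    # Simple per-channel scaling plus jitter; a generic time-series augmentation.
    return x * (1 + 0.1 * torch.randn_like(x[:, :, :1])) + 0.05 * torch.randn_like(x)

encoder = Encoder()
predictor = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 128))
opt = torch.optim.Adam(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)

def pretrain_step(x):
    """One self-supervised step on an unlabeled batch x of shape (batch, channels, time)."""
    z1, z2 = encoder(augment(x)), encoder(augment(x))
    p1, p2 = predictor(z1), predictor(z2)
    # Negative cosine similarity with a stop-gradient on the target branch.
    loss = -0.5 * (F.cosine_similarity(p1, z2.detach(), dim=-1).mean()
                   + F.cosine_similarity(p2, z1.detach(), dim=-1).mean())
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

unlabeled_batch = torch.randn(32, 8, 200)  # e.g., 8 fNIRS channels, 200 time steps
print(pretrain_step(unlabeled_batch))
```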

Authors
Liang Wang
Tufts University, Medford, Massachusetts, United States
Jiayan Zhang
Computer Science, University of San Francisco, San Francisco, California, United States
Jinyang Liu
Northeastern University, Boston, Massachusetts, United States
Devon McKeon
Google LLC, Cambridge, Massachusetts, United States
David Guy Brizan
University of San Francisco, San Francisco, California, United States
Giles Blaney
Tufts University, Medford, Massachusetts, United States
Robert J.K. Jacob
Tufts University, Medford, Massachusetts, United States
Paper URL

https://doi.org/10.1145/3654777.3676429

Video
Understanding the Effects of Restraining Finger Coactivation in Mid-Air Typing: from a Neuromechanical Perspective
Abstract

Typing in mid-air is often perceived as intuitive yet presents challenges due to finger coactivation, a neuromechanical phenomenon involving involuntary finger movements that stem from the lack of physical constraints. Previous studies have examined and addressed the impacts of finger coactivation using algorithmic approaches. This paper instead explores the neuromechanical effects of finger coactivation on mid-air typing, aiming to deepen our understanding and provide insights for improving these interactions. We used a wearable device that restrains finger coactivation as a prop to conduct two mid-air studies: a rapid finger-tapping task and a ten-finger typing task. The results revealed that restraining coactivation reduced not only mispresses, the classic error attributed to coactivation, but also, unexpectedly, motor control errors and spelling errors, which are considered non-coactivation errors. Additionally, the study evaluated the neural resources involved in motor execution using functional near-infrared spectroscopy (fNIRS), which tracked cortical arousal during mid-air typing. The findings demonstrated decreased activation in the primary motor cortex of the left hemisphere when coactivation was restrained, suggesting a diminished motor execution load. This reduction suggests that a portion of neural resources is conserved, which also potentially aligns with perceived lower mental workload and decreased frustration levels.

Authors
Hechuan Zhang
University of Chinese Academy of Sciences, Beijing, China
Xuewei Liang
Xi'an Jiaotong University, Xi'an, China
Ying Lei
East China Normal University, Shanghai, China
Yanjun Chen
Institute of Software, Chinese Academy of Sciences, Beijing, China
Zhenxuan He
Institute of Software, Chinese Academy of Sciences, Beijing, China
Yu Zhang
School of Mechanical Engineering, Xi'an, Shaanxi, China
Lihan Chen
Peking University, Beijing, China
Hongnan Lin
Institute of Software, Chinese Academy of Sciences, Beijing, China
Teng Han
Institute of Software, Chinese Academy of Sciences, Beijing, China
Feng Tian
Institute of Software, Chinese Academy of Sciences, Beijing, China
Paper URL

https://doi.org/10.1145/3654777.3676441

Video
What is Affective Touch Made Of? A Soft Capacitive Sensor Array Reveals the Interplay between Shear, Normal Stress and Individuality
Abstract

Humans physically express emotion by modulating parameters that register on mammalian skin mechanoreceptors, but are unavailable in current touch-sensing technology. Greater sensory richness combined with data on affect-expression composition is a prerequisite to estimating affect from touch, with applications including physical human-robot interaction. To examine shear alongside more easily captured normal stresses, we tailored recent capacitive technology to attain performance suitable for affective touch, creating a flexible, reconfigurable and soft 36-taxel array that detects multitouch normal and 2-dimensional shear at ranges of 1.5–43 kPa and ±0.3–3.8 kPa respectively, wirelessly at ~43 Hz (1548 taxels/s). In a deep-learning classification of 9 gestures (N=16), inclusion of shear data improved accuracy to 88%, compared to 80% with normal stress data alone, confirming shear stress's expressive centrality. Using this rich data, we analyse the interplay of sensed-touch features, gesture attributes and individual differences, propose affective-touch sensing requirements, and share technical considerations for performance and practicality.
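To make the shear-versus-normal comparison concrete, here is a rough sketch of how per-taxel readings (normal stress plus 2-D shear) could be arranged as a 3-channel frame and classified over a short window. The 6x6 grid layout, window length, and model are assumptions for illustration only; the abstract does not describe the paper's actual pipeline.

```python
# Illustrative sketch only: classifying touch-gesture windows from a 36-taxel array
# with channels = [normal, shear_x, shear_y]; not the paper's model or data format.
import torch
import torch.nn as nn

NUM_GESTURES = 9   # from the study: 9 touch gestures
GRID = 6           # assumed 6x6 arrangement of the 36 taxels
WINDOW = 43        # assumed ~1 s of frames at ~43 Hz

class GestureCNN(nn.Module):
    """Input: (batch, window, 3, 6, 6); spatial CNN per frame, GRU over the window."""
    def __init__(self):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.temporal = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, NUM_GESTURES)

    def forward(self, x):
        b, t = x.shape[:2]
        feats = self.spatial(x.flatten(0, 1)).view(b, t, 32)  # per-frame features
        _, h = self.temporal(feats)
        return self.head(h[-1])

# Zeroing out or dropping the two shear channels would give the "normal-only"
# baseline that the abstract compares against.
model = GestureCNN()
frames = torch.randn(4, WINDOW, 3, GRID, GRID)
print(model(frames).shape)  # torch.Size([4, 9])
```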

Authors
Devyani McLaren
University of British Columbia, Vancouver, British Columbia, Canada
Jian Gao
University of British Columbia, Vancouver, British Columbia, Canada
Xiulun Yin
University of British Columbia, Vancouver, British Columbia, Canada
Rúbia Reis Guerra
University of British Columbia, Vancouver, British Columbia, Canada
Preeti Vyas
University of British Columbia, Vancouver, British Columbia, Canada
Chrys Morton
University of British Columbia, Vancouver, British Columbia, Canada
Xi Laura Cang
University of British Columbia, Vancouver, British Columbia, Canada
Yizhong Chen
University of British Columbia, Vancouver, British Columbia, Canada
Yiyuan Sun
University of British Columbia, Vancouver, British Columbia, Canada
Ying Li
University of British Columbia, Vancouver, British Columbia, Canada
John David Wyndham Madden
University of British Columbia, Vancouver, British Columbia, Canada
Karon E. MacLean
University of British Columbia, Vancouver, British Columbia, Canada
Paper URL

https://doi.org/10.1145/3654777.3676346

Video
Exploring the Effects of Sensory Conflicts on Cognitive Fatigue in VR Remappings
Abstract

Virtual reality (VR) presents significant cognitive challenges due to its immersive nature and frequent sensory conflicts. This study systematically investigates the impact of sensory conflicts induced by VR remapping techniques on cognitive fatigue, and unveils their correlation. We utilized three remapping methods (haptic repositioning, head-turning redirection, and giant resizing) to create different types of sensory conflicts, and measured perceptual thresholds to induce various intensities of the conflicts. Through experiments involving cognitive tasks along with subjective and physiological measures, we found that all three remapping methods influenced the onset and severity of cognitive fatigue, with visual-vestibular conflict having the greatest impact. Interestingly, visual-experiential/memory conflict showed a mitigating effect on cognitive fatigue, emphasizing the role of novel sensory experiences. This study contributes to a deeper understanding of cognitive fatigue under sensory conflicts and provides insights for designing VR experiences that align better with human perceptual and cognitive capabilities.

Award
Honorable Mention
Authors
Tianren Luo
Institute of Software, Beijing, China
Gaozhang Chen
Beihang University, Beijing, China
Yijian Wen
School of Artificial Intelligence, Beijing, China
Pengxiang Wang
Information Engineering College, Capital Normal University, Beijing, China
Yachun Fan
Beijing Normal University, Beijing, China
Teng Han
Institute of Software, Chinese Academy of Sciences, Beijing, China
Feng Tian
Institute of Software, Chinese Academy of Sciences, Beijing, China
Paper URL

https://doi.org/10.1145/3654777.3676439

Video