Brain-Computer Interfaces (BCIs) using functional near-infrared spectroscopy (fNIRS) hold promise for future interactive user interfaces due to their ease of deployment and declining cost. However, they typically require a separate calibration process for each user and task, which can be burdensome. Machine learning helps, but faces a data scarcity problem. Due to inherent inter-user variations in physiological data, it has been typical to create a new annotated training dataset for every new task and user. To reduce dependence on such extensive data collection and labeling, we present an adaptive learning framework, NIRS-X, to harness more easily accessible unlabeled fNIRS data. NIRS-X includes two key components: NIRSiam and NIRSformer. We use the NIRSiam algorithm to extract generalized brain activity representations from unlabeled fNIRS data obtained from previous users and tasks, and then transfer that knowledge to new users and tasks. In conjunction, we design a neural network, NIRSformer, tailored for capturing both local and global, spatial and temporal relationships in multi-channel fNIRS brain input signals. By using unlabeled data from both a previously released fNIRS2MW visual $n$-back dataset and a newly collected fNIRS2MW audio $n$-back dataset, NIRS-X demonstrates its strong adaptation capability to new users and tasks. Results show comparable or superior performance to supervised methods, making NIRS-X promising for real-world fNIRS-based BCIs.
https://doi.org/10.1145/3654777.3676429
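The abstract does not give implementation details, but one plausible reading of NIRSiam is a SimSiam-style objective over two augmented views of unlabeled fNIRS windows, with NIRSformer as the shared encoder. The sketch below is a minimal, hypothetical illustration in PyTorch; the encoder structure, augmentations, shapes, and all names are assumptions, not the authors' code.

```python
# Hypothetical sketch of a NIRSiam-style pretraining step (SimSiam-like).
# Shapes, augmentations, and module names are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Stand-in for NIRSformer: maps (batch, channels, time) to an embedding."""
    def __init__(self, n_channels=8, d=128):
        super().__init__()
        self.conv = nn.Conv1d(n_channels, d, kernel_size=7, padding=3)
        self.attn = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)

    def forward(self, x):                  # x: (B, C, T)
        h = self.conv(x).transpose(1, 2)   # (B, T, d) local temporal features
        h = self.attn(h)                   # global temporal relationships
        return h.mean(dim=1)               # (B, d) pooled representation

def augment(x):
    """Toy augmentation: amplitude jitter plus random channel dropout."""
    x = x + 0.01 * torch.randn_like(x)
    mask = (torch.rand(x.size(0), x.size(1), 1) > 0.1).float()
    return x * mask

encoder = Encoder()
predictor = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 128))

def siam_loss(x):
    """Negative cosine similarity with stop-gradient, as in SimSiam."""
    z1, z2 = encoder(augment(x)), encoder(augment(x))
    p1, p2 = predictor(z1), predictor(z2)
    return -(F.cosine_similarity(p1, z2.detach()).mean()
             + F.cosine_similarity(p2, z1.detach()).mean()) / 2

x = torch.randn(32, 8, 200)                # a batch of unlabeled fNIRS windows
loss = siam_loss(x)
loss.backward()
```

After pretraining on unlabeled data from previous users and tasks, the encoder would be fine-tuned (or probed) on the small labeled set available for a new user or task, which is the adaptation setting the abstract describes.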
Typing in mid-air is often perceived as intuitive yet presents challenges due to finger coactivation, a neuromechanical phenomenon involving involuntary finger movements that stems from the lack of physical constraints. Previous studies have examined and addressed the impacts of finger coactivation using algorithmic approaches. Alternatively, this paper explores the neuromechanical effects of finger coactivation on mid-air typing, aiming to deepen our understanding and provide valuable insights for improving these interactions. We utilized a wearable device that restrains finger coactivation as a prop to conduct two mid-air studies: a rapid finger-tapping task and a ten-finger typing task. The results revealed that restraining coactivation reduced not only mispresses, the classic coactivated error widely regarded as the harm caused by coactivation, but also, unexpectedly, motor control errors and spelling errors, which are typically considered non-coactivated errors. Additionally, the study evaluated the neural resources involved in motor execution using functional near-infrared spectroscopy (fNIRS), which tracked cortical arousal during mid-air typing. The findings demonstrated decreased activation in the primary motor cortex of the left hemisphere when coactivation was restrained, suggesting a diminished motor execution load. This reduction suggests that a portion of neural resources is conserved, which potentially aligns with the lower perceived mental workload and decreased frustration levels.
https://doi.org/10.1145/3654777.3676441
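As a simple illustration of the kind of analysis the activation finding implies, a paired comparison of mean oxygenated-hemoglobin (HbO) responses over left primary-motor-cortex channels between the restrained and unrestrained conditions could look like the sketch below. The data layout, effect sizes, and channel selection are placeholders, not the study's actual pipeline.

```python
# Hypothetical sketch: paired comparison of left-M1 fNIRS activation
# between restrained and unrestrained coactivation conditions.
# All values are placeholder data; this is not the study's analysis code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants = 24

# Per-participant mean HbO response over left-M1 channels, one value
# per condition (simulated here for illustration).
hbo_unrestrained = rng.normal(0.8, 0.3, n_participants)
hbo_restrained = hbo_unrestrained - rng.normal(0.2, 0.15, n_participants)

diff = hbo_unrestrained - hbo_restrained
t, p = stats.ttest_rel(hbo_unrestrained, hbo_restrained)
d = diff.mean() / diff.std(ddof=1)        # Cohen's d for paired samples
print(f"paired t({n_participants - 1}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```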
Humans physically express emotion by modulating parameters that register on mammalian skin mechanoreceptors, but are unavailable in current touch-sensing technology. Greater sensory richness combined with data on affect-expression composition is a prerequisite to estimating affect from touch, with applications including physical human-robot interaction. To examine shear alongside more easily captured normal stresses, we tailored recent capacitive technology to attain performance suitable for affective touch, creating a flexible, reconfigurable and soft 36-taxel array that detects multitouch normal and 2-dimensional shear at ranges of 1.5kPa-43kPa and $\pm$ 0.3-3.8kPa respectively, wirelessly at ~43Hz (1548 taxels/s). In a deep-learning classification of 9 gestures (N=16), inclusion of shear data improved accuracy to 88\%, compared to 80\% with normal stress data alone, confirming shear stress's expressive centrality. Using this rich data, we analyse the interplay of sensed-touch features, gesture attributes and individual differences, propose affective-touch sensing requirements, and share technical considerations for performance and practicality.
https://doi.org/10.1145/3654777.3676346
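To make the shear-versus-normal comparison concrete, a minimal classifier might treat each frame of the 36-taxel array as a 6x6 grid with three stress channels (normal, shear-x, shear-y) stacked over a time window. The sketch below is an assumed PyTorch architecture for illustration, not the paper's model; the 6x6 layout and window length are likewise assumptions.

```python
# Hypothetical sketch of a gesture classifier on the 36-taxel array.
# Assumes a 6x6 grid and three stress channels (normal, shear-x, shear-y);
# this is not the paper's actual network.
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    def __init__(self, n_stress_channels=3, n_gestures=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(n_stress_channels, 32,
                      kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),       # pool over time and the 6x6 grid
        )
        self.head = nn.Linear(32, n_gestures)

    def forward(self, x):                  # x: (B, 3, T, 6, 6) taxel frames
        return self.head(self.features(x).flatten(1))

# Ablation mirroring the paper's comparison: zero the shear channels to
# emulate a normal-stress-only sensor.
x = torch.randn(16, 3, 43, 6, 6)           # ~1 s of frames at ~43 Hz
x_normal_only = x.clone()
x_normal_only[:, 1:] = 0.0                 # drop shear-x and shear-y
model = GestureNet()
logits_full, logits_normal = model(x), model(x_normal_only)
```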
Virtual reality (VR) presents significant cognitive challenges due to its immersive nature and frequent sensory conflicts. This study systematically investigates the impact of sensory conflicts induced by VR remapping techniques on cognitive fatigue and unveils their correlation. We utilized three remapping methods (haptic repositioning, head-turning redirection, and giant resizing) to create different types of sensory conflicts, and measured perceptual thresholds to induce the conflicts at various intensities. Through experiments involving cognitive tasks along with subjective and physiological measures, we found that all three remapping methods influenced the onset and severity of cognitive fatigue, with visual-vestibular conflict having the greatest impact. Interestingly, visual-experiential/memory conflict showed a mitigating effect on cognitive fatigue, emphasizing the role of novel sensory experiences. This study contributes to a deeper understanding of cognitive fatigue under sensory conflicts and provides insights for designing VR experiences that align better with human perceptual and cognitive capabilities.
https://doi.org/10.1145/3654777.3676439
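The abstract mentions measuring perceptual thresholds for each remapping; a standard way to do this is an adaptive staircase. The sketch below shows a generic 1-up/2-down staircase converging on the ~70.7% detection point of a remapping gain; the parameters and the simulated observer are illustrative assumptions, not the study's exact protocol.

```python
# Hypothetical sketch: 1-up/2-down adaptive staircase for estimating the
# perceptual threshold of a remapping gain (e.g., head-turning redirection).
# Step sizes, stopping rule, and the simulated observer are placeholders.
import random

def run_staircase(detect, gain=0.5, step=0.05, n_reversals=8):
    """Converges on the ~70.7% detection threshold of `detect(gain)`."""
    reversals, direction, consecutive_hits = [], None, 0
    while len(reversals) < n_reversals:
        if detect(gain):                    # participant noticed the remapping
            consecutive_hits += 1
            if consecutive_hits == 2:       # two hits in a row -> make it subtler
                consecutive_hits = 0
                if direction == "up":
                    reversals.append(gain)
                direction = "down"
                gain -= step
        else:                               # miss -> make it more intense
            consecutive_hits = 0
            if direction == "down":
                reversals.append(gain)
            direction = "up"
            gain += step
    return sum(reversals) / len(reversals)  # threshold = mean of reversal points

# Simulated observer whose detection probability grows with the gain (placeholder).
threshold = run_staircase(lambda g: random.random() < min(max(g / 0.6, 0.0), 1.0))
print(f"estimated threshold gain: {threshold:.3f}")
```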