For evaluations of 2D target selection using Fitts' law, ISO 9241-411 recommends calculating the effective target width (W_e) from the univariate standard deviation of selection coordinates. Related research proposed using a bivariate standard deviation instead; however, that proposal was tested under only a single speed-accuracy bias condition, limiting the assessment. We compared the univariate and bivariate techniques in a 2D Fitts' law experiment with three speed-accuracy biases and 346 crowdworkers. Calculating W_e from the univariate standard deviation yielded higher model correlations across all bias conditions and produced more stable throughput across the biases. These findings also held when using randomly sampled subsets of the participant data. We therefore recommend that future research calculate W_e from the univariate standard deviation for fair performance evaluations. We also found only trivial effects of using nominal versus effective amplitude and of adopting different perspectives of the task axis.
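The univariate/bivariate distinction can be made concrete with a short sketch. The code below assumes the standard ISO 9241-411 formulation (W_e = 4.133 × SD, Shannon index of difficulty, throughput = ID_e / MT); the endpoint data, amplitude `A`, and movement time `MT` are synthetic placeholders, not values from the study.

```python
import numpy as np

# Synthetic 2D selection endpoints (x, y) around a target centre;
# A (movement amplitude) and MT (mean movement time, s) are illustrative.
rng = np.random.default_rng(0)
points = rng.normal(loc=[0.0, 0.0], scale=[12.0, 9.0], size=(200, 2))
A, MT = 256.0, 0.6

# Univariate: SD of coordinates along the task axis only (here, x).
sd_uni = np.std(points[:, 0], ddof=1)
We_uni = 4.133 * sd_uni

# Bivariate: SD of Euclidean distances from the endpoint centroid,
# which pools deviation in both dimensions.
centroid = points.mean(axis=0)
dists = np.linalg.norm(points - centroid, axis=1)
sd_bi = np.sqrt(np.sum(dists ** 2) / (len(points) - 1))
We_bi = 4.133 * sd_bi

def throughput(A, We, MT):
    """Shannon formulation: ID_e = log2(A / W_e + 1), TP = ID_e / MT."""
    IDe = np.log2(A / We + 1.0)  # bits
    return IDe / MT              # bits per second

print(throughput(A, We_uni, MT), throughput(A, We_bi, MT))
```

Because the bivariate deviation aggregates spread in both dimensions, it is never smaller than the on-axis deviation, so it yields a larger W_e and hence a lower throughput estimate for the same data.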
Smart yarns hold the potential to transform everyday textiles into functional platforms, yet current methods remain constrained. These include conductive yarns made from silver or stainless steel, which retain the feel of conventional yarns but offer limited functionality, and PCB-based solutions, which add capability at the cost of bulk and rigidity. We present Circuit2Yarn, a fabrication framework that transforms planar printed circuits into flexible yarns by rolling copper-traced TPU films with soldered surface-mount components, preserving the capabilities of rigid electronics while producing yarn-like forms suitable for textile integration. We demonstrate yarns as thin as 0.8 mm that integrate LEDs and sensors, including temperature, humidity, light, IMU, and capacitive sensing modules, enabling applications ranging from smart garments and interactive musical instruments to responsive tea bags. Characterization confirms durability under bending and stretching. By rolling planar circuits into yarns, Circuit2Yarn paves the way toward comfortable, multifunctional, and interactive textiles in everyday life.
Ensuring timely takeover in conditionally autonomous vehicles presents a significant challenge, especially when drivers are distracted by non-driving-related tasks or are in suboptimal emotional states. Existing driver monitoring systems struggle with a trade-off between practicality and reliability. Physiological sensors are intrusive, vision-based methods are sensitive to occlusions and variable lighting, and current multimodal learning approaches often rely on simple fusion strategies that fail to reconcile heterogeneous data. We introduce MUST (Multimodal Unified Smartwatch-based Takeover), a framework that predicts driver state and takeover performance using unobtrusive smartwatch signals. MUST employs an asymmetric causal fusion mechanism to model the interplay between driver behavior and emotion. The performance of the architecture was validated in diverse simulator environments reflecting real-world driving conditions, demonstrating robust driver state estimation and takeover prediction. This work establishes the smartwatch as a practical tool for adaptive takeover support, enabling reliable readiness assessment without intrusive hardware or fragile vision systems.
DuoTouch is a passive attachment for capacitive touch panels that adds tangible input while minimizing content occlusion and loss of input area. It uses two contact footprints and two traces to encode motion as binary sequences and runs on unmodified devices through standard touch APIs. We present two configurations with paired decoders: an aligned configuration that maps fixed-length codes to discrete commands and a phase-shifted configuration that estimates direction and distance from relative timing. To characterize the system's reliability, we derive a sampling-limited bound that links actuation speed, internal trace width, and device touch sampling rate. Through technical evaluations on a smartphone and a touchpad, we report performance metrics that describe the relationship between these parameters and decoding accuracy. Finally, we demonstrate the versatility of DuoTouch by embedding the mechanism into various form factors, including a hand strap, a phone ring holder, and touchpad add-ons.
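One way to reason about a sampling-limited bound of this kind is a Nyquist-style dwell argument: a trace of width w moving at speed v stays over a contact point for w / v seconds, and reliable decoding requires some minimum number of touch samples within that window. The sketch below illustrates this reasoning only; the function name, the requirement of two samples per contact, and the exact form of the bound are assumptions for illustration, not the paper's derived bound.

```python
def max_actuation_speed(trace_width_mm: float, sampling_rate_hz: float,
                        samples_per_contact: int = 2) -> float:
    """Illustrative sampling-limited bound (assumed form).

    A trace of width w moving at speed v remains over a sensing point
    for w / v seconds. Requiring at least k touch samples within that
    dwell time gives w / v >= k / f, i.e. v <= w * f / k (in mm/s).
    """
    return trace_width_mm * sampling_rate_hz / samples_per_contact

# e.g. 1 mm traces on a panel sampled at 120 Hz, requiring 2 samples
# per contact, caps decodable actuation speed at 60 mm/s.
v_max = max_actuation_speed(1.0, 120.0)
```

The qualitative takeaway matches the abstract: faster actuation demands wider traces or a higher device sampling rate.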
Ring interfaces have gained attention in wearable technology for their lightweight and hands-free design. However, their compact form factor limits them to conveying simple information, such as directions or notifications, through vibration, electrotactile, or force feedback. In this paper, we introduce HaRing, a novel haptic ring interface equipped with a 4 × 6 pin array display. This dynamic display delivers rich spatial patterns that simple vibration cannot express, effectively conveying high-dimensional information such as directions, semantic symbols, and letters. Its design enables one-handed, eyes-free interaction that does not interfere with visual tasks. We conducted a series of perceptual and user studies to demonstrate its effectiveness, showing a high recognition accuracy of over 94% for complex letters after a brief training period. We anticipate that HaRing can serve as an innovative haptic-only interface for multitasking in real-world or VR environments with high visual load.
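A pin-array pattern of this kind can be thought of as a small binary frame, one bit per pin. The sketch below is a minimal illustration under assumed conventions: a 4-row × 6-column layout, a hypothetical letter glyph, and a row-major flattening into per-pin actuation states; none of these details come from the HaRing hardware.

```python
from typing import List

# Assumed layout: 4 rows x 6 columns of pins, 1 = raised, 0 = lowered.
Frame = List[List[int]]

# Hypothetical glyph for the letter "T" on the pin grid.
LETTER_T: Frame = [
    [1, 1, 1, 1, 1, 1],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
]

def to_pin_commands(frame: Frame) -> List[int]:
    """Flatten a frame row-major into a flat list of pin states,
    as might be streamed to a pin-array driver."""
    return [pin for row in frame for pin in row]
```

Representing patterns as frames like this makes it straightforward to build libraries of directional arrows, symbols, and letters and to animate between them.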
Foldable phones naturally support displaying two applications side by side on adjacent screens. However, one-handed interaction with this dual-screen configuration faces two challenges: first, users must change grip to reach each screen; second, menus displayed on each screen take valuable screen real estate and can be difficult to reach. To address this, we investigated how the hinge of foldable phones can enrich tactile interaction and provide access to hinge menus, unifying interaction across both screens. In a first experiment, we examined how users hold a foldable phone at different fold angles and identified the thumb-accessible screen areas, validating that a hinge-based grip enhances reachability. In two subsequent experiments, we evaluated the feasibility and performance of hinge gestures, defined as touch inputs (tap, tap-tap, or swipe) executed fully or partially on the hinge. Building on these findings, we designed hinge menus that combine hinge gestures for menu activation and item selection. Our final experiment identifies different hinge menus that outperform linear menus on adjacent screens. Our findings provide practical guidelines that can immediately inform and improve interaction on current foldable phones.
The way users hold a smartphone depends on the interaction task, yet little is known about the fingers' engagement with the device's surfaces beyond the touchscreen. Such an understanding not only opens up opportunities for novel on- and off-screen interactions, but also sheds light on the device's possible physical affordances. We present a study (N=23) that examines the hands' physical engagement with the smartphone beyond the touchscreen across nine mobile interactions. Grasps were annotated from photographs, and contact regions were captured using residual heat traces from grasping the device. Our findings show that fingers and palms adopt a variety of support roles and postures when engaging with the smartphone's back and side edges. The hand-contact maps reveal distinct patterns, differing in contact frequency and placement. This work contributes an empirical characterisation of the hands' back and edge engagement, highlighting design opportunities for future smartphone usage extending beyond the touchscreen.