While smartwatches are now widely adopted, their input and output space remains limited by their screen size. We present StrapDisplays—interactive watchbands with embedded display and touch technologies—that enhance commodity watches and extend their input and output capabilities. After introducing the physical design space of these StrapDisplays, we explore how to combine a smartwatch and straps into a synergistic Watch+Strap system. Specifically, we propose multiple interface concepts that consider promising content distributions, interaction techniques, usage types, and display roles. For example, the straps can enrich watch apps, display visualizations, provide glanceable feedback, or help avoid occlusion issues. Further, we provide a modular research platform incorporating three StrapDisplay prototypes and a flexible web-based software architecture, demonstrating the feasibility of our approach. Early brainstorming sessions with 15 participants informed our design process, while later interviews with six experts supported our concepts and provided valuable feedback for future developments.
https://doi.org/10.1145/3313831.3376199
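To make the Watch+Strap content-distribution idea more tangible, below is a minimal Python sketch of a rule that routes content items either to the watch face or to strap displays according to their role (primary, glanceable, visualization). All names and routing rules here are hypothetical illustrations; the paper's actual platform is a web-based software architecture driving three hardware prototypes.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical display identifiers and content roles; the real StrapDisplays
# platform is more elaborate than this sketch.
WATCH, UPPER_STRAP, LOWER_STRAP = "watch", "upper_strap", "lower_strap"

@dataclass
class ContentItem:
    name: str
    role: str          # e.g. "primary", "glanceable", "visualization"
    needs_touch: bool  # whether the item requires direct manipulation

def distribute(items: List[ContentItem]) -> Dict[str, List[str]]:
    """Assign content by role: primary or touch-heavy content stays on the
    watch, glanceable feedback and visualizations move to the straps so they
    do not occlude the watch face."""
    layout = {WATCH: [], UPPER_STRAP: [], LOWER_STRAP: []}
    for item in items:
        if item.role == "primary" or item.needs_touch:
            layout[WATCH].append(item.name)
        elif item.role == "glanceable":
            layout[UPPER_STRAP].append(item.name)
        else:  # extended visualizations and similar secondary content
            layout[LOWER_STRAP].append(item.name)
    return layout

if __name__ == "__main__":
    demo = [
        ContentItem("music controls", "primary", True),
        ContentItem("step-count sparkline", "visualization", False),
        ContentItem("notification dot", "glanceable", False),
    ]
    print(distribute(demo))
```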
Input techniques have drawn sustained attention as personal computers continue to miniaturize. In this paper, we present BlyncSync, a novel multi-modal gesture set that leverages the synchronicity of touch and blink events to augment the input vocabulary of smartwatches with a rapid gesture while, at the same time, offering a solution to the false-activation problem of blink-based input. BlyncSync contributes the concept of a mutual delimiter, where two modalities are used to jointly delimit the intention of each other's input. A study shows that BlyncSync is 33% faster than using a baseline input delimiter (a physical smartwatch button), with only 150 ms of overhead compared to traditional touch events. Furthermore, our data indicates that the gesture can be tuned to elicit a true-positive rate of 97% and a false-positive rate of 1.68%.
https://doi.org/10.1145/3313831.3376132
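The mutual-delimiter idea can be sketched as a simple temporal match between touch and blink event streams: a combined gesture fires only when a touch and a blink fall within a shared synchronicity window, so each modality confirms the other. The window value below is a placeholder, not the paper's tuned threshold (the 150 ms figure in the abstract refers to overhead, not the matching window).

```python
from bisect import bisect_left
from typing import List

def blyncsync_triggers(touch_times_ms: List[float],
                       blink_times_ms: List[float],
                       window_ms: float = 150.0) -> List[float]:
    """Return the touch timestamps that are mutually delimited by a blink,
    i.e. a blink occurred within +/- window_ms of the touch. Touches without
    a nearby blink fall through as ordinary touch events, and blinks without
    a nearby touch are ignored, which suppresses false blink activations."""
    blinks = sorted(blink_times_ms)
    combined = []
    for t in sorted(touch_times_ms):
        i = bisect_left(blinks, t)
        # Check the closest blink on either side of the touch timestamp.
        for j in (i - 1, i):
            if 0 <= j < len(blinks) and abs(blinks[j] - t) <= window_ms:
                combined.append(t)
                break
    return combined

if __name__ == "__main__":
    touches = [1000.0, 2500.0, 4000.0]
    blinks = [1100.0, 3990.0, 6000.0]   # the blink at 6000 ms matches no touch
    print(blyncsync_triggers(touches, blinks))  # -> [1000.0, 4000.0]
```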
Completing tasks on smartwatches often requires multiple gestures due to the small size of the touchscreens and the lack of a sufficient number of touch controls that are easily accessible with a finger. We propose to increase the number of functions that can be triggered with a touch gesture by enabling a smartwatch to identify which finger is being used. We developed MagTouch, a method that uses a magnetometer embedded in an off-the-shelf smartwatch. It measures the magnetic field of a magnet fixed to a ring worn on the middle finger. By combining the measured magnetic field and the touch location on the screen, MagTouch recognizes which finger is being used. Our tests demonstrated that MagTouch can differentiate among the three fingers used to make contact with a success rate of 95.03%.
https://doi.org/10.1145/3313831.3376234
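As an illustration of the MagTouch idea, the sketch below classifies which finger touched the screen from a five-dimensional feature vector (3-axis magnetometer reading plus 2-D touch location) using a simple nearest-centroid rule on synthetic data. The feature centers and the choice of classifier are assumptions for demonstration; the paper's actual pipeline may differ.

```python
import numpy as np

class MagTouchSketch:
    """Minimal nearest-centroid sketch of the MagTouch idea: classify the
    touching finger from [mag_x, mag_y, mag_z, touch_x, touch_y]."""

    def fit(self, samples: np.ndarray, finger_labels: np.ndarray) -> "MagTouchSketch":
        # samples: (n, 5) array; one centroid per finger label.
        self.labels_ = np.unique(finger_labels)
        self.centroids_ = np.stack(
            [samples[finger_labels == lbl].mean(axis=0) for lbl in self.labels_]
        )
        return self

    def predict(self, samples: np.ndarray) -> np.ndarray:
        # Assign each sample to the finger whose centroid is closest.
        d = np.linalg.norm(samples[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.labels_[np.argmin(d, axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic training data for three fingers; centers are hypothetical.
    centers = np.array([[30, 0, 5, 100, 200],
                        [80, 10, 5, 120, 210],
                        [40, 60, 5, 140, 220]], dtype=float)
    X = np.vstack([c + rng.normal(0, 3, size=(50, 5)) for c in centers])
    y = np.repeat(np.array(["index", "middle", "ring"]), 50)
    clf = MagTouchSketch().fit(X, y)
    print(clf.predict(centers))  # -> ['index' 'middle' 'ring']
```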
Bezel-based gestures expand the interaction space of touchscreen devices (e.g., smartphones and smartwatches). Existing work has mainly focused on bezel-initiated swipe (BIS) on square screens. To investigate the usability of BIS on round smartwatches, we design six circular bezel layouts by dividing the bezel into 6, 8, 12, 16, 24, and 32 segments. We evaluate the user performance of BIS on these layouts in an eyes-free situation. The results show that the performance of BIS is highly orientation dependent and varies significantly among users. Using a Support Vector Machine (SVM) model significantly increases accuracy on the 6-, 8-, 12-, and 16-segment layouts. We then compare the performance of personal and general SVM models and find that personal models significantly improve accuracy for the 8-, 12-, 16-, and 24-segment layouts. Lastly, we discuss potential smartwatch applications enabled by BIS.
https://doi.org/10.1145/3313831.3376393
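A hedged sketch of the SVM step: the code below trains scikit-learn's SVC to map the start angle of a bezel-initiated swipe (encoded as cos/sin so the circle wraps correctly) to one of the bezel segments, using synthetic swipes. Treating the start angle as the only feature, along with the segment count and noise level, is an assumption for illustration; a "personal" model would simply be trained on one user's swipes instead of pooled data.

```python
import numpy as np
from sklearn.svm import SVC

def synth_swipes(n_segments: int, per_segment: int, jitter_deg: float, rng):
    """Generate synthetic bezel-initiated swipes described only by their
    start angle on the round bezel (a deliberately simple feature choice)."""
    seg_width = 360.0 / n_segments
    angles, labels = [], []
    for seg in range(n_segments):
        center = seg * seg_width + seg_width / 2.0
        angles.extend((center + rng.normal(0, jitter_deg, per_segment)) % 360.0)
        labels.extend([seg] * per_segment)
    # Encode the angle as (cos, sin) so 359 degrees and 1 degree end up close.
    rad = np.radians(np.array(angles))
    X = np.column_stack([np.cos(rad), np.sin(rad)])
    return X, np.array(labels)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X_train, y_train = synth_swipes(n_segments=8, per_segment=40, jitter_deg=10.0, rng=rng)
    X_test, y_test = synth_swipes(n_segments=8, per_segment=10, jitter_deg=10.0, rng=rng)
    model = SVC(kernel="rbf").fit(X_train, y_train)
    acc = (model.predict(X_test) == y_test).mean()
    print(f"segment accuracy: {acc:.2f}")
```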
Interacting with non-touch screens such as TVs or public displays can be difficult and inefficient. We propose WATouCH, a novel method that localizes a smartwatch on a display and allows direct input by turning the smartwatch into a tangible controller. This low-cost solution leverages sensor fusion of the built-in inertial measurement unit (IMU) and the photoplethysmogram (PPG) sensor that smartwatches use for heart rate monitoring. Specifically, WATouCH tracks the smartwatch movement using IMU data and corrects its location error caused by drift using the PPG responses to a dynamic visual pattern on the display. We conducted a user study on two tasks -- a point-and-click task and a line-tracing task -- to evaluate system usability and user performance. The results suggest that our sensor fusion mechanism effectively confined IMU-based localization error, achieved encouraging targeting and tracing precision, and was well received by participants, opening up new opportunities for interaction.
https://doi.org/10.1145/3313831.3376198
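The core fusion loop can be caricatured in a few lines: dead-reckon the watch position from noisy IMU displacement steps (which drift over time) and blend the estimate toward an absolute fix whenever the PPG sensor recognizes the display's dynamic visual pattern at a known location. The blending factor, fix rate, and noise model below are illustrative, not the paper's actual filter.

```python
import numpy as np

def fuse_track(true_path, imu_noise, ppg_fix_every, alpha, rng):
    """Toy 2-D sketch of the WATouCH idea: integrate noisy IMU displacement
    deltas, and whenever a PPG-based location fix arrives, blend the position
    estimate toward it. Returns the mean localization error over the path."""
    est = true_path[0].copy()
    errors = []
    for k in range(1, len(true_path)):
        true_delta = true_path[k] - true_path[k - 1]
        est += true_delta + rng.normal(0, imu_noise, size=2)  # drifting IMU step
        if k % ppg_fix_every == 0:
            est = (1 - alpha) * est + alpha * true_path[k]    # PPG location fix
        errors.append(np.linalg.norm(est - true_path[k]))
    return np.mean(errors)

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 400)
    path = np.column_stack([200 * np.cos(t), 120 * np.sin(t)])  # trace on display
    print("IMU only     :", fuse_track(path, 1.5, 10**9, 0.0, np.random.default_rng(2)))
    print("IMU + PPG fix:", fuse_track(path, 1.5, 25, 0.5, np.random.default_rng(2)))
```

Running the two calls with the same random seed shows how the periodic PPG fixes bound the otherwise growing dead-reckoning error, mirroring the abstract's claim that the fusion mechanism confines IMU-based localization drift.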