Sensing Sorcery: Novel Sensing Techniques and Systems

Conference Name
UIST 2023
CubeSense++: Smart Environment Sensing with Interaction-Powered Corner Reflector Mechanisms
Abstract

Smart environment sensing provides valuable contextual information by detecting occurrences of events such as human activities and changes of object status, enabling computers to collect personal and environmental informatics to perform timely responses to user's needs. Conventional approaches either rely on tags that require batteries and frequent maintenance, or have limited detection capabilities bounded by only a few coarsely predefined activities. In response, this paper explores corner reflector mechanisms that encode user interactions with everyday objects into structured responses to millimeter wave radar, which has the potential for integration into smart environment entities such as speakers, light bulbs, thermostats, and autonomous vehicles. We presented the design space of 3D printed reflectors and gear mechanisms, which are low-cost, durable, battery-free, and can retrofit to a wide array of objects. These mechanisms convert the kinetic energy from user interactions into rotational motions of corner reflectors which we computationally designed with a genetic algorithm. We built an end-to-end radar detection pipeline to recognize fine-grained activity information such as state, direction, rate, count, and usage based on the characteristics of radar responses. We conducted studies for multiple instrumented objects in both indoor and outdoor environments, with promising results demonstrating the feasibility of the proposed approach.
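
To make the kind of signal processing such a detection pipeline implies more concrete, here is a minimal sketch, assuming a rotating corner reflector produces a roughly periodic radar-response magnitude: rotation count and rate are recovered by peak counting. This is not the paper's pipeline; the sampling rate, peak threshold, and synthetic test signal are all assumptions.

```python
# Hypothetical sketch: recovering rotation count and rate from a periodic
# radar-response magnitude series, as a rotating corner reflector might produce.
# Not the paper's pipeline; sampling rate, threshold, and test signal are assumed.
import numpy as np
from scipy.signal import find_peaks

def estimate_rotation(magnitude: np.ndarray, fs: float):
    """magnitude: radar response magnitude over time, sampled at fs Hz."""
    # Normalize so the peak threshold is independent of absolute reflection strength.
    norm = (magnitude - magnitude.mean()) / (magnitude.std() + 1e-9)
    # Each pass of the reflector produces one peak in the response.
    peaks, _ = find_peaks(norm, height=0.5, distance=max(1, int(0.1 * fs)))
    count = len(peaks)
    rate_hz = count / (len(magnitude) / fs)
    return count, rate_hz

# Example: a synthetic 2 Hz rotation observed for 3 seconds at 100 Hz.
fs = 100.0
t = np.arange(0, 3.0, 1 / fs)
response = 1.0 + 0.8 * np.sin(2 * np.pi * 2.0 * t)
print(estimate_rotation(response, fs))  # approximately (6, 2.0)
```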

Authors
Xiaoying Yang
University of California, Los Angeles, Los Angeles, California, United States
Jacob Sayono
University of California, Los Angeles, Los Angeles, California, United States
Yang Zhang
University of California, Los Angeles, Los Angeles, California, United States
Paper URL

https://doi.org/10.1145/3586183.3606744

Video
SmartPoser: Arm Pose Estimation With a Smartphone and Smartwatch Using UWB and IMU Data
Abstract

The ability to track a user's arm pose could be valuable in a wide range of applications, including fitness, rehabilitation, augmented reality input, life logging, and context-aware assistants. Unfortunately, this capability is not readily available to consumers. Systems either require cameras, which carry privacy issues, or utilize multiple worn IMUs or markers. In this work, we describe how an off-the-shelf smartphone and smartwatch can work together to accurately estimate arm pose. Moving beyond prior work, we take advantage of more recent ultra-wideband (UWB) functionality on these devices to capture absolute distance between the two devices. This measurement is the perfect complement to inertial data, which is relative and suffers from drift. We quantify the performance of our software-only approach using off-the-shelf devices, showing it can estimate the wrist and elbow joints with a median positional error of 11.0 cm, without the user having to provide training data.
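
As an illustration of why an absolute UWB range complements drift-prone inertial data, below is a minimal one-dimensional fusion sketch, assuming a simple constant blend weight; it is not SmartPoser's estimator, and the velocities, noise levels, and update rates are made up for the example.

```python
# Hypothetical 1-D sketch of fusing relative (drifting) IMU motion with absolute
# UWB range. Not SmartPoser's estimator; the blend weight alpha is an assumption.
import numpy as np

def fuse(imu_velocity, uwb_range, dt=0.01, alpha=0.05):
    """imu_velocity: per-sample radial velocity (m/s); drifts when integrated.
    uwb_range: per-sample absolute phone-watch distance (m), np.nan when unavailable."""
    estimate = uwb_range[~np.isnan(uwb_range)][0]   # initialize from the first UWB fix
    fused = []
    for v, r in zip(imu_velocity, uwb_range):
        estimate += v * dt                          # dead reckoning (drifts over time)
        if not np.isnan(r):
            estimate += alpha * (r - estimate)      # pull back toward the absolute range
        fused.append(estimate)
    return np.array(fused)

# Example: biased velocity (drift source) corrected by 10 Hz noisy UWB ranges.
dt = 0.01
t = np.arange(0, 10, dt)
true_d = 0.4 + 0.1 * np.sin(t)
vel = np.gradient(true_d, dt) + 0.02                # +0.02 m/s bias
uwb = np.where(np.arange(len(t)) % 10 == 0,
               true_d + np.random.normal(0, 0.01, len(t)), np.nan)
print(f"mean abs error: {np.abs(fuse(vel, uwb, dt) - true_d).mean():.3f} m")
```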

Authors
Nathan DeVrio
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Vimal Mollyn
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Chris Harrison
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

https://doi.org/10.1145/3586183.3606821

Video
PressurePick: Muscle Tension Estimation for Guitar Players Using Unobtrusive Pressure Sensing
Abstract

When learning to play an instrument, it is crucial for the learner's muscles to be in a relaxed state when practicing. Identifying which parts of a song lead to increased muscle tension requires self-awareness during an already cognitively demanding task. In this work, we investigate unobtrusive pressure sensing for estimating muscle tension while practicing songs with the guitar. First, we collected data from twelve guitarists. Our apparatus consisted of three pressure sensors (one on each side of the guitar pick and one on the guitar neck) to determine the sensor that is most suitable for automatically estimating muscle tension. Second, we extracted features from the pressure time series that are indicative of muscle tension. Third, we present the hardware and software design of our PressurePick prototype, which is directly informed by the data collection and subsequent analysis.
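
The abstract does not list the extracted features, so the following is only a generic sketch of windowed pressure-time-series features that might track grip tension; the window length and feature choices are assumptions, not PressurePick's design.

```python
# Hypothetical sketch of windowed features over a pick-pressure time series.
# Not PressurePick's feature set; window length and feature choices are assumed.
import numpy as np

def pressure_features(pressure: np.ndarray, fs: float = 100.0, win_s: float = 2.0):
    """Return (mean, std, mean |derivative| per second) for each non-overlapping window."""
    win = int(win_s * fs)
    feats = []
    for start in range(0, len(pressure) - win + 1, win):
        w = pressure[start:start + win]
        feats.append((
            w.mean(),                          # sustained grip force
            w.std(),                           # force variability
            np.mean(np.abs(np.diff(w))) * fs,  # abruptness of force changes
        ))
    return np.array(feats)

# Example: 10 s of synthetic pressure whose second half is higher and shakier.
relaxed = 1.0 + 0.05 * np.random.randn(500)
tense = 2.5 + 0.30 * np.random.randn(500)
print(pressure_features(np.concatenate([relaxed, tense])))
```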

Authors
Andreas Rene Fender
ETH Zürich, Zurich, Switzerland
Derek Alexander Witzig
ETH Zürich, Zurich, Switzerland
Max Möbus
ETH Zürich, Zurich, Switzerland
Christian Holz
ETH Zürich, Zurich, Switzerland
Paper URL

https://doi.org/10.1145/3586183.3606742

Video
SUPREYES: SUPer Resolution for EYES Using Implicit Neural Representation Learning
Abstract

We introduce SUPREYES – a novel self-supervised method to increase the spatio-temporal resolution of gaze data recorded using low(er)-resolution eye trackers. Despite continuing advances in eye tracking technology, the vast majority of current eye trackers – particularly mobile ones and those integrated into mobile devices – suffer from low-resolution gaze data, thus fundamentally limiting their practical usefulness. SUPREYES learns a continuous implicit neural representation from low-resolution gaze data to up-sample the gaze data to arbitrary resolutions. We compare our method with commonly used interpolation methods on arbitrary scale super-resolution and demonstrate that SUPREYES outperforms these baselines by a significant margin. We also test on the sample downstream task of gaze-based user identification and show that our method improves the performance of original low-resolution gaze data and outperforms other baselines. These results are promising as they open up a new direction for increasing eye tracking fidelity as well as enabling new gaze-based applications without the need for new eye tracking equipment.
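
A minimal sketch of the core idea, an implicit neural representation that maps a normalized timestamp to a gaze point and can then be queried at arbitrary temporal resolution, is shown below; the network size, training setup, and synthetic data are assumptions and do not reflect SUPREYES's actual architecture.

```python
# Hypothetical sketch of an implicit neural representation for gaze upsampling:
# an MLP maps a normalized timestamp to a 2-D gaze point, is fit to the
# low-resolution samples, and is then queried on a denser time grid.
# Not SUPREYES's architecture; sizes, optimizer, and data are assumptions.
import torch
import torch.nn as nn

class GazeINR(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),               # (x, y) gaze coordinates
        )

    def forward(self, t):                       # t: (N, 1) timestamps in [0, 1]
        return self.net(t)

# Fit to sparse (low-rate) gaze samples, then query a 10x denser time grid.
t_lo = torch.linspace(0, 1, 30).unsqueeze(1)
gaze_lo = torch.stack([torch.sin(6 * t_lo[:, 0]), torch.cos(6 * t_lo[:, 0])], dim=1)

model = GazeINR()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    nn.functional.mse_loss(model(t_lo), gaze_lo).backward()
    opt.step()

t_hi = torch.linspace(0, 1, 300).unsqueeze(1)
print(model(t_hi).detach().shape)               # torch.Size([300, 2])
```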

Authors
Chuhan Jiao
University of Stuttgart, Stuttgart, Germany
Zhiming Hu
University of Stuttgart, Stuttgart, Germany
Mihai Bâce
University of Stuttgart, Stuttgart, Germany
Andreas Bulling
University of Stuttgart, Stuttgart, Germany
Paper URL

https://doi.org/10.1145/3586183.3606780

Video
Joie: a Joy-based BCI
Abstract

The size and cost of electroencephalography (EEG) headsets have been decreasing at a steadfast pace. Cortical frontal activity is a promising input method that is also important for affect regulation. We created Joie, a joy-based EEG brain-computer interface (BCI) which uses prefrontal asymmetries associated with joyful thoughts as input to an endless runner video game. The more prefrontal asymmetries are activated, the more coins the character collects in response. In a lab study (20 participants, 15 training sessions per participant, up to two weeks of training), we found that our experiment group, instructed to imagine positive music, winning awards, and similar strategies, demonstrated significantly greater ability in activating asymmetries compared to our placebo and control groups. In our analysis, Joie demonstrates the ability for frontal asymmetries to be used as input to an affective BCI and builds upon prior work in this area. In the future, training these asymmetries can teach mental strategies that have applications in mental health.
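
A common way to quantify prefrontal asymmetry in EEG work is frontal alpha asymmetry, the difference in log alpha-band power between right and left frontal electrodes. The abstract does not state Joie's exact metric, so the sketch below is only an illustration; the channel pairing (F3/F4), the 8-13 Hz band, and the synthetic signals are assumptions.

```python
# Hypothetical sketch: frontal alpha asymmetry (log right-alpha power minus
# log left-alpha power), a common prefrontal-asymmetry metric. The abstract does
# not specify Joie's metric; the F3/F4 pairing and 8-13 Hz band are assumptions.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def alpha_asymmetry(left: np.ndarray, right: np.ndarray, fs: float = 256.0):
    """left/right: EEG samples from left/right frontal channels (e.g. F3/F4)."""
    def alpha_power(x, lo=8.0, hi=13.0):
        f, psd = welch(x, fs=fs, nperseg=int(2 * fs))
        band = (f >= lo) & (f <= hi)
        return trapezoid(psd[band], f[band])
    # Positive score: relatively more right-hemisphere alpha, conventionally read
    # as relatively greater left frontal activation (alpha is inverse to activation).
    return np.log(alpha_power(right)) - np.log(alpha_power(left))

# Example with synthetic signals in which the left channel carries stronger alpha.
fs = 256.0
t = np.arange(0, 10, 1 / fs)
left = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))
right = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))
print(alpha_asymmetry(left, right, fs))  # negative: stronger alpha on the left
```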

Authors
Angela Vujic
MIT, Cambridge, Massachusetts, United States
Shreyas Nisal
MIT, Cambridge, Massachusetts, United States
Pattie Maes
MIT Media Lab, Cambridge, Massachusetts, United States
Paper URL

https://doi.org/10.1145/3586183.3606761

Video
Pantœnna: Mouth Pose Estimation for AR/VR Headsets Using Low-Profile Antenna and Impedance Characteristic Sensing
Abstract

Methods for faithfully capturing a user's holistic pose have immediate uses in AR/VR, ranging from multimodal input to expressive avatars. Although body-tracking has received the most attention, the mouth is also of particular importance, given that it is the channel for both speech and facial expression. In this work, we describe a new RF-based approach for capturing mouth pose using an antenna integrated into the underside of a VR/AR headset. Our approach side-steps privacy issues inherent in camera-based methods, while simultaneously supporting silent facial expressions that audio-based methods cannot. Further, compared to bio-sensing methods such as EMG and EIT, our method requires no contact with the wearer's body and can be fully self-contained in the headset, offering a high degree of physical robustness and user practicality. We detail our implementation along with results from two user studies, which show a mean 3D error of 2.6 mm for 11 mouth keypoints across worn sessions without re-calibration.
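
At a high level, the sensing problem is a regression from antenna impedance measurements to 3D mouth keypoints. The sketch below shows that mapping with a generic regressor on placeholder data; the sweep length, model choice, and data are assumptions, not Pantœnna's implementation.

```python
# Hypothetical sketch: regressing 11 mouth keypoints (x, y, z) from a vector of
# antenna impedance measurements (e.g. magnitudes across a frequency sweep).
# Not Pantoenna's model; sweep length, regressor, and data are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

N_FREQS = 128        # assumed length of the impedance-sweep feature vector
N_KEYPOINTS = 11     # 11 mouth keypoints, each with (x, y, z)

# Placeholder training data; in practice these would come from the headset
# antenna paired with a ground-truth facial capture system.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, N_FREQS))
y_train = rng.normal(size=(1000, N_KEYPOINTS * 3))

model = MLPRegressor(hidden_layer_sizes=(128,), max_iter=200)
model.fit(X_train, y_train)

# At runtime, one impedance sweep maps to a full set of 3-D keypoints.
sweep = rng.normal(size=(1, N_FREQS))
keypoints = model.predict(sweep).reshape(N_KEYPOINTS, 3)
print(keypoints.shape)  # (11, 3)
```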

Authors
Daehwa Kim
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Chris Harrison
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

https://doi.org/10.1145/3586183.3606805

Video