Wear is my input

Paper session

Conference Name
CHI 2020
Zippro: The Design and Implementation of An Interactive Zipper
Abstract

Zippers are common in a wide variety of objects that we use daily. This work investigates how we can take advantage of such common daily activities to support seamless interaction with technology. We look beyond simple zipper-sliding interactions explored previously to determine how to weave foreground and background interactions into a vocabulary of natural usage patterns. We begin by conducting two user studies to understand how people typically interact with zippers. The findings identify several opportunities for zipper input and sensing, which inform the design of Zippro, a self-contained prototype zipper slider, which we evaluate with a standard jacket zipper. We conclude by demonstrating several applications that make use of the identified foreground and background input methods.
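
The foreground/background distinction described above can be made concrete with a toy classifier. This sketch is not from the paper: the position encoding, thresholds, and gesture classes are invented for illustration, with a deliberate short slider wiggle treated as foreground input and a long continuous zip as background activity.

```python
# Hypothetical sketch: separate intentional "foreground" zipper gestures
# from everyday "background" zipping. All thresholds are invented.

def classify_slides(positions, wiggle_span=0.15, zip_span=0.6):
    """positions: slider positions in [0, 1] sampled over one gesture."""
    span = max(positions) - min(positions)
    # Count direction reversals (sign changes between successive deltas).
    reversals = sum(
        1 for a, b, c in zip(positions, positions[1:], positions[2:])
        if (b - a) * (c - b) < 0
    )
    if span <= wiggle_span and reversals >= 2:
        return "foreground"      # short back-and-forth wiggle: intentional
    if span >= zip_span and reversals == 0:
        return "background"      # one long monotonic slide: ordinary use
    return "unknown"

wiggle = classify_slides([0.50, 0.55, 0.50, 0.55, 0.50])   # "foreground"
full_zip = classify_slides([0.0, 0.2, 0.5, 0.8, 1.0])      # "background"
```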

Keywords
Zipper
Wearable
Smart Things
Authors
Pin-Sung Ku
Dartmouth College & National Taiwan University, Hanover, NH, USA
Jun Gong
Dartmouth College, Hanover, NH, USA
Te-Yen Wu
Dartmouth College, Hanover, NH, USA
Yixin Wei
Dartmouth College & Beijing University of Posts and Telecommunications, Hanover, NH, USA
Yiwen Tang
Dartmouth College & Carnegie Mellon University, Hanover, NH, USA
Barrett Ens
Monash University, Melbourne, VIC, Australia
Xing-Dong Yang
Dartmouth College, Hanover, NH, USA
DOI

10.1145/3313831.3376756

Paper URL

https://doi.org/10.1145/3313831.3376756

Video
Sensock: 3D Foot Reconstruction with Flexible Sensors
Abstract

Capturing 3D foot models is important for applications such as manufacturing customized shoes and creating clubfoot orthotics. In this paper, we propose a novel prototype, Sensock, to offer a fully wearable solution for the task of 3D foot reconstruction. The prototype consists of four soft stretchable sensors, made from silk fibroin yarn. We identify four characteristic foot girths based on existing knowledge of foot anatomy, and measure their lengths from the resistance values of the stretchable sensors. A learning-based model, trained offline, maps the foot girths to the corresponding 3D foot shapes. We compare our method with existing solutions that use red–green–blue (RGB) or RGB-depth (RGBD) cameras, and show the advantages of our method in terms of both efficiency and accuracy. In the user experiment, we find that the relative error of Sensock is lower than 0.55%. It performs consistently across different trials and is considered comfortable and suitable for long-term wearing.
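
The pipeline the abstract describes (resistance → girth → learned shape model) can be sketched in a few lines. Everything numeric below is a placeholder: the sensor calibration, the regression weights, and the two-parameter shape model are invented for illustration, not taken from the paper.

```python
# Hypothetical Sensock-style pipeline sketch: four stretch-sensor
# resistances are calibrated to girths, then a stand-in linear model maps
# the girths to low-dimensional foot-shape parameters. All constants are
# made up for the example.

def resistance_to_girth(r_ohms, gain, offset):
    """Linear calibration: stretching increases resistance roughly linearly."""
    return gain * r_ohms + offset

def girths_to_shape(girths, weights, bias):
    """Stand-in for the learned model: a linear map from the four
    characteristic girths to foot-shape parameters."""
    return [sum(w * g for w, g in zip(row, girths)) + b
            for row, b in zip(weights, bias)]

# Four raw sensor readings (ohms, invented).
readings = [1200.0, 1350.0, 1100.0, 1280.0]
girths = [resistance_to_girth(r, gain=0.02, offset=0.0) for r in readings]

# Toy 2-parameter shape model.
weights = [[0.1, 0.2, 0.1, 0.1], [0.05, 0.05, 0.2, 0.1]]
bias = [1.0, 0.5]
shape = girths_to_shape(girths, weights, bias)
```

In the real system the linear map would be replaced by the offline-trained model, and the shape parameters would drive a full 3D foot mesh.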

Keywords
Flexible sensors
3D reconstruction
Foot modeling
Authors
Hechuan Zhang
Xiamen University, Xiamen, China
Zhiyong Chen
Xiamen University, Xiamen, China
Shihui Guo
Xiamen University, Xiamen, China
Juncong Lin
Xiamen University, Xiamen, China
Yating Shi
Xiamen University, Xiamen, China
Xiangyang Liu
Xiamen University, Xiamen, China
Yong Ma
Jiangxi Normal University, Nanchang, China
DOI

10.1145/3313831.3376387

Paper URL

https://doi.org/10.1145/3313831.3376387

Fabriccio: Touchless Gestural Input on Interactive Fabrics
Abstract

We present Fabriccio, a touchless gesture sensing technique developed for interactive fabrics using Doppler motion sensing. Our prototype was developed using a pair of loop antennas (one for transmitting and the other for receiving), made of conductive thread that was sewn onto a fabric substrate. The antenna type, configuration, transmission lines, and operating frequency were carefully chosen to balance the complexity of the fabrication process and the sensitivity of our system for touchless hand gestures, performed at a 10 cm distance. Through a ten-participant study, we evaluated the performance of our proposed sensing technique across 11 touchless gestures as well as 1 touch gesture. The study result yielded a 92.8% cross-validation accuracy and 85.2% leave-one-session-out accuracy. We conclude by presenting several applications to demonstrate the unique interactions enabled by our technique on soft objects.
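
As a rough illustration of the Doppler principle the abstract relies on, the sketch below computes the frequency shift a moving hand imparts on a reflected carrier and recovers it from complex baseband samples with a naive DFT. The carrier frequency, gesture speed, and sample rate are assumed values for the example, not the paper's operating parameters.

```python
# Doppler motion-sensing sketch: a hand moving toward the receive antenna
# shifts the reflected carrier up in frequency. Numbers are illustrative.
import cmath
import math

def doppler_shift(carrier_hz, speed_m_s, c=3e8):
    """Two-way Doppler shift of a reflection from a target moving at
    speed_m_s (positive = toward the antenna)."""
    return 2 * speed_m_s * carrier_hz / c

def dominant_bin(samples, rate_hz):
    """Naive DFT peak finder over demodulated complex baseband samples."""
    n = len(samples)
    best_k, best_mag = 0, -1.0
    for k in range(n):
        s = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(samples))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    # Bins above n/2 correspond to negative frequencies (motion away).
    f = best_k if best_k <= n // 2 else best_k - n
    return f * rate_hz / n

# A 0.5 m/s gesture against an assumed 2.4 GHz carrier -> 8 Hz shift.
shift = doppler_shift(2.4e9, 0.5)
rate = n = 64
samples = [cmath.exp(2j * math.pi * shift * i / rate) for i in range(n)]
```

A real system would use an FFT over windowed I/Q samples and classify the resulting Doppler profiles rather than a single tone, but the frequency bookkeeping is the same.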

Keywords
Doppler Effect
Interactive Fabrics
Touchless Gesture
Authors
Te-Yen Wu
Dartmouth College, Hanover, NH, USA
Shutong Qi
Dartmouth College & Beihang University, Hanover, NH, USA
Junchi Chen
Dartmouth College & Shanghai Jiao Tong University, Hanover, NH, USA
MuJie Shang
Dartmouth College & Wuhan University, Hanover, NH, USA
Jun Gong
Dartmouth College, Hanover, NH, USA
Teddy Seyed
Microsoft Research, Redmond, WA, USA
Xing-Dong Yang
Dartmouth College, Hanover, NH, USA
DOI

10.1145/3313831.3376681

Paper URL

https://doi.org/10.1145/3313831.3376681

Video
Wearable Microphone Jamming
Abstract

We engineered a wearable microphone jammer that is capable of disabling microphones in its user's surroundings, including hidden microphones. Our device is based on a recent exploit that leverages the fact that, when exposed to ultrasonic noise, commodity microphones will leak the noise into the audible range.

Unfortunately, ultrasonic jammers are built from multiple transducers and therefore exhibit blind spots, i.e., locations in which transducers destructively interfere and where a microphone cannot be jammed. To solve this, our device exploits a synergy between ultrasonic jamming and the naturally occurring movements that users induce on their wearable devices (e.g., bracelets) as they gesture or walk. We demonstrate that these movements can blur jamming blind spots and increase jamming coverage. Moreover, current jammers are also directional, requiring users to point the jammer at a microphone; instead, our wearable bracelet is built in a ring layout that allows it to jam in multiple directions. This is beneficial in that it allows our jammer to protect against microphones hidden out of sight.

We evaluated our jammer in a series of experiments and found that: (1) it jams in all directions, e.g., our device jams over 87% of the words uttered around it in any direction, while existing devices jam only 30% when not pointed directly at the microphone; (2) it exhibits significantly fewer blind spots; and (3) it induced a feeling of privacy in participants of our user study. We believe our wearable provides stronger privacy in a world in which most devices are constantly eavesdropping on our conversations.
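
A back-of-the-envelope model shows why multi-transducer jammers exhibit blind spots: at some positions the transducers' emissions arrive out of phase and cancel. Everything numeric below (25 kHz ultrasound, a two-emitter geometry) is assumed for illustration and is not the paper's design.

```python
# Interference sketch: coherent emitters cancel at positions where their
# path lengths differ by half a wavelength. Geometry/frequency are assumed.
import math

SPEED_OF_SOUND = 343.0          # m/s
FREQ = 25_000.0                 # Hz, a typical ultrasonic jamming band
WAVELENGTH = SPEED_OF_SOUND / FREQ

def field_amplitude(mic_xy, transducers_xy):
    """Coherent sum of unit-amplitude emitters at the microphone position."""
    re = im = 0.0
    for tx, ty in transducers_xy:
        d = math.hypot(mic_xy[0] - tx, mic_xy[1] - ty)
        phase = 2 * math.pi * d / WAVELENGTH
        re += math.cos(phase)
        im += math.sin(phase)
    return math.hypot(re, im)

# Two emitters half a wavelength apart: a mic equidistant from both hears
# them in phase, while a mic along the pair's axis can sit in a null
# (a blind spot). Moving the bracelet shifts the geometry and sweeps the
# nulls around, which is the synergy the paper exploits.
pair = [(0.0, 0.0), (WAVELENGTH / 2, 0.0)]
on_axis = field_amplitude((WAVELENGTH / 4, 1.0), pair)    # constructive
null = field_amplitude((WAVELENGTH * 10.5, 0.0), pair)    # destructive
```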

Award
Honorable Mention
Keywords
Wearable
microphone
jamming
privacy
ultrasound
Authors
Yuxin Chen
University of Chicago, Chicago, IL, USA
Huiying Li
University of Chicago, Chicago, IL, USA
Shan-Yuan Teng
University of Chicago, Chicago, IL, USA
Steven Nagels
University of Chicago, Chicago, IL, USA
Zhijing Li
University of Chicago, Chicago, IL, USA
Pedro Lopes
University of Chicago, Chicago, IL, USA
Ben Y. Zhao
University of Chicago, Chicago, IL, USA
Haitao Zheng
University of Chicago, Chicago, IL, USA
DOI

10.1145/3313831.3376304

Paper URL

https://doi.org/10.1145/3313831.3376304

Video
Evaluation of Machine Learning Techniques for Hand Pose Estimation on Handheld Device with Proximity Sensor
Abstract

Tracking finger movement for natural hand-based interaction is widely studied. In vision-based finger-tracking implementations for virtual reality (VR) applications, the fingers are occluded by the handheld device that is needed for auxiliary input, so tracking finger movement with cameras remains challenging. Finger-tracking controllers that use capacitive proximity sensors on the device surface are starting to appear. However, research on estimating articulated hand pose from curved capacitance-sensing electrodes is still immature. Therefore, we built a prototype with 62 electrodes and recorded training datasets using an optical tracking system. We introduce a 2.5D representation that allows convolutional neural network methods to be applied to a capacitive image of the curved surface, and we evaluated two network architectures, based on recent achievements in the computer vision field, on our dataset. We also implemented real-time interactive applications using the prototype and demonstrated the possibility of intuitive finger-based interaction in VR applications.
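
The "2.5D representation" can be pictured as unrolling the curved electrode layout into a regular grid that a CNN can consume, pairing each capacitance reading with the electrode's height off a reference plane. The abstract does not describe the 62-electrode layout, so the tiny grid below is a made-up example of the general idea.

```python
# Sketch (assumed, not the paper's actual encoding): build a rows x cols
# x 2 tensor where channel 0 is the capacitance reading and channel 1 is
# the electrode's height above a reference plane (the extra "0.5 D").

def to_2_5d(electrodes, rows, cols):
    """electrodes: iterable of (row, col, reading, height) tuples."""
    image = [[[0.0, 0.0] for _ in range(cols)] for _ in range(rows)]
    for r, c, reading, height in electrodes:
        image[r][c][0] = reading
        image[r][c][1] = height
    return image

# Hypothetical 4-electrode patch of a curved controller surface.
patch = [
    (0, 0, 0.12, 0.000),
    (0, 1, 0.40, 0.004),
    (1, 0, 0.05, 0.009),
    (1, 1, 0.33, 0.013),
]
image = to_2_5d(patch, rows=2, cols=2)
```

Encoding the surface curvature as a per-cell channel lets a standard 2D convolution see geometry that a flat capacitive image would lose.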

Keywords
Hand pose estimation
finger tracking controller
capacitive image
human-computer interaction
virtual reality
Authors
Kazuyuki Arimatsu
Sony Interactive Entertainment Inc., Tokyo, Japan
Hideki Mori
Sony Interactive Entertainment Inc., Minato-ku, Japan
DOI

10.1145/3313831.3376712

Paper URL

https://doi.org/10.1145/3313831.3376712