This study session has ended. Thank you for participating.
Zippers are common in a wide variety of objects that we use daily. This work investigates how we can take advantage of such common daily activities to support seamless interaction with technology. We look beyond the simple zipper-sliding interactions explored previously to determine how to weave foreground and background interactions into a vocabulary of natural usage patterns. We begin by conducting two user studies to understand how people typically interact with zippers. The findings identify several opportunities for zipper input and sensing, which inform the design of Zippro, a self-contained prototype zipper slider that we evaluate on a standard jacket zipper. We conclude by demonstrating several applications that make use of the identified foreground and background input methods.
Capturing 3D foot models is important for applications such as manufacturing customized shoes and creating clubfoot orthotics. In this paper, we propose a novel prototype, Sensock, to offer a fully wearable solution for 3D foot reconstruction. The prototype consists of four soft stretchable sensors made from silk fibroin yarn. We identify four characteristic foot girths based on existing knowledge of foot anatomy and measure their lengths from the resistance values of the stretchable sensors. A learning-based model, trained offline, maps the foot girths to the corresponding 3D foot shapes. We compare our method with existing solutions using red-green-blue (RGB) or RGB-depth (RGB-D) cameras and show the advantages of our method in terms of both efficiency and accuracy. In the user experiment, we find that the relative error of Sensock is lower than 0.55%. It performs consistently across different trials and is considered comfortable and suitable for long-term wearing.
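As a rough illustration of the pipeline described above, the sketch below maps four sensor readings to a 3D foot shape. It assumes a per-sensor linear calibration from resistance to girth and uses a ridge regressor over PCA shape coefficients as a stand-in for the paper's learning-based model; the function names, parameters, and mesh representation are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions): linear resistance-to-girth calibration and a
# ridge regressor over PCA coefficients of a foot mesh; the paper's actual
# model and shape representation may differ.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.decomposition import PCA

def resistance_to_girth(r, slope, offset):
    # Per-sensor linear calibration from resistance (ohms) to girth (mm).
    return slope * r + offset

def train(girths, meshes, n_components=10):
    # Offline training: girths (N x 4) -> flattened foot meshes (N x 3V),
    # compressed to a low-dimensional shape space by PCA.
    pca = PCA(n_components=n_components).fit(meshes)
    reg = Ridge(alpha=1.0).fit(girths, pca.transform(meshes))
    return pca, reg

def reconstruct(pca, reg, girths_1x4):
    # Online reconstruction from one reading of the four sensors.
    coeffs = reg.predict(girths_1x4.reshape(1, -1))
    return pca.inverse_transform(coeffs).reshape(-1, 3)  # V x 3 vertices
```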
We present Fabriccio, a touchless gesture sensing technique developed for interactive fabrics using Doppler motion sensing. Our prototype was developed using a pair of loop antennas (one for transmitting and the other for receiving) made of conductive thread sewn onto a fabric substrate. The antenna type, configuration, transmission lines, and operating frequency were carefully chosen to balance the complexity of the fabrication process against the sensitivity of our system to touchless hand gestures performed at a 10 cm distance. Through a ten-participant study, we evaluated the performance of our proposed sensing technique across 11 touchless gestures and 1 touch gesture. The study yielded 92.8% cross-validation accuracy and 85.2% leave-one-session-out accuracy. We conclude by presenting several applications to demonstrate the unique interactions enabled by our technique on soft objects.
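The sketch below shows one plausible recognition pipeline for Doppler-based gesture sensing of the kind described above: a log-magnitude spectrogram of the demodulated receive signal is reduced to a fixed-length feature vector and classified with an SVM. The sampling rate, feature choice, and classifier are assumptions made for illustration, not the paper's implementation.

```python
# Minimal sketch (assumptions): log-magnitude Doppler spectrogram features and
# an SVM classifier; Fabriccio's actual signal chain may differ.
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC

def doppler_features(baseband, fs=2000, nperseg=256):
    # Magnitude spectrogram of the demodulated (baseband) receive signal.
    f, t, Sxx = spectrogram(baseband, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    Sxx = 10 * np.log10(Sxx + 1e-12)   # log power
    return Sxx.mean(axis=1)            # average over time -> fixed-length vector

def train_classifier(signals, labels):
    X = np.stack([doppler_features(s) for s in signals])
    return SVC(kernel="rbf").fit(X, labels)

def predict(clf, signal):
    return clf.predict(doppler_features(signal).reshape(1, -1))[0]
```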
We engineered a wearable microphone jammer that is capable of disabling microphones in its user's surroundings, including hidden microphones. Our device is based on a recent exploit that leverages the fact that, when exposed to ultrasonic noise, commodity microphones will leak the noise into the audible range.

Unfortunately, ultrasonic jammers are built from multiple transducers and therefore exhibit blind spots, i.e., locations in which transducers destructively interfere and where a microphone cannot be jammed. To solve this, our device exploits a synergy between ultrasonic jamming and the naturally occurring movements that users induce on their wearable devices (e.g., bracelets) as they gesture or walk. We demonstrate that these movements can blur jamming blind spots and increase jamming coverage. Moreover, current jammers are also directional, requiring users to point the jammer at a microphone; instead, our wearable bracelet is built in a ring layout that allows it to jam in multiple directions. This is beneficial in that it allows our jammer to protect against microphones hidden out of sight.

We evaluated our jammer in a series of experiments and found that: (1) it jams in all directions, e.g., our device jams over 87% of the words uttered around it in any direction, while existing devices jam only 30% when not pointed directly at the microphone; (2) it exhibits significantly fewer blind spots; and (3) it induced a feeling of privacy in the participants of our user study. We believe our wearable provides stronger privacy in a world in which most devices are constantly eavesdropping on our conversations.
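To make the blind-spot argument concrete, here is an idealized 2-D simulation (not the authors' evaluation code) of a ring of 25 kHz point sources: averaging the interference pattern over a modest range of bracelet rotations, as natural wrist motion would produce, fills in the deep nulls that a static array leaves. All geometry and frequency values are illustrative assumptions.

```python
# Idealized 2-D sketch: a static ring of ultrasonic point sources has narrow
# interference nulls (blind spots); averaging over bracelet rotation raises the
# worst-case jamming level. Not the authors' evaluation code.
import numpy as np

C, F = 343.0, 25000.0                 # speed of sound (m/s), ultrasound frequency (Hz)
K = 2 * np.pi * F / C                 # wavenumber
N, R_RING, R_MIC = 12, 0.035, 0.5     # transducer count, bracelet radius, mic distance (m)

def level(mic_angle, ring_rotation=0.0):
    # Jamming level at a microphone for one orientation of the bracelet:
    # coherent sum of spherical waves from each transducer.
    phis = np.linspace(0, 2 * np.pi, N, endpoint=False) + ring_rotation
    tx = R_RING * np.c_[np.cos(phis), np.sin(phis)]
    mic = R_MIC * np.array([np.cos(mic_angle), np.sin(mic_angle)])
    d = np.linalg.norm(tx - mic, axis=1)
    return np.abs(np.sum(np.exp(1j * K * d) / d))

angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
static = np.array([level(a) for a in angles])
# Natural wrist motion: average the pattern over +/-20 degrees of rotation.
rots = np.deg2rad(np.linspace(-20, 20, 41))
moving = np.array([np.mean([level(a, r) for r in rots]) for a in angles])

print("worst-case level  static: %.3f   moving: %.3f" % (static.min(), moving.min()))
```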
Tracking finger movement for natural hand-based interaction is a commonly studied problem. In vision-based finger tracking for virtual reality (VR) applications, the fingers are occluded by the handheld device needed for auxiliary input, so tracking them with cameras remains challenging. Finger-tracking controllers that use capacitive proximity sensors on their surface are starting to appear; however, research on estimating articulated hand pose from curved capacitive sensing electrodes is still immature. We therefore built a prototype with 62 electrodes and recorded training datasets using an optical tracking system. We introduce a 2.5D representation that allows convolutional neural network methods to be applied to a capacitive image of the curved surface, and we evaluate two network architectures, based on recent achievements in computer vision, on our dataset. We also implemented real-time interactive applications using the prototype and demonstrated the possibility of intuitive finger-based interaction in VR applications.
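As a sketch of the learning setup described above, the following assumes the 62 electrode readings have been resampled onto a small 2-D grid (a stand-in for the "2.5D" capacitive image) and regresses finger joint angles with a compact convolutional network. The grid size, output dimensionality, and architecture are illustrative assumptions rather than the networks evaluated in the paper.

```python
# Minimal sketch (assumptions): electrode readings resampled to an 8x8 grid and
# a small CNN regressing 15 joint angles; the paper's projection and networks differ.
import torch
import torch.nn as nn

class CapacitiveHandNet(nn.Module):
    def __init__(self, n_joint_angles=15):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, n_joint_angles)

    def forward(self, cap_image):          # cap_image: (batch, 1, 8, 8)
        x = self.features(cap_image)
        return self.head(x.flatten(1))     # predicted joint angles

# Usage: train on (capacitive image, joint angles from optical tracking) pairs.
model = CapacitiveHandNet()
pred = model(torch.randn(1, 1, 8, 8))
```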