Interaction Techniques / Sketch and Illustration / Privacy

[A] Paper Room 08, 2021-05-13 17:00:00~2021-05-13 19:00:00 / [B] Paper Room 08, 2021-05-14 01:00:00~2021-05-14 03:00:00 / [C] Paper Room 08, 2021-05-14 09:00:00~2021-05-14 11:00:00

Conference Name
CHI 2021
Feeling Colours: Investigating Crossmodal Correspondences Between 3D Shapes, Colours and Emotions
Abstract

With increasing interest in multisensory experiences in HCI, there is a need to consider the potential impact of crossmodal correspondences (CCs) between sensory modalities on perception and interpretation. We investigated CCs between active haptic experiences of tangible 3D objects, visual colour and emotion using the "Bouba/Kiki" paradigm. We asked 30 participants to assign colours and emotional categories to 3D-printed objects with varying degrees of angularity and complexity. We found tendencies to associate high degrees of complexity and angularity with red colours, low brightness and high arousal levels. Less complex round shapes were associated with blue colours, high brightness and positive valence levels. These findings contrast with previously reported crossmodal effects triggered by 2D shapes of similar angularity and complexity, suggesting that designers cannot simply extrapolate potential perceptual and interpretive experiences elicited by 2D shapes to seemingly similar 3D tangible objects. Instead, we propose a design space for creating tangible multisensory artefacts that can trigger specific emotional percepts and discuss implications for exploiting CCs in the design of interactive technology.

Authors
Anan Lin
University of Bristol, Bristol, United Kingdom
Meike Scheller
University of Aberdeen, Aberdeen, United Kingdom
Feng Feng
University of Bristol, Bristol, United Kingdom
Michael J. Proulx
University of Bath, Bath, United Kingdom
Oussama Metatla
University of Bristol, Bristol, United Kingdom
DOI

10.1145/3411764.3445373

Paper URL

https://doi.org/10.1145/3411764.3445373

Interaction Illustration Taxonomy: Classification of Styles and Techniques for Visually Representing Interaction Scenarios
Abstract

Static illustrations are a ubiquitous means of representing interaction scenarios. Across papers and reports, these visuals demonstrate people's use of devices, explain systems, or show design spaces. Creating such figures is challenging, and very little is known about the overarching strategies for visually representing interaction scenarios. To ease this task, we contribute a unified taxonomy of the design elements that compose such figures. In particular, we provide a detailed classification of Structural and Interaction strategies, such as composition, visual techniques, dynamics, representation of users, and many others -- all in the context of the type of scenario. This taxonomy can inform researchers' choices when creating new figures, by providing a concise synthesis of visual strategies and revealing approaches they were not aware of before. Furthermore, to support the community in creating further taxonomies, we also provide three open-source software tools that facilitate the coding process and visual exploration of the coding scheme.

Award
Honorable Mention
Authors
Axel Antoine
Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL, Lille, France
Sylvain Malacria
Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL, Lille, France
Nicolai Marquardt
University College London, London, United Kingdom
Géry Casiez
Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL, Lille, France
DOI

10.1145/3411764.3445586

Paper URL

https://doi.org/10.1145/3411764.3445586

Color by Numbers: Interactive Structuring and Vectorization of Sketch Imagery
Abstract

We present a novel, interactive interface for the integrated cleanup, neatening, structuring and vectorization of sketch imagery. Converting scanned raster drawings into vector illustrations is a well-researched set of problems. Our approach is based on a Delaunay subdivision of the raster drawing. We algorithmically generate a colored grouping of Delaunay regions that users interactively refine by dragging and dropping colors. Sketch strokes, defined as marking the boundaries of differently colored regions, are automatically neatened using Bezier curves and turned into closed regions suitable for fills, textures, layering and animation. We show that minimal user interaction with our technique enables better sketch vectorization than state-of-the-art automated approaches. A user study further shows our interface to be simple, fun and easy to use, yet effective at processing messy images with a mix of construction lines and noisy, incomplete curves sketched in arbitrary stroke styles.
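
To make the pipeline concrete, here is a minimal Python sketch of two of the stages the abstract describes: Delaunay subdivision of sampled stroke pixels, and least-squares Bezier neatening of a region boundary. It is an illustration under assumed data shapes, not the authors' implementation; numpy and scipy are assumed available.

    import numpy as np
    from scipy.spatial import Delaunay

    def delaunay_regions(stroke_points):
        """Subdivide the plane over sampled stroke pixels (N x 2 array)."""
        tri = Delaunay(stroke_points)
        return tri.simplices            # each row indexes one triangular region

    def fit_cubic_bezier(boundary):
        """Neaten an ordered boundary polyline with a least-squares cubic Bezier."""
        t = np.linspace(0.0, 1.0, len(boundary))
        # Bernstein basis for a cubic curve, evaluated at each sample
        B = np.stack([(1 - t)**3, 3*t*(1 - t)**2, 3*t**2*(1 - t), t**3], axis=1)
        ctrl, *_ = np.linalg.lstsq(B, np.asarray(boundary, float), rcond=None)
        return ctrl                     # four control points, shape (4, 2)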

Authors
Amal Dev Parakkat
TU Delft, Delft, Netherlands
Marie-Paule R. Cani
CNRS/Ecole Polytechnique, IP Paris, Paris, France
Karan Singh
University of Toronto, Toronto, Ontario, Canada
DOI

10.1145/3411764.3445215

Paper URL

https://doi.org/10.1145/3411764.3445215

CASSIE: Curve and Surface Sketching in Immersive Environments
Abstract

We present CASSIE, a conceptual modeling system in VR that leverages freehand mid-air sketching and a novel 3D optimization framework to create connected curve-network armatures, predictively surfaced using patches with C0 continuity. Our system strikes a judicious balance of interactivity and automation, providing a homogeneous 3D drawing interface for a mix of freehand curves, curve networks, and surface patches. Our system encourages and aids users in drawing consistent networks of curves, easing the transition from freehand ideation to concept modeling. A comprehensive user study with professional designers and amateurs (N=12), together with a diverse gallery of 3D models, shows that our armature and patch functionality offers a user experience and expressivity on par with freehand ideation, while producing sophisticated concept models for downstream applications.
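
As a point of reference for the surfacing step, the classic way to span a closed loop of four boundary curves with a C0 patch is the bilinear Coons construction sketched below. This is the textbook formula, not necessarily CASSIE's own patch scheme.

    import numpy as np

    def coons_patch(c0, c1, d0, d1, u, v):
        """Bilinear Coons point S(u, v) from four boundary curves.
        c0(u)/c1(u) are the bottom/top curves, d0(v)/d1(v) the left/right
        curves; they must agree at the shared corners (C0 continuity)."""
        ruled_u = (1 - v) * c0(u) + v * c1(u)      # loft bottom to top
        ruled_v = (1 - u) * d0(v) + u * d1(v)      # loft left to right
        corners = ((1 - u) * (1 - v) * c0(0) + u * (1 - v) * c0(1)
                   + (1 - u) * v * c1(0) + u * v * c1(1))
        return ruled_u + ruled_v - corners         # remove double-counted bilinear part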

Award
Honorable Mention
Authors
Emilie Yu
Inria, Sophia Antipolis, France
Rahul Arora
University of Toronto, Toronto, Ontario, Canada
Tibor Stanko
Inria, Sophia Antipolis, France
J. Andreas Bærentzen
Technical University of Denmark, Lyngby, Denmark
Karan Singh
University of Toronto, Toronto, Ontario, Canada
Adrien Bousseau
Inria, Sophia Antipolis, France
DOI

10.1145/3411764.3445158

Paper URL

https://doi.org/10.1145/3411764.3445158

KeyTch: Combining the Keyboard with a Touchscreen for Rapid Command Selection on Toolbars
Abstract

In this paper, we address the challenge of reducing mouse pointer transitions from the working object (e.g. a text document) to simple or multi-level toolbars on desktop computers. To this end, we introduce KeyTch (pronounced ‘Keetch’), a novel approach for command selection on toolbars based on the combined use of the keyboard and a touchscreen. The toolbar is displayed on the touchscreen, which is positioned below the keyboard. Users can select commands by performing gestures that combine a key press with the pinky finger and a screen touch with the thumb of the same hand. After analyzing the design properties of KeyTch, we report a preliminary experiment validating that users can perform such gestures and reach the entire touchscreen surface with the thumb. A first user study then shows that direct touch outperforms indirect pointing for reaching items on a simple toolbar displayed on the touchscreen. In a second study, we validate that KeyTch interaction techniques outperform the mouse for selecting items on a multi-level toolbar displayed on the touchscreen, allowing users to select up to 720 commands with an accuracy above 95%, or 480 commands with an accuracy above 97%. Finally, two follow-up studies validate the benefits of KeyTch when used in a more integrated context.
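
The pairing of a pinky key press with a thumb touch can be pictured as a simple chord detector, as in the sketch below; the 150 ms pairing window and the event API are our assumptions, not details from the paper.

    import time

    CHORD_WINDOW_S = 0.15                 # assumed key-to-touch pairing window

    class ChordDetector:
        def __init__(self, on_command):
            self.pending_key = None       # (key, timestamp) of the pinky press
            self.on_command = on_command

        def key_down(self, key):
            self.pending_key = (key, time.monotonic())

        def touch_down(self, toolbar_item):
            """Pair a touchscreen tap with a recent key press into one command."""
            if self.pending_key is not None:
                key, t0 = self.pending_key
                if time.monotonic() - t0 <= CHORD_WINDOW_S:
                    self.on_command(key, toolbar_item)   # e.g. key picks the toolbar level
                self.pending_key = None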

Authors
Elio Keddisseh
Université Paul Sabatier, Toulouse, France
Marcos Serrano
IRIT - Elipse, Toulouse, France
Emmanuel Dubois
IRIT - Elipse, Toulouse, France
DOI

10.1145/3411764.3445288

Paper URL

https://doi.org/10.1145/3411764.3445288

Distractor Effects on Crossing-Based Interaction
Abstract

Task-irrelevant distractors affect visuo-motor control for target acquisition, and studying such effects has already received much attention in human-computer interaction. However, there has been little research into distractor effects on crossing-based interaction. We thus conducted an empirical study on pen-based interfaces to investigate six crossing tasks with distractor interference in comparison to two tasks without it. The six distractor-related tasks differed in movement precision constraint (directional/amplitude), target size, target distance, distractor location and target-distractor spacing. We also developed and experimentally validated six quantitative models for the six tasks. Our results show that crossing targets with distractors took longer on average than, but was similarly accurate to, crossing without distractors. The effects of distractors varied depending on distractor location, target-distractor spacing and movement precision constraint. When spacing is smaller than 11.27 mm, crossing tasks with distractor interference can be regarded as pointing tasks or a combination of pointing and crossing tasks, which our proposed models fit better than Fitts' law does. Based on these results, we provide practical implications for crossing-based user interface design.
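
For readers less familiar with the baseline being compared against, a Fitts' law fit of the form MT = a + b * log2(D/W + 1) can be obtained by ordinary least squares, as in this sketch; the trial data here are made up for illustration.

    import numpy as np

    def fit_fitts(distance_mm, width_mm, movement_time_s):
        """Fit MT = a + b * log2(D/W + 1) by ordinary least squares."""
        ID = np.log2(np.asarray(distance_mm) / np.asarray(width_mm) + 1.0)
        A = np.stack([np.ones_like(ID), ID], axis=1)
        (a, b), *_ = np.linalg.lstsq(A, np.asarray(movement_time_s), rcond=None)
        return a, b

    # Hypothetical trials: distances 40/80 mm, widths 4/8 mm, observed times in s.
    a, b = fit_fitts([40, 40, 80, 80], [4, 8, 4, 8], [0.61, 0.48, 0.72, 0.59])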

Authors
Huawei Tu
La Trobe University, Melbourne, Australia
Jin Huang
Chinese Academy of Sciences, Beijing, China
Hai-Ning Liang
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Richard Skarbez
La Trobe University, Melbourne, VIC, Australia
Feng Tian
Institute of Software, Chinese Academy of Sciences, Beijing, China
Henry Been-Lirn Duh
La Trobe University, Melbourne, Victoria, Australia
DOI

10.1145/3411764.3445340

Paper URL

https://doi.org/10.1145/3411764.3445340

Impact of Task on Attentional Tunneling in Handheld Augmented Reality
Abstract

Attentional tunneling describes a phenomenon in Augmented Reality (AR) where users excessively focus on virtual content while neglecting their physical surroundings. This leads to the concern that users could neglect hazardous situations when using AR applications. However, studies have often confounded the role of the virtual content with the role of the associated task in inducing attentional tunneling. In this paper, we disentangle the impact of the associated task and of the virtual content on the attentional tunneling effect by measuring reaction times to events in two user studies. We found that presenting virtual content did not significantly increase user reaction times to events, but adding a task to the content did. This work contributes towards our understanding of the attentional tunneling effect on handheld AR devices, and highlights the need to consider both task and context when evaluating AR application usage.

Award
Best Paper
Authors
Brandon Victor Syiem
The University of Melbourne, Melbourne, Victoria, Australia
Ryan M. Kelly
University of Melbourne, Melbourne, VIC, Australia
Jorge Goncalves
The University of Melbourne, Melbourne, Australia
Eduardo Velloso
University of Melbourne, Melbourne, Victoria, Australia
Tilman Dingler
University of Melbourne, Melbourne, Victoria, Australia
DOI

10.1145/3411764.3445580

Paper URL

https://doi.org/10.1145/3411764.3445580

Preserving Agency During Electrical Muscle Stimulation Training Speeds up Reaction Time Directly After Removing EMS
Abstract

Force feedback devices, such as motor-based exoskeletons or wearables based on electrical muscle stimulation (EMS), have the unique potential to accelerate users’ own reaction time (RT). However, this speedup has only been explored while the device is attached to the user. In fact, very little is known regarding whether this faster reaction time still occurs after the user removes the device from their body. This is precisely what we investigated by means of a simple RT experiment, in which participants were asked to tap as soon as they saw an LED flash. Participants experienced three EMS conditions: (1) fast-EMS, in which the electrical impulses were synced with the LED; (2) agency-EMS, in which the electrical impulse was delivered 40 ms faster than the participant’s own RT, a timing that prior work has shown to preserve one’s sense of agency over the movement; and (3) late-EMS, in which the impulse was delivered after the participant’s own RT. Our results revealed that participants’ RT was significantly reduced, by approximately 8 ms (up to 20 ms), only after training with the agency-EMS condition. This finding suggests that preserving agency during EMS training is key to motor adaptation, i.e., it enables a faster motor response even after the user has removed the EMS device from their body.
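
The agency-EMS condition boils down to a timing rule: from stimulus onset, fire the muscle stimulator 40 ms before the participant's own expected reaction time. A sketch of that scheduling logic follows; trigger_ems is a hypothetical hardware callback, not an API from the paper.

    import threading

    def schedule_agency_ems(user_rt_ms, trigger_ems, lead_ms=40.0):
        """Call at stimulus (LED) onset: fire EMS lead_ms before the user's
        expected reaction time, preserving their sense of agency."""
        delay_s = max(user_rt_ms - lead_ms, 0.0) / 1000.0
        timer = threading.Timer(delay_s, trigger_ems)
        timer.start()
        return timer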

Authors
Shunichi Kasahara
Sony CSL, Tokyo, Japan
Kazuma Takada
Meiji University, Tokyo, Japan
Jun Nishida
University of Chicago, Chicago, Illinois, United States
Kazuhisa Shibata
RIKEN CBS, Wako, Saitama, Japan
Shinsuke Shimojo
California Institute of Technology, Pasadena, California, United States
Pedro Lopes
University of Chicago, Chicago, Illinois, United States
DOI

10.1145/3411764.3445147

Paper URL

https://doi.org/10.1145/3411764.3445147

Interaction Pace and User Preferences
Abstract

The overall pace of interaction combines the user's pace and the system's pace, and a pace mismatch could impair user preferences (e.g., animations or timeouts that are too fast or slow for the user). Motivated by studies of speech rate convergence, we conducted an experiment to examine whether user preferences for system pace are correlated with user pace. Subjects first completed a series of trials to determine their user pace. They then completed a series of hierarchical drag-and-drop trials in which folders automatically expanded when the cursor hovered for longer than a controlled timeout. Results showed that preferences for timeout values correlated with user pace -- slow-paced users preferred long timeouts, and fast-paced users preferred short timeouts. Results indicate potential benefits in moving away from fixed or customisable settings for system pace. Instead, systems could improve preferences by automatically adapting their pace to converge towards that of the user.
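
A system that converges toward the user's pace could be as simple as blending each observed dwell time into the current timeout, as in this sketch; the exponential-moving-average rule and all parameter values are our assumptions, not the paper's design.

    class AdaptiveTimeout:
        """Hover timeout that drifts toward the user's observed pace."""
        def __init__(self, initial_ms=500.0, alpha=0.2,
                     floor_ms=150.0, ceil_ms=1500.0):
            self.timeout_ms = initial_ms
            self.alpha = alpha                    # adaptation rate per observation
            self.floor_ms, self.ceil_ms = floor_ms, ceil_ms

        def observe_dwell(self, dwell_ms):
            """Blend one observed dwell time into the current timeout."""
            blended = (1 - self.alpha) * self.timeout_ms + self.alpha * dwell_ms
            self.timeout_ms = min(max(blended, self.floor_ms), self.ceil_ms)
            return self.timeout_ms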

Award
Honorable Mention
Authors
Alix Goguey
Université Grenoble Alpes, Grenoble, France
Carl Gutwin
University of Saskatchewan, Saskatoon, Saskatchewan, Canada
Zhe Chen
University of Canterbury, Christchurch, New Zealand
Pang Suwanaposee
University of Canterbury, Christchurch, New Zealand
Andy Cockburn
University of Canterbury, Christchurch, New Zealand
DOI

10.1145/3411764.3445772

Paper URL

https://doi.org/10.1145/3411764.3445772

BackSwipe: Back-of-device Word-Gesture Interaction on Smartphones
Abstract

Back-of-device interaction is a promising approach to interacting on smartphones. In this paper, we create a back-of-device command and text input technique called BackSwipe, which allows a user to hold a smartphone with one hand and use the index finger of the same hand to draw a word-gesture anywhere on the back of the smartphone to enter commands and text. To support BackSwipe, we propose a back-of-device word-gesture decoding algorithm which infers the keyboard location from back-of-device gestures and adjusts the keyboard size to suit the gesture scale; the inferred keyboard is then fed back into the system for decoding. Our user study shows that BackSwipe is a feasible and promising input method, especially for command input in the one-hand holding posture: users can enter commands at an average accuracy of 92% with a speed of 5.32 seconds/command. The text entry performance varies across users. The average speed is 9.58 WPM with some users at 18.83 WPM; the average word error rate is 11.04% with some users at 2.85%. Overall, BackSwipe complements existing smartphone interaction by leveraging the back of the device as a gestural input surface.
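
One way to picture a decoder that is indifferent to where, and how large, the imagined keyboard sits on the back of the device is to normalize both the gesture and each word's key-path template for location and scale before matching. The sketch below is in that spirit only; it is not the authors' decoder, and the word templates are assumed inputs.

    import numpy as np

    def normalize(path, n=32):
        """Resample a polyline to n points, then remove location and scale."""
        path = np.asarray(path, dtype=float)
        seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
        d = np.concatenate([[0.0], np.cumsum(seg)])
        t = np.linspace(0.0, d[-1], n)
        res = np.stack([np.interp(t, d, path[:, k]) for k in (0, 1)], axis=1)
        res -= res.mean(axis=0)                   # location-invariant
        scale = np.linalg.norm(res)
        return res / scale if scale else res      # scale-invariant

    def decode(gesture, word_templates):
        """Return the word whose normalized key-path best matches the gesture."""
        g = normalize(gesture)
        return min(word_templates, key=lambda w:
                   np.linalg.norm(g - normalize(word_templates[w])))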

Authors
Wenzhe Cui
Stony Brook University, Stony Brook, New York, United States
Suwen Zhu
Stony Brook University, Stony Brook, New York, United States
Zhi Li
Stony Brook University, Stony Brook, New York, United States
Zheer Xu
Dartmouth College, Hanover, New Hampshire, United States
Xing-Dong Yang
Dartmouth College, Hanover, New Hampshire, United States
IV Ramakrishnan
Stony Brook University, Stony Brook, New York, United States
Xiaojun Bi
Stony Brook University, Stony Brook, New York, United States
DOI

10.1145/3411764.3445081

Paper URL

https://doi.org/10.1145/3411764.3445081

PhraseFlow: Designs and Empirical Studies of Phrase-Level Input
Abstract

According to previous research, decoding at the phrase level may afford higher correction accuracy than decoding at the word level. However, how phrase-level input affects user typing behavior, and how to design the interaction to make it practical, remain underexplored. We present PhraseFlow, a phrase-level input keyboard that can correct previously entered text based on subsequent input. Computational studies show that phrase-level input reduces the error rate of autocorrection by over 16%. We found that phrase-level input introduced extra cognitive load that hindered user performance. Through an iterative design-implement-research process, we optimized the design of PhraseFlow to alleviate this cognitive load. An in-lab study shows that users could adopt PhraseFlow quickly, resulting in 19% fewer errors without losing speed. In real-life settings, we conducted a six-day deployment study with 42 participants, showing that 78.6% of the users would like to have the phrase-level input feature in future keyboards.
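
The core idea, re-decoding a window of recent words jointly so that later input can revise earlier text, can be sketched as a joint search over per-word candidates; lm_score and touch_score below are hypothetical placeholders for a language model and a touch-likelihood model, not PhraseFlow's actual decoder.

    from itertools import product

    def phrase_correct(candidates_per_word, lm_score, touch_score):
        """Pick the jointly best phrase over per-word candidate lists."""
        best, best_score = None, float("-inf")
        for phrase in product(*candidates_per_word):
            score = lm_score(phrase) + sum(touch_score(w) for w in phrase)
            if score > best_score:
                best, best_score = phrase, score
        return best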

Authors
Mingrui Ray Zhang
University of Washington, Seattle, Washington, United States
Shumin Zhai
Google, Mountain View, California, United States
DOI

10.1145/3411764.3445166

Paper URL

https://doi.org/10.1145/3411764.3445166

PrivacyMic: Utilizing Inaudible Frequencies for Privacy Preserving Daily Activity Recognition
Abstract

Sound presents an invaluable signal source that enables computing systems to perform daily activity recognition. However, microphones are optimized for human speech and hearing ranges, capturing private content such as speech while omitting useful inaudible information that can aid in acoustic recognition tasks. We simulated acoustic recognition tasks using sounds from 127 everyday household/workplace objects, finding that inaudible frequencies can act as a substitute for privacy-sensitive frequencies. To take advantage of these inaudible frequencies, we designed PrivacyMic, a Raspberry Pi-based device that captures inaudible acoustic frequencies, with settings that can remove speech or all audible frequencies entirely. We conducted a perception study in which participants "eavesdropped" on PrivacyMic’s filtered audio and found that none of them could transcribe speech. Finally, PrivacyMic’s real-world activity recognition performance is comparable to our simulated results, with over 95% classification accuracy across all environments, suggesting immediate viability for privacy-preserving daily activity recognition.
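
The privacy mechanism amounts to discarding the audible band at the sensor and keeping only ultrasound. A minimal sketch, assuming a 96 kHz capture rate and a 20 kHz cutoff (both our assumptions, not the paper's exact settings), could look like this:

    import numpy as np
    from scipy.signal import butter, sosfilt

    FS = 96_000        # assumed sampling rate, high enough to capture ultrasound
    CUTOFF = 20_000    # assumed cutoff: drop the humanly audible band

    sos = butter(8, CUTOFF, btype="highpass", fs=FS, output="sos")

    def privacy_filter(frame):
        """Return only the inaudible (>20 kHz) content of an audio frame."""
        return sosfilt(sos, frame)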

Award
Honorable Mention
Authors
Yasha Iravantchi
University of Michigan, Ann Arbor, Michigan, United States
Karan Ahuja
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Mayank Goel
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Chris Harrison
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Alanson Sample
The University of Michigan, Ann Arbor, Michigan, United States
DOI

10.1145/3411764.3445169

Paper URL

https://doi.org/10.1145/3411764.3445169

Think-Aloud Computing: Supporting Rich and Low-Effort Knowledge Capture
Abstract

When users complete tasks on the computer, the knowledge they leverage and their intent are often lost because they are tedious or challenging to capture. This makes it harder to understand why a colleague designed a component a certain way, or to remember the requirements for software written a year ago. We introduce think-aloud computing, a novel application of the think-aloud protocol in which computer users are encouraged to speak while working, capturing rich knowledge with relatively low effort. Through a formative study we find that people shared information about design intent, work processes, problems encountered, to-do items, and other useful information. We developed a prototype that supports think-aloud computing by prompting users to speak and contextualizing speech with labels and application context. Our evaluation shows that more subtle design decisions and process explanations were captured with think-aloud than with traditional documentation. Participants reported that think-aloud required similar effort to traditional documentation.

Authors
Rebecca Krosnick
University of Michigan, Ann Arbor, Michigan, United States
Fraser Anderson
Autodesk Research, Toronto, Ontario, Canada
Justin Matejka
Autodesk Research, Toronto, Ontario, Canada
Steve Oney
University of Michigan, Ann Arbor, Michigan, United States
Walter S. Lasecki
University of Michigan, Ann Arbor, Michigan, United States
Tovi Grossman
University of Toronto, Toronto, Ontario, Canada
George Fitzmaurice
Autodesk Research, Toronto, Ontario, Canada
DOI

10.1145/3411764.3445066

Paper URL

https://doi.org/10.1145/3411764.3445066
