57. Interaction Techniques / Sketch and Illustration / Privacy

Feeling Colours: Investigating Crossmodal Correspondences Between 3D Shapes, Colours and Emotions
Description

With increasing interest in multisensory experiences in HCI, there is a need to consider the potential impact of crossmodal correspondences (CCs) between sensory modalities on perception and interpretation. We investigated CCs between active haptic experiences of tangible 3D objects, visual colour and emotion using the "Bouba/Kiki" paradigm. We asked 30 participants to assign colours and emotional categories to 3D-printed objects with varying degrees of angularity and complexity. We found tendencies to associate high degrees of complexity and angularity with red colours, low brightness and high arousal levels. Less complex, round shapes were associated with blue colours, high brightness and positive valence levels. These findings contrast with previously reported crossmodal effects triggered by 2D shapes of similar angularity and complexity, suggesting that designers cannot simply extrapolate potential perceptual and interpretive experiences elicited by 2D shapes to seemingly similar 3D tangible objects. Instead, we propose a design space for creating tangible multisensory artefacts that can trigger specific emotional percepts and discuss implications for exploiting CCs in the design of interactive technology.

Interaction Illustration Taxonomy: Classification of Styles and Techniques for Visually Representing Interaction Scenarios
Description

Static illustrations are a ubiquitous means of representing interaction scenarios. Across papers and reports, these visuals demonstrate people's use of devices, explain systems, or show design spaces. Creating such figures is challenging, and very little is known about the overarching strategies for visually representing interaction scenarios. To ease this task, we contribute a unified taxonomy of design elements that compose such figures. In particular, we provide a detailed classification of Structural and Interaction strategies, such as composition, visual techniques, dynamics, representation of users, and many others -- all in the context of the type of scenario. This taxonomy can inform researchers' choices when creating new figures by providing a concise synthesis of visual strategies and revealing approaches they were not aware of before. Furthermore, to support the community in creating further taxonomies, we also provide three open-source software tools that facilitate the coding process and visual exploration of the coding scheme.

Color by Numbers: Interactive Structuring and Vectorization of Sketch Imagery
Description

We present a novel, interactive interface for the integrated cleanup, neatening, structuring and vectorization of sketch imagery. Converting scanned raster drawings into vector illustrations is a well-researched set of problems. Our approach is based on a Delaunay subdivision of the raster drawing. We algorithmically generate a colored grouping of Delaunay regions that users interactively refine by dragging and dropping colors. Sketch strokes that mark boundaries between differently colored regions are automatically neatened using Bézier curves and turned into closed regions suitable for fills, textures, layering and animation. We show that minimal user interaction using our technique enables better sketch vectorization than state-of-the-art automated approaches. A user study further shows our interface to be simple, fun and easy to use, yet effective at processing messy images with a mix of construction lines and noisy, incomplete curves sketched with arbitrary stroke styles.
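
A rough, hypothetical sketch of the region-grouping idea only (not the authors' implementation): triangulate stroke pixels with a Delaunay subdivision, then flood-fill adjacent triangles into candidate regions, treating short shared edges as stroke boundaries. The gap threshold and the grouping heuristic are illustrative assumptions.

    import numpy as np
    from scipy.spatial import Delaunay

    def group_delaunay_regions(stroke_points, gap_threshold=3.0):
        """Illustrative grouping of Delaunay triangles over sketch stroke pixels.

        stroke_points: (N, 2) array of stroke pixel coordinates.
        Returns the triangulation and one region label per triangle; in the
        interface described above, users would refine such a grouping by
        dragging and dropping colours.
        """
        tri = Delaunay(stroke_points)
        labels = np.full(len(tri.simplices), -1)
        region = 0
        for seed in range(len(tri.simplices)):
            if labels[seed] != -1:
                continue
            labels[seed] = region
            stack = [seed]
            while stack:
                t = stack.pop()
                for nb in tri.neighbors[t]:
                    if nb == -1 or labels[nb] != -1:
                        continue
                    # Treat short shared edges as lying along (or bridging small
                    # gaps in) sketch strokes, i.e. as region boundaries; merge
                    # only across edges long enough to span open space.
                    shared = np.intersect1d(tri.simplices[t], tri.simplices[nb])
                    edge = stroke_points[shared[0]] - stroke_points[shared[1]]
                    if np.linalg.norm(edge) > gap_threshold:
                        labels[nb] = region
                        stack.append(nb)
            region += 1
        return tri, labels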

CASSIE: Curve and Surface Sketching in Immersive Environments
Description

We present CASSIE, a conceptual modeling system in VR that leverages freehand mid-air sketching and a novel 3D optimization framework to create connected curve network armatures, predictively surfaced using patches with C0 continuity. Our system provides a judicious balance of interactivity and automation, offering a homogeneous 3D drawing interface for a mix of freehand curves, curve networks, and surface patches. It encourages and aids users in drawing consistent networks of curves, easing the transition from freehand ideation to concept modeling. A comprehensive user study with professional designers as well as amateurs (N=12), together with a diverse gallery of 3D models, shows our armature and patch functionality to offer a user experience and expressivity on par with freehand ideation while creating sophisticated concept models for downstream applications.

KeyTch: Combining the Keyboard with a Touchscreen for Rapid Command Selection on Toolbars
Description

In this paper, we address the challenge of reducing mouse pointer transitions from the working object (e.g. a text document) to simple or multi-level toolbars on desktop computers. To this end, we introduce KeyTch (pronounced ‘Keetch’), a novel approach for command selection on toolbars based on the combined use of the keyboard with a touchscreen. The toolbar is displayed on the touchscreen, which is positioned below the keyboard. Users select commands by performing gestures that combine a key press with the pinky finger and a screen touch with the thumb of the same hand. After analyzing the design properties of KeyTch, a preliminary experiment validates that users can perform such gestures and reach the entire touchscreen surface with the thumb. A first user study then reveals that direct touch outperforms indirect pointing for reaching items on a simple toolbar displayed on the touchscreen. In a second study, we validate that KeyTch interaction techniques outperform the mouse for selecting items on a multi-level toolbar displayed on the touchscreen, allowing users to select up to 720 commands with an accuracy above 95%, or 480 commands with an accuracy above 97%. Finally, two follow-up studies validate the benefits of KeyTch when used in a more integrated context.

Distractor Effects on Crossing-Based Interaction
Description

Task-irrelevant distractors affect visuo-motor control for target acquisition, and studying such effects has received much attention in human-computer interaction. However, there has been little research into distractor effects on crossing-based interaction. We thus conducted an empirical study on pen-based interfaces to investigate six crossing tasks with distractor interference, in comparison to two tasks without it. The six distractor-related tasks differed in movement precision constraint (directional/amplitude), target size, target distance, distractor location and target-distractor spacing. We also developed and experimentally validated six quantitative models for the six tasks. Our results show that crossing targets with distractors took longer on average than, and had similar accuracy to, crossing targets without distractors. The effects of distractors varied with distractor location, target-distractor spacing and movement precision constraint. When spacing is smaller than 11.27 mm, crossing tasks with distractor interference can be regarded as pointing tasks or a combination of pointing and crossing tasks, which are better fitted by our proposed models than by Fitts' law. Based on these results, we provide practical implications for crossing-based user interface design.
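
For context, the pointing-model baseline mentioned above is Fitts' law, MT = a + b * log2(D/W + 1) in its Shannon formulation; a minimal least-squares fit of that baseline might look like the sketch below (illustrative code, not the study's analysis scripts).

    import numpy as np

    def fit_fitts_law(distances, widths, movement_times):
        """Fit MT = a + b * log2(D / W + 1) by ordinary least squares.

        distances, widths: target distance D and width W per trial;
        movement_times: measured MT per trial. Returns (a, b).
        """
        ids = np.log2(np.asarray(distances) / np.asarray(widths) + 1.0)
        design = np.column_stack([np.ones_like(ids), ids])
        (a, b), *_ = np.linalg.lstsq(design, np.asarray(movement_times), rcond=None)
        return a, b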

Impact of Task on Attentional Tunneling in Handheld Augmented Reality
Description

Attentional tunneling describes a phenomenon in Augmented Reality (AR) where users excessively focus on virtual content while neglecting their physical surroundings. This leads to the concern that users could neglect hazardous situations when using AR applications. However, studies have often confounded the role of the virtual content with the role of the associated task in inducing attentional tunneling. In this paper, we disentangle the impact of the associated task and of the virtual content on the attentional tunneling effect by measuring reaction times to events in two user studies. We found that presenting virtual content did not significantly increase user reaction times to events, but adding a task to the content did. This work contributes towards our understanding of the attentional tunneling effect on handheld AR devices, and highlights the need to consider both task and context when evaluating AR application usage.

Preserving Agency During Electrical Muscle Stimulation Training Speeds up Reaction Time Directly After Removing EMS
Description

Force feedback devices, such as motor-based exoskeletons or wearables based on electrical muscle stimulation (EMS), have the unique potential to accelerate users' own reaction time (RT). However, this speedup has only been explored while the device is attached to the user. In fact, very little is known about whether this faster reaction time still occurs after the user removes the device from their body. This is precisely what we investigated by means of a simple RT experiment in which participants were asked to tap as soon as they saw an LED flash. Participants experienced this in three EMS conditions: (1) fast-EMS, in which the electrical impulses were synced with the LED; (2) agency-EMS, in which the electrical impulse was delivered 40 ms faster than the participant's own RT, which prior work has shown preserves one's sense of agency over the movement; and (3) late-EMS, in which the impulse was delivered after the participant's own RT. Our results revealed that participants' RT was significantly reduced, by approximately 8 ms (up to 20 ms), only after training with the agency-EMS condition. This finding suggests that prioritizing agency during EMS training is key to motor adaptation, i.e., it enables a faster motor response even after the user has removed the EMS device from their body.

Interaction Pace and User Preferences
Description

The overall pace of interaction combines the user's pace and the system's pace, and a pace mismatch (e.g., animations or timeouts that are too fast or slow for the user) can lower users' preference for the system. Motivated by studies of speech rate convergence, we conducted an experiment to examine whether user preferences for system pace are correlated with user pace. Subjects first completed a series of trials to determine their user pace. They then completed a series of hierarchical drag-and-drop trials in which folders automatically expanded when the cursor hovered for longer than a controlled timeout. Results showed that preferences for timeout values correlated with user pace -- slow-paced users preferred long timeouts, and fast-paced users preferred short timeouts. The results indicate potential benefits in moving away from fixed or customisable settings for system pace. Instead, systems could improve preferences by automatically adapting their pace to converge towards that of the user.
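
One way such convergence could be implemented, sketched below as an assumption rather than the study's mechanism, is to nudge the hover timeout toward each user's observed pace with an exponential moving average; the smoothing factor and bounds are illustrative values.

    class AdaptiveHoverTimeout:
        """Illustrative sketch: adapt a folder-expansion timeout toward the
        user's pace instead of using a fixed or manually customised value."""

        def __init__(self, initial_ms=500.0, alpha=0.2, min_ms=150.0, max_ms=1500.0):
            self.timeout_ms = initial_ms
            self.alpha = alpha    # smoothing factor (assumed value)
            self.min_ms = min_ms  # lower bound on the timeout
            self.max_ms = max_ms  # upper bound on the timeout

        def observe_pace(self, dwell_ms):
            """Blend the dwell time the user actually needed into the timeout."""
            blended = (1 - self.alpha) * self.timeout_ms + self.alpha * dwell_ms
            self.timeout_ms = min(max(blended, self.min_ms), self.max_ms)
            return self.timeout_ms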

BackSwipe: Back-of-device Word-Gesture Interaction on Smartphones
Description

Back-of-device interaction is a promising approach to interacting with smartphones. In this paper, we create a back-of-device command and text input technique called BackSwipe, which allows a user to hold a smartphone with one hand and use the index finger of the same hand to draw a word-gesture anywhere on the back of the smartphone to enter commands and text. To support BackSwipe, we propose a back-of-device word-gesture decoding algorithm that infers the keyboard location from back-of-device gestures and adjusts the keyboard size to suit the gesture scale; the inferred keyboard is then fed back into the system for decoding. Our user study shows that BackSwipe is a feasible and promising input method, especially for command input in the one-hand holding posture: users can enter commands at an average accuracy of 92% with a speed of 5.32 seconds per command. Text entry performance varies across users: the average speed is 9.58 WPM, with some users reaching 18.83 WPM; the average word error rate is 11.04%, with some users as low as 2.85%. Overall, BackSwipe complements existing smartphone interaction by leveraging the back of the device as a gestural input surface.
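
Word-gesture decoding without a known keyboard position can be approximated by comparing gestures and word templates after removing translation and scale; the sketch below illustrates that general idea only and is not the BackSwipe decoder.

    import numpy as np

    def resample(points, n=32):
        """Resample a gesture path to n points spaced evenly along its length."""
        pts = np.asarray(points, dtype=float)
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        dist = np.concatenate([[0.0], np.cumsum(seg)])
        t = np.linspace(0.0, dist[-1], n)
        return np.column_stack([np.interp(t, dist, pts[:, 0]),
                                np.interp(t, dist, pts[:, 1])])

    def location_free_distance(gesture, template, n=32):
        """Mean point-wise distance after normalising out translation and scale,
        so the absolute keyboard location on the back of the device need not
        be known in advance."""
        def normalise(path):
            p = resample(path, n)
            p = p - p.mean(axis=0)
            scale = np.max(np.linalg.norm(p, axis=1))
            return p / scale if scale > 0 else p
        diffs = normalise(gesture) - normalise(template)
        return float(np.mean(np.linalg.norm(diffs, axis=1)))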

PhraseFlow: Designs and Empirical Studies of Phrase-Level Input
Description

According to previous research, decoding at the phrase level may afford higher correction accuracy than decoding at the word level. However, how phrase-level input affects user typing behavior, and how to design the interaction to make it practical, remain underexplored.

We present PhraseFlow, a phrase-level input keyboard that can correct previously entered text based on subsequent input. Computational studies show that phrase-level input reduces the error rate of autocorrection by over 16%. We found that phrase-level input introduced extra cognitive load that hindered users' performance. Through an iterative design-implement-research process, we optimized the design of PhraseFlow to alleviate this cognitive load. An in-lab study shows that users could adopt PhraseFlow quickly, resulting in 19% fewer errors without losing speed. In a real-life setting, we conducted a six-day deployment study with 42 participants, showing that 78.6% of the users would like to have the phrase-level input feature in future keyboards.
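
The core mechanism, correcting an earlier word in light of later input, can be pictured as joint rescoring over candidate words; the sketch below is a hypothetical illustration (the candidate lists, score names and bigram model are assumptions, not PhraseFlow's decoder).

    def phrase_level_correct(prev_candidates, curr_candidates, bigram_logprob):
        """Jointly pick the previous and current word so that later input can
        revise earlier text.

        prev_candidates, curr_candidates: lists of (word, touch_logprob) pairs
        from the keyboard's spatial model; bigram_logprob(w1, w2): language-model
        score for the word pair. Returns the best (previous_word, current_word).
        """
        best_pair, best_score = None, float("-inf")
        for prev_word, prev_lp in prev_candidates:
            for curr_word, curr_lp in curr_candidates:
                score = prev_lp + curr_lp + bigram_logprob(prev_word, curr_word)
                if score > best_score:
                    best_pair, best_score = (prev_word, curr_word), score
        return best_pair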

PrivacyMic: Utilizing Inaudible Frequencies for Privacy Preserving Daily Activity Recognition
Description

Sound is an invaluable signal source that enables computing systems to perform daily activity recognition. However, microphones are optimized for human speech and hearing ranges, capturing private content such as speech while omitting useful inaudible information that can aid acoustic recognition tasks. We simulated acoustic recognition tasks using sounds from 127 everyday household/workplace objects, finding that inaudible frequencies can act as a substitute for privacy-sensitive frequencies. To take advantage of these inaudible frequencies, we designed PrivacyMic, a Raspberry Pi-based device that captures inaudible acoustic frequencies, with settings that can remove speech or all audible frequencies entirely. We conducted a perception study in which participants "eavesdropped" on PrivacyMic's filtered audio and found that none of them could transcribe speech. Finally, PrivacyMic's real-world activity recognition performance is comparable to our simulated results, with over 95% classification accuracy across all environments, suggesting immediate viability for privacy-preserving daily activity recognition.
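
The filtering step, stripping audible content while keeping inaudible frequencies, could in principle be as simple as a high-pass filter above the limit of human hearing; the sketch below assumes a sufficiently high sampling rate (e.g. 96 kHz) and an arbitrary filter order, and is not the device's actual signal chain.

    import numpy as np
    from scipy.signal import butter, sosfilt

    def keep_inaudible(audio, sample_rate, cutoff_hz=20000.0):
        """Remove audible content (including speech) and keep only frequencies
        above the nominal limit of human hearing.

        audio: 1-D float array; sample_rate must exceed twice the cutoff
        (e.g. 96 kHz) for any ultrasonic energy to survive.
        """
        sos = butter(8, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
        return sosfilt(sos, np.asarray(audio, dtype=float))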

Think-Aloud Computing: Supporting Rich and Low-Effort Knowledge Capture
Description

When users complete tasks on the computer, the knowledge they leverage and their intent are often lost because they are tedious or challenging to capture. This makes it harder to understand why a colleague designed a component a certain way, or to remember the requirements for software you wrote a year ago. We introduce think-aloud computing, a novel application of the think-aloud protocol in which computer users are encouraged to speak while working, capturing rich knowledge with relatively low effort. Through a formative study, we found that people shared information about design intent, work processes, problems encountered, to-do items, and other useful information. We developed a prototype that supports think-aloud computing by prompting users to speak and contextualizing their speech with labels and application context. Our evaluation shows that more subtle design decisions and process explanations were captured with think-aloud than with traditional documentation. Participants reported that think-aloud required effort similar to traditional documentation.
