Understanding & modeling users

Paper session

Conference Name
CHI 2020
Adults' and Children's Mental Models for Gestural Interactions with Interactive Spherical Displays
Abstract

Interactive spherical displays offer numerous opportunities for engagement and education in public settings. Prior work established that users' touch-gesture patterns on spherical displays differ from those on flatscreen tabletops, and speculated that these differences stem from dissimilarity in how users conceptualize interactions with these two form factors. We analyzed think-aloud data collected during a gesture elicitation study to understand adults' and children's (ages 7 to 11) conceptual models of interaction with spherical displays and compared them to conceptual models of interaction with tabletop displays from prior work. Our findings confirm that the form factor strongly influenced users' mental models of interaction with the sphere. For example, participants conceptualized that the spherical display would respond to gestures in a similar way as real-world spherical objects like physical globes. Our work contributes new understanding of how users draw upon the perceived affordances of the sphere as well as prior touchscreen experience during their interactions.

Keywords
Interactive spherical displays
Mental models
Touchscreen displays
Touchscreen gestures
Children
Adults
Authors
Nikita Soni
University of Florida, Gainesville, FL, USA
Schuyler Gleaves
University of Florida, Gainesville, FL, USA
Hannah Neff
University of Florida, Gainesville, FL, USA
Sarah Morrison-Smith
University of Florida, Gainesville, FL, USA
Shaghayegh Esmaeili
University of Florida, Gainesville, FL, USA
Ian Mayne
Elon University, Elon, NC, USA
Sayli Bapat
Maharashtra Institute of Technology, Pune, India
Carrie Schuman
University of Florida, Gainesville, FL, USA
Kathryn A. Stofer
University of Florida, Gainesville, FL, USA
Lisa Anthony
University of Florida, Gainesville, FL, USA
DOI

10.1145/3313831.3376468

Paper URL

https://doi.org/10.1145/3313831.3376468

Manipulation, Learning, and Recall with Tangible Pen-Like Input
Abstract

We examine two key human performance characteristics of a pen-like tangible input device that executes a different command depending on which corner, edge, or side contacts a surface. The manipulation time when transitioning between contacts is examined using physical mock-ups of three representative device sizes and a baseline pen mock-up. Results show that the largest device is fastest overall, with minimal differences from the pen for equivalent transitions. Using a hardware prototype able to sense all 26 different contacts, a second experiment evaluates learning and recall. Results show that almost all 26 contacts can be learned in a two-hour session, with an average of 94% recall after 24 hours. The results provide empirical evidence for the practicality, design, and utility of this type of tangible pen-like input.
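The count of 26 contacts is consistent with a rectangular-prism (cuboid) form factor, since a cuboid has 8 corners, 12 edges, and 6 sides (8 + 12 + 6 = 26). The Python sketch below simply enumerates these contact classes to make the arithmetic concrete; the labels and the cuboid assumption are illustrative only, not the authors' implementation or hardware.

```python
# Hedged sketch: enumerating the 26 distinct contacts of a cuboid-shaped
# pen-like device (8 corners + 12 edges + 6 faces = 26). Names are
# illustrative, not taken from the paper.
from itertools import product

AXES = ("x", "y", "z")

def cuboid_contacts():
    contacts = []
    # Corners: min/max on all three axes -> 2^3 = 8
    for signs in product(("-", "+"), repeat=3):
        contacts.append(("corner", "".join(s + a for s, a in zip(signs, AXES))))
    # Edges: min/max on two axes, free along the third -> 3 * 2^2 = 12
    for free_axis in AXES:
        fixed = [a for a in AXES if a != free_axis]
        for signs in product(("-", "+"), repeat=2):
            label = "".join(s + a for s, a in zip(signs, fixed))
            contacts.append(("edge", f"{label} along {free_axis}"))
    # Faces ("sides"): min/max on one axis -> 3 * 2 = 6
    for axis in AXES:
        for sign in ("-", "+"):
            contacts.append(("face", sign + axis))
    return contacts

print(len(cuboid_contacts()))  # 26
```

Mapping each enumerated contact to a distinct command is what the learning and recall experiment exercises.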

Keywords
Pen Input
Tangible Interfaces
Learning
Command Selection
Authors
Lisa A. Elkin
University of Washington & University of Waterloo, Seattle, WA, USA
Jean-Baptiste Beau
University of Waterloo, Waterloo, ON, Canada
Géry Casiez
Univ. Lille, UMR 9189 - CRIStAL & Inria & Institut Universitaire de France (IUF) & University of Waterloo, Villeneuve d'Ascq, France
Daniel Vogel
University of Waterloo, Waterloo, ON, Canada
DOI

10.1145/3313831.3376772

Paper URL

https://doi.org/10.1145/3313831.3376772

Exploring Auditory Information to Change Users' Perception of Time Passing as Shorter
Abstract

Although the processing speed of computers has been drastically increasing year by year, users still have to wait for computers to complete tasks or to respond. To cope with this, several studies have proposed presenting certain visual information to users to change their perception of time passing as shorter, e.g., progress bars with animated ribbing or faster/slower virtual clocks. As speech interfaces such as smart speakers are becoming popular, a novel method is required to make users perceive the passing of time as shorter by presenting auditory stimuli. We thus prepared 20 pieces of auditory information as experimental stimuli; that is, 11 auditory stimuli that have the same 10.1-second duration but different numbers of 0.1-second sine-wave sounds and 9 other auditory stimuli that have the same 10.1-second duration and numbers of sounds but different interval patterns between the sounds. We conducted three experiments to figure out which kinds of auditory stimuli can change users' perception of time passing as shorter. We found that a 10.1-second auditory stimulus that has 0.1-second sine-wave sounds appearing 11 times with intervals between the sounds that narrow rapidly in a linear fashion was perceived as shortest at about 9.3 seconds, which was 7.6% shorter than the actual duration of the stimulus. We also found that different interval patterns of sounds in auditory information significantly affected users' perception of time passing as shorter, while different numbers of sounds did not.
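To make the stimulus description concrete, the sketch below generates one 10.1-second clip containing eleven 0.1-second sine bursts whose ten silent intervals narrow linearly, in the spirit of the stimulus the authors found most effective. The tone frequency, sample rate, and exact narrowing ratio are not given in the abstract and are assumptions for illustration only.

```python
# Hedged sketch of a stimulus in the spirit of the abstract: eleven 0.1 s
# sine bursts within a 10.1 s clip, with the ten silent intervals between
# bursts shrinking linearly. Frequency, sample rate, and the narrowing
# ratio are assumed values, not the authors' parameters.
import numpy as np

SAMPLE_RATE = 44_100   # assumed sample rate (Hz)
TONE_FREQ = 440.0      # assumed sine frequency (Hz)
TONE_DUR = 0.1         # each burst lasts 0.1 s (from the abstract)
N_TONES = 11           # eleven bursts (from the abstract)
TOTAL_DUR = 10.1       # total clip length in seconds (from the abstract)

def linearly_narrowing_gaps(n_gaps: int, total_silence: float) -> np.ndarray:
    """Gap durations that decrease linearly and sum to total_silence."""
    weights = np.linspace(2.0, 0.2, n_gaps)  # assumed start/end ratio
    return total_silence * weights / weights.sum()

def build_stimulus() -> np.ndarray:
    tone_t = np.arange(int(TONE_DUR * SAMPLE_RATE)) / SAMPLE_RATE
    tone = np.sin(2 * np.pi * TONE_FREQ * tone_t)
    gaps = linearly_narrowing_gaps(N_TONES - 1, TOTAL_DUR - N_TONES * TONE_DUR)

    pieces = [tone]
    for gap in gaps:
        pieces.append(np.zeros(int(round(gap * SAMPLE_RATE))))
        pieces.append(tone)
    signal = np.concatenate(pieces)
    # Pad/trim to exactly 10.1 s to absorb rounding of the gap lengths.
    target_len = int(TOTAL_DUR * SAMPLE_RATE)
    return np.pad(signal, (0, max(0, target_len - len(signal))))[:target_len]

stimulus = build_stimulus()
print(len(stimulus) / SAMPLE_RATE)  # ~10.1 seconds
```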

Keywords
Auditory information
Eyes-free interaction
Filled-duration illusion
Users' perception of time passing
Waiting time
Authors
Takanori Komatsu
Meiji University, Tokyo, Japan
Seiji Yamada
National Institute of Informatics and SOKENDAI, Tokyo, Japan
DOI

10.1145/3313831.3376157

Paper URL

https://doi.org/10.1145/3313831.3376157

Awareness, Navigation, and Use of Feed Control Settings Online
Abstract

Control settings are abundant and have significant effects on user experiences. One example of an impactful but understudied area is feed settings. In this study, we investigated awareness, navigation, and use of feed settings. We began by creating a taxonomy of feed settings on social media and search sites. Via an online survey, we measured awareness of Facebook feed settings. An in-person interview study then investigated how people navigated to and chose to set feed settings on their own feeds. We discovered that many participants did not believe ad personalization feed settings existed. Furthermore, for many participants we found a misalignment between the expectation and the function of settings, especially ad personalization settings. Although all participants struggled to find at least one setting, participants overall wanted to use settings: 94% altered at least one setting they encountered. From these results, we discuss implications and suggest design guidelines for settings.

Keywords
control
settings
feeds
social media
Authors
Silas Hsu
University of Illinois at Urbana-Champaign, Urbana, IL, USA
Kristen Vaccaro
University of Illinois at Urbana-Champaign, Urbana, IL, USA
Yin Yue
University of Illinois at Urbana-Champaign, Champaign, IL, USA
Aimee Rickman
California State University, Fresno, Fresno, CA, USA
Karrie Karahalios
University of Illinois at Urbana-Champaign, Urbana, IL, USA
DOI

10.1145/3313831.3376583

Paper URL

https://doi.org/10.1145/3313831.3376583

Modeling Human Visual Search Performance on Realistic Webpages Using Analytical and Deep Learning Methods
Abstract

Modeling visual search not only offers an opportunity to predict the usability of an interface before actually testing it on real users but also advances scientific understanding about human behavior. In this work, we first conduct a set of analyses on a large-scale dataset of visual search tasks on realistic webpages. We then present a deep neural network that learns to predict the scannability of webpage content, i.e., how easy it is for a user to find a specific target. Our model leverages both heuristic-based features such as target size and unstructured features such as raw image pixels. This approach allows us to model complex interactions that might be involved in a realistic visual search task, which cannot be achieved by traditional analytical models. We analyze the model behavior to offer insights into how the salience map learned by the model aligns with human intuition.
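As a rough illustration of the kind of hybrid model the abstract describes, the sketch below combines a small convolutional branch over raw pixels with a dense branch over heuristic features (such as target size) and fuses them to predict a scannability score. The layer sizes, feature set, and output head are assumptions for illustration; they are not the authors' architecture.

```python
# Hedged sketch of a "pixels + heuristic features" regressor in the spirit
# of the abstract. All hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class ScannabilityModel(nn.Module):
    def __init__(self, n_heuristic_features: int = 4):
        super().__init__()
        # Convolutional branch over the raw webpage screenshot.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Dense branch over heuristic features such as target size/position.
        self.mlp = nn.Sequential(nn.Linear(n_heuristic_features, 32), nn.ReLU())
        # Fused head predicts a scannability score (e.g., a search-time proxy).
        self.head = nn.Sequential(nn.Linear(32 + 32, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, screenshot: torch.Tensor, heuristics: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.cnn(screenshot), self.mlp(heuristics)], dim=1)
        return self.head(fused)

# Example usage with a dummy batch of 2 screenshots and feature vectors.
model = ScannabilityModel()
score = model(torch.rand(2, 3, 128, 128), torch.rand(2, 4))
print(score.shape)  # torch.Size([2, 1])
```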

Keywords
Performance modeling
deep learning
scannability
convolutional neural network
webpage
visual attention
Authors
Arianna Yuan
Stanford University, Stanford, CA, USA
Yang Li
Google Research, Mountain View, CA, USA
DOI

10.1145/3313831.3376870

Paper URL

https://doi.org/10.1145/3313831.3376870