Smart Home, Bot, Robot, & Drone / Input & Measurement

[A] Paper Room 14, 2021-05-12 17:00:00~2021-05-12 19:00:00 / [B] Paper Room 14, 2021-05-13 01:00:00~2021-05-13 03:00:00 / [C] Paper Room 14, 2021-05-13 09:00:00~2021-05-13 11:00:00

Conference Name
CHI 2021
A Meta-Analysis of Human Personality and Robot Acceptance in Human-Robot Interaction
Abstract

Human personality has been identified as a predictor of robot acceptance in the human–robot interaction (HRI) literature. Despite this, the HRI literature has provided mixed support for this assertion. To better understand the relationship between human personality and robot acceptance, this paper conducts a meta-analysis of 26 studies. Results found a positive relationship between human personality and robot acceptance. However, this relationship varied greatly by the specific personality trait along with the study sample's age, gender diversity, task, and global region. This meta-analysis also identified gaps in the literature. Namely, additional studies are needed that investigate both the Big Five personality traits and other personality traits, examine a more diverse age range, and utilize samples from previously unexamined regions of the globe.

Authors
Connor Esterwood
University of Michigan, Ann Arbor, Michigan, United States
Kyle Essenmacher
University of Michigan, Ann Arbor, Michigan, United States
Han Yang
University of Michigan, Ann Arbor, Michigan, United States
Fanpan Zeng
University of Michigan, Ann Arbor, Michigan, United States
Lionel Peter Robert
University of Michigan, Ann Arbor, Michigan, United States
DOI

10.1145/3411764.3445542

Paper URL

https://doi.org/10.1145/3411764.3445542

Video
IdeaBot: Investigating Social Facilitation in Human-Machine Team Creativity
Abstract

The present study investigates how human subjects collaborate with a computer-mediated chatbot in creative idea generation tasks. In three text-based between-group studies, we tested whether the perceived identity (i.e., whether the bot is perceived as a machine or as a human) or the conversational style of a teammate would moderate the outcomes of participants' creative production. In Study 1, participants worked with either a chatbot or a human confederate. In Study 2, all participants worked with a human teammate but were informed that their partner was either a human or a chatbot. Conversely, all participants worked with a chatbot in Study 3, but were told the identity of their partner was either a chatbot or a human. We investigated differences in idea generation outcomes and found that participants consistently contributed more ideas, and ideas of higher quality, when they perceived their teamwork partner as a bot. Furthermore, when the conversational style of the partner was robotic, participants with high anxiety in group communication reported greater creative self-efficacy in task performance. Finally, whether the perceived dominance of a partner and the pressure to come up with ideas during the task mediated positive outcomes of idea generation also depended on whether the conversational style of the bot partner was robot- or human-like. Based on our findings, we discuss implications for the future design of artificial agents as active team players in collaboration tasks.

Award
Honorable Mention
Authors
Angel Hsing-Chi Hwang
Cornell University, Ithaca, New York, United States
Andrea Stevenson Won
Cornell University, Ithaca, New York, United States
DOI

10.1145/3411764.3445270

Paper URL

https://doi.org/10.1145/3411764.3445270

Video
The Effects of System Interpretation Errors on Learning New Input Mechanisms
Abstract

Input mechanisms can produce noisy signals that computers must interpret, and this interpretation can misconstrue the user’s intention. Researchers have studied how interpretation errors can affect users’ task performance, but little is known about how these errors affect learning, and whether they help or hinder the transition to expertise. Previous findings suggest that increasing the user’s attention can facilitate learning, so frequent interpretation errors may increase attention and learning; alternatively, however, interpretation errors may negatively interfere with skill development. To explore these potentially important effects, we conducted studies where participants learned commands with various rates of artificially injected interpretation errors. Our results showed that higher rates of interpretation error led to worse memory retention, higher completion times, higher occurrences of user error (beyond those injected by the system), and greater perceived effort. These findings indicate that when input mechanisms must interpret the user's input, interpretation errors cause problems for user learning.
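The study's methodology of artificially injecting interpretation errors at controlled rates can be sketched as a toy simulation (illustrative only, not the authors' code; all names are hypothetical):

```python
import random

def interpret(command, vocabulary, error_rate, rng):
    """Return the intended command, or a different one from the
    vocabulary with probability `error_rate` (an artificially
    injected interpretation error)."""
    if rng.random() < error_rate:
        return rng.choice([c for c in vocabulary if c != command])
    return command

# Simulate 1000 invocations of "copy" with a 20% injected error rate.
rng = random.Random(42)
vocab = ["copy", "paste", "undo", "save"]
trials = [interpret("copy", vocab, 0.2, rng) for _ in range(1000)]
error_fraction = sum(t != "copy" for t in trials) / len(trials)
```

The observed error fraction converges on the configured rate, letting an experimenter compare learning outcomes across conditions that differ only in `error_rate`.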

Award
Honorable Mention
Authors
Kevin C. Lam
University of Saskatchewan, Saskatoon, Saskatchewan, Canada
Carl Gutwin
University of Saskatchewan, Saskatoon, Saskatchewan, Canada
Madison Klarkowski
University of Saskatchewan, Saskatoon, Saskatchewan, Canada
Andy Cockburn
University of Canterbury, Christchurch, New Zealand
DOI

10.1145/3411764.3445366

Paper URL

https://doi.org/10.1145/3411764.3445366

Video
Typing Efficiency and Suggestion Accuracy Influence the Benefits and Adoption of Word Suggestions
Abstract

Suggesting words to complete a given sequence of characters is a common feature of typing interfaces. Yet, previous studies have not found a clear benefit, some even finding it detrimental. We report on the first study to control for two important factors, word suggestion accuracy and typing efficiency. Our accuracy factor is enabled by a new methodology that builds on standard metrics of word suggestions. Typing efficiency is based on device type. Results show word suggestions are used less often in a desktop condition, with little difference between tablet and phone conditions. Very accurate suggestions do not improve entry speed on desktop, but do on tablet and phone. Based on our findings, we discuss implications for the design of automation features in typing systems.
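The abstract mentions building on standard metrics of word-suggestion accuracy. One widely used such metric in text-entry research is keystroke savings: the fraction of a word's characters the user need not type if they accept the first correct suggestion. A minimal sketch (the helper names and toy suggester are assumptions, not the paper's implementation):

```python
def keystroke_savings(word, suggest):
    """Fraction of characters saved if the user accepts the first
    correct suggestion while typing `word` prefix by prefix."""
    for typed in range(len(word) + 1):
        if word in suggest(word[:typed]):
            return (len(word) - typed) / len(word)
    return 0.0  # the word was never suggested

def suggest(prefix):
    """Toy suggester: top-2 dictionary words matching the prefix."""
    candidates = ["the", "there", "hello", "help"]
    return [w for w in candidates if w.startswith(prefix)][:2]

savings = keystroke_savings("hello", suggest)  # "hello" appears after 1 key
```

Averaging this quantity over a test corpus yields a single accuracy score, which is the kind of standard metric the study's accuracy factor builds on.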

Award
Honorable Mention
Authors
Quentin Roy
University of Waterloo, Waterloo, Ontario, Canada
Sébastien Berlioux
University of Waterloo, Waterloo, Ontario, Canada
Géry Casiez
Université de Lille, Lille, France
Daniel Vogel
University of Waterloo, Waterloo, Ontario, Canada
DOI

10.1145/3411764.3445725

Paper URL

https://doi.org/10.1145/3411764.3445725

Video
People May Punish, Not Blame, Robots
Abstract

As robots may take a greater part in our moral decision-making processes, whether people hold them accountable for moral harm becomes critical to explore. Blame and punishment signify moral accountability, often involving emotions. We quantitatively looked into people's willingness to blame or punish an emotional vs. non-emotional robot that admits to its wrongdoing. Studies 1 and 2 (online video interaction) showed that people may punish a robot more because of its perceived lack of emotional capacity than because of its perceived agency. Study 3 (in the lab) demonstrated that people were neither willing to blame nor punish the robot. Punishing non-emotional robots seems more likely than blaming them, yet punishment towards robots is more likely to arise online than offline. We reflect on whether and why victimized humans (and those who care for them) may seek retributive justice against robot scapegoats when there are no humans to hold accountable for moral harm.

Authors
Minha Lee
Eindhoven University of Technology, Eindhoven, Netherlands
Peter Ruijten
Eindhoven University of Technology, Eindhoven, Netherlands
Lily Frank
Technical University of Eindhoven, Eindhoven, Netherlands
Yvonne de Kort
Technical University of Eindhoven, Eindhoven, Netherlands
Wijnand IJsselsteijn
Technical University of Eindhoven, Eindhoven, Netherlands
DOI

10.1145/3411764.3445284

Paper URL

https://doi.org/10.1145/3411764.3445284

Video
Drone in Love: Emotional Perception of Facial Expressions on Flying Robots
Abstract

Drones are rapidly populating human spaces, yet little is known about how these flying robots are perceived and understood by humans. Recent works suggested that their acceptance is predicated upon their sociability. This paper explores the use of facial expressions to represent emotions on social drones. We leveraged design practices from ground robotics and created a set of rendered robotic faces that convey basic emotions. We evaluated individuals' response to these emotional facial expressions on drones in two empirical studies (N = 98, N = 98). Our results demonstrate that individuals accurately recognize five drone emotional expressions, as well as make sense of intensities within emotion categories. We describe how participants were emotionally affected by the drone, showed empathy towards it, and created narratives to interpret its emotions. As a consequence, we formulate design recommendations for social drones and discuss methodological insights on the use of static versus dynamic stimuli in affective robotics studies.

Authors
Viviane Herdel
Ben Gurion University of the Negev, Be’er Sheva, Israel
Anastasia Kuzminykh
University of Toronto, Toronto, Ontario, Canada
Andrea Hildebrandt
Carl von Ossietzky University Oldenburg, Oldenburg, Germany
Jessica R. Cauchard
Ben Gurion University of the Negev, Be'er Sheva, Israel
DOI

10.1145/3411764.3445495

Paper URL

https://doi.org/10.1145/3411764.3445495

Video
Should Robots Blush?
Abstract

Social interaction is the most complex challenge in daily life. Inevitably, social robots will encounter interactions that are outside their competence. This raises a basic design question: how can robots fail gracefully in social interaction? The characteristic human response to social failure is embarrassment. Usefully, embarrassment signals both recognition of a problem and typically enlists sympathy and assistance to resolve it. This could enhance robot acceptability and provides an opportunity for interactive learning. Using a speculative design approach we explore how, when and why robots might communicate embarrassment. A series of specially developed cultural probes, scenario development and low-fidelity prototyping exercises suggest that: embarrassment is relevant for managing a diverse range of social scenarios, impacts on both humanoid and non-humanoid robot design, and highlights the critical importance of understanding interactional context. We conclude that embarrassment is fundamental to competent social functioning and provides a potentially fertile area for interaction design.

Authors
Soomi Park
Queen Mary University of London, London, United Kingdom
Patrick G. T. Healey
Queen Mary University of London, London, United Kingdom
Antonios Kaniadakis
Brunel University London, London, United Kingdom
DOI

10.1145/3411764.3445561

Paper URL

https://doi.org/10.1145/3411764.3445561

Video
Programmable Smart Home Toolkits Should Better Address Households' Social Needs
Abstract

End-user-programmable smart-home toolkits have engendered excitement in recent years. However, modern homes already cater quite well to users' needs, and genuinely new needs for smart-home automation seldom arise. Acknowledging this challenging starting point, we conducted a six-week in-the-wild study of smart-home toolkits with four carefully recruited technology-savvy families. Interleaved with free toolkit use in the home were several creativity workshops to facilitate ideation and programming. We evaluated use experiences at the end of the six weeks. Even with extensive facilitation, families faced difficulties in identifying needs for smart-home automation, except for social needs that emerged in all the families. We present analysis of those needs and discuss how end-user-programmable toolkits could better engage with both those household members who design new automated functions and those who merely "use" them.

Authors
Antti Salovaara
Aalto University, Espoo, Finland
Andrea Bellucci
Universidad Carlos III de Madrid, Leganés, Madrid, Spain
Andrea Vianello
Aalto University, Espoo, Finland
Giulio Jacucci
University of Helsinki, Helsinki, Finland
DOI

10.1145/3411764.3445770

Paper URL

https://doi.org/10.1145/3411764.3445770

Video
Creepy Technology: What Is It and How Do You Measure It?
Abstract

Interactive technologies are getting closer to our bodies and permeate the infrastructure of our homes. While such technologies offer many benefits, they can also cause an initial feeling of unease in users. It is important for Human-Computer Interaction to manage first impressions and avoid designing technologies that appear creepy. To that end, we developed the Perceived Creepiness of Technology Scale (PCTS), which measures how creepy a technology appears to a user in an initial encounter with a new artefact. The scale was developed based on past work on creepiness and a set of ten focus groups conducted with users from diverse backgrounds. We followed a structured process of analytically developing and validating the scale. The PCTS is designed to enable designers and researchers to quickly compare interactive technologies and ensure that they do not design technologies that produce initial feelings of creepiness in users.

Authors
Paweł W. Woźniak
Utrecht University, Utrecht, Netherlands
Jakob Karolus
LMU Munich, Munich, Germany
Florian Lang
LMU Munich, Munich, Germany
Caroline Eckerth
LMU Munich, Munich, Germany
Johannes Schöning
University of Bremen, Bremen, Germany
Yvonne Rogers
UCL Interaction Centre, London, United Kingdom
Jasmin Niess
University of Bremen, Bremen, Germany
DOI

10.1145/3411764.3445299

Paper URL

https://doi.org/10.1145/3411764.3445299

Video
Touchscreen Typing As Optimal Supervisory Control
Abstract

Traditionally, touchscreen typing has been studied in terms of motor performance. However, recent research has exposed a decisive role of visual attention being shared between the keyboard and the text area. Strategies for this are known to adapt to the task, design, and user. In this paper, we propose a unifying account of touchscreen typing, regarding it as optimal supervisory control. Under this theory, rules for controlling visuo-motor resources are learned via exploration in pursuit of maximal typing performance. The paper outlines the control problem and explains how visual and motor limitations affect it. We then present a model, implemented via reinforcement learning, that simulates co-ordination of eye and finger movements. Comparison with human data affirms that the model creates realistic finger- and eye-movement patterns and shows human-like adaptation. We demonstrate the model's utility for interface development in evaluating touchscreen keyboard designs.
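The core idea, rules for allocating visuo-motor resources learned via exploration in pursuit of maximal typing performance, can be illustrated with a deliberately minimal reinforcement-learning toy (the paper's actual model simulates eye and finger movements; the action names and reward values here are invented for illustration):

```python
import random

# Toy sketch: an agent learns where to allocate gaze while typing.
# "keyboard" = watch the fingers; "text" = proofread the output.
# Hypothetical expected typing reward per glance target:
reward = {"keyboard": 0.6, "text": 0.9}

rng = random.Random(0)
q = {a: 0.0 for a in reward}       # value estimate per gaze target
alpha, epsilon = 0.1, 0.2          # learning rate, exploration rate

for _ in range(2000):
    # Epsilon-greedy exploration over gaze targets.
    if rng.random() < epsilon:
        a = rng.choice(list(reward))
    else:
        a = max(q, key=q.get)
    r = reward[a] + rng.gauss(0, 0.1)  # noisy performance feedback
    q[a] += alpha * (r - q[a])         # incremental value update

best = max(q, key=q.get)               # learned gaze policy
```

Through exploration alone, the agent converges on the gaze strategy with the higher expected typing reward, a one-state caricature of the adaptive visuo-motor control the model captures.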

Authors
Jussi P. P. Jokinen
Aalto University, Helsinki, Finland
Aditya Acharya
Aalto University, Espoo, Finland
Mohammad Uzair
Aalto University, Espoo, Finland
Xinhui Jiang
Kochi University of Technology, Kami, Kochi, Japan
Antti Oulasvirta
Aalto University, Helsinki, Finland
DOI

10.1145/3411764.3445483

Paper URL

https://doi.org/10.1145/3411764.3445483

Video
Rethinking Eye-blink: Assessing Task Difficulty through Physiological Representation of Spontaneous Blinking
Abstract

Continuous assessment of task difficulty and mental workload is essential in improving the usability and accessibility of interactive systems. Eye tracking data has often been investigated to achieve this ability, with reports on the limited role of standard blink metrics. Here, we propose a new approach to the analysis of eye-blink responses for automated estimation of task difficulty. The core module is a time-frequency representation of eye-blink, which aims to capture the richness of information reflected on blinking. In our first study, we show that this method significantly improves the sensitivity to task difficulty. We then demonstrate how to form a framework where the represented patterns are analyzed with multi-dimensional Long Short-Term Memory recurrent neural networks for their non-linear mapping onto difficulty-related parameters. This framework outperformed other methods that used hand-engineered features. This approach works with any built-in camera, without requiring specialized devices. We conclude by discussing how Rethinking Eye-blink can benefit real-world applications.
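A time-frequency representation of a blink signal, the kind of spectrogram-like feature the abstract describes feeding into recurrent networks, can be sketched with a sliding-window FFT (illustrative only; the frame rate, window sizes, and synthetic blink signal are assumptions, not the paper's pipeline):

```python
import numpy as np

fs = 30                                  # hypothetical camera frame rate (Hz)
t = np.arange(0, 60, 1 / fs)             # 60 s of video frames
# Synthetic binary blink signal: brief pulses at roughly 0.4 Hz.
blink = (np.sin(2 * np.pi * 0.4 * t) > 0.95).astype(float)

win, hop = 128, 32                       # window and hop size (frames)
frames = [blink[i:i + win] for i in range(0, len(blink) - win + 1, hop)]
# Magnitude spectrum of each Hann-windowed segment:
spectrogram = np.array(
    [np.abs(np.fft.rfft(f * np.hanning(win))) for f in frames]
)
# Rows index time windows; columns index frequency bins up to fs/2.
```

Each row of `spectrogram` summarizes blinking dynamics within one time window, so the sequence of rows is a natural input for a recurrent model such as an LSTM.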

Authors
Youngjun Cho
University College London, London, United Kingdom
DOI

10.1145/3411764.3445577

Paper URL

https://doi.org/10.1145/3411764.3445577

Video