41. Smart Home, Bot, Robot, & Drone / Input & Measurement

A Meta-Analysis of Human Personality and Robot Acceptance in Human-Robot Interaction
Description

Human personality has been identified as a predictor of robot acceptance in the human–robot interaction (HRI) literature. Despite this, the HRI literature has provided mixed support for this assertion. To better understand the relationship between human personality and robot acceptance, this paper conducts a meta-analysis of 26 studies. Results found a positive relationship between human personality and robot acceptance. However, this relationship varied greatly by the specific personality trait along with the study sample’s age, gender diversity, task, and global region. This meta-analysis also identified gaps in the literature. Namely, additional studies are needed that investigate both the Big Five personality traits and other personality traits, examine a more diverse age range, and utilize samples from previously unexamined regions of the globe.
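
As a rough illustration of the kind of pooled estimate a meta-analysis of this sort produces (the paper's own procedure and data are not reproduced here), the following Python sketch computes a DerSimonian-Laird random-effects estimate over hypothetical per-study correlations between a personality trait and robot acceptance; the study values and sample sizes are invented.

```python
# Illustrative sketch only: a standard DerSimonian-Laird random-effects
# meta-analysis over hypothetical per-study correlations between a
# personality trait and robot acceptance. Values are invented; this is
# not the paper's data or its exact procedure.
import numpy as np

# Hypothetical (correlation r, sample size n) pairs for k studies.
studies = [(0.21, 120), (0.35, 64), (0.10, 210), (0.28, 88)]

r = np.array([s[0] for s in studies])
n = np.array([s[1] for s in studies])

# Fisher's z transform stabilises the variance of correlations.
z = np.arctanh(r)
v = 1.0 / (n - 3)          # within-study variance of z
w = 1.0 / v                # fixed-effect weights

# Heterogeneity (Q) and between-study variance (tau^2).
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
df = len(studies) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)

# Random-effects weights and pooled estimate, back-transformed to r.
w_re = 1.0 / (v + tau2)
z_pooled = np.sum(w_re * z) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled r = {np.tanh(z_pooled):.3f}, "
      f"95% CI = [{np.tanh(z_pooled - 1.96*se):.3f}, "
      f"{np.tanh(z_pooled + 1.96*se):.3f}], tau^2 = {tau2:.3f}")
```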

IdeaBot: Investigating Social Facilitation in Human-Machine Team Creativity
Description

The present study investigates how human subjects collaborate with a computer-mediated chatbot in creative idea generation tasks. In three text-based between-group studies, we tested whether the perceived identity (i.e., whether the bot is perceived as a machine or as a human) or the conversational style of a teammate would moderate the outcomes of participants’ creative production. In Study 1, participants worked with either a chatbot or a human confederate. In Study 2, all participants worked with a human teammate but were informed that their partner was either a human or a chatbot. Conversely, all participants worked with a chatbot in Study 3, but were told the identity of their partner was either a chatbot or a human. We investigated differences in idea generation outcomes and found that participants consistently contributed more ideas, and ideas of higher quality, when they perceived their teammate as a bot. Furthermore, when the conversational style of the partner was robotic, participants with high anxiety in group communication reported greater creative self-efficacy in task performance. Finally, whether the perceived dominance of a partner and the pressure to come up with ideas during the task mediated positive outcomes of idea generation also depended on whether the conversational style of the bot partner was robot-like or human-like. Based on our findings, we discuss implications for the future design of artificial agents as active team players in collaboration tasks.

The Effects of System Interpretation Errors on Learning New Input Mechanisms
Description

Input mechanisms can produce noisy signals that computers must interpret, and this interpretation can misconstrue the user’s intention. Researchers have studied how interpretation errors can affect users’ task performance, but little is known about how these errors affect learning, and whether they help or hinder the transition to expertise. Previous findings suggest that increasing the user’s attention can facilitate learning, so frequent interpretation errors may increase attention and learning; alternatively, however, interpretation errors may negatively interfere with skill development. To explore these potentially important effects, we conducted studies where participants learned commands with various rates of artificially injected interpretation errors. Our results showed that higher rates of interpretation error led to worse memory retention, higher completion times, higher occurrences of user error (beyond those injected by the system), and greater perceived effort. These findings indicate that when input mechanisms must interpret the user's input, interpretation errors cause problems for user learning.
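
As a minimal, hypothetical sketch of what artificially injecting interpretation errors can look like (the paper's actual commands, rates, and procedure are not given here), the following simulates a system that misinterprets a fixed fraction of otherwise correct command inputs; the command set and error rates are invented.

```python
# Minimal sketch of artificially injecting interpretation errors:
# with probability `error_rate`, a correctly entered command is
# deliberately misinterpreted as a different command. The command
# set and rates are hypothetical, not those used in the paper.
import random

COMMANDS = ["copy", "paste", "undo", "bold", "italic", "save"]

def interpret(intended: str, error_rate: float, rng: random.Random) -> str:
    """Return the system's interpretation of the intended command."""
    if rng.random() < error_rate:
        # Inject an error: pick any command other than the intended one.
        return rng.choice([c for c in COMMANDS if c != intended])
    return intended

def simulate_block(error_rate: float, trials: int = 100, seed: int = 0) -> float:
    """Fraction of trials in which the system interpreted the command correctly."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        intended = rng.choice(COMMANDS)
        if interpret(intended, error_rate, rng) == intended:
            correct += 1
    return correct / trials

for rate in (0.0, 0.1, 0.2):
    print(f"error rate {rate:.0%}: observed accuracy {simulate_block(rate):.2f}")
```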

Typing Efficiency and Suggestion Accuracy Influence the Benefits and Adoption of Word Suggestions
Description

Suggesting words to complete a given sequence of characters is a common feature of typing interfaces. Yet, previous studies have not found a clear benefit, some even finding it detrimental. We report on the first study to control for two important factors, word suggestion accuracy and typing efficiency. Our accuracy factor is enabled by a new methodology that builds on standard metrics of word suggestions. Typing efficiency is based on device type. Results show word suggestions are used less often in a desktop condition, with little difference between tablet and phone conditions. Very accurate suggestions do not improve entry speed on desktop, but do on tablet and phone. Based on our findings, we discuss implications for the design of automation features in typing systems.
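
To illustrate the kind of standard word-suggestion metric such a methodology might build on (this is not the paper's metric), the sketch below measures how often the intended word appears among the top-k suggestions for a typed prefix; the vocabulary, frequencies, and test cases are invented.

```python
# Illustrative sketch of a standard word-suggestion accuracy metric:
# how often the intended word appears among the top-k suggestions for
# a given prefix. Vocabulary, frequencies, and test cases are invented.
from collections import Counter

# Hypothetical unigram frequencies standing in for a language model.
FREQ = Counter({"the": 500, "they": 120, "there": 90, "then": 80,
                "hello": 40, "help": 60, "held": 20})

def suggest(prefix: str, k: int = 3) -> list[str]:
    """Top-k most frequent words starting with the typed prefix."""
    candidates = [w for w in FREQ if w.startswith(prefix)]
    return sorted(candidates, key=FREQ.__getitem__, reverse=True)[:k]

def top_k_accuracy(cases: list[tuple[str, str]], k: int = 3) -> float:
    """Fraction of (prefix, intended word) cases where the intended
    word appears in the top-k suggestion list."""
    hits = sum(target in suggest(prefix, k) for prefix, target in cases)
    return hits / len(cases)

cases = [("th", "the"), ("th", "then"), ("he", "help"), ("he", "held")]
print(f"top-3 accuracy: {top_k_accuracy(cases):.2f}")
```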

People May Punish, Not Blame, Robots
Description

As robots may take a greater part in our moral decision-making processes, whether people hold them accountable for moral harm becomes critical to explore. Blame and punishment signify moral accountability, often involving emotions. We quantitatively looked into people’s willingness to blame or punish an emotional vs. non-emotional robot that admits to its wrongdoing. Studies 1 and 2 (online video interaction) showed that people may punish a robot because of its lack of perceived emotional capacity rather than because of its perceived agency. Study 3 (in the lab) demonstrated that people were neither willing to blame nor punish the robot. Punishing non-emotional robots seems more likely than blaming them, yet punishment towards robots is more likely to arise online than offline. We reflect on whether and why victimized humans (and those who care for them) may seek retributive justice against robot scapegoats when there are no humans to hold accountable for moral harm.

Drone in Love: Emotional Perception of Facial Expressions on Flying Robots
Description

Drones are rapidly populating human spaces, yet little is known about how these flying robots are perceived and understood by humans. Recent work suggests that their acceptance is predicated upon their sociability. This paper explores the use of facial expressions to represent emotions on social drones. We leveraged design practices from ground robotics and created a set of rendered robotic faces that convey basic emotions. We evaluated individuals’ responses to these emotional facial expressions on drones in two empirical studies (N = 98, N = 98). Our results demonstrate that individuals accurately recognize five drone emotional expressions, as well as make sense of intensities within emotion categories. We describe how participants were emotionally affected by the drone, showed empathy towards it, and created narratives to interpret its emotions. As a consequence, we formulate design recommendations for social drones and discuss methodological insights on the use of static versus dynamic stimuli in affective robotics studies.

Should Robots Blush?
Description

Social interaction is the most complex challenge in daily life. Inevitably, social robots will encounter interactions that are outside their competence. This raises a basic design question: how can robots fail gracefully in social interaction? The characteristic human response to social failure is embarrassment. Usefully, embarrassment both signals recognition of a problem and typically enlists sympathy and assistance to resolve it. This could enhance robot acceptability and provide an opportunity for interactive learning. Using a speculative design approach, we explore how, when and why robots might communicate embarrassment. A series of specially developed cultural probes, scenario development, and low-fidelity prototyping exercises suggest that embarrassment is relevant for managing a diverse range of social scenarios, has implications for both humanoid and non-humanoid robot design, and highlights the critical importance of understanding interactional context. We conclude that embarrassment is fundamental to competent social functioning and provides a potentially fertile area for interaction design.

Programmable Smart Home Toolkits Should Better Address Households' Social Needs
Description

End-user-programmable smart-home toolkits have engendered excitement in recent years. However, modern homes already cater quite well to users’ needs, and genuinely new needs for smart-home automation seldom arise. Acknowledging this challenging starting point, we conducted a six-week in-the-wild study of smart-home toolkits with four carefully recruited technology-savvy families. Interleaved with free toolkit use in the home were several creativity workshops to facilitate ideation and programming. We evaluated use experiences at the end of the six weeks. Even with extensive facilitation, families faced difficulties in identifying needs for smart-home automation, except for social needs, which emerged in all the families. We present an analysis of those needs and discuss how end-user-programmable toolkits could better engage with both those household members who design new automated functions and those who merely ‘use’ them.

Creepy Technology: What Is It and How Do You Measure It?
Description

Interactive technologies are getting closer to our bodies and permeate the infrastructure of our homes. While such technologies offer many benefits, they can also cause an initial feeling of unease in users. It is important for Human-Computer Interaction to manage first impressions and avoid designing technologies that appear creepy. To that end, we developed the Perceived Creepiness of Technology Scale (PCTS), which measures how creepy a technology appears to a user in an initial encounter with a new artefact. The scale was developed based on past work on creepiness and a set of ten focus groups conducted with users from diverse backgrounds. We followed a structured process of analytically developing and validating the scale. The PCTS is designed to enable designers and researchers to quickly compare interactive technologies and ensure that they do not design technologies that produce initial feelings of creepiness in users.

Touchscreen Typing As Optimal Supervisory Control
Description

Traditionally, touchscreen typing has been studied in terms of motor performance. However, recent research has exposed the decisive role of visual attention, which is shared between the keyboard and the text area. Strategies for this sharing are known to adapt to the task, design, and user. In this paper, we propose a unifying account of touchscreen typing, regarding it as optimal supervisory control. Under this theory, rules for controlling visuo-motor resources are learned via exploration in pursuit of maximal typing performance. The paper outlines the control problem and explains how visual and motor limitations affect it. We then present a model, implemented via reinforcement learning, that simulates co-ordination of eye and finger movements. Comparison with human data affirms that the model creates realistic finger- and eye-movement patterns and shows human-like adaptation. We demonstrate the model's utility for interface development by evaluating touchscreen keyboard designs.
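
As a toy illustration of the reinforcement-learning framing (far simpler than the paper's model, whose states, rewards, and parameters are not reproduced here), the sketch below trains a tabular Q-learning agent that decides, at each keystroke, whether to keep typing or glance at the text area to catch errors, trading typing speed against proofreading cost; all environment values are invented.

```python
# Toy Q-learning sketch of a gaze-allocation trade-off in typing:
# keep typing (fast, but errors accumulate unseen) or glance at the
# text area (slow, but errors get fixed). All values are invented.
import random
from collections import defaultdict

TYPE, GLANCE = 0, 1
PHRASE_LEN = 20        # keystrokes per simulated phrase
MAX_SINCE = 6          # cap on observable "keystrokes since last glance"
P_ERR_PER_KEY = 0.10   # chance a keystroke silently goes wrong

def run_episode(Q, epsilon, rng, alpha=0.1, gamma=0.95):
    typed, since_glance, hidden_errors = 0, 0, 0
    while typed < PHRASE_LEN:
        s = min(since_glance, MAX_SINCE)
        # epsilon-greedy action selection over the two gaze policies
        if rng.random() < epsilon:
            a = rng.choice((TYPE, GLANCE))
        else:
            a = max((TYPE, GLANCE), key=lambda x: Q[(s, x)])
        if a == TYPE:
            typed += 1
            since_glance += 1
            if rng.random() < P_ERR_PER_KEY:
                hidden_errors += 1
            r = 1.0                        # progress on the phrase
        else:  # GLANCE: costs time, but accumulated errors are fixed cheaply
            r = -0.3 - 0.5 * hidden_errors
            hidden_errors = 0
            since_glance = 0
        done = typed == PHRASE_LEN
        if done:
            r -= 3.0 * hidden_errors       # uncorrected errors are expensive
        s2 = min(since_glance, MAX_SINCE)
        best_next = 0.0 if done else max(Q[(s2, TYPE)], Q[(s2, GLANCE)])
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

Q = defaultdict(float)
rng = random.Random(0)
for episode in range(5000):
    run_episode(Q, epsilon=max(0.05, 1.0 - episode / 2500), rng=rng)

# Learned policy: after how many "blind" keystrokes does the agent glance?
print(["glance" if Q[(s, GLANCE)] > Q[(s, TYPE)] else "type"
       for s in range(MAX_SINCE + 1)])
```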

Rethinking Eye-blink: Assessing Task Difficulty through Physiological Representation of Spontaneous Blinking
Description

Continuous assessment of task difficulty and mental workload is essential in improving the usability and accessibility of interactive systems. Eye tracking data has often been investigated to achieve this ability, with reports on the limited role of standard blink metrics. Here, we propose a new approach to the analysis of eye-blink responses for automated estimation of task difficulty. The core module is a time-frequency representation of eye-blink, which aims to capture the richness of information reflected in blinking. In our first study, we show that this method significantly improves the sensitivity to task difficulty. We then demonstrate how to form a framework where the represented patterns are analyzed with multi-dimensional Long Short-Term Memory recurrent neural networks for their non-linear mapping onto difficulty-related parameters. This framework outperformed other methods that used hand-engineered features. This approach works with any built-in camera, without requiring specialized devices. We conclude by discussing how Rethinking Eye-blink can benefit real-world applications.
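
As a rough sketch of the general shape of such a pipeline (not the paper's implementation, which uses multi-dimensional LSTMs and its own representation), the code below converts a blink time series into a time-frequency representation with SciPy's spectrogram and feeds it to a small, plain LSTM classifier in PyTorch; the signal, frame rate, and network size are invented.

```python
# Rough sketch of a time-frequency + LSTM pipeline for estimating task
# difficulty from a blink signal. The blink data, labels, and network
# size are invented; this is not the paper's implementation.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

FS = 30  # assumed camera frame rate (Hz)

def blink_spectrogram(blink_signal: np.ndarray) -> torch.Tensor:
    """Time-frequency representation: a (time_steps, freq_bins) tensor."""
    f, t, Sxx = spectrogram(blink_signal, fs=FS, nperseg=64, noverlap=32)
    return torch.tensor(np.log1p(Sxx).T, dtype=torch.float32)  # (T, F)

class DifficultyLSTM(nn.Module):
    """LSTM over spectrogram frames, mapped to difficulty classes."""
    def __init__(self, n_freq_bins: int, n_classes: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(n_freq_bins, 32, batch_first=True)
        self.head = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, (h, _) = self.lstm(x)          # h: (1, batch, 32)
        return self.head(h[-1])           # logits per difficulty level

# Synthetic 60-second blink-aperture trace, just to exercise the pipeline.
signal = np.random.rand(60 * FS)
spec = blink_spectrogram(signal).unsqueeze(0)   # add batch dimension
model = DifficultyLSTM(n_freq_bins=spec.shape[-1])
print(model(spec).softmax(dim=-1))              # predicted class probabilities
```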
