Interacting with flying objects has fueled people's imagination throughout history. Over the past decade, the Human-Drone Interaction (HDI) community has been working towards making this dream a reality. Despite notable findings, we still lack a high-level perspective on current and future use cases for interacting with drones. We present a holistic view of the domains and applications of use described, studied, and envisioned in the HDI body of work. To map the extent and nature of prior research, we performed a scoping review (N = 217). We identified 16 domains and over 100 applications in which drones and people interact. We then describe in depth the main domains and applications reported in the literature and further present under-explored use cases with great potential. We conclude with fundamental challenges and opportunities for future research in the field. This work contributes a systematic step towards increased replicability and generalizability of HDI research.
https://dl.acm.org/doi/abs/10.1145/3491102.3501881
Online platforms commonly collect and display user-generated information to support subsequent users' decision-making. However, studies have shown that presenting such collective information can exert social influence on individuals' opinions and alter their preferences accordingly. To address potential biases, it is essential to deepen our understanding of people's preferences when they are exposed to others' opinions, and of the underlying cognitive mechanisms. We therefore conducted a laboratory study investigating how products' ratings and reviews influence participants' stated preferences and cognitive responses, as assessed by their electroencephalography (EEG) signals. The results showed that social ratings and reviews can alter participants' preferences and affect their attention, working memory, and emotional state. We further conducted predictive analyses showing that participants' EEG-based measures achieve higher discriminative power than behavioral measures in distinguishing how collective information is displayed to users. We conclude by discussing design implications of these results for collective rating systems.
https://dl.acm.org/doi/abs/10.1145/3491102.3517726
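The predictive analyses described above amount to a classification comparison. The following is a minimal, hypothetical sketch of that kind of comparison — classifiers trained on EEG-derived features versus behavioral features to discriminate display conditions — using synthetic data and scikit-learn. The feature sets, dimensions, and effect sizes are assumptions for illustration, not the authors' pipeline.

```python
# Hypothetical sketch: compare EEG-derived vs. behavioral features for
# discriminating how collective information (ratings/reviews) was displayed.
# All data below are synthetic; feature choices are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials = 300  # assumed number of labeled trials

# Labels: which display condition the participant saw (e.g., 0 = no social
# information, 1 = ratings shown, 2 = ratings + reviews shown).
y = rng.integers(0, 3, n_trials)

# EEG features: e.g., band power per channel/band (32 synthetic features,
# weakly modulated by condition, standing in for attention/memory/emotion markers).
X_eeg = rng.normal(size=(n_trials, 32)) + 0.4 * y[:, None]
# Behavioral features: e.g., stated preference and response time (2 features).
X_beh = rng.normal(size=(n_trials, 2)) + 0.15 * y[:, None]

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
for name, X in [("EEG", X_eeg), ("behavioral", X_beh)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name} features: mean CV accuracy = {acc:.2f}")
```

Under these assumptions, the richer EEG feature set yields higher cross-validated accuracy, mirroring the paper's reported finding that EEG-based measures discriminate display conditions better than behavioral measures.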
There is growing interest in extending crowdwork beyond its traditional desktop-centric design to include mobile devices (e.g., smartphones). However, mobilizing crowdwork remains tedious, largely due to a limited understanding of the mobile usability requirements of human intelligence tasks (HITs). We present a taxonomy of characteristics that defines the mobile usability of HITs for smartphones. The taxonomy was developed in three consecutive steps. In Step 1, we establish an initial design of the taxonomy through a targeted literature analysis. In Step 2, we verify and extend the taxonomy through an online survey with Amazon Mechanical Turk crowdworkers. Finally, in Step 3, we demonstrate the taxonomy's utility by applying it to analyze the mobile usability of a dataset of scraped HITs. In this paper, we present the iterative development of the taxonomy, highlighting the observed practices and preferences around mobile crowdwork. We conclude with the implications of our taxonomy for accessibly and ethically mobilizing crowdwork, not only on smartphones but also beyond them.
https://dl.acm.org/doi/abs/10.1145/3491102.3501876
User experience (UX) summarizes user perceptions and responses resulting from the interaction with a product, system, or service. The User Experience Questionnaire (UEQ) is a standardized instrument for measuring UX. With six scales, it identifies the areas in which product improvements will have the highest impact. In this paper, we evaluate the reliability and validity of this questionnaire. Data from N = 1,121 participants who interacted with one of 23 products indicated acceptable to good reliability for all scales. The results show, however, that the scales were not independent of each other. Combining perspicuity, efficiency, and dependability into pragmatic aspects of UX, and novelty and stimulation into hedonic aspects, significantly improved the model fit. Systematic variation of product properties and correlations with the System Usability Scale (SUS) in a second experiment with N = 499 participants supported the validity of these two factors. Practical implications of the results are discussed.
https://dl.acm.org/doi/abs/10.1145/3491102.3502098
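A minimal sketch of the scale-level reliability analysis the abstract refers to: Cronbach's alpha per scale, plus aggregation of the three pragmatic-leaning scales into a single score. The responses below are synthetic and the item counts are illustrative assumptions; only the sample size N = 1,121 comes from the abstract.

```python
# Sketch: Cronbach's alpha for a UEQ-style scale, computed on synthetic
# responses, and aggregation of three scales into "pragmatic quality".
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
n = 1121  # sample size reported in the abstract
latent = rng.normal(size=(n, 1))  # shared factor driving item correlations
perspicuity = latent + 0.6 * rng.normal(size=(n, 4))   # 4 items per scale (assumed)
efficiency = latent + 0.6 * rng.normal(size=(n, 4))
dependability = latent + 0.6 * rng.normal(size=(n, 4))

print(f"alpha(perspicuity) = {cronbach_alpha(perspicuity):.2f}")

# The paper's key finding: these three scales load on one higher-order
# pragmatic factor, so their scale means can be averaged into one score.
pragmatic = np.column_stack(
    [m.mean(axis=1) for m in (perspicuity, efficiency, dependability)]
).mean(axis=1)
print(f"pragmatic quality, first 3 respondents: {pragmatic[:3].round(2)}")
```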
In the last decade, interest in accessible and eyes-free text entry has continued to grow. However, little research has explored the feasibility of using audibly distinct phrases for text entry tasks. To better understand whether preexisting phrases used in text entry research are sufficiently distinct for eyes-free text entry tasks, we used Microsoft's and Apple's desktop text-to-speech systems, with the default male and female voices, to generate audio for all 500 phrases in MacKenzie and Soukoreff's set [32]. We then asked 392 participants recruited through Amazon's Mechanical Turk to transcribe the generated audio clips. We report participants' transcription errors and present the 96 phrases for which no comprehension errors were observed. These phrases were further tested with 80 participants, recruited through Twitter, who identified as low-vision and/or blind. We contribute the 92 phrases that remained free of comprehension errors across both experiments.
https://dl.acm.org/doi/abs/10.1145/3491102.3501897
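A hypothetical sketch of the comprehension-error scoring implied above: a transcription counts as error-free if it matches the target phrase after normalization, and a phrase survives only if none of its transcriptions fail. The normalization rules and sample data here are assumptions; the paper defines the authors' exact criteria.

```python
# Sketch: flag phrases with zero comprehension errors across transcriptions.
# Normalization (lowercasing, stripping punctuation) is an assumed rule.
import re

def normalize(text: str) -> str:
    # Lowercase, drop punctuation, collapse whitespace.
    return re.sub(r"\s+", " ", re.sub(r"[^a-z0-9 ]", "", text.lower())).strip()

def error_free_phrases(transcriptions):
    """transcriptions: iterable of (target_phrase, transcribed_text) pairs.
    Returns the set of target phrases with zero comprehension errors."""
    seen, failed = set(), set()
    for target, typed in transcriptions:
        seen.add(target)
        if normalize(typed) != normalize(target):
            failed.add(target)
    return seen - failed

data = [
    ("the quick brown fox", "The quick brown fox."),               # match after normalization
    ("prevailing wind from the east", "prevailing winds from the east"),  # error
]
print(error_free_phrases(data))  # -> {'the quick brown fox'}
```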