We present the first real-world dataset and quantitative evaluation of mobile device users' visual attention in situ, i.e. while they use their devices during their everyday routine. Understanding user attention is a core research challenge in mobile HCI, but previous approaches relied on usage logs or self-reports, which are only proxies and consequently reflect attention neither completely nor accurately. Our evaluations are based on Everyday Mobile Visual Attention (EMVA), a new 32-participant dataset containing around 472 hours of video snippets recorded over more than two weeks in real life using the front-facing camera, along with associated usage logs, interaction events, and sensor data. Using an eye contact detection method, we are the first to quantify the highly dynamic nature of everyday visual attention across users, mobile applications, and usage contexts. We discuss key insights from our analyses that highlight the potential of, and inform the design of, future mobile attentive user interfaces.
https://doi.org/10.1145/3313831.3376449
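The abstract does not spell out how per-frame eye contact detections are turned into attention measures. As a minimal illustrative sketch only (not the authors' pipeline), per-frame eye contact labels from the front-camera snippets could be joined with the usage logs by timestamp and aggregated into an attention share per application; the data format and function below are hypothetical.

```python
from collections import defaultdict

def attention_share_per_app(frames):
    """Aggregate per-frame eye contact labels into an attention share per app.

    `frames` is an iterable of (app_name, has_eye_contact) tuples, e.g. the
    output of an eye contact detector on front-camera frames joined with
    usage logs by timestamp. (Hypothetical format, for illustration only.)
    """
    contact = defaultdict(int)
    total = defaultdict(int)
    for app, has_contact in frames:
        total[app] += 1
        contact[app] += int(has_contact)
    # Fraction of recorded frames in which the user looked at the screen.
    return {app: contact[app] / total[app] for app in total}

# Toy example: attention is higher while messaging than while playing music.
frames = [("messenger", True), ("messenger", True), ("messenger", False),
          ("music", False), ("music", False), ("music", True)]
print(attention_share_per_app(frames))  # roughly {'messenger': 0.67, 'music': 0.33}
```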
Attentional tunneling, that is, the inability to detect unexpected changes in the environment, has been shown to have critical consequences in air traffic control. The motivation of this study was to assess the design of a cognitive countermeasure dedicated to mitigating such failures of attention. The Red Alert cognitive countermeasure relies on a brief orange-red flash (300 ms) that masks the entire screen at 15% opacity. Twenty-two air traffic controllers faced two demanding scenarios, with or without the cognitive countermeasure. The volunteers were not told about the Red Alert so as to assess the intuitiveness of the design without prior knowledge. Behavioral results indicated that, compared to the classical operational design, the cognitive countermeasure reduced reaction time and improved detection of the notification. Further analyses showed that this effect was even stronger for the half of our participants (91.7% detection rate) who intuitively understood the purpose of the design.
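To make the countermeasure's parameters concrete, a rough desktop sketch of such a full-screen flash is shown below. This is not the air traffic control interface used in the study; the exact color value is an assumption, only the 300 ms duration and 15% opacity come from the abstract.

```python
import tkinter as tk

FLASH_COLOR = "#ff4500"   # orange-red; the exact hue used in the study is not given
FLASH_ALPHA = 0.15        # 15% opacity, as described in the abstract
FLASH_MS = 300            # 300 ms flash duration, as described in the abstract

def red_alert(root: tk.Tk) -> None:
    """Briefly mask the whole screen with a semi-transparent orange-red overlay."""
    overlay = tk.Toplevel(root)
    overlay.attributes("-fullscreen", True)    # cover the entire screen
    overlay.attributes("-topmost", True)       # draw above the operational display
    overlay.attributes("-alpha", FLASH_ALPHA)  # window-level transparency
    overlay.configure(background=FLASH_COLOR)
    root.after(FLASH_MS, overlay.destroy)      # remove the flash after 300 ms

if __name__ == "__main__":
    root = tk.Tk()
    root.withdraw()                 # hide the helper root window
    red_alert(root)
    root.after(1000, root.destroy)  # keep the event loop alive briefly for the demo
    root.mainloop()
```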
The use of semi-autonomous Unmanned Aerial Vehicles (UAVs) to support emergency response scenarios, such as fire surveillance and search and rescue, offers the potential for huge societal benefits. However, designing an effective solution in this complex domain represents a "wicked design" problem, requiring a careful balance between trade-offs associated with drone autonomy versus human control, mission functionality versus safety, and the diverse needs of different stakeholders. This paper focuses on designing for situational awareness (SA) using a scenario-driven, participatory design process. We developed SA cards describing six common design problems, known as SA demons, and three new demons of importance to our domain. We then used these SA cards to equip domain experts with SA knowledge so that they could engage more fully in the design process. We designed a potentially reusable solution for achieving SA in multi-stakeholder, multi-UAV emergency response applications.
https://doi.org/10.1145/3313831.3376825
In virtual reality (VR), a user's virtual avatar can interact with a virtual object by colliding with it. If collision responses do not occur in the direction that the user expects, the user experiences degraded accuracy and precision in applications such as VR sports games. In determining the response of a virtual collision, existing physics engines have not considered the direction in which the user perceived and estimated the collision. Based on cue integration theory, this study presents a statistical model explaining how users estimate the direction of a virtual collision from their body's orientation and velocity vectors. The accuracy and precision of virtual collisions can be improved by 8.77% and 30.29%, respectively, by setting the virtual collision response in the direction that users perceive.
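The abstract does not give the model's exact form. A common cue-integration formulation weights each cue by its reliability (inverse variance); the sketch below illustrates that standard formulation under that assumption and is not necessarily the paper's fitted model. The function name, noise parameters, and example values are hypothetical.

```python
import numpy as np

def integrate_collision_direction(d_orientation, d_velocity,
                                  sigma_orientation, sigma_velocity):
    """Reliability-weighted combination of two directional cues.

    Standard cue-integration form (an assumption here, not the paper's fitted
    model): each cue is weighted by the inverse of its variance, so the less
    noisy cue dominates the perceived collision direction.
    """
    d_o = np.asarray(d_orientation, dtype=float)
    d_v = np.asarray(d_velocity, dtype=float)
    d_o /= np.linalg.norm(d_o)
    d_v /= np.linalg.norm(d_v)

    w_o = 1.0 / sigma_orientation ** 2
    w_v = 1.0 / sigma_velocity ** 2
    combined = (w_o * d_o + w_v * d_v) / (w_o + w_v)
    return combined / np.linalg.norm(combined)

# Toy example: the body faces along +x but moves diagonally; with a noisier
# velocity cue, the estimated collision direction stays closer to +x.
print(integrate_collision_direction([1, 0, 0], [1, 1, 0],
                                    sigma_orientation=0.2, sigma_velocity=0.5))
```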