Multiplayer digital games can use aim assistance to help people with different levels of aiming ability to play together.
To dynamically provide each player with the right amount of assistance, an aim assistance algorithm needs a model of the player's ability that can be measured and updated during gameplay. The model must be based on difficulty parameters, such as target speed, size, and duration, that can be adjusted in-game to change aiming difficulty, and it must account for a player's spatial and temporal aiming abilities.
To satisfy these requirements, we present a novel dynamic spatiotemporal model of a player's aiming ability, based on difficulty parameters that can be manipulated in a game. In a crowdsourced experiment with 72 participants, the model accurately predicted how close to a target a player can aim and converged rapidly from a small set of observations of aiming tasks.
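To make the idea concrete, the following is a minimal illustrative sketch, not the paper's actual model: it assumes a hypothetical linear predictor of a player's aim error as a function of the difficulty parameters named above (target speed, size, duration), refit by least squares as observations of completed aiming tasks accumulate during play.

```python
import numpy as np

class ToyAimModel:
    """Hypothetical sketch: predicts how far from a target a player's
    shots land, from in-game difficulty parameters. The feature choice
    (error grows with speed, shrinks with size and duration) is an
    assumption for illustration only."""

    def __init__(self):
        self.X, self.y = [], []  # observed tasks and their aim errors
        self.w = np.zeros(4)     # weights for [1, speed, 1/size, 1/duration]

    @staticmethod
    def _features(speed, size, duration):
        return [1.0, speed, 1.0 / size, 1.0 / duration]

    def observe(self, speed, size, duration, aim_error):
        """Record one completed aiming task and refit the model."""
        self.X.append(self._features(speed, size, duration))
        self.y.append(aim_error)
        self.w, *_ = np.linalg.lstsq(
            np.array(self.X), np.array(self.y), rcond=None)

    def predict(self, speed, size, duration):
        """Predicted aim error for a task with these difficulty parameters."""
        return float(np.dot(self.w, self._features(speed, size, duration)))

# Simulate a player whose error grows with target speed and shrinks
# with target size, plus small noise.
rng = np.random.default_rng(0)
model = ToyAimModel()
for _ in range(200):
    s = rng.uniform(1, 5)      # target speed
    z = rng.uniform(0.5, 2)    # target size
    d = rng.uniform(0.5, 2)    # target duration
    model.observe(s, z, d, 0.2 + 0.1 * s + 0.3 / z + rng.normal(0, 0.02))

print(round(model.predict(speed=3.0, size=1.0, duration=1.0), 2))
```

Because the model is refit from a running list of observations, it can track a player's ability during gameplay in the spirit the abstract describes; the true model in the paper may differ in form and parameters.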
Large vertical surfaces such as wall displays allow users to work with a very large and high-resolution workspace. Such displays promote physical navigation: users can step close to the display to see details, but also move away to get a wider view of the workspace. In terms of input, current solutions usually combine direct touch on the wall with input on a handheld device, disconnecting close and distant input rather than treating them as a continuum. We present SurfAirs, which are physical controllers that users can manipulate on screen (surface input), in the air (mid-air input), and transition from the surface to the air during a single manipulation (hybrid input). We report on two user studies that compare SurfAirs’ performance with bare-hand input for both mid-air and hybrid input. Participants prefer and perform better with SurfAirs.
Drones are increasingly used in situations where they can assist a person. However, people are not familiar with drones approaching them. We propose an expressive light system embedded on a drone to convey its intention to initiate communication, which we present as a four-stage process: the Drone-Initiated Engagement Model (DIEM). We then describe the design and development of an LED-based prototype divided into two configurations with a total of 26 light animations, and report their evaluation in an online survey (N = 156). We describe the suitability of different configurations, animations, and colors to convey each stage of DIEM. We finally validate our system in a user study (N = 45), which showed that participants can perceive all four stages of approach via the drone's movement and that expressive lights provide a more nuanced user experience. We contribute insights into the design of expressive light systems as a stepping stone towards machine-initiated communication.
It has been shown that providing explanations about AI-based systems’ decisions can be an effective way to increase users’ trust and acceptance. However, less is known about how the design of explanations in smart home systems affects users’ acceptance and perceptions. We therefore explored the effect of different explanation designs on acceptance in the context of the Philips Hue smart home lighting system. We conducted interviews (N = 10) and an online experiment (N = 452) using three everyday smart home lighting scenarios with different explanation types. The results showed that although participants indicated a positive attitude towards explanations, receiving an explanation can reduce the perceived control of the lighting system. Furthermore, participants preferred system-based explanations over user-based explanations. Our study also provides recommendations for the design of explanations in smart home systems.
While conversational agents have traditionally been used for simple tasks such as scheduling meetings and customer service support, recent advancements have led researchers to examine their use in complex social situations, such as providing emotional support and companionship. For mourners, who can be vulnerable to a sense of loneliness and a disruption of self-identity, such technology offers a unique way to help them cope with grief. In this study, we explore the potential benefits and risks of such a practice through semi-structured interviews with 10 mourners who actively used chatbots at different phases of their loss. Our findings indicate seven ways in which chatbots were used to help people cope with grief, including taking the role of a listener, a simulation of the deceased, a romantic partner, a friend, and an emotion coach. We then highlight how interacting with the chatbots impacted mourners’ grief experience, and conclude the paper with further research opportunities.
Intensive care nurses are prone to suffering from chronic stress due to constant exposure to two main profession-related stressors: interruption and time pressure. These stressors have detrimental effects on the well-being of the nursing staff and, by proxy, the patients. To alleviate stress, increase safety, and support training for stressful scenarios, we investigate the impact these stressors have on subjective and objective stress levels in a virtual environment. We designed a virtual intensive care unit in which participants (n = 26, 18 of them healthcare professionals) perform common tasks, e.g., refilling an infusion pump, whilst being exposed to interruptions and time pressure. Results from our between-subjects study indicate increased stress in both stressor conditions, suggesting that artificially evoking work-related stressors for stress inoculation training (SIT) is a possible extension to simulation training during nursing education. This knowledge is helpful for designing training scenarios of safety-critical situations early in the professional apprenticeship.