Partial automation simplifies digital games by performing game actions for players. It may make gameplay more accessible to non-gamers who have difficulty controlling and understanding games. However, the automation may confuse players about what they control and what the automation controls. To describe and explain non-gamers' experiences of automation confusion, we analyzed gameplay, think-aloud, and interview data from ten non-gamer participants who played two partially automated games. Our results demonstrate how incorrect mental models, the behaviours resulting from those models, and players' attitudes towards the games led to different levels and types of confusion.
https://doi.org/10.1145/3544548.3581116
Multiplayer online games seek to address toxic behaviors such as trolling and griefing through behavior moderation, where penalties such as chat restriction or account suspension are issued against toxic players in the hope that punishment creates a teachable moment for the punished player to reflect on and improve their future behavior. While punishments affect player experience (PX) in profound ways, little is known about how players experience behavior moderation. In this study, we surveyed 291 players to understand their experiences with punishments in online multiplayer games. Through several statistical analyses, we found that moderation explanations play a critical role in improving players' perceived transparency and fairness of moderation, and that these perceptions significantly affect what players do after being punished. We discuss moderation experience as an important facet of PX, bridge the game and moderation literature, and provide design implications for behavior moderation in multiplayer online games.
https://doi.org/10.1145/3544548.3581097
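The abstract above reports that moderation explanations predict perceived transparency and fairness, which in turn predict post-punishment behavior. As a minimal sketch of one way such a two-step relationship can be examined, the snippet below fits two ordinary least squares regressions on synthetic data; the variable names, scales, and effect sizes are all invented for illustration and are not the authors' actual measures or analysis.

```python
# Sketch only: synthetic data standing in for survey responses.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 291  # matches the reported sample size; everything else below is hypothetical
explanation = rng.integers(0, 2, n)                      # 1 = player received an explanation
fairness = 3 + 0.8 * explanation + rng.normal(0, 1, n)   # perceived fairness (invented scale)
behaviour = 2 + 0.5 * fairness + rng.normal(0, 1, n)     # post-punishment behaviour score (invented)

df = pd.DataFrame({"explanation": explanation,
                   "fairness": fairness,
                   "behaviour": behaviour})

# Step 1: does receiving an explanation predict perceived fairness?
print(smf.ols("fairness ~ explanation", data=df).fit().summary().tables[1])
# Step 2: does perceived fairness predict what players report doing afterwards?
print(smf.ols("behaviour ~ fairness", data=df).fit().summary().tables[1])
```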
We aim to understand how people assess human likeness in navigation produced by people and artificially intelligent (AI) agents in a video game. To this end, we propose a novel AI agent with the goal of generating more human-like behavior. We collect hundreds of crowd-sourced assessments comparing the human-likeness of navigation behavior generated by our agent and baseline AI agents with human-generated behavior. Our proposed agent passes a Turing Test, while the baseline agents do not. By passing a Turing Test, we mean that human judges could not quantitatively distinguish between videos of a person and an AI agent navigating. To understand what people believe constitutes human-like navigation, we extensively analyze the justifications of these assessments. This work provides insights into the characteristics that people consider human-like in the context of goal-directed video game navigation, which is a key step for further improving human interactions with AI agents.
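The abstract above defines passing a Turing Test as judges being unable to quantitatively distinguish the agent's navigation from a person's. A minimal sketch of one common way to operationalise that, assuming a forced-choice judging setup and counts that are purely hypothetical, is a binomial test of judge accuracy against chance:

```python
# Sketch only: hypothetical judgement counts, not the study's data.
from scipy.stats import binomtest

n_judgements = 400   # hypothetical number of pairwise human-vs-agent assessments
n_correct = 212      # hypothetical number of correct "that clip is the human" picks

result = binomtest(n_correct, n_judgements, p=0.5, alternative="two-sided")
print(f"accuracy = {n_correct / n_judgements:.3f}, p = {result.pvalue:.3f}")
# A non-significant p-value means accuracy is statistically indistinguishable from guessing,
# i.e. the agent "passes" in this quantitative sense.
```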
With the expanding popularity of Location-Based Games and the rise of advertising therein, there is a need to understand the impact of Location-Based Game Advertising (LGA). This paper seeks to identify what makes LGA positively affective, leveraging Pokémon GO as a probe. We conducted twenty-seven (n=27) semi-structured interviews with Pokémon GO players to reveal lived experiences of LGA. Our findings highlight the following direct implications for LGA: (1) LGA acts as a digital billboard, conveying qualitative alongside locative information, and (2) well-received LGA enhances the player's agency. We additionally identify findings with auxiliary implications for LGA: (3) positive memorability occurs when points of interest match physical reality, and (4) ludic engagement is a mediating factor in the memorability of locations. This research demonstrates that LGA in Location-Based Games is surprisingly well-received. However, developers must give extra consideration to the player's agency for such techniques to be effective.
https://doi.org/10.1145/3544548.3580939
For gamers, high frame rates are important for a smooth visual display and a good quality of experience (QoE). However, high frame rates alone are not enough, as variation in frame display times can degrade QoE even when the average frame rate remains high. While the impact of steady frame rates on player QoE is fairly well studied, the effects of frame rate variation are not. This paper presents a 33-person user study that evaluates the impact of frame rate variation on users playing three different computer games. Analysis of the results shows that average frame rate alone is a poor predictor of QoE and that frame rate variation has a significant impact on player QoE. While the standard deviation of frame times is promising as a general predictor of QoE, it may not be accurate for every individual game. However, the 95% frame rate floor (the bottom 5% of frame rates the player experiences) appears to be an effective predictor of QoE both overall and for the individual games tested.
https://doi.org/10.1145/3544548.3580665
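As a minimal sketch of the two candidate QoE predictors named in the abstract above, the snippet below computes frame time standard deviation and a 95% frame rate floor from a synthetic trace of per-frame display times. The floor is read here as the 5th-percentile instantaneous frame rate (the rate the player falls below only 5% of the time), which is one plausible formalisation rather than necessarily the paper's exact definition.

```python
# Sketch only: synthetic frame times, not data from the study.
import numpy as np

rng = np.random.default_rng(1)
frame_times_ms = rng.gamma(shape=20, scale=0.8, size=5000)  # ~16 ms frames with jitter

fps = 1000.0 / frame_times_ms             # instantaneous frame rate per frame
mean_fps = fps.mean()                      # average frame rate (a poor QoE predictor on its own)
frame_time_std = frame_times_ms.std()      # frame time variation, in milliseconds
fps_floor_95 = np.percentile(fps, 5)       # bottom 5% of frame rates: the "95% floor"

print(f"mean FPS: {mean_fps:.1f}")
print(f"frame time std dev: {frame_time_std:.2f} ms")
print(f"95% frame rate floor: {fps_floor_95:.1f} FPS")
```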
Measuring perceived challenge and demand in video games is crucial, as these player experiences are essential to creating enjoyable games. Two recent measures that identified seemingly distinct structures of challenge (the Challenge Originating from Recent Gameplay Interaction Scale (CORGIS): cognitive, emotional, performative, decision-making) and demand (the Video Game Demand Scale (VGDS): cognitive, emotional, controller, exertional, social) have been theorised to overlap, reflecting the five-factor demand structure. To investigate the overlap between the two scales, we compared a five-factor (complete overlap) and a nine-factor (no overlap) model by surveying 1,101 players, asking them to recall their last gaming experience before completing the CORGIS and VGDS. After failing to confirm either model, we conducted an exploratory factor analysis. Our findings reveal seven dimensions, in which the five-factor VGDS structure holds alongside two additional CORGIS dimensions, performative and decision-making, ultimately providing a more holistic understanding of the concepts whilst highlighting unique aspects of each approach.
https://doi.org/10.1145/3544548.3581409
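As a minimal sketch of the exploratory factor analysis step described in the abstract above, the snippet below fits a seven-factor solution on synthetic Likert-style responses; the item count is invented, the data are random, and sklearn's FactorAnalysis with a varimax rotation stands in for whichever EFA procedure the authors actually used.

```python
# Sketch only: synthetic responses, not CORGIS/VGDS item data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_players, n_items = 1101, 24                    # sample size from the abstract; item count invented
responses = rng.integers(1, 8, size=(n_players, n_items)).astype(float)  # 7-point-style answers

efa = FactorAnalysis(n_components=7, rotation="varimax")  # seven-factor solution, as reported
efa.fit(responses)

loadings = efa.components_.T                     # items x factors loading matrix
print(loadings.shape)                            # (24, 7)
print(np.round(loadings[:5], 2))                 # loadings for the first five items
```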