Charts are used to communicate data visually, but we often do not know whether a chart's intended message aligns with the message readers perceive. In this mixed-methods study, we investigate how data journalists encode data and how members of a broad audience engage with, experience, and understand these visualizations. We conducted workshops and interviews with school and university students, job seekers, designers, and senior citizens to collect perceived messages and feedback on eight real-world charts. We analyzed these messages and compared them to the intended messages. Our results help us understand the gulf that can exist between the messages producers encode and the interpretations viewers take away. In particular, we find that consumers are often overwhelmed by the amount of data provided and easily confused by unfamiliar terms. Chart producers tend to follow strong conventions for visually encoding particular information, and these conventions might not always benefit consumers.
Creating games together is both a playful and effective way to develop skills in computational thinking, collaboration, and more. However, game development can be challenging for younger developers who lack formal training. While teenage developers frequently turn to online communities for peer support, their experiences in these communities can vary widely. To better understand the benefits and challenges teens face within online developer communities, we conducted interviews with 18 teenagers who created games or game elements in Roblox and received peer support from one or more online Roblox developer communities. Our findings show that developer communities provide teens with valuable resources for technical, social, and career growth. However, teenagers also struggle with inter-user conflicts and a lack of community structure, leading to difficulties in handling complex issues that may arise, such as financial scams. Based on these insights, we propose takeaways for creating positive and safe online spaces for teenage game creators.
Volunteer moderators use various strategies to address online harms within their communities. Although punitive measures like content removal or account bans are common, recent research has explored the potential of restorative justice as an alternative framework for addressing the distinct needs of victims, offenders, and community members. In this study, we take steps toward identifying a more concrete design space for restorative justice-oriented tools by developing ApoloBot, a Discord bot designed to facilitate apologies when harm occurs in online communities. We present results from two rounds of interviews: first, with moderators who gave feedback on the design of ApoloBot, and second, with a subset of these moderators after they had deployed ApoloBot in their communities. This study builds on prior work to yield more detailed insights into the potential of adopting online restorative justice tools, including opportunities, challenges, and implications for future designs.
In times of crisis, communities rise to fill the void of faltering institutions, self-organising to provide essential resources to marginalised populations. From providing relief to survivors of natural disasters, to addressing crises caused by societal failings like poverty, homelessness and unemployment, mutual aid is an important tool for community care and the development of new systems of survival. With mutual aid efforts increasingly entering the digital sphere, some work has investigated how the internet has transformed mutual aid, especially via social media. While such work describes mutual aid across a variety of contexts, we lack a broad understanding of how mutual aid principles translate online and the challenges organisers face in this digital landscape. To address this, we review 19 papers, identifying key characteristics, strategies, and challenges in online mutual aid. In doing so, we aim to enhance our understanding of how technology might foster sustainable community support and solidarity.
This study investigates how interaction scenarios in which humans provide care for robots affect humans' perceived bond with those robots. In a between-subjects lab experiment (n = 88), participants played a game with a social robot during which they provided either 1) emotional care (comforting the robot); 2) instrumental care (helping with battery charging); or 3) no care for the robot. Results indicated that caregiving did not significantly affect human-robot bonding according to explicit relationship measures, including closeness, social attraction, and desire for future interaction. However, caregiving mattered when bonding was measured implicitly: those in the emotional caregiving scenario were more hesitant to replace the robot and invested more effort in a voluntary task requested by the robot than those who provided no care. These findings provide empirical evidence that emotional caregiving interactions can effectively foster initial human-robot bonding, highlighting a promising design scenario for human-robot interaction.
Personal informatics (PI) helps individuals understand themselves, but it often struggles to capture non-conscious behaviors such as stress responses, habitual actions, and communication styles. Incorporating social aspects into PI systems offers new perspectives on self-understanding, yet prior research has largely focused on unidirectional approaches that center benefits on the primary tracker. To address this gap, we introduce the Peerspective study, which explores reciprocal tracking, a bidirectional practice in which two participants observe and provide feedback to each other, fostering mutual self-understanding and collaboration. In a week-long study with eight peer dyads, we examined how reciprocal observation and feedback influence self-awareness and interpersonal relationships. Our findings reveal that reciprocal tracking not only helps participants uncover blind spots and expand their self-concepts but also enhances empathy, deepens communication, and promotes sustained engagement. We discuss key facilitators and challenges of integrating reciprocity into personal informatics systems and offer design considerations for supporting collaborative tracking in everyday contexts.
Virtual reality technologies that enhance realism and artificial intelligence (AI) systems that assist human behavior are increasingly interwoven in social applications. However, how these technologies might jointly influence interpersonal coordination remains unclear. We conducted an experiment with 240 participants in 120 pairs who interacted through remote-controlled robot cars in a physical space or virtual cars in a digital space, with or without autosteering assistance, using the chicken game, an established model of interpersonal coordination. We find that both realism and AI assistance improve user performance, but through opposing mechanisms. Real-world contexts enhanced communication, fostering reciprocal actions and collective benefits. In contrast, autosteering assistance diminished the need for interpersonal coordination, shifting participants' focus towards self-interest. Notably, when combined, the egocentric effects of autosteering assistance outweighed the prosocial effects of realism. We believe the design of HCI systems that involve social coordination will need to take such effects into account.