Too often, while interacting with online technologies, we blindly agree to services' terms and conditions (T&Cs). We tend to disregard their content, believing it is not worth engaging with the long, hard-to-understand texts. The inconspicuous display of online T&Cs in the user interface (UI) adds to our lack of engagement. Nevertheless, certain information included in T&Cs could help us make optimal decisions. In this replication research, we investigate this issue in the purchasing context. Through an online experiment (N=987), we confirm and extend previous findings, showing that T&Cs presented differently (icons, scroll, and cost cue) compared to hyperlinked text affect whether people open them and thus become aware of their content. We also show that decision-making style affects the relationship between awareness and satisfaction. We discuss how these findings could be used to improve users' informed decision-making. We also highlight problems that the different designs may pose, potentially widening the information gap between users and service providers.
https://dl.acm.org/doi/abs/10.1145/3491102.3517720
How do people form impressions of effect size when reading scientific results? We present a series of studies on how people perceive treatment effectiveness when scientific results are summarized in various ways. We first show that a prevalent form of summarizing results, presenting mean differences between conditions, can lead to significant overestimation of treatment effectiveness, and that including confidence intervals can exacerbate the problem. We attempt to remedy potential misperceptions by displaying information about variability in individual outcomes in different formats: statements about variance, a quantitative measure of standardized effect size, and analogies that compare the treatment with more familiar effects (e.g., height differences by age). We find that all of these formats substantially reduce potential misperceptions and that analogies can be as helpful as more precise quantitative statements of standardized effect size. These findings can be applied by scientists in HCI and beyond to improve the communication of results to laypeople.
https://dl.acm.org/doi/abs/10.1145/3491102.3502053
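The distinction between a raw mean difference and a standardized effect size can be made concrete with a small worked example. The sketch below is illustrative only and not taken from the paper's materials: the group scores are hypothetical, and Cohen's d is used as one common standardized effect size. It shows how a mean difference that sounds large can still correspond to heavily overlapping individual outcomes.

```python
import math
import statistics

# Illustrative sketch (hypothetical data, not from the paper): outcome scores
# for a control and a treatment group with a visible mean difference but
# large individual variability.
control = [52, 61, 47, 70, 58, 49, 66, 55, 63, 51]
treatment = [57, 66, 50, 74, 62, 53, 71, 60, 68, 55]

mean_diff = statistics.mean(treatment) - statistics.mean(control)

# Pooled standard deviation, then Cohen's d as one common standardized
# effect size: the mean difference expressed in standard-deviation units.
n1, n2 = len(control), len(treatment)
s1, s2 = statistics.stdev(control), statistics.stdev(treatment)
pooled_sd = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
cohens_d = mean_diff / pooled_sd

print(f"Mean difference: {mean_diff:.1f} points")
print(f"Cohen's d: {cohens_d:.2f}")
# A reader shown only the 4.4-point mean difference may assume most treated
# individuals did better than untreated ones; the moderate d (about 0.56)
# signals substantial overlap between the two groups' outcomes.
```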
Recent work in HCI suggests that users can be powerful in surfacing harmful algorithmic behaviors that formal auditing approaches fail to detect. However, it is not well understood how users manage to be so effective, nor how we might support more effective user-driven auditing. To investigate, we conducted a series of think-aloud interviews, diary studies, and workshops, exploring how users find and make sense of harmful behaviors in algorithmic systems, both individually and collectively. Based on our findings, we present a process model capturing the dynamics of and influences on users' search and sensemaking behaviors. We find that 1) users' search strategies and interpretations are heavily guided by their personal experiences with and exposures to societal bias; and 2) collective sensemaking amongst multiple users is invaluable in user-driven algorithm audits. We offer directions for the design of future methods and tools that can better support user-driven auditing.
https://dl.acm.org/doi/abs/10.1145/3491102.3517441
Future offices are likely to be reshaped by Augmented Reality (AR), which extends the display space while maintaining awareness of the surroundings and thus promises to support collaborative tasks such as brainstorming or sensemaking. However, it is unclear how physical surroundings and co-located collaboration influence the spatial organization of virtual content for sensemaking. Therefore, we conducted a study (N=28) to investigate how office environments and work styles affect content placement, layout strategies, and sensemaking workflows during a document classification task in AR. Results show that participants require furniture, especially tables and whiteboards, to assist sensemaking and collaboration regardless of room settings, while generous free spaces (e.g., walls) are likely to be used when available. Moreover, collaborating participants tend to use furniture despite personal layout preferences. We identified different placement and layout strategies, as well as the transitions between them. Finally, we propose design implications for future immersive sensemaking applications and beyond.
https://dl.acm.org/doi/abs/10.1145/3491102.3501946
Information extraction (IE) approaches often play a pivotal role in text analysis and require significant human intervention. Therefore, a deeper understanding of existing IE practices and related challenges from a human-in-the-loop perspective is warranted. In this work, we conducted semi-structured interviews in an industrial environment and analyzed the reported IE approaches and limitations. We observed that data science workers often follow an iterative task model consisting of information foraging and sensemaking loops across all phases of an IE workflow. The task model is generalizable and captures diverse goals across these phases (e.g., data preparation, modeling, evaluation). We found several limitations in both the foraging (e.g., data exploration) and sensemaking (e.g., qualitative debugging) loops, stemming from a lack of adherence to existing cognitive engineering principles. Moreover, we identified that, due to the iterative nature of an IE workflow, the need for provenance is often implied but rarely supported by existing systems. Based on these findings, we discuss design implications for supporting IE workflows and future research directions.
https://dl.acm.org/doi/abs/10.1145/3491102.3502068