Cognitive load is a significant barrier to users' deliberativeness, and interface design has been used to mitigate it. This paper surveys the literature on the anchoring effect, the partitioning effect, and the point-of-choice effect, based on which we propose three interface nudges: a word-count anchor, partitioned text fields, and a reply choice prompt. We then conducted a 2×2×2 factorial experiment with 80 participants (10 per condition) to test how these nudges affect deliberativeness. The results showed a significant positive effect of the word-count anchor, as well as a significant positive effect of the partitioned text fields on the word count of responses. The reply choice prompt had a surprisingly negative effect on the quantity of responses, hinting at the possibility that it induces a fear of evaluation, which could in turn dampen the willingness to reply.
Humans frequently interact with conversational agents. Rapid advances in neural generative language modeling have accelerated the creation of intelligent conversational agents. Researchers typically evaluate the output of their models through crowdsourced judgments, but there are no established best practices for conducting such studies. Moreover, it is unclear whether cognitive biases in decision-making affect crowdsourced workers' judgments when they undertake these tasks. To investigate, we conducted a between-subjects study with 77 crowdsourced workers to understand the role of cognitive biases, specifically anchoring bias, when humans are asked to evaluate the output of conversational agents. Our results provide insight into how best to evaluate conversational agents. We find that the increased consistency in ratings across two experimental conditions may be a result of anchoring bias. We also find that external factors such as time and prior experience with similar tasks affect inter-rater consistency.
https://doi.org/10.1145/3313831.3376318
We investigated retroactive transfer when users alternate between different interfaces. Retroactive transfer is the influence of a newly learned interface on users' performance with a previously learned interface. In an interview study, participants described their experiences when alternating between different interfaces, e.g., different operating systems, devices, or techniques. Negative retroactive transfer related to text entry was the most frequently reported incident. We then conducted a laboratory experiment investigating how the similarity between two abstract keyboard layouts, and the number of alternations between them, affect retroactive interference. Results indicated that even small changes in the interfering interface produced a significant performance drop across the entire previously learned interface, and that the amplitude of this drop decreased with the number of alternations. We suggest that retroactive transfer should receive more attention in HCI, as the ubiquitous nature of interactions across applications and systems requires users to increasingly alternate between similar interfaces.
https://doi.org/10.1145/3313831.3376538
We conduct a study of hiring bias on a simulation platform where we ask Amazon MTurk participants to make hiring decisions for a mathematically intensive task. Our findings suggest hiring biases against Black workers and less attractive workers, and preferences toward Asian workers, female workers, and more attractive workers. We also show that certain UI designs, including providing candidates' information at the individual level and reducing the number of choices, can significantly reduce discrimination, whereas providing candidates' information at the subgroup level can increase it. The results have practical implications for designing better online freelance marketplaces.
Undisclosed online endorsements on social media can mislead users, who may not know when the content they view contains advertising. Despite federal regulations requiring content creators to disclose online endorsements, studies suggest that fewer than 10% do so in practice. To address this issue, we need to know how best to detect online endorsements, how prevalent they are in the wild, and how to design systems that automatically disclose advertising content to viewers. To that end, we designed, implemented, and evaluated a tool called AdIntuition, which automatically discloses when YouTube videos contain affiliate marketing, a type of social media endorsement. We evaluated AdIntuition with 783 users through a survey, a field deployment, and a diary study. We discuss our findings and recommendations for future measurement of, and tools to detect and alert users about, affiliate marketing content.
https://doi.org/10.1145/3313831.3376178