Despite the benefits of team diversity, individuals often choose to work with similar others. Online team formation systems have the potential to help people assemble diverse teams. Systems can connect people to collaborators outside their networks, and features can quantify and raise the salience of diversity to users as they search for prospective teammates. But if we build a feature indicating diversity into the tool, how will people react to it? Two experiments manipulating the presence or absence of a "diversity score" feature within a teammate recommender demonstrate that, when the feature is present, individuals avoid collaborators who would increase team diversity in favor of those who lower it. These results have important practical implications. Though the broader access to prospective teammates that recommender systems provide may benefit diversity, designers are cautioned against features that raise its salience, as this information may ultimately undermine diverse team formation.
First impressions influence subsequent behavior, especially when deciding how much effort to invest in an activity such as taking an online course. In computer programming courses, a context where social group stereotypes are salient, social cues early in the course can be used strategically to affirm the sense of belonging of members of historically underrepresented groups. We tested this idea in two randomized field experiments (N=53,922) by varying the social identity and status of the presenter of a welcome video and assessing online learners' persistence and achievement. Counter to our hypotheses, we found lower persistence among women in certain age groups when the welcome video was presented by a female instructor or by lower-status peers. Men remained unaffected. The results suggest that women are more responsive to social cues in online STEM courses, an environment where their social identity has been negatively stereotyped. Presenting a male and a female instructor together was an effective strategy for retaining women in the course.
We use personality theory to compare self-presentation between multiple Instagram accounts, investigating authenticity and consistency. Many studies claim social media promote inauthentic self-presentation focused on socially desirable traits. At the same time, platform affordances suggest that self-presentation should be relatively consistent within one social medium. For 88 participants, we examine personality traits for 'real Instagram' ('Rinsta') versus 'fake Instagram' ('Finsta') accounts, comparing these with people's offline traits using mixed methods. Counterintuitively, we find Finsta accounts often present socially undesirable traits. Furthermore, different accounts on the same social medium reveal quite different styles of self-presentation. Overall, Finstas are more Extraverted, less Conscientious, and less Agreeable than Rinstas, though equally as Neurotic as people's offline selves. Interviews indicate these trait differences arise from differing audience perceptions. A large anonymous Rinsta audience promotes a carefully curated self. In contrast, a small but trusted Finsta audience can engender more authentic, but negative, self-presentation. We discuss implications for design and theory.
As an online community for discussing research findings, r/science has the potential to contribute to science outreach and communication with a broad audience. Yet previous work suggests that most of the active contributors on r/science are science-educated people rather than the lay public. One potential reason is that r/science contributors might use a different, more specialized language than that used in other subreddits. To investigate this possibility, we analyzed the language used in more than 68 million posts and comments from 12 subreddits in 2018. We show that r/science uses a specialized language that is distinct from other subreddits. Transient (newer) authors of posts and comments on r/science use less specialized language than more frequent authors, and those who leave the community use less specialized language than those who stay, even when comparing their first comments. These findings suggest that the specialized language used in r/science has a gatekeeping effect, preventing participation by people whose language does not align with that used in r/science. By characterizing r/science's specialized language, we contribute guidelines and tools for increasing the number of contributors to r/science.
Transgender people are marginalized, facing specific privacy concerns and high risk of online and offline harassment, discrimination, and violence. They also benefit tremendously from technology. We conducted semi-structured interviews with 18 transgender people from 3 U.S. cities about their computer security and privacy experiences, broadly construed. Participants frequently returned to themes of activism and prosocial behavior, such as protest organization, political speech, and role-modeling transgender identities, so we focus our analysis on these themes. We identify several prominent risk models related to visibility, luck, and identity that participants used to analyze their own risk profiles, often perceiving those profiles as distinct or extreme. These risk perceptions may heavily influence transgender people's defensive behaviors and self-efficacy, jeopardizing their ability to defend themselves or to gain technology's benefits. We articulate design lessons emerging from these ideas, contrasting and relating them to lessons about other marginalized groups wherever possible.