Self-disclosure is central to mental health, and chatbots are increasingly used to elicit it by lowering the risk of social judgment. With the rapid growth of voice-based chatbots, it is crucial to understand how their voice identity shapes self-disclosure, yet this relationship remains underexplored. We address this gap through a mixed-method study combining a 14-day in-the-wild deployment (N = 61) with post-study interviews. Participants interacted daily with chatbots that spoke in one of three voices varying in social distance: their own, a family member's, or a stranger's. Findings show that chatbots using the user's own voice were rated as more attractive and sustained deeper levels of disclosure over time. Family-voice chatbots prompted reflection on interpersonal relationships: participants reported comfort in discussing some topics but reluctance about others. Together, these findings highlight voice identity as a key design lever for steering both the depth and focus of self-disclosure.
ACM CHI Conference on Human Factors in Computing Systems