The adoption of responsible data science (RDS) practices in AI development remains inadequate despite growing awareness of algorithmic harms. One measure of success is practitioners' observed behavior, namely their adoption of responsible sequences of actions in their model-building practice. This paper evaluates two interventions for changing problematic behaviors and bridging the gap between ethical principles and practitioner behavior: (i) a motivational priming intervention that introduces short, relevant stories, and (ii) a fairness toolkit (Aequitas). Through a mixed-methods study with data scientists (N=12), we assess how these interventions influence fairness practices, model outcomes, and cognitive load across credit risk and income classification tasks. Results indicate that both interventions were effective in promoting responsible data science behaviors and improving the fairness of the delivered models while maintaining baseline accuracy. We argue that effective behavior change interventions must balance technical tooling with motivational scaffolding, and we offer actionable insights for fostering sustainable RDS practices.
ACM CHI Conference on Human Factors in Computing Systems