Human intervention is claimed to safeguard decision subjects' rights in algorithmic decision-making and to contribute to their fairness perceptions. However, it is unclear how decision subjects perceive hybrid decision-maker configurations (i.e., those combining humans and algorithms). We address this gap through a mixed-methods study in an algorithmic policy-enforcement context. Through qualitative interviews (Study 1; N_1=21), we identify three characteristics (decision-maker's profile, model type, and input data provenance) that affect how decision subjects perceive decision-makers' ability, benevolence, and integrity (ABI). Through a quantitative study (Study 2; N_2=223), we then systematically evaluate the individual and combined effects of these characteristics on decision subjects' perceptions of decision-makers and on their fairness perceptions. We find that only the decision-maker's profile contributes to perceived ability, benevolence, and integrity. Interestingly, the effect of the decision-maker's profile on fairness perceptions is mediated by perceived ability and integrity. Our findings have design implications for ensuring effective human intervention as a protection against harmful algorithmic decisions.
https://dl.acm.org/doi/10.1145/3706598.3713145
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)