Providing the same level of information through the in-vehicle interface regardless of context can overwhelm automated vehicle occupants in simple scenarios or leave them underinformed in more demanding ones. This study investigates how human preference for in-vehicle feedback detail scales with scene complexity during pedestrian encounters. We measure scene complexity through driving decision diversity and, in an initial experiment (N=68), validate its positive correlation with pedestrian crossing intent uncertainty. Using a mock-up in-vehicle interface, a second experiment (N=88) evaluates user preferences for manually crafted feedback concepts simulating three levels of the system's situational awareness. Results indicate that as intent uncertainty increases, users prefer more detailed feedback. While perception-only feedback suffices for simple encounters, in complex situations, information on system comprehension and projection supports a clearer and easier understanding of driving decisions. These findings provide an empirical basis for scaling feedback to situational needs. As this study used manually generated feedback based on ground-truth data, the findings require further investigation under real-world AI performance in automated vehicles.
ACM CHI Conference on Human Factors in Computing Systems