Traffic is inherently dangerous, with around 1.19 million fatalities annually. Automotive Mediated Reality (AMR) can enhance driving safety by overlaying critical information (e.g., outlines, icons, text) on key objects to improve awareness, altering objects' appearance to simplify traffic situations, and diminishing their appearance to minimize distractions. However, real-world AMR evaluation remains limited due to technical challenges. To bridge this sim-to-real gap, we present MIRAGE, an open-source tool that enables real-time AMR in real vehicles. MIRAGE implements 15 effects across the AMR spectrum of augmented, diminished, and modified reality using state-of-the-art computational models for object detection and segmentation, depth estimation, and inpainting. In an on-road expert user study (N=9) of MIRAGE, participants enjoyed the AMR experience while pointing out technical limitations and identifying use cases for AMR. We discuss these results in relation to prior work and outline implications for AMR ethics and interaction design.
As autonomous vehicles enter public spaces, external human–machine interfaces have been proposed to support communication with external road users. A decade of research has produced hundreds of studies and reviews, yet it remains unclear whether the field is converging on shared principles or diverging across approaches. We present a multi-dimensional analysis of 620 publications, complemented by industry deployments and regulatory documents, to track research evolution and identify convergence. The analysis reveals several field-level patterns. First, convergence on a safety-first core: simple visual cues that clarify intent. Second, sustained divergence over whether such interfaces are necessary and how they should be implemented. Third, a progressive filtering funnel: broad exploration in research and concepts narrows in deployment and is codified by regulation into a minimal set of permitted signals. These insights point to a shift in emphasis for future work, from producing new prototypes toward consolidating evidence, clarifying points of contention, and developing frameworks that can adapt across contexts.
Although in-car touchscreens expand interaction possibilities, they risk compromising driver safety and vigilance. We propose a data- and expert-informed framework for designing adaptive touchscreens that respond to a driver’s usage profile and cognitive state, maximizing usability while mitigating safety risks. First, in a driving simulator study, we find that cognitive load slows touchscreen button selections by 20\% and produces shorter, more frequent off-road glances. We also find that enlarging buttons speeds selections by 0.3 seconds, but at the cost of requiring more display pages. Next, these findings informed a co-design session with expert in-cabin designers, generating guidelines for adaptive interfaces that balance usability and safety. These guidelines form the basis of our Profile-State Adaptive (PSA) framework, which integrates driver profiles with cognitive states to guide interface adaptations. We then extend the framework to include a quantitative Time-Cost model as well as design patterns for adaptive layouts across usage profiles and cognitive demands.
In this paper, we explore a novel approach that leverages retrofitting to create sensor-powered smart car cabins. We propose that retrofitting offers a promising way to complement and extend the capabilities of built-in smart cabin sensors provided by car manufacturers. To understand how retrofitting solutions should be designed, we conducted a two-phase study. First, through semi-structured interviews with 18 participants, we examined challenges with built-in smart cabin sensors and identified opportunities where retrofitting could address these limitations. Second, through probe-based participatory design sessions with 15 participants, we identified user requirements and expectations for effective retrofit solutions. Based on our findings, we present a set of design recommendations to guide the future development of retrofit methods for smart car cabins.
Despite widespread use, charts remain largely inaccessible to Low-Vision Individuals (LVI). Reading charts requires viewing data points within a global context, which is difficult for LVI who may rely on magnification or have a partial field of vision. We aim to improve exploration by providing visual access to critical context. To inform this, we conducted a formative study with five LVI. We identified four fundamental contextual elements common across chart types: axes, legend, grid lines, and the overview. We propose two pointer-based interaction methods to provide this context: Dynamic Context, a novel focus+context interaction, and Mini-map, which adapts overview+detail principles for LVI. In a study with N=22 LVI, we compared both methods and evaluated their integration with current tools. Our results show that Dynamic Context had a significant positive impact on access, usability, and effort reduction, but worsened visual load. Mini-map strengthened spatial understanding but was less preferred for this task. We offer design insights to guide the development of future systems that support LVI with visual context while balancing visual load.
Agentic AI assistants that autonomously perform multi-step tasks raise open questions for user experience: how should such systems communicate progress and reasoning during extended operations, especially in attention-critical contexts such as driving? We investigate feedback timing and verbosity in agentic LLM-based in-car assistants through a controlled, mixed-methods study (N=45) comparing feedback on planned steps and intermediate results against silent operation with a final-only response. Using a dual-task paradigm with an in-car voice assistant, we found that intermediate feedback significantly improved perceived speed, trust, and user experience while reducing task load, with these effects holding across varying task complexities and interaction contexts. Interviews further revealed user preferences for an adaptive approach: high initial transparency to establish trust, followed by progressively reduced verbosity as systems prove reliable, with adjustments based on task stakes and situational context. These findings inform design principles for feedback systems in agentic AI assistants, balancing transparency and efficiency across domains.
People are increasingly leveraging generative AI (GenAI) for design tasks, making it critical to understand GenAI's impact on design outcomes and users' creative capabilities. We conducted a within-subjects experiment in which 36 participants designed advertisements both with and without GenAI. Evaluations from clients and online volunteers revealed that GenAI-supported designs were perceived as significantly more creative and unconventional. Additionally, online volunteers, but not clients, rated these designs as more visually appealing. However, neither group perceived differences in usefulness, and clients noted no improvement in brand alignment, highlighting a notable decoupling of novelty and usefulness (two established components of creativity) in GenAI-supported design outputs. Although short-term GenAI use did not broadly influence participants' creative thinking or experience, subgroup analyses indicated increases in divergent thinking among participants new to GenAI relative to those with GenAI experience. We discuss the implications of this decoupling effect and GenAI's influence on human creativity.