Comparables XAI: Faithful Example-based AI Explanations with Counterfactual Trace Adjustments
Description

Explaining with examples is an intuitive way to justify AI decisions. However, it is difficult to understand how a decision value should change relative to the examples when many features differ by large amounts. We draw from real estate valuation, which uses Comparables (examples with known values for comparison). Estimates are made more accurate by hypothetically adjusting the attributes of each Comparable and correspondingly changing its value based on adjustment factors. We propose Comparables XAI for relatable example-based explanations of AI, with Trace adjustments that trace counterfactual changes from each Comparable to the Subject, one attribute at a time, monotonically along the AI feature space. In modelling and user studies, Trace-adjusted Comparables achieved the highest XAI faithfulness and precision, the highest user accuracy, and the narrowest uncertainty bounds compared to linear regression, linearly adjusted Comparables, and unadjusted Comparables. This work contributes a new analytical basis for using example-based explanations to improve user understanding of AI decisions.
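The one-attribute-at-a-time idea can be illustrated with a minimal sketch (a hypothetical illustration, not the paper's implementation; `trace_adjust`, `toy_model`, and the attribute names are assumptions): walk from a Comparable toward the Subject, changing one attribute per step and querying the model after each change, so the total adjustment decomposes into per-attribute value deltas.

```python
# Hypothetical sketch of a Trace adjustment: adjust one attribute at a
# time from the Comparable toward the Subject, querying the model after
# each step; the per-step value deltas explain the total adjustment.
def trace_adjust(model, comparable, subject, attributes):
    current = dict(comparable)
    deltas = {}
    prev_value = model(current)
    for attr in attributes:            # counterfactually adjust one attribute
        current[attr] = subject[attr]  # set it to the Subject's value
        value = model(current)
        deltas[attr] = value - prev_value
        prev_value = value
    return prev_value, deltas          # final estimate equals model(subject)

# Toy stand-in for the AI valuation model (illustrative only).
def toy_model(x):
    return 100 * x["sqm"] + 50 * x["rooms"]
```

With `toy_model`, tracing from a 50 m², 2-room Comparable to a 60 m², 3-room Subject attributes the adjustment to the area change and the room change separately; the ordering of `attributes` determines the path through the feature space.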

PleaSQLarify: Visual Pragmatic Repair for Natural Language Database Querying
Description

Natural language database interfaces broaden data access, yet they remain brittle under input ambiguity. Standard approaches often collapse uncertainty into a single query, offering little support for mismatches between user intent and system interpretation. We reframe this challenge through pragmatic inference: while users economize expressions, systems operate on priors over the action space that may not align with the users'. In this view, pragmatic repair, incremental clarification through minimal interaction, is a natural strategy for resolving underspecification. We present PleaSQLarify, which operationalizes pragmatic repair by structuring interaction around interpretable decision variables that enable efficient clarification. A visual interface complements this by surfacing the action space for exploration, requesting user disambiguation, and making belief updates traceable across turns. In a study with twelve participants, PleaSQLarify helped users recognize alternative interpretations and efficiently resolve ambiguity. Our findings highlight pragmatic repair as a design principle that fosters effective user control in natural language interfaces.
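A clarification loop over decision variables can be sketched roughly as follows (a hypothetical illustration; the function names, the candidate representation, and the minimax pruning heuristic are assumptions, not PleaSQLarify's actual algorithm): keep a set of candidate query interpretations, ask about the decision variable whose worst-case answer prunes the most candidates, then filter on the user's answer.

```python
# Hypothetical sketch of pragmatic repair: candidate interpretations are
# dicts of interpretable decision variables; each clarification question
# targets one variable, and each answer prunes inconsistent candidates.
from collections import Counter

def best_question(candidates, variables):
    # Minimax heuristic: pick the variable whose largest answer group is
    # smallest, guaranteeing the most pruning whatever the user replies.
    return min(variables,
               key=lambda v: max(Counter(c[v] for c in candidates).values()))

def prune(candidates, variable, answer):
    # Belief update: drop interpretations inconsistent with the answer.
    return [c for c in candidates if c[variable] == answer]
```

In this toy framing, each turn asks one question and shrinks the candidate set, which mirrors the abstract's "incremental clarification through minimal interaction" and makes each belief update traceable.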

Improving User Interface Generation Models from Designer Feedback
Description

Despite being trained on vast amounts of data, most LLMs are unable to reliably generate well-designed UIs. Designer feedback is essential to improving performance on UI generation; however, we find that existing RLHF methods based on ratings or rankings are not well-aligned with designers' workflows and ignore the rich rationale used to critique and improve UI designs. In this paper, we investigate several approaches for designers to give feedback to UI generation models, using familiar interactions such as commenting, sketching, and direct manipulation. We first perform an evaluation with 21 designers where they gave feedback using these interactions, which resulted in ~1500 design annotations. We then use this data to finetune a series of LLMs to generate higher quality UIs. Finally, we evaluate these models with human judges, and we find that our designer-aligned approaches outperform models trained with traditional ranking feedback and all tested baselines, including GPT-5.

Editable XAI: Toward Bidirectional Human-AI Alignment with Co-Editable Explanations of Interpretable Attributes
Description

While Explainable AI (XAI) helps users understand AI decisions, misalignment in domain knowledge can lead to disagreement. This inconsistency hinders understanding, and because explanations are often read-only, users lack the control to improve alignment. We propose making XAI editable, allowing users to write rules to improve control and gain deeper understanding through the generation effect of active learning. We developed CoExplain, leveraging a neural network for universal representation and symbolic rules for intuitive reasoning on interpretable attributes. CoExplain explains the neural network with a faithful proxy decision tree, parses user-written rules as an equivalent neural network graph, and collaboratively optimizes the decision tree. In a user study (N=43), CoExplain and manually editable XAI improved user understanding and model alignment compared to read-only XAI. CoExplain was easier to use, requiring fewer edits and less time. This work contributes Editable XAI for bidirectional AI alignment, improving both understanding and control.

“Tell Me Why You’re Asking”: Exploring How to Increase Engagement in Preference Feedback for Intelligent Notification Systems
Description

Understanding how people are willing to express notification preferences is essential for designing personalized intelligent notification systems. Yet little is known about when, how, and under what conditions individuals choose to provide such input. We conducted semi-structured interviews with 33 participants, using design probes to examine the timing, methods, and concerns surrounding preference expression. Our findings make three contributions. First, we show that willingness to provide feedback depends not only on input ease and function but also on the justifiability of the moment, with requests embedded into notification-handling routines perceived as most natural. Second, we find that sustained engagement requires two forms of clarity: clarity in how to express one's preferences and clarity in how the system interprets and applies that input. Third, we reveal expectations for notification systems to act as evolving partners that distinguish temporary and situational shifts from longer-term preference changes and support mutual learning over time.

Exploring the Role of Interaction Data to Empower End-User Decision-Making in UI Personalization
Description

User interface personalization enhances digital efficiency, usability, and accessibility. However, in user-driven setups, limited support for identifying and evaluating worthwhile opportunities often leads to underuse. We explore a reflexive personalization approach where individuals engage with their digital interaction data to identify meaningful personalization opportunities and benefits. We interviewed 12 participants, using experimental vignettes as design probes to support reflection on different forms of using interaction data to empower decision-making in personalization and the preferred level of system support. We found that people can independently identify personalization opportunities but prefer system support through visual personalization suggestions. Interaction data can shape how users perceive and approach personalization by reinforcing the perceived value of change and data collection, helping them weigh benefits against effort, and increasing the transparency of system suggestions. We discuss opportunities for designing personalization software that raises end-users' agency over interfaces through reflective engagement with their interaction data.

Rethinking User Empowerment in AI Recommender System: Innovating Transparent and Controllable Interfaces
Description

AI-driven recommender systems are often perceived as personalization black boxes, limiting users’ ability to understand how their data shapes content (information asymmetry) or to influence system behavior meaningfully (power asymmetry). This study explores how design can strengthen user agency by integrating transparency with actionable control. We developed a provotype that introduces new interface features for managing data use, discovering varied content, and configuring context-based recommendation modes. Walkthroughs and interviews with 19 participants show how these features help users interpret personalization signals, understand how their actions influence outcomes, address concerns ranging from unwanted inference to narrow feeds (e.g., filter bubbles), and build trust in the system. We also identify strategies for promoting adoption and awareness of agency-enhancing features. Overall, our findings reaffirm users’ desire for active influence over personalization and contribute concrete interface mechanisms, with empirical insights, for designing recommender systems that foreground user autonomy and fairness in AI-driven content delivery.
