Investigating Positive and Negative Qualities of Human-in-the-Loop Optimization for Designing Interaction Techniques
Description

Designers reportedly struggle with design optimization tasks where they are asked to find a combination of design parameters that maximizes a given set of objectives. In HCI, design optimization problems are often exceedingly complex, involving multiple objectives and expensive empirical evaluations. Model-based computational design algorithms assist designers by generating design examples during design; however, they assume a model of the interaction domain. Black box methods for assistance, on the other hand, can work with any design problem. However, virtually all empirical studies of this human-in-the-loop approach have been carried out by either researchers or end-users. It remains an open question whether such methods can help designers in realistic tasks. In this paper, we study Bayesian optimization as an algorithmic method to guide the design optimization process. It operates by proposing to a designer which design candidate to try next, given previous observations. We report observations from a comparative study with 40 novice designers who were tasked with optimizing a complex 3D touch interaction technique. The optimizer helped designers explore larger proportions of the design space and arrive at a better solution; however, they reported lower agency and expressiveness. Designers guided by an optimizer reported lower mental effort but also felt less creative and less in charge of the progress. We conclude that human-in-the-loop optimization can support novice designers in cases where agency is not critical.
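
As a concrete illustration of the ask/tell loop described above, here is a minimal Python sketch of designer-guided Bayesian optimization. The use of scikit-optimize (skopt), the two design parameters, and the simulated designer evaluation are assumptions for illustration only, not the authors' implementation.

# Minimal human-in-the-loop Bayesian optimization sketch (assumed setup, not the paper's code).
from skopt import Optimizer

# Hypothetical design space for a 3D touch technique: two continuous parameters.
design_space = [(0.0, 1.0),   # e.g. transfer-function gain
                (0.0, 1.0)]   # e.g. selection-volume size

opt = Optimizer(design_space, base_estimator="GP", acq_func="EI")

def evaluate_with_designer(params):
    # Stand-in for the human-in-the-loop step: in the study, the designer would
    # try the proposed candidate and a quality score would be observed.
    gain, size = params
    return 1.0 - ((gain - 0.6) ** 2 + (size - 0.3) ** 2)   # toy score, higher is better

for trial in range(20):
    candidate = opt.ask()                       # optimizer proposes the next design to try
    quality = evaluate_with_designer(candidate)
    opt.tell(candidate, -quality)               # skopt minimizes, so negate the score

best_design = opt.get_result().x                # best design parameters found so far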

For What It's Worth: Humans Overwrite Their Economic Self-Interest to Avoid Bargaining With AI Systems
Description

As algorithms are increasingly augmenting and substituting human decision-making, understanding how the introduction of computational agents changes the fundamentals of human behavior becomes vital. This pertains not only to users, but also to those parties who face the consequences of an algorithmic decision. In a controlled experiment with 480 participants, we exploit an extended version of two-player ultimatum bargaining where responders choose to bargain with either another human, another human with an AI decision aid, or an autonomous AI system acting on behalf of a passive human proposer. Our results show strong responder preferences against the algorithm, as most responders opt for a human opponent and demand higher compensation to reach a contract with autonomous agents. To map these preferences to economic expectations, we elicit incentivized subject beliefs about their opponent’s behavior. The majority of responders maximize their expected value when this is in line with approaching the human proposer. In contrast, responders predicting income maximization for the autonomous AI system overwhelmingly override economic self-interest to avoid the algorithm.
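
A toy worked example of the expected-value comparison the responders face; the pie size, believed offers, and acceptance thresholds below are invented for illustration and are not the study's data.

# Hypothetical beliefs about what each opponent type will offer out of a 10-unit pie.
believed_offers = {
    "human":         4.0,
    "human_with_ai": 4.5,
    "autonomous_ai": 5.0,
}
# Hypothetical minimum offers the responder would accept from each opponent type.
min_acceptable = {
    "human":         3.0,
    "human_with_ai": 3.5,
    "autonomous_ai": 5.5,   # higher demanded compensation against the autonomous agent
}

for opponent, offer in believed_offers.items():
    expected_payoff = offer if offer >= min_acceptable[opponent] else 0.0   # a rejected offer pays nothing
    print(opponent, expected_payoff)

# A responder who avoids the autonomous AI even when it is believed to offer the
# most is overriding economic self-interest, which is the pattern reported above.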

You Complete Me: Human-AI Teams and Complementary Expertise
Description

People consider recommendations from AI systems in diverse domains ranging from recognizing tumors in medical images to deciding which shoes look cute with an outfit. Implicit in the decision process is the perceived expertise of the AI system. In this paper, we investigate how people trust and rely on an AI assistant that performs with different levels of expertise relative to the person, ranging from completely overlapping expertise to perfectly complementary expertise. Through a series of controlled online lab studies in which participants identified objects with the help of an AI assistant, we demonstrate that participants were able to perceive when the assistant was an expert or non-expert within the same task and calibrate their reliance on the AI to improve team performance. We also demonstrate that communicating expertise through the linguistic properties of the explanation text was effective: embracing language increased reliance on the AI, whereas distancing language reduced it.
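
A toy calculation, with numbers invented for illustration rather than taken from the paper, of why reliance that is calibrated to complementary expertise can outperform uniform reliance on the AI.

# Hypothetical per-category accuracy of the person and the AI assistant.
human_acc = {"birds": 0.90, "dogs": 0.60}
ai_acc    = {"birds": 0.60, "dogs": 0.90}

# Always deferring to the AI yields only its average accuracy.
uniform_reliance = sum(ai_acc.values()) / len(ai_acc)                           # 0.75

# Deferring only where the AI is the expert (calibrated reliance) does better.
calibrated = sum(max(human_acc[c], ai_acc[c]) for c in ai_acc) / len(ai_acc)    # 0.90

print(uniform_reliance, calibrated)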

Jury Learning: Integrating Dissenting Voices into Machine Learning Models
Description

Whose labels should a machine learning (ML) algorithm learn to emulate? For ML tasks ranging from online comment toxicity to misinformation detection to medical diagnosis, different groups in society may have irreconcilable disagreements about ground truth labels. Supervised ML today resolves these label disagreements implicitly using majority vote, which overrides minority groups’ labels. We introduce jury learning, a supervised ML approach that resolves these disagreements explicitly through the metaphor of a jury: defining which people or groups, in what proportion, determine the classifier’s prediction. For example, a jury learning model for online toxicity might centrally feature women and Black jurors, who are commonly targets of online harassment. To enable jury learning, we contribute a deep learning architecture that models every annotator in a dataset, samples from annotators’ models to populate the jury, then runs inference to classify. Our architecture enables juries that dynamically adapt their composition, explore counterfactuals, and visualize dissent. A field evaluation finds that practitioners construct diverse juries that alter 14% of classification outcomes.
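
A schematic Python sketch of the jury inference step described above; the group pools, jury composition, and the stubbed per-annotator predictor are placeholders for illustration, not the authors' deep learning architecture.

import random
from collections import Counter

# Hypothetical annotator pools per group (in the real system, every annotator
# in the dataset is modeled by the learned architecture).
annotators_by_group = {
    "women": ["a01", "a02", "a03", "a04"],
    "Black": ["a05", "a06", "a07"],
    "all":   ["a08", "a09", "a10", "a11", "a12"],
}
jury_spec = {"women": 6, "Black": 4, "all": 2}   # practitioner-chosen jury composition

def predict_toxicity(annotator_id, text):
    # Stand-in for the per-annotator model's predicted label (0 = not toxic, 1 = toxic).
    return random.choice([0, 1])

def jury_verdict(text, seed=0):
    random.seed(seed)
    jurors = [a for group, k in jury_spec.items()
                for a in random.choices(annotators_by_group[group], k=k)]
    votes = Counter(predict_toxicity(a, text) for a in jurors)
    return votes.most_common(1)[0][0], votes     # majority label, plus the dissenting votes

label, votes = jury_verdict("example comment")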

What is Your Current Mindset? Categories for a satisficing exploration of mobile point-of-interest recommendations
Description

Is recommendation the new search? Recommender systems have shortened the search for information in everyday activities such as following the news, media, and shopping. In this paper, we address the challenges of capturing the situational needs of the user and linking them to the available datasets with the concept of Mindsets. Mindsets are categories such as “I'm hungry” and “Surprise me”, designed to lead users to explicitly state their intent, control the recommended content, save time, get inspired, and gain shortcuts for a satisficing exploration of POI recommendations. In our methodology, we first compiled Mindsets with a card sorting workshop and a formative evaluation. Using the insights gathered from potential end users, we then quantified Mindsets by linking them to POI utility measures using approximated lexicographic multi-objective optimisation. Finally, we ran a summative evaluation of Mindsets and derived guidelines for designing novel categories for recommender systems.
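
A minimal sketch of how a Mindset could be turned into a priority-ordered, approximately lexicographic ranking of POIs. The objectives, the tolerance-based tie handling, and the example POIs are assumptions for illustration; the paper's exact formulation may differ.

# Made-up POIs with per-objective scores.
pois = [
    {"name": "Ramen-ya",  "food_quality": 0.9, "distance_km": 1.2, "price_level": 2},
    {"name": "Bistro 21", "food_quality": 0.9, "distance_km": 0.4, "price_level": 3},
    {"name": "Cafe Sol",  "food_quality": 0.7, "distance_km": 0.2, "price_level": 1},
]

def hungry_key(poi, tol=0.05):
    # Priority order for an "I'm hungry" Mindset: quality first, then proximity, then price.
    # Rounding to a tolerance makes near-ties fall through to the next objective,
    # one common way to approximate a strict lexicographic ordering.
    return (-round(poi["food_quality"] / tol) * tol,
            round(poi["distance_km"] / tol) * tol,
            poi["price_level"])

ranked = sorted(pois, key=hungry_key)   # Bistro 21 ranks first: tied on quality, but closer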
