Exploring Users’ Perspectives on a Solid-Enabled Personal Data Store Enhanced Streaming Service
Description

This study explores users’ perceptions of integrating a personal data store to enhance personalized recommendations within a streaming service. Using a research-through-design approach and guided by Human Data Interaction principles (legibility, agency, and negotiability), we developed an enhanced streaming service prototype. This prototype was evaluated by experts (n=5), refined, and then used in two focus groups (n=19) to gauge participants’ reactions to the personal data store integration and their willingness to share different data types for enhanced personalized streaming recommendations. The focus groups revealed mixed reactions to the personal data store, with users weighing curiosity against concerns. However, many of the implemented data transparency and control features helped to mitigate these doubts. By linking our findings to existing literature, we developed a set of design recommendations to help businesses and guide future research in building personal data store applications, further advancing the field of Human Data Interaction.
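For context, a Solid personal data store ("pod") is essentially an HTTP server exposing RDF resources, so a streaming service granted access could read user data with ordinary web requests. The snippet below is a minimal sketch of reading a public pod resource with Python's rdflib; the pod URL, resource path, and vocabulary are illustrative assumptions rather than details from the paper, and a real integration would use an authenticated Solid client honoring the user's access grants.

```python
# Minimal sketch: reading a (public) resource from a Solid pod as RDF.
# Assumptions (not from the paper): the pod URL, the resource path, and
# the vocabulary used for "listening history" are all hypothetical.
from rdflib import Graph, Namespace

POD_RESOURCE = "https://alice.solidcommunity.net/public/listening-history.ttl"  # hypothetical
SCHEMA = Namespace("https://schema.org/")

graph = Graph()
graph.parse(POD_RESOURCE, format="turtle")  # Solid resources are plain HTTP + RDF

# Enumerate items the user has chosen to expose to the recommender.
for item in graph.subjects(predicate=SCHEMA.name):
    print(item, graph.value(item, SCHEMA.name))
```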

Mind the Kayak! Informing UX Design of Autonomous Vehicles through Edge Case Testing in the Field
Description

As autonomous vehicles are being deployed in the field for public use, passengers are interacting with traffic in new ways. In recent years, user experience related to risky traffic interactions has been studied using virtual simulations, desktop studies, and surveys, yet field tests have remained out of reach. In this paper, we present results from a field test of an autonomous urban passenger ferry open to public use. Specifically, we investigate two questions: (i) are passengers' safety perceptions negatively affected by interactions with risky traffic? and (ii) can simulating risky behavior in the field (so-called "adversarial evaluation") present a viable way to study user experience? After repeatedly sending a kayaker on a collision course with the ferry (N = 20 interventions), we sampled naïve passengers about their experiences (intervention group; N = 37) and compared their responses to those of passengers who experienced a normal crossing (control group; N = 178). The results favored the intervention group, which scored higher in safety perception. However, intervention-group passengers also reported a need for more feedback about the ferry's current state and future intentions to avoid surprises, both for passengers and for other traffic. As autonomous vehicles are field-tested and deployed, the study reflects a growing need to test user experience in the operational environment. We discuss implications for design, emphasizing the use of external human-machine interfaces (eHMIs) and special considerations for the maritime domain.

Generative AI and Perceptual Harms: Who’s Suspected of using LLMs?
Description

Large language models (LLMs) are increasingly integrated into a variety of writing tasks. While these tools can help people by generating ideas or producing higher quality work, like many other AI tools they may risk causing a variety of harms, potentially disproportionately burdening historically marginalized groups. In this work, we introduce and evaluate perceptual harms, a term for the harms caused to users when others perceive or suspect them of using AI. We examined perceptual harms in three online experiments, each of which entailed participants evaluating write-ups from mock freelance writers. We asked participants to state whether they suspected the freelancers of using AI, to rank the quality of their writing, and to evaluate whether they should be hired. We found some support for perceptual harms against certain demographic groups. At the same time, perceptions of AI use negatively impacted writing evaluations and hiring outcomes across the board.

"Create a Fear of Missing Out" – ChatGPT Implements Unsolicited Deceptive Designs in Generated Websites Without Warning
Description

With the recent advancements in Large Language Models (LLMs), web developers increasingly apply their code-generation capabilities to website design. However, since these models are trained on existing designerly knowledge, they may inadvertently replicate bad or even illegal practices, especially deceptive designs (DD). This paper examines whether users can accidentally create DD for a fictitious webshop using GPT-4. We recruited 20 participants, asking them to use ChatGPT to generate functionalities (product overview or checkout) and then modify these using neutral prompts to meet a business goal (e.g., "increase the likelihood of us selling our product"). We found that all 20 generated websites contained at least one DD pattern (mean: 5, max: 9), with GPT-4 providing no warnings. When reflecting on the designs, only 4 participants expressed concerns, while most considered the outcomes satisfactory and not morally problematic, despite the potential ethical and legal implications for end-users and those adopting ChatGPT's recommendations.

A Cross-Country Analysis of GDPR Cookie Banners and Flexible Methods for Scraping Them
Description

Online tracking remains problematic, with compliance and ethical issues persisting despite regulatory efforts. Consent interfaces, the visible manifestation of this industry, have seen significant attention over the years. We present robust automated methods to study the presence, design, and third-party suppliers of consent interfaces at scale, along with the web service consent-observatory.eu for applying them. We examine the top 10,000 websites across 31 countries under the ePrivacy Directive and GDPR (n=254,148). Our findings show that 67% of websites use consent interfaces, but only 15% are minimally compliant, mostly because they lack a reject option. Consent management platforms (CMPs) are powerful intermediaries in this space: 67% of interfaces are provided by CMPs, and three organisations hold 37% of the market. There is little evidence that regulators' guidance and fines have impacted compliance rates, but 18% of the variance in compliance is explained by CMPs. Researchers should take an infrastructural perspective on online tracking and study the factual control of intermediaries to identify effective leverage points.
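The paper's pipeline is not reproduced here, but the core idea of detecting consent interfaces and their CMP suppliers at scale can be approximated in a few lines. The sketch below assumes Playwright for Python is installed (`pip install playwright`, then `playwright install chromium`) and checks a page for markers of well-known CMPs; the marker list is an illustrative assumption, not the authors' detection rules.

```python
# Rough sketch of CMP detection, loosely in the spirit of the paper's
# automated methods. The CMP markers below are illustrative assumptions,
# not the authors' actual detection rules.
from playwright.sync_api import sync_playwright

# Hypothetical marker -> supplier mapping (CSS selectors of common CMPs).
CMP_MARKERS = {
    "#onetrust-banner-sdk": "OneTrust",
    "#CybotCookiebotDialog": "Cookiebot",
    '[id^="sp_message_container"]': "Sourcepoint",
}

def detect_cmp(url: str) -> str | None:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url, wait_until="domcontentloaded")
        page.wait_for_timeout(3000)  # give the consent script time to inject
        supplier_found = None
        for selector, supplier in CMP_MARKERS.items():
            if page.locator(selector).count() > 0:
                supplier_found = supplier
                break
        browser.close()
        return supplier_found

if __name__ == "__main__":
    print(detect_cmp("https://example.com"))
```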

You Shall Not Pass: Warning Drivers of Unsafe Overtaking Maneuvers on Country Roads by Predicting Safe Sight Distance
Description

Overtaking on country roads with possible oncoming traffic is a dangerous maneuver, and many proposed assistance systems assume car-to-car communication and sensors currently unavailable in cars. To overcome this limitation, we develop an assistant that uses simple in-car sensors to predict the sight distance required for safe overtaking. Our models predict this from vehicle speeds, accelerations, and 3D map data. In a user study with a Virtual Reality driving simulator (N=25), we compare two UI variants (monitoring-focused vs. scheduling-focused). The results reveal that both UIs enable more patient driving and thus increase overall driving safety. While the monitoring-focused UI achieves a higher System Usability Scale score and distracts drivers less, which UI drivers favored came down to personal preference. The driving data shows that the predictions were at times inaccurate. We investigate and discuss this by comparing our models to actual driving behavior, and we identify crucial model parameters and assumptions that significantly improve the predictions.
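To make the prediction idea concrete, here is a back-of-the-envelope version of required sight distance under simple constant-speed kinematics; all parameters and the formula are illustrative assumptions, not the paper's models, which additionally use accelerations and 3D map data.

```python
# Back-of-the-envelope sight-distance estimate for an overtaking maneuver,
# assuming constant speeds throughout. All values here are illustrative
# assumptions; the paper's models also use accelerations and 3D map data.

def required_sight_distance(
    v_ego: float,          # overtaking car speed [m/s]
    v_lead: float,         # overtaken car speed [m/s]
    v_oncoming: float,     # assumed oncoming-traffic speed [m/s]
    gap: float = 30.0,     # safety gap before and after the maneuver [m]
    car_len: float = 5.0,  # approximate vehicle length [m]
) -> float:
    if v_ego <= v_lead:
        raise ValueError("Ego must be faster than the lead vehicle to overtake.")
    # Distance to gain relative to the lead car: leading gap + both car
    # lengths + trailing gap, covered at the relative speed.
    relative_distance = 2 * gap + 2 * car_len
    t_overtake = relative_distance / (v_ego - v_lead)
    # While overtaking, ego and oncoming traffic close at v_ego + v_oncoming,
    # so that closing distance must be visible when the maneuver starts.
    return (v_ego + v_oncoming) * t_overtake

# Example: 100 km/h ego, 70 km/h lead, 100 km/h oncoming -> roughly 470 m.
kmh = 1 / 3.6
print(f"{required_sight_distance(100 * kmh, 70 * kmh, 100 * kmh):.0f} m")
```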

AEGIS: Human Attention-based Explainable Guidance for Intelligent Vehicle Systems
Description

Improving decision-making capabilities in Autonomous Intelligent Vehicles (AIVs) has been an active research topic in recent years. Despite advancements, training machines to capture regions of interest for comprehensive scene understanding, as human perception and reasoning do, remains a significant challenge. This study introduces a novel framework, Human Attention-based Explainable Guidance for Intelligent Vehicle Systems (AEGIS).

AEGIS uses a pre-trained human attention model, learned from eye-tracking data, to guide reinforcement learning (RL) models in identifying critical regions of interest for decision-making. By collecting 1.2 million frames from 20 participants across six scenarios, AEGIS pre-trains a model to predict human attention patterns. The learned human attention guides the RL agent's focus toward task-relevant objects, prioritizes critical instances, enhances robustness in unseen environments, and leads to faster learning convergence. This approach improves interpretability by making machine attention more comparable to human attention, and it enhances the RL agent's performance in diverse driving scenarios. The code is available at https://github.com/ALEX95GOGO/AEGIS.
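The guidance mechanism can be illustrated with a small auxiliary loss that pulls the agent's spatial attention toward the output of a frozen human-attention predictor. The PyTorch sketch below shows the general idea only; the KL-based alignment term, tensor shapes, and weighting are our assumptions, not the AEGIS architecture.

```python
# Minimal sketch of attention-guided RL: an auxiliary loss aligning the
# agent's spatial attention with a frozen human-attention predictor.
# This illustrates the general idea only; it is not the AEGIS architecture.
import torch
import torch.nn.functional as F

def attention_alignment_loss(
    agent_attn_logits: torch.Tensor,  # (B, H*W) agent's attention logits
    human_attn_map: torch.Tensor,     # (B, H*W) predicted human attention, >= 0
) -> torch.Tensor:
    """KL(human || agent) over spatial locations (our assumed alignment term)."""
    human = human_attn_map / human_attn_map.sum(dim=1, keepdim=True).clamp_min(1e-8)
    agent_log = F.log_softmax(agent_attn_logits, dim=1)
    return F.kl_div(agent_log, human, reduction="batchmean")

# Training-step sketch: add the term to any policy-gradient objective, e.g.
# total_loss = rl_loss + lambda_attn * attention_alignment_loss(logits, human_map)
```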
