Contemporary robotic workplaces are increasingly complex, involving multiple robots, machines, and digital services. Their setup and adaptation demand collaboration across domains, yet domain experts with essential process knowledge are typically non-programmers. Prior research in end-user robot programming has simplified individual task specification, but little is known about how multiple users coordinate and maintain shared awareness when programming together. This paper presents an empirical study of a handheld augmented-reality system that enables co-located users to jointly create and edit robot programs in a shared workspace. Through qualitative analysis of five participant pairs, we examine how collaborators coordinate their actions, manage workspace awareness, and negotiate control. The results reveal opportunities and challenges of co-located handheld AR programming and inform the design of collaborative, context-aware interfaces that can better support end-user participation in configuring industrial robotic workplaces.
Syntax remains a major barrier for novices. Although block-based systems reduce or eliminate syntax errors, conditionals still challenge learners, likely because their semantics remain implicit. In this paper, we address this problem by introducing a semantics-first, state-visible programming approach inspired by the classic visual language Stagecast Creator. To demonstrate its usefulness, we designed Elephant, a unified, Karel-like research platform that supports three equally expressive programming paradigms: (i) semantics-first programming, (ii) block-based programming with the Blockly library, and (iii) text-based programming in JavaScript with domain-specific libraries. We then deployed Elephant in two within-subjects studies with secondary-school students (N = 39) to compare semantics-first programming to textual and block-based baselines, keeping the program semantics constant across modes and reducing cross-tool confounds. Results indicate, among other things, that semantics-first programming yields significantly higher task performance, suggesting that making program state more visible during program composition could improve learning outcomes in secondary computing education.
The initial psychiatric interview centers on patients’ chief complaints, symptoms, and functional impairments, forming the basis of diagnostic impressions. In real clinical practice, however, interviews are constrained by limited time and the unpredictability of patient responses, making it difficult to secure essential information efficiently. While prior conversational agents have focused on adapting validated instruments into conversational form or on advancing interview systems in general medical domains, little research has addressed the distinctive challenges of initial psychiatric history-taking from clinicians’ perspectives. We present a flexible psychiatric interviewer that dynamically adapts question flow and prioritizes clinically essential information within time constraints, paired with a clinical dashboard for efficient review. We evaluated the system through 1,440 simulated patient dialogues and follow-up interviews with 19 clinicians. Results show that it captures essential information within a limited time while preserving conversational flexibility and empathy, highlighting design implications for coachable and responsible AI interviewers that align with clinical practice.
With advances in biosensing and artificial intelligence, recursive biocybernetic closed-loop systems are developing rapidly. These media technologies adapt content in real time based on users’ psychophysiological input, with the goal of modulating users’ affective responses. However, existing research primarily focuses on technical design aspects. Building on Science, Technology, and Society (STS), media, and HCI studies, we developed a critically grounded sociotechnical taxonomy of bioadaptive media with three interdependent dimensions: System Objective, Feedback Logic, and User Agency. By analysing these dimensions on artifact, organisational, and socio-political/ontological levels using our reflexive tool, we can interrogate whose perspectives and epistemologies these systems prioritise. We illustrate the application of the taxonomy through a critical evaluation of three speculative case studies: immersive journalism, collaborative VR, and mind-altering applications. Finally, we identify and discuss normalisation of affect; agency, power, and politics; and reflexivity and pluralism in design as core issues.
Automated vehicles (AVs) must communicate their yielding intentions to pedestrians at crossings. External Human-Machine Interfaces (eHMIs, on-vehicle displays) are promising solutions, but have primarily been tested with walking pedestrians. Runners are a significant pedestrian group who move faster and face distinct bodily and perceptual demands, raising questions about how pedestrian activity influences eHMI use. We conducted an outdoor study using an augmented reality simulator. Participants navigated a virtual crossing while walking and running; an approaching AV displayed one of three eHMIs: red/green colour-changing lights, animated cyan lights, or no eHMI. The no-eHMI condition consistently underperformed. Walkers mostly stopped and validated eHMI signals against vehicle behaviour; they processed both eHMI animations and colour changes effectively. Runners experienced greater time pressure to cross, increasing their reliance on the eHMI over vehicle behaviour, and preferred colour changes over animation for rapid decisions. These findings are crucial for promoting eHMI inclusivity and physical wellbeing as AVs join our roads.
Thousands of human remains are stored in museums and collections worldwide. Yet while their collection, handling, and display are increasingly subject to ethical guidelines and regulations, institutional guidelines and research on their digitization are scarce. This study explores the challenges and opportunities of digitizing human remains, outlining new directions. Based on interviews (n=23) with museum professionals across 14 organizations worldwide, we map: 1) a taxonomy of data work with human remains; 2) main areas of uncertainty in data work and the adaptive strategies currently employed to address them; and 3) examples of ill-fitting digital systems. We introduce the concept of ‘data hauntings’ to highlight the historical, technical, and regulatory ghosts lingering in digital systems of contested assets, and the specters of future data circulation. Lastly, we propose a ‘hauntological’ framework for rethinking, (re)designing, and maintaining digital systems to enable equitable data work, balancing historical traceability, ethical accountability, and searchability.
Eco-friendly service options (EFSOs) aim to reduce personal carbon emissions, yet their eco-friendly framing may permit increased consumption, weakening their intended impact. Such rebound effects remain underexamined in HCI, including how common eco-feedback approaches shape them. We investigate this in an online within-subjects experiment (N=75) in a ride-hailing context. Participants completed 10 trials in each of five conditions (No EFSO, EFSO - Minimal, EFSO - CO2 Equivalency, EFSO - Gamified, EFSO - Social), yielding 50 choices between walking and ride-hailing for trips ranging from 0.5 to 2.0 mi (≈0.80 to 3.22 km). We measured how the different EFSO variants affected ride-hailing uptake relative to the No EFSO baseline. EFSOs lacking explicit eco-feedback metrics increased ride-hailing uptake, and qualitative responses indicate that EFSOs can make convenience-driven choices feel more permissible. We conclude with implications for designing EFSOs that take rebound effects into account.