Environmental User Experience (UX) data collection is essential for user research, enabling evidence-based design decisions. However, traditional retrospective methods like micro-phenomenological interviews suffer from recall inaccuracies and memory distortions. Concurrent UX data collection methods situated in environmental contexts are promising but remain under-investigated. To examine this potential, we conducted a formative study with 34 participants, identifying design goals such as natural interaction, in-situ annotation, and spatial-temporal coupling. We developed JourneyCapturer, an interactive tool that fulfills these goals by integrating concurrent annotation within an Immersive Virtual Environment (IVE), enabling real-time UX data capture within contextual scenarios. Using a mixed-methods design, we evaluated JourneyCapturer with 20 participants, comparing concurrent IVE annotation with retrospective interviews, revealing how JourneyCapturer improves UX data collection processes and outcomes. Our findings suggest that a consciously proactive, concurrent IVE method with a first-person perspective advances UX research, offering implications for expert collaboration, multi-modal analytics, and IVE-based field studies.
X's Community Notes is a crowdsourced fact-checking system. To improve its scalability, X introduced the ``Request Community Note'' feature, enabling users to solicit fact-checks from contributors on specific posts. Yet, its implications for the system---what gets checked, by whom, and with what quality---remain unclear. Using 98,685 requested posts and their associated notes, we evaluate how requests shape the Community Notes system. We find that requested posts with higher GPT-estimated misleadingness and from authors with greater misinformation exposure are more likely to receive notes. Conversely, requested political posts (vs. non-political) are less likely to receive notes. We also observe partisan asymmetries: posts from Republicans are more likely to receive notes than those from Democrats. Although only 12% of requested posts receive request-fostered notes from top contributors, these notes are rated as more helpful and less polarized than others, partly reflecting top contributors' selective fact-checking of misleading posts. Our findings highlight both the limitations and promise of requests for scaling high-quality community-based fact-checking.
Bio-digital systems that merge microbial life with technology promise new modes of computation, combining biological adaptability with digital precision. Yet realizing this potential symbiotically -- where biological and digital agents co-adapt and co-process -- remains elusive, largely due to the absence of a shared vocabulary bridging biology and computing. Consequently, microbes are often constrained to uni-directional roles, functioning as sensors or actuators rather than as active, computational partners in bio-digital systems. In response, we propose a taxonomy and pathways that articulate and expand the roles of biological and digital entities for synergetic bio-digital computation. Using this taxonomy, we analysed 70 systems across HCI, design, and engineering, identifying how biological mechanisms can be mapped onto computational abstractions. We argue that such mappings enable computationally actionable directions that foster richer and reciprocal relationships in bio-digital systems, supporting regenerative ecologies across time and scale while inspiring new paradigms for computation in HCI.
In digital product organizations, design systems have enabled speed and consistency by structuring design work as the assembly of predefined components. Design is recognized as a creative activity, but assembly work typically is not, and this shift may affect how creativity is realized in the workplace. To find out, we conducted seventeen interviews with executive-level design managers in mid-sized and large companies. The data reveal a tension: leaders depend on designers who can work within system constraints that demand assembly-level consistency, yet when hiring, they value candidates who challenge assumptions, reframe problems, and propose unexpected solutions. Portfolios, however, often show neither, a gap many managers attribute to the rapid-training pipelines of contemporary bootcamps. Managers express concern that the systems enabling efficient production may be narrowing the range of skills they see when hiring, leaving a profession caught between creative ideals and the industrial machinery shaping modern product design.
Visual communication often needs stylistically consistent icons that span concrete and abstract meanings, for use in diverse contexts. We present Iconix, a human-AI co-creative system that organizes icon generation along two axes: semantic richness (what is depicted) and visual complexity (how much detail). Given a user-specified concept, Iconix constructs a semantic scaffold of related analytical perspectives and employs chained, image-conditioned generation to produce stylistically coherent exemplars. Each exemplar is then automatically distilled into a progressive sequence, from detailed and elaborate to abstract and simple. The resulting two-dimensional grid exposes a navigable space, helping designers reason jointly about figurative content and visual abstraction. A within-subjects study (N=32) found that, compared to a baseline workflow, participants produced more creative icon grids, reported lower workload, and explored a coherent range of design variations. We discuss implications for human-machine co-creative approaches that couple semantic scaffolding with progressive simplification to support visual abstraction.
The integration of LLMs into GUI agents promises to revolutionize web browsing automation, yet the practical user experience remains challenging. This paper systematically characterizes user-reported issues with GUI agents by focusing on three dimensions: phenomena, influences, and user-centric mitigation. We adopted a two-phase method combining social media analysis (N=221 posts) and semi-structured interviews (N=21). Our findings reveal a taxonomy of complaints unique to GUI agents, including deficits in grounding abstract intent into concrete interface affordances, the inability to adapt to dynamic visual states, and the execution of erroneous actions. These lead to influences distinct from text-based hallucinations, ranging from task abandonment to security risks like uncontrolled file system access. In response, users are forced to employ ad-hoc mitigation strategies, including ecological sandboxing and cursor shadowing, to correct GUI agent behaviors. We contribute: (1) a comprehensive characterization of complaints specific to GUI agent interaction, (2) an analysis of how these phenomena degrade interaction integrity, and (3) design implications for creating consequence-aware agents.