By leveraging quantum-mechanical properties like superposition, entanglement, and interference, quantum computing (QC) offers promising solutions for problems that classical computing has not been able to solve efficiently, such as drug discovery, cryptography, and physical simulation. Unfortunately, adopting QC remains difficult for potential users like QC beginners and application-specific domain experts, due to limited theoretical and practical knowledge, the lack of integrated interface support, and poor documentation. For example, to use quantum computers, one has to convert conceptual logic into low-level code, analyze quantum program results, and share programs and results. To support the wider adoption of QC, we, as designers and QC experts, propose interaction techniques for QC through design iterations. These techniques include writing quantum code conceptually, comparing initial quantum programs with optimized programs, sharing quantum program results, and exploring quantum machines. We demonstrate the feasibility and utility of these techniques via use cases with high-fidelity prototypes.
https://dl.acm.org/doi/10.1145/3706598.3713370
Waiting for system loading is a common scenario that often diminishes user experience, leading to dissatisfaction. Well-established visual indicators like progress bars cannot be directly applied to interactions with voice assistants (VAs) like Siri. As VAs continue to rise in popularity, this research aims to explore the design of auditory indicators, particularly human speech, for optimizing waiting experiences in Voice User Interfaces (VUIs). We first organized focus groups (N=35) to identify design considerations for speech indicators, uncovering design opportunities in integrating explanations and humor. Subsequently, we conducted an empirical study (N=30) to evaluate the effects of speech indicators with two levels of explanation and humor on the waiting experience, measured by attention, perceived time, pleasure, and overall satisfaction, during both short and long loading durations. Our findings suggest significant potential for incorporating explanations and humor into VUIs, offering actionable insights for designing effective speech indicators that improve waiting experiences.
https://dl.acm.org/doi/10.1145/3706598.3713090
Can J.J. Gibson's concept of affordances be empirically examined using screen-based technology? We show how screen-based affordances can be examined through the use case of perceptual toughness, i.e., the break-ability of a virtual object. We present two user experiments (n=72, n=66) examining break-ability through a novel 'Perceptual Impact Testing' methodology and an online screen-based 3D virtual environment. We show that judgements of break-ability are systematically distorted when a perceiver's virtual 'Point of Observation' or the virtual environment's 'Horizonal Geometry' is manipulated. These statistically significant results provide evidence that: 1) direct perception can account for perceptual distortions of break-ability; 2) Gibsonian affordances can be empirically examined through screen-based interactions.
Authors of typeset formulas augment those formulas to make them easier to understand. When they do so, they trade off between using markup tools like LaTeX and formula-unaware graphical editors. In this paper, we explore how editing tools could combine the best affordances of both kinds of tools. We develop FreeForm, a projectional editor wherein authors can augment formulas (with color, labels, spacing, and more) across multiple synchronized representations. Augmentations are created graphically using direct selections and compact menus. Those augmentations propagate to LaTeX markup, which can itself be edited and easily exported. In two lab studies, we observe the value of our editor against two baselines: a widely-used LaTeX document editor and a state-of-the-art formula augmentation tool. Finally, we make recommendations for the design of projectional markup augmentation editors.
https://dl.acm.org/doi/10.1145/3706598.3714288
Inspirational search, the process of exploring designs to inform and inspire new creative work, is pivotal in mobile user interface (UI) design. However, exploring the vast space of UI references remains a challenge. Existing AI-based UI search methods often miss crucial semantics like target users or the mood of apps. Additionally, these models typically require metadata like view hierarchies, limiting their practical use. We used a multimodal large language model (MLLM) to extract and interpret semantics from mobile UI images. We identified key UI semantics through a formative study and developed a semantics-based UI search system. Through computational and human evaluations, we demonstrate that our approach significantly outperforms existing UI retrieval methods, offering UI designers a more enriched and contextually relevant search experience. We enhance the understanding of mobile UI design semantics and highlight MLLMs' potential in inspirational search, providing a rich dataset of UI semantics for future studies.
https://dl.acm.org/doi/10.1145/3706598.3714213
This study investigates the relationship between the HEXACO personality traits and text entry behaviors in composition and transcription tasks. By analyzing metrics such as entry speed, accuracy, editing efforts, and readability, we identified correlations between specific traits and text entry performance. In composition, honesty-humility and agreeableness were the strongest predictors, correlating significantly with composition time, text length, and editing efforts. In transcription, openness, honesty-humility, and agreeableness influenced performance, though no single trait consistently predicted all metrics. Interestingly, extraversion did not show strong correlations in either task, despite its established link to composition performance in academic contexts. These findings suggest that personality traits affect text entry behavior differently depending on the task, with creative tasks like composition being shaped by distinct traits compared to repetitive tasks like transcription. This research provides valuable insights into the relationship between personality and text entry, opening avenues for personalizing interaction systems based on individual traits.
https://dl.acm.org/doi/10.1145/3706598.3714149
Conversational search offers an easier and faster alternative to conventional web search, while having downsides like a lack of source verification. Research has examined performance disparities between these two systems in various settings. However, little work has investigated how changes in the nature of a search task affect user preferences. We investigate how psychological distance, the perceived closeness of an event to oneself, affects user preferences between conversational and web search. We hypothesise that tasks with different psychological distances elicit different information needs, which in turn affect user preferences between systems. Our study finds that, under fixed condition ordering, greater psychological distances lead users to prefer conversational search, which they perceive as more credible, useful, enjoyable, and easy to use. We reveal qualitative reasons for these differences and provide design implications for search system designers.
https://dl.acm.org/doi/10.1145/3706598.3713770