Interface Evolution: Learning, Adaptation, Customisation

Conference Name
UIST 2023
Towards Flexible and Robust User Interface Adaptations With Multiple Objectives
Abstract

This paper proposes a new approach for online UI adaptation that aims to overcome the limitations of the most commonly used UI optimization method involving multiple objectives: weighted sum optimization. Weighted sums are highly sensitive to objective formulation, limiting the effectiveness of UI adaptations. We propose ParetoAdapt, an adaptation approach that uses online multi-objective optimization with a posteriori articulated preferences, that is, preferences articulated after the optimization has concluded, to make UI adaptation robust to incomplete and inaccurate objective formulations. It offers users a flexible way to control adaptations by selecting from a set of Pareto optimal adaptation proposals and adjusting them to fit their needs. We showcase the feasibility and flexibility of ParetoAdapt by implementing an online layout adaptation system in a state-of-the-art 3D UI adaptation framework. We further evaluate its robustness and run-time in simulation-based experiments that allow us to systematically change the accuracy of the estimated user preferences. We conclude by discussing how our approach may impact the usability and practicality of online UI adaptations.
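
For intuition, here is a minimal Python sketch of the a posteriori idea the abstract describes: keep the whole Pareto front of candidate layouts rather than a single weighted-sum optimum, and let the user choose afterwards. The layouts and objective names are hypothetical; this is not the authors' ParetoAdapt implementation.

```python
# Minimal a-posteriori multi-objective step: instead of collapsing
# objectives into one weighted sum, keep every non-dominated candidate.
from typing import Dict, List

def dominates(a: List[float], b: List[float]) -> bool:
    """True if score vector `a` Pareto-dominates `b` (lower cost is better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates: Dict[str, List[float]]) -> Dict[str, List[float]]:
    """Keep only the candidates that no other candidate dominates."""
    return {name: scores for name, scores in candidates.items()
            if not any(dominates(other, scores)
                       for o, other in candidates.items() if o != name)}

# Hypothetical layouts scored on (reach cost, occlusion, inconsistency).
layouts = {"A": [0.2, 0.8, 0.5], "B": [0.4, 0.3, 0.6], "C": [0.5, 0.9, 0.7]}
print(pareto_front(layouts))  # {'A': ..., 'B': ...}; the user picks and adjusts
```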

Authors
Christoph A. Johns
Aarhus University, Aarhus, Denmark
João Marcelo Evangelista Belo
Aarhus University, Aarhus, Denmark
Anna Maria Feit
Saarland University, Saarland Informatics Campus, Saarbrücken, Germany
Clemens Nylandsted Klokmose
Aarhus University, Aarhus, Denmark
Ken Pfeuffer
Aarhus University, Aarhus, Denmark
Paper URL

https://doi.org/10.1145/3586183.3606799

InteractionAdapt: Interaction-driven Workspace Adaptation in Situated Virtual Reality
Abstract

Virtual Reality (VR) has the potential to transform how we work: it enables flexible and personalized workspaces beyond what is possible in the physical world. However, while most VR applications are designed to operate in a single empty physical space, work environments are often populated with real-world objects and increasingly diverse due to the growing amount of work in mobile scenarios. In this paper, we present InteractionAdapt, an optimization-based method for adapting VR workspaces for situated use in varying everyday physical environments, allowing VR users to transition between real-world settings while retaining most of their personalized VR environment for efficient interaction while ensuring temporal consistency and visibility. InteractionAdapt leverages physical affordances in the real world to optimize each UI element for the most suitable input technique, including on-surface touch, mid-air touch and pinch, and cursor control. Our optimization term models the trade-off across these interaction techniques based on experimental findings of 3D interaction in situated physical environments. Our two evaluations of InteractionAdapt in a selection task and a travel planning task established its capability of supporting efficient interaction, during which it produced adapted layouts that participants preferred to several baselines. We further showcase the versatility of our approach through applications that cover a wide range of use cases.
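
As a rough illustration of the trade-off modeling, the following toy Python sketch assigns UI elements to the placements with the lowest total modeled interaction cost. The cost table is hypothetical and far simpler than the paper's actual optimization term.

```python
# Toy affordance-aware placement: pick the element-to-placement assignment
# with the lowest total modeled interaction cost.
import itertools

elements = ["keyboard", "notes", "browser"]
placements = ["desk_surface", "mid_air", "cursor_panel"]

# cost[(element, placement)]: hypothetical cost of operating the element there.
cost = {
    ("keyboard", "desk_surface"): 0.1, ("keyboard", "mid_air"): 0.9, ("keyboard", "cursor_panel"): 0.6,
    ("notes", "desk_surface"): 0.4, ("notes", "mid_air"): 0.3, ("notes", "cursor_panel"): 0.5,
    ("browser", "desk_surface"): 0.7, ("browser", "mid_air"): 0.4, ("browser", "cursor_panel"): 0.2,
}

best = min(itertools.permutations(placements),
           key=lambda p: sum(cost[e, pl] for e, pl in zip(elements, p)))
print(dict(zip(elements, best)))  # keyboard on the desk, browser on a panel
```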

Authors
Yi Fei Cheng
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Christoph Gebhardt
ETH Zurich, Zurich, Switzerland
Christian Holz
ETH Zürich, Zurich, Switzerland
Paper URL

https://doi.org/10.1145/3586183.3606717

From Gap to Synergy: Enhancing Contextual Understanding through Human-Machine Collaboration in Personalized Systems
Abstract

This paper presents LangAware, a collaborative approach for constructing personalized context for context-aware applications. The need for personalization arises due to significant variations in context between individuals based on scenarios, devices, and preferences. However, there is often a notable gap between humans and machines in the understanding of how contexts are constructed, as observed in trigger-action programming studies such as IFTTT. LangAware enables end-users to participate in establishing contextual rules in-situ using natural language. The system leverages large language models (LLMs) to semantically connect low-level sensor detectors to high-level contexts and provide understandable natural language feedback for effective user involvement. We conducted a user study with 16 participants in real-life settings, which revealed an average success rate of 87.50% for defining contextual rules across 12 varied campus scenarios, typically accomplished within just two modifications. Furthermore, users reported a better understanding of the machine's capabilities by interacting with LangAware.
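
A sketch of the core grounding step follows (not the authors' code): ask an LLM to map a natural-language rule onto the low-level detectors the device actually exposes, returning feedback the user can confirm or correct. Here `call_llm` is a hypothetical placeholder for any chat-completion API, and the detector names are made up for illustration.

```python
# One round of LLM-based rule grounding: propose detector conditions for a
# natural-language rule, with an explanation the user can verify.
import json

DETECTORS = ["wifi_ssid", "gps_zone", "ambient_noise_db", "screen_on"]

def build_prompt(rule: str) -> str:
    return ("Available sensor detectors: " + ", ".join(DETECTORS) + "\n"
            f'User rule: "{rule}"\n'
            "Return JSON with detector conditions for this rule, plus a short "
            "plain-language explanation the user can confirm or correct.")

def ground_rule(rule: str, call_llm) -> dict:
    """If the returned explanation looks wrong, the user revises the rule
    in natural language and calls again -- the collaborative loop."""
    return json.loads(call_llm(build_prompt(rule)))

# Illustrative output shape:
# {"conditions": [{"detector": "gps_zone", "op": "==", "value": "library"},
#                 {"detector": "ambient_noise_db", "op": "<", "value": 40}],
#  "explanation": "You are 'studying' when in the library zone and it is quiet."}
```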

Authors
Weihao Chen
Tsinghua University, Beijing, China
Chun Yu
Tsinghua University, Beijing, China
Huadong Wang
Tsinghua University, Beijing, China
Zheng Wang
Tsinghua University, Beijing, China
Lichen Yang
Tsinghua University, Beijing, China
Yukun Wang
Tsinghua University, Beijing, China
Weinan Shi
Tsinghua University, Beijing, China
Yuanchun Shi
Tsinghua University, Beijing, China
Paper URL

https://doi.org/10.1145/3586183.3606741

Learning Custom Experience Ontologies via Embedding-based Feedback Loops
Abstract

Companies and organizations rely on behavioral analytics tools like Google Analytics to monitor their digital experiences. Making sense of the data these tools capture, however, requires manual event tagging and filtering---often a tedious process. Prior research approaches have trained machine learning models to automatically tag interaction data, but they draw from fixed digital experience vocabularies which cannot be easily augmented or customized. This paper introduces a novel machine learning feedback loop that generates customized tag predictions for organizations. The approach uses a general experience vocabulary to bootstrap initial tag predictions on interactive Sankey diagrams representing user navigation paths on a digital asset. By interacting with the path visualization, organizations can manually revise predictions. The system leverages this feedback to refine an organization's experience ontology, computing custom word embeddings for each of its terms via vector space refinement algorithms. The updates made to the custom experience ontology and its associated word embeddings result in better event tag predictions for that organization in the future. We conducted a needfinding interview with web analytics professionals to ground our design choices, and present a real-world deployment that demonstrates how, even with just a few training examples, custom tags can be predicted over new data.
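
A toy Python version of the refinement step is sketched below, using simple averaged corrections; the paper uses dedicated vector-space refinement algorithms over a general experience vocabulary. Confirmed tags pull a term's embedding toward the matching events, and re-tagged events push it away.

```python
# Embedding-based feedback loop: nudge a tag's vector using org feedback,
# then predict tags for new events by nearest-term lookup.
import numpy as np

def refine_embedding(term_vec, confirmed, rejected, lr=0.1):
    """`confirmed`/`rejected` are lists of event vectors the org accepted
    or re-tagged for this term; `lr` controls the update strength."""
    vec = np.asarray(term_vec, dtype=float).copy()
    if confirmed:
        vec += lr * (np.mean(confirmed, axis=0) - vec)
    if rejected:
        vec -= lr * (np.mean(rejected, axis=0) - vec)
    return vec / np.linalg.norm(vec)

def predict_tag(event_vec, ontology):
    """Tag an event with the nearest term in the org's custom ontology."""
    return max(ontology, key=lambda term: float(ontology[term] @ event_vec))
```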

Authors
Ali Zaidi
UserTesting, Inc., San Francisco, California, United States
Kelsey Turbeville
UserTesting, Inc., San Francisco, California, United States
Kristijan Ivančić
UserTesting, Inc., San Francisco, California, United States
Jason Moss
UserTesting, Inc., Atlanta, Georgia, United States
Jenny Gutierrez Villalobos
UserTesting, Inc., San Francisco, California, United States
Aravind Sagar
UserTesting, Inc., San Francisco, California, United States
Huiying Li
UserTesting, Inc., San Francisco, California, United States
Charu Mehra
UserTesting, Inc., San Francisco, California, United States
Sixuan Li
UserTesting, Inc., San Francisco, California, United States
Scott Hutchins
UserTesting, Inc., San Francisco, California, United States
Ranjitha Kumar
University of Illinois at Urbana-Champaign, Urbana, Illinois, United States
Paper URL

https://doi.org/10.1145/3586183.3606715

Neighbor-Environment Observer: An Intelligent Agent for Immersive Working Companionship
Abstract

Human-computer symbiosis is a crucial direction for the development of artificial intelligence. As intelligent systems become increasingly prevalent in our work and personal lives, it is important to develop strategies to support users across physical and virtual environments. While technological advances in personal digital devices, such as personal computers and virtual reality devices, can provide immersive experiences, they can also disrupt users' awareness of their surroundings and amplify the frustration caused by disturbances. In this paper, we propose a joint observation strategy for artificial agents to support users across virtual and physical environments. We introduce a prototype system, neighbor-environment observer (NEO), that utilizes non-invasive sensors to assist users in dealing with disruptions to their immersive experience. System experiments evaluate NEO from different perspectives and demonstrate the effectiveness of the joint observation strategy. A user study is conducted to evaluate its usability. The results show that NEO can lessen users' workload by drawing on learned user preferences. We suggest that the proposed strategy can be applied to various smart home scenarios.
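
A minimal sketch of the joint-observation idea follows (not the NEO system itself): fuse a physical-world event with the user's virtual state and a learned per-event preference weight to decide how intrusive a notification should be. The event kinds and thresholds are hypothetical.

```python
# Joint observation of physical events and virtual state: escalate a
# notification only when urgency and learned preference warrant it.
from dataclasses import dataclass

@dataclass
class Event:
    kind: str       # e.g. "doorbell", "person_nearby"
    urgency: float  # 0..1, estimated by the sensing pipeline

def decide(event: Event, in_focused_task: bool, pref: dict) -> str:
    score = event.urgency * pref.get(event.kind, 0.5)
    if in_focused_task and score < 0.3:
        return "defer"       # hold until the next natural break
    if score < 0.7:
        return "subtle_cue"  # small in-VR indicator
    return "full_alert"      # pass-through view or spoken message

print(decide(Event("doorbell", 0.9), True, {"doorbell": 0.8}))  # full_alert
```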

Authors
Zhe Sun
Beijing Institute for General Artificial Intelligence, Beijing, China
Qixuan Liang
Beijing Institute for General Artificial Intelligence, Beijing, China
Meng Wang
Beijing Institute for General Artificial Intelligence, Beijing, China
Zhenliang Zhang
Beijing Institute for General Artificial Intelligence, Beijing, China
Paper URL

https://doi.org/10.1145/3586183.3606728

Never-ending Learning of User Interfaces
Abstract

Machine learning models have been trained to predict semantic information about user interfaces (UIs) to make apps more accessible, easier to test, and to automate. Currently, most models rely on datasets that are collected and labeled by human crowd-workers, a process that is costly and surprisingly error-prone for certain tasks. For example, it is possible to guess if a UI element is “tappable” from a screenshot (i.e., based on visual signifiers) or from potentially unreliable metadata (e.g., a view hierarchy), but one way to know for certain is to programmatically tap the UI element and observe the effects. We built the Never-ending UI Learner, an app crawler that automatically installs real apps from a mobile app store and crawls them to discover new and challenging training examples to learn from. The Never-ending UI Learner has crawled for more than 5,000 device-hours, performing over half a million actions on 6,000 apps to train three computer vision models for i) tappability prediction, ii) draggability prediction, and iii) screen similarity.
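
The core self-labeling idea can be sketched as follows: tap an element, diff the screen, and record a ground-truth tappability label. Here `device` stands in for any UI automation driver; its methods (screenshot, tap, go_back) and the element fields are hypothetical, not the crawler's actual API.

```python
# Self-supervised tappability labeling: the crawl interacts with elements
# and derives labels from observed effects instead of human annotation.
import numpy as np

def pixel_diff(a, b) -> float:
    """Fraction of pixels that differ between two same-size screenshots."""
    return float((np.asarray(a) != np.asarray(b)).mean())

def label_tappability(device, elements, threshold=0.05):
    examples = []
    for el in elements:
        before = device.screenshot()
        device.tap(el.center)
        after = device.screenshot()
        tappable = pixel_diff(before, after) > threshold  # did the UI react?
        examples.append((before.crop(el.bounds), tappable))
        device.go_back()  # try to restore state before the next probe
    return examples  # (image patch, label) pairs for model training
```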

Authors
Jason Wu
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Rebecca Krosnick
University of Michigan, Ann Arbor, Michigan, United States
Eldon Schoop
Apple, Seattle, Washington, United States
Amanda Swearngin
Apple, Seattle, Washington, United States
Jeffrey P. Bigham
Apple, Pittsburgh, Pennsylvania, United States
Jeffrey Nichols
Apple, San Diego, California, United States
Paper URL

https://doi.org/10.1145/3586183.3606824

Unveiling the Tricks: Automated Detection of Dark Patterns in Mobile Applications
Abstract

Mobile apps bring us many conveniences, such as online shopping and communication, but some use malicious designs called dark patterns to trick users into doing things that are not in their best interest. Much work has been done to summarize the taxonomy of these patterns, and some efforts have tried to mitigate the problems through various techniques. However, these techniques are either time-consuming, not generalisable, or limited to specific patterns. To address these issues, we propose UIGuard, a knowledge-driven system that utilizes computer vision and natural language pattern matching to automatically detect a wide range of dark patterns in mobile UIs. Our system removes the need to manually create rules for each new UI/app and covers more pattern types with superior performance. In detail, we integrated existing taxonomies into a consistent one, conducted a characteristic analysis, and distilled knowledge from real-world examples and the taxonomy. UIGuard consists of two components: Property Extraction and a Knowledge-Driven Dark Pattern Checker. We collected the first dark pattern dataset, which contains 4,999 benign UIs and 1,353 malicious UIs with 1,660 instances spanning 1,023 mobile apps. Our system achieves superior performance in detecting dark patterns (micro averages: 0.82 in precision, 0.77 in recall, 0.79 in F1 score). A user study involving 58 participants further showed that UIGuard significantly increases users' knowledge of dark patterns. We demonstrated potential use cases of our work, which can benefit different stakeholders and can serve as a training tool for raising awareness of dark patterns.
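
As an illustration of the knowledge-driven checking step, the sketch below assumes element properties (type, text, checked state) were already extracted by a vision pipeline; the two rules are simplified examples, not UIGuard's actual rule base.

```python
# Knowledge-driven dark pattern checks over extracted UI properties.
import re

def check_dark_patterns(ui_elements):
    findings = []
    for el in ui_elements:
        raw = el.get("text", "")
        text = raw.lower()
        # Preselection: a consent checkbox already ticked for the user.
        if (el.get("type") == "checkbox" and el.get("checked")
                and re.search(r"subscribe|marketing|newsletter", text)):
            findings.append(f"Preselection: '{raw}'")
        # Trick wording: double negatives around opting out.
        if re.search(r"don't .*(uncheck|opt out)|not .*unsubscribe", text):
            findings.append(f"Trick wording: '{raw}'")
    return findings

print(check_dark_patterns([
    {"type": "checkbox", "checked": True, "text": "Subscribe to marketing emails"},
]))  # ["Preselection: 'Subscribe to marketing emails'"]
```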

Authors
Jieshan Chen
CSIRO's Data61, Sydney, New South Wales, Australia
Jiamou Sun
CSIRO's Data61, Sydney, New South Wales, Australia
Sidong Feng
Monash University, Melbourne, Victoria, Australia
Zhenchang Xing
CSIRO's Data61 and Australian National University, Acton, ACT, Australia
Qinghua Lu
CSIRO, Sydney, New South Wales, Australia
Xiwei Xu
CSIRO, Eveleigh, New South Wales, Australia
Chunyang Chen
Monash University, Melbourne, Victoria, Australia
Paper URL

https://doi.org/10.1145/3586183.3606783
