End-users can potentially style and customize websites by editing them through in-browser developer tools. Unfortunately, end-users lack the knowledge needed to translate high-level styling goals into low-level code edits. We present Stylette, a browser extension that enables users to change the style of websites by expressing goals in natural language. By interpreting the user's goal with a large language model and extracting suggestions from our dataset of 1.7 million web components, Stylette generates a palette of CSS properties and values that the user can apply to reach their goal. A comparative study (N=40) showed that Stylette lowered the learning curve, helping participants perform styling changes 35% faster than those using developer tools. By presenting various alternatives for a single goal, the tool helped participants familiarize themselves with CSS through experimentation. Beyond CSS, our work can be expanded to help novices quickly grasp complex software or programming languages.
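The palette idea can be illustrated with a minimal sketch: mapping words in a natural-language goal to candidate CSS property–value pairs. The keyword table and lookup below are illustrative assumptions for exposition; Stylette's actual pipeline uses a large language model and a dataset of 1.7 million web components rather than a hand-written dictionary.

```python
# Hypothetical sketch of goal-to-palette suggestion. The keyword table is
# an invented stand-in for Stylette's LLM + component-dataset pipeline.
GOAL_KEYWORDS = {
    "bigger":   [("font-size", "1.25em"), ("padding", "1rem")],
    "modern":   [("border-radius", "8px"), ("box-shadow", "0 2px 8px rgba(0,0,0,.15)")],
    "readable": [("line-height", "1.6"), ("max-width", "65ch")],
}

def suggest_css(goal: str) -> list[tuple[str, str]]:
    """Return a palette of (property, value) suggestions matching the goal."""
    palette = []
    for word in goal.lower().split():
        palette.extend(GOAL_KEYWORDS.get(word, []))
    return palette

print(suggest_css("make the text bigger and readable"))
```

Presenting several alternatives per goal, rather than a single answer, is what lets users compare and experiment their way toward CSS familiarity.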
We demonstrate that recent natural language processing (NLP) techniques introduce a new paradigm of vocabulary learning that benefits from both micro learning and usage-based learning by generating and presenting usages of foreign words based on the learner's context. Without allocating dedicated time for studying, the user can become familiar with how the words are used by encountering example usages during daily activities, such as Web browsing. To achieve this, we introduce VocabEncounter, a vocabulary-learning system that leverages recent NLP techniques to encapsulate given words into the materials the user is reading in near real time. After confirming with crowdworkers that the system generates translated phrases of human-comparable quality, we conducted a series of user studies, which demonstrated the system's effectiveness for vocabulary learning and the favorable experiences it offers. Our work shows how NLP-based generation techniques can transform everyday activities into opportunities for vocabulary learning.
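The encapsulation step can be sketched as substituting a word in the user's reading material with the foreign vocabulary item while keeping the original as an inline gloss. The English-to-German dictionary and the plain string substitution below are assumptions for illustration only; the paper's system uses neural translation techniques, not a lookup table.

```python
# Illustrative sketch of VocabEncounter-style encapsulation. The VOCAB
# table and the lowercase-only matching rule are simplifying assumptions.
VOCAB = {"library": "Bibliothek", "bridge": "Brücke"}  # English -> German

def encapsulate(sentence: str) -> str:
    """Embed vocabulary items into a sentence, glossing the original word."""
    out = []
    for w in sentence.split():
        base = w.strip(".,!?").lower()
        if base in VOCAB:
            # Keep surrounding punctuation by replacing only the word itself.
            out.append(w.replace(base, f"{VOCAB[base]} ({base})"))
        else:
            out.append(w)
    return " ".join(out)

print(encapsulate("She walked to the library."))
```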
Context-aware applications have the potential to act opportunistically to facilitate human experiences and activities, from reminding us of places to perform personal activities to identifying coincidental moments to engage in digitally mediated shared experiences. However, despite the availability of context detectors and programming frameworks for defining how such applications should trigger, designers lack support for expressing their human concepts of a situation and the experiences and activities it affords (e.g., situations to toss a frisbee) when context features are made available only at the level of locations (e.g., parks). This paper introduces Affinder, a block-based programming environment that helps designers construct concept expressions that translate their conceptions of a situation into a machine representation using available context features. During pilot testing, we discovered three bridging challenges that arise when expressing situations that cannot be encoded directly by a single context feature. To overcome these challenges, Affinder provides designers with (1) an unlimited vocabulary search for discovering features they may have forgotten; (2) prompts for reflecting on and expanding their concepts of a situation and ideas for foraging for context features; and (3) simulation and repair tools for identifying and resolving issues with the precision of concept expressions on real use cases. In a comparison study, we found that Affinder's core functions helped designers stretch their concepts of how to express a situation, find relevant context features matching their concepts, and recognize when a concept expression operated differently than intended on real-world cases.
These results show that Affinder and other tools that support bridging can improve designers' ability to translate their concepts of a human situation into detectable machine representations, pushing the boundaries of how computing systems support our activities in the world.
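A concept expression of the kind described above can be sketched as a boolean combination of machine-detectable context features standing in for a human concept. The feature names, thresholds, and the rule itself are invented for illustration; they are not Affinder's actual feature vocabulary or block language.

```python
# A minimal sketch of a concept expression: the human concept
# "a situation to toss a frisbee" encoded over hypothetical context
# features. Feature names and thresholds are illustrative assumptions.
def frisbee_situation(ctx: dict) -> bool:
    """True when the context plausibly affords tossing a frisbee."""
    return (
        ctx.get("place_category") in {"park", "beach", "field"}
        and ctx.get("is_outdoors", False)
        and not ctx.get("is_raining", True)
        and ctx.get("open_space_m2", 0) >= 100
    )

print(frisbee_situation({"place_category": "park", "is_outdoors": True,
                         "is_raining": False, "open_space_m2": 400}))
```

The bridging challenges show up immediately in a sketch like this: no single feature encodes "frisbee situation", so the designer must forage for features, expand the concept, and test the expression against real cases to catch mismatches.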
Shared control is an emerging interaction paradigm in which a human and an AI partner collaboratively control a system. Shared control unifies human and artificial intelligence, making the human’s interactions with computers more accessible, safe, precise, effective, creative, and playful. This form of interaction has independently emerged in contexts as varied as mobility assistance, driving, surgery, and digital games. These domains each have their own problems, terminology, and design philosophies. Without a common language for describing interactions in shared control, it is difficult for designers working in one domain to share their knowledge with designers working in another. To address this problem, we present a dimension space for shared control, based on a survey of 55 shared control systems from six different problem domains. This design space analysis tool enables designers to classify existing systems, make comparisons between them, identify higher-level design patterns, and imagine solutions to novel problems.
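Classification and comparison in a dimension space can be sketched as placing each system at coordinates along shared dimensions, so that comparing systems reduces to comparing coordinates. The dimension names and example placements below are invented for illustration and are not the survey's actual dimensions.

```python
# Hedged sketch of dimension-space classification. The dimensions
# ("initiative", "blending") and placements are illustrative assumptions,
# not the paper's published dimension space.
from dataclasses import dataclass

@dataclass(frozen=True)
class SharedControlSystem:
    name: str
    domain: str
    initiative: str  # who initiates control: "human", "ai", or "mixed"
    blending: str    # how inputs combine: "continuous" or "discrete"

systems = [
    SharedControlSystem("lane-keeping assist", "driving", "mixed", "continuous"),
    SharedControlSystem("aim assist", "digital games", "ai", "continuous"),
    SharedControlSystem("autopilot handoff", "driving", "human", "discrete"),
]

# A cross-domain pattern emerges as a shared coordinate.
continuous = sorted(s.name for s in systems if s.blending == "continuous")
print(continuous)
```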
With advancements in AI, agents (i.e., smart products, robots, software agents) are increasingly capable of working closely with humans in a variety of ways, to mutual benefit. These human-agent collaborations have gained growing attention in the HCI community; however, the field lacks clear guidelines on how to design agents' behaviors in collaborations. In this paper, we investigate the qualities that are relevant for designers to create robust and pleasant human-agent collaborations. We used Bratman's Shared Cooperative Activity framework to identify the core characteristics of collaborations and to survey the most important issues in the design of human-agent collaborations, namely code of conduct, task delegation, autonomy and control, intelligibility, common ground, offering help, and requesting help. The aim of this work is to add structure to this growing and important facet of HCI research and to operationalize the concept of human-agent collaboration with concrete design considerations.