Apéritif: Scaffolding Preregistrations to Automatically Generate Analysis Code and Methods Descriptions
Description

The HCI community has been advocating preregistration as a practice to improve the credibility of scientific research. However, it remains unclear how HCI researchers preregister studies and what preregistration users perceive as benefits and challenges. By systematically reviewing the past four CHI proceedings and surveying 11 researchers, we found that only 1.11% of papers presented preregistered studies, though both authors and reviewers of preregistered studies perceive preregistration as beneficial. Our formative studies revealed key challenges ranging from a lack of detail about the study design, hindering comprehensibility, to inconsistencies between preregistrations and published papers. To explore ways for addressing these issues, we developed Apéritif, a research prototype that scaffolds the preregistration process and automatically generates analysis code and a methods description. In an evaluation with 17 HCI researchers, we found that Apéritif reduces the effort of preregistering a study, facilitates researchers' workflows, and promotes consistency between research artifacts.
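
To illustrate the kind of output such scaffolding could produce, the sketch below turns a structured hypothesis entry into analysis code. The field names, the code template, and the t-test example are hypothetical assumptions for illustration only, not Apéritif's actual implementation.

```python
# Hypothetical sketch: generating analysis code from a structured
# preregistration entry (not Apéritif's actual data model or templates).
from dataclasses import dataclass

@dataclass
class Hypothesis:
    independent_var: str   # e.g., "interface" (two between-subjects conditions)
    dependent_var: str     # e.g., "task_time"
    test: str              # e.g., "t-test"

def generate_analysis_code(h: Hypothesis) -> str:
    """Emit a runnable analysis snippet for a two-condition comparison."""
    if h.test == "t-test":
        return (
            "import pandas as pd\n"
            "from scipy import stats\n"
            "df = pd.read_csv('data.csv')\n"
            f"groups = [g['{h.dependent_var}'] for _, g in df.groupby('{h.independent_var}')]\n"
            "print(stats.ttest_ind(*groups))\n"
        )
    raise NotImplementedError(f"No template for test: {h.test}")

print(generate_analysis_code(Hypothesis("interface", "task_time", "t-test")))
```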

Automatically Generating and Improving Voice Command Interface from Operation Sequences on Smartphones
Description

Using voice commands to automate smartphone tasks (e.g., making a video call) can effectively augment the interactivity of numerous mobile apps. However, creating voice command interfaces requires a tremendous amount of effort in labeling and compiling the graphical user interface (GUI) and utterance data. In this paper, we propose AutoVCI, a novel approach to automatically generate a voice command interface (VCI) from smartphone operation sequences. The generated voice command interface has two distinct features. First, it automatically maps a voice command to GUI operations and fills in parameters accordingly, leveraging the GUI data instead of a corpus or hand-written rules. Second, it launches a complementary Q&A dialogue to confirm the user's intention in case of ambiguity. In addition, the generated voice command interface can learn and evolve from user interactions: it accumulates historical command-understanding results to annotate the user's input and improve its semantic understanding ability. We implemented this approach on Android devices and conducted a two-phase user study with 16 and 67 participants in the two phases, respectively. Experimental results demonstrated the practical feasibility of AutoVCI.
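
As a rough illustration of the mapping idea described above, the sketch below matches an utterance against the labels of recorded GUI operation sequences and falls back to a confirmation question when two candidates score similarly. The data structures, the string-similarity matching, and the threshold are assumptions made for illustration; parameter filling is omitted, and none of this is AutoVCI's actual code.

```python
# Hypothetical sketch: matching a voice command to a recorded GUI operation
# sequence, with a confirmation question when the match is ambiguous.
from difflib import SequenceMatcher

# Recorded operation sequences, keyed by a label derived from a demonstration.
OPERATION_SEQUENCES = {
    "make a video call to {contact}": ["open:Contacts", "tap:{contact}", "tap:VideoCall"],
    "send a message to {contact}":    ["open:Messages", "tap:{contact}", "type:{text}"],
}

def match_command(utterance: str):
    """Return (label, score) pairs sorted by string similarity to the utterance."""
    scored = [(label, SequenceMatcher(None, utterance, label).ratio())
              for label in OPERATION_SEQUENCES]
    return sorted(scored, key=lambda x: x[1], reverse=True)

def resolve(utterance: str, threshold: float = 0.15):
    ranked = match_command(utterance)
    best, second = ranked[0], ranked[1]
    if best[1] - second[1] < threshold:
        # Ambiguous: a real system would launch a Q&A dialogue here.
        return f"Did you mean '{best[0]}' or '{second[0]}'?"
    return OPERATION_SEQUENCES[best[0]]

print(resolve("video call to Alice"))
```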

TaleBrush: Sketching Stories with Generative Pretrained Language Models
Description

While advanced text generation algorithms (e.g., GPT-3) have enabled writers to co-create stories with an AI, guiding the narrative remains a challenge. Existing systems often leverage simple turn-taking between the writer and the AI in story development. However, writers remain unsupported in intuitively understanding the AI's actions or steering the iterative generation. We introduce TaleBrush, a generative story ideation tool that uses line sketching interactions with a GPT-based language model for control and sensemaking of a protagonist's fortune in co-created stories. Our empirical evaluation found that our pipeline reliably controls story generation while maintaining the novelty of generated sentences. In a user study with 14 participants of diverse writing experience, we found that participants successfully leveraged sketching to iteratively explore and write stories according to their intentions about the character's fortune while taking inspiration from generated stories. We conclude with a reflection on how sketching interactions can facilitate the iterative human-AI co-creation process.
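
To make the sketch-to-control idea concrete, the snippet below discretizes a sketched fortune curve into per-sentence control labels and composes conditioned prompts. The bucketing, the labels, and the prompt format are hypothetical and do not reflect TaleBrush's actual pipeline, which uses a GPT-based model with learned fortune control.

```python
# Hypothetical sketch: turning a sketched fortune curve into per-sentence
# control labels that condition story generation (not TaleBrush's pipeline).
FORTUNE_LABELS = ["very bad", "bad", "neutral", "good", "very good"]

def fortune_controls(curve: list[float]) -> list[str]:
    """Map sketched fortune values in [0, 1] to coarse labels, one per sentence."""
    return [FORTUNE_LABELS[min(int(v * len(FORTUNE_LABELS)), len(FORTUNE_LABELS) - 1)]
            for v in curve]

def build_prompt(story_so_far: str, control: str, protagonist: str) -> str:
    """Compose a conditioned prompt for the next sentence (format is illustrative)."""
    return (f"{story_so_far}\n"
            f"[Next sentence: {protagonist}'s fortune is {control}]\n")

curve = [0.2, 0.1, 0.5, 0.9]  # a falling-then-rising sketch
for control in fortune_controls(curve):
    print(build_prompt("Mira set out at dawn.", control, "Mira"))
```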

Symphony: Composing Interactive Interfaces for Machine Learning
Description

Interfaces for machine learning (ML), that is, information and visualizations about models or data, can help practitioners build robust and responsible ML systems. Despite their benefits, recent studies of ML teams and our interviews with practitioners (n=9) showed that ML interfaces have limited adoption in practice. While existing ML interfaces are effective for specific tasks, they are not designed to be reused, explored, and shared by multiple stakeholders in cross-functional teams. To enable analysis and communication between different ML practitioners, we designed and implemented Symphony, a framework for composing interactive ML interfaces with task-specific, data-driven components that can be used across platforms such as computational notebooks and web dashboards. We developed Symphony through participatory design sessions with 10 teams (n=31), and discuss our findings from deploying Symphony to 3 production ML projects at Apple. Symphony helped ML practitioners discover previously unknown issues like data duplicates and blind spots in models while enabling them to share insights with other stakeholders.
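
To illustrate the composition idea, the sketch below declares data-driven components once and renders the same composition as HTML, as a notebook cell or a dashboard page might. The class and method names are hypothetical assumptions for illustration, not Symphony's actual API.

```python
# Hypothetical sketch: composing reusable, data-driven interface components
# that render against a shared dataset (not Symphony's actual API).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Component:
    name: str
    render: Callable[[dict], str]   # takes the shared dataset, returns HTML

@dataclass
class Interface:
    dataset: dict
    components: list[Component] = field(default_factory=list)

    def add(self, component: Component) -> "Interface":
        self.components.append(component)
        return self

    def to_html(self) -> str:
        """Render all components against the same dataset (notebook or dashboard)."""
        return "\n".join(c.render(self.dataset) for c in self.components)

duplicates = Component("duplicates", lambda d: f"<p>{d['n_duplicates']} duplicate rows</p>")
summary    = Component("summary",    lambda d: f"<p>{d['n_rows']} rows total</p>")

dashboard = Interface({"n_rows": 10_000, "n_duplicates": 42}).add(summary).add(duplicates)
print(dashboard.to_html())
```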
