List of Notable Papers

Showing up to the top 30 papers in each category.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

5
Personal Dream Informatics: A Self-Information Systems Model of Dream Engagement
Michael Jeffrey Daniel Hoefer (University of Colorado Boulder, Boulder, Colorado, United States), Bryce E. Schumacher (University of Colorado Boulder, Boulder, Colorado, United States), Stephen Voida (University of Colorado Boulder, Boulder, Colorado, United States)
We present the research area of personal dream informatics: studying the self-information systems that support dream engagement and communication between the dreaming self and the wakeful self. Through a survey study of 281 individuals primarily recruited from an online community dedicated to dreaming, we develop a dream-information systems view of dreaming and dream tracking as a type of self-information system. While dream-information systems are characterized by diverse tracking processes, motivations, and outcomes, they are universally constrained by the ephemeral dreamset: the short period of time between waking up and the rapid loss of memory of dream experiences. By developing a system dynamics model of dreaming, we highlight feedback loops that serve as high-leverage points for technology designers, and suggest a variety of design considerations for crafting technology that best supports dream recall, dream tracking, and dreamwork for nightmare relief and personal development.
4
Embodied Geometric Reasoning with a Robot: The Impact of Robot Gestures on Student Reasoning about Geometrical Conjectures
Joseph E. Michaelis (University of Illinois at Chicago, Chicago, Illinois, United States), Daniela Di Canio (University of Illinois at Chicago, Chicago, Illinois, United States)
In this paper, we explore how the physically embodied nature of robots can influence learning through non-verbal communication, such as gesturing. We take an embodied cognition perspective to examine student interactions with a NAO robot that uses gestures while reasoning about geometry conjectures. College-aged students (N = 30) were randomly assigned to either a dynamic condition, where the robot uses dynamic gestures that represent and manipulate geometric shapes in the conjectures, or a control condition, where the robot uses beat gestures that match the rhythm of speech. Students in the dynamic condition: (1) used more gestures when reasoning about geometry conjectures, (2) looked more at the robot as it spoke, (3) felt the robot was a better study partner and used effective gestures, but (4) were not more successful in correctly reasoning about geometry conjectures. We discuss implications for socially supported and embodied learning with a physically present robot.
3
Adaptive Empathy Learning Support in Peer Review Scenarios
Thiemo Wambsganss (University of St. Gallen, St. Gallen, Switzerland), Matthias Soellner (University of Kassel, Kassel, Germany), Kenneth R. Koedinger (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Jan Marco Leimeister (University of St. Gallen, St. Gallen, Switzerland)
Advances in Natural Language Processing offer techniques to detect the empathy level in texts. To test whether individual feedback on students' empathy levels in their peer review writing process helps them write more empathic reviews, we developed ELEA, an adaptive writing support system that provides students with feedback on cognitive and emotional empathy structures. We compared ELEA to a proven empathy support tool in a peer review setting with 119 students. We found that students using ELEA wrote more empathic peer reviews with a higher level of emotional empathy compared to the control group. The high perceived skill learning, technology acceptance, and level of enjoyment provide promising evidence for using such an approach as a feedback application in traditional learning settings. Our results indicate that learning applications based on NLP are able to foster students' empathic writing skills in peer review scenarios.
3
Shaping Textile Sliders: An Evaluation of Form Factors and Tick Marks for Textile Sliders
Oliver Nowak (RWTH Aachen University, Aachen, Germany), René Schäfer (RWTH Aachen University, Aachen, Germany), Anke Brocker (RWTH Aachen University, Aachen, Germany), Philipp Wacker (RWTH Aachen University, Aachen, Germany), Jan Borchers (RWTH Aachen University, Aachen, Germany)
Textile interfaces enable designers to integrate unobtrusive media and smart home controls into furniture such as sofas. While the technical aspects of such controllers have been the subject of numerous research projects, the physical form factor of these controls has received little attention so far. This work investigates how general design properties, such as overall slider shape, raised vs. recessed sliders, and number and layout of tick marks, affect users' preferences and performance. Our first user study identified a preference for certain design combinations, such as recessed, closed-shaped sliders. Our second user study included performance measurements on variations of the preferred designs from study 1, and took a closer look at tick marks. Tick marks supported orientation better than slider shape. Sliders with at least three tick marks were preferred, and performed well. Non-uniform, equally distributed tick marks reduced the movements users needed to orient themselves on the slider.
3
O&O: A DIY toolkit for designing and rapid prototyping olfactory interfaces
Yuxuan Lei (Tsinghua University, Beijing, China), Qi Lu (Tsinghua University, Beijing, China), Yingqing Xu (Tsinghua University, Beijing, China)
Constructing olfactory interfaces on demand requires significant design proficiency and engineering effort. The absence of powerful and convenient tools that reduce innovation complexity poses obstacles for future research in the area. To address this problem, we propose O&O, a modular olfactory interface DIY toolkit. The toolkit consists of: (1) a scent generation kit, a set of electronics and accessories that support three common scent vaporization techniques; (2) a module construction kit, a set of primitive cardboard modules for assembling permutable functional structures; and (3) a design manual, a step-by-step design thinking framework that directs the decision-making and prototyping process. We organized a formal workshop with 19 participants and four solo DIY trials to evaluate the capability of the toolkit, the overall user engagement, the creations in both sessions, and suggestions for iteration. Finally, we discuss design implications and future opportunities for further research.
3
StoryDrawer: A Child-AI Collaborative Drawing System to Support Children's Creative Visual Storytelling
Chao Zhang (Zhejiang University, Hangzhou, China), Cheng Yao (Zhejiang University, Hangzhou, China), Jiayi Wu (Zhejiang University, Hangzhou, China), Weijia Lin (Zhejiang University, Hangzhou, China), Lijuan Liu (Zhejiang University, Hangzhou, China), Ge Yan (Zhejiang University, Hangzhou, China), Fangtian Ying (Hubei University of Technology, Wuhan, China)
Visual storytelling is a new approach to creative expression based on verbal and figural creativity. The keys to visual storytelling are narrating and drawing over a period of time, which can be beneficial for children but also demanding of their creativity. Informed by need-finding investigations, we developed StoryDrawer, a co-creative system that supports visual storytelling for children aged 6–10 years through collaborative drawing between children and artificial intelligence (AI). The system includes a context-based voice agent and two AI-driven collaborative strategies: the real-time transformation of children's telling into drawings, and the generation of abstract sketches with semantic similarity to existing story content. We conducted a 2 × 2 study with 64 children to evaluate the efficacy of StoryDrawer by varying the two strategies in four conditions. The results suggest that StoryDrawer provoked participants' creative and elaborate ideas and contributed to their creative outcomes during an engaging visual storytelling experience.
2
FabricatINK: Personal Fabrication of Bespoke Displays Using Electronic Ink from Upcycled E Readers
Ollie Hanton (University of Bristol, Bristol, United Kingdom), Zichao Shen (University of Bristol, Bristol, United Kingdom), Mike Fraser (University of Bath, Bath, United Kingdom), Anne Roudaut (University of Bristol, Bristol, United Kingdom)
FabricatINK explores the personal fabrication of irregularly-shaped low-power displays using electronic ink (E ink). E ink is a programmable bicolour material used in traditional form factors such as E readers. It has potential for more versatile use in the personal fabrication of custom-shaped displays and promises to be the pre-eminent material choice for this purpose. We appraise the technical literature to identify properties of E ink suited to fabrication. We identify a key roadblock, the lack of universal access to E ink as a material, and deliver a method to circumvent it by upcycling broken electronics. We subsequently present a novel fabrication method for irregularly-shaped E ink displays. We demonstrate our fabrication process and E ink's versatility through ten prototypes showing different applications and use cases. By addressing E ink as a material for display fabrication, we uncover the potential for users to create custom-shaped, truly bistable displays.
2
Paper Trail: An Immersive Authoring System for Augmented Reality Instructional Experiences
Shwetha Rajaram (University of Michigan, Ann Arbor, Michigan, United States), Michael Nebeling (University of Michigan, Ann Arbor, Michigan, United States)
Prior work has demonstrated augmented reality's benefits to education, but current tools are difficult to integrate with traditional instructional methods. We present Paper Trail, an immersive authoring system designed to explore how to enable instructors to create AR educational experiences, leaving paper at the core of the interaction and enhancing it with various forms of digital media, animations for dynamic illustrations, and clipping masks to guide learning. To inform the system design, we developed five scenarios exploring the benefits that hand-held and head-worn AR can bring to STEM instruction and developed a design space of AR interactions enhancing paper based on these scenarios and prior work. Using the example of an AR physics handout, we assessed the system's potential with PhD-level instructors and its usability with XR design experts. In an elicitation study with high-school teachers, we studied how Paper Trail could be used and extended to enable flexible use cases across various domains. We discuss the benefits of immersive paper for supporting diverse student needs and the challenges of making effective use of AR for learning.
2
First Steps Towards Designing Electrotactons: Investigating Intensity and Pulse Frequency as Parameters for Electrotactile Cues
Yosuef Alotaibi (University of Glasgow, Glasgow, United Kingdom), John H. Williamson (University of Glasgow, Glasgow, United Kingdom), Stephen Anthony Brewster (University of Glasgow, Glasgow, United Kingdom)
Electrotactile stimulation is a novel form of haptic feedback. There is little work investigating its basic design parameters and how they create effective tactile cues. This paper describes two experiments that extend our knowledge of two key parameters. The first investigated the combination of pulse width and amplitude (Intensity) and its effects on sensations of urgency, annoyance, valence and arousal. Results showed significant effects: increasing Intensity caused higher ratings of urgency, annoyance and arousal but reduced valence. We established clear levels for differentiating each sensation. A second study then investigated Intensity and Pulse Frequency to find out how many distinguishable levels could be perceived. Results showed that both Intensity and Pulse Frequency significantly affected perception, with four distinguishable levels of Intensity and two of Pulse Frequency. These results add significant new knowledge about the parameter space of electrotactile cue design and help designers select suitable properties to use when creating electrotactile cues.
2
Mobile-Friendly Content Design for MOOCs: Challenges, Requirements, and Design Opportunities
Jeongyeon Kim (KAIST, Daejeon, Korea, Republic of), Yubin Choi (KAIST, Daejeon, Korea, Republic of), Meng Xia (KAIST, Daejeon, Korea, Republic of), Juho Kim (KAIST, Daejeon, Korea, Republic of)
Most video-based learning content is designed for desktops without considering mobile environments. We (1) investigate the gap between mobile learners’ challenges and video engineers’ considerations using mixed methods and (2) provide design guidelines for creating mobile-friendly MOOC videos. To uncover learners’ challenges, we conducted a survey (n=134) and interviews (n=21), and evaluated the mobile adequacy of current MOOCs by analyzing 41,722 video frames from 101 video lectures. Interview results revealed low readability and situationally-induced impairments as major challenges. The content analysis showed a low guideline compliance rate for key design factors. We then interviewed 11 video production engineers to investigate design factors they mainly consider. The engineers mainly focus on the size and amount of content while lacking consideration for color, complex images, and situationally-induced impairments. Finally, we present and validate guidelines for designing mobile-friendly MOOCs, such as providing adaptive and customizable visual design and context-aware accessibility support.
2
A Layered Authoring Tool for Stylized 3D animations
Jiaju Ma (Brown University, Providence, Rhode Island, United States), Li-Yi Wei (Adobe Research, San Jose, California, United States), Rubaiat Habib Kazi (Adobe Research, Seattle, Washington, United States)
Stylization, guided by the 12 principles of animation, is a core feature of 2D animation but has been utilized mainly by experienced animators. Although there are tools for stylizing 2D animations, creating stylized 3D animations remains a challenging problem due to the additional spatial dimension and the need for responsive actions like contact and collision. We propose a system that helps users create stylized casual 3D animations. A layered authoring interface is employed to balance ease of use and expressiveness. Our surface-level UI is a timeline sequencer that lets users add preset stylization effects such as squash and stretch and follow through to plain motions. Users can adjust spatial and temporal parameters to fine-tune these stylizations. These edits are propagated to our node-graph-based second-level UI, in which users can create custom stylizations once they are comfortable with the surface-level UI. Our system also enables the stylization of interactions among multiple objects, like force, energy, and collision. A pilot user study showed that our fluid layered UI design balances ease of use and expressiveness better than existing tools.
2
immersivePOV: Filming How-To Videos with a Head-Mounted 360° Action Camera
Kevin Huang (University of Toronto, Toronto, Ontario, Canada), Jiannan Li (University of Toronto, Toronto, Ontario, Canada), Mauricio Sousa (University of Toronto, Toronto, Ontario, Canada), Tovi Grossman (University of Toronto, Toronto, Ontario, Canada)
How-to videos are often shot using camera angles that may not be optimal for learning motor tasks, with a prevalent use of third-person perspective. We present immersivePOV, an approach to film how-to videos from an immersive first-person perspective using a head-mounted 360° action camera. immersivePOV how-to videos can be viewed in a Virtual Reality headset, giving the viewer an eye-level viewpoint with three Degrees of Freedom. We evaluated our approach with two everyday motor tasks against a baseline first-person perspective and a third-person perspective. In a between-subjects study, participants were assigned to watch the task videos and then replicate the tasks. Results suggest that immersivePOV reduced perceived cognitive load and facilitated task learning. We discuss how immersivePOV can also streamline the video production process for content creators. Altogether, we conclude that immersivePOV is an effective approach to film how-to videos for learners and content creators alike.
2
Supercharging Trial-and-Error for Learning Complex Software Applications
Damien Masson (Autodesk Research, Toronto, Ontario, Canada), Jo Vermeulen (Autodesk Research, Toronto, Ontario, Canada), George Fitzmaurice (Autodesk Research, Toronto, Ontario, Canada), Justin Matejka (Autodesk Research, Toronto, Ontario, Canada)
Despite an abundance of carefully-crafted tutorials, trial-and-error remains many people’s preferred way to learn complex software. Yet, approaches to facilitate trial-and-error (such as tooltips) have evolved very little since the 1980s. While existing mechanisms work well for simple software, they scale poorly to large feature-rich applications. In this paper, we explore new techniques to support trial-and-error in complex applications. We identify key benefits and challenges of trial-and-error, and introduce a framework with a conceptual model and design space. Using this framework, we developed three techniques: ToolTrack to keep track of trial-and-error progress; ToolTrip to go beyond trial-and-error of single commands by highlighting related commands that are frequently used together; and ToolTaste to quickly and safely try commands. We demonstrate how these techniques facilitate trial-and-error, as illustrated through a proof-of-concept implementation in the CAD software Fusion 360. We conclude by discussing possible scenarios and outline directions for future research on trial-and-error.
2
Does Dynamically Drawn Text Improve Learning? Investigating the Effect of Text Presentation Styles in Video Learning
Ashwin Ram (National University of Singapore, Singapore, Singapore), Shengdong Zhao (National University of Singapore, Singapore, Singapore)
Dynamically drawn content (e.g., handwritten text) in learning videos is believed to improve users' engagement and learning over static PowerPoint-based ones. However, evidence from the existing literature is inconclusive. With the emergence of Optical Head-Mounted Displays (OHMDs), recent work has shown that video learning can be adapted for on-the-go scenarios. To better understand the role of dynamic drawing, we decoupled dynamically drawn text into two factors (font style and motion of appearance) and studied their impact on learning performance under two usage scenarios (seated with a desktop and walking with an OHMD). We found that although letter-traced text was more engaging for some users, most preferred learning with typeface text that displayed the entire word at once and achieved better recall (46.7% higher), regardless of the usage scenario. Insights from the studies can better inform designers on how to present text in videos for ubiquitous access.
2
Paracentral and near-peripheral visualizations: Towards attention-maintaining secondary information presentation on OHMDs during in-person social interactions
Nuwan Nanayakkarawasam Peru Kandage Janaka (National University of Singapore, Singapore, Singapore), Chloe Haigh (National University of Singapore, Singapore, Singapore), Hyeongcheol Kim (National University of Singapore, Singapore, Singapore), Shan Zhang (National University of Singapore, Singapore, Singapore), Shengdong Zhao (National University of Singapore, Singapore, Singapore)
Optical see-through Head-Mounted Displays (OST HMDs, OHMDs) are known to facilitate situational awareness while accessing secondary information. However, information displayed on OHMDs can cause attention shifts, which distract users from natural social interactions. We hypothesize that information displayed in paracentral and near-peripheral vision can be better perceived while the user maintains eye contact during face-to-face conversations. Leveraging this idea, we designed a circular progress bar to provide progress updates in paracentral and near-peripheral vision. We compared it with textual and linear progress bars under two conversation settings: a simulated one with a digital conversation partner and a realistic one with a real partner. Results show that a circular progress bar can effectively reduce notification distractions without losing eye contact and is preferred by users. Our findings highlight the potential of utilizing paracentral and near-peripheral vision for secondary information presentation on OHMDs.
2
Switching Between Standard Pointing Methods with Current and Emerging Computer Form Factors
Margaret Jean Foley (University of Waterloo, Waterloo, Ontario, Canada), Quentin Roy (University of Waterloo, Waterloo, Ontario, Canada), Da-Yuan Huang (Huawei Canada, Markham, Ontario, Canada), Wei Li (Huawei Canada, Markham, Ontario, Canada), Daniel Vogel (University of Waterloo, Waterloo, Ontario, Canada)
We investigate performance characteristics when switching between four pointing methods: absolute touch, absolute pen, relative mouse, and relative trackpad. The established "subtraction method" protocol used in mode-switching studies is extended to test pairs of methods and accommodate switch direction, multiple baselines, and controlling relative cursor position. A first experiment examines method switching on and around the horizontal surface of a tablet. Results find switching between pen and touch is fastest, and switching between relative and absolute methods incurs additional time penalty. A second experiment expands the investigation to an emerging foldable all-screen laptop form factor where switching also occurs on an angled surface and along a smoothly curved hinge. Results find switching between trackpad and touch is fastest, with all switching times generally higher. Our work contributes missing empirical evidence for switching performance using modern input methods, and our results can inform interaction design for current and emerging device form factors.
2
Prevalence and Salience of Problematic Microtransactions in Top-Grossing Mobile and PC Games: A Content Analysis of User Reviews
Elena Petrovskaya (University of York, York, United Kingdom), Sebastian Deterding (University of York, York, United Kingdom), David I. Zendle (University of York, York, North Yorkshire, United Kingdom)
Microtransactions have become a major monetisation model in digital games, shaping their design, impacting their player experience, and raising ethical concerns. Research in this area has chiefly focused on loot boxes, which begs the question whether other microtransactions might actually be more relevant and problematic for players. We therefore conducted a content analysis of negative player reviews (n=801) of top-grossing mobile and desktop games to determine which problematic microtransactions are most prevalent and salient for players. We found that mobile games featured more frequent and more varied problematic monetisation techniques than desktop games. Across both, players minded issues related to fairness, transparency, and degraded user experience, supporting prior theoretical work, and, importantly, took issue with monetisation-driven design as such. We identify future research needs on why microtransactions in particular spark this critique, and which player communities it may be more or less representative of.
2
"I don't want to feel like I'm working in a 1960s factory": The Practitioner Perspective on Creativity Support Tool Adoption
Srishti Palani (Autodesk Research, Toronto, Ontario, Canada), David Ledo (Autodesk Research, Toronto, Ontario, Canada), George Fitzmaurice (Autodesk Research, Toronto, Ontario, Canada), Fraser Anderson (Autodesk Research, Toronto, Ontario, Canada)
With the rapid development of creativity support tools, creative practitioners (e.g., designers, artists, architects) have to constantly explore and adopt new tools into their practice. While HCI research has focused on developing novel creativity support tools, little is known about creative practitioners' values when exploring and adopting these tools. We collect and analyze 23 videos, 13 interviews, and 105 survey responses of creative practitioners reflecting on their values to derive a value framework. We find that practitioners value the tools' functionality, integration into their current workflow, performance, user interface and experience, learning support, costs, and emotional connection, in that order. They largely discover tools through personal recommendations. To help unify and encourage reflection from the wider community of CST stakeholders (e.g., systems creators, researchers, marketers, educators), we situate the framework within existing research on systems, creativity support tools, and technology adoption.
2
Do You See What You Mean? Using Predictive Visualizations to Reduce Optimism in Duration Estimates
Morgane Koval (CNRS, ISIR, Paris, France), Yvonne Jansen (Sorbonne Université, CNRS, ISIR, Paris, France)
Making time estimates, such as how long a given task might take, frequently leads to inaccurate predictions because of an optimistic bias. Previous attempts to alleviate this bias, including decomposing the task into smaller components and listing potential surprises, have not shown any major improvement. This article builds on the premise that these procedures may have failed because they involve compound probabilities and mixture distributions which are difficult to compute in one's head. We hypothesize that predictive visualizations of such distributions would facilitate the estimation of task durations. We conducted a crowdsourced study in which 145 participants provided different estimates of overall and sub-task durations and we used these to generate predictive visualizations of the resulting mixture distributions. We compared participants' initial estimates with their updated ones and found compelling evidence that predictive visualizations encourage less optimistic estimates.
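The paper's premise, that mixture distributions arising from sub-task estimates and occasional surprises are hard to compute in one's head but straightforward to simulate, can be illustrated with a small Monte Carlo sketch. This is an illustrative toy, not the authors' implementation; the triangular prior and the "surprise" mixture parameters are assumptions chosen for the example.

```python
import random

def predictive_durations(subtask_estimates, surprise_prob=0.1,
                         surprise_factor=2.0, n=10_000):
    """Sample a total-duration distribution from per-subtask
    (optimistic, likely, pessimistic) estimates, mixing in rare
    'surprise' delays that inflate a subtask's duration."""
    samples = []
    for _ in range(n):
        total = 0.0
        for low, likely, high in subtask_estimates:
            # triangular prior over each subtask's duration
            d = random.triangular(low, high, likely)
            if random.random() < surprise_prob:
                d *= surprise_factor  # occasional surprise delay
            total += d
        samples.append(total)
    return sorted(samples)

# Three sub-tasks estimated in minutes as (optimistic, likely, pessimistic)
est = [(10, 15, 30), (5, 10, 25), (20, 30, 60)]
dist = predictive_durations(est)
median = dist[len(dist) // 2]
p90 = dist[int(len(dist) * 0.9)]
```

Plotting `dist` as a histogram or gradient strip would give the kind of predictive visualization the study showed participants: the long right tail makes it visible why the "likely" sum of 55 minutes is an optimistic estimate.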
1
Supporting Serendipitous Discovery and Balanced Analysis of Online Product Reviews with Interaction-Driven Metrics and Bias-Mitigating Suggestions
Mahmood Jasim (University of Massachusetts Amherst, Amherst, Massachusetts, United States), Christopher Collins (Ontario Tech University, Oshawa, Ontario, Canada), Ali Sarvghad (University of Massachusetts Amherst, Amherst, Massachusetts, United States), Narges Mahyar (University of Massachusetts Amherst, Amherst, Massachusetts, United States)
In this study, we investigate how supporting serendipitous discovery and analysis of online product reviews can encourage readers to explore reviews more comprehensively prior to making purchase decisions. We propose two interventions: Exploration Metrics, which help readers understand and track their exploration patterns through visual indicators, and a Bias Mitigation Model, which aims to maximize knowledge discovery by suggesting sentiment- and semantically-diverse reviews. We designed, developed, and evaluated a text analytics system called Serendyze, in which we integrated these interventions. We asked 100 crowd workers to use Serendyze to make purchase decisions based on product reviews. Our evaluation suggests that exploration metrics enabled readers to efficiently cover more reviews in a balanced way, and suggestions from the bias mitigation model influenced readers to make confident data-driven decisions. We discuss the role of user agency and trust in text-level analysis systems and their applicability in domains beyond review exploration.
1
Enhanced Videogame Livestreaming by Reconstructing an Interactive 3D Game View for Spectators
Jeremy Hartmann (University of Waterloo, Waterloo, Ontario, Canada), Daniel Vogel (University of Waterloo, Waterloo, Ontario, Canada)
Many videogame players livestream their gameplay so remote spectators can watch for enjoyment, fandom, and to learn strategies and techniques. Current approaches capture the player's rendered RGB view of the game, and then encode and stream it as a 2D live video feed. We extend this basic concept by also capturing the depth buffer, camera pose, and projection matrix from the rendering pipeline of the videogame and packaging them all within an MPEG-4 media container. Combining these additional data streams with the RGB view, our system builds a real-time, cumulative 3D representation of the live game environment for spectators. This enables each spectator to individually control a personal game view in 3D, meaning they can watch the game from multiple perspectives, enabling a new kind of videogame spectatorship experience.
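The core geometric step implied by the abstract, lifting depth-buffer samples into a shared world-space scene using the camera pose, can be sketched as follows. This uses a simplified pinhole model in place of the game's full projection matrix, and every name and parameter here is an illustrative assumption, not the system's actual API.

```python
def unproject(u, v, depth, f, cx, cy, cam_rot, cam_pos):
    """Lift one depth-buffer sample at pixel (u, v) to a world-space
    point, given focal length f, principal point (cx, cy), a 3x3
    camera rotation matrix, and a camera position vector."""
    # back-project the pixel into camera space
    x = (u - cx) * depth / f
    y = (v - cy) * depth / f
    p_cam = (x, y, depth)
    # apply the camera pose: world = R * p_cam + t
    px = sum(cam_rot[0][i] * p_cam[i] for i in range(3)) + cam_pos[0]
    py = sum(cam_rot[1][i] * p_cam[i] for i in range(3)) + cam_pos[1]
    pz = sum(cam_rot[2][i] * p_cam[i] for i in range(3)) + cam_pos[2]
    return (px, py, pz)

# With an identity pose, the principal-point pixel at depth 5
# lands 5 units straight ahead of the camera
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
pt = unproject(320, 240, 5.0, 500.0, 320.0, 240.0, I, (0.0, 0.0, 0.0))
```

Accumulating such points frame by frame, as the in-game camera moves, is what lets the system grow a cumulative 3D reconstruction that spectators can orbit independently of the player's view.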
1
Exploring Spatial UI Transition Mechanisms with Head-Worn Augmented Reality
Feiyu Lu (Virginia Tech, Blacksburg, Virginia, United States), Yan Xu (Facebook, Redmond, Washington, United States)
Imagine a future in which people comfortably wear augmented reality (AR) displays all day: how do we design interfaces that adapt to contextual changes as people move around? In current operating systems, the majority of AR content defaults to staying at a fixed location until manually moved by the user. However, this approach puts the burden of user interface (UI) transition solely on users. In this paper, we first ran a bodystorming design workshop to capture the limitations of existing manual UI transition approaches in spatially diverse tasks. We then addressed these limitations by designing and evaluating three UI transition mechanisms with different levels of automation and controllability (low-effort manual, semi-automated, fully-automated). Furthermore, we simulated imperfect contextual awareness by introducing prediction errors with different costs to correct them. Our results provide valuable lessons about the trade-offs between UI automation levels, controllability, user agency, and the impact of prediction errors.
1
PneuMesh: Pneumatic-driven Truss-based Shape Changing System
Jianzhe Gu (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Yuyu Lin (Zhejiang University, Hangzhou, China), Qiang Cui (Tsinghua University, Beijing, China), Xiaoqian Li (Zhejiang University, Hangzhou, China), Jiaji Li (Zhejiang University, Hangzhou, China), Lingyun Sun (Zhejiang University, Hangzhou, China), Cheng Yao (Zhejiang University, Hangzhou, China), Fangtian Ying (Zhejiang University, Hangzhou, China), Guanyun Wang (Zhejiang University, Hangzhou, China), Lining Yao (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
From cross-sea bridges to large-scale installations, truss structures have been known for their structural stability and shape complexity. In addition to the advantages of static trusses, truss structures gain a large number of degrees of freedom to change shape when equipped with rotatable joints and retractable beams. However, it is difficult to design complex motions and build control systems for large numbers of trusses. In this paper, we present PneuMesh, a novel truss-based shape-changing system that is easy to design and build but still able to achieve a range of tasks. PneuMesh accomplishes this by introducing an air channel connection strategy and a reconfigurable constraint design that drastically decrease the number of control units without losing the complexity of shape change. We develop a design tool with real-time simulation to assist users in designing the shape and motion of truss-based shape-changing robots and devices. A design session with 7 participants demonstrated that PneuMesh empowers users to design and build truss structures with a wide range of shapes and various functional motions.
1
"I Wanted to See How Bad It Was": Online Self-screening as a Critical Transition Point Among Young Adults with Common Mental Health Conditions
Kaylee Payne Kruzan (Northwestern University, Chicago, Illinois, United States), Jonah Meyerhoff (Northwestern University, Chicago, Illinois, United States), Theresa Nguyen (Mental Health America, Alexandria, Virginia, United States), Madhu Reddy (University of California, Irvine, Irvine, California, United States), David C. Mohr (Northwestern University, Chicago, Illinois, United States), Rachel Kornfield (Northwestern University, Evanston, Illinois, United States)
Young adults have high rates of mental health conditions, yet they are the age group least likely to seek traditional treatment. They do, however, seek information about their mental health online, including by filling out online mental health screeners. To better understand online self-screening, and its role in help-seeking, we conducted focus groups with 50 young adults who voluntarily completed a mental health screener hosted on an advocacy website. We explored (1) catalysts for taking the screener, (2) anticipated outcomes, (3) reactions to the results, and (4) desired next steps. For many participants, the screener results validated their lived experiences of symptoms, but they were nevertheless unsure how to use the information to improve their mental health moving forward. Our findings suggest that online screeners can serve as a transition point in young people's mental health journeys. We discuss design implications for online screeners, post-screener feedback, and digital interventions broadly.
1
Katika: An End-to-End System for Authoring Amateur Explainer Motion Graphics Videos
Amir Jahanlou (Simon Fraser University, Surrey, British Columbia, Canada)Parmit K. Chilana (Simon Fraser University, Burnaby, British Columbia, Canada)
Explainer motion graphics videos that use a combination of graphical elements and movement to convey a visual message are becoming increasingly popular among amateur creators in different domains. However, to author motion graphics videos, amateurs either have to face a steep learning curve with professional design tools or struggle to re-purpose slide-sharing tools that are easier to access but have limited animation capabilities. To simplify the process of motion graphics authoring, we present the design and implementation of Katika, an end-to-end system for creating shots based on a script, adding artworks and animation from a crowdsourced library, and editing the video using semi-automated transitions. Our observational study illustrates that participants (N=11) enjoyed using Katika and, within a one-hour session, managed to create an explainer motion graphics video. We identify opportunities for future HCI research to lower the barriers to entry and democratize the authoring of motion graphics videos.
1
Dually Noted: Layout-Aware Annotations with Smartphone Augmented Reality
Jing Qian (Brown University, Providence, Rhode Island, United States)Qi Sun (New York University, New York, New York, United States)Curtis Wigington (Adobe Research, San Jose, California, United States)Han L.. Han (Université Paris-Saclay, CNRS, Inria, Orsay, France)Tong Sun (Adobe Research, San Jose, California, United States)Jennifer Healey (Adobe Research, San Jose, California, United States)James Tompkin (Brown University, Providence, Rhode Island, United States)Jeff Huang (Brown University, Providence, Rhode Island, United States)
Sharing annotations encourages feedback, discussion, and knowledge passing among readers and can be beneficial for personal and public use. Prior augmented reality (AR) systems have expanded these benefits to both digital and printed documents. However, despite smartphone AR now being widely available, there is a lack of research about how to use AR effectively for interactive document annotation. We propose Dually Noted, a smartphone-based AR annotation system that recognizes the layout of structural elements in a printed document for real-time authoring and viewing of annotations. We conducted experience prototyping with eight users to elicit potential benefits and challenges within smartphone AR, and this informed the resulting Dually Noted system and annotation interactions with the document elements. AR annotation is often unwieldy, but in a 12-user empirical study our novel structural understanding component allowed Dually Noted to improve precise highlighting and annotation interaction accuracy by 13%, increase interaction speed by 42%, and significantly lower cognitive load over a baseline method without document layout understanding. Qualitatively, participants commented that Dually Noted offered a swift and portable annotation experience. Overall, our research provides new methods and insights for improving AR annotations for physical documents.
1
The TAC Toolkit: Supporting Design for User Acceptance of Health Technologies from a Macro-Temporal Perspective
Camille Nadal (Trinity College Dublin, Dublin, Ireland)Shane McCully (Trinity College Dublin, Dublin, Ireland)Kevin Doherty (Technical University of Denmark, Copenhagen, Denmark)Corina Sas (Lancaster University, Lancaster, United Kingdom)Gavin Doherty (Trinity College Dublin, Dublin, Ireland)
User acceptance is key for the successful uptake and use of health technologies, but also impacted by numerous factors not always easily accessible nor operationalised by designers in practice. This work seeks to facilitate the application of acceptance theory in design practice through the Technology Acceptance (TAC) toolkit: a novel theory-based design tool and method comprising 16 cards, 3 personas, 3 scenarios, a virtual think-space, and a website, which we evaluated through workshops conducted with 21 designers of health technologies. Findings showed that the toolkit revised and extended designers' knowledge of technology acceptance, fostered their appreciation, empathy and ethical values while designing for acceptance, and contributed towards shaping their future design practice. We discuss implications for considering user acceptance a dynamic, multi-stage process in design practice, and better supporting designers in imagining distant acceptance challenges. Finally, we examine the generative value of the TAC toolkit and its possible future evolution.
1
Neo: Generalizing Confusion Matrix Visualization to Hierarchical and Multi-Output Labels
Jochen Görtler (University of Konstanz, Konstanz, Germany)Fred Hohman (Apple, Seattle, Washington, United States)Dominik Moritz (Apple, Pittsburgh, Pennsylvania, United States)Kanit Wongsuphasawat (Apple, Seattle, Washington, United States)Donghao Ren (Apple, Seattle, Washington, United States)Rahul Nair (Apple, Heidelberg, Germany)Marc Kirchner (Apple, Heidelberg, Germany)Kayur Patel (Apple, Seattle, Washington, United States)
The confusion matrix, a ubiquitous visualization for helping people evaluate machine learning models, is a tabular layout that compares predicted class labels against actual class labels over all data instances. We conduct formative research with machine learning practitioners at Apple and find that conventional confusion matrices do not support more complex data-structures found in modern-day applications, such as hierarchical and multi-output labels. To express such variations of confusion matrices, we design an algebra that models confusion matrices as probability distributions. Based on this algebra, we develop Neo, a visual analytics system that enables practitioners to flexibly author and interact with hierarchical and multi-output confusion matrices, visualize derived metrics, renormalize confusions, and share matrix specifications. Finally, we demonstrate Neo's utility with three model evaluation scenarios that help people better understand model performance and reveal hidden confusions.
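As a minimal, generic illustration of the conventional confusion matrix this abstract describes (counts of actual vs. predicted labels over all instances), the sketch below computes one from scratch; it is not Neo's algebra or API, and the `confusion_matrix` helper and labels are hypothetical names chosen for the example:

```python
# Sketch of a conventional confusion matrix: rows index actual labels,
# columns index predicted labels, cells count (actual, predicted) pairs.
from collections import Counter

def confusion_matrix(actual, predicted, labels):
    counts = Counter(zip(actual, predicted))
    return [[counts[(a, p)] for p in labels] for a in labels]

labels = ["cat", "dog"]
actual = ["cat", "cat", "dog", "dog"]
predicted = ["cat", "dog", "dog", "dog"]
print(confusion_matrix(actual, predicted, labels))  # → [[1, 1], [0, 2]]
```

Hierarchical or multi-output labels break this flat layout, since each instance then carries a label path or a set of labels rather than a single class, which is the gap the abstract says Neo addresses.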
1
PITAS: Sensing and Actuating Embedded Robotic Sheet for Physical Information Communication
Tingyu Cheng (Interactive Computing, Atlanta, Georgia, United States)Jung Wook Park (Georgia Institute of Technology, Atlanta, Georgia, United States)Jiachen Li (Georgia Institute of Technology, Atlanta, Georgia, United States)Charles Ramey (Georgia Institute of Technology, Atlanta, Georgia, United States)Hongnan Lin (Georgia Institute of Technology, Atlanta, Georgia, United States)Gregory D. Abowd (Northeastern University, Boston, Massachusetts, United States)Carolina Brum Medeiros (Flipr Sensing, Toronto, Ontario, Canada)HyunJoo Oh (Georgia Institute of Technology, Atlanta, Georgia, United States)Marcello Giordano (Facebook Reality Labs, Toronto, Ontario, Canada)
This work presents PITAS, a thin-sheet robotic material composed of a reversible phase transition actuating layer and a heating/sensing layer. The synthetic sheet material enables non-expert makers to create shape-changing devices that can locally or remotely convey physical information such as shape, color, texture and temperature changes. PITAS sheets can be manipulated into various 2D shapes or 3D geometries using subtractive fabrication methods such as laser, vinyl, or manual cutting or an optional additive 3D printing method for creating 3D objects. After describing the design of PITAS, this paper also describes a study conducted with thirteen makers to gauge the accessibility, design space, and limitations encountered when PITAS is used as a soft robotic material while designing physical information communication devices. Lastly, this work reports on the results of a mechanical and electrical evaluation of PITAS and presents application examples to demonstrate its utility.
1
Get To The Point! Problem-Based Curated Data Views To Augment Care For Critically Ill Patients
Minfan Zhang (University of Toronto, Toronto, Ontario, Canada)Daniel Ehrmann (Hospital for Sick Children, Toronto, Ontario, Canada)Mjaye Mazwi (Hospital for Sick Children, Toronto, Ontario, Canada)Danny Eytan (Hospital for Sick Children, Toronto, Ontario, Canada)Marzyeh Ghassemi (MIT, Cambridge, Massachusetts, United States)Fanny Chevalier (University of Toronto, Toronto, Ontario, Canada)
Electronic health records in critical care medicine offer unprecedented opportunities for clinical reasoning and decision making. Paradoxically, these data-rich environments have also resulted in clinical decision support systems (CDSSs) that fit poorly into clinical contexts and increase health workers' cognitive load. In this paper, we introduce a novel approach to designing CDSSs that are embedded in clinical workflows, by presenting problem-based curated data views tailored for problem-driven discovery, team communication, and situational awareness. We describe the design and evaluation of one such CDSS, In-Sight, that embodies our approach and addresses the clinical problem of monitoring critically ill pediatric patients. Our work is the result of a co-design process, further informed by empirical data collected through formal usability testing, focus groups, and a simulation study with domain experts. We discuss the potential and limitations of our approach, and share lessons learned in our iterative co-design process.
1
ElectriPop: Low-Cost, Shape-Changing Displays Using Electrostatically Inflated Mylar Sheets
Cathy Mengying Fang (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Jianzhe Gu (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Lining Yao (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Chris Harrison (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
We describe how sheets of metalized mylar can be cut and then “inflated” into complex 3D forms with electrostatic charge for use in digitally-controlled, shape-changing displays. This is achieved by placing and nesting various cuts, slits and holes such that mylar elements repel from one another to reach an equilibrium state. Importantly, our technique is compatible with industrial and hobbyist cutting processes, from die and laser cutting to handheld exacto-knives and scissors. Given that mylar film costs <$1 per m^2, we can create self-actuating 3D objects for just a few cents, opening new uses in low-cost consumer goods. We describe a design vocabulary, interactive simulation tool, fabrication guide, and proof-of-concept electrostatic actuation hardware. We detail our technique's performance metrics along with qualitative feedback from a design study. We present numerous examples generated using our pipeline to illustrate the rich creative potential of our method.
1
A Conversational Approach for Modifying Service Mashups in IoT Environments
Sanghoon Kim (Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, Republic of)In-Young Ko (Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, Republic of)
Although it is common for users to modify service mashups in Internet of Things (IoT) environments, existing conversational approaches for IoT service mashup do not support modification because of the usability challenges involved. To support the modification of IoT service mashups through conversational interfaces in a usable manner, we propose the conversational mashup modification agent (CoMMA). Users can modify IoT service mashups using CoMMA through natural language conversations. CoMMA has a two-step mashup modification interaction: an implicature-based localization step, and a modification step with a disambiguation strategy. The localization step allows users to easily search for a mashup by vocalizing their expressions in the environment. The modification step supports users in modifying mashups by speaking simple modification commands. We conducted a user study and the results show that CoMMA is as effective as visual approaches in terms of task completion time and perceived task workload for modifying IoT service mashups.
1
ShapeFindAR: Exploring In-Situ Spatial Search for Physical Artifact Retrieval using Mixed Reality
Evgeny Stemasov (Ulm University, Ulm, Germany)Tobias Wagner (Ulm University, Ulm, Germany)Jan Gugenheimer (Institut Polytechnique de Paris, Paris, France)Enrico Rukzio (University of Ulm, Ulm, Germany)
Personal fabrication is made more accessible through repositories like Thingiverse, as they replace modeling with retrieval. However, they require users to translate spatial requirements to keywords, which paints an incomplete picture of physical artifacts: proportions or morphology are non-trivially encoded through text only. We explore a vision of in-situ spatial search for (future) physical artifacts, and present ShapeFindAR, a mixed-reality tool to search for 3D models using in-situ sketches blended with textual queries. With ShapeFindAR, users search for geometry, and not necessarily precise labels, while coupling the search process to the physical environment (e.g., by sketching in-situ, extracting search terms from objects present, or tracing them). We developed ShapeFindAR for HoloLens 2, connected to a database of 3D-printable artifacts. We specify in-situ spatial search, describe its advantages, and present walkthroughs using ShapeFindAR, which highlight novel ways for users to articulate their wishes, without requiring complex modeling tools or profound domain knowledge.
1
VisGuide: User-Oriented Recommendations for Data Event Extraction
Yu-Rong Cao (National Yang Ming Chiao Tung University, Hsinchu, Taiwan)Xiao Han Li (Multimedia Engineering, Hsinchu, Taiwan)Jia-Yu Pan (Google, Mountain View, California, United States)Wen-Chieh Lin (National Yang Ming Chiao Tung University, Hsinchu, Taiwan)
Data exploration systems have become popular tools with which data analysts and others can explore raw data and organize their observations. However, users of such systems who are unfamiliar with their datasets face several challenges when trying to extract data events of interest to them. Those challenges include progressively discovering informative charts, organizing them into a logical order to depict a meaningful fact, and arranging one or more facts to illustrate a data event. To alleviate these challenges, we propose VisGuide, a data exploration system that generates personalized recommendations to aid users’ discovery of data events in breadth and depth by incrementally learning their data exploration preferences and recommending meaningful charts tailored to them. In addition to user preferences, VisGuide’s recommendations simultaneously consider sequence organization and chart presentation. We conducted two user studies to evaluate (1) the usability of VisGuide and (2) user satisfaction with its recommendation system. The results of those studies indicate that VisGuide can effectively help users create coherent and user-oriented visualization trees that represent meaningful data events.
1
Promptiverse: Scalable Generation of Scaffolding Prompts Through Human-AI Hybrid Knowledge Graph Annotation
Yoonjoo Lee (KAIST, Daejeon, Korea, Republic of)John Joon Young Chung (University of Michigan, Ann Arbor, Michigan, United States)Tae Soo Kim (KAIST, Daejeon, Korea, Republic of)Jean Y. Song (DGIST, Daegu, Korea, Republic of)Juho Kim (KAIST, Daejeon, Korea, Republic of)
Online learners are hugely diverse with varying prior knowledge, but most instructional videos online are created to be one-size-fits-all. Thus, learners may struggle to understand the content by only watching the videos. Providing scaffolding prompts can help learners overcome these struggles through questions and hints that relate different concepts in the videos and elicit meaningful learning. However, serving diverse learners would require a spectrum of scaffolding prompts, which incurs high authoring effort. In this work, we introduce Promptiverse, an approach for generating diverse, multi-turn scaffolding prompts at scale, powered by numerous traversal paths over knowledge graphs. To facilitate the construction of the knowledge graphs, we propose a hybrid human-AI annotation tool, Grannotate. In our study (N=24), participants produced 40 times more on-par quality prompts with higher diversity, through Promptiverse and Grannotate, compared to hand-designed prompts. Promptiverse presents a model for creating diverse and adaptive learning experiences online.
1
VRception: Rapid Prototyping of Cross-Reality Systems in Virtual Reality
Uwe Gruenefeld (University of Duisburg-Essen, Essen, Germany)Jonas Auda (University of Duisburg-Essen, Essen, Germany)Florian Mathis (University of Glasgow, Glasgow, United Kingdom)Stefan Schneegass (University of Duisburg-Essen, Essen, Germany)Mohamed Khamis (University of Glasgow, Glasgow, United Kingdom)Jan Gugenheimer (Institut Polytechnique de Paris, Paris, France)Sven Mayer (LMU Munich, Munich, Germany)
Cross-reality systems empower users to transition along the reality-virtuality continuum or collaborate with others experiencing different manifestations of it. However, prototyping these systems is challenging, as it requires sophisticated technical skills, time, and often expensive hardware. We present VRception, a concept and toolkit for quick and easy prototyping of cross-reality systems. By simulating all levels of the reality-virtuality continuum entirely in Virtual Reality, our concept overcomes the asynchronicity of realities, eliminating technical obstacles. Our VRception Toolkit leverages this concept to allow rapid prototyping of cross-reality systems and easy remixing of elements from all continuum levels. We replicated six cross-reality papers using our toolkit and presented them to their authors. Interviews with them revealed that our toolkit sufficiently replicates their core functionalities and allows quick iterations. Additionally, remote participants used our toolkit in pairs to collaboratively implement prototypes in about eight minutes that they would have otherwise expected to take days.
1
Janus Screen: A Screen with Switchable Projection Surfaces Using Wire Grid Polarizer
Wataru Yamada (NTT DOCOMO, INC., Tokyo, Japan)Sawa Korogi (NTT DOCOMO, INC., Tokyo, Japan)Keiichi Ochiai (NTT DOCOMO, INC., Tokyo, Japan)
In this paper, we present a novel screen system employing polarizers that allow switching of the projection surface to the front, rear, or both sides using only two projectors on one side. In this system, we propose a method that employs two projectors equipped with polarizers and a multi-layered screen comprising an anti-reflective plate, transparent screen, and wire grid polarizer. The multi-layered screen changes whether the projected image is shown on the front or rear side of the screen depending on the polarization direction of the incident light. Hence, the proposed method can project images on the front, rear, or both sides of the screen by projecting images from either or both projectors using polarizers. In addition, the proposed method can be easily deployed by simply attaching multiple optical films. We implement a prototype and confirm that the proposed method can selectively switch the projection surface.
1
"It Puts Life into My Creations": Understanding Fluid Fiber as a Media for Expressive Display
Guanhong Liu (Tsinghua University, Beijing, China)Haiqing Xu (Tsinghua University, Beijing, China)Xianghua (Sharon) Ding (Fudan University, Shanghai, China)Mingyue Gao (Tsinghua University, Beijing, China)Bowen Li (Parsons School of Design, New York, New York, United States)Fushen Ruan (Beijing Film Academy, Beijing, China)Haipeng Mi (Tsinghua University, Beijing, China)
Fluid fiber, with fluid flowing in a tube, is an attractive material for flexibly and dynamically displaying digital information. While it has recently attracted attention from the HCI field, there is currently little knowledge about this material, with existing work limited to controlling the position of droplets to present representational information such as letters and numbers. To develop a broader and deeper understanding of this material and its potential for display design, we conducted a study based on a design workshop where art and design practitioners engaged in creation practice with a toolkit we designed and developed. The toolkit includes hardware components for controlling bubbles and droplets and a GUI design tool for arranging the fluid layout. Our research reveals the structural and expressive affordances of such a fluid fiber for displaying information, highlighting the unique value of fluidity as an intuitive form to express life, emotion, movement, and change.
1
Logic Bonbon: Exploring Food as Computational Artifact
Jialin Deng (Monash University, Melbourne, Victoria, Australia)Patrick Olivier (Monash University, Melbourne, Victoria, Australia)Josh Andres (The Australian National University, Canberra, Australian Capital Territory, Australia)Kirsten Ellis (Monash University, Melbourne, Victoria, Australia)Ryan Wee (Monash University, Melbourne, Victoria, Australia)Florian ‘Floyd’ Mueller (Monash University, Melbourne, Victoria, Australia)
In recognition of food’s significant experiential pleasures, culinary practitioners and designers are increasingly exploring novel combinations of computing technologies and food. However, despite many creative endeavors, proposals and prototypes have so far largely maintained a traditional divide, treating food and technology as separate entities. In contrast, we present a “Research through Design” exploration of the notion of food as computational artifact, wherein food itself is the material of computation. We describe the Logic Bonbon, a dessert that can hydrodynamically regulate its flavor via a fluidic logic system. Through a study of experiencing the Logic Bonbon and reflection on our design practice, we offer a provisional account of how food as computational artifact can mediate new interactions through a novel approach to food-computation integration that promotes an enriched future of Human-Food Interaction.
1
(Re)discovering the Physical Body Online: Strategies and Challenges to Approach Non-Cisgender Identity in Social Virtual Reality
Guo Freeman (Clemson University, Clemson, South Carolina, United States)Divine Maloney (Clemson University, Clemson, South Carolina, United States)Dane Acena (Clemson University, Clemson, South Carolina, United States)Catherine Barwulor (Clemson University, Clemson, South Carolina, United States)
The contemporary understanding of gender continues to highlight the complexity and variety of gender identities beyond a binary dichotomy regarding one’s biological sex assigned at birth. The emergence and popularity of various online social spaces also makes the digital presentation of gender even more sophisticated. In this paper, we use non-cisgender as an umbrella term to describe diverse gender identities that do not match people’s sex assigned at birth, including Transgender, Genderfluid, and Non-binary. We especially explore non-cisgender individuals’ identity practices and their challenges in novel social Virtual Reality (VR) spaces where they can present, express, and experiment with their identity in ways that traditional online social spaces cannot provide. We provide some of the first empirical evidence of how social VR platforms may introduce new and novel phenomena and practices of approaching diverse gender identities online. We also contribute to re-conceptualizing technology-supported identity practices by highlighting the role of (re)discovering the physical body online, and inform the design of the emerging metaverse for supporting diverse gender identities in the future.
1
Exploring Perceptions of Cross-Sectoral Data Sharing with People with Parkinson’s
Roisin McNaney (Monash University, Melbourne, Australia)Catherine Morgan (University of Bristol, Bristol, United Kingdom)Pranav Kulkarni (Monash University, Melbourne, VIC, Australia)Julio Vega (University of Pittsburgh, Pittsburgh, Pennsylvania, United States)Farnoosh Heidarivincheh (University of Bristol, Bristol, United Kingdom)Ryan McConville (University of Bristol, Bristol, United Kingdom)Alan Whone (University of Bristol, Bristol, United Kingdom)Mickey Kim (University of Bristol, Bristol, United Kingdom)Reuben Kirkham (Monash University, Melbourne, Australia)Ian Craddock (University of Bristol, Bristol, United Kingdom)
In interdisciplinary spaces such as digital health, datasets that are complex to collect, require specialist facilities, and/or are collected with specific populations have value in a range of different sectors. In this study we collected a simulated free-living dataset, in a smart home, with 12 participants (six people with Parkinson’s, six carers). We explored their initial perceptions of the sensors through interviews and then conducted two data exploration workshops, wherein we showed participants the collected data and discussed their views on how this data, and other data relating to their Parkinson’s symptoms, might be shared across different sectors. We provide recommendations around how participants might be better engaged in considering data sharing in the early stages of research, and guidance for how research might be configured to allow for more informed data sharing practices in the future.
1
SilentSpeller: Towards mobile, hands-free, silent speech text entry using electropalatography
Naoki Kimura (The University of Tokyo, Bunkyo, Tokyo, Japan)Tan Gemicioglu (Georgia Institute of Technology, Atlanta, Georgia, United States)Jonathan Womack (Georgia Institute of Technology, Atlanta, Georgia, United States)Yuhui Zhao (Georgia Institute of Technology, Atlanta, Georgia, United States)Richard Li (University of Washington, Seattle, Washington, United States)Abdelkareem Bedri (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Zixiong Su (The University of Tokyo, Tokyo, Japan)Alex Olwal (Google Inc., Mountain View, California, United States)Jun Rekimoto (The University of Tokyo, Tokyo, Japan)Thad Starner (Georgia Institute of Technology, Atlanta, Georgia, United States)
Speech is inappropriate in many situations, limiting when voice control can be used. Most unvoiced speech text entry systems cannot be used while on the go due to movement artifacts. Using a dental retainer with capacitive touch sensors, SilentSpeller tracks tongue movement, enabling users to type by spelling words without voicing. SilentSpeller achieves an average 97% character accuracy in offline isolated word testing on a 1164-word dictionary. Walking has little effect on accuracy; average offline character accuracy was roughly equivalent on 107 phrases entered while walking (97.5%) or seated (96.5%). To demonstrate extensibility, the system was tested on 100 unseen words, leading to an average 94% accuracy. Live text entry speeds for seven participants averaged 37 words per minute at 87% accuracy. Comparing silent spelling to current practice suggests that SilentSpeller may be a viable alternative for silent mobile text entry.
1
Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces
Ryo Suzuki (University of Calgary, Calgary, Alberta, Canada)Adnan Karim (University of Calgary, Calgary, Alberta, Canada)Tian Xia (University of Calgary, Calgary, Alberta, Canada)Hooman Hedayati (University of Colorado Boulder, Boulder, Colorado, United States)Nicolai Marquardt (University College London, London, United Kingdom)
This paper contributes a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, research often remains focused on individual explorations and key design strategies, and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field along the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.
1
Evaluating Singing for Computer Input Using Pitch, Interval, and Melody
Graeme Zinck (University of Waterloo, Waterloo, Ontario, Canada)Daniel Vogel (University of Waterloo, Waterloo, Ontario, Canada)
In voice-based interfaces, non-verbal features represent a simple and underutilized design space for hands-free, language-agnostic interactions. We evaluate the performance of three fundamental types of voice-based musical interactions: pitch, interval, and melody. These interactions involve singing or humming a sequence of one or more notes. A 21-person study evaluates the feasibility and enjoyability of these interactions. The top performing participants were able to perform all interactions reasonably quickly (<5s) with average error rates between 1.3% and 8.6% after training. Others improved with training but still had error rates as high as 46% for pitch and melody interactions. The majority of participants found all tasks enjoyable. Using these results, we propose design considerations for using singing interactions as well as potential use cases for both standard computers and augmented reality glasses.
1
Electrical Head Actuation: Enabling Interactive Systems to Directly Manipulate Head Orientation
Yudai Tanaka (University of Chicago, Chicago, Illinois, United States)Jun Nishida (University of Chicago, Chicago, Illinois, United States)Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose a novel interface concept in which interactive systems directly manipulate the user’s head orientation. We implement this using electrical-muscle-stimulation (EMS) of the neck muscles, which turns the head around its yaw (left/right) and pitch (up/down) axis. As the first exploration of EMS for head actuation, we characterized which muscles can be robustly actuated. Second, we evaluated the accuracy of our system for actuating participants' head orientation towards static targets and trajectories. Third, we demonstrated how it enables interactions not possible before by building a range of applications, such as (1) synchronizing head orientations of two users, which enables a user to communicate head nods to another user while listening to music, and (2) directly changing the user's head orientation to locate objects in AR. Finally, in our second study, participants felt that our head actuation contributed positively to their experience in four distinct applications.
1
HydroMod: Constructive Modules for Prototyping Hydraulic Physical Interfaces
Takafumi Morita (The University of Tokyo, Tokyo, Japan)Yu Kuwajima (Shibaura Institute of Technology, Koto, Tokyo, Japan)Ayato Minaminosono (Shibaura Institute of Technology, Tokyo, Japan)Shingo Maeda (Shibaura Institute of Technology, Tokyo, Japan)Yasuaki Kakehi (The University of Tokyo, Tokyo, Japan)
In recent years, actuators that handle fluids such as gases and liquids have been attracting attention for their applications in soft robots and shape-changing interfaces. In the field of HCI, there have been various inflatable prototyping tools that utilize air control; however, very few tools for liquid control have been developed. In this study, we propose HydroMod, a set of new constructive modules that can easily generate and programmatically control liquid flow, with the aim of lowering the barrier to entry for prototyping with liquids. HydroMod consists of palm-sized modules, which generate liquid flow via the electrohydrodynamics (EHD) phenomenon when the modules are simply connected together. Moreover, users can configure and control the flow path by simply recombining the modules. In this paper, we propose the design of the modules, evaluate the performance of HydroMod as a fluid system, and show possible application scenarios for fluid prototyping using this system.
1
Understanding Gesture Input Articulation with Upper-Body Wearables for Users with Upper-Body Motor Impairments
Radu-Daniel Vatavu (Ștefan cel Mare University of Suceava, Suceava, Romania)Ovidiu-Ciprian Ungurean (Ștefan cel Mare University of Suceava, Suceava, Romania)
We examine touchscreen stroke-gestures and mid-air motion-gestures articulated by users with upper-body motor impairments with devices worn on the wrist, finger, and head. We analyze users' gesture input performance in terms of production time, articulation consistency, and kinematic measures, and contrast the performance of users with upper-body motor impairments with that of a control group of users without impairments. Our results, from two datasets of 7,290 stroke-gestures and 3,809 motion-gestures collected from 28 participants, reveal that users with upper-body motor impairments take twice as much time to produce stroke-gestures on wearable touchscreens compared to users without impairments, but articulate motion-gestures equally fast and with similar acceleration. We interpret our findings in the context of ability-based design and propose ten implications for accessible gesture input with upper-body wearables for users with upper-body motor impairments.
1
ComputableViz: Mathematical Operators as a Formalism for Visualization Processing and Analysis
Aoyu Wu (Hong Kong University of Science and Technology, Hong Kong, China)Wai Tong (The Hong Kong University of Science and Technology, Hong Kong, China)Haotian Li (The Hong Kong University of Science and Technology, Hong Kong, China)Dominik Moritz (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Yong Wang (Singapore Management University, Singapore, Singapore, Singapore)Huamin Qu (The Hong Kong University of Science and Technology, Hong Kong, China)
Data visualizations are created and shared on the web at an unprecedented speed, raising new needs and questions for processing and analyzing visualizations after they have been generated and digitized. However, existing formalisms focus on operating on a single visualization instead of multiple visualizations, making it challenging to perform analysis tasks such as sorting and clustering visualizations. Through a systematic analysis of previous work, we abstract visualization-related tasks into mathematical operators such as union and propose a design space of visualization operations. We realize the design by developing ComputableViz, a library that supports operations on multiple visualization specifications. To demonstrate its usefulness and extensibility, we present multiple usage scenarios concerning processing and analyzing visualization, such as generating visualization embeddings and automatically making visualizations accessible. We conclude by discussing research opportunities and challenges for managing and exploiting the massive visualizations on the web.
1
Shape-Haptics: Planar & Passive Force Feedback Mechanisms for Physical Interfaces
Clement Zheng (National University of Singapore, Singapore, Singapore, Singapore)Zhen Zhou Yong (National University of Singapore, Singapore, Singapore, Singapore)Hongnan Lin (Georgia Institute of Technology , Atlanta, Georgia, United States)HyunJoo Oh (Georgia Institute of Technology, Atlanta, Georgia, United States)Ching Chiuan Yen (National University of Singapore, Singapore, Singapore, Singapore)
We present Shape-Haptics, an approach that enables designers to rapidly design and fabricate passive force feedback mechanisms for physical interfaces. Such mechanisms are used in everyday interfaces and tools, and they are challenging to design. Shape-Haptics abstracts and broadens the haptic expression of this class of force feedback systems through 2D laser-cut configurations that are simple to fabricate. They leverage the properties of polyoxymethylene plastic and comprise a compliant spring structure that engages with a sliding profile during tangible interaction. By shaping the sliding profile, designers can easily customize the haptic force feedback delivered by the mechanism. We provide a computational design sandbox to help designers explore and fabricate Shape-Haptics mechanisms. We also propose a series of applications that demonstrate the utility of Shape-Haptics in creating and customizing haptics for different physical interfaces.
1
Consent in the Age of AR: Investigating The Comfort With Displaying Personal Information in Augmented Reality
Jan Ole Rixen (Institute of Media Informatics, Ulm, Germany)Mark Colley (Ulm University, Ulm, Germany)Ali Askari (Ulm University, Ulm, Germany)Jan Gugenheimer (Institut Polytechnique de Paris, Paris, France)Enrico Rukzio (University of Ulm, Ulm, Germany)
Social Media (SM) has shown that we adapt our communication and disclosure behaviors to available technological opportunities. Head-mounted Augmented Reality (AR) will soon make it possible to effortlessly display the information we disclose not isolated from our physical presence (e.g., on a smartphone) but visually attached to the human body. In this work, we explore how the medium (AR vs. smartphone), our role (being augmented vs. augmenting), and characteristics of information types (e.g., level of intimacy, self-disclosed vs. non-self-disclosed) impact users' comfort when displaying personal information. Conducting an online survey (N=148), we found that AR technology and being augmented negatively impacted this comfort. Additionally, we report that the effects of information characteristics were weaker in AR than on smartphones. In light of our results, we argue that information augmentation should be built on consent and openness, focusing more on the comfort of the augmented than on the technological possibilities.