List of Notable Papers

Up to the top 30 papers in each category are shown.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

3
O&O: A DIY toolkit for designing and rapid prototyping olfactory interfaces
Yuxuan Lei (Tsinghua University, Beijing, China)Qi Lu (Tsinghua University, Beijing, China)Yingqing Xu (Tsinghua University, Beijing, China)
Constructing olfactory interfaces on demand requires significant design proficiency and engineering effort. The absence of powerful and convenient tools that reduce innovation complexity poses obstacles for future research in the area. To address this problem, we propose O&O, a modular olfactory interface DIY toolkit. The toolkit consists of: (1) a scent generation kit, a set of electronics and accessories that supports three common scent vaporization techniques; (2) a module construction kit, a set of primitive cardboard modules for assembling permutable functional structures; (3) a design manual, a step-by-step design thinking framework that directs the decision-making and prototyping process. We organized a formal workshop with 19 participants and four solo DIY trials to evaluate the capability of the toolkit, the overall user engagement, the creations in both sessions, and the iterative suggestions. Finally, we discuss design implications and future opportunities for further research.
3
Shaping Textile Sliders: An Evaluation of Form Factors and Tick Marks for Textile Sliders
Oliver Nowak (RWTH Aachen University, Aachen, Germany)René Schäfer (RWTH Aachen University, Aachen, Germany)Anke Brocker (RWTH Aachen University, Aachen, Germany)Philipp Wacker (RWTH Aachen University, Aachen, Germany)Jan Borchers (RWTH Aachen University, Aachen, Germany)
Textile interfaces enable designers to integrate unobtrusive media and smart home controls into furniture such as sofas. While the technical aspects of such controllers have been the subject of numerous research projects, the physical form factor of these controls has received little attention so far. This work investigates how general design properties, such as overall slider shape, raised vs. recessed sliders, and number and layout of tick marks, affect users' preferences and performance. Our first user study identified a preference for certain design combinations, such as recessed, closed-shaped sliders. Our second user study included performance measurements on variations of the preferred designs from study 1, and took a closer look at tick marks. Tick marks supported orientation better than slider shape. Sliders with at least three tick marks were preferred, and performed well. Non-uniform, equally distributed tick marks reduced the movements users needed to orient themselves on the slider.
2
FabricatINK: Personal Fabrication of Bespoke Displays Using Electronic Ink from Upcycled E Readers
Ollie Hanton (University of Bristol, Bristol, United Kingdom)Zichao Shen (University of Bristol, Bristol, United Kingdom)Mike Fraser (University of Bath, Bath, United Kingdom)Anne Roudaut (University of Bristol, Bristol, United Kingdom)
FabricatINK explores the personal fabrication of irregularly-shaped low-power displays using electronic ink (E ink). E ink is a programmable bicolour material used in traditional form-factors such as E readers. It has potential for more versatile use within the scope of personal fabrication of custom-shaped displays, and it has the promise to be the pre-eminent material choice for this purpose. We appraise technical literature to identify properties of E ink, suited to fabrication. We identify a key roadblock, universal access to E ink as a material, and we deliver a method to circumvent this by upcycling broken electronics. We subsequently present a novel fabrication method for irregularly-shaped E ink displays. We demonstrate our fabrication process and E ink's versatility through ten prototypes showing different applications and use cases. By addressing E ink as a material for display fabrication, we uncover the potential for users to create custom-shaped truly bistable displays.
2
Paper Trail: An Immersive Authoring System for Augmented Reality Instructional Experiences
Shwetha Rajaram (University of Michigan, Ann Arbor, Michigan, United States)Michael Nebeling (University of Michigan, Ann Arbor, Michigan, United States)
Prior work has demonstrated augmented reality's benefits to education, but current tools are difficult to integrate with traditional instructional methods. We present Paper Trail, an immersive authoring system designed to explore how to enable instructors to create AR educational experiences, leaving paper at the core of the interaction and enhancing it with various forms of digital media, animations for dynamic illustrations, and clipping masks to guide learning. To inform the system design, we developed five scenarios exploring the benefits that hand-held and head-worn AR can bring to STEM instruction and developed a design space of AR interactions enhancing paper based on these scenarios and prior work. Using the example of an AR physics handout, we assessed the system's potential with PhD-level instructors and its usability with XR design experts. In an elicitation study with high-school teachers, we studied how Paper Trail could be used and extended to enable flexible use cases across various domains. We discuss the benefits of immersive paper for supporting diverse student needs and the challenges of making effective use of AR for learning.
2
"I don't want to feel like I'm working in a 1960s factory": The Practitioner Perspective on Creativity Support Tool Adoption
Srishti Palani (Autodesk Research, Toronto, Ontario, Canada)David Ledo (Autodesk Research, Toronto, Ontario, Canada)George Fitzmaurice (Autodesk Research, Toronto, Ontario, Canada)Fraser Anderson (Autodesk Research, Toronto, Ontario, Canada)
With the rapid development of creativity support tools, creative practitioners (e.g., designers, artists, architects) have to constantly explore and adopt new tools into their practice. While HCI research has focused on developing novel creativity support tools, little is known about creative practitioners' values when exploring and adopting these tools. We collect and analyze 23 videos, 13 interviews, and 105 survey responses of creative practitioners reflecting on their values to derive a value framework. We find that practitioners value the tools' functionality, integration into their current workflow, performance, user interface and experience, learning support, costs, and emotional connection, in that order. They largely discover tools through personal recommendations. To help unify and encourage reflection from the wider community of CST stakeholders (e.g., systems creators, researchers, marketers, educators), we situate the framework within existing research on systems, creativity support tools, and technology adoption.
2
Prevalence and Salience of Problematic Microtransactions in Top-Grossing Mobile and PC Games: A Content Analysis of User Reviews
Elena Petrovskaya (University of York, York, United Kingdom)Sebastian Deterding (University of York, York, United Kingdom)David I. Zendle (University of York, York, North Yorkshire, United Kingdom)
Microtransactions have become a major monetisation model in digital games, shaping their design, impacting their player experience, and raising ethical concerns. Research in this area has chiefly focused on loot boxes. This raises the question of whether other microtransactions might actually be more relevant and problematic for players. We therefore conducted a content analysis of negative player reviews (n=801) of top-grossing mobile and desktop games to determine which problematic microtransactions are most prevalent and salient for players. We found that problematic microtransactions are widespread, with mobile games featuring more frequent and more varied techniques than desktop games. Across both platforms, players minded issues related to fairness, transparency, and degraded user experience, supporting prior theoretical work, and importantly took issue with monetisation-driven design as such. We identify future research needs on why microtransactions in particular spark this critique, and which player communities it may be more or less representative of.
2
A Layered Authoring Tool for Stylized 3D Animations
Jiaju Ma (Brown University, Providence, Rhode Island, United States)Li-Yi Wei (Adobe Research, San Jose, California, United States)Rubaiat Habib Kazi (Adobe Research, Seattle, Washington, United States)
Guided by the 12 principles of animation, stylization is a core 2D animation feature but has been utilized mainly by experienced animators. Although there are tools for stylizing 2D animations, creating stylized 3D animations remains a challenging problem due to the additional spatial dimension and the need for responsive actions like contact and collision. We propose a system that helps users create stylized casual 3D animations. A layered authoring interface is employed to balance ease of use and expressiveness. Our surface-level UI is a timeline sequencer that lets users add preset stylization effects such as squash and stretch and follow through to plain motions. Users can adjust spatial and temporal parameters to fine-tune these stylizations. These edits are propagated to our node-graph-based second-level UI, in which users can create custom stylizations after they are comfortable with the surface-level UI. Our system also enables the stylization of interactions among multiple objects like force, energy, and collision. A pilot user study has shown that our fluid layered UI design allows for both ease of use and expressiveness better than existing tools.
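For readers unfamiliar with the preset named above, "squash and stretch" deforms an object along its motion axis while keeping its volume roughly constant. The following toy computation illustrates only that principle; it is an assumption-level sketch in Python, not the paper's implementation.

    # Toy "squash and stretch": scale along the motion axis while preserving
    # volume (axial * lateral^2 == 1). Illustrative only.
    def squash_stretch_scale(stretch):
        """Return (axial, lateral) scale factors with constant volume."""
        assert stretch > 0, "stretch factor must be positive"
        lateral = 1.0 / (stretch ** 0.5)
        return stretch, lateral

    axial, lateral = squash_stretch_scale(1.5)  # stretching during fast motion
    print(f"axial x{axial:.2f}, lateral x{lateral:.2f}")  # axial x1.50, lateral x0.82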
2
First Steps Towards Designing Electrotactons: Investigating Intensity and Pulse Frequency as Parameters for Electrotactile Cues.
Yosuef Alotaibi (University of Glasgow, Glasgow, United Kingdom)John H. Williamson (University of Glasgow, Glasgow, United Kingdom)Stephen Anthony Brewster (University of Glasgow, Glasgow, United Kingdom)
Electrotactile stimulation is a novel form of haptic feedback. There is little work investigating its basic design parameters and how they create effective tactile cues. This paper describes two experiments that extend our knowledge of two key parameters. The first investigated the combination of pulse width and amplitude Intensity on sensations of urgency, annoyance, valence and arousal. Results showed significant effects: increasing Intensity caused higher ratings of urgency, annoyance and arousal but reduced valence. We established clear levels for differentiating each sensation. A second study then investigated Intensity and Pulse Frequency to find out how many distinguishable levels could be perceived. Results showed that both Intensity and Pulse Frequency significantly affected perception, with four distinguishable levels of Intensity and two of Pulse Frequency. These results add significant new knowledge about the parameter space of electrotactile cue design and help designers select suitable properties to use when creating electrotactile cues.
2
immersivePOV: Filming How-To Videos with a Head-Mounted 360° Action Camera
Kevin Huang (University of Toronto, Toronto, Ontario, Canada)Jiannan Li (University of Toronto, Toronto, Ontario, Canada)Mauricio Sousa (University of Toronto, Toronto, Ontario, Canada)Tovi Grossman (University of Toronto, Toronto, Ontario, Canada)
How-to videos are often shot using camera angles that may not be optimal for learning motor tasks, with a prevalent use of third-person perspective. We present immersivePOV, an approach to film how-to videos from an immersive first-person perspective using a head-mounted 360° action camera. immersivePOV how-to videos can be viewed in a Virtual Reality headset, giving the viewer an eye-level viewpoint with three Degrees of Freedom. We evaluated our approach with two everyday motor tasks against a baseline first-person perspective and a third-person perspective. In a between-subjects study, participants were assigned to watch the task videos and then replicate the tasks. Results suggest that immersivePOV reduced perceived cognitive load and facilitated task learning. We discuss how immersivePOV can also streamline the video production process for content creators. Altogether, we conclude that immersivePOV is an effective approach to film how-to videos for learners and content creators alike.
2
Supercharging Trial-and-Error for Learning Complex Software Applications
Damien Masson (Autodesk Research, Toronto, Ontario, Canada)Jo Vermeulen (Autodesk Research, Toronto, Ontario, Canada)George Fitzmaurice (Autodesk Research, Toronto, Ontario, Canada)Justin Matejka (Autodesk Research, Toronto, Ontario, Canada)
Despite an abundance of carefully-crafted tutorials, trial-and-error remains many people’s preferred way to learn complex software. Yet, approaches to facilitate trial-and-error (such as tooltips) have evolved very little since the 1980s. While existing mechanisms work well for simple software, they scale poorly to large feature-rich applications. In this paper, we explore new techniques to support trial-and-error in complex applications. We identify key benefits and challenges of trial-and-error, and introduce a framework with a conceptual model and design space. Using this framework, we developed three techniques: ToolTrack to keep track of trial-and-error progress; ToolTrip to go beyond trial-and-error of single commands by highlighting related commands that are frequently used together; and ToolTaste to quickly and safely try commands. We demonstrate how these techniques facilitate trial-and-error, as illustrated through a proof-of-concept implementation in the CAD software Fusion 360. We conclude by discussing possible scenarios and outline directions for future research on trial-and-error.
2
Do You See What You Mean? Using Predictive Visualizations to Reduce Optimism in Duration Estimates
Morgane Koval (CNRS, ISIR, Paris, France)Yvonne Jansen (Sorbonne Université, CNRS, ISIR, Paris, France)
Making time estimates, such as how long a given task might take, frequently leads to inaccurate predictions because of an optimistic bias. Previous attempts to alleviate this bias, including decomposing the task into smaller components and listing potential surprises, have not shown any major improvement. This article builds on the premise that these procedures may have failed because they involve compound probabilities and mixture distributions which are difficult to compute in one's head. We hypothesize that predictive visualizations of such distributions would facilitate the estimation of task durations. We conducted a crowdsourced study in which 145 participants provided different estimates of overall and sub-task durations and we used these to generate predictive visualizations of the resulting mixture distributions. We compared participants' initial estimates with their updated ones and found compelling evidence that predictive visualizations encourage less optimistic estimates.
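The premise that mixture distributions are hard to compute mentally but easy to simulate can be made concrete with a few lines of Monte Carlo. The sketch below assumes log-normal sub-task durations and a fixed-probability surprise delay; these modeling choices are illustrative, not the authors' exact procedure.

    # Monte Carlo aggregation of sub-task duration estimates into a
    # predictive distribution for the whole task (illustrative model).
    import numpy as np

    rng = np.random.default_rng(0)

    def predictive_samples(subtask_medians, spread=0.4, surprise_p=0.1,
                           surprise_delay=30.0, n=10_000):
        """Sample total task durations (minutes) from a mixture model."""
        totals = np.zeros(n)
        for m in subtask_medians:
            # Log-normal centered on the estimated median of each sub-task.
            totals += rng.lognormal(mean=np.log(m), sigma=spread, size=n)
        # A surprise occasionally adds a delay, producing a mixture.
        totals += rng.binomial(1, surprise_p, size=n) * surprise_delay
        return totals

    samples = predictive_samples([15, 30, 20])
    print(f"median {np.median(samples):.0f} min, "
          f"90th percentile {np.percentile(samples, 90):.0f} min")

Plotting a histogram of such samples yields the kind of predictive visualization of a mixture distribution that the study describes.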
2
Does Dynamically Drawn Text Improve Learning? Investigating the Effect of Text Presentation Styles in Video Learning
Ashwin Ram (National University of Singapore, Singapore, Singapore)Shengdong Zhao (National University of Singapore, Singapore, Singapore)
Dynamically drawn content (e.g., handwritten text) in learning videos is believed to improve users’ engagement and learning over static PowerPoint-based ones. However, evidence from existing literature is inconclusive. With the emergence of Optical Head-Mounted Displays (OHMDs), recent work has shown that video learning can be adapted for on-the-go scenarios. To better understand the role of dynamic drawing, we decoupled dynamically drawn text into two factors (font style and motion of appearance) and studied their impact on learning performance under two usage scenarios (while seated with a desktop and while walking with an OHMD). We found that although letter-traced text was more engaging for some users, most preferred learning with typeface text that displayed the entire word at once and achieved better recall (46.7% higher), regardless of usage scenario. Insights from the studies can better inform designers on how to present text in videos for ubiquitous access.
2
Personal Dream Informatics: A Self-Information Systems Model of Dream Engagement
Michael Jeffrey Daniel Hoefer (University of Colorado Boulder, Boulder, Colorado, United States)Bryce E. Schumacher (University of Colorado Boulder, Boulder, Colorado, United States)Stephen Voida (University of Colorado Boulder, Boulder, Colorado, United States)
We present the research area of personal dream informatics: studying the self-information systems that support dream engagement and communication between the dreaming self and the wakeful self. Through a survey study of 281 individuals primarily recruited from an online community dedicated to dreaming, we develop a dream-information systems view of dreaming and dream tracking as a type of self-information system. While dream-information systems are characterized by diverse tracking processes, motivations, and outcomes, they are universally constrained by the ephemeral dreamset - the short period of time between waking up and rapid memory loss of dream experiences. By developing a system dynamics model of dreaming we highlight feedback loops that serve as high leverage points for technology designers, and suggest a variety of design considerations for crafting technology that best supports dream recall, dream tracking, and dreamwork for nightmare relief and personal development.
2
Mobile-Friendly Content Design for MOOCs: Challenges, Requirements, and Design Opportunities
Jeongyeon Kim (KAIST, Daejeon, Korea, Republic of)Yubin Choi (KAIST, Daejeon, Korea, Republic of)Meng Xia (KAIST, Daejeon, Korea, Republic of)Juho Kim (KAIST, Daejeon, Korea, Republic of)
Most video-based learning content is designed for desktops without considering mobile environments. We (1) investigate the gap between mobile learners’ challenges and video engineers’ considerations using mixed methods and (2) provide design guidelines for creating mobile-friendly MOOC videos. To uncover learners’ challenges, we conducted a survey (n=134) and interviews (n=21), and evaluated the mobile adequacy of current MOOCs by analyzing 41,722 video frames from 101 video lectures. Interview results revealed low readability and situationally-induced impairments as major challenges. The content analysis showed a low guideline compliance rate for key design factors. We then interviewed 11 video production engineers to investigate design factors they mainly consider. The engineers mainly focus on the size and amount of content while lacking consideration for color, complex images, and situationally-induced impairments. Finally, we present and validate guidelines for designing mobile-friendly MOOCs, such as providing adaptive and customizable visual design and context-aware accessibility support.
2
Paracentral and near-peripheral visualizations: Towards attention-maintaining secondary information presentation on OHMDs during in-person social interactions
Nuwan Nanayakkarawasam Peru Kandage Janaka (National University of Singapore, Singapore, Singapore)Chloe Haigh (National University of Singapore, Singapore, Singapore)Hyeongcheol Kim (National University of Singapore, Singapore, Singapore)Shan Zhang (National University of Singapore, Singapore, Singapore)Shengdong Zhao (National University of Singapore, Singapore, Singapore)
Optical see-through Head-Mounted Displays (OST HMDs, OHMDs) are known to facilitate situational awareness while accessing secondary information. However, information displayed on OHMDs can cause attention shifts, which distract users from natural social interactions. We hypothesize that information displayed in paracentral and near-peripheral vision can be better perceived while the user is maintaining eye contact during face-to-face conversations. Leveraging this idea, we designed a circular progress bar to provide progress updates in paracentral and near-peripheral vision. We compared it with textual and linear progress bars under two conversation settings: a simulated one with a digital conversation partner and a realistic one with a real partner. Results show that a circular progress bar can effectively reduce notification distractions without losing eye contact and is more preferred by users. Our findings highlight the potential of utilizing the paracentral and near-peripheral vision for secondary information presentation on OHMDs.
2
Switching Between Standard Pointing Methods with Current and Emerging Computer Form Factors
Margaret Jean Foley (University of Waterloo, Waterloo, Ontario, Canada)Quentin Roy (University of Waterloo, Waterloo, Ontario, Canada)Da-Yuan Huang (Huawei Canada, Markham, Ontario, Canada)Wei Li (Huawei Canada, Markham, Ontario, Canada)Daniel Vogel (University of Waterloo, Waterloo, Ontario, Canada)
We investigate performance characteristics when switching between four pointing methods: absolute touch, absolute pen, relative mouse, and relative trackpad. The established "subtraction method" protocol used in mode-switching studies is extended to test pairs of methods and accommodate switch direction, multiple baselines, and controlling relative cursor position. A first experiment examines method switching on and around the horizontal surface of a tablet. Results find switching between pen and touch is fastest, and switching between relative and absolute methods incurs additional time penalty. A second experiment expands the investigation to an emerging foldable all-screen laptop form factor where switching also occurs on an angled surface and along a smoothly curved hinge. Results find switching between trackpad and touch is fastest, with all switching times generally higher. Our work contributes missing empirical evidence for switching performance using modern input methods, and our results can inform interaction design for current and emerging device form factors.
1
A Conversational Approach for Modifying Service Mashups in IoT Environments
Sanghoon Kim (Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, Republic of)In-Young Ko (Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, Republic of)
Existing conversational approaches for Internet of Things (IoT) service mashups do not support modification because of usability challenges, even though it is common for users to modify their service mashups in IoT environments. To support the modification of IoT service mashups through conversational interfaces in a usable manner, we propose the conversational mashup modification agent (CoMMA). Users can modify IoT service mashups using CoMMA through natural language conversations. CoMMA has a two-step mashup modification interaction: an implicature-based localization step, and a modification step with a disambiguation strategy. The localization step allows users to easily search for a mashup by vocalizing their expressions in the environment. The modification step helps users modify mashups by speaking simple modification commands. We conducted a user study and the results show that CoMMA is as effective as visual approaches in terms of task completion time and perceived task workload for modifying IoT service mashups.
1
A Little Too Personal: Effects of Standardization versus Personalization on Job Acquisition, Work Completion, and Revenue for Online Freelancers
Jane Hsieh (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Yili Hong (University of Houston, Houston, Texas, United States)Gordon Burtch (Boston University, Boston, Massachusetts, United States)Haiyi Zhu (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
As more individuals consider permanently working from home, the online labor market continues to grow as an alternative working environment. While the flexibility and autonomy of these online gigs attracts many workers, success depends critically upon self-management and workers' efficient allocation of scarce resources. To achieve this, freelancers may develop alternative work strategies, employing highly standardized schedules and communication patterns while taking on large work volumes, or engaging in smaller numbers of jobs whilst tailoring their activities to build relationships with individual employers. In this study, we consider this contrast in relation to worker communication patterns. We demonstrate the heterogeneous effects of standardization versus personalization across different stages of a project and examine the relative impact on job acquisition, project completion, and earnings. Our findings can inform the design of platforms and various worker support tools for the gig economy.
1
VRception: Rapid Prototyping of Cross-Reality Systems in Virtual Reality
Uwe Gruenefeld (University of Duisburg-Essen, Essen, Germany)Jonas Auda (University of Duisburg-Essen, Essen, Germany)Florian Mathis (University of Glasgow, Glasgow, United Kingdom)Stefan Schneegass (University of Duisburg-Essen, Essen, Germany)Mohamed Khamis (University of Glasgow, Glasgow, United Kingdom)Jan Gugenheimer (Institut Polytechnique de Paris, Paris, France)Sven Mayer (LMU Munich, Munich, Germany)
Cross-reality systems empower users to transition along the reality-virtuality continuum or collaborate with others experiencing different manifestations of it. However, prototyping these systems is challenging, as it requires sophisticated technical skills, time, and often expensive hardware. We present VRception, a concept and toolkit for quick and easy prototyping of cross-reality systems. By simulating all levels of the reality-virtuality continuum entirely in Virtual Reality, our concept overcomes the asynchronicity of realities, eliminating technical obstacles. Our VRception Toolkit leverages this concept to allow rapid prototyping of cross-reality systems and easy remixing of elements from all continuum levels. We replicated six cross-reality papers using our toolkit and presented them to their authors. Interviews with them revealed that our toolkit sufficiently replicates their core functionalities and allows quick iterations. Additionally, remote participants used our toolkit in pairs to collaboratively implement prototypes in about eight minutes that they would have otherwise expected to take days.
1
Electrical Head Actuation: Enabling Interactive Systems to Directly Manipulate Head Orientation
Yudai Tanaka (University of Chicago, Chicago, Illinois, United States)Jun Nishida (University of Chicago, Chicago, Illinois, United States)Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose a novel interface concept in which interactive systems directly manipulate the user’s head orientation. We implement this using electrical-muscle-stimulation (EMS) of the neck muscles, which turns the head around its yaw (left/right) and pitch (up/down) axis. As the first exploration of EMS for head actuation, we characterized which muscles can be robustly actuated. Second, we evaluated the accuracy of our system for actuating participants' head orientation towards static targets and trajectories. Third, we demonstrated how it enables interactions not possible before by building a range of applications, such as (1) synchronizing head orientations of two users, which enables a user to communicate head nods to another user while listening to music, and (2) directly changing the user's head orientation to locate objects in AR. Finally, in our second study, participants felt that our head actuation contributed positively to their experience in four distinct applications.
1
Barriers to Expertise in Citizen Science Games
Josh Aaron Miller (Northeastern University, Boston, Massachusetts, United States)Seth Cooper (Northeastern University, Boston, Massachusetts, United States)
Expertise-centric citizen science games (ECCSGs) can be powerful tools for crowdsourcing scientific knowledge production. However, to be effective these games must train their players on how to become experts, which is difficult in practice. In this study, we investigated the path to expertise and the barriers involved by interviewing players of three ECCSGs: Foldit, Eterna, and Eyewire. We then applied reflexive thematic analysis to generate themes of their experiences and produce a model of expertise and its barriers. We found expertise is constructed through a cycle of exploratory and social learning but prevented by instructional design issues. Moreover, exploration is slowed by a lack of polish to the game artifact, and social learning is disrupted by a lack of clear communication. Based on our analysis we make several recommendations for CSG developers, including: collaborating with professionals of required skill sets; providing social features and feedback systems; and improving scientific communication.
1
Designing Visuo-Haptic Illusions with Proxies in Virtual Reality: Exploration of Grasp, Movement Trajectory and Object Mass
Martin Feick (Saarland Informatics Campus, Saarbrücken, Germany)Kora Persephone Regitz (Saarland Informatics Campus, Saarbrücken, Germany)Anthony Tang (University of Toronto, Toronto, Ontario, Canada)Antonio Krüger (DFKI, Saarland Informatics Campus, Saarbrücken, Germany)
Visuo-haptic illusions are a method to expand proxy-based interactions in VR by introducing unnoticeable discrepancies between the virtual and real world. Yet, how different design variables affect the illusions with proxies is still unclear. To unpack a subset of variables, we conducted two user studies with 48 participants to explore the impact of (1) different grasping types and movement trajectories, and (2) different grasping types and object masses on the discrepancy which may be introduced. Our Bayes analysis suggests that grasping types and object masses (≤ 500 g) did not noticeably affect the discrepancy, but for movement trajectory, results were inconclusive. Further, we identified a significant difference between unrestricted and restricted movement trajectories. Our data shows considerable differences in participants’ proprioceptive accuracy, which seem to correlate with their prior VR experience. Finally, we illustrate the impact of our key findings on the visuo-haptic illusion design process by showcasing a new design workflow.
1
FaceOri: Tracking Head Position and Orientation Using Ultrasonic Ranging on Earphones
Yuntao Wang (Tsinghua University, Beijing, China)Jiexin Ding (Tsinghua University, Beijing, China)Ishan Chatterjee (University of Washington, Seattle, Washington, United States)Farshid Salemi Parizi (University of Washington, Seattle, Washington, United States)Yuzhou Zhuang (Tsinghua University, Beijing, China)Yukang Yan (Tsinghua University, Beijing, China)Shwetak Patel (University of Washington, Seattle, Washington, United States)Yuanchun Shi (Tsinghua University, Beijing, China)
Face orientation can often indicate users’ intended interaction target. In this paper, we propose FaceOri, a novel face tracking technique based on acoustic ranging using earphones. FaceOri can leverage the speaker on a commodity device to emit an ultrasonic chirp, which is picked up by the set of microphones on the user’s earphone, and then processed to calculate the distance from each microphone to the device. These measurements are used to derive the user’s face orientation and distance with respect to the device. We conduct a ground truth comparison and user study to evaluate FaceOri’s performance. The results show that the system can determine whether the user is oriented toward the device with 93.5% accuracy within a 1.5 m range. Furthermore, FaceOri can continuously track the user’s head orientation with a median absolute error of 10.9 mm in distance, 3.7° in yaw, and 5.8° in pitch. FaceOri can allow for convenient hands-free control of devices and enable more intelligent context-aware interaction.
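The link from per-microphone ranges to head pose can be illustrated with far-field geometry: the difference between the two speaker-to-ear distances maps to yaw. The sketch below is a simplified illustration assuming a fixed span between the earphone microphones; the paper's actual derivation and acoustic processing are more involved.

    # Far-field approximation: head yaw from the two speaker-to-microphone
    # ranges. EAR_SPAN_M is an assumed constant, not a value from the paper.
    import math

    EAR_SPAN_M = 0.18  # assumed distance between the two earphone microphones

    def approx_yaw_deg(d_left_m, d_right_m):
        """Map the across-ear path difference to a yaw angle."""
        delta = d_right_m - d_left_m
        s = max(-1.0, min(1.0, delta / EAR_SPAN_M))  # clamp against ranging noise
        return math.degrees(math.asin(s))

    def approx_distance_m(d_left_m, d_right_m):
        """Head-to-device distance as the mean of both ranges."""
        return (d_left_m + d_right_m) / 2.0

    print(f"{approx_yaw_deg(1.02, 1.07):.1f} deg")  # ~16.1 deg for a 5 cm difference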
1
The Dark Side of Perceptual Manipulations in Virtual Reality
Wen-Jie Tseng (Institut Polytechnique de Paris, Paris, France)Elise Bonnail (Télécom Paris, Palaiseau, France)Mark McGill (University of Glasgow, Glasgow, Lanarkshire, United Kingdom)Mohamed Khamis (University of Glasgow, Glasgow, United Kingdom)Eric Lecolinet (Institut Polytechnique de Paris, Paris, France)Samuel Huron (Télécom Paristech, Université Paris-Saclay, Paris, France)Jan Gugenheimer (Institut Polytechnique de Paris, Paris, France)
"Virtual-Physical Perceptual Manipulations'' (VPPMs) such as redirected walking and haptics expand the user's capacity to interact with Virtual Reality (VR) beyond what would ordinarily physically be possible. VPPMs leverage knowledge of the limits of human perception to effect changes in the user's physical movements, becoming able to (perceptibly and imperceptibly) nudge their physical actions to enhance interactivity in VR. We explore the risks posed by the malicious use of VPPMs. First, we define, conceptualize and demonstrate the existence of VPPMs. Next, using speculative design workshops, we explore and characterize the threats/risks posed, proposing mitigations and preventative recommendations against the malicious use of VPPMs. Finally, we implement two sample applications to demonstrate how existing VPPMs could be trivially subverted to create the potential for physical harm. This paper aims to raise awareness that the current way we apply and publish VPPMs can lead to malicious exploits of our perceptual vulnerabilities.
1
Embodied Geometric Reasoning with a Robot: The Impact of Robot Gestures on Student Reasoning about Geometrical Conjectures
Joseph E. Michaelis (University of Illinois at Chicago, Chicago, Illinois, United States)Daniela Di Canio (University of Illinois at Chicago, Chicago, Illinois, United States)
In this paper, we explore how the physically embodied nature of robots can influence learning through non-verbal communication, such as gesturing. We take an embodied cognition perspective to examine student interactions with a NAO robot that uses gestures while reasoning about geometry conjectures. College-aged students (N = 30) were randomly assigned to either a dynamic condition, where the robot uses dynamic gestures that represent and manipulate geometric shapes in the conjectures, or a control condition, where the robot uses beat gestures that match the rhythm of speech. Students in the dynamic condition: (1) used more gestures when reasoning about geometry conjectures, (2) looked more at the robot as it spoke, (3) felt the robot was a better study partner and used effective gestures, but (4) were not more successful in correctly reasoning about geometry conjectures. We discuss implications for socially supported and embodied learning with a physically present robot.
1
Structure-aware Visualization Retrieval
Haotian Li (The Hong Kong University of Science and Technology, Hong Kong, China)Yong Wang (Singapore Management University, Singapore, Singapore, Singapore)Aoyu Wu (Hong Kong University of Science and Technology, Hong Kong, China)Huan Wei (The Hong Kong University of Science and Technology, Hong Kong, China)Huamin Qu (The Hong Kong University of Science and Technology, Hong Kong, China)
With the wide usage of data visualizations, a huge number of Scalable Vector Graphic (SVG)-based visualizations have been created and shared online. Accordingly, there has been an increasing interest in exploring how to retrieve perceptually similar visualizations from a large corpus, since it can benefit various downstream applications such as visualization recommendation. Existing methods mainly focus on the visual appearance of visualizations by regarding them as bitmap images. However, the structural information intrinsically existing in SVG-based visualizations is ignored. Such structural information can delineate the spatial and hierarchical relationship among visual elements, and characterize visualizations thoroughly from a new perspective. This paper presents a structure-aware method to advance the performance of visualization retrieval by collectively considering both the visual and structural information. We extensively evaluated our approach through quantitative comparisons, a user study and case studies. The results demonstrate the effectiveness of our approach and its advantages over existing methods.
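The combination of visual and structural information can be sketched as a weighted blend of two similarity scores, one from a bitmap embedding and one from an embedding of the SVG element tree. The cosine measure and the equal weighting below are illustrative assumptions, not the paper's model.

    # Blend visual (bitmap) and structural (SVG tree) similarity for retrieval.
    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def combined_score(query, candidate, alpha=0.5):
        """alpha weights visual vs. structural similarity."""
        visual = cosine(query["visual_emb"], candidate["visual_emb"])
        structural = cosine(query["struct_emb"], candidate["struct_emb"])
        return alpha * visual + (1 - alpha) * structural

    query = {"visual_emb": np.array([0.2, 0.9]), "struct_emb": np.array([0.7, 0.1])}
    corpus = [
        {"id": "bar_chart", "visual_emb": np.array([0.3, 0.8]), "struct_emb": np.array([0.6, 0.2])},
        {"id": "scatter", "visual_emb": np.array([0.9, 0.1]), "struct_emb": np.array([0.1, 0.9])},
    ]
    print(max(corpus, key=lambda c: combined_score(query, c))["id"])  # -> bar_chart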
1
At-home Pupillometry using Smartphone Facial Identification Cameras
Colin Barry (University of California, San Diego, La Jolla, California, United States)Jessica de Souza (UCSD, La Jolla, California, United States)Yinan Xuan (University of California San Diego, La Jolla, California, United States)Jason Holden (University of California, San Diego, La Jolla, California, United States)Eric Granholm (University of California, San Diego, La Jolla, California, United States)Edward Jay. Wang (University of California, San Diego, San Diego, California, United States)
With recent developments in medical and psychiatric research surrounding pupillary response, cheap and accessible pupillometers could enable medical benefits from early neurological disease detection to measurements of cognitive load. In this paper, we introduce a novel smartphone-based pupillometer to allow for future development in clinical research surrounding at-home pupil measurements. Our solution utilizes the NIR front-facing camera for facial recognition paired with the RGB selfie camera to track absolute pupil dilation with sub-millimeter accuracy. In comparison to a gold-standard pupillometer during a pupillary light reflex test, the smartphone-based system achieves a median MAE of 0.27 mm for absolute pupil dilation tracking and a median error of 3.52% for pupil dilation change tracking. Additionally, we remotely deployed the system to older adults as part of a usability study, demonstrating promise for future smartphone deployments that remotely collect data from older, inexperienced adult users operating the system themselves.
1
Design of Digital Workplace Stress-Reduction Intervention Systems: Effects of Intervention Type and Timing
Esther Howe (Microsoft Research, Redmond, Washington, United States)Jina Suh (Microsoft Research, Redmond, Washington, United States)Mehrab Bin Morshed (Microsoft Research, Redmond, Washington, United States)Daniel McDuff (Microsoft, Seattle, Washington, United States)Kael Rowan (Microsoft Research, Redmond, Washington, United States)Javier Hernandez (Microsoft Research, Cambridge, Massachusetts, United States)Marah Ihab Abdin (Microsoft Research, Redmond, Washington, United States)Gonzalo Ramos (Microsoft Research, Kirkland, Washington, United States)Tracy Tran (Microsoft Research, Redmond, Washington, United States)Mary P. Czerwinski (Microsoft Research, Redmond, Washington, United States)
Workplace stress-reduction interventions have produced mixed results due to engagement and adherence barriers. Leveraging technology to integrate such interventions into the workday may address these barriers and help mitigate the mental, physical, and monetary effects of workplace stress. To inform the design of a workplace stress-reduction intervention system, we conducted a four-week longitudinal study with 86 participants, examining the effects of intervention type and timing on usage, stress reduction impact, and user preferences. We compared three intervention types and two delivery timing conditions: Pre-scheduled (PS) by users, and Just-in-time (JIT), prompted by system-identified user stress levels. We found that JIT participants completed significantly more interventions than PS participants, but post-intervention and study-long stress reduction did not differ significantly between conditions. Participants rated low-effort interventions highest, but high-effort interventions reduced the most stress. Participants felt JIT provided accountability but desired partial agency over timing. We conclude with implications for intervention type and timing.
1
FitVid: Responsive and Flexible Video Content Adaptation
Jeongyeon Kim (KAIST, Daejeon, Korea, Republic of)Yubin Choi (KAIST, Daejeon, Korea, Republic of)Minsuk Kahng (Oregon State University, Corvallis, Oregon, United States)Juho Kim (KAIST, Daejeon, Korea, Republic of)
Mobile video-based learning attracts many learners with its mobility and ease of access. However, most lectures are designed for desktops. Our formative study reveals mobile learners' two major needs: more readable content and customizable video design. To support mobile-optimized learning, we present FitVid, a system that provides responsive and customizable video content. Our system consists of (1) an adaptation pipeline that reverse-engineers pixels to retrieve design elements (e.g., text, images) from videos, leveraging deep learning with a custom dataset, which powers (2) a UI that enables resizing, repositioning, and toggling in-video elements. The content adaptation improves the guideline compliance rate by 24% and 8% for word count and font size. The content evaluation study (n=198) shows that the adaptation significantly increases readability and user satisfaction. The user study (n=31) indicates that FitVid significantly improves learning experience, interactivity, and concentration. We discuss design implications for responsive and customizable video adaptation.
1
Digital Fabrication of Pneumatic Actuators with Integrated Sensing by Machine Knitting
Yiyue Luo (Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States)Kui Wu (MIT, Cambridge, Massachusetts, United States)Andrew Spielberg (Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States)Michael Foshey (MIT, Cambridge, Massachusetts, United States)Tomás Palacios (Massachusetts Institute of Technology, Cambridge, Massachusetts, United States)Daniela Rus (Massachusetts Institute of Technology, Cambridge, Massachusetts, United States)Wojciech Matusik (MIT, Cambridge, Massachusetts, United States)
Soft actuators with integrated sensing have shown utility in a variety of applications such as assistive wearables, robotics, and interactive input devices. Despite their promise, these actuators can be difficult to both design and fabricate. As a solution, we present a workflow for computationally designing and digitally fabricating soft pneumatic actuators via a machine knitting process. Machine knitting is attractive as a fabrication process because it is fast, digital (programmable), and provides access to a rich material library of functional yarns for specified mechanical behavior and integrated sensing. Our method uses elastic stitches to construct non-homogeneous knitting structures, which program the bending of actuators when inflated. Our method also integrates pressure and swept frequency capacitive sensing structures using conductive yarns. The entire knitted structure is fabricated automatically in a single machine run. We further provide a computational design interface for the user to interactively preview actuators’ quasi-static shape when authoring elastic stitches. Our sensing-integrated actuators are cost-effective, easy to design, robust to large actuation, and require minimal manual post-processing. We demonstrate five use-cases of our actuators in relevant application settings.
1
Why Did You/I Read but Not Reply? IM Users’ Unresponded Read-Receipt Practices and Explanations
Yu-Ling Chou (National Tsing Hua University, Hsinchu, Taiwan)Yi-Hsiu Lin (National Yang Ming Chiao Tung University, Hsinchu, Taiwan)Tzu-Yi Lin (Department of Computer Science, Hsinchu, Taiwan)Hsin Ying You (National Yang Ming Chiao Tung University, Hsinchu, Taiwan)Yung-Ju Chang (National Yang Ming Chiao Tung University, Hsinchu, Taiwan)
We investigate instant-messaging (IM) users’ sense-making and practices around read-receipts: a feature of IM apps for supporting the awareness of turn-taking, i.e., whether a message recipient has read a message. Using a grounded-theory approach, we highlight the importance of five contextual factors – situational, relational, interactional, conversational, and personal – that shape the variety of IM users’ sense-making about read-receipts and strategies for utilizing them in different settings. This approach yields a 21-part typology comprising five types of senders’ speculation about why their messages with read-receipts have not been answered; eight types of recipients’ causes/reasons behind such non-response; and four types of senders’ and recipients’ subsequent strategies, respectively. Mismatches between senders’ speculations about un-responded-to read-receipted messages (URRMs) and recipients’ self-reported explanations are also discussed as sources of communicative friction. The findings reveal that, beyond indicating turn-taking, read-receipts have been leveraged as a strategic tool for various purposes in interpersonal relations.
1
ShapeFindAR: Exploring In-Situ Spatial Search for Physical Artifact Retrieval using Mixed Reality
Evgeny Stemasov (Ulm University, Ulm, Germany)Tobias Wagner (Ulm University, Ulm, Germany)Jan Gugenheimer (Institut Polytechnique de Paris, Paris, France)Enrico Rukzio (University of Ulm, Ulm, Germany)
Personal fabrication is made more accessible through repositories like Thingiverse, as they replace modeling with retrieval. However, they require users to translate spatial requirements to keywords, which paints an incomplete picture of physical artifacts: proportions or morphology are non-trivially encoded through text only. We explore a vision of in-situ spatial search for (future) physical artifacts, and present ShapeFindAR, a mixed-reality tool to search for 3D models using in-situ sketches blended with textual queries. With ShapeFindAR, users search for geometry, and not necessarily precise labels, while coupling the search process to the physical environment (e.g., by sketching in-situ, extracting search terms from objects present, or tracing them). We developed ShapeFindAR for HoloLens 2, connected to a database of 3D-printable artifacts. We specify in-situ spatial search, describe its advantages, and present walkthroughs using ShapeFindAR, which highlight novel ways for users to articulate their wishes, without requiring complex modeling tools or profound domain knowledge.
1
PneuMesh: Pneumatic-driven Truss-based Shape Changing System
Jianzhe Gu (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Yuyu Lin (Zhejiang University, Hangzhou, China)Qiang Cui (Tsinghua University, Beijing, China)Xiaoqian Li (Zhejiang University, Hangzhou, China)Jiaji Li (Zhejiang University, Hangzhou, China)Lingyun Sun (Zhejiang University, Hangzhou, China)Cheng Yao (Zhejiang University, Hangzhou, China)Fangtian Ying (Zhejiang University, Hangzhou, China)Guanyun Wang (Zhejiang University, Hangzhou, China)Lining Yao (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
From cross-sea bridges to large-scale installations, truss structures have been known for their structural stability and shape complexity. In addition to the advantages of static trusses, truss structure has a large degree of freedom to change shape when equipped with rotatable joints and retractable beams. However, it is difficult to design a complex motion and build a control system for large numbers of trusses. In this paper, we present PneuMesh, a novel truss-based shape-changing system that is easy to design and build but still able to achieve a range of tasks. PneuMesh accomplishes this by introducing an air channel connection strategy and reconfigurable constraint design that drastically decreases the number of control units without losing the complexity of shape-changing. We develop a design tool with real-time simulation to assist users in designing the shape and motion of truss-based shape-changing robots and devices. A design session with 7 participants demonstrates that PneuMesh empowers users to design and build truss structures with a wide range of shapes and various functional motions.
1
"Chat Has No Chill": A Novel Physiological Interaction for Engaging Live Streaming Audiences
Raquel Breejon Robinson (University of Saskatchewan, Saskatoon, Saskatchewan, Canada)Ricardo Rheeder (University of Saskatchewan, Saskatoon, Saskatchewan, Canada)Madison Klarkowski (University of Saskatchewan, Saskatoon, Saskatchewan, Canada)Regan L. Mandryk (University of Saskatchewan, Saskatoon, Saskatchewan, Canada)
Now more than ever, people are using online platforms to communicate. Twitch, the foremost platform for live game streaming, offers many communication modalities. However, the platform lacks representation of social cues and signals of the audience experience, which are innately present in live events. To address this, we present a technology probe that captures the audience energy and response in a game streaming context. We designed a game and integrated a custom communication modality, Commons Sense, in which the audience members' heart rates are sensed via webcam, averaged, and fed into a video game to affect sound, lighting, and difficulty. We conducted an 'in-the-wild' evaluation with four Twitch streamers and their audience members (N=55) to understand how these groups interacted through Commons Sense. Audience members and streamers indicated high levels of enjoyment and engagement with Commons Sense, suggesting the potential of physiological interaction as a beneficial communication tool in live streaming.
1
Mixplorer: Scaffolding Design Space Exploration through Genetic Recombination of Multiple Peoples' Designs to Support Novices' Creativity
Kevin Gonyop Kim (EPFL, Lausanne, Switzerland)Richard Lee Davis (EPFL, Lausanne, Switzerland)Alessia Eletta Coppi (Swiss Federal Institute for Vocational Education and Training, Lugano, Switzerland)Alberto A. P. Cattaneo (Swiss Federal University for Vocational Education and Training, Lugano, Switzerland)Pierre Dillenbourg (EPFL, Lausanne, Switzerland)
The ability to consider a wide range of solutions to a design problem is a crucial skill for designers, and is a major differentiator between experts and novices. One reason for this is that novices are unaware of the full extent of the design space in which solutions are situated. To support novice designers with design space exploration, we introduce Mixplorer, a system that allows designers to take an initial design and mix it with other designs. Mixplorer differs from existing tools by supporting the exploration of ill-defined design spaces through social design space exploration. To evaluate Mixplorer, we conducted (1) an interview study with design instructors who reported that Mixplorer would "help to open the minds" of novice designers and (2) a controlled experiment with novices, finding that the design-mixing functionality of Mixplorer provided significantly better support for creativity, and that participants who mixed designs produced more novel designs.
1
Dually Noted: Layout-Aware Annotations with Smartphone Augmented Reality
Jing Qian (Brown University, Providence, Rhode Island, United States)Qi Sun (New York University, New York, New York, United States)Curtis Wigington (Adobe Research, San Jose, California, United States)Han L. Han (Université Paris-Saclay, CNRS, Inria, Orsay, France)Tong Sun (Adobe Research, San Jose, California, United States)Jennifer Healey (Adobe Research, San Jose, California, United States)James Tompkin (Brown University, Providence, Rhode Island, United States)Jeff Huang (Brown University, Providence, Rhode Island, United States)
Sharing annotations encourages feedback, discussion, and knowledge passing among readers and can be beneficial for personal and public use. Prior augmented reality (AR) systems have expanded these benefits to both digital and printed documents. However, despite smartphone AR now being widely available, there is a lack of research about how to use AR effectively for interactive document annotation. We propose Dually Noted, a smartphone-based AR annotation system that recognizes the layout of structural elements in a printed document for real-time authoring and viewing of annotations. We conducted experience prototyping with eight users to elicit potential benefits and challenges within smartphone AR, and this informed the resulting Dually Noted system and annotation interactions with the document elements. AR annotation is often unwieldy, but during a 12-user empirical study our novel structural understanding component allows Dually Noted to improve precise highlighting and annotation interaction accuracy by 13%, increase interaction speed by 42%, and significantly lower cognitive load over a baseline method without document layout understanding. Qualitatively, participants commented that Dually Noted was a swift and portable annotation experience. Overall, our research provides new methods and insights for how to improve AR annotations for physical documents.
1
Ga11y: an Automated GIF Annotation System for Visually Impaired Users
Mingrui Ray Zhang (University of Washington, Seattle, Washington, United States)Mingyuan Zhong (University of Washington, Seattle, Washington, United States)Jacob O. Wobbrock (University of Washington, Seattle, Washington, United States)
Animated GIF images have become prevalent in internet culture, often used to express richer and more nuanced meanings than static images. But animated GIFs often lack adequate alternative text descriptions, and it is challenging to generate such descriptions automatically, resulting in inaccessible GIFs for blind or low-vision (BLV) users. To improve the accessibility of animated GIFs for BLV users, we provide a system called Ga11y (pronounced "galley") for creating GIF annotations. Ga11y combines the power of machine intelligence and crowdsourcing and has three components: an Android client for submitting annotation requests, a backend server and database, and a web interface where volunteers can respond to annotation requests. We evaluated three human annotation interfaces and employed the one that yielded the best annotation quality. We also conducted a multi-stage evaluation with 12 BLV participants from the United States and China, receiving positive feedback.
1
FAR: End-to-End Vibrotactile Distributed System Designed to Facilitate Affect Regulation in Children Diagnosed with Autism Spectrum Disorder Through Slow Breathing
Pardis Miri (Stanford University, Palo Alto, California, United States)Mehul Arora (Stanford University, Stanford, California, United States)Aman Malhotra (Stanford University, Stanford, California, United States)Robert Flory (Intel, Hillsboro, Oregon, United States)Stephanie Hu (Stanford University, Stanford, California, United States)Ashley Lowber (Stanford University, Stanford, California, United States)Ishan Goyal (Stanford University, Stanford, California, United States)Jacqueline Nguyen (Stanford University, Stanford, California, United States)John P. Hegarty (Stanford University, Stanford, California, United States)Marlo Kohn (Stanford University, Stanford, California, United States)David Schneider (Stanford University, Stanford, California, United States)Heather Culbertson (University of Southern California, Los Angeles, California, United States)Daniel L. K. Yamins (Stanford University, Stanford, California, United States)Lawrence Fung (Stanford University, Stanford, California, United States)Antonio Hardan (Stanford University, Stanford, California, United States)James J. Gross (Stanford University, Stanford, California, United States)Keith Marzullo (University of Maryland, College Park, Maryland, United States)
To address difficulties with affect dysregulation in youth diagnosed with autism spectrum disorder (ASD), we designed and developed an end-to-end vibrotactile breathing pacer system and evaluated its usability. In this paper we describe the system architecture and the features we deployed for this system based on expert advice and reviews. Through piloting this system with one child diagnosed with ASD, we learned that our system was used in ways we did and did not anticipate. For example, the paced-breathing personalization procedure did not suit the attention span of the pilot participant, but providing two pacer devices instead of one encouraged the caregiver's involvement. This paper details our learnings and concludes with a list of system design guidelines at the system architecture level. To the best of our knowledge, this is the first fully functional vibrotactile system designed for ASD children that withstood usability testing in vitro for two weeks.
1
Squeezy-Feely: Investigating Lateral Thumb-Index Pinching as an Input Modality
Martin Schmitz (Technical University of Darmstadt, Darmstadt, Germany)Sebastian Günther (TU Darmstadt, Darmstadt, Germany)Dominik Schön (TU Darmstadt, Darmstadt, Germany)Florian Müller (LMU, Munich, Germany)
From zooming on smartphones and mid-air gestures to deformable user interfaces, thumb-index pinching grips are used in many interaction techniques. However, there is still a lack of systematic understanding of how the accuracy and efficiency of such grips are affected by factors such as counterforce, grip span, and grip direction. Therefore, in this paper, we contribute an evaluation (N = 18) of thumb-index pinching performance in a visual targeting task using scales of up to 75 items. Based on our findings, we conclude that pinching between the thumb and index finger is a promising modality for one-dimensional input, even at higher scales. Furthermore, we discuss and outline implications for future user interfaces that benefit from pinching as an additional and complementary interaction modality.
1
Bivariate Effective Width Method to Improve the Normalization Capability for Subjective Speed-accuracy Biases in Rectangular-target Pointing
Shota Yamanaka (Yahoo Japan Corporation, Tokyo, Japan)Hiroki Usuba (Meiji University, Nakano, Tokyo, Japan)Homei Miyashita (Meiji University, Tokyo, Japan)
The effective width method of Fitts' law can normalize speed-accuracy biases in 1D target pointing tasks. However, in graphical user interfaces, targets are more realistically rectangular. To empirically determine the best way to normalize these subjective biases, we ran remote, crowdsourced user experiments with three speed-accuracy instructions. We propose normalizing the speed-accuracy biases by applying effective sizes to existing Fitts' law formulations that include both width W and height H. We call this target-size adjustment the bivariate effective width method. We found that, overall, Accot and Zhai's weighted Euclidean model using the effective width and height independently showed the best fit to the data in which the three instruction conditions were mixed (i.e., the time data measured under all instructions were analyzed with a single regression expression). Our approach enables researchers to fairly compare two or more conditions (e.g., devices, input techniques, user groups) using the normalized throughputs.
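For readers unfamiliar with the models involved, the following is a sketch based on the standard Fitts'-law literature (the paper's exact formulation may differ). The effective width method replaces nominal target sizes with sizes derived from the observed endpoint spread, and the bivariate version applies this to both dimensions of Accot and Zhai's weighted Euclidean model:

```latex
% Accot & Zhai's weighted Euclidean model with bivariate effective sizes;
% \sigma_x and \sigma_y are the standard deviations of the selection
% endpoints along the width and height axes, and \eta weights the height term.
\[
MT = a + b \log_2\!\left(\sqrt{\left(\frac{D}{W_e}\right)^{2}
     + \eta \left(\frac{D}{H_e}\right)^{2}} + 1\right),
\qquad W_e = 4.133\,\sigma_x, \quad H_e = 4.133\,\sigma_y
\]
```

The constant 4.133 comes from the conventional 96% hit-rate normalization used in the 1D effective width method.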
1
SonarID: Using Sonar to Identify Fingers on a Smartwatch
Jiwan Kim (UNIST, Ulsan, Korea, Republic of)Ian Oakley (UNIST, Ulsan, Korea, Republic of)
The diminutive size of wrist wearables has prompted the design of many novel input techniques to increase expressivity. Finger identification, or assigning different functionality to different fingers, has been frequently proposed. However, while the value of the technique seems clear, its implementation remains challenging, often relying on external devices (e.g., worn magnets) or explicit instructions. Addressing these limitations, this paper explores a novel approach to natural and unencumbered finger identification on an unmodified smartwatch: sonar. To do this, we adapt an existing finger tracking smartphone sonar implementation---rather than extract finger motion, we process raw sonar fingerprints representing the complete sonar scene recorded during a touch. We capture data from 16 participants operating a smartwatch and use their sonar fingerprints to train a deep learning recognizer that identifies taps by the thumb, index, and middle fingers with an accuracy of up to 93.7%, sufficient to support meaningful application development.
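Since the abstract describes training a deep learning recognizer on raw sonar fingerprints, a minimal classifier along those lines might look like the sketch below. It assumes each fingerprint is a single-channel 2D map; the paper's actual input representation and architecture are not given in the abstract, so this is only a plausible stand-in.

```python
# Minimal 3-class finger classifier over 2D sonar "fingerprints".
# Architecture and input shape are assumptions, not SonarID's design.
import torch
import torch.nn as nn

class FingerNet(nn.Module):
    def __init__(self, n_classes=3):        # thumb, index, middle
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),         # global average pooling
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                    # x: (batch, 1, H, W)
        return self.head(self.features(x).flatten(1))

logits = FingerNet()(torch.randn(8, 1, 64, 64))  # 8 dummy fingerprints
```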
1
Janus Screen: A Screen with Switchable Projection Surfaces Using Wire Grid Polarizer
Wataru Yamada (NTT DOCOMO, INC., Tokyo, Japan)Sawa Korogi (NTT DOCOMO, INC., Tokyo, Japan)Keiichi Ochiai (NTT DOCOMO, INC., Tokyo, Japan)
In this paper, we present a novel screen system that uses polarizers to switch the projection surface to the front, rear, or both sides of a screen using only two projectors on one side. The system combines two projectors equipped with polarizers and a multi-layered screen comprising an anti-reflective plate, a transparent screen, and a wire grid polarizer. The multi-layered screen shows the projected image on either the front or the rear side depending on the polarization direction of the incident light. Hence, the proposed method can project images on the front, rear, or both sides of the screen by projecting from either or both polarized projectors. In addition, the method can be deployed easily by simply attaching multiple optical films. We implement a prototype and confirm that the proposed method can selectively switch the projection surface.
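The routing logic can be summarized in a toy model: the wire grid polarizer passes light of one polarization (forming the rear image) and reflects the orthogonal polarization (forming the front image), so the visible surfaces follow directly from which polarized projectors are active. Which polarization maps to which side is an assumption here.

```python
# Toy model of the polarization routing; the mapping of each projector's
# polarization to a screen side is assumed for illustration.
def projection_surfaces(front_projector_on: bool, rear_projector_on: bool):
    """Projector polarized to reflect off the wire grid -> front image;
    projector polarized to pass through it -> rear image."""
    surfaces = set()
    if front_projector_on:
        surfaces.add("front")
    if rear_projector_on:
        surfaces.add("rear")
    return surfaces or {"none"}

print(projection_surfaces(True, True))   # {'front', 'rear'}: both sides lit
```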
1
Neo: Generalizing Confusion Matrix Visualization to Hierarchical and Multi-Output Labels
Jochen Görtler (University of Konstanz, Konstanz, Germany)Fred Hohman (Apple, Seattle, Washington, United States)Dominik Moritz (Apple, Pittsburgh, Pennsylvania, United States)Kanit Wongsuphasawat (Apple, Seattle, Washington, United States)Donghao Ren (Apple, Seattle, Washington, United States)Rahul Nair (Apple, Heidelberg, Germany)Marc Kirchner (Apple, Heidelberg, Germany)Kayur Patel (Apple, Seattle, Washington, United States)
The confusion matrix, a ubiquitous visualization for helping people evaluate machine learning models, is a tabular layout that compares predicted class labels against actual class labels over all data instances. We conduct formative research with machine learning practitioners at Apple and find that conventional confusion matrices do not support the more complex data structures found in modern-day applications, such as hierarchical and multi-output labels. To express such variations of confusion matrices, we design an algebra that models confusion matrices as probability distributions. Based on this algebra, we develop Neo, a visual analytics system that enables practitioners to flexibly author and interact with hierarchical and multi-output confusion matrices, visualize derived metrics, renormalize confusions, and share matrix specifications. Finally, we demonstrate Neo's utility with three model evaluation scenarios that help people better understand model performance and reveal hidden confusions.
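The abstract's key idea, modeling confusion matrices as probability distributions, makes hierarchical views a matter of marginalization: collapsing child classes into a parent just sums the corresponding cells. The sketch below illustrates this on an invented two-level hierarchy; Neo's actual algebra and API are richer than this.

```python
# Treat the confusion matrix as a joint distribution P(actual, predicted),
# then collapse child classes into parents by summing (marginalizing).
# Classes, hierarchy, and counts are invented for illustration.
import numpy as np

classes = ["cat", "dog", "car", "bus"]
parent = {"cat": "animal", "dog": "animal", "car": "vehicle", "bus": "vehicle"}

counts = np.array([[50.0,  5,  1,  0],
                   [ 4.0, 60,  0,  2],
                   [ 0.0,  1, 70,  9],
                   [ 1.0,  0,  8, 55]])
joint = counts / counts.sum()                      # P(actual, predicted)

groups = sorted(set(parent.values()))              # ['animal', 'vehicle']
idx = {g: [i for i, c in enumerate(classes) if parent[c] == g] for g in groups}
collapsed = np.array([[joint[np.ix_(idx[a], idx[p])].sum() for p in groups]
                      for a in groups])            # 2x2 parent-level matrix
```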
1
Interactive Robotic Plastering: Augmented Interactive Design and Fabrication for On-site Robotic Plastering
Daniela Mitterberger (ETH Zurich, Zürich, Switzerland)Selen Ercan Jenny (ETH Zurich, Zürich, Switzerland)Lauren B. Vasey (ETH Zurich, Zürich, Switzerland)Ena Lloret-Fritschi (ETH Zurich, Zürich, Switzerland)Petrus Aejmelaeus-Lindström (ETH Zurich, Zürich, Switzerland)Fabio Gramazio (ETH Zurich, Zürich, Switzerland)Matthias Kohler (ETH Zurich, Zürich, Switzerland)
This paper presents Interactive Robotic Plastering (IRoP), a system enabling designers and skilled workers to engage intuitively with an in-situ robotic plastering process. The research combines three elements: interactive design tools, an augmented reality interface, and a robotic spraying system. Plastering is a complex process relying on tacit knowledge and craftsmanship, making it difficult to simulate and automate. Our system, however, uses a controller-based interaction system to enable diverse users to interactively create articulated plasterwork in-situ. A customizable computational toolset converts human intentions into robotic motions while respecting robotic and material constraints. To accomplish this, we developed an interactive computational model that translates data from a motion-tracking system into robotic trajectories using design and editing tools, as well as an audio-visual guidance system for in-situ projection. We then conducted two user studies in which designers and skilled workers used IRoP to design and fabricate a full-scale demonstrator.
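One ingredient the abstract names, translating motion-tracking data into robotic trajectories under robotic and material constraints, can be caricatured as smoothing the tracked path and clamping it into the robot's reachable workspace. The sketch below does exactly that; the moving-average smoothing and axis-aligned workspace box are invented placeholders, not IRoP's computational model.

```python
import numpy as np

def to_spray_path(tracked_xyz, workspace_min, workspace_max, window=5):
    """Smooth noisy tracked controller positions with a moving average,
    then clamp each point into an (illustrative) box-shaped workspace."""
    pts = np.asarray(tracked_xyz, dtype=float)     # shape (n, 3)
    kernel = np.ones(window) / window
    smoothed = np.column_stack(
        [np.convolve(pts[:, i], kernel, mode="same") for i in range(3)])
    return np.clip(smoothed, workspace_min, workspace_max)
```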
1
Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces
Ryo Suzuki (University of Calgary, Calgary, Alberta, Canada)Adnan Karim (University of Calgary, Calgary, Alberta, Canada)Tian Xia (University of Calgary, Calgary, Alberta, Canada)Hooman Hedayati (University of Colorado Boulder, Boulder, Colorado, United States)Nicolai Marquardt (University College London, London, United Kingdom)
This paper contributes a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, research often remains focused on individual explorations, and key design strategies and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field along the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.
1
Understanding How People with Limited Mobility Use Multi-Modal Input
Johann Wentzel (University of Waterloo, Waterloo, Ontario, Canada)Sasa Junuzovic (Microsoft Research, Redmond, Washington, United States)James Devine (Microsoft Research, Cambridge, Cambridgeshire, United Kingdom)John R. Porter (Microsoft, Lynnwood, Washington, United States)Martez E. Mott (Microsoft Research, Redmond, Washington, United States)
People with limited mobility often use multiple devices when interacting with computing systems, but little is known about the impact these multi-modal configurations have on daily computing use. A deeper understanding of the practices, preferences, obstacles, and workarounds associated with accessible multi-modal input can uncover opportunities to create more accessible computer applications and hardware. We explored how people with limited mobility use multi-modality through a three-part investigation grounded in the context of video games. First, we surveyed 43 people to learn about their preferred devices and configurations. Next, we conducted semi-structured interviews with 14 participants to understand their experiences and challenges with using, configuring, and discovering input setups. Lastly, we performed a systematic review of 74 YouTube videos to illustrate and categorize input setups and adaptations in-situ. We conclude with a discussion on how our findings can inform future accessibility research for current and emerging computing technologies.
1
STRAIDE: A Research Platform for Shape-Changing Spatial Displays based on Actuated Strings
Severin Engert (Technische Universität Dresden, Dresden, Germany)Konstantin Klamka (Technische Universität Dresden, Dresden, Germany)Andreas Peetz (Technische Universität Dresden, Dresden, Germany)Raimund Dachselt (Technische Universität Dresden, Dresden, Germany)
We present STRAIDE, a string-actuated interactive display environment for exploring the promising potential of shape-changing interfaces for casual visualizations. At its core, we envision a platform that spatially levitates elements to create dynamic visual shapes in space. We conceptualize this type of tangible mid-air display and discuss its multifaceted design dimensions. Through a design exploration, we realize a physical research platform with adjustable parameters and modular components. To make designing and implementing novel applications convenient, we provide developer tools ranging from graphical emulators to in-situ augmented reality representations. To demonstrate STRAIDE's reconfigurability, we further introduce three representative physical setups as a basis for situated applications, including ambient notifications, personal smart home controls, and entertainment. They serve as a technical validation, ground a discussion with developers that yielded valuable insights, and encourage ideas for future use of this type of appealing interactive installation.
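Because STRAIDE positions elements by winding strings from above, the lowest-level control problem is mapping a desired element height to a string length and then to motor steps. The sketch below shows that mapping under invented constants; the platform's real parameters and developer tools are not specified in the abstract.

```python
# Hypothetical height-to-motor-steps mapping for one string-levitated
# element; all constants are invented for illustration.
def height_to_steps(target_height_m, ceiling_height_m=3.0,
                    spool_circumference_m=0.1, steps_per_rev=200):
    """String let out equals the distance from the ceiling to the element."""
    string_length = ceiling_height_m - target_height_m
    revolutions = string_length / spool_circumference_m
    return round(revolutions * steps_per_rev)

steps = [height_to_steps(h) for h in (0.5, 1.0, 1.5)]  # three elements
```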
1
Mindsets Matter: How Beliefs About Facebook Moderate the Association Between Time Spent and Well-Being
Sindhu Kiranmai Ernala (Georgia Institute of Technology, Atlanta, Georgia, United States)Moira Burke (Facebook, Menlo Park, California, United States)Alex Leavitt (Facebook, Menlo Park, California, United States)Nicole B. Ellison (University of Michigan, Ann Arbor, Michigan, United States)
"Time spent on platform" is a widely used measure in studies of social media use and well-being, yet the current literature presents unresolved findings about the relationship between time on platform and well-being. In this paper, we consider the moderating effect of people's mindsets about social media: whether they think a platform is good or bad for themselves and for society more generally. Combining survey responses from 29,284 participants in 15 countries with server-logged data of Facebook use, we found that when people thought Facebook was good for them and for society, time spent on the platform was not significantly associated with well-being. Conversely, when they thought Facebook was bad, greater time spent was associated with lower well-being. On average, there was a small negative correlation between time spent and well-being, and the causal direction is not known. Beliefs had a stronger moderating relationship when time-spent measures were self-reported rather than derived from server logs. We discuss potential mechanisms for these results and implications for future research on well-being and social media use.
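Statistically, the abstract describes a moderation analysis: the slope relating time spent to well-being is allowed to depend on mindset, which corresponds to an interaction term in a regression. The sketch below fits such a model on synthetic data; variable names and the toy data-generating process are invented, and the study's actual models include far more controls.

```python
# Moderation analysis sketch: does mindset change the time-spent slope?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "time_spent": rng.exponential(60, n),   # minutes per day (synthetic)
    "mindset": rng.uniform(-1, 1, n),       # -1 = "bad for me" .. +1 = "good"
})
# Synthetic outcome: time spent hurts well-being only under a negative mindset.
df["well_being"] = (5 - 0.01 * df["time_spent"] * (df["mindset"] < 0)
                    + rng.normal(0, 1, n))

# The time_spent:mindset interaction term captures the moderation effect.
model = smf.ols("well_being ~ time_spent * mindset", data=df).fit()
print(model.params)
```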
1
Understanding Gesture Input Articulation with Upper-Body Wearables for Users with Upper-Body Motor Impairments
Radu-Daniel Vatavu (Ștefan cel Mare University of Suceava, Suceava, Romania)Ovidiu-Ciprian Ungurean (Ștefan cel Mare University of Suceava, Suceava, Romania)
We examine touchscreen stroke-gestures and mid-air motion-gestures articulated by users with upper-body motor impairments with devices worn on the wrist, finger, and head. We analyze users' gesture input performance in terms of production time, articulation consistency, and kinematic measures, and contrast the performance of users with upper-body motor impairments with that of a control group of users without impairments. Our results, from two datasets of 7,290 stroke-gestures and 3,809 motion-gestures collected from 28 participants, reveal that users with upper-body motor impairments take twice as much time to produce stroke-gestures on wearable touchscreens compared to users without impairments, but articulate motion-gestures equally fast and with similar acceleration. We interpret our findings in the context of ability-based design and propose ten implications for accessible gesture input with upper-body wearables for users with upper-body motor impairments.
1
ReCompFig: Designing Dynamically Reconfigurable Kinematic Devices Using Compliant Mechanisms and Tensioning Cables
Humphrey Yang (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Tate Johnson (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Ke Zhong (Carnegie Mellon University , Pittsburgh , Pennsylvania, United States)Dinesh K. Patel (Carnegie Mellon University , Pittsburgh , Pennsylvania, United States)Gina Olson (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Carmel Majidi (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Mohammad Islam (Materials Science and Engineering, Pittsburgh, Pennsylvania, United States)Lining Yao (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
From creating input devices to rendering tangible information, the field of HCI is interested in using kinematic mechanisms to create human-computer interfaces. Yet, due to fabrication and design challenges, it is often difficult to create kinematic devices that are compact yet offer multiple motional degrees of freedom (DOFs) that can be reconfigured for different interaction scenarios. In this work, we combine compliant mechanisms (CMs) with tensioning cables to create dynamically reconfigurable kinematic mechanisms. A device's kinematics (its DOFs) is enabled and determined by the layout of its bendable rods. The additional cables function as on-demand motion constraints that dynamically lock or unlock the mechanism's DOFs as they are tightened or loosened. We provide algorithms and a design-tool prototype to help users design such kinematic devices. We also demonstrate various HCI use cases, including a kinematic haptic display, a haptic proxy, and a multimodal input device.
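The reconfiguration principle, cables acting as on-demand constraints that lock or unlock DOFs, can be captured in a toy state model: each tensioned cable removes a subset of DOFs from the set of free motions. The DOF names and cable-to-DOF mapping below are invented for illustration.

```python
# Toy model of cable-driven DOF locking; names and mapping are hypothetical.
ALL_DOFS = {"bend_x", "bend_y", "twist"}
CABLE_LOCKS = {"cable_1": {"bend_x"}, "cable_2": {"bend_y", "twist"}}

def active_dofs(tensioned_cables):
    """DOFs that remain free once the given cables are tightened."""
    locked = set()
    for cable in tensioned_cables:
        locked |= CABLE_LOCKS[cable]
    return ALL_DOFS - locked

print(active_dofs({"cable_2"}))   # {'bend_x'}: bending in x remains free
```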