The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

O&O: A DIY toolkit for designing and rapid prototyping olfactory interfaces
Yuxuan Lei (Tsinghua University, Beijing, China), Qi Lu (Tsinghua University, Beijing, China), Yingqing Xu (Tsinghua University, Beijing, China)
Constructing olfactory interfaces on demand requires significant design proficiency and engineering effort, and the absence of powerful, convenient tools that reduce this complexity poses an obstacle to future research in the area. To address this problem, we propose O&O, a modular DIY toolkit for olfactory interfaces. The toolkit consists of: (1) a scent generation kit, a set of electronics and accessories supporting three common scent vaporization techniques; (2) a module construction kit, a set of primitive cardboard modules for assembling permutable functional structures; and (3) a design manual, a step-by-step design thinking framework that guides the decision-making and prototyping process. We organized a formal workshop with 19 participants and four solo DIY trials to evaluate the capability of the toolkit, overall user engagement, the creations from both sessions, and suggestions for iteration. Finally, we discuss design implications and opportunities for further research.
Shaping Textile Sliders: An Evaluation of Form Factors and Tick Marks for Textile Sliders
Oliver Nowak (RWTH Aachen University, Aachen, Germany), René Schäfer (RWTH Aachen University, Aachen, Germany), Anke Brocker (RWTH Aachen University, Aachen, Germany), Philipp Wacker (RWTH Aachen University, Aachen, Germany), Jan Borchers (RWTH Aachen University, Aachen, Germany)
Textile interfaces enable designers to integrate unobtrusive media and smart home controls into furniture such as sofas. While the technical aspects of such controllers have been the subject of numerous research projects, the physical form factor of these controls has received little attention so far. This work investigates how general design properties, such as overall slider shape, raised vs. recessed sliders, and number and layout of tick marks, affect users' preferences and performance. Our first user study identified a preference for certain design combinations, such as recessed, closed-shaped sliders. Our second user study included performance measurements on variations of the preferred designs from study 1, and took a closer look at tick marks. Tick marks supported orientation better than slider shape. Sliders with at least three tick marks were preferred, and performed well. Non-uniform, equally distributed tick marks reduced the movements users needed to orient themselves on the slider.
A Layered Authoring Tool for Stylized 3D Animations
Jiaju Ma (Brown University, Providence, Rhode Island, United States), Li-Yi Wei (Adobe Research, San Jose, California, United States), Rubaiat Habib Kazi (Adobe Research, Seattle, Washington, United States)
Guided by the 12 principles of animation, stylization is a core 2D animation feature but has been utilized mainly by experienced animators. Although there are tools for stylizing 2D animations, creating stylized 3D animations remains a challenging problem due to the additional spatial dimension and the need for responsive actions like contact and collision. We propose a system that helps users create stylized casual 3D animations. A layered authoring interface is employed to balance ease of use and expressiveness. Our surface-level UI is a timeline sequencer that lets users add preset stylization effects, such as squash-and-stretch and follow-through, to plain motions. Users can adjust spatial and temporal parameters to fine-tune these stylizations. These edits are propagated to our node-graph-based second-level UI, in which users can create custom stylizations once they are comfortable with the surface-level UI. Our system also enables the stylization of interactions among multiple objects, like force, energy, and collision. A pilot user study showed that our fluid layered UI design allows for both ease of use and expressiveness better than existing tools.
Does Dynamically Drawn Text Improve Learning? Investigating the Effect of Text Presentation Styles in Video Learning
Ashwin Ram (National University of Singapore, Singapore, Singapore), Shengdong Zhao (National University of Singapore, Singapore, Singapore)
Dynamically drawn content (e.g., handwritten text) in learning videos is believed to improve users’ engagement and learning over static, PowerPoint-based presentations. However, evidence from the existing literature is inconclusive. With the emergence of Optical Head-Mounted Displays (OHMDs), recent work has shown that video learning can be adapted for on-the-go scenarios. To better understand the role of dynamic drawing, we decoupled dynamically drawn text into two factors (font style and motion of appearance) and studied their impact on learning performance under two usage scenarios (seated with a desktop and walking with an OHMD). We found that although letter-traced text was more engaging for some users, most preferred learning with typeface text that displayed the entire word at once, and achieved better recall (46.7% higher) with it, regardless of usage scenario. Insights from the studies can better inform designers on how to present text in videos for ubiquitous access.
FabricatINK: Personal Fabrication of Bespoke Displays Using Electronic Ink from Upcycled E-readers
Ollie Hanton (University of Bristol, Bristol, United Kingdom), Zichao Shen (University of Bristol, Bristol, United Kingdom), Mike Fraser (University of Bath, Bath, United Kingdom), Anne Roudaut (University of Bristol, Bristol, United Kingdom)
FabricatINK explores the personal fabrication of irregularly-shaped low-power displays using electronic ink (E ink). E ink is a programmable bicolour material used in traditional form factors such as E-readers. It has potential for more versatile use within the scope of personal fabrication of custom-shaped displays, and it has the promise to be the pre-eminent material choice for this purpose. We appraise technical literature to identify properties of E ink suited to fabrication. We identify a key roadblock, universal access to E ink as a material, and we deliver a method to circumvent this by upcycling broken electronics. We subsequently present a novel fabrication method for irregularly-shaped E ink displays. We demonstrate our fabrication process and E ink's versatility through ten prototypes showing different applications and use cases. By addressing E ink as a material for display fabrication, we uncover the potential for users to create custom-shaped, truly bistable displays.
"I don't want to feel like I'm working in a 1960s factory": The Practitioner Perspective on Creativity Support Tool Adoption
Srishti Palani (Autodesk Research, Toronto, Ontario, Canada), David Ledo (Autodesk Research, Toronto, Ontario, Canada), George Fitzmaurice (Autodesk Research, Toronto, Ontario, Canada), Fraser Anderson (Autodesk Research, Toronto, Ontario, Canada)
With the rapid development of creativity support tools, creative practitioners (e.g., designers, artists, architects) have to constantly explore and adopt new tools into their practice. While HCI research has focused on developing novel creativity support tools, little is known about creative practitioners' values when exploring and adopting these tools. We collect and analyze 23 videos, 13 interviews, and 105 survey responses of creative practitioners reflecting on their values to derive a value framework. We find that practitioners value the tools' functionality, integration into their current workflow, performance, user interface and experience, learning support, costs, and emotional connection, in that order. They largely discover tools through personal recommendations. To help unify and encourage reflection from the wider community of CST stakeholders (e.g., systems creators, researchers, marketers, educators), we situate the framework within existing research on systems, creativity support tools, and technology adoption.
Supercharging Trial-and-Error for Learning Complex Software Applications
Damien Masson (Autodesk Research, Toronto, Ontario, Canada), Jo Vermeulen (Autodesk Research, Toronto, Ontario, Canada), George Fitzmaurice (Autodesk Research, Toronto, Ontario, Canada), Justin Matejka (Autodesk Research, Toronto, Ontario, Canada)
Despite an abundance of carefully-crafted tutorials, trial-and-error remains many people’s preferred way to learn complex software. Yet, approaches to facilitate trial-and-error (such as tooltips) have evolved very little since the 1980s. While existing mechanisms work well for simple software, they scale poorly to large feature-rich applications. In this paper, we explore new techniques to support trial-and-error in complex applications. We identify key benefits and challenges of trial-and-error, and introduce a framework with a conceptual model and design space. Using this framework, we developed three techniques: ToolTrack to keep track of trial-and-error progress; ToolTrip to go beyond trial-and-error of single commands by highlighting related commands that are frequently used together; and ToolTaste to quickly and safely try commands. We demonstrate how these techniques facilitate trial-and-error, as illustrated through a proof-of-concept implementation in the CAD software Fusion 360. We conclude by discussing possible scenarios and outline directions for future research on trial-and-error.
Mobile-Friendly Content Design for MOOCs: Challenges, Requirements, and Design Opportunities
Jeongyeon Kim (KAIST, Daejeon, Korea, Republic of), Yubin Choi (KAIST, Daejeon, Korea, Republic of), Meng Xia (KAIST, Daejeon, Korea, Republic of), Juho Kim (KAIST, Daejeon, Korea, Republic of)
Most video-based learning content is designed for desktops without considering mobile environments. We (1) investigate the gap between mobile learners’ challenges and video engineers’ considerations using mixed methods and (2) provide design guidelines for creating mobile-friendly MOOC videos. To uncover learners’ challenges, we conducted a survey (n=134) and interviews (n=21), and evaluated the mobile adequacy of current MOOCs by analyzing 41,722 video frames from 101 video lectures. Interview results revealed low readability and situationally-induced impairments as major challenges. The content analysis showed a low guideline compliance rate for key design factors. We then interviewed 11 video production engineers to investigate the design factors they consider. The engineers focus mainly on the size and amount of content, while lacking consideration for color, complex images, and situationally-induced impairments. Finally, we present and validate guidelines for designing mobile-friendly MOOCs, such as providing adaptive and customizable visual design and context-aware accessibility support.
immersivePOV: Filming How-To Videos with a Head-Mounted 360° Action Camera
Kevin Huang (University of Toronto, Toronto, Ontario, Canada), Jiannan Li (University of Toronto, Toronto, Ontario, Canada), Mauricio Sousa (University of Toronto, Toronto, Ontario, Canada), Tovi Grossman (University of Toronto, Toronto, Ontario, Canada)
How-to videos are often shot using camera angles that may not be optimal for learning motor tasks, with a prevalent use of the third-person perspective. We present immersivePOV, an approach to film how-to videos from an immersive first-person perspective using a head-mounted 360° action camera. immersivePOV how-to videos can be viewed in a Virtual Reality headset, giving the viewer an eye-level viewpoint with three Degrees of Freedom. We evaluated our approach with two everyday motor tasks against a baseline first-person perspective and a third-person perspective. In a between-subjects study, participants were assigned to watch the task videos and then replicate the tasks. Results suggest that immersivePOV reduced perceived cognitive load and facilitated task learning. We discuss how immersivePOV can also streamline the video production process for content creators. Altogether, we conclude that immersivePOV is an effective approach to film how-to videos for learners and content creators alike.
Do You See What You Mean? Using Predictive Visualizations to Reduce Optimism in Duration Estimates
Morgane Koval (CNRS, ISIR, Paris, France), Yvonne Jansen (Sorbonne Université, CNRS, ISIR, Paris, France)
Making time estimates, such as how long a given task might take, frequently leads to inaccurate predictions because of an optimistic bias. Previous attempts to alleviate this bias, including decomposing the task into smaller components and listing potential surprises, have not shown any major improvement. This article builds on the premise that these procedures may have failed because they involve compound probabilities and mixture distributions which are difficult to compute in one's head. We hypothesize that predictive visualizations of such distributions would facilitate the estimation of task durations. We conducted a crowdsourced study in which 145 participants provided different estimates of overall and sub-task durations and we used these to generate predictive visualizations of the resulting mixture distributions. We compared participants' initial estimates with their updated ones and found compelling evidence that predictive visualizations encourage less optimistic estimates.
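The premise above, that mixture distributions arising from sub-task estimates and occasional surprises are hard to compute in one's head, is easy to appreciate with a small Monte Carlo sketch. The sub-task numbers, surprise probability, and function name below are hypothetical illustrations, not values from the study:

```python
import random

def simulate_total_duration(subtasks, surprises, n=10_000):
    """Monte Carlo sketch: sample each sub-task's duration and
    occasionally add a 'surprise' delay, yielding the mixture
    distribution of total task duration that people struggle
    to estimate mentally."""
    totals = []
    for _ in range(n):
        # random.triangular takes (low, high, mode)
        total = sum(random.triangular(lo, hi, mode)
                    for lo, mode, hi in subtasks)
        for prob, delay in surprises:
            if random.random() < prob:  # surprise occurs with this probability
                total += delay
        totals.append(total)
    return sorted(totals)

# Hypothetical task: three sub-tasks given as (min, mode, max) hours,
# plus one 20%-likely surprise costing 2 extra hours.
totals = simulate_total_duration(
    subtasks=[(1, 2, 4), (0.5, 1, 2), (1, 1.5, 3)],
    surprises=[(0.2, 2.0)],
)
median = totals[len(totals) // 2]
p90 = totals[int(len(totals) * 0.9)]
```

Visualizing the resulting distribution (rather than reporting a single number) is what lets an estimator see how far the optimistic mode sits from the right tail.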
First Steps Towards Designing Electrotactons: Investigating Intensity and Pulse Frequency as Parameters for Electrotactile Cues.
Yosuef Alotaibi (University of Glasgow, Glasgow, United Kingdom), John H. Williamson (University of Glasgow, Glasgow, United Kingdom), Stephen Anthony Brewster (University of Glasgow, Glasgow, United Kingdom)
Electrotactile stimulation is a novel form of haptic feedback. There is little work investigating its basic design parameters and how they create effective tactile cues. This paper describes two experiments that extend our knowledge of two key parameters. The first investigated the combination of pulse width and amplitude Intensity on sensations of urgency, annoyance, valence and arousal. Results showed significant effects: increasing Intensity caused higher ratings of urgency, annoyance and arousal but reduced valence. We established clear levels for differentiating each sensation. A second study then investigated Intensity and Pulse Frequency to find out how many distinguishable levels could be perceived. Results showed that both Intensity and Pulse Frequency significantly affected perception, with four distinguishable levels of Intensity and two of Pulse Frequency. These results add significant new knowledge about the parameter space of electrotactile cue design and help designers select suitable properties to use when creating electrotactile cues.
Paper Trail: An Immersive Authoring System for Augmented Reality Instructional Experiences
Shwetha Rajaram (University of Michigan, Ann Arbor, Michigan, United States), Michael Nebeling (University of Michigan, Ann Arbor, Michigan, United States)
Prior work has demonstrated augmented reality's benefits to education, but current tools are difficult to integrate with traditional instructional methods. We present Paper Trail, an immersive authoring system designed to explore how to enable instructors to create AR educational experiences, leaving paper at the core of the interaction and enhancing it with various forms of digital media, animations for dynamic illustrations, and clipping masks to guide learning. To inform the system design, we developed five scenarios exploring the benefits that hand-held and head-worn AR can bring to STEM instruction, and developed a design space of AR interactions enhancing paper based on these scenarios and prior work. Using the example of an AR physics handout, we assessed the system's potential with PhD-level instructors and its usability with XR design experts. In an elicitation study with high-school teachers, we studied how Paper Trail could be used and extended to enable flexible use cases across various domains. We discuss the benefits of immersive paper for supporting diverse student needs and the challenges of making effective use of AR for learning.
Prevalence and Salience of Problematic Microtransactions in Top-Grossing Mobile and PC Games: A Content Analysis of User Reviews
Elena Petrovskaya (University of York, York, United Kingdom), Sebastian Deterding (University of York, York, United Kingdom), David I. Zendle (University of York, York, North Yorkshire, United Kingdom)
Microtransactions have become a major monetisation model in digital games, shaping their design, impacting their player experience, and raising ethical concerns. Research in this area has chiefly focused on loot boxes, which begs the question whether other microtransactions might actually be more relevant and problematic for players. We therefore conducted a content analysis of negative player reviews (n=801) of top-grossing mobile and desktop games to determine which problematic microtransactions are most prevalent and salient for players. We found that problematic microtransactions are widespread, with mobile games featuring more frequent and more varied techniques than desktop games. Across both platforms, players minded issues related to fairness, transparency, and degraded user experience, supporting prior theoretical work, and importantly took issue with monetisation-driven design as such. We identify needs for future research on why microtransactions in particular spark this critique, and which player communities it may be more or less representative of.
Personal Dream Informatics: A Self-Information Systems Model of Dream Engagement
Michael Jeffrey Daniel Hoefer (University of Colorado Boulder, Boulder, Colorado, United States), Bryce E. Schumacher (University of Colorado Boulder, Boulder, Colorado, United States), Stephen Voida (University of Colorado Boulder, Boulder, Colorado, United States)
We present the research area of personal dream informatics: studying the self-information systems that support dream engagement and communication between the dreaming self and the wakeful self. Through a survey study of 281 individuals primarily recruited from an online community dedicated to dreaming, we develop a dream-information systems view of dreaming and dream tracking as a type of self-information system. While dream-information systems are characterized by diverse tracking processes, motivations, and outcomes, they are universally constrained by the ephemeral dreamset - the short period of time between waking up and rapid memory loss of dream experiences. By developing a system dynamics model of dreaming we highlight feedback loops that serve as high leverage points for technology designers, and suggest a variety of design considerations for crafting technology that best supports dream recall, dream tracking, and dreamwork for nightmare relief and personal development.
Switching Between Standard Pointing Methods with Current and Emerging Computer Form Factors
Margaret Jean Foley (University of Waterloo, Waterloo, Ontario, Canada), Quentin Roy (University of Waterloo, Waterloo, Ontario, Canada), Da-Yuan Huang (Huawei Canada, Markham, Ontario, Canada), Wei Li (Huawei Canada, Markham, Ontario, Canada), Daniel Vogel (University of Waterloo, Waterloo, Ontario, Canada)
We investigate performance characteristics when switching between four pointing methods: absolute touch, absolute pen, relative mouse, and relative trackpad. The established "subtraction method" protocol used in mode-switching studies is extended to test pairs of methods and accommodate switch direction, multiple baselines, and controlling relative cursor position. A first experiment examines method switching on and around the horizontal surface of a tablet. Results find switching between pen and touch is fastest, and switching between relative and absolute methods incurs additional time penalty. A second experiment expands the investigation to an emerging foldable all-screen laptop form factor where switching also occurs on an angled surface and along a smoothly curved hinge. Results find switching between trackpad and touch is fastest, with all switching times generally higher. Our work contributes missing empirical evidence for switching performance using modern input methods, and our results can inform interaction design for current and emerging device form factors.
Paracentral and near-peripheral visualizations: Towards attention-maintaining secondary information presentation on OHMDs during in-person social interactions
Nuwan Nanayakkarawasam Peru Kandage Janaka (National University of Singapore, Singapore, Singapore), Chloe Haigh (National University of Singapore, Singapore, Singapore), Hyeongcheol Kim (National University of Singapore, Singapore, Singapore), Shan Zhang (National University of Singapore, Singapore, Singapore), Shengdong Zhao (National University of Singapore, Singapore, Singapore)
Optical see-through Head-Mounted Displays (OST HMDs, OHMDs) are known to facilitate situational awareness while accessing secondary information. However, information displayed on OHMDs can cause attention shifts, which distract users from natural social interactions. We hypothesize that information displayed in paracentral and near-peripheral vision can be better perceived while the user is maintaining eye contact during face-to-face conversations. Leveraging this idea, we designed a circular progress bar to provide progress updates in paracentral and near-peripheral vision. We compared it with textual and linear progress bars under two conversation settings: a simulated one with a digital conversation partner and a realistic one with a real partner. Results show that a circular progress bar can effectively reduce notification distractions without losing eye contact and is more preferred by users. Our findings highlight the potential of utilizing the paracentral and near-peripheral vision for secondary information presentation on OHMDs.
Get To The Point! Problem-Based Curated Data Views To Augment Care For Critically Ill Patients
Minfan Zhang (University of Toronto, Toronto, Ontario, Canada), Daniel Ehrmann (Hospital for Sick Children, Toronto, Ontario, Canada), Mjaye Mazwi (Hospital for Sick Children, Toronto, Ontario, Canada), Danny Eytan (Hospital for Sick Children, Toronto, Ontario, Canada), Marzyeh Ghassemi (MIT, Cambridge, Massachusetts, United States), Fanny Chevalier (University of Toronto, Toronto, Ontario, Canada)
Electronic health records in critical care medicine offer unprecedented opportunities for clinical reasoning and decision making. Paradoxically, these data-rich environments have also resulted in clinical decision support systems (CDSSs) that fit poorly into clinical contexts and increase health workers' cognitive load. In this paper, we introduce a novel approach to designing CDSSs that are embedded in clinical workflows, by presenting problem-based curated data views tailored for problem-driven discovery, team communication, and situational awareness. We describe the design and evaluation of one such CDSS, In-Sight, that embodies our approach and addresses the clinical problem of monitoring critically ill pediatric patients. Our work is the result of a co-design process, further informed by empirical data collected through formal usability testing, focus groups, and a simulation study with domain experts. We discuss the potential and limitations of our approach, and share lessons learned in our iterative co-design process.
"Chat Has No Chill": A Novel Physiological Interaction for Engaging Live Streaming Audiences
Raquel Breejon Robinson (University of Saskatchewan, Saskatoon, Saskatchewan, Canada), Ricardo Rheeder (University of Saskatchewan, Saskatoon, Saskatchewan, Canada), Madison Klarkowski (University of Saskatchewan, Saskatoon, Saskatchewan, Canada), Regan L. Mandryk (University of Saskatchewan, Saskatoon, Saskatchewan, Canada)
Now more than ever, people are using online platforms to communicate. Twitch, the foremost platform for live game streaming, offers many communication modalities. However, the platform lacks representation of social cues and signals of the audience experience, which are innately present in live events. To address this, we present a technology probe that captures the audience energy and response in a game streaming context. We designed a game and integrated a custom-communication modality—Commons Sense—in which the audience members' heart rates are sensed via webcam, averaged, and fed into a video game to affect sound, lighting, and difficulty. We conducted an `in-the-wild' evaluation with four Twitch streamers and their audience members (N=55) to understand how these groups interacted through Commons Sense. Audience members and streamers indicated high levels of enjoyment and engagement with Commons Sense, suggesting the potential of physiological interaction as a beneficial communication tool in live streaming.
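The core signal path of a probe like Commons Sense can be sketched in a few lines: average the sensed heart rates, then map the mean onto a normalized "energy" value for the game to consume when scaling sound, lighting, and difficulty. The linear mapping and the resting/peak constants below are illustrative assumptions, not the authors' implementation:

```python
def audience_energy(heart_rates, resting=60.0, peak=120.0):
    """Map the audience's mean heart rate (BPM) onto a 0..1 'energy'
    value a game could consume. A sketch of the averaged-physiology
    idea; the linear mapping and constants are assumptions."""
    if not heart_rates:
        return 0.0  # no audience data sensed yet
    mean_hr = sum(heart_rates) / len(heart_rates)
    energy = (mean_hr - resting) / (peak - resting)
    return max(0.0, min(1.0, energy))  # clamp to [0, 1]
```

A game loop would poll this value each frame, e.g. `difficulty = base * (1 + audience_energy(latest_readings))`, so a calm audience eases the game and an excited one intensifies it.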
Structure-aware Visualization Retrieval
Haotian Li (The Hong Kong University of Science and Technology, Hong Kong, China), Yong Wang (Singapore Management University, Singapore, Singapore), Aoyu Wu (Hong Kong University of Science and Technology, Hong Kong, China), Huan Wei (The Hong Kong University of Science and Technology, Hong Kong, China), Huamin Qu (The Hong Kong University of Science and Technology, Hong Kong, China)
With the wide usage of data visualizations, a huge number of Scalable Vector Graphic (SVG)-based visualizations have been created and shared online. Accordingly, there has been an increasing interest in exploring how to retrieve perceptually similar visualizations from a large corpus, since it can benefit various downstream applications such as visualization recommendation. Existing methods mainly focus on the visual appearance of visualizations by regarding them as bitmap images. However, the structural information intrinsically existing in SVG-based visualizations is ignored. Such structural information can delineate the spatial and hierarchical relationship among visual elements, and characterize visualizations thoroughly from a new perspective. This paper presents a structure-aware method to advance the performance of visualization retrieval by collectively considering both the visual and structural information. We extensively evaluated our approach through quantitative comparisons, a user study and case studies. The results demonstrate the effectiveness of our approach and its advantages over existing methods.
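A minimal sketch of the central idea, ranking candidates by both visual and structural similarity, might fuse two embedding similarities with a weight. The late-fusion scheme, the dictionary layout, and the `alpha` parameter below are illustrative assumptions, not the authors' actual model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieval_score(query, candidate, alpha=0.5):
    """Score a candidate visualization against a query by combining
    visual (bitmap-derived) and structural (SVG-tree-derived)
    similarity. alpha weights the two channels; 0.5 treats them
    equally (an assumption for illustration)."""
    vis = cosine(query["visual"], candidate["visual"])
    struct = cosine(query["structure"], candidate["structure"])
    return alpha * vis + (1 - alpha) * struct
```

Retrieval then reduces to sorting a corpus by `retrieval_score` against the query; the paper's contribution is precisely that the structural channel catches matches the purely visual channel misses.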
FitVid: Responsive and Flexible Video Content Adaptation
Jeongyeon Kim (KAIST, Daejeon, Korea, Republic of), Yubin Choi (KAIST, Daejeon, Korea, Republic of), Minsuk Kahng (Oregon State University, Corvallis, Oregon, United States), Juho Kim (KAIST, Daejeon, Korea, Republic of)
Mobile video-based learning attracts many learners with its mobility and ease of access. However, most lectures are designed for desktops. Our formative study reveals mobile learners' two major needs: more readable content and customizable video design. To support mobile-optimized learning, we present FitVid, a system that provides responsive and customizable video content. Our system consists of (1) an adaptation pipeline that reverse-engineers pixels to retrieve design elements (e.g., text, images) from videos, leveraging deep learning with a custom dataset, which powers (2) a UI that enables resizing, repositioning, and toggling in-video elements. The content adaptation improves the guideline compliance rate by 24% and 8% for word count and font size. The content evaluation study (n=198) shows that the adaptation significantly increases readability and user satisfaction. The user study (n=31) indicates that FitVid significantly improves learning experience, interactivity, and concentration. We discuss design implications for responsive and customizable video adaptation.
Exploring Spatial UI Transition Mechanisms with Head-Worn Augmented Reality
Feiyu Lu (Virginia Tech, Blacksburg, Virginia, United States), Yan Xu (Facebook, Redmond, Washington, United States)
Imagine a future in which people comfortably wear augmented reality (AR) displays all day: how do we design interfaces that adapt to contextual changes as people move around? In current operating systems, most AR content defaults to staying at a fixed location until manually moved by the user. This approach, however, puts the burden of user interface (UI) transitions solely on users. In this paper, we first ran a bodystorming design workshop to capture the limitations of existing manual UI transition approaches in spatially diverse tasks. We then addressed these limitations by designing and evaluating three UI transition mechanisms with different levels of automation and controllability (low-effort manual, semi-automated, fully automated). Furthermore, we simulated imperfect contextual awareness by introducing prediction errors with different costs to correct them. Our results provide valuable lessons about the trade-offs between UI automation levels, controllability, user agency, and the impact of prediction errors.
VRception: Rapid Prototyping of Cross-Reality Systems in Virtual Reality
Uwe Gruenefeld (University of Duisburg-Essen, Essen, Germany), Jonas Auda (University of Duisburg-Essen, Essen, Germany), Florian Mathis (University of Glasgow, Glasgow, United Kingdom), Stefan Schneegass (University of Duisburg-Essen, Essen, Germany), Mohamed Khamis (University of Glasgow, Glasgow, United Kingdom), Jan Gugenheimer (Institut Polytechnique de Paris, Paris, France), Sven Mayer (LMU Munich, Munich, Germany)
Cross-reality systems empower users to transition along the reality-virtuality continuum or collaborate with others experiencing different manifestations of it. However, prototyping these systems is challenging, as it requires sophisticated technical skills, time, and often expensive hardware. We present VRception, a concept and toolkit for quick and easy prototyping of cross-reality systems. By simulating all levels of the reality-virtuality continuum entirely in Virtual Reality, our concept overcomes the asynchronicity of realities, eliminating technical obstacles. Our VRception Toolkit leverages this concept to allow rapid prototyping of cross-reality systems and easy remixing of elements from all continuum levels. We replicated six cross-reality papers using our toolkit and presented them to their authors. Interviews with them revealed that our toolkit sufficiently replicates their core functionalities and allows quick iterations. Additionally, remote participants used our toolkit in pairs to collaboratively implement prototypes in about eight minutes that they would have otherwise expected to take days.
Why Did You/I Read but Not Reply? IM Users’ Unresponded Read-Receipt Practices and Explanations
Yu-Ling Chou (National Tsing Hua University, Hsinchu, Taiwan), Yi-Hsiu Lin (National Yang Ming Chiao Tung University, Hsinchu, Taiwan), Tzu-Yi Lin (Department of Computer Science, Hsinchu, Taiwan), Hsin Ying You (National Yang Ming Chiao Tung University, Hsinchu, Taiwan), Yung-Ju Chang (National Yang Ming Chiao Tung University, Hsinchu, Taiwan)
We investigate instant-messaging (IM) users’ sense-making and practices around read-receipts: a feature of IM apps for supporting the awareness of turn-taking, i.e., whether a message recipient has read a message. Using a grounded-theory approach, we highlight the importance of five contextual factors – situational, relational, interactional, conversational, and personal – that shape the variety of IM users’ sense-making about read-receipts and strategies for utilizing them in different settings. This approach yields a 21-part typology comprising five types of senders’ speculation about why their messages with read-receipts have not been answered; eight types of recipients’ causes/reasons behind such non-response; and four types of senders’ and recipients’ subsequent strategies, respectively. Mismatches between senders’ speculations about un-responded-to read-receipted messages (URRMs) and recipients’ self-reported explanations are also discussed as sources of communicative friction. The findings reveal that, beyond indicating turn-taking, read-receipts have been leveraged as a strategic tool for various purposes in interpersonal relations.
FaceOri: Tracking Head Position and Orientation Using Ultrasonic Ranging on Earphones
Yuntao Wang (Tsinghua University, Beijing, China)Jiexin Ding (Tsinghua University, Beijing, China)Ishan Chatterjee (University of Washington, Seattle, Washington, United States)Farshid Salemi Parizi (University of Washington, Seattle, Washington, United States)Yuzhou Zhuang (Tsinghua University, Beijing, China)Yukang Yan (Tsinghua University, Beijing, China)Shwetak Patel (University of Washington, Seattle, Washington, United States)Yuanchun Shi (Tsinghua University, Beijing, China)
Face orientation can often indicate users’ intended interaction target. In this paper, we propose FaceOri, a novel face tracking technique based on acoustic ranging using earphones. FaceOri leverages the speaker on a commodity device to emit an ultrasonic chirp, which is picked up by the set of microphones on the user’s earphones and processed to calculate the distance from each microphone to the device. These measurements are used to derive the user’s face orientation and distance with respect to the device. We conduct a ground-truth comparison and user study to evaluate FaceOri’s performance. The results show that the system can determine whether the user is oriented toward the device with 93.5% accuracy within a 1.5 m range. Furthermore, FaceOri can continuously track the user’s head orientation with a median absolute error of 10.9 mm in distance, 3.7° in yaw, and 5.8° in pitch. FaceOri allows for convenient hands-free control of devices and supports more intelligent context-aware interaction.
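As background, acoustic ranging of this kind typically recovers distance from the time of flight of a known chirp. The sketch below is a generic illustration of that idea, not the paper's implementation: it cross-correlates a received signal with the emitted chirp template and converts the delay of the strongest peak into a one-way distance (assuming the emission time is known). All names and parameters are illustrative.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at ~20 °C (assumed)
FS = 48_000             # sample rate in Hz (assumed)

def estimate_distance(received, chirp, fs=FS):
    """Estimate speaker-to-microphone distance from the delay of the
    strongest correlation peak; assumes emission starts at sample 0."""
    corr = np.correlate(received, chirp, mode="valid")
    delay_samples = int(np.argmax(np.abs(corr)))
    return SPEED_OF_SOUND * delay_samples / fs

# Synthetic check: a 5 ms linear chirp (18–22 kHz) embedded at a delay
# corresponding to roughly 0.5 m of travel.
t = np.arange(0, 0.005, 1 / FS)
chirp = np.sin(2 * np.pi * (18_000 * t + (4_000 / 0.005 / 2) * t**2))
delay = int(0.5 / SPEED_OF_SOUND * FS)
rx = np.zeros(delay + len(chirp) + 100)
rx[delay:delay + len(chirp)] = chirp
print(estimate_distance(rx, chirp))  # close to 0.5 m
```

A real system would additionally have to estimate the emission time (e.g., via a reference microphone) and handle multipath reflections.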
Digital Fabrication of Pneumatic Actuators with Integrated Sensing by Machine Knitting
Yiyue Luo (Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States)Kui Wu (MIT, Cambridge, Massachusetts, United States)Andrew Spielberg (Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States)Michael Foshey (MIT, Cambridge, Massachusetts, United States)Tomás Palacios (Massachusetts Institute of Technology, Cambridge, Massachusetts, United States)Daniela Rus (Massachusetts Institute of Technology, Cambridge, Massachusetts, United States)Wojciech Matusik (MIT, Cambridge, Massachusetts, United States)
Soft actuators with integrated sensing have shown utility in a variety of applications such as assistive wearables, robotics, and interactive input devices. Despite their promise, these actuators can be difficult to both design and fabricate. As a solution, we present a workflow for computationally designing and digitally fabricating soft pneumatic actuators via a machine knitting process. Machine knitting is attractive as a fabrication process because it is fast, digital (programmable), and provides access to a rich material library of functional yarns for specified mechanical behavior and integrated sensing. Our method uses elastic stitches to construct non-homogeneous knitting structures, which program the bending of actuators when inflated. Our method also integrates pressure and swept frequency capacitive sensing structures using conductive yarns. The entire knitted structure is fabricated automatically in a single machine run. We further provide a computational design interface for the user to interactively preview actuators’ quasi-static shape when authoring elastic stitches. Our sensing-integrated actuators are cost-effective, easy to design, robust to large actuation, and require minimal manual post-processing. We demonstrate five use-cases of our actuators in relevant application settings.
ElectriPop: Low-Cost, Shape-Changing Displays Using Electrostatically Inflated Mylar Sheets
Cathy Mengying Fang (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Jianzhe Gu (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Lining Yao (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Chris Harrison (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
We describe how sheets of metalized mylar can be cut and then “inflated” into complex 3D forms with electrostatic charge for use in digitally-controlled, shape-changing displays. This is achieved by placing and nesting various cuts, slits and holes such that mylar elements repel from one another to reach an equilibrium state. Importantly, our technique is compatible with industrial and hobbyist cutting processes, from die and laser cutting to handheld exacto-knives and scissors. Given that mylar film costs <$1 per m^2, we can create self-actuating 3D objects for just a few cents, opening new uses in low-cost consumer goods. We describe a design vocabulary, interactive simulation tool, fabrication guide, and proof-of-concept electrostatic actuation hardware. We detail our technique's performance metrics along with qualitative feedback from a design study. We present numerous examples generated using our pipeline to illustrate the rich creative potential of our method.
Mindsets Matter: How Beliefs About Facebook Moderate the Association Between Time Spent and Well-Being
Sindhu Kiranmai Ernala (Georgia Institute of Technology, Atlanta, Georgia, United States)Moira Burke (Facebook, Menlo Park, California, United States)Alex Leavitt (Facebook, Menlo Park, California, United States)Nicole B. Ellison (University of Michigan, Ann Arbor, Michigan, United States)
“Time spent on platform” is a widely used measure in many studies examining social media use and well-being, yet the current literature presents unresolved findings about the relationship between time on platform and well-being. In this paper, we consider the moderating effect of people’s mindsets about social media — whether they think a platform is good or bad for themselves and for society more generally. Combining survey responses from 29,284 participants in 15 countries with server-logged data of Facebook use, we found that when people thought that Facebook was good for them and for society, time spent on the platform was not significantly associated with well-being. Conversely, when they thought Facebook was bad, greater time spent was associated with lower well-being. On average, there was a small, negative correlation between time spent and well-being, though the causal direction is not known. Beliefs had a stronger moderating relationship when time-spent measures were self-reported rather than coming from server logs. We discuss potential mechanisms for these results and implications for future research on well-being and social media use.
SonarID: Using Sonar to Identify Fingers on a Smartwatch
Jiwan Kim (UNIST, Ulsan, Korea, Republic of)Ian Oakley (UNIST, Ulsan, Korea, Republic of)
The diminutive size of wrist wearables has prompted the design of many novel input techniques to increase expressivity. Finger identification, or assigning different functionality to different fingers, has been frequently proposed. However, while the value of the technique seems clear, its implementation remains challenging, often relying on external devices (e.g., worn magnets) or explicit instructions. Addressing these limitations, this paper explores a novel approach to natural and unencumbered finger identification on an unmodified smartwatch: sonar. To do this, we adapt an existing finger tracking smartphone sonar implementation---rather than extract finger motion, we process raw sonar fingerprints representing the complete sonar scene recorded during a touch. We capture data from 16 participants operating a smartwatch and use their sonar fingerprints to train a deep learning recognizer that identifies taps by the thumb, index, and middle fingers with an accuracy of up to 93.7%, sufficient to support meaningful application development.
The Dark Side of Perceptual Manipulations in Virtual Reality
Wen-Jie Tseng (Institut Polytechnique de Paris, Paris, France)Elise Bonnail (Télécom Paris, Palaiseau, France)Mark McGill (University of Glasgow, Glasgow, Lanarkshire, United Kingdom)Mohamed Khamis (University of Glasgow, Glasgow, United Kingdom)Eric Lecolinet (Institut Polytechnique de Paris, Paris, France)Samuel Huron (Télécom Paristech, Université Paris-Saclay, Paris, France)Jan Gugenheimer (Institut Polytechnique de Paris, Paris, France)
“Virtual-Physical Perceptual Manipulations” (VPPMs) such as redirected walking and haptics expand the user’s capacity to interact with Virtual Reality (VR) beyond what would ordinarily be physically possible. VPPMs leverage knowledge of the limits of human perception to effect changes in the user’s physical movements, perceptibly and imperceptibly nudging their physical actions to enhance interactivity in VR. We explore the risks posed by the malicious use of VPPMs. First, we define, conceptualize, and demonstrate the existence of VPPMs. Next, using speculative design workshops, we explore and characterize the threats and risks posed, proposing mitigations and preventative recommendations against the malicious use of VPPMs. Finally, we implement two sample applications to demonstrate how existing VPPMs could be trivially subverted to create the potential for physical harm. This paper aims to raise awareness that the current way we apply and publish VPPMs can lead to malicious exploits of our perceptual vulnerabilities.
STRAIDE: A Research Platform for Shape-Changing Spatial Displays based on Actuated Strings
Severin Engert (Technische Universität Dresden, Dresden, Germany)Konstantin Klamka (Technische Universität Dresden, Dresden, Germany)Andreas Peetz (Technische Universität Dresden, Dresden, Germany)Raimund Dachselt (Technische Universität Dresden, Dresden, Germany)
We present STRAIDE, a string-actuated interactive display environment that allows exploring the promising potential of shape-changing interfaces for casual visualizations. At its core, we envision a platform that spatially levitates elements to create dynamic visual shapes in space. We conceptualize this type of tangible mid-air display and discuss its multifaceted design dimensions. Through a design exploration, we realize a physical research platform with adjustable parameters and modular components. For conveniently designing and implementing novel applications, we provide developer tools ranging from graphical emulators to in-situ augmented reality representations. To demonstrate STRAIDE’s reconfigurability, we further introduce three representative physical setups as a basis for situated applications, including ambient notifications, personal smart home controls, and entertainment. They serve as a technical validation, lay the foundation for a discussion with developers that yielded valuable insights, and encourage ideas for future uses of this type of appealing interactive installation.
Ga11y: an Automated GIF Annotation System for Visually Impaired Users
Mingrui Ray Zhang (University of Washington, Seattle, Washington, United States)Mingyuan Zhong (University of Washington, Seattle, Washington, United States)Jacob O. Wobbrock (University of Washington, Seattle, Washington, United States)
Animated GIF images have become prevalent in internet culture, often used to express richer and more nuanced meanings than static images. But animated GIFs often lack adequate alternative text descriptions, and it is challenging to generate such descriptions automatically, resulting in inaccessible GIFs for blind or low-vision (BLV) users. To improve the accessibility of animated GIFs for BLV users, we provide a system called Ga11y (pronounced “galley”) for creating GIF annotations. Ga11y combines the power of machine intelligence and crowdsourcing and has three components: an Android client for submitting annotation requests, a backend server and database, and a web interface where volunteers can respond to annotation requests. We evaluated three human annotation interfaces and employed the one that yielded the best annotation quality. We also conducted a multi-stage evaluation with 12 BLV participants from the United States and China, receiving positive feedback.
Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces
Ryo Suzuki (University of Calgary, Calgary, Alberta, Canada)Adnan Karim (University of Calgary, Calgary, Alberta, Canada)Tian Xia (University of Calgary, Calgary, Alberta, Canada)Hooman Hedayati (University of Colorado Boulder, Boulder, Colorado, United States)Nicolai Marquardt (University College London, London, United Kingdom)
This paper contributes to a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, often research remains focused on individual explorations and key design strategies, and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field in the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.
Janus Screen: A Screen with Switchable Projection Surfaces Using Wire Grid Polarizer
Wataru Yamada (NTT DOCOMO, INC., Tokyo, Japan)Sawa Korogi (NTT DOCOMO, INC., Tokyo, Japan)Keiichi Ochiai (NTT DOCOMO, INC., Tokyo, Japan)
In this paper, we present a novel screen system that employs polarizers to switch the projection surface to the front, rear, or both sides using only two projectors on one side. In this system, we propose a method that employs two projectors equipped with polarizers and a multi-layered screen comprising an anti-reflective plate, a transparent screen, and a wire grid polarizer. The multi-layered screen changes whether the projected image is shown on the front or rear side of the screen depending on the polarization direction of the incident light. Hence, the proposed method can project images on the front, rear, or both sides of the screen by projecting images from either or both projectors using polarizers. In addition, the proposed method can be easily deployed by simply attaching multiple optical films. We implement a prototype and confirm that the proposed method can selectively switch the projection surface.
Exploring Perceptions of Cross-Sectoral Data Sharing with People with Parkinson’s
Roisin McNaney (Monash University, Melbourne, Australia)Catherine Morgan (University of Bristol, Bristol, United Kingdom)Pranav Kulkarni (Monash University, Melbourne, VIC, Australia)Julio Vega (University of Pittsburgh, Pittsburgh, Pennsylvania, United States)Farnoosh Heidarivincheh (University of Bristol, Bristol, United Kingdom)Ryan McConville (University of Bristol, Bristol, United Kingdom)Alan Whone (University of Bristol, Bristol, United Kingdom)Mickey Kim (University of Bristol, Bristol, United Kingdom)Reuben Kirkham (Monash University, Melbourne, Australia)Ian Craddock (University of Bristol, Bristol, United Kingdom)
In interdisciplinary spaces such as digital health, datasets that are complex to collect, require specialist facilities, and/or are collected with specific populations have value in a range of different sectors. In this study we collected a simulated free-living dataset, in a smart home, with 12 participants (six people with Parkinson’s, six carers). We explored their initial perceptions of the sensors through interviews and then conducted two data exploration workshops, wherein we showed participants the collected data and discussed their views on how this data, and other data relating to their Parkinson’s symptoms, might be shared across different sectors. We provide recommendations around how participants might be better engaged in considering data sharing in the early stages of research, and guidance for how research might be configured to allow for more informed data sharing practices in the future.
A Little Too Personal: Effects of Standardization versus Personalization on Job Acquisition, Work Completion, and Revenue for Online Freelancers
Jane Hsieh (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Yili Hong (University of Houston, Houston, Texas, United States)Gordon Burtch (Boston University, Boston, Massachusetts, United States)Haiyi Zhu (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
As more individuals consider permanently working from home, the online labor market continues to grow as an alternative working environment. While the flexibility and autonomy of these online gigs attracts many workers, success depends critically upon self-management and workers' efficient allocation of scarce resources. To achieve this, freelancers may develop alternative work strategies, employing highly standardized schedules and communication patterns while taking on large work volumes, or engaging in smaller numbers of jobs whilst tailoring their activities to build relationships with individual employers. In this study, we consider this contrast in relation to worker communication patterns. We demonstrate the heterogeneous effects of standardization versus personalization across different stages of a project and examine the relative impact on job acquisition, project completion, and earnings. Our findings can inform the design of platforms and various worker support tools for the gig economy.
Understanding Gesture Input Articulation with Upper-Body Wearables for Users with Upper-Body Motor Impairments
Radu-Daniel Vatavu (Ștefan cel Mare University of Suceava, Suceava, Romania)Ovidiu-Ciprian Ungurean (Ștefan cel Mare University of Suceava, Suceava, Romania)
We examine touchscreen stroke-gestures and mid-air motion-gestures articulated by users with upper-body motor impairments with devices worn on the wrist, finger, and head. We analyze users' gesture input performance in terms of production time, articulation consistency, and kinematic measures, and contrast the performance of users with upper-body motor impairments with that of a control group of users without impairments. Our results, from two datasets of 7,290 stroke-gestures and 3,809 motion-gestures collected from 28 participants, reveal that users with upper-body motor impairments take twice as much time to produce stroke-gestures on wearable touchscreens compared to users without impairments, but articulate motion-gestures equally fast and with similar acceleration. We interpret our findings in the context of ability-based design and propose ten implications for accessible gesture input with upper-body wearables for users with upper-body motor impairments.
Understanding How People with Limited Mobility Use Multi-Modal Input
Johann Wentzel (University of Waterloo, Waterloo, Ontario, Canada)Sasa Junuzovic (Microsoft Research, Redmond, Washington, United States)James Devine (Microsoft Research, Cambridge, Cambridgeshire, United Kingdom)John R. Porter (Microsoft, LYNNWOOD, Washington, United States)Martez E. Mott (Microsoft Research, Redmond, Washington, United States)
People with limited mobility often use multiple devices when interacting with computing systems, but little is known about the impact these multi-modal configurations have on daily computing use. A deeper understanding of the practices, preferences, obstacles, and workarounds associated with accessible multi-modal input can uncover opportunities to create more accessible computer applications and hardware. We explored how people with limited mobility use multi-modality through a three-part investigation grounded in the context of video games. First, we surveyed 43 people to learn about their preferred devices and configurations. Next, we conducted semi-structured interviews with 14 participants to understand their experiences and challenges with using, configuring, and discovering input setups. Lastly, we performed a systematic review of 74 YouTube videos to illustrate and categorize input setups and adaptations in-situ. We conclude with a discussion on how our findings can inform future accessibility research for current and emerging computing technologies.
Electrical Head Actuation: Enabling Interactive Systems to Directly Manipulate Head Orientation
Yudai Tanaka (University of Chicago, Chicago, Illinois, United States)Jun Nishida (University of Chicago, Chicago, Illinois, United States)Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose a novel interface concept in which interactive systems directly manipulate the user’s head orientation. We implement this using electrical-muscle-stimulation (EMS) of the neck muscles, which turns the head around its yaw (left/right) and pitch (up/down) axis. As the first exploration of EMS for head actuation, we characterized which muscles can be robustly actuated. Second, we evaluated the accuracy of our system for actuating participants' head orientation towards static targets and trajectories. Third, we demonstrated how it enables interactions not possible before by building a range of applications, such as (1) synchronizing head orientations of two users, which enables a user to communicate head nods to another user while listening to music, and (2) directly changing the user's head orientation to locate objects in AR. Finally, in our second study, participants felt that our head actuation contributed positively to their experience in four distinct applications.
Mold-It: Understanding how Physical Shapes affect Interaction with Handheld Freeform Devices
Marcos Serrano (IRIT - Elipse, Toulouse, France)Jolee Finch (University of Bristol, Bristol, United Kingdom)Pourang Irani (University of Manitoba, Winnipeg, Manitoba, Canada)Andrés Lucero (Aalto University, Espoo, Finland)Anne Roudaut (University of Bristol, Bristol, United Kingdom)
Advanced technologies increasingly enable the creation of interactive devices with non-rectangular form factors, but it is currently unclear what alternative form factors are desirable for end users. We contribute an understanding of the interplay between the rationale for the form factors of such devices and their interactive content through think-aloud design sessions in which participants could mold devices as they wished using clay. We analysed their qualitative reflections on how the shapes affected interaction. Using thematic analysis, we identified shape features desirable on handheld freeform devices and discuss the particularity of three themes central to the choice of form factors: freeform dexterity, shape-feature discoverability, and shape adaptability (to the task and context). In a second study following the same experimental set-up, we focused on the trade-off between dexterity and discoverability and its relation to the concept of affordance. Our work reveals the shape features that most impact the choice of grasps on freeform devices, from which we derive guidelines for the design of such devices.
At-home Pupillometry using Smartphone Facial Identification Cameras
Colin Barry (University of California, San Diego, La Jolla, California, United States)Jessica de Souza (UCSD, La Jolla, California, United States)Yinan Xuan (University of California San Diego, La Jolla, California, United States)Jason Holden (University of California, San Diego, La Jolla, California, United States)Eric Granholm (University of California, San Diego, La Jolla, California, United States)Edward Jay Wang (University of California, San Diego, San Diego, California, United States)
With recent developments in medical and psychiatric research surrounding pupillary response, cheap and accessible pupillometers could enable medical benefits from early neurological disease detection to measurements of cognitive load. In this paper, we introduce a novel smartphone-based pupillometer to allow for future development in clinical research surrounding at-home pupil measurements. Our solution utilizes the near-infrared (NIR) front-facing camera for facial recognition paired with the RGB selfie camera to track absolute pupil dilation with sub-millimeter accuracy. In comparison to a gold-standard pupillometer during a pupillary light reflex test, the smartphone-based system achieves a median MAE of 0.27 mm for absolute pupil dilation tracking and a median error of 3.52% for pupil dilation change tracking. Additionally, we remotely deployed the system to older adults as part of a usability study that demonstrates promise for future smartphone deployments to remotely collect data from older, inexperienced adult users operating the system themselves.
Neo: Generalizing Confusion Matrix Visualization to Hierarchical and Multi-Output Labels
Jochen Görtler (University of Konstanz, Konstanz, Germany)Fred Hohman (Apple, Seattle, Washington, United States)Dominik Moritz (Apple, Pittsburgh, Pennsylvania, United States)Kanit Wongsuphasawat (Apple, Seattle, Washington, United States)Donghao Ren (Apple, Seattle, Washington, United States)Rahul Nair (Apple, Heidelberg, Germany)Marc Kirchner (Apple, Heidelberg, Germany)Kayur Patel (Apple, Seattle, Washington, United States)
The confusion matrix, a ubiquitous visualization for helping people evaluate machine learning models, is a tabular layout that compares predicted class labels against actual class labels over all data instances. We conduct formative research with machine learning practitioners at Apple and find that conventional confusion matrices do not support more complex data-structures found in modern-day applications, such as hierarchical and multi-output labels. To express such variations of confusion matrices, we design an algebra that models confusion matrices as probability distributions. Based on this algebra, we develop Neo, a visual analytics system that enables practitioners to flexibly author and interact with hierarchical and multi-output confusion matrices, visualize derived metrics, renormalize confusions, and share matrix specifications. Finally, we demonstrate Neo's utility with three model evaluation scenarios that help people better understand model performance and reveal hidden confusions.
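For readers unfamiliar with the tabular layout being generalized, a conventional (flat-label, single-output) confusion matrix can be computed in a few lines. The labels and data below are illustrative only, not drawn from the paper:

```python
from collections import Counter

def confusion_matrix(actual, predicted, labels):
    """Tabulate predicted vs. actual class labels over all instances.
    Rows index the actual label, columns the predicted label."""
    counts = Counter(zip(actual, predicted))
    return [[counts[(a, p)] for p in labels] for a in labels]

# hypothetical example data
actual    = ["cat", "dog", "cat", "bird", "dog"]
predicted = ["cat", "cat", "cat", "bird", "dog"]
matrix = confusion_matrix(actual, predicted, ["bird", "cat", "dog"])
# diagonal cells are correct predictions; off-diagonal cells are confusions
```

Neo's contribution lies in going beyond this flat layout, e.g., to hierarchical label taxonomies and instances carrying multiple labels, which this simple tabulation cannot express.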
Squeezy-Feely: Investigating Lateral Thumb-Index Pinching as an Input Modality
Martin Schmitz (Technical University of Darmstadt, Darmstadt, Germany)Sebastian Günther (TU Darmstadt, Darmstadt, Germany)Dominik Schön (TU Darmstadt, Darmstadt, Germany)Florian Müller (LMU, Munich, Germany)
From zooming on smartphones and mid-air gestures to deformable user interfaces, thumb-index pinching grips are used in many interaction techniques. However, there is still a lack of systematic understanding of how the accuracy and efficiency of such grips are affected by various factors such as counterforce, grip span, and grip direction. Therefore, in this paper, we contribute an evaluation (N = 18) of thumb-index pinching performance in a visual targeting task using scales up to 75 items. As part of our findings, we conclude that the pinching interaction between the thumb and index finger is a promising modality also for one-dimensional input on higher scales. Furthermore, we discuss and outline implications for future user interfaces that benefit from pinching as an additional and complementary interaction modality.
ReCompFig: Designing Dynamically Reconfigurable Kinematic Devices Using Compliant Mechanisms and Tensioning Cables
Humphrey Yang (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Tate Johnson (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Ke Zhong (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Dinesh K. Patel (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Gina Olson (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Carmel Majidi (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Mohammad Islam (Materials Science and Engineering, Pittsburgh, Pennsylvania, United States)Lining Yao (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
From creating input devices to rendering tangible information, the field of HCI is interested in using kinematic mechanisms to create human-computer interfaces. Yet, due to fabrication and design challenges, it is often difficult to create kinematic devices that are compact and have multiple reconfigurable motional degrees of freedom (DOFs) depending on the interaction scenario. In this work, we combine compliant mechanisms (CMs) with tensioning cables to create dynamically reconfigurable kinematic mechanisms. The devices’ kinematics (DOFs) are enabled and determined by the layout of bendable rods. The additional cables function as on-demand motion constraints that can dynamically lock or unlock the mechanism’s DOFs as they are tightened or loosened. We provide algorithms and a design-tool prototype to help users design such kinematic devices. We also demonstrate various HCI use cases, including a kinematic haptic display, a haptic proxy, and a multimodal input device.
(Re)discovering the Physical Body Online: Strategies and Challenges to Approach Non-Cisgender Identity in Social Virtual Reality
Guo Freeman (Clemson University, Clemson, South Carolina, United States)Divine Maloney (Clemson University, Clemson, South Carolina, United States)Dane Acena (Clemson University, Clemson, South Carolina, United States)Catherine Barwulor (Clemson University, Clemson , South Carolina, United States)
The contemporary understanding of gender continues to highlight the complexity and variety of gender identities beyond a binary dichotomy regarding one’s biological sex assigned at birth. The emergence and popularity of various online social spaces also make the digital presentation of gender even more sophisticated. In this paper, we use non-cisgender as an umbrella term to describe diverse gender identities that do not match people’s sex assigned at birth, including Transgender, Genderfluid, and Non-binary. We especially explore non-cisgender individuals’ identity practices and their challenges in novel social Virtual Reality (VR) spaces where they can present, express, and experiment with their identity in ways that traditional online social spaces cannot provide. We provide some of the first empirical evidence of how social VR platforms may introduce new and novel phenomena and practices of approaching diverse gender identities online. We also contribute to re-conceptualizing technology-supported identity practices by highlighting the role of (re)discovering the physical body online, and we inform the design of the emerging metaverse to support diverse gender identities in the future.
Bivariate Effective Width Method to Improve the Normalization Capability for Subjective Speed-accuracy Biases in Rectangular-target Pointing
Shota Yamanaka (Yahoo Japan Corporation, Tokyo, Japan)Hiroki Usuba (Meiji University, Nakano, Tokyo, Japan)Homei Miyashita (Meiji University, Tokyo, Japan)
The effective width method of Fitts' law can normalize speed-accuracy biases in 1D target pointing tasks. However, in graphical user interfaces, more meaningful target shapes are rectangular. To empirically determine the best way to normalize the subjective biases, we ran remote and crowdsourced user experiments with three speed-accuracy instructions. We propose to normalize the speed-accuracy biases by applying the effective sizes to existing Fitts' law formulations including width W and height H. We call this target-size adjustment the bivariate effective width method. We found that, overall, Accot and Zhai's weighted Euclidean model using the effective width and height independently showed the best fit to the data in which the three instruction conditions were mixed (i.e., the time data measured in all instructions were analyzed with a single regression expression). Our approach enables researchers to fairly compare two or more conditions (e.g., devices, input techniques, user groups) with the normalized throughputs.
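For context, the classic univariate effective width method (standard in Fitts' law evaluation, e.g., ISO 9241-9) replaces the nominal target width $W$ with an effective width derived from the observed spread of selection endpoints, and Accot and Zhai's weighted Euclidean model extends the index of difficulty to rectangular targets of width $W$ and height $H$. A sketch of the standard formulations this abstract builds on (the bivariate method substitutes effective sizes $W_e$ and $H_e$ into the second expression):

```latex
% univariate effective width, from the endpoint standard deviation \sigma_x
W_e = 4.133\,\sigma_x, \qquad
ID_e = \log_2\!\left(\frac{A}{W_e} + 1\right), \qquad
TP = \frac{ID_e}{MT}

% Accot--Zhai weighted Euclidean model for rectangular targets
ID = \log_2\!\left(\sqrt{\left(\frac{A}{W}\right)^{2}
      + \eta\left(\frac{A}{H}\right)^{2}} + 1\right)
```

Here $A$ is the movement amplitude, $MT$ the movement time, $TP$ the throughput, and $\eta$ a fitted weight for the height term.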
Style Blink: Exploring Digital Inking of Structured Information via Handcrafted Styling as a First-Class Object
Hugo Romat (Microsoft, Seattle, Washington, United States)Nicolai Marquardt (Microsoft Research, Redmond, Washington, United States)Ken Hinckley (Microsoft Research, Redmond, Washington, United States)Nathalie Henry Riche (Microsoft Research, Redmond, Washington, United States)
Structured note-taking forms such as sketchnoting, self-tracking journals, and bullet journaling go beyond immediate capture of information scraps. Instead, hand-drawn pride in craftsmanship increases perceived value for sharing and display. But hand-crafting lists, tables, and calendars is tedious and repetitive. To support these practices digitally, Style Blink (“Style-Blocks+Ink”) explores handcrafted styling as a first-class object. Style-blocks encapsulate digital ink, enabling people to craft, modify, and reuse embellishments and decorations for larger structures, and to apply custom layouts. For example, we provide interaction instruments that style ink for personal expression, inking palettes that afford creative experimentation, fillable pens that can be “loaded” with commands and actions to replace menu selections, and techniques to customize inked structures post-creation by modifying the underlying handcrafted style-blocks and to re-lay out the overall structure to match users' preferred template. In effect, any ink stroke, notation, or sketch can be encapsulated as a style-object and re-purposed as a tool. Feedback from 13 users shows the potential of style adaptation and re-use in individual sketching practices.
Design of Digital Workplace Stress-Reduction Intervention Systems: Effects of Intervention Type and Timing
Esther Howe (Microsoft Research, Redmond, Washington, United States)Jina Suh (Microsoft Research, Redmond, Washington, United States)Mehrab Bin Morshed (Microsoft Research, Redmond, Washington, United States)Daniel McDuff (Microsoft, Seattle, Washington, United States)Kael Rowan (Microsoft Research, Redmond, Washington, United States)Javier Hernandez (Microsoft Research, Cambridge, Massachusetts, United States)Marah Ihab Abdin (Microsoft Research, Redmond, Washington, United States)Gonzalo Ramos (Microsoft Research, Kirkland, Washington, United States)Tracy Tran (Microsoft Research, Redmond, Washington, United States)Mary P. Czerwinski (Microsoft Research, Redmond, Washington, United States)
Workplace stress-reduction interventions have produced mixed results due to engagement and adherence barriers. Leveraging technology to integrate such interventions into the workday may address these barriers and help mitigate the mental, physical, and monetary effects of workplace stress. To inform the design of a workplace stress-reduction intervention system, we conducted a four-week longitudinal study with 86 participants, examining the effects of intervention type and timing on usage, stress-reduction impact, and user preferences. We compared three intervention types and two delivery-timing conditions: Pre-scheduled (PS) by users, and Just-in-time (JIT), prompted by system-identified user stress levels. We found that JIT participants completed significantly more interventions than PS participants, but post-intervention and study-long stress reduction did not differ significantly between conditions. Participants rated low-effort interventions highest, but high-effort interventions reduced the most stress. Participants felt JIT provided accountability but desired partial agency over timing. We present design implications for intervention type and timing.
The TAC Toolkit: Supporting Design for User Acceptance of Health Technologies from a Macro-Temporal Perspective
Camille Nadal (Trinity College Dublin, Dublin, Ireland)Shane McCully (Trinity College Dublin, Dublin, Ireland)Kevin Doherty (Technical University of Denmark, Copenhagen, Denmark)Corina Sas (Lancaster University, Lancaster, United Kingdom)Gavin Doherty (Trinity College Dublin, Dublin, Ireland)
User acceptance is key to the successful uptake and use of health technologies, but it is also impacted by numerous factors that are not always easily accessible or operationalised by designers in practice. This work seeks to facilitate the application of acceptance theory in design practice through the Technology Acceptance (TAC) toolkit: a novel theory-based design tool and method comprising 16 cards, 3 personas, 3 scenarios, a virtual think-space, and a website, which we evaluated through workshops conducted with 21 designers of health technologies. Findings showed that the toolkit revised and extended designers' knowledge of technology acceptance, fostered their appreciation, empathy, and ethical values while designing for acceptance, and contributed towards shaping their future design practice. We discuss implications for considering user acceptance a dynamic, multi-stage process in design practice, and for better supporting designers in imagining distant acceptance challenges. Finally, we examine the generative value of the TAC toolkit and its possible future evolution.
Designing Visuo-Haptic Illusions with Proxies in Virtual Reality: Exploration of Grasp, Movement Trajectory and Object Mass
Martin Feick (Saarland Informatics Campus, Saarbrücken, Germany)Kora Persephone Regitz (Saarland Informatics Campus, Saarbrücken, Germany)Anthony Tang (University of Toronto, Toronto, Ontario, Canada)Antonio Krüger (DFKI, Saarland Informatics Campus, Saarbrücken, Germany)
Visuo-haptic illusions are a method to expand proxy-based interactions in VR by introducing unnoticeable discrepancies between the virtual and real world. Yet how different design variables affect these illusions with proxies is still unclear. To unpack a subset of variables, we conducted two user studies with 48 participants, exploring the impact of (1) different grasping types and movement trajectories, and (2) different grasping types and object masses, on the discrepancy that may be introduced. Our Bayes analysis suggests that grasping types and object masses (≤ 500 g) did not noticeably affect the discrepancy, but for movement trajectory, results were inconclusive. Further, we identified a significant difference between unrestricted and restricted movement trajectories. Our data show considerable differences in participants’ proprioceptive accuracy, which seem to correlate with their prior VR experience. Finally, we illustrate the impact of our key findings on the visuo-haptic illusion design process by showcasing a new design workflow.
A Conversational Approach for Modifying Service Mashups in IoT Environments
Sanghoon Kim (Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, Republic of)In-Young Ko (Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, Republic of)
Although it is common for users to modify service mashups in Internet of Things (IoT) environments, existing conversational approaches for IoT service mashups do not support modification due to usability challenges. To support the modification of IoT service mashups through conversational interfaces in a usable manner, we propose the conversational mashup modification agent (CoMMA). Users can modify IoT service mashups with CoMMA through natural language conversations. CoMMA uses a two-step mashup modification interaction: an implicature-based localization step, and a modification step with a disambiguation strategy. The localization step allows users to easily search for a mashup by vocalizing their expressions in the environment. The modification step allows users to modify mashups by speaking simple modification commands. We conducted a user study, and the results show that CoMMA is as effective as visual approaches in terms of task completion time and perceived task workload for modifying IoT service mashups.