Households contain a variety of surfaces that are used in a number of activity contexts. As ambient technology becomes commonplace in our homes, it is only a matter of time before these surfaces become linked to computer systems for Household Surface Interaction (HSI). However, little is known about the user experience attached to HSI, and the potential acceptance of HSI within modern homes. To address this problem, we ran a mixed-methods user study with 39 participants to examine HSI using nine household surfaces and five common gestures (tap, press, swipe, drag, and pinch). We found that under the right conditions, surfaces with some amount of texture can enhance HSI. Furthermore, perceptions of good and poor user experience varied among participants across surface types, indicating individual preferences. We present findings and design considerations based on surface characteristics and the challenges that users anticipate with HSI within their homes.
A swarm of robots can accomplish more than the sum of its parts, and swarm systems will soon see increased use in applications ranging from tangible interfaces to search and rescue teams. However, effective human control of robot swarms has been shown to be more difficult than controlling a single robot, and swarm-specific interaction methodologies are relatively underexplored. Anticipating that even non-expert users will have more daily in-person encounters with different numbers of robots in the future, we present a user-defined set of control interactions for tabletop swarm robots derived from an elicitation study. We investigated the effects of the number of robots and their proximity on the user's interaction and found significant effects. For instance, participants varied between using 1-2 fingers, one hand, and both hands depending on the group size. We also provide general design guidelines such as preferred interaction modality, common strategies, and a high-agreement interaction set.
ProtoSpray is a fabrication method that combines 3D printing and spray coating to create interactive displays of arbitrary shapes. Our approach makes novel use of 3D-printed conductive channels to create base electrodes on 3D shapes. This is then combined with spraying active materials to produce illumination. We demonstrate the feasibility and benefits of this combined approach in 6 evaluations exploring differently shaped topologies. We analyze factors such as spray orientations, surface topologies, and printer resolutions to discuss how spray nozzles can be integrated into traditional 3D printers. We present a series of ProtoSprayed objects demonstrating how our technique goes beyond existing fabrication techniques by allowing the creation of displays on objects with curvatures as complex as a Möbius strip. Our work provides a platform to empower makers to use displays as a fabrication material.
We present Sprayable User Interfaces: room-sized interactive surfaces that contain sensor and display elements created by airbrushing functional inks. Since airbrushing is inherently mobile, designers can create large-scale user interfaces on complex 3D geometries where existing stationary fabrication methods fail. To enable Sprayable User Interfaces, we developed a novel design and fabrication pipeline that takes a desired user interface layout as input and automatically generates stencils for airbrushing the layout onto a physical surface. After fabricating stencils from cardboard or projecting stencils digitally, designers spray each layer with an airbrush, attach a microcontroller to the user interface, and the interface is ready to be used. Our technical evaluation shows that Sprayable User Interfaces work on various geometries and surface materials, such as porous stone and rough wood. We demonstrate our system with several application examples including interactive smart home applications on a wall and a soft leather sofa, an interactive smart city application, and interactive architecture in public office spaces.
The interactive, digital future with its seductive vision of Internet-of-Things connected sensors, actuators, and displays comes at a high cost in terms of both energy demands and the clutter it brings to the physical world. But what if such devices were made of materials that enabled them to self-power their interactive features? And what if those materials were directly used to build aesthetically pleasing environments and objects that met practical physical needs as well as digital ones? In this paper we introduce PV-Tiles, a novel material that closely couples photovoltaic energy-harvesting and light-sensing materials with digital interface components. We consider potential contexts, use cases, and light gestures surfaced through co-creation workshops, and present initial technological designs and prototypes. The work opens a new set of opportunities and collaborations between HCI and material science, offering technical and design pointers to accommodate and exploit the material's properties.