This study session has ended. Thank you for participating.
Programming offers new opportunities for visual art creation, but understanding and manipulating the abstract representations that make programming powerful can pose challenges for artists who are accustomed to manual tools and concrete visual interaction. We hypothesize that we can reduce these barriers through programming environments that link state to visual artwork output. We created Demystified Dynamic Brushes (DDB), a tool that bidirectionally links code, numerical data, and artwork across the programming interface and the execution environment – i.e., the artist's in-progress artwork. DDB automatically records stylus input as artists draw and stores a history of brush state and output in relation to the input. This structure enables artists to inspect current and past numerical input, state, and output and to control program execution through the direct selection of visual geometric elements in the drawing canvas. An observational study suggests that artists engage in program inspection when they can visually access geometric state information on the drawing canvas while drawing manually.
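A minimal sketch of the record-and-inspect idea, written in Python with hypothetical names (StylusSample, BrushHistory, etc. are ours, not DDB's API): each drawing step stores the stylus input, the brush state, and the geometry it produced, so the record can later be retrieved by selecting output geometry on the canvas.

```python
from dataclasses import dataclass

@dataclass
class StylusSample:
    x: float
    y: float
    pressure: float

@dataclass
class BrushRecord:
    """One execution step: input sample, brush state, emitted geometry."""
    sample: StylusSample
    state: dict    # e.g., {"angle": 0.3, "size": 12}
    output: tuple  # geometry produced this step, here a line segment

class BrushHistory:
    def __init__(self):
        self.records = []

    def record(self, sample, state, output):
        # Copy the state dict so later mutation does not rewrite history.
        self.records.append(BrushRecord(sample, dict(state), output))

    def inspect_at(self, x, y, radius=5.0):
        """Return records whose output segment has an endpoint near a
        selected canvas point (a simplified hit test)."""
        def near(seg):
            (x0, y0), (x1, y1) = seg
            return min((x0 - x) ** 2 + (y0 - y) ** 2,
                       (x1 - x) ** 2 + (y1 - y) ** 2) <= radius ** 2
        return [r for r in self.records if near(r.output)]

hist = BrushHistory()
hist.record(StylusSample(0, 0, 0.5), {"angle": 0.0}, ((0, 0), (4, 3)))
hist.record(StylusSample(4, 3, 0.6), {"angle": 0.2}, ((4, 3), (9, 7)))
print(hist.inspect_at(9, 7))  # selecting a mark recovers its input and state
```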
Live programming is a regime in which the programming environment provides continual feedback, most often in the form of runtime values. In this paper, we present Projection Boxes, a novel visualization technique for displaying runtime values of programs. The key idea behind projection boxes is to start with a full semantics of the program, and then use projections to pick a subset of the semantics to display. By varying the projection used, projection boxes can encode both previously known visualization techniques and new ones. As such, projection boxes provide an expressive and configurable framework for displaying runtime information. Through a user study we demonstrate that (1) users find projection boxes and their configurability useful, (2) users are not distracted by the always-on visualization, and (3) a key driving force behind the need for a configurable visualization for live programming lies in the wide variation in programmer preferences.
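As a rough illustration of the projection idea (a sketch under our own assumptions, not the paper's implementation), one can record a "full semantics" as the trace of variable environments at every executed line, then apply a projection that restricts which variables and lines are displayed:

```python
import sys

def trace_program(fn, *args):
    """Record (line, {var: value}) pairs for every line executed in fn."""
    trace = []
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is fn.__code__:
            trace.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer
    sys.settrace(tracer)
    try:
        fn(*args)
    finally:
        sys.settrace(None)
    return trace

def project(trace, variables=None, lines=None):
    """A projection picks a subset of the semantics: restrict the trace to
    chosen variables and/or lines; different projections yield different
    visualizations."""
    return [(ln, {k: v for k, v in env.items()
                  if variables is None or k in variables})
            for ln, env in trace
            if lines is None or ln in lines]

def running_sum(n):
    total = 0
    for i in range(n):
        total += i
    return total

full = trace_program(running_sum, 4)
print(project(full, variables={"total"}))  # display only `total` per step
```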
Mobile robots and IoT (Internet of Things) devices can increase productivity, but only if they can be programmed by workers who understand the domain. This is especially true in manufacturing. Visual programming in the spatial context of the operating environment can enable mental models at a familiar level of abstraction. However, spatial-visual programming is still in its infancy; existing systems lack IoT integration and fundamental constructs, such as functions, that are essential for code reuse, encapsulation, and recursive algorithms. We present Vipo, a spatial-visual programming system for robot-IoT workflows. Vipo was designed with input from managers at six factories using mobile robots. Our user study (n=22) evaluated the efficiency, correctness, and comprehensibility of spatial-visual programming with functions.
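The abstract does not detail Vipo's internals, but the role of functions in such a workflow language can be sketched as follows (a Python stand-in with invented node names; the real system is visual and spatial): function definitions let a sequence of robot and IoT steps be named once and reused, including recursively.

```python
from dataclasses import dataclass

@dataclass
class MoveTo:       # robot drives to a named location on the factory map
    location: str

@dataclass
class TriggerIoT:   # e.g., open a smart door, start a conveyor
    device: str
    action: str

@dataclass
class Call:         # invoke a user-defined function by name
    name: str

def run(program, functions, log):
    """Interpret a list of workflow steps; Call steps enable reuse,
    encapsulation, and recursion."""
    for step in program:
        if isinstance(step, MoveTo):
            log.append(f"move_to({step.location})")
        elif isinstance(step, TriggerIoT):
            log.append(f"iot({step.device}, {step.action})")
        elif isinstance(step, Call):
            run(functions[step.name], functions, log)

functions = {
    "deliver": [MoveTo("assembly"), TriggerIoT("door_A", "open"),
                MoveTo("warehouse")],
}
log = []
run([Call("deliver"), Call("deliver")], functions, log)  # reuse via functions
print(log)
```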
Social robots vary in effectiveness when interacting with humans across interaction contexts. A robot programmed to escort individuals to a different location, for instance, may behave more appropriately in a crowded airport than in a quiet library, or vice versa. To address this variability, we exploit ideas from program synthesis and propose an approach to transforming the structure of hand-crafted interaction programs that uses user-scored execution traces as input, in which end users score their paths through the interaction based on their experience. Additionally, our approach guarantees that transformations to a program will not violate task and social expectations that must be maintained across contexts. We evaluated our approach by adapting a robot program to both real-world and simulated contexts and found evidence that making informed edits to the robot's program improves user experience.
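A toy rendition of the trace-guided selection step (our own simplification with hypothetical names; the paper's synthesis procedure is richer): score candidate program variants by the user ratings of matching execution traces, and discard any variant that breaks a required task or social constraint.

```python
def satisfies_constraints(program, constraints):
    """Each constraint is a predicate over the action sequence that must
    hold in every context (e.g., 'always greet before escorting')."""
    return all(check(program) for check in constraints)

def mean_score(program, scored_traces):
    """Average the user scores of traces this variant would produce; here a
    trace 'matches' if it equals the program's action sequence."""
    scores = [s for trace, s in scored_traces if trace == program]
    return sum(scores) / len(scores) if scores else float("-inf")

def best_variant(candidates, scored_traces, constraints):
    feasible = [p for p in candidates if satisfies_constraints(p, constraints)]
    return max(feasible, key=lambda p: mean_score(p, scored_traces))

greet_first = lambda p: bool(p) and p[0] == "greet"  # a social expectation
candidates = [
    ("greet", "escort", "farewell"),
    ("escort", "farewell"),                            # violates greet_first
    ("greet", "small_talk", "escort", "farewell"),
]
scored = [(("greet", "escort", "farewell"), 3),
          (("greet", "small_talk", "escort", "farewell"), 5)]
print(best_variant(candidates, scored, [greet_first]))
```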
Aggregate elements are ubiquitous in natural and man-made objects. Interactively authoring these elements with varying anisotropy and deformability can require high artistic skill and manual labor. To reduce input workload and enhance output quality, we present an autocomplete system that can help users distribute and align such elements over different domains. Through a brushing interface, users can place and mix a few elements, and let our system automatically populate more elements for the remaining output. Furthermore, aggregate elements often require suitable direction/scalar fields for proper arrangement, but fully specifying such fields across entire domains can be difficult or inconvenient for ordinary users. To address this usability challenge, we formulate element fields that can smoothly orient all the elements based on partial user specifications, without requiring full input fields at any step. We validate our prototype system with a pilot user study and show applications in design, collage, and modeling.
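One simple way to realize an element field from partial specifications (our own stand-in using inverse-distance weighting; the paper's formulation is more general) is to blend the few user-given directions smoothly over the domain, so every element gets an orientation without the user painting a full field:

```python
import math

def interpolate_direction(p, specs, power=2.0):
    """specs: list of ((x, y), angle_radians) partial user specifications.
    Returns a smoothly blended orientation angle at point p."""
    sx = sy = 0.0
    for (qx, qy), angle in specs:
        d2 = (p[0] - qx) ** 2 + (p[1] - qy) ** 2
        if d2 == 0.0:
            return angle               # exactly at a specification point
        w = 1.0 / d2 ** (power / 2.0)  # closer specs dominate
        sx += w * math.cos(angle)      # blend as unit vectors, not raw angles
        sy += w * math.sin(angle)
    return math.atan2(sy, sx)

# Two sparse specifications orient the whole strip between them.
specs = [((0.0, 0.0), 0.0), ((10.0, 0.0), math.pi / 2)]
for x in range(0, 11, 5):
    print(x, round(interpolate_direction((x, 2.0), specs), 3))
```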