Acoustic levitation is an emerging technique that has found application in contactless assembly and dynamic displays. It uses precise phase control in an ultrasound transducer array to manage the positions and movements of multiple particles. Yet, keeping particles stable in mid-air is challenging, as unexpected drops disrupt the intended motion and positioning. Here, we present StableLev, a data-driven pipeline for the detection and amendment of instabilities in multi-particle levitation. We first curate a hybrid levitation dataset, blending optimized simulations with labels based on actual trajectory outcomes. We then design an AutoEncoder to detect anomalies in the simulated data, which correlate closely with observed particle drops. Finally, we reconstruct the acoustic field at anomaly regions to improve particle stability and experimentally demonstrate successful dynamic levitation for trajectories within our dataset. Our work provides new insights into multi-particle levitation and enhances its robustness, which will be valuable in a wide range of applications.
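The abstract does not give implementation details, but the anomaly-detection step lends itself to a small illustration. The sketch below is a minimal, generic reconstruction-error AutoEncoder over fixed-length windows of simulated levitation features; the window length, feature count, training setup, and the 3-sigma thresholding rule are all assumptions made for illustration, not StableLev's actual pipeline.

```python
# Minimal sketch (not the authors' code): reconstruction-error anomaly
# detection with an AutoEncoder over fixed-length windows of simulated
# levitation features (e.g., per-frame transducer phases or particle states).
import torch
import torch.nn as nn

WINDOW = 32      # assumed window length in frames (hypothetical)
FEATURES = 16    # assumed features per frame (hypothetical)

class WindowAutoEncoder(nn.Module):
    def __init__(self, dim=WINDOW * FEATURES, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                     nn.Linear(256, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, stable_windows, epochs=50, lr=1e-3):
    """Fit the AutoEncoder on windows labelled stable, so that a high
    reconstruction error later signals a likely instability."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(stable_windows), stable_windows)
        loss.backward()
        opt.step()
    return model

def anomaly_scores(model, windows):
    """Per-window mean squared reconstruction error."""
    with torch.no_grad():
        recon = model(windows)
        return ((recon - windows) ** 2).mean(dim=1)

if __name__ == "__main__":
    # Synthetic stand-in data; real inputs would come from the simulation.
    stable = torch.randn(512, WINDOW * FEATURES)
    test = torch.randn(64, WINDOW * FEATURES)
    model = train(WindowAutoEncoder(), stable)
    scores = anomaly_scores(model, test)
    threshold = scores.mean() + 3 * scores.std()   # assumed rule of thumb
    flagged = (scores > threshold).nonzero().flatten()
    print("windows flagged as potential drops:", flagged.tolist())
```

The idea is that an AutoEncoder fitted on stable windows reconstructs them well, so windows with high reconstruction error are flagged as likely instabilities, which the paper then addresses by reconstructing the acoustic field in those regions.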
Attempting to make sense of a phenomenon or crisis, social media users often share data visualizations and interpretations that can be erroneous or misleading. Prior work has studied how data visualizations can mislead, but do misleading visualizations reach a broad social media audience? And if so, do users amplify or challenge misleading interpretations? To answer these questions, we conducted a mixed-methods analysis of the public's engagement with data visualization posts about COVID-19 on Twitter. Our results show that, compared to posts with accurate visual insights, posts with misleading visualizations garner more replies in which audiences point out nuanced fallacies and caveats in the data interpretations. Based on the results of our thematic analysis of engagement, we identify and discuss important opportunities and limitations of effectively leveraging crowdsourced assessments to address data-driven misinformation.
The message a designer wants to convey plays a pivotal role in directing the design of an infographic, yet most authoring workflows start with creating the visualizations or graphics first, without gauging whether they fit the message. To address this gap, we propose Epigraphics, a web-based authoring system that treats an "epigraph" as the first-class object and uses it to guide infographic asset creation, editing, and syncing. The system uses the text-based message to recommend visualizations, graphics, data filters, color palettes, and animations. It further supports between-asset interactions and fine-tuning such as recoloring, highlighting, and animation syncing that enhance the aesthetic cohesiveness of the assets. A gallery and case studies show that our system can produce infographics inspired by existing popular ones, and a task-based usability study with 10 designers shows that a text-sourced workflow can standardize content, empower users to think more about the big picture, and facilitate rapid prototyping.
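Epigraphics' recommendation logic is not described in this abstract; purely as a toy illustration of deriving visualization suggestions from an epigraph, the snippet below matches keywords in the message to chart types. Every rule here is invented for the sketch and is not the system's actual algorithm.

```python
# Toy illustration only: a keyword-based recommender for chart types given an
# "epigraph" (the text message an infographic should convey). These rules are
# invented for this sketch and are not Epigraphics' actual recommendation logic.
RULES = [
    ({"over time", "trend", "since", "per year"}, "line chart"),
    ({"share", "percentage", "proportion", "out of"}, "part-to-whole chart"),
    ({"compare", "versus", "more than", "top"}, "bar chart"),
    ({"relationship", "correlat", "associated"}, "scatter plot"),
]

def recommend(epigraph: str) -> list[str]:
    """Return chart types whose keyword sets appear in the epigraph."""
    text = epigraph.lower()
    hits = [chart for keywords, chart in RULES
            if any(k in text for k in keywords)]
    return hits or ["big-number callout"]   # fallback when nothing matches

print(recommend("Global plastic waste has tripled since 1990"))
# -> ['line chart']
```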
Data physicalizations have gained prominence across domains, but their environmental impact has been largely overlooked. This work addresses this gap by investigating the interplay between sustainability and physicalization practices. We conducted interviews with experts from diverse backgrounds, followed by a survey to gather insights into how they approach physicalization projects and reflect on sustainability. Our thematic analysis revealed sustainability considerations throughout the entire physicalization life cycle — a framework that encompasses various stages in a physicalization's existence. Notably, we found no single agreed-upon definition for sustainable physicalizations, highlighting the complexity of integrating sustainability into physicalization practices. We outline sustainability challenges and strategies based on participants' experiences and propose the Sustainable Physicalization Practices (SuPPra) Matrix, providing a structured approach for designers to reflect on and enhance the environmental impact of their future physicalizations.
While visual channels (e.g., color, shape, size) have been explored for visualizing data in data physicalizations, there is a lack of understanding regarding how to encode data into physical material properties (e.g., roughness, hardness). This understanding is critical for ensuring data is correctly communicated and for potentially extending the channels and bandwidth available for encoding that data. We present a method to encode ordinal data into roughness, validated through user studies. In the first study, we identified just-noticeable differences in perceived roughness produced by this method. In the second study, we 3D-printed proofs of concept for five different multivariate physicalizations using this method. These physicalizations were qualitatively explored (N=10) to understand people's comprehension and impressions of the roughness channel. Our findings suggest that roughness can be used to encode certain types of data and that the context of the data can affect how people interpret the direction of the roughness mapping.
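As a rough illustration of mapping ordinal data onto a roughness channel while keeping adjacent levels distinguishable, the sketch below spaces per-level roughness amplitudes by a just-noticeable difference; the base amplitude, the JND value, and the amplitude parameterization are assumed for illustration and are not the paper's measured values or its printing model.

```python
# Illustrative sketch (values assumed, not the paper's measurements):
# map ordinal levels to surface-roughness amplitudes spaced by at least one
# just-noticeable difference (JND), so adjacent levels remain distinguishable.
BASE_AMPLITUDE_MM = 0.10   # assumed roughness amplitude for the lowest level
JND_MM = 0.08              # assumed just-noticeable difference in amplitude

def roughness_levels(n_levels: int) -> list[float]:
    """Return one roughness amplitude (mm) per ordinal level, low to high."""
    return [round(BASE_AMPLITUDE_MM + i * JND_MM, 3) for i in range(n_levels)]

def encode(values: list[int], n_levels: int) -> list[float]:
    """Map ordinal values in [0, n_levels) to their roughness amplitudes."""
    levels = roughness_levels(n_levels)
    return [levels[v] for v in values]

print(roughness_levels(5))    # [0.1, 0.18, 0.26, 0.34, 0.42]
print(encode([0, 2, 4], 5))   # amplitudes for three data points
```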