This work investigates the integration of generative visual aids in human-robot task communication. We developed GenComUI, a system powered by large language models (LLMs) that dynamically generates contextual visual aids—such as map annotations, path indicators, and animations—to support verbal task communication and facilitate the generation of customized task programs for the robot. This system was informed by a formative study that examined how humans use external visual tools to assist verbal communication in spatial tasks. To evaluate its effectiveness, we conducted a user experiment (n = 20) comparing GenComUI with a voice-only baseline. Qualitative and quantitative results demonstrate that generative visual aids enhance verbal task communication by providing continuous visual feedback, thus promoting natural and effective human-robot communication. Additionally, the study offers a set of design implications, emphasizing how dynamically generated visual aids can serve as an effective communication medium in human-robot interaction. These findings underscore the potential of generative visual aids to inform the design of more intuitive and effective human-robot communication, particularly for complex communication scenarios in human-robot interaction and LLM-based end-user development.
https://dl.acm.org/doi/10.1145/3706598.3714238
Recent attention to the anthropomorphism of language technologies like LLMs—the attribution of human-like qualities to non-human objects or entities—has sparked renewed discussions about the potential negative impacts of anthropomorphism. To productively discuss these impacts and the contexts in which anthropomorphism is appropriate, we need a shared vocabulary for the vast variety of ways that language can be anthropomorphic. In this work, we draw on existing literature and analyze empirical cases of user interactions with language technologies to develop a taxonomy of textual expressions that can contribute to anthropomorphism. We highlight challenges and tensions involved in understanding linguistic anthropomorphism, such as how all language is fundamentally human and how efforts to characterize and shift perceptions of humanness in machines can also dehumanize certain humans. We discuss how our taxonomy supports more precise and effective discussions of, and decisions about, the anthropomorphism of language technologies.
https://dl.acm.org/doi/10.1145/3706598.3714038
Urban gardening is widely recognized for its numerous health and environmental benefits. However, the lack of suitable garden spaces, demanding daily schedules, and limited gardening expertise present major roadblocks for citizens looking to engage in urban gardening. While prior research has explored smart home solutions to support urban gardeners, these approaches do not yet fully address these practical barriers. In this paper, we present PlantPal, a system that enables the cultivation of garden spaces irrespective of one's location, expertise level, or time constraints. PlantPal supports the shared operation of a precision agriculture robot (PAR) equipped with garden tools and a multi-camera system. Insights from a 3-week deployment (N=18) indicate that PlantPal facilitated the integration of gardening tasks into daily routines, fostered a sense of connection with one's field, and provided an engaging experience despite the remote setting. We contribute design considerations for future robot-assisted urban gardening concepts.
https://dl.acm.org/doi/10.1145/3706598.3713180
Technological advancements such as LLMs have enabled everyday things to use language, fostering increased anthropomorphism during interactions. This study employs material speculation to investigate how people experience things that express their thoughts, emotions, and intentions. We utilized Areca, an air purifier capable of keeping a diary, and placed it in the everyday spaces of eight participants over three weeks. We conducted weekly interviews to capture participants' evolving interactions with Areca, concluding with a session in which participants collaboratively speculated on the future of everyday things. Our findings indicate that things expressing thoughts, emotions, and intentions can be perceived as possessing agency beyond mere functionality. While some participants exhibited emotional engagement with Areca over time, responses varied, including moments of detachment. We conclude with design implications for HCI designers, offering insights into how emerging technologies may shape human-thing relationships in complex ways.
https://dl.acm.org/doi/10.1145/3706598.3713228
Drones are increasingly being deployed to assist firefighting crews in their missions, with the technology often chosen based on availability rather than alignment with crews' specific needs. This problem is exacerbated in the Global South, where infrastructure is scarce and where specific processes and user needs must be adequately mapped to successfully introduce new technologies. We conducted semi-structured interviews with firefighting professionals (N=15) from Thailand, covering their prior experience with drones, the challenges they encounter in their job, and how they envision this technology could better support them in the future. Our findings describe users' technological needs and their expectations regarding interaction and collaboration with drones. We identified specific challenges in Thailand that hinder the deployment of drone technology, including mismatches in technical and financial decisions. Furthermore, participants advocated for sharing physical systems between fire departments. We conclude with design considerations for drones in resource-limited firefighting contexts.
https://dl.acm.org/doi/10.1145/3706598.3714172
Natural interactions, such as those based on gesture input, feel intuitive, familiar, and well-suited to user abilities in context, and have been supported by extensive research. Contrary to this mainstream view, we advocate for non-natural interaction design: a transformative process that yields highly effective interactions by deliberately deviating from user intuition, from expectations of physical-world naturalness, or from the contexts in which innate human modalities, such as gestures used for interaction and communication, are applied. This approach departs from the established notion of the "natural" while still prioritizing usability. To this end, we offer four perspectives on the relationship between natural and non-natural design, and explore three prototypes addressing gesture-based interactions with digital content in the physical environment, on the user's body, and through digital devices, to challenge assumptions in natural design. Lastly, we provide a formalization of non-natural interaction, along with design principles to guide future developments.
https://dl.acm.org/doi/10.1145/3706598.3713459
Telepresence robots have the potential to change our experiences in galleries and museums, allowing hybrid interactions for visitors and museum professionals, improving accessibility, offering activities or information, and supporting a range of practical use cases (e.g., robots augmenting museum exhibits). We present the results of three qualitative studies conducted in the UK exploring the acceptability (Study 1: interviews with museum professionals with no previous exposure to telepresence), acceptance (Study 2: focus groups providing initial exposure to telepresence robots), and adoption (Study 3: interviews with museum professionals with long-term exposure to the robots) of telepresence robots in museums. Our results identify opportunities and barriers from the unique perspective of museum professionals and show how museums' priorities shift and change according to their exposure to different technologies. We propose a set of practical guidelines for future telepresence robots in museums, including design implications, potential applications, and integration strategies.
https://dl.acm.org/doi/10.1145/3706598.3713533