Animated virtual reality (VR) stories, combining the presence of VR with the artistry of computer animation, offer a compelling way to deliver messages and evoke emotions. Motivated by the growing demand for immersive narrative experiences, more creators are producing animated VR stories. However, a holistic understanding of their creation processes and the challenges involved in crafting these stories is still limited. Based on semi-structured interviews with 21 animated VR story creators, we identify ten common stages in their end-to-end creation processes, ranging from idea generation to evaluation, which form diverse workflows that are either story-driven or visual-driven. Additionally, we highlight nine unique issues that arise during the creation process, such as a lack of reference material for multi-element plots, the absence of specific functionalities for story integration, and inadequate support for audience evaluation. We compare the creation of animated VR stories to that of general XR applications and distill several future research opportunities.
Hackathons have become popular collaborative events for accelerating the development of creative ideas and prototypes. Several case studies showcase creative outcomes across domains such as industry, education, and research. However, there are no large-scale studies of creativity in hackathons that could advance theory on how hackathon formats lead to creative outcomes. We conducted a computational analysis of 193,353 hackathon projects. By operationalizing creativity through usefulness and novelty, we refined our dataset to 10,363 projects, allowing us to analyze how participant characteristics, collaboration patterns, and hackathon setups influence the development of creative projects. The contribution of our paper is twofold: we identify means for organizers to foster creativity in hackathons, and we explore the use of large language models (LLMs) to augment the evaluation of creative outcomes, discussing the challenges and opportunities of doing so, which has implications for creativity research at large.
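As a rough illustration of what operationalizing creativity through usefulness and novelty with LLM assistance might look like, the sketch below filters projects on two LLM-rated dimensions. It is an assumption-laden toy, not the paper's actual pipeline: the `Project` class, the `call_llm` placeholder, the rating rubric, scale, and threshold are all hypothetical.

```python
# Illustrative sketch only: one way to operationalize "creativity" as the
# combination of usefulness and novelty, with an LLM as an auxiliary rater.
# `call_llm` stands in for whatever chat-completion client is used; the
# rubric, 1-5 scale, and threshold are assumptions, not the study's values.
from dataclasses import dataclass


@dataclass
class Project:
    title: str
    description: str


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM API call that returns the model's text response."""
    raise NotImplementedError


def rate_dimension(project: Project, dimension: str) -> int:
    """Ask the LLM for a 1-5 rating of one creativity dimension."""
    prompt = (
        f"Rate the following hackathon project on {dimension} "
        f"from 1 (low) to 5 (high). Reply with a single digit.\n\n"
        f"Title: {project.title}\nDescription: {project.description}"
    )
    return int(call_llm(prompt).strip())


def is_creative(project: Project, threshold: int = 4) -> bool:
    """Keep a project only if both usefulness and novelty clear the threshold."""
    usefulness = rate_dimension(project, "usefulness")
    novelty = rate_dimension(project, "novelty")
    return usefulness >= threshold and novelty >= threshold
```

In such a setup, the interesting design questions are how to validate LLM ratings against human judgments and how sensitive the filtered dataset is to the chosen rubric and threshold.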
Creative Problem-Solving (CPS) promotes creative and critical thinking while enhancing real-world problem-solving skills, making it essential for middle school education. However, providing personalized mentorship in CPS projects at scale is challenging due to resource constraints and diverse student needs. To address this, we developed Mentigo, an AI-driven mentor agent designed to guide middle school students through the CPS process. Using a dataset of real classroom interactions, we encoded CPS task stages, adaptive guidance strategies, and personalized feedback mechanisms to inform Mentigo's dynamic mentoring framework powered by large language models (LLMs). A comparative experiment with 12 students and evaluations from five expert educators demonstrated improved student engagement, creativity, and task performance. Our findings highlight design implications for using LLM-based AI mentors to enhance CPS learning in educational environments.
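To make the idea of a stage-aware, strategy-conditioned LLM mentor concrete, the minimal sketch below composes a prompt from the current CPS stage and an inferred student state. The stage list, strategy mapping, and `call_llm` helper are illustrative assumptions; they are not Mentigo's actual prompts or architecture.

```python
# Minimal sketch of a stage-aware LLM mentoring loop in the spirit of an
# AI mentor agent for CPS. All names and strings here are hypothetical.
CPS_STAGES = [
    "clarify the problem",
    "generate ideas",
    "develop solutions",
    "plan for action",
]

GUIDANCE_STRATEGIES = {
    "stuck": "offer a scaffolding question that narrows the task",
    "off_track": "restate the stage goal and redirect gently",
    "progressing": "give encouraging feedback and prompt deeper reflection",
}


def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to the underlying LLM."""
    raise NotImplementedError


def mentor_reply(stage: str, student_state: str, student_message: str) -> str:
    """Compose a stage- and state-conditioned prompt and return the mentor's reply."""
    strategy = GUIDANCE_STRATEGIES.get(student_state, GUIDANCE_STRATEGIES["progressing"])
    prompt = (
        "You are a mentor guiding a middle school student through "
        f"Creative Problem-Solving. Current stage: {stage}. "
        f"Strategy: {strategy}.\n"
        f"Student: {student_message}\nMentor:"
    )
    return call_llm(prompt)
```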
How does generative AI affect collaborative creative work and humans' capability to carry it out? We tested 52 participant pairs on a standard creativity test, the Alternate Uses Test. The experimental AI group had access to ChatGPT-4, while the control group did not. The intervention did not improve overall performance. Further, the AI group elaborated their ideas significantly less, and this effect carried over to the unaided post-test, pointing to longer-term effects of AI be(com)ing everyday technology, as how people perform a task with a tool shapes how they (learn to) perform the task without it. Analysis of the human-AI collaboration process revealed that participants were selective in using ChatGPT-4 output for the experimental task but often misjudged it; this reduced the number of ideas they created and underscores that users need to understand a (generative AI-based) tool's capabilities for the specific task in order to perform effectively.
Recent advances in generative AI music have resulted in new technologies that are being framed as co-creative tools for musicians, with early work demonstrating their potential to enrich music practice. While the field has seen many valuable contributions, work that involves practising musicians in the design and development of these tools is limited; the majority of projects include them only once a tool has been developed. In this paper, we present a case study that explores the needs of practising musicians through the co-design of a musical variation system, highlighting the importance of involving a diverse range of musicians throughout the design process and uncovering various design insights. This was achieved through two workshops and a two-week ecological evaluation, in which musicians from different musical backgrounds offered valuable insights not only on the system's design but also on how a musical AI could be integrated into their musical practices.
Since the emergence of generative AI, creative workers have spoken up about the harms to their careers arising from this new technology. A common theme in these accounts is that generative AI models are trained on workers' creative output without their consent and without giving credit or compensation to the original creators.
This paper reports findings from 20 interviews with creative workers in three domains: visual art and design, writing, and programming. We investigate the gaps between current AI governance strategies and what creative workers want from generative AI governance, as well as the nuanced roles that consent, compensation, and credit play when AI models are trained on their work. Finally, we make recommendations for how generative AI can be governed and how operators of generative AI systems might train models on creative output more ethically in the future.