This study session has ended. Thank you for participating.
Today’s youth live in a world deeply intertwined with AI, which has become an integral part of everyday life. It is therefore important for youth to think critically about and examine AI so that they can become responsible users in the future. Although recent efforts have educated youth about AI with a focus on delivering critical perspectives within a structured curriculum, youth still need opportunities to develop critical thinking competencies that carry over into their own lives. Against this background, we designed an informal learning experience, an AI-related exhibition, to cultivate critical thinking competency. To explore changes before and after the exhibition, we invited 23 participants to experience it. We found that the exhibition can support youth in relating AI to their lives through critical thinking processes. Our findings suggest implications for designing learning experiences that foster critical thinking competency for better coexistence with AI.
Although reading assignments are prevalent, methods to encourage students to read actively are limited. We propose ReadingQuizMaker, a system that helps instructors conveniently design high-quality questions that support students' comprehension of readings. ReadingQuizMaker adapts to instructors' natural workflows for creating questions while providing NLP-based, process-oriented support. It lets instructors decide when and which NLP models to use, select the input to the models, and edit the outcomes. In an evaluation study, instructors found the resulting questions to be comparable to their previously designed quizzes. They praised ReadingQuizMaker for its ease of use and considered the NLP suggestions satisfying and helpful. We compared ReadingQuizMaker with a control condition in which instructors were given automatically generated questions to edit; instructors showed a strong preference for ReadingQuizMaker's human-AI teaming approach. Our findings suggest the importance of giving users control and showing an immediate preview of AI outcomes when providing AI support.
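The abstract leaves the implementation unspecified; purely as an editorial illustration, here is a minimal Python sketch of the human-in-the-loop pattern it describes: the instructor selects the model input, sees an immediate preview of the model's draft, and edits before accepting. Every name below is hypothetical and not from the paper.

```python
# Minimal sketch (hypothetical, not the paper's API) of the
# instructor-in-the-loop flow described above: select the input,
# preview the model's draft immediately, then edit or accept.

def draft_question(passage: str) -> str:
    """Stand-in for whichever NLP question-generation model the
    instructor chooses; returns a draft quiz question."""
    return f'Based on the reading, what point does the passage "{passage[:50]}..." make?'

def author_quiz_item(passage: str, instructor_edit=None) -> str:
    draft = draft_question(passage)  # immediate preview of the AI outcome
    # The instructor keeps control: they may rewrite the draft freely.
    return instructor_edit if instructor_edit else draft

if __name__ == "__main__":
    passage = ("Spaced practice leads to more durable learning than "
               "massed practice, even when total study time is equal.")
    print(author_quiz_item(passage))
    print(author_quiz_item(passage,
          instructor_edit="Why does spaced practice beat cramming?"))
```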
Artificial intelligence (AI) literacy is especially important for those who may not be well-represented in technology design. We worked with ten Black girls in fifth and sixth grade from a predominantly Black school to understand their perceptions of fair and accountable AI and how they can have an empowered role in the creation of AI. Thematic analysis of discussions and activity artifacts from a summer camp and an after-school session revealed a number of findings about how Black girls perceive AI; how they primarily consider fairness as niceness and equality (but may need support in considering other notions, such as equity); how they consider accountability; and how they envision a just future. We also discuss how the learners can be positioned as decision-making designers in creating AI technology, as well as how AI literacy learning experiences can be empowering.
Children acquire an understanding of the world by asking “why” and “how” questions. Conversational agents (CAs) like smart speakers or voice assistants can be promising respondents to children's questions as they are more readily available than parents or teachers. However, CAs' answers to “why” and “how” questions are not designed for children, as they can be difficult to understand and provide little interactivity to engage the child. In this work, we propose design guidelines for creating interactive dialogues that promote children's engagement and help them understand explanations. Applying these guidelines, we propose DAPIE, a system that answers children's questions through interactive dialogue by employing an AI-based pipeline that automatically transforms existing long-form answers from online sources into such dialogues. A user study (N=16) showed that, with DAPIE, children performed better in an immediate understanding assessment while also reporting higher enjoyment than when explanations were presented sentence-by-sentence.
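As a hedged illustration only (the paper's actual pipeline relies on AI models and design guidelines not reproduced here), the following toy sketch shows the core idea of the transformation: breaking a long-form answer into short dialogue turns with engagement prompts in between.

```python
# Toy sketch of turning a long-form answer into short, interactive
# dialogue turns, loosely inspired by the idea in the abstract above.
# All names are illustrative; the real system uses an AI-based pipeline.

import re

def to_dialogue(long_answer: str) -> list:
    """Split an answer into short agent turns, inserting a simple
    check-in prompt between chunks to keep the child engaged."""
    sentences = re.split(r"(?<=[.!?])\s+", long_answer.strip())
    turns = []
    for i, sentence in enumerate(sentences):
        turns.append(("agent", sentence))
        if i < len(sentences) - 1:  # no prompt after the last chunk
            turns.append(("agent", "Can you guess what happens next?"))
    return turns

if __name__ == "__main__":
    answer = ("The sky looks blue because sunlight bounces off the air. "
              "Blue light bounces around more than red light. "
              "So when you look up, you mostly see that blue light.")
    for role, text in to_dialogue(answer):
        print(f"{role}: {text}")
```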
Enabling students to dynamically transition between individual and collaborative learning activities has great potential to support better learning. We explore how technology can support teachers in orchestrating dynamic transitions during class. Working with five teachers and 199 students over 22 class sessions, we conducted classroom-based prototyping of a co-orchestration technology ecosystem that supports the dynamic pairing of students working with intelligent tutoring systems. Using mixed-methods data analysis, we study the observed classroom dynamics and how teachers and students perceived and experienced dynamic transitions as supported by our technology. We discover a potential tension between teachers' and students' preferred levels of control: students prefer a degree of control over the dynamic transitions that teachers are hesitant to grant. Our study reveals design implications and challenges for future human-AI co-orchestration in classroom use, bringing us closer to realizing the vision of highly personalized smart classrooms that address the unique needs of each student.
AI code generators like OpenAI Codex have the potential to assist novice programmers by generating code from natural language descriptions; however, over-reliance might negatively impact learning and retention. To explore the implications of AI code generators for introductory programming, we conducted a controlled experiment with 69 novices (ages 10-17). Learners worked on 45 Python code-authoring tasks, each followed by a code-modification task; half of the learners had access to Codex for the authoring tasks. Our results show that using Codex significantly increased code-authoring performance (a 1.15x higher completion rate and 1.8x higher scores) while not decreasing performance on manual code-modification tasks. Additionally, learners who had access to Codex during the training phase performed slightly better on evaluation post-tests conducted one week later, although this difference did not reach statistical significance. Notably, learners with higher Scratch pre-test scores performed significantly better on the retention post-tests if they had prior access to Codex.