Studying the Effect of AI Code Generators on Supporting Learners in Introductory Programming

Abstract

AI code generators such as OpenAI Codex can assist novice programmers by generating code from natural language descriptions; however, over-reliance on them might negatively impact learning and retention. To explore the implications of AI code generators for introductory programming, we conducted a controlled experiment with 69 novices (ages 10-17). Learners worked on 45 Python code-authoring tasks, each followed by a code-modification task; half of the learners had access to Codex during the code-authoring tasks. Our results show that using Codex significantly increased code-authoring performance (1.15x higher completion rate and 1.8x higher scores) without decreasing performance on the manual code-modification tasks. Additionally, learners who had access to Codex during the training phase performed slightly better on the evaluation post-tests conducted one week later, although this difference did not reach statistical significance. Notably, among learners with higher Scratch pre-test scores, those who had prior access to Codex performed significantly better on the retention post-tests.

Authors
Majeed Kazemitabaar
University of Toronto, Toronto, Ontario, Canada
Justin Chow
University of Toronto, Toronto, Ontario, Canada
Carl Ka To Ma
University of Toronto, Toronto, Ontario, Canada
Barbara J. Ericson
University of Michigan, Ann Arbor, Michigan, United States
David Weintrop
University of Maryland, College Park, Maryland, United States
Tovi Grossman
University of Toronto, Toronto, Ontario, Canada
Paper URL

https://doi.org/10.1145/3544548.3580919

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Learning with and about AI

Hall B
6 presentations
2023-04-26, 18:00 to 19:30