AI coding assistants are changing how software engineers carry out coding work. This shift raises a key question: does this change in coding work also alter how software engineers evaluate and demonstrate coding expertise? We explore this question through a simulated live coding interview involving two software engineers, one as evaluator and the other as candidate, with AI tools allowed. Participants continued to rely on familiar criteria but adjusted the evidence they sought, as AI assistants both introduced new ways of demonstrating expertise and obscured some established workflows. The importance of these evolving enactions varied with evaluators’ emphasis on implementation versus planning. Because heightened productivity expectations lacked a clear link to expertise, they created additional tensions around these evolving enactions. We conclude by discussing how extended enactions can be supported through AI-focused tools and training, and how the tensions between diminished enactions and productivity expectations call for collaborative attention.
ACM CHI Conference on Human Factors in Computing Systems