Computational thinking (CT) is essential for the 21st-century learner, yet assessing CT remains challenging, particularly in constructionist learning, where individual idiosyncrasies may clash with one-size-fits-all assessments. Tools like Dr. Scratch offer CT metrics that show promise for effective and scalable CT assessment, especially in constructionist game-based learning (GBL). Prior work has advanced the design of automated CT metrics but has rarely included teachers in the process. We extend Dr. Scratch to improve automated CT assessments for GBL and put teachers in the loop to evaluate its novel features. Specifically, we interviewed seven middle school teachers who employ GBL in STEM curricula and asked them to provide feedback on the newly designed CT metrics. Teachers view the new CT metrics positively, underscoring their potential for adaptive CT assessments despite some hindrances. We advance automated CT assessments via teacher evaluation toward design-sensitive CT metrics and CT for all.
https://dl.acm.org/doi/10.1145/3706598.3713368
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)