The increasing accessibility of large machine learning (ML) models has resulted in their widespread adoption in everyday products, with a correspondingly negative environmental impact. Selecting more suitable ML models could improve not only training time and achievable accuracy, but also long-term sustainability. However, ML developers' model selection process remains underexplored, especially with respect to sustainability trade-offs. Our interviews with 13 ML developers showed that participants select models mainly based on familiarity, accuracy, and interpretability, but often overlook sustainability. They critically reflected on the current trend toward large models and the lack of available information regarding model sustainability. We present implications for the ML and HCI communities, emphasizing the importance of critical reflection on model selection in education and practice. Based on our insights, we provide initial recommendations for promoting model sustainability evaluation and suggest how the HCI community can assist in making sustainable model alternatives more accessible.
https://dl.acm.org/doi/10.1145/3706598.3713240
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)