Talaria: Interactively Optimizing Machine Learning Models for Efficient Inference

Abstract

On-device machine learning (ML) moves computation from the cloud to personal devices, protecting user privacy and enabling intelligent user experiences. However, fitting models on devices with limited resources presents a major technical challenge: practitioners need to optimize models and balance hardware metrics such as model size, latency, and power. To help practitioners create efficient ML models, we designed and developed Talaria: a model visualization and optimization system. Talaria enables practitioners to compile models to hardware, interactively visualize model statistics, and simulate optimizations to test the impact on inference metrics. Since its internal deployment two years ago, we have evaluated Talaria using three methodologies: (1) a log analysis highlighting its growth to 800+ practitioners submitting 3,600+ models; (2) a usability survey with 26 users assessing the utility of 20 Talaria features; and (3) qualitative interviews with the 7 most active users about their experience using Talaria.
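
As a concrete illustration of the kind of what-if simulation the abstract describes, the Python sketch below estimates how uniformly quantizing weights from 32-bit floats to 8-bit integers changes a toy model's size. This is a hypothetical, minimal example for intuition only; it is not Talaria's actual interface, and the layer names and weight counts are invented.

# Hypothetical sketch of an optimization "simulation": estimate model
# size under different weight bit widths. Not Talaria's real API.

BITS_PER_BYTE = 8

def model_size_bytes(layers, bits_per_weight):
    """Total weight storage for a model at a uniform bit width."""
    total_weights = sum(layers.values())
    return total_weights * bits_per_weight // BITS_PER_BYTE

# Toy model: layer name -> number of weights (illustrative values).
layers = {
    "conv1": 9_408,
    "conv2": 36_864,
    "fc": 512_000,
}

fp32_size = model_size_bytes(layers, bits_per_weight=32)
int8_size = model_size_bytes(layers, bits_per_weight=8)

print(f"fp32: {fp32_size / 1024:.1f} KiB")
print(f"int8: {int8_size / 1024:.1f} KiB")
print(f"size reduction: {1 - int8_size / fp32_size:.0%}")  # 75% smaller

A real system would additionally report measured hardware metrics such as latency and power per operation; this sketch covers only the size arithmetic, which follows directly from bit width.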

Award
Honorable Mention
Authors
Fred Hohman
Apple, Seattle, Washington, United States
Chaoqun Wang
Apple, Beijing, China
Jinmook Lee
Apple, Cupertino, California, United States
Jochen Görtler
Independent Researcher, Walldorf, Germany
Dominik Moritz
Apple, Pittsburgh, Pennsylvania, United States
Jeffrey P. Bigham
Apple, Pittsburgh, Pennsylvania, United States
Zhile Ren
Apple, Seattle, Washington, United States
Cecile Foret
Apple, Cupertino, California, United States
Qi Shan
Apple, Seattle, Washington, United States
Xiaoyi Zhang
Apple, Seattle, Washington, United States
Paper URL

doi.org/10.1145/3613904.3642628

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Large Language Models

Room: 316A
5 presentations
2024-05-15, 01:00–02:20