Crowdsourcing the Perception of Machine Teaching

Abstract

Teachable interfaces can empower end-users to attune machine learning systems to their idiosyncratic characteristics and environment by explicitly providing pertinent training examples. While they facilitate control, their effectiveness can be hindered by users' lack of expertise or misconceptions. We investigate how users conceptualize, experience, and reflect on their engagement in machine teaching by deploying a mobile teachable testbed on Amazon Mechanical Turk. Using a performance-based payment scheme, crowdworkers (N=100) are asked to train, test, and re-train a robust recognition model in real time with a few snapshots taken in their environment. We find that participants incorporate diversity into their examples, drawing parallels to how humans recognize objects independent of size, viewpoint, location, and illumination. Many of their misconceptions relate to consistency and to the model's capacity for reasoning. With limited variation and few edge cases in testing, the majority do not change strategies on a second training attempt.
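The teach/test/re-teach loop the abstract describes can be illustrated with a minimal sketch. This is not the paper's actual model: the class names, the random "embeddings" standing in for camera snapshots, and the nearest-centroid rule are all illustrative assumptions about what a few-shot teachable recognizer might look like.

```python
# Minimal sketch of a "teachable" object recognizer: the user supplies a few
# labeled snapshots per object, then tests the model on new snapshots.
# Embeddings are simulated with random vectors; real systems would extract
# them from images. All names here are hypothetical, not from the paper.
import numpy as np

class TeachableRecognizer:
    def __init__(self):
        self.examples = {}  # label -> list of embedding vectors

    def teach(self, label, embedding):
        """Add one user-provided training example for a label."""
        self.examples.setdefault(label, []).append(np.asarray(embedding, float))

    def predict(self, embedding):
        """Classify a test snapshot by its nearest class centroid."""
        x = np.asarray(embedding, float)
        centroids = {lbl: np.mean(vecs, axis=0)
                     for lbl, vecs in self.examples.items()}
        return min(centroids, key=lambda lbl: np.linalg.norm(centroids[lbl] - x))

rng = np.random.default_rng(0)
model = TeachableRecognizer()
for _ in range(5):  # "a few snapshots" per object
    model.teach("mug", rng.normal(0.0, 0.1, 8))
    model.teach("keys", rng.normal(1.0, 0.1, 8))

# Test on a new "keys"-like snapshot; re-teaching is just more teach() calls.
print(model.predict(rng.normal(1.0, 0.1, 8)))
```

The sketch mirrors the study's interaction design: training is nothing more than accumulating examples, so a second training attempt simply adds (or varies) snapshots rather than changing the learner.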

Keywords
teachable interfaces
interactive machine learning
object recognition
crowdsourcing
personalization
Authors
Jonggi Hong
University of Maryland, College Park, MD, USA
Kyungjun Lee
University of Maryland, College Park, MD, USA
June Xu
University of Maryland, College Park, MD, USA
Hernisa Kacorri
University of Maryland, College Park, MD, USA
DOI

10.1145/3313831.3376428

Paper URL

https://doi.org/10.1145/3313831.3376428

Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2020.acm.org/)

Session: Crowdsourcing & the value of discussion

Paper session
Room: 316C MAUI
5 presentations
2020-04-29 23:00:00 – 2020-04-30 00:15:00