Towards Aligning Multimodal LLMs with Human Experts: A Focus on Parent–Child Interaction

Abstract

While multimodal large language models (MLLMs) are increasingly applied in human-centred AI systems, their ability to understand complex social interactions remains uncertain. We present an exploratory study on aligning MLLMs with speech–language pathologists (SLPs) in analysing joint attention in parent–child interactions, a key construct in early social–communicative development. Drawing on interviews and video annotations with three SLPs, we characterise how observational cues of gaze, action, and vocalisation inform their reasoning processes. We then test whether an MLLM can approximate this workflow through a two-stage prompting strategy that separates observation from judgement. Our findings reveal that alignment is more robust at the observation layer, where experts share common descriptors, than at the judgement layer, where interpretive criteria diverge. We position this work as a case-based probe into expert–AI alignment in complex social behaviour, highlighting both the feasibility and the challenges of applying MLLMs to socially situated interaction analysis.
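The two-stage prompting strategy described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: `query_mllm` stands in for a real MLLM call, and the prompts and responses are invented placeholders.

```python
# Sketch of two-stage prompting: stage 1 elicits observations only
# (gaze, action, vocalisation); stage 2 asks for a judgement grounded
# solely in those stage-1 observations.

def query_mllm(prompt: str) -> str:
    """Hypothetical stand-in for an MLLM API call; returns canned text."""
    if "Describe" in prompt:
        return "Child shifts gaze toward parent; parent points at a toy."
    return "The episode shows an instance of joint attention."

def two_stage_analysis(video_ref: str) -> dict:
    # Stage 1: observation -- describe cues without interpreting them.
    observation = query_mllm(
        f"Describe the gaze, actions, and vocalisations in {video_ref}."
    )
    # Stage 2: judgement -- interpret, conditioned on stage-1 output only.
    judgement = query_mllm(
        f"Given these observations: {observation}\n"
        "Judge whether the episode shows joint attention."
    )
    return {"observation": observation, "judgement": judgement}

result = two_stage_analysis("clip_01.mp4")
print(result["judgement"])
```

Separating the stages makes the observation layer (where the paper finds alignment is more robust) inspectable on its own, before any interpretive judgement is attached.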

Award
Honorable Mention
Authors
Weiyan Shi
Singapore University of Technology and Design, Singapore, Singapore
Kenny Tsu Wei Choo
Singapore University of Technology and Design, Singapore, Singapore

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: AI Systems for Human Goals

P1 - Room 122
7 presentations
2026-04-14, 18:00–19:30