Due to age-related cognitive and physical decline, older adults face numerous difficulties when learning new functions of smartphone applications. In particular, older adults often struggle to ask questions clearly and to follow instructions independently. Through a formative study (N=16), we identified the behaviors and challenges of older adults seeking help independently and analyzed the mechanisms that make in-person instruction effective. Based on these findings, we propose GuideMe, an in-situ conversational instruction system for older adults' application learning. GuideMe utilizes Vision-Language Models to analyze the multimodal context of users' situations, assists users in confirming their intentions by asking clarifying questions, and then provides step-by-step instructions using in-situ highlights and deictic gestures. A user study (N=18) demonstrated that GuideMe significantly reduced users' cognitive load during learning, helped them ask questions and follow instructions efficiently, and achieved performance comparable to that of in-person instruction.
As populations age and technology becomes more pervasive, understanding the alignment between older adults' values and technology design is paramount. More research is needed to understand how older adults' living contexts shape their values and their use of technology. To address this gap, we conducted a multi-context study exploring how older adults' values differ and how their living context might influence the adoption and use of technology, drawing on 22 semi-structured interviews with older adults in various residential contexts. We show that older adults tend to prioritize the same core values across living contexts, yet how they express those values differs by context, and that technology can amplify or inhibit key values. We describe implications for context-responsive technology and design for continuity, allowing older adults to continually uphold important values through technology use.
Large Language Models (LLMs) are increasingly integrated into mental health and well-being technologies, yet little is known about how they are perceived by older adults or how they should be designed to meet later-life needs. Mindfulness technologies, often promoted as tools for healthy ageing, provide a useful context for exploring these questions. We conducted participatory workshops with sixteen older adults using LugnAI, a prototype LLM-based system for guided mindfulness practice. Participants reflected on their experiences with AI-guided mindfulness and contributed design preferences for future systems. Analysis revealed tensions between adaptivity and autonomy, supportive versus intrusive engagement strategies, and AI-enabled emotional support versus the preservation of human connection and self-regulation practices. Based on these findings, we provide concrete design considerations for LLM-based mindfulness technologies that are sensitive to the socioaffective realities of ageing. While situated in mindfulness, the insights extend to broader applications of LLMs in supporting older adults’ well-being.
Research with older adults has hinted at the ways that elements beyond the interface play a role in technology use, including videoconferencing. To further understand the range of materials and resources involved, we studied videoconferencing use by ten older individuals with cognitive concerns in a week-long study combining interviews, observations, and a modified diary study. Our analysis identified that objects extending beyond software and hardware play a role in videoconferencing, including paper-based objects, personal items, and objects in the built environment. These objects support participants by externalizing information that is difficult to recall, distributing cognitive effort across time, and lowering cognitive load through their spatial placement and affordances. These insights point to opportunities for researchers working with older people to focus on the work happening outside of today's interfaces. We also discuss how the lens of distributed cognition could help us design better technologies to support age-related cognitive impairment.
Home-based care (HBC) delivers medical and care services in patients' living environments, offering unique opportunities for patient-centered care. However, patient agency is often inadequately represented in shared HBC planning processes. Through 23 multi-stakeholder interviews with HBC patients, healthcare professionals, and care workers, alongside 60 hours of ethnographic observations, we examined how patient agency manifests in HBC and why this representation gap occurs. Our findings reveal that patient agency is not a static individual attribute but a relational capacity shaped through maintaining everyday continuity, mutual recognition from care providers, and engagement with material home environments. Furthermore, we identified that structured documentation systems filter out contextual knowledge, informal communication channels fragment patient voices, and doctor-centered hierarchies position patients as passive recipients. Drawing on these insights, we propose design considerations to bridge this representation gap and to integrate patient agency into shared HBC plans.
Deaf students face a persistent visual attention split between the signer and instructional materials. Although virtual reality (VR) is often promoted as an educational solution, it typically reinforces hearing norms (e.g., caption overlays or interpreter boxes grafted onto hearing classrooms). Our work foregrounds Deaf leadership and reclaims VR design authority: in a mixed-hearing team led by Deaf scholars, we designed and evaluated a VR classroom prototype featuring three signer-placement modes: corner, parallel, and transparent. Twelve Deaf participants explored the prototype during a 15-minute lecture and participated in qualitative semi-structured interviews. Participants reported reduced attention split and improved visibility, and suggested that VR may support flexibility and comprehension in Deaf learning. From these reflections, we introduce a five-dimension conceptual framework---proximity, customizability, visual efficiency, cultural fit, and task flexibility---that organizes how Deaf signers evaluate signer placements. This work moves Deaf Tech theory into practice, opening pathways for future Deaf-centered, culturally grounded HCI.
AI-generated influencers are rapidly gaining popularity on Chinese short-video platforms, often adopting kinship-based roles such as "AI grandchildren'' to attract older adults. Although this trend has raised public concern, little is known about the design strategies behind these influencers, how older adults experience them, and the benefits and risks involved. In this study, we combined social media analysis with interviews to unpack these questions. Our findings show that influencers use both visual and conversational cues to enact kinship roles, prompting audiences to engage in kinship-based role-play. Interviews further show that these cues evoke emotional resonance and help fulfill older adults' informational and emotional needs, while also raising concerns about emotional displacement and unequal emotional investment. We highlight the complex relationship between virtual avatars and real family ties, shaped by broader sociocultural norms, and discuss how AI might strengthen social support for older adults while mitigating risks within cultural contexts.