This study session has ended. Thank you for participating.
Group conversations often shift quickly from topic to topic, leaving a small window of time for participants to contribute. AAC users often miss this window due to the speed asymmetry between speech and AAC devices. AAC users may take over a minute longer to contribute, and this speed difference can cause mismatches between the ongoing conversation and the AAC user's response, resulting in misunderstandings and missed opportunities to participate. We present COMPA, an add-on tool for online group conversations that seeks to support conversation partners in achieving common ground. COMPA uses a conversation's live transcription to enable AAC users to mark conversation segments they intend to address (Context Marking) and to generate contextual starter phrases (Phrase Assistance) based on the marked segment and a selected user intent. We study COMPA in 5 different triadic group conversations, each comprising a researcher, an AAC user, and a conversation partner (n=10), and share findings on how conversational context supports conversation partners in achieving common ground.
Existing videoconferencing (VC) technologies are often optimized for productivity and efficiency, with little support for the "soft side" of VC meetings such as empathy, authenticity, belonging, and emotional connection. This paper presents findings from a 15-month autoethnographic study of VC experiences by the first author, a person who stutters (PWS). Our research sheds light on the hidden costs of VC for PWS, uncovering the substantial emotional and cognitive effort that other meeting attendees are often unaware of. Recognizing the disproportionate burden on PWS to be heard in VC, we propose a set of design implications for a more inclusive communication environment, advocating for shared responsibility among all parties, including communication technologies, to ensure that every voice is included and respected.
People living with complex communication needs employ multimodal pathways to communicate, including limited speech, paralinguistics, non-verbal communication, and low-tech devices. However, most augmentative and alternative communication (AAC) interventions undermine end-users' agency by obstructing these intuitive communication pathways. In this paper, we collaborate with 19 people living with the language impairment aphasia, exploring contextual communication challenges before low-fidelity prototyping and wireframing wearable AAC displays. These activities culminated in two low-input wearable AAC prototypes that instead scaffold users' pre-existing communication abilities. First, the InkTalker is a low-power, affordable eInk AAC smartbadge designed to discreetly reveal invisible disabilities and to serve as a communication prop. Second, WalkieTalkie is a scalable AAC app that converts smartphones into a feature-rich public display operable via multimodal inputs and outputs. We offer results from communication interactions with both devices, discussions, and feedback responses. Participants used both AAC devices to interdependently socialise with others and augment pre-existing communication abilities.
Assistive technologies for adults with Down syndrome (DS) need designs tailored to their specific technology requirements. While prior research has explored technology design for individuals with intellectual disabilities, little is understood about the needs and expectations of adults with DS. Assistive technologies should leverage the abilities and interests of the population, while incorporating age- and context-considerate content. In this work, we interviewed six adults with DS, seven parents of adults with DS, and three experts in speech-language pathology, special education, and occupational therapy to determine how technology could support adults with DS. In our thematic analysis, four main themes emerged, including (1) community vs. home social involvement; (2) misalignment of skill expectations between adults with DS and parents; (3) family limitations in technology support; and (4) considerations for technology development. Our findings extend prior literature by including the voices of adults with DS in how and when they use technology.
Individuals with cognitive-communication disorders (CCDs) due to neurological conditions, such as traumatic brain injury and aphasia, experience difficulties in communication and cognition that impact their ability to perform activities of daily living, or ADLs (e.g., self-care, meal preparation, scheduling). Voice assistive technology (VAT) can support the independent performance of ADLs; however, there are limited VAT training programs that teach individuals with CCDs how to properly implement and use VAT for ADLs. The present study examined the implementation of an online training program using Alexa voice commands for five ADL domains (scheduling, entertainment, self-care, news & facts, and meal preparation). Using video analysis with seven adults with CCDs between ages 25 and 82 and interviews with five participants and three caregivers, we synthesized five weeks of training performance, analyzed participants' perceived benefits and challenges, and discuss challenges and opportunities for implementing VAT training for ADL skills for adults with CCDs.