Accessible input & learning

Paper session

Conference Name
CHI 2020
Interactive Multisensory Environments for Primary School Children
Abstract

Interactive Multi-Sensory Environments (iMSEs) are room-sized interactive installations equipped with digitally enriched physical materials and ambient embedded devices. These items can sense users' presence, gestures, movements, and manipulation, and react by providing gentle stimulation (e.g., light, sound, projections, blowing bubbles, tactile feel, aromas) to different senses. Most prior research on iMSEs investigates their use for persons with disabilities (e.g., autism). Our work focuses on the use of iMSEs in primary education contexts and for mixed groups of young students, i.e., children with and without disabilities. The paper describes the latest version of an iMSE called Magic Room that has been installed in two local schools. We report two empirical studies devoted to understanding how the Magic Room could be used in inclusive educational settings and to exploring its potential benefits.
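
As an illustration of the sense-and-react behaviour the abstract describes, the sketch below shows a minimal event-to-stimulus mapping for a smart room. It is not the Magic Room's actual software; all event and actuator names are hypothetical.

```python
# Minimal sketch of a sense-react loop for an interactive multisensory room.
# Not the Magic Room's implementation; event and actuator names are invented.

STIMULUS_MAP = {
    "child_enters":   [("light", "soft blue fade"), ("sound", "ambient chime")],
    "object_touched": [("projection", "ripple at touch point"), ("haptics", "gentle vibration")],
    "group_gesture":  [("bubbles", "burst"), ("aroma", "lavender puff")],
}

def react(event):
    """Return the (actuator, effect) pairs to trigger for a sensed event."""
    return STIMULUS_MAP.get(event, [])

if __name__ == "__main__":
    for actuator, effect in react("object_touched"):
        print(f"trigger {actuator}: {effect}")
```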

Keywords
Interactive Multisensory Environment
Children
Children with Special Needs
Embodied Interaction
Primary School
Smart Object
Smart Space
Well-being
Authors
Franca Garzotto
Politecnico di Milano, Milan, Italy
Eleonora Beccaluva
Politecnico di Milano, Milan, Italy
Mattia Gianotti
Politecnico di Milano, Milan, Italy
Fabiano Riccardi
Politecnico di Milano, Milan, Italy
DOI

10.1145/3313831.3376343

Paper URL

https://doi.org/10.1145/3313831.3376343

Video
Auditory Display in Interactive Science Simulations: Description and Sonification Support Interaction and Enhance Opportunities for Learning
Abstract

Science simulations are widely used in classrooms to support inquiry-based learning of complex science concepts. These tools typically rely on interactive visual displays to convey relationships. Auditory displays, including verbal description and sonification (non-speech audio), combined with alternative input capabilities, may provide an enhanced experience for learners, particularly learners with visual impairment. We completed semi-structured interviews and usability testing with eight adult learners with visual impairment for two audio-enhanced simulations. We analyzed trends and edge cases in participants' interaction patterns, interpretations, and preferences. Findings include common interaction patterns across simulation use, increased efficiency with second use, and the complementary role that description and sonification play in supporting learning opportunities. We discuss how these control and display layers work to encourage exploration and engagement with science simulations. We conclude with general and specific design takeaways to support the implementation of auditory displays for accessible simulations.
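
For readers unfamiliar with sonification, the sketch below shows the basic idea of mapping a continuous simulation value onto pitch so that changes can be heard as well as seen. It is a generic, assumed mapping, not the audio design of the simulations studied in the paper.

```python
# Generic sonification sketch: map a simulation value to a tone frequency.
# Illustrative only; parameter names and ranges are assumptions.

def value_to_frequency(value, vmin, vmax, f_low=220.0, f_high=880.0):
    """Linearly map value in [vmin, vmax] to a frequency in Hz (clamped)."""
    t = (value - vmin) / (vmax - vmin)
    t = min(max(t, 0.0), 1.0)
    return f_low + t * (f_high - f_low)

# Example: a rising simulation value produces a rising pitch.
for v in (0.0, 0.5, 1.0):
    print(f"value={v}: {value_to_frequency(v, 0.0, 1.0):.0f} Hz")
```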

Keywords
Multimodal
interactive simulation
learning
visual impairment
Authors
Brianna J. Tomlinson
Georgia Institute of Technology, Atlanta, GA, USA
Bruce N. Walker
Georgia Institute of Technology, Atlanta, GA, USA
Emily B. Moore
University of Colorado Boulder, Boulder, CO, USA
DOI

10.1145/3313831.3376886

Paper URL

https://doi.org/10.1145/3313831.3376886

Decoding Intent With Control Theory: Comparing Muscle Versus Manual Interface Performance
Abstract

Manual device interaction requires precise coordination, which may be difficult for users with motor impairments. Muscle interfaces provide alternative interaction methods that may enhance performance, but have not yet been evaluated for simple (e.g., mouse tracking) and complex (e.g., driving) continuous tasks. Control theory enables us to probe continuous task performance by separating user input into intent and error correction to quantify how motor impairments impact device interaction. We compared the effectiveness of a manual versus a muscle interface for eleven users without and three users with motor impairments performing continuous tasks. Both user groups preferred and performed better with the muscle versus the manual interface for the complex continuous task. These results suggest muscle interfaces and algorithms that can detect and augment user intent may be especially useful for future design of interfaces for continuous tasks.
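
The control-theoretic decomposition mentioned in the abstract can be sketched as fitting the recorded input u(t) with a feedforward term driven by the reference r(t) (intent) and a feedback term driven by the tracking error e(t) = r(t) - y(t) (error correction). The code below is a minimal least-squares version of that idea, not the paper's exact model; the gains and synthetic signals are assumptions.

```python
# Minimal sketch: split user input into intent (feedforward on the reference)
# and error correction (feedback on the tracking error) via least squares.
# Not the paper's exact model; gains and synthetic data are illustrative.
import numpy as np

def decompose_input(u, r, y):
    """Fit u ~ k_ff * r + k_fb * (r - y); return the gains and components."""
    e = r - y
    A = np.column_stack([r, e])                  # regressors: reference, error
    (k_ff, k_fb), *_ = np.linalg.lstsq(A, u, rcond=None)
    return k_ff, k_fb, k_ff * r, k_fb * e

# Synthetic example: a user who mostly tracks the reference and also corrects error.
t = np.linspace(0, 10, 500)
r = np.sin(t)                                    # reference trajectory
y = np.sin(t - 0.3)                              # lagging system output
u = 0.8 * r + 0.5 * (r - y) + 0.05 * np.random.randn(t.size)
k_ff, k_fb, intent, correction = decompose_input(u, r, y)
print(f"feedforward gain ~ {k_ff:.2f}, feedback gain ~ {k_fb:.2f}")
```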

Keywords
User intent
control theory
interaction
muscle interfaces
electromyography
motor impairments
accessibility
Authors
Momona Yamagami
University of Washington, Seattle, WA, USA
Katherine M. Steele
University of Washington, Seattle, WA, USA
Samuel A. Burden
University of Washington, Seattle, WA, USA
DOI

10.1145/3313831.3376224

Paper URL

https://doi.org/10.1145/3313831.3376224

Video
Designing Clinical AAC Tablet Applications with Adults who have Mild Intellectual Disabilities
Abstract

Patients with mild intellectual disabilities (ID) face significant communication barriers within primary care services. This has a detrimental effect on the quality of treatment being provided, meaning the consultation process could benefit from augmentative and alternative communication (AAC) technologies. However, little research has been conducted in this area beyond that of paper-based aids. We address this by extracting design requirements for a clinical AAC tablet application from n=10 adults with mild ID. Our results show that such technologies can promote communication between general practitioners (GPs) and patients with mild ID by extracting symptoms in advance of the consultation via an accessible questionnaire. These symptoms act as a referent and assist in raising the awareness of conditions commonly overlooked by GPs. Furthermore, the application can support people with ID in identifying and accessing healthcare services. Finally, the participants identified 6 key factors that affect the clarity of medical images.

Award
Best Paper
Keywords
Intellectual Disabilities
Primary Health Care
Augmentative and Alternative Communication
Accessibility
Mobile Applications
Authors
Ryan Colin Gibson
University of Strathclyde, Glasgow, United Kingdom
Mark D. Dunlop
University of Strathclyde, Glasgow, United Kingdom
Matt-Mouley Bouamrane
University of Edinburgh, Edinburgh, United Kingdom
Revathy Nayar
University of Strathclyde, Glasgow, United Kingdom
DOI

10.1145/3313831.3376159

Paper URL

https://doi.org/10.1145/3313831.3376159

Senorita: A Chorded Keyboard for Sighted, Low Vision, and Blind Mobile Users
Abstract

Senorita is a novel two-thumb virtual chorded keyboard for mobile devices. It arranges the letters on eight keys in a single row by the bottom edge of the device based on letter frequencies and the anatomy of the thumbs. Unlike most chorded methods, it provides visual cues to perform the chording actions in sequence, instead of simultaneously, when the actions are unknown, facilitating "learning by doing". Its compact design leaves most of the screen available and its position near the edge accommodates eyes-free text entry. In a longitudinal study with a smartphone, Senorita yielded on average 14 wpm. In a short-term study with a tablet, it yielded on average 9.3 wpm. In the final longitudinal study, it yielded 3.7 wpm with blind users, surpassing their Qwerty performance. Low vision users yielded 5.8 wpm. Further, almost all users found Senorita effective, easy to learn, and wanted to keep using it.
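
To make the chording idea concrete, the sketch below decodes a chord as an unordered set of keys, which is why the same chord can be entered simultaneously or, guided by visual cues, one key after another. The chord-to-letter table is hypothetical, not Senorita's actual layout.

```python
# Chorded decoding sketch: a chord is a set of keys, so press order and timing
# do not matter. The table below is invented, not Senorita's real layout.

CHORD_TABLE = {
    frozenset({0}): "e",        # single keys for the most frequent letters
    frozenset({1}): "t",
    frozenset({0, 1}): "a",     # multi-key chords for less frequent letters
    frozenset({2, 5}): "o",
}

def decode_chord(keys_pressed):
    """Decode the keys held (or accumulated in sequence) before release."""
    return CHORD_TABLE.get(frozenset(keys_pressed), "?")

print(decode_chord({1, 0}))     # -> "a": order of presses is irrelevant
print(decode_chord([2, 5]))     # -> "o": sequential presses accumulate first
```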

Keywords
Text input
chords
accessibility
blind
mobile
tablets
Authors
Gulnar Rakhmetulla
University of California, Merced, Merced, CA, USA
Ahmed Sabbir Arif
University of California, Merced, Merced, CA, USA
DOI

10.1145/3313831.3376576

Paper URL

https://doi.org/10.1145/3313831.3376576

Video