Emoji are graphical symbols that appear in many aspects of our lives. Worldwide, around 36 million people are blind and 217 million have a moderate to severe visual impairment. This portion of the population may use and encounter emoji, yet it is unclear what accessibility challenges emoji introduce. We first conducted an online survey with 58 visually impaired participants to understand how they use and encounter emoji online, and the challenges they experience. We then conducted 11 interviews with screen reader users to learn more about the challenges reported in our survey findings. Our interview findings demonstrate that technology is both an enabler and a barrier, that emoji descriptors can hinder communication, and that the use of emoji consequently impacts social interaction. Drawing on the findings from both studies, we propose best practices for using emoji and recommendations to improve the future accessibility of emoji for visually impaired people.
Living in an informal settlement with a visual impairment can be very challenging, often resulting in social exclusion. Mobile phones have been shown to be hugely beneficial to people with sight loss in formal and high-income settings. However, little is known about whether these results hold true for people with visual impairments (VIPs) in informal settlements. We present the findings of a case study of mobile technology use by VIPs in Kibera, an informal settlement in Nairobi. We used contextual interviews, ethnographic observations and a co-design workshop to explore how VIPs use mobile phones in their daily lives, and how this use influences the social infrastructure of VIPs. Our findings suggest that mobile technology supports and shapes the creation of social infrastructure. However, this is only made possible through the existing support networks of the VIPs, which are mediated through four types of interaction: direct, supported, dependent and restricted.
Social media platforms are integral to public and private discourse, but are becoming less accessible to people with vision impairments due to an increase in user-posted images. Some platforms (e.g., Twitter) let users add image descriptions (alternative text), but only 0.1% of images include them. To address this accessibility barrier, we created Twitter A11y, a browser extension that adds alternative text on Twitter using six methods. For example, screenshots of text are common, so we detect textual images and create alternative text using optical character recognition. Twitter A11y also leverages services to automatically generate alternative text or reuses existing descriptions from across the web. We compare the coverage and quality of Twitter A11y's six alt-text strategies by evaluating the timelines of 50 self-identified blind Twitter users. We find that Twitter A11y increases alt-text coverage from 7.6% to 78.5%, before crowdsourcing descriptions for the remaining images. We estimate that 57.5% of returned descriptions are high-quality. We then report on the experiences of 10 participants with visual impairments who used the tool during a week-long deployment. Twitter A11y increases access to social media platforms for people with visual impairments by providing high-quality automatic descriptions for user-posted images.
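The abstract does not include the system's implementation; as an illustration of the OCR strategy it describes (detecting textual images and generating alt text from recognized text), the following is a minimal Python sketch. The use of pytesseract, the word-count threshold, and the function name ocr_alt_text are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of an OCR-based alt-text strategy, as described in the
# Twitter A11y abstract. Assumption: Tesseract (via pytesseract) stands in
# for whatever OCR engine the real system uses; the threshold is illustrative.
from typing import Optional
from PIL import Image
import pytesseract

MIN_WORDS = 5  # treat images with at least this many recognized words as "textual"

def ocr_alt_text(image_path: str) -> Optional[str]:
    """Return alt text for a textual image, or None if too little text is found."""
    image = Image.open(image_path)
    text = pytesseract.image_to_string(image).strip()
    words = text.split()
    if len(words) < MIN_WORDS:
        return None  # likely a photo or graphic; fall back to another strategy
    # Collapse whitespace so screen readers receive one clean description.
    return "Image of text: " + " ".join(words)

if __name__ == "__main__":
    alt = ocr_alt_text("tweet_screenshot.png")
    print(alt if alt else "No textual content detected; try another alt-text strategy.")
```

In practice such a check would be one of several fallbacks, used only when no author-provided or web-sourced description is available.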
One of the challenges faced by healthy older adults is the feeling of not "being-seen". Companion robots, commonly designed with a zoomorphic or humanoid appearance, show success among clinical older adults, but healthy older adults find them degrading. We present the design and implementation of a novel non-humanoid robot. The robot's primary function is a cognitive word game. Social interaction is conveyed as a secondary function, using non-verbal gestures inspired by dancers' movements. In a lab study, 39 healthy older adults interacted with the prototype in three conditions: Companion-Function, Game-Function and No-Function. Results show that the non-verbal gestures were associated with feelings of "being-seen", and that willingness to accept the robot into the home was influenced by its function, with the game function rated significantly higher than the companion function. We conclude that robot designers should further explore the potential of non-humanoid robots as a new class of companion robots, with a primary function that is not companionship.
Much information is nowadays presented graphically. However, students who are blind do not have access to visual information. Providing alternative text is not always an appropriate solution, as independently exploring graphics to discover information is a fundamental part of the learning process. In this work, we introduce a mobile audio-tactile learning environment that facilitates the incorporation of real educational material. We evaluate our system by comparing three methods of interaction with tactile graphics: a tactile graphic augmented by (1) a document with key index information in Braille, (2) a digital document with key index information, and (3) the TPad system, an audio-tactile solution designed to meet specific needs within the school context. Our study shows that the TPad system is suitable for educational environments. Moreover, compared to the other methods, TPad enables faster exploration of tactile graphics and suggests a promising effect on the memorization of information.