On March 3rd, the world celebrates World Hearing Day, an annual event to raise awareness and promote ear and hearing health. In Kenya, some hospitals and local governments provide free hearing screenings. This year’s theme, “Hearing Care for All,” highlights the need for universal access to hearing healthcare services and technology. Unfortunately, policy gaps and systemic barriers continue to limit access to hearing care, especially in low- and middle-income countries.
However, advancements in AI technology are opening up new possibilities for improving the lives of the deaf and hard-of-hearing communities worldwide.
Policy gaps failing the deaf and hard-of-hearing community in Kenya
According to the World Health Organization (WHO), over 430 million people worldwide have hearing loss. This number is expected to increase to over 700 million by 2050. In Kenya, an estimated 600,000 people have some form of hearing impairment, yet access to quality hearing healthcare services remains limited.
A significant policy gap in Kenya is that sign language is not yet recognized as an official language, even though, in the same breath, the Constitution of Kenya mandates the state to promote the use of sign language and other communication formats accessible to persons with disabilities.
AI-Powered Hearing Aids
AI-powered hearing aids are rapidly changing the landscape of hearing healthcare, especially for individuals with mild to moderate hearing loss. These devices use AI algorithms to amplify and adjust sound based on the user’s preferences and environment.
Some smart hearing aids can even connect to smartphones and other devices, allowing for seamless streaming of music and phone calls. Direct streaming is currently supported on iPhones, iPads, and devices running Android 10 and above.
Additionally, some hearing aids can automatically detect and suppress background noise, making it easier for the user to focus on conversations or other important sounds. Some models on the market can now also monitor the wearer’s physical activity and location. Overall, the inclusion of machine learning and AI in hearing aids is aimed at helping people with hearing loss optimize their interaction with their surroundings.
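The idea of adjusting amplification to the environment can be sketched in a few lines. This is a deliberately simplified illustration, not how any real hearing aid works: the thresholds, gain values, and the use of a plain RMS loudness measure are all hypothetical, standing in for the far more sophisticated signal processing these devices perform.

```python
import math

def rms(frame):
    """Root-mean-square level of one audio frame (samples in -1.0..1.0)."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def choose_gain(frame, quiet_threshold=0.05, loud_threshold=0.5):
    """Pick an amplification factor from the frame's loudness.

    Quiet surroundings get a strong boost, moderate ones a gentler
    boost, and already-loud sound passes through unchanged -- a crude
    stand-in for environment-adaptive processing.
    """
    level = rms(frame)
    if level < quiet_threshold:
        return 4.0   # quiet room: boost soft sounds
    if level < loud_threshold:
        return 2.0   # moderate noise: moderate amplification
    return 1.0       # loud environment: no extra gain

quiet_frame = [0.01, -0.02, 0.015, -0.01]
loud_frame = [0.7, -0.8, 0.75, -0.65]
print(choose_gain(quiet_frame))  # quiet frame gets the 4.0 boost
print(choose_gain(loud_frame))   # loud frame passes through at 1.0
```

A real device would make this decision many times per second, per frequency band, and blend gains smoothly rather than switching between fixed steps.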
Automatic Speech Recognition Technology
One of the most promising applications of AI in hearing healthcare is automatic speech recognition (ASR) technology. ASR uses machine learning algorithms to transcribe speech in real-time, allowing individuals with hearing loss to read live captions of conversations or presentations.
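The captioning side of an ASR pipeline is easy to picture: as words arrive from the recognizer, the display flushes them to the screen in readable chunks. The sketch below shows only that last step, with a hypothetical word stream standing in for a real recognizer's output and an arbitrary 32-character caption width.

```python
def live_captions(words, max_chars=32):
    """Group an incoming word stream into caption lines.

    Emits a finished line whenever adding the next word would exceed
    the caption width, mimicking how live-caption UIs flush text to
    the screen in readable chunks.
    """
    line = ""
    for word in words:
        candidate = (line + " " + word).strip()
        if len(candidate) > max_chars and line:
            yield line
            line = word
        else:
            line = candidate
    if line:
        yield line

# Hypothetical transcript standing in for real-time recognizer output.
transcript = ("welcome everyone to the world hearing day "
              "briefing on access to hearing care").split()
for caption in live_captions(transcript):
    print(caption)
```

Production captioning is harder than this: recognizers revise earlier words as more audio arrives, so real systems must also update lines already on screen.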
Perhaps the most helpful use of automatic speech recognition technology I have encountered is the InnoCaption app, which offers real-time transcription of phone calls at no cost. Unfortunately, the service is restricted to the United States because it is funded and administered by the US Federal Communications Commission.
ASR also generates audio and video transcripts in apps like YouTube, TikTok, and Instagram, making it easier for deaf and hard-of-hearing individuals to access educational material and entertainment media.
ASR technology is not yet perfect, but advances in AI are quickly improving the accuracy and reliability of these systems.
Sign Language to Text
For many deaf and hard-of-hearing individuals, sign language is their primary mode of communication. However, not all individuals who are deaf or hard of hearing are fluent in sign language, and not all hearing individuals can interpret sign language. Sign language is not universal: there are more than 300 distinct sign languages in use worldwide.
There have been attempts to bridge this gap using machine learning algorithms to provide real-time captions and audio of sign language. The algorithms analyse a sign language video to provide text and speech output.
Closer home, the Sign-IO gloves created by Kenyan engineer Roy Allela translate Kenyan Sign Language hand movements into speech. Though he invented the gloves so his family could communicate consistently with his deaf niece, they hold enormous potential to benefit millions of deaf people worldwide who have non-signing family members.
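To make the glove idea concrete, here is a toy sketch of the core matching step: compare a set of finger-flex readings against stored templates and pick the closest letter. Everything here is hypothetical, the sensor values, the three-letter vocabulary, and the nearest-neighbor matching; it is not Allela's actual design, which handles far richer movement data.

```python
# Hypothetical flex-sensor readings, one value per finger
# (0 = straight, 1 = fully bent), for a few fingerspelled letters.
SIGN_TEMPLATES = {
    "A": (0.1, 0.9, 0.9, 0.9, 0.9),  # thumb out, fingers curled
    "B": (0.8, 0.1, 0.1, 0.1, 0.1),  # thumb tucked, fingers straight
    "Y": (0.1, 0.9, 0.9, 0.9, 0.1),  # thumb and pinky extended
}

def classify_sign(reading):
    """Return the letter whose template is closest to the reading.

    Uses squared Euclidean distance over the five sensor values --
    a minimal stand-in for the trained models a real glove would use.
    """
    def distance(template):
        return sum((a - b) ** 2 for a, b in zip(reading, template))
    return min(SIGN_TEMPLATES, key=lambda letter: distance(SIGN_TEMPLATES[letter]))

noisy_b = (0.75, 0.15, 0.05, 0.12, 0.2)  # imperfect reading of "B"
print(classify_sign(noisy_b))  # → B
```

The hard part in practice is not this lookup but handling motion over time: real signs involve movement, orientation, and transitions between handshapes, which is why machine learning models are used rather than fixed templates.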
AI Can Sign
Much video content is still churned out daily on YouTube and other social media without captions, excluding many viewers who rely on sign language as their primary means of communication.
New AI-assisted communication technology is creating avatars that can sign. The avatars use motion capture technology and machine learning algorithms to translate spoken language into sign language in real time.
The start-up Robotica is working to address the shortage of sign language interpreters. New content appears daily, and one way to keep up is through machine translation. Currently, its avatars sign in British Sign Language (BSL) and are learning American and Italian Sign Languages.
As AI technology continues to evolve, the possibilities for improving hearing healthcare and communication for the deaf and hard-of-hearing community are endless. Advances in natural language processing, computer vision, and machine learning will likely lead to more innovative applications of AI in communication and improve hearing healthcare.
Many advanced AI communication innovations that benefit persons with disabilities remain exclusive to the Global North. The time has come for the Global South, and Kenya in particular, to innovate and narrow the gap for its citizens.