A positive patient experience depends heavily on how well a health care provider can communicate with the patient and their family. Patients who are deaf or hard of hearing (DHH) make up 15% of the population in the United States, yet many who identify as DHH feel they do not receive adequate care from their health care providers because of miscommunication. As many as 32% of DHH patients do not understand medical instructions when the information is delivered in writing or through lip reading. In many situations, DHH patients do not receive suitable accommodations, such as hospital-provided interpreting services, and must instead rely on a family member to interpret and communicate for them. In such cases, detrimental translation errors leading to improper patient care can easily occur.

Instead, a live interpreter or an audio/video American Sign Language (ASL) interpreting service could be used to translate the conversation. While a live interpreter is preferable, there are often not enough interpreters to meet onsite hospital demand. When this happens, an audio/video interpreting service is used, but such a device is not ideal for the hospital setting because of its reliance on an internet connection, its insufficient microphone capabilities, and its poor video quality. Facial expressions are key to effective ASL communication, and degraded video caused by a poor internet connection can lead a DHH patient to misinterpret the connotation of particular words. Audio/video interpreting services can also make DHH individuals uncomfortable, since a stranger gains knowledge of their health information.

A new interpreting device built on artificial intelligence could address these issues. Such a device would emphasize the features most important to a DHH patient's understanding of a conversation: conversation captioning and an avatar with facial expressions. This device, the Smart ASL Interpreter, would feature a screen displaying a human avatar that communicates through ASL and facial expressions. A live transcript of the signed interpretation, called conversation captioning, would appear to the right of the avatar. To detect audio, multiple microphones would be placed on the front of the device, and dictation software such as Dragon NaturallySpeaking would convert the audio into text. A motion-sensing input device positioned in front of the ASL-using patient would translate ASL into text for the health care provider. The software would also turn spoken words captured by the microphones into ASL sentences displayed onscreen by the avatar. The Smart ASL Interpreter would immensely improve the patient experience of those who are DHH.
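
As a rough illustration of the speech-to-caption direction, the sketch below uses the open-source Python speech_recognition package with the offline CMU Sphinx engine as a stand-in for a commercial dictation product such as Dragon NaturallySpeaking; the continuous loop, the caption-display hook, and the microphone setup are assumptions made for the sketch, not details from the concept. An offline engine is chosen here because the concept specifically criticizes interpreting tools that depend on an internet connection.

```python
# Minimal sketch of the captioning loop: listen on the device's
# front-panel microphone, transcribe each utterance offline, and
# hand the text to a caption-panel callback. Requires the
# `speech_recognition` and `pocketsphinx` packages.

import speech_recognition as sr

def caption_loop(display):
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:               # default input device (assumed)
        recognizer.adjust_for_ambient_noise(source)  # calibrate for room noise
        while True:
            # Capture one utterance; cap phrases at ~5 s so captions stay live.
            audio = recognizer.listen(source, phrase_time_limit=5)
            try:
                text = recognizer.recognize_sphinx(audio)  # offline engine
            except sr.UnknownValueError:
                continue                           # skip unintelligible audio
            display(text)                          # hypothetical caption-panel hook

if __name__ == "__main__":
    caption_loop(print)  # stand-in display: print captions to stdout
```

In the actual device, the same transcription stream would feed both the caption panel and the signing avatar.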
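The opposite direction, reading the patient's signing through the motion sensor, is sketched below under heavier assumptions: the concept names no sensor SDK or recognition model, so the frame format, the SignClassifier placeholder, and the 30-frame window are all hypothetical stand-ins for whatever hardware and trained model the device would actually use.

```python
# Hypothetical sketch of the sign-to-text direction. `HandFrame`
# and `SignClassifier` are placeholders: a real system would read
# frames from the motion sensor's SDK and classify them with a
# trained sequence model.

from dataclasses import dataclass
from typing import Iterable, Iterator, List, Tuple

@dataclass
class HandFrame:
    # One sensor frame: 3-D coordinates for each tracked hand landmark.
    landmarks: List[Tuple[float, float, float]]

class SignClassifier:
    # Placeholder for a trained model mapping landmark windows to English text.
    def predict(self, frames: List[HandFrame]) -> str:
        return "[unrecognized sign]"  # dummy output; a real model goes here

def sign_to_text(frames: Iterable[HandFrame],
                 classifier: SignClassifier,
                 window: int = 30) -> Iterator[str]:
    # Buffer ~1 s of frames (assuming 30 fps), classify the window,
    # and emit the resulting text for the provider's display.
    buffer: List[HandFrame] = []
    for frame in frames:
        buffer.append(frame)
        if len(buffer) == window:
            yield classifier.predict(buffer)
            buffer.clear()
```

A production version would also need to segment continuous signing rather than using fixed windows, and to render facial expressions alongside the hand data, both open problems the concept's AI component would have to address.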

 

ROBOTIC TECHNOLOGY & VIRTUAL ASSISTANTS

Author: Katarina Falero

Status: Project Concept