When the use of Electronic Health Records (EHRs) became inevitable in 2009, after former President Obama signed the enabling legislation into law, some called it a scandal: the companies supplying the systems were granted a lucrative return, while medical institutions and care providers, the people most affected by the change, were never formally consulted.

Time and again, doctors voice their discomfort with complying with EHRs, citing overwhelming administrative workload, diminished patient care, and increasing documentation errors as some of their major concerns.

Interestingly, a 1971 study had already found doctors missing up to 35% of the information required on a paper chart, so introducing a non-human, non-conscious assistant into the healthcare system is hardly unheard of.

Reasons behind EHRs’ unstructured text and other seeming failures

Perhaps it’s a lack of understanding of what EHRs can bring to the present medical sector that blurs the line. Naturally, clinicians express themselves in words, but EHRs need coded information. Doctors often find themselves with limited coding choices, which restricts the amount of information they can freely express in words.

This has kept much of the EHR in unstructured text form and directly challenged the coding process, as coders are required to deduce a range of possible diagnoses, symptom descriptions, abbreviations, and uncertainties.
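To make the coder's problem concrete, here is a minimal sketch of matching free-text phrases (including abbreviations a physician might type) against candidate diagnosis codes. The lexicon and codes are illustrative assumptions, not a real coding vocabulary; actual coders work from far larger standards such as ICD-10.

```python
import re

# Hypothetical mini-lexicon mapping free-text phrases and common
# abbreviations to illustrative diagnosis codes (ICD-10-like, for
# demonstration only).
LEXICON = {
    "myocardial infarction": "I21",
    "mi": "I21",
    "type 2 diabetes": "E11",
    "t2dm": "E11",
    "hypertension": "I10",
    "htn": "I10",
}

def suggest_codes(note: str) -> set:
    """Return candidate codes for phrases found in a free-text note."""
    text = note.lower()
    codes = set()
    for phrase, code in LEXICON.items():
        # Word-boundary match so "mi" does not fire inside "family".
        if re.search(r"\b" + re.escape(phrase) + r"\b", text):
            codes.add(code)
    return codes

print(suggest_codes("Pt with HTN and T2DM, r/o MI"))
```

Even this toy version shows the difficulty: "r/o MI" ("rule out myocardial infarction") is an uncertainty, not a confirmed diagnosis, yet naive phrase matching codes it anyway.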

Besides, there is no standardized EHR across the US, so it’s still common for primary care physicians to share patients’ health data with specialists, and for specialists to send test results back to the primary care physicians for documentation. When privacy rules kick in, patients have to take up the responsibility of ensuring the information is communicated, or fall back on the old-school route of fax or even mail.

Presently, it’s still considered premature to implement a universal EHR, given its cost and the security concerns raised by the threat of data breaches.

Rich case-detection data derived from EHRs

Standard Natural Language Processing (NLP) tools are used to analyze EHRs, but clinical language contains many unique terms that create confusion. Furthermore, most medical reports are out of public reach, so most NLP tools are trained on generic texts such as newspapers or academic journals rather than actual clinical writing.

Nevertheless, a recent study showed that text from EHRs does contain valuable information for detecting a wide range of conditions, such as infectious and noncommunicable diseases and acute events, though it requires employing a variety of algorithms, from MedLEE and HITEx to cTAKES. Analyzing physicians’ free text in EHRs via NLP is also believed to generate a fourfold increase in coder capacity and over $650,000 in incremental revenue.
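One reason dedicated clinical NLP systems are needed is negation: "no evidence of pneumonia" mentions pneumonia without asserting it. The sketch below is a heavily simplified, NegEx-style negation check under assumed cue words and window size; real systems such as MedLEE, HITEx, and cTAKES use far richer grammars and medical vocabularies.

```python
import re

# Assumed negation cues for this sketch; NegEx-style systems use much
# longer curated lists.
NEGATION_CUES = ["no evidence of", "denies", "no", "without", "ruled out"]

def detect_condition(note: str, condition: str) -> bool:
    """True if the condition is mentioned and not negated.

    Checks a fixed 40-character window before each mention for a
    negation cue, a crude stand-in for NegEx's scoped cue matching.
    """
    text = note.lower()
    for m in re.finditer(re.escape(condition.lower()), text):
        window = text[max(0, m.start() - 40):m.start()]
        if any(re.search(r"\b" + re.escape(cue) + r"\b", window)
               for cue in NEGATION_CUES):
            continue  # this mention is negated; keep looking
        return True   # found an affirmed mention
    return False

print(detect_condition("Patient presents with pneumonia.", "pneumonia"))          # True
print(detect_condition("No evidence of pneumonia on chest X-ray.", "pneumonia"))  # False
```

A plain keyword search would flag both notes as pneumonia cases; the negation window is what separates case detection from mere string matching.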

It would be myopic to eliminate EHRs; the better path is to accept clinicians’ preference for narrative notes and find better ways to analyze and extract meaning from their text.

*

Author Bio

Hazel Tang is a science writer with a data background and an interest in current affairs, culture, and the arts; a non-med from an (almost) all-med family. Follow on Twitter.