Neural machine translation systems (NMTS) learn translation rules from text by analyzing huge sets of documents. Translation speed is improving and error rates are falling. However, shortcomings remain: these systems are computationally expensive, both in training and in translation inference, and they have difficulty with rare words, which are frequent in electronic and paper medical/health records.

Google Translate uses the Google NMTS, which was introduced in November 2016. It now uses an algorithm based entirely on deep learning, with a neural network architecture built on the seq2seq model. Google Translate features “zero-shot translation”: the algorithm uses a single system instead of a separate one for every language pair. The Google NMTS also learns by analyzing existing translations; as it does so, it tweaks the connections between artificial neurons in ways that improve its performance.
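To make the zero-shot mechanism concrete, here is a minimal sketch in Python of the published GNMT multilingual trick: the target language is selected by prepending an artificial token to the source sentence, so one model serves every language pair (the translation step itself is omitted; the token format is illustrative).

    # Zero-shot translation sketch: one multilingual model, with the target
    # language chosen by an artificial token prepended to the source text.
    def prepare_input(source_text: str, target_lang: str) -> str:
        """Prepend the artificial target-language token, e.g. '<2es>'."""
        return f"<2{target_lang}> {source_text}"

    print(prepare_input("Patient denies chest pain.", "es"))
    # -> '<2es> Patient denies chest pain.'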

My proposal to improve the accuracy and quality of NMTS for EHR/EMR is as follows:

Just as the Google NMTS uses “word segments” (wordpieces), I propose to break each word into “medical word segments”, marked with a “medical word token”.
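A minimal sketch of such a segmenter, assuming a hypothetical hand-picked medical segment vocabulary and greedy longest-match splitting (a real system would learn the vocabulary from medical corpora, e.g. with a wordpiece/BPE procedure):

    # Toy "medical word segment" vocabulary; illustrative only.
    MEDICAL_SEGMENTS = {"cardio", "myo", "electro", "gram", "graphy", "pathy", "itis"}

    def segment_medical_word(word: str) -> list[str]:
        """Greedily split a word into the longest known medical segments."""
        pieces, i = [], 0
        while i < len(word):
            for j in range(len(word), i, -1):      # try longest match first
                if word[i:j].lower() in MEDICAL_SEGMENTS:
                    pieces.append(word[i:j])
                    i = j
                    break
            else:                                   # no segment matched here
                pieces.append(word[i])              # fall back to a character
                i += 1
        return pieces

    print(segment_medical_word("electrocardiogram"))
    # -> ['electro', 'cardio', 'gram']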

Analyze huge sets of medical/health records and documents in different languages, including as many echo reports, radiology reports, and pathology reports as possible. Learn from and compare the documents/records before and after translation. When translating a morphologically rich language, fold unseen morphological forms back into the original dataset (data corpus). For low-resource language pairs such as Chinese-Vietnamese, word-level algorithms often generate low-quality translations because the data are sparse. I propose to integrate linguistic-relationship features into the word alignment model, so that the quality of Chinese-Vietnamese word alignment, and of the resulting translation, could be improved; a sketch of this idea follows.
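A minimal sketch of the alignment idea, assuming toy lexical probabilities and part-of-speech tags as the “linguistic relationship” feature (a real model would estimate these from a Chinese-Vietnamese parallel corpus, e.g. with an IBM Model 1 style EM procedure):

    def align_score(lex_prob: float, src_pos: str, tgt_pos: str,
                    pos_bonus: float = 0.2) -> float:
        """Combine a lexical translation probability with a POS-agreement bonus."""
        return lex_prob + (pos_bonus if src_pos == tgt_pos else 0.0)

    # For one source word, pick the target word with the best combined score.
    candidates = [("NOUN", 0.30, "bệnh"), ("VERB", 0.35, "đau")]  # (POS, p, word)
    src_pos = "NOUN"
    best = max(candidates, key=lambda c: align_score(c[1], src_pos, c[0]))
    print(best[2])  # POS agreement tips the choice toward the noun candidate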


As for inference computation, compare normal-precision against reduced-precision (quantized) arithmetic during inference, as sketched below. Also pay attention to the balance between the flexibility of character-delimited models and the efficiency of word-delimited models.
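A minimal sketch of the precision comparison, simulating int8-quantized inference for a single matrix multiply with NumPy and measuring the drift from the float32 reference (sizes and values are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 4)).astype(np.float32)   # toy weight matrix
    x = rng.standard_normal(4).astype(np.float32)        # toy activations

    def quantize(a: np.ndarray) -> tuple[np.ndarray, float]:
        """Map floats to int8 with a per-tensor scale."""
        scale = np.abs(a).max() / 127.0
        return np.round(a / scale).astype(np.int8), scale

    Wq, w_scale = quantize(W)
    xq, x_scale = quantize(x)

    full = W @ x                                          # float32 reference
    quant = (Wq.astype(np.int32) @ xq.astype(np.int32)) * (w_scale * x_scale)
    print(np.abs(full - quant).max())                     # small quantization error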

To build a quality estimation method, I would build (1) a predictor model trained on parallel corpora and (2) an estimator model trained on quality estimation data. A quality estimation feature vector generated by the word prediction model would then be fed into the quality estimation model.

Benchmark of translation: the WMT ’15 or ’16 benchmark, using the BLEU (bilingual evaluation understudy) metric to assess the quality of translated words and phrases (a scoring sketch follows). Validation: human medical experts perform side-by-side manual evaluation.
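A minimal sketch of the BLEU scoring step, assuming the sacrebleu package is installed; the sentences are toy examples, not real EHR text:

    import sacrebleu

    hypotheses = ["the patient denies chest pain"]            # system output
    references = [["the patient denies any chest pain"]]      # one reference set

    bleu = sacrebleu.corpus_bleu(hypotheses, references)
    print(f"BLEU = {bleu.score:.1f}")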

Future application: NMTS built into transcription services, or into EHR/EMR systems for instant translation, so that reports from different data sources and EHR/EMR systems become readily available as accurate, high-quality translations.


DECISION SUPPORT & HOSPITAL MONITORING

Author: C. Charles Lin

Coauthor(s): None

Status: Project Concept