A KNOWLEDGE REPRESENTATION LANGUAGE FOR PERVASIVE, DATA-DRIVEN DECISION SUPPORT ON WEARABLE HEALTHCARE SYSTEMS.
DIGITAL MEDICINE & WEARABLE TECHNOLOGY
Author: Nick Fung
Coauthor(s): Dr. Valerie M. Jones, Prof. Hermie J. Hermens.
Status: Work In Progress
Funding Acknowledgment: The MobiGuide project (http://www.mobiguide-project.eu/) has received funding from the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement no. 287811.
Pressures from an ageing population and the increasing prevalence of chronic diseases are driving a shift from hospital-centric to patient-centric healthcare services, with increased reliance on patient self-management. This trend is in turn enabled by advances in wearable technologies, especially new sensing modalities and application-specific algorithms for interpreting the data. As part of the European project MobiGuide, which aimed to provide pervasive and evidence-based guidance to patients by means of Body Area Networks (BANs) amongst other technologies, we supplement existing wearable technology research by developing a knowledge representation language that underpins decision support on the patient’s BAN. The language is designed to facilitate reasoning with clinical knowledge so as to provide pervasive, mobile and evidence-based decision support to the patient.
Our knowledge representation language is founded on previous work in which we modelled the disease management process as a data flow network of parallel processes. Unlike languages that capture the control flow of disease management and thereby assume a centralised software architecture, our language is designed for peer-to-peer architectures and hence places an emphasis on parallel reasoning. We argue that this approach is more appropriate for pervasive mobile decision support because wearable technologies are inherently distributed and data-driven.
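To make the data flow idea concrete, the following Python sketch shows two processes running in parallel and reacting to data arriving on their input queues, with no central controller. This is an illustrative analogy only, not the MobiGuide implementation; the stage functions, scaling factor and threshold are invented for the example.

```python
import queue
import threading

def run_stage(process, inbox, outbox):
    """Run one process in the data flow network: react to each input item,
    emit the result downstream, and stop on a None sentinel."""
    while True:
        item = inbox.get()
        if item is None:          # sentinel: shut the stage down
            outbox.put(None)
            break
        outbox.put(process(item))

raw, filtered, abstractions = queue.Queue(), queue.Queue(), queue.Queue()

# Monitoring stage: scale raw sensor counts to a physical value (illustrative).
t1 = threading.Thread(target=run_stage, args=(lambda x: x * 0.1, raw, filtered))
# Analysis stage: abstract the value into a clinical label (illustrative).
t2 = threading.Thread(target=run_stage,
                      args=(lambda x: "high" if x > 0.5 else "normal",
                            filtered, abstractions))
t1.start(); t2.start()

for sample in [3, 7, None]:       # two samples, then the shutdown sentinel
    raw.put(sample)
t1.join(); t2.join()

results = []
while not abstractions.empty():
    item = abstractions.get()
    if item is not None:          # drop the propagated sentinel
        results.append(item)
print(results)  # ['normal', 'high']
```

Each stage is self-contained and driven purely by arriving data, which is the property that makes this style of reasoning a natural fit for distributed BAN nodes.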
We are currently finalising specification of the detailed syntax and semantics of the language. At the higher level, to reflect our conceptual model for disease management, the language contains constructs for specifying four types of processes:
1) Monitoring processes, which are primarily digital filters for processing sensor data.
2) Analysis processes, which generate clinically relevant abstractions from the processed data.
3) Decision processes, which comprise conditionals to determine the appropriate course of action.
4) Effectuation processes, which execute the decided plans.
Despite their differences in functionality, these processes share a common data-driven and parallel nature and are therefore constructed from a common set of elements:
1) An identifier capturing the purpose of the process.
2) The expected inputs to and outputs from the process.
3) The schedule of the process.
4) The manner in which the input data should be processed.
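Since the detailed syntax and semantics are still being finalised, the following Python sketch is purely illustrative of how these four common elements might be grouped in a process specification. The field names, the example process and its units are assumptions made for the sake of the example.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ProcessSpec:
    """Hypothetical grouping of the four common elements of a process."""
    identifier: str                        # 1) purpose of the process
    inputs: List[str]                      # 2) expected input streams
    outputs: List[str]                     # 2) expected output streams
    schedule: str                          # 3) when the process runs
    apply: Callable[[List[float]], float]  # 4) how input data are processed

# Example: a Monitoring process that smooths raw glucose readings with a
# moving average (the stream names and units are illustrative only).
glucose_filter = ProcessSpec(
    identifier="smooth-glucose",
    inputs=["raw_glucose_mmol_per_l"],
    outputs=["smoothed_glucose_mmol_per_l"],
    schedule="on-new-sample",
    apply=lambda window: sum(window) / len(window),
)

print(glucose_filter.apply([5.0, 6.0, 7.0]))  # 6.0
```

Because all four process types share this shape, only the `apply` element and the declared streams need to change between, say, a Monitoring filter and a Decision conditional.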
Furthermore, our language offers a clear separation between the clinical concerns of the system (e.g. which conditions require oversight) and its technical concerns (e.g. how such oversight will be provided). For example, wearable sensors frequently incorporate their own data processing algorithms, so processes in our language may be labelled “proxy” to indicate that their results will be imported from elsewhere. In addition, each process specification may be augmented to output messages to the user, such as a notification of non-compliance with a prescribed treatment regime.
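The proxy labelling and user-message augmentation described above might be encoded as in the sketch below. This is again hypothetical: the record fields, the QRS-detection example and the reminder text are all invented for illustration.

```python
def make_process(identifier, proxy=False, user_message=None):
    """Build a minimal process record; field names are illustrative only."""
    return {
        "identifier": identifier,
        "proxy": proxy,                # True: results imported from elsewhere
        "user_message": user_message,  # optional notification to the user
    }

# A detection step whose results come from an on-sensor algorithm, so the
# process is only a placeholder (a "proxy") for externally computed data:
qrs = make_process("detect-qrs", proxy=True)

# An effectuation step augmented with a user message on non-compliance:
remind = make_process(
    "check-compliance",
    user_message="Reminder: please take your prescribed measurement.",
)

print(qrs["proxy"], remind["user_message"] is not None)  # True True
```

Keeping `proxy` and `user_message` as optional annotations rather than separate process types is one way the clinical specification could stay independent of how, and by which device, each result is actually produced.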
Conclusions and Future Work:
Our conceptual model has been validated against a clinical guideline for gestational diabetes mellitus, and our language will be validated against the same guideline; we also plan to validate the language against an atrial fibrillation guideline. Our experience to date indicates that our knowledge representation language is appropriate for supporting the mobile knowledge-based reasoning required by wearable healthcare systems. Given its formal and model-based nature, we intend that our language, when complete, will also enable tool support for knowledge acquisition, formalisation and verification.