

Workshop

Machine Learning for Clinical Data Analysis, Healthcare and Genomics

Gunnar Rätsch · Madalina Fiterau · Julia Vogt

Level 5; room 511 f

Fri 12 Dec, 5:30 a.m. PST

Abstract:

Advances in medical information technology have resulted in enormous warehouses of data that are at once overwhelming and sparse. A single patient visit may result in tens to thousands of measurements and structured records, including clinical factors, diagnostic imaging, lab tests, and genomic and proteomic tests. Hospitals may see thousands of patients each year, yet each patient may have relatively few visits to any particular medical provider. The resulting data are a heterogeneous amalgam of patient demographics, vital signs, diagnoses, records of treatment and medication, and annotations made by nurses or doctors, each with its own idiosyncrasies.
The objective of this workshop is to discuss how advanced machine learning techniques can derive clinical and scientific impact from these messy, incomplete, and partial data. We will bring together machine learning researchers and experts in medical informatics who are involved in the development of algorithms or intelligent systems designed to improve the quality of healthcare. Relevant areas include health monitoring systems, clinical data labeling and clustering, clinical outcome prediction, efficient and scalable processing of medical records, feature selection or dimensionality reduction in clinical data, tools for personalized medicine, time-series analysis with medical applications, and clinical genomics.



Detailed Description:

An important issue in clinical applications is the peculiarity of the available data – an amalgam of patient demographics, collected vital signs, diagnoses, records of administered treatment and medication and, potentially, annotations made by nurses or doctors. Vital signs are typically available as moving averages over varying time horizons [1], and occasionally in their original form (sampled at high frequency). The extensive data collection usually results in an overall abundance of data, which might lead to the falsely optimistic conclusion that its sheer magnitude will make training of any system trivial. The insidious issue with clinical data, which not even the best-assembled repositories [2] manage to overcome, is its lack of continuity and consistency. The data come from a vast number of patients, each with very specific clinical conditions. Data on individual patients, however, may be quite sparse and/or incomplete, and often contain significant gaps due to circumstance or equipment malfunction. Not only are the samples limited for a given patient, but the health status of a single person can vary due to differences in external factors such as medication. These circumstances make short work of the typical assumptions of learning techniques: i.i.d. samples drawn from a single distribution and satisfying some tidy noise condition are virtually impossible to encounter in longitudinal physiologic data, on which medical diagnoses and decisions are based. To further complicate matters, records can be missing or outright incorrect, adding to the inevitable noise in vital sign readings, diagnoses, and treatment records. Moreover, a patient may be assigned several diagnoses out of a list of thousands of ICD-9 codes. All things considered, the so-called ‘big data’ present in clinical applications is surprisingly sparse once the entire feature space is taken into account.
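To make this sparsity concrete, the toy Python sketch below aligns a handful of irregularly sampled vital-sign readings onto a regular hourly grid while keeping an explicit observation mask. The timestamps, values, and carry-forward imputation are all illustrative assumptions, not a recommended pipeline.

```python
import numpy as np
import pandas as pd

# Hypothetical, irregularly sampled vital-sign readings for one patient;
# timestamps and values are invented for illustration.
readings = pd.DataFrame(
    {"heart_rate": [82.0, np.nan, 95.0, 101.0],
     "resp_rate": [18.0, 20.0, np.nan, 27.0]},
    index=pd.to_datetime(["2014-12-12 05:30", "2014-12-12 05:47",
                          "2014-12-12 07:02", "2014-12-12 10:15"]),
)

# Align onto a regular hourly grid; most cells end up empty, which is
# exactly the sparsity pattern described above.
hourly = readings.resample("1h").mean()

# Keep an explicit observation mask rather than silently imputing, so a
# downstream model can distinguish "normal" from "not measured".
observed = hourly.notna().astype(int)
filled = hourly.ffill()  # naive carry-forward: a modeling choice, not truth

print(pd.concat({"value": filled, "observed": observed}, axis=1))
```

Even in this tiny example, most grid cells are unobserved; a learner given only the imputed values, without the mask, cannot tell a stable patient from an unmonitored one.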

Despite the existence of algorithms that address some of these problems, a number of important research topics still remain open, including but not limited to:
(i) What individual-level predictions can be made from such partial, incomplete data?
(ii) How can partial, incomplete time series from multiple patients be combined to build population- and sub-population-level understanding of treatment and disease? What are the best ways to stratify or cluster the data - using patient demographics, diagnoses and/or treatment - to ensure a plausible trade-off between model specialization and sample sufficiency (a clustering sketch follows this list)? What is the best way to deal with outliers, and how can incorrect data be detected?
(iii) How can machine learning methods circumvent some of the inherent problems of large-scale clinical data? Can machine learning techniques and clinical tools (e.g. clinical review, expert ontologies, inter-institutional data) be used to adapt to the sparsity and biases in the data?
(iv) How can these data be used to assess standards of care and investigate the efficacy of various treatment programs? Generally, how can these data be used to help us better understand many of the complex causal relationships in medicine?
(v) Training classification models requires accurate labeling, which in turn requires considerable effort on the part of human experts – can we reduce the amount of labeling needed through active learning (an active-learning sketch follows this list)? Can we use as-yet-unlabeled data, combining semi-supervised approaches with active learning, to obtain higher accuracy?
(vi) What are robust ways of modeling cross-signal correlations? How can we incorporate diagnoses and sparse, high-dimensional treatment information into clinical models? Can we characterize the effect of treatment on vital signs?
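To illustrate the stratification trade-off raised in question (ii), here is a minimal sketch that clusters synthetic patient features for several choices of k and reports the size of the smallest stratum. The features, their distributions, and the use of k-means are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical patient-level features: age, a vital-sign summary score,
# and a count of ICD-9 diagnosis groups; all values are synthetic.
patients = rng.normal(loc=[65.0, 80.0, 3.0], scale=[15.0, 12.0, 2.0],
                      size=(500, 3))

# Standardize so features on different scales contribute comparably.
X = StandardScaler().fit_transform(patients)

# More clusters mean more specialized models but fewer samples per
# stratum -- the trade-off named in question (ii).
for k in (2, 5, 10):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    sizes = np.bincount(labels)
    print(f"k={k:2d}: smallest stratum holds {sizes.min()} patients")
```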
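For question (v), a pool-based uncertainty-sampling loop can be sketched in a few lines. The synthetic data, the logistic-regression learner, and the simulated labeling oracle below are all illustrative assumptions; in practice the label for each queried patient would come from a clinical expert.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical pools: a small expert-labeled set and a large unlabeled
# pool of patient feature vectors (all values synthetic).
X_lab = rng.normal(size=(20, 5))
y_lab = (X_lab[:, 0] + 0.5 * rng.normal(size=20) > 0).astype(int)
X_pool = rng.normal(size=(1000, 5))

for _ in range(10):  # ten simulated labeling rounds
    model = LogisticRegression().fit(X_lab, y_lab)

    # Uncertainty sampling: query the pool example whose predicted
    # probability is closest to 0.5, i.e. where the model is least sure.
    proba = model.predict_proba(X_pool)[:, 1]
    query = int(np.argmin(np.abs(proba - 0.5)))

    # A clinician would supply this label; here a noisy rule stands in.
    y_new = int(X_pool[query, 0] > 0)

    X_lab = np.vstack([X_lab, X_pool[query]])
    y_lab = np.append(y_lab, y_new)
    X_pool = np.delete(X_pool, query, axis=0)

print(f"labeled-set size after querying: {len(y_lab)}")
```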

As just one example application where such research questions would be highly relevant, consider a vital sign monitoring system. Monitoring patient status is a crucial aspect of health care, with the task of anticipating and preventing critical episodes traditionally entrusted to the nursing staff. However, there is increasing interest in and demand for automated tools that detect any abrupt worsening of health status in critically ill patients [3,4]. Most of the initial efforts focused on processing a single signal, a notable example being the detection of arrhythmias from electrocardiograms. However, it became increasingly clear that considering correlations across signals and deriving features over varying time windows holds great promise for the prognosis of adverse events [5,6]. Additionally, with the emergence of personalized care [7] and wearable technology for health monitoring [8], there is an increasing need for real-time online processing of vital signs and for adaptive models suited to the ever-changing parameters of these applications. Given the heterogeneous data available, how can we develop flexible models that gradually adapt to the characteristics of a patient as more data are obtained? Can this update be performed efficiently?
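As a toy illustration of the adaptive, per-patient models asked about above, the following sketch maintains an exponentially weighted estimate of a single patient's baseline and raises an alarm when a reading deviates strongly from it. The smoothing rate, alarm threshold, and simulated heart-rate stream are assumptions chosen for illustration.

```python
import math

class AdaptiveVitalMonitor:
    """Toy per-patient monitor: flags readings that deviate strongly from
    an exponentially weighted estimate of the patient's own baseline.
    The smoothing rate and alarm threshold are illustrative assumptions."""

    def __init__(self, alpha=0.05, threshold=3.0):
        self.alpha = alpha          # how fast the baseline adapts
        self.threshold = threshold  # alarm when |z-score| exceeds this
        self.mean = None
        self.var = 1.0

    def update(self, x):
        if self.mean is None:       # first reading seeds the baseline
            self.mean = x
            return False
        z = (x - self.mean) / math.sqrt(self.var)
        # Update the baseline *after* scoring, so the model gradually
        # adapts to the patient as more data become available.
        delta = x - self.mean
        self.mean += self.alpha * delta
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta ** 2)
        return abs(z) > self.threshold

monitor = AdaptiveVitalMonitor()
for hr in [80, 82, 79, 81, 78, 130, 83]:  # simulated heart-rate stream
    if monitor.update(hr):
        print(f"alarm: heart rate {hr} far from this patient's baseline")
```

Because each update costs constant time and memory, this style of model answers the efficiency question in the affirmative for simple baselines; the open research problem is doing the same for richer, multi-signal models.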

The workshop will approach the identified challenges from two perspectives. On one hand, healthcare experts will present their requirements and describe the main issues in processing medical data. On the other hand, machine learning researchers will present algorithms and tools they have developed for clinical applications and explain their relevance. Most importantly, the discussions are meant to establish the limitations of current approaches and the feasibility of extending them to deal with the aforementioned data issues, and to identify promising ML techniques that have so far been insufficiently exploited for these tasks.

References:
[1] Lawhern V., Hairston W.D., Robbins K., "Optimal Feature Selection for Artifact Classification in EEG Time Series", Foundations of Augmented Cognition, Lecture Notes in Computer Science, vol. 8027, pp. 326-334, 2013.
[2] MIMIC and other clinical data repositories.
[3] Gopalakrishnan V., Lustgarten J., Visweswaran S., and Cooper G., "Bayesian rule learning for biomedical data mining", Bioinformatics, 26, 2010.
[4] Moorman J.R., Delos J.B., Flower A., Cao H., Kovatchev B.P., Richman J.S., and Lake D.E., "Cardiovascular oscillations at the bedside: early diagnosis of neonatal sepsis using heart rate characteristics monitoring", Physiol Meas, 32(11):1821-32, Nov 2011.
[5] Fiterau M., Dubrawski A., and Ye C., “Real-time adaptive monitoring of vital signs for clinical alarm preemption”, In Proceedings of the 2010 International Society for Disease Surveillance Annual Conference, 2011.
[6] Seely A.J.E., "Complexity at the bedside", Journal of Critical Care, Jun 2011.
[7] Narimatsu H., Kitanaka C., Kubota I., Sato S., Ueno Y., Kato T., Fukao A., Yamashita H., Kayama T., "New developments in medical education for the realization of next-generation personalized medicine: concept and design of a medical education and training program through the genomic cohort study", Journal of Human Genetics, June 2013.
[8] Pantelopoulos A., Bourbakis N.G., "A Survey on Wearable Sensor-Based Systems for Health Monitoring and Prognosis", IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 40, no. 1, pp. 1-12, Jan. 2010.
