Explaining the output of ensembles in medical decision support on a case by case basis

Robert Wall, Pádraig Cunningham, Paul Walsh, Stephen Byrne

Research output: Contribution to journal › Article › peer-review



The use of ensembles in machine learning (ML) has had a considerable impact in increasing the accuracy and stability of predictors. This increase in accuracy has come at the cost of comprehensibility: by definition, an ensemble is considerably more complex than any of its component models. This matters for decision support systems in medicine because of the reluctance to use models that are essentially black boxes. Work on making ensembles comprehensible has so far focused on global models that mirror the behaviour of the ensemble as closely as possible, and with such global models there is a clear trade-off between comprehensibility and fidelity. In this paper we pursue another tack, looking at local comprehensibility, where the output of the ensemble is explained on a case-by-case basis. We argue that this meets the requirements of medical decision support systems. The approach presented here identifies the ensemble members that best fit the case in question and presents the behaviour of those members as the explanation.
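The core idea in the abstract — explain one prediction by finding the ensemble members that best fit the case at hand — can be sketched as follows. This is an illustrative reconstruction, not the paper's actual algorithm; the `ThresholdMember` class, the confidence measure, and all names here are assumptions standing in for the paper's trained models.

```python
# Hedged sketch of case-by-case (local) ensemble explanation:
# predict by majority vote, then rank members by how confidently
# they support the ensemble's output on this one case, and return
# the top k as local explainers. All names are illustrative.
from collections import Counter
from dataclasses import dataclass

@dataclass
class ThresholdMember:
    """Toy stand-in for a trained model: predicts class 1 when the
    feature exceeds its threshold; confidence grows with the margin."""
    threshold: float

    def predict(self, x):
        return 1 if x > self.threshold else 0

    def confidence(self, x, label):
        # Positive if this member agrees with `label`, negative otherwise.
        margin = abs(x - self.threshold)
        return margin if self.predict(x) == label else -margin

def ensemble_predict(members, x):
    """Majority vote over member predictions for a single case."""
    votes = Counter(m.predict(x) for m in members)
    return votes.most_common(1)[0][0]

def best_fitting_members(members, x, k=2):
    """Return the ensemble's output for this case together with the
    k members that support it most confidently (the local explainers)."""
    label = ensemble_predict(members, x)
    ranked = sorted(members, key=lambda m: m.confidence(x, label),
                    reverse=True)
    return label, ranked[:k]
```

In use, the behaviour of the returned members (e.g. rules extracted from them, as in the paper's neural-network setting) would be shown to the clinician in place of the full ensemble, trading global fidelity for a locally faithful explanation.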

Original language: English (US)
Pages (from-to): 191-206
Number of pages: 16
Journal: Artificial Intelligence in Medicine
Issue number: 2
State: Published - Jun 2003
Externally published: Yes


Keywords

  • Anticoagulant drug therapy
  • Bronchiolitis
  • Medical decision support
  • Neural networks
  • Rules

ASJC Scopus subject areas

  • Artificial Intelligence
  • Medicine(all)

