Bayesian modeling and inference for diagnostic accuracy and probability of disease based on multiple diagnostic biomarkers with and without a perfect reference standard

S. Reza Jafarzadeh, Wesley O. Johnson, Ian Gardner

Research output: Contribution to journal › Article

7 Citations (Scopus)

Abstract

The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, their diagnostic accuracy is often assessed individually rather than in combination. In this paper, we consider the problem of combining multiple biomarkers into a single diagnostic criterion with the goal of improving diagnostic accuracy beyond that of any individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on the observed biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is classified as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely the combined ROC (cROC). The AUC metric for the cROC, namely the combined AUC (cAUC), is used to compare the predictive criterion based on multiple biomarkers with one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and the corresponding (marginal) AUCs are developed for the case where a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data on Johne's disease (paratuberculosis) in cattle.
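
The combination idea in the abstract can be illustrated in a few lines of code. What follows is a minimal sketch, not the paper's method: it scores each subject by the predictive probability of disease given two correlated, normally distributed biomarkers (Bayes' theorem with plug-in parameters) and compares the empirical AUC of that combined score (the cAUC) with the marginal AUCs of the individual biomarkers. It assumes known parameters and a perfect reference standard, whereas the paper fits a multivariate random-effects model with Bayesian methods and also handles the case without a perfect reference standard. The prevalence, means, covariance, and the helper empirical_auc below are all invented for illustration.

import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

# Simulate a latent disease status and two correlated, normally distributed
# biomarker scores per subject (all parameter values are invented).
n, prevalence = 2000, 0.3
d = rng.binomial(1, prevalence, size=n)
mu0, mu1 = np.array([0.0, 0.0]), np.array([1.0, 1.0])
cov = np.array([[1.0, 0.3],
                [0.3, 1.0]])
y = np.where(d[:, None] == 1,
             rng.multivariate_normal(mu1, cov, size=n),
             rng.multivariate_normal(mu0, cov, size=n))

# Predictive probability of disease given both biomarkers, via Bayes' theorem
# with plug-in parameters; thresholding this probability at varying cutoffs
# traces out the combined ROC curve (cROC).
f1 = multivariate_normal(mu1, cov).pdf(y)
f0 = multivariate_normal(mu0, cov).pdf(y)
p_disease = prevalence * f1 / (prevalence * f1 + (1 - prevalence) * f0)

def empirical_auc(score, status):
    # Empirical AUC: probability that a diseased subject scores higher than
    # a non-diseased one, counting ties as one half.
    s1, s0 = score[status == 1], score[status == 0]
    comp = s1[:, None] - s0[None, :]
    return (comp > 0).mean() + 0.5 * (comp == 0).mean()

print("AUC, biomarker 1 alone:", round(empirical_auc(y[:, 0], d), 3))
print("AUC, biomarker 2 alone:", round(empirical_auc(y[:, 1], d), 3))
print("cAUC, combined score:  ", round(empirical_auc(p_disease, d), 3))

With these invented settings the combined score will typically attain a higher empirical AUC than either biomarker alone, which is exactly the comparison the cAUC is meant to formalize; in the paper the same comparison is carried out with posterior inference rather than plug-in parameter values.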

Original language: English (US)
Pages (from-to): 859-876
Number of pages: 18
Journal: Statistics in Medicine
Volume: 35
Issue number: 6
DOI: 10.1002/sim.6745
State: Published - Mar 15 2016


Keywords

  • AUC
  • Bayes' theorem
  • Biomarker combination
  • Imperfect reference standard
  • Receiver operating characteristic curve

ASJC Scopus subject areas

  • Epidemiology
  • Statistics and Probability

Cite this

Bayesian modeling and inference for diagnostic accuracy and probability of disease based on multiple diagnostic biomarkers with and without a perfect reference standard. / Jafarzadeh, S. Reza; Johnson, Wesley O.; Gardner, Ian.

In: Statistics in Medicine, Vol. 35, No. 6, 15.03.2016, p. 859-876.
