Reality check: Perceived versus actual performance of community mammographers

Joshua J Fenton, Joseph Egger, Patricia A. Carney, Gary Cutter, Carl D'Orsi, Edward A. Sickles, Jessica Fosse, Linn Abraham, Stephen H. Taplin, William Barlow, R. Edward Hendrick, Joann G. Elmore

Research output: Contribution to journal › Article

32 Citations (Scopus)

Abstract

OBJECTIVE. Federal regulations mandate that radiologists receive regular, albeit limited, feedback regarding their interpretive accuracy in mammography. We sought to determine whether radiologists who regularly receive more extensive feedback can accurately report their actual performance in screening mammography.

SUBJECTS AND METHODS. Radiologists (n = 105) who routinely interpret screening mammograms in three states (Washington, Colorado, and New Hampshire) completed a mailed survey in 2001. Radiologists were asked to estimate how frequently they recommended additional diagnostic testing after screening mammography and the positive predictive value of their recommendations for biopsy (PPV2). We then used outcomes from 336,128 screening mammography examinations interpreted by the radiologists from 1998 to 2001 to ascertain their true rates of recommendations for diagnostic testing and their true PPV2.

RESULTS. Radiologists' mean self-reported rate of recommending immediate additional imaging (11.1%) exceeded their actual rate (9.1%) (mean difference, 1.9%; 95% confidence interval [CI], 0.9-3.0%). The mean self-reported rate of recommending short-interval follow-up was 6.2%; the true rate was 1.8% (mean difference, 4.3%; 95% CI, 3.6-5.1%). Similarly, the mean self-reported and true rates of recommending immediate biopsy or surgical evaluation were 3.2% and 0.6%, respectively (mean difference, 2.6%; 95% CI, 1.8-3.4%). Conversely, radiologists' mean self-reported PPV2 (18.3%) was significantly lower than their mean true PPV2 (27.6%) (mean difference, -9.3%; 95% CI, -12.4% to -6.2%).

CONCLUSION. Despite regular performance feedback, community radiologists may overestimate their true rates of recommending further evaluation after screening mammography and underestimate their true positive predictive value.
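The two quantities the abstract compares can be made concrete with a short sketch. PPV2 is the standard screening-audit definition (cancers found per biopsy recommendation); the per-radiologist paired mean difference with a normal-approximation 95% CI mirrors the abstract's self-reported-versus-actual comparisons. All counts and rates below are hypothetical illustrations, not the study's raw data.

```python
def ppv2(cancers_found, biopsies_recommended):
    """Positive predictive value of a biopsy recommendation (PPV2):
    the fraction of biopsy/surgical recommendations that yield cancer."""
    return cancers_found / biopsies_recommended

def paired_mean_diff_ci(self_reported, actual, z=1.96):
    """Mean of per-radiologist (self-reported - actual) differences,
    with a normal-approximation 95% CI, as in the abstract's comparisons."""
    diffs = [s - a for s, a in zip(self_reported, actual)]
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample variance and standard error of the mean difference
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    se = (var / n) ** 0.5
    return mean, (mean - z * se, mean + z * se)

# Hypothetical radiologist: 100 biopsy recommendations, 28 cancers found
print(ppv2(28, 100))  # 0.28, close to the study's true mean PPV2 of 27.6%
```

A self-reported PPV2 below the true value, as the study found, would mean radiologists believe a smaller share of their biopsy recommendations find cancer than actually do.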

Original language: English (US)
Pages (from-to): 42-46
Number of pages: 5
Journal: American Journal of Roentgenology
Volume: 187
Issue number: 1
DOI: https://doi.org/10.2214/AJR.05.0455
State: Published - Jul 2006

Keywords

  • Breast cancer
  • Breast imaging
  • Mammography

ASJC Scopus subject areas

  • Radiology, Nuclear Medicine and Imaging
  • Radiological and Ultrasound Technology

Cite this

Fenton, J. J., Egger, J., Carney, P. A., Cutter, G., D'Orsi, C., Sickles, E. A., ... Elmore, J. G. (2006). Reality check: Perceived versus actual performance of community mammographers. American Journal of Roentgenology, 187(1), 42-46. https://doi.org/10.2214/AJR.05.0455

@article{8cf916f1ea00435698449e2e0685a3b9,
title = "Reality check: Perceived versus actual performance of community mammographers",
keywords = "Breast cancer, Breast imaging, Mammography",
author = "Fenton, {Joshua J} and Joseph Egger and Carney, {Patricia A.} and Gary Cutter and Carl D'Orsi and Sickles, {Edward A.} and Jessica Fosse and Linn Abraham and Taplin, {Stephen H.} and William Barlow and Hendrick, {R. Edward} and Elmore, {Joann G.}",
year = "2006",
month = jul,
doi = "10.2214/AJR.05.0455",
language = "English (US)",
volume = "187",
number = "1",
pages = "42--46",
journal = "American Journal of Roentgenology",
issn = "0361-803X",
publisher = "American Roentgen Ray Society",
}

PMID: 16794153
Scopus ID: 33747118390