Item response theory facilitated cocalibrating cognitive tests and reduced bias in estimated rates of decline

Paul K. Crane, Kaavya Narasimhalu, Laura E. Gibbons, Dan M. Mungas, Sebastien Haneuse, Eric B. Larson, Lewis Kuller, Kathleen Hall, Gerald van Belle

Research output: Contribution to journal › Article

59 Citations (Scopus)

Abstract

Objective: To cocalibrate the Mini-Mental State Examination, the Modified Mini-Mental State, the Cognitive Abilities Screening Instrument, and the Community Screening Instrument for Dementia using item response theory (IRT) to compare screening cut points used to identify cases of dementia from different studies, to compare measurement properties of the tests, and to explore the implications of these measurement properties on longitudinal studies of cognitive functioning over time. Study Design and Setting: We used cross-sectional data from three large (n > 1000) community-based studies of cognitive functioning in the elderly. We used IRT to cocalibrate the scales and performed simulations of longitudinal studies. Results: Screening cut points varied quite widely across studies. The four tests have curvilinear scaling and varied levels of measurement precision, with more measurement error at higher levels of cognitive functioning. In longitudinal simulations, IRT scores always performed better than standard scoring, whereas a strategy to account for varying measurement precision had mixed results. Conclusion: Cocalibration allows direct comparison of cognitive functioning in studies using any of these four tests. Standard scoring appears to be a poor choice for analysis of longitudinal cognitive testing data. More research is needed into the implications of varying levels of measurement precision.

Original language: English (US)
Journal: Journal of Clinical Epidemiology
Volume: 61
Issue number: 10
DOI: 10.1016/j.jclinepi.2007.11.011
State: Published - Oct 2008

Keywords

  • Cocalibration
  • Cognition
  • Item response theory
  • Longitudinal data analysis
  • Psychometrics
  • Simulation

ASJC Scopus subject areas

  • Medicine(all)
  • Public Health, Environmental and Occupational Health
  • Epidemiology

Cite this

@article{567ada84124742518fcdc12198edf156,
title = "Item response theory facilitated cocalibrating cognitive tests and reduced bias in estimated rates of decline",
keywords = "Cocalibration, Cognition, Item response theory, Longitudinal data analysis, Psychometrics, Simulation",
author = "Crane, {Paul K.} and Kaavya Narasimhalu and Gibbons, {Laura E.} and Mungas, {Dan M} and Sebastien Haneuse and Larson, {Eric B.} and Lewis Kuller and Kathleen Hall and {van Belle}, Gerald",
year = "2008",
month = "10",
doi = "10.1016/j.jclinepi.2007.11.011",
language = "English (US)",
volume = "61",
journal = "Journal of Clinical Epidemiology",
issn = "0895-4356",
publisher = "Elsevier USA",
number = "10",
}
