Validation of the CogDrisk Instrument as Predictive of Dementia in Four General Community-Dwelling Populations

Lack of external validation of dementia risk tools is a major limitation for generalizability and translatability of prediction scores in clinical practice and research. We aimed to validate a new dementia prediction risk tool called CogDrisk, and a version, CogDrisk-AD, for predicting Alzheimer’s disease (AD), using cohort studies. Four cohort studies were identified that included the majority of the dementia risk factors from the CogDrisk tool. Participants who were free of dementia at baseline were included. The predictors were the component variables in the CogDrisk tool, which include self-reported demographics, medical risk factors, and lifestyle habits. Risk scores for Any Dementia and AD were computed and the Area Under the Curve (AUC) was assessed. To examine modifiable risk factors for dementia, the CogDrisk tool was also tested with age and sex estimates excluded from the model. The performance of the tool varied between studies. The overall AUC and 95% CI for predicting dementia was 0.77 (0.57, 0.97) for the Swedish National study on Aging and Care in Kungsholmen, 0.76 (0.70, 0.83) for the Health and Retirement Study - Aging, Demographics and Memory Study, 0.70 (0.67, 0.72) for the Cardiovascular Health Study Cognition Study, and 0.66 (0.62, 0.70) for the Rush Memory and Aging Project. The CogDrisk and CogDrisk-AD performed well in the four studies. Overall, this tool can be used to assess individualized risk factors of dementia and AD in various population settings.


Introduction
Currently, over 55 million people live with dementia worldwide and the number of cases is projected to increase to 78 million by 2030 (1). This, along with unsuccessful clinical trials on dementia treatment, has led to urgent calls for dementia prevention. As the evidence on both risk factors and effective risk reduction interventions grows, there is a need to implement findings in policy and practice. Hence, accurate risk assessment tools are needed that can be leveraged by clinicians, researchers, policy makers, and the general population to support the implementation of dementia risk reduction programs.
Several dementia and Alzheimer's disease (AD) risk models have been developed in community settings (2)(3)(4)(5). Systematic reviews (6)(7)(8) have compared prediction models in terms of mid- and late-life age groups, variables included, follow-up time to dementia diagnosis, study setting, and the discriminative accuracy of the tools in predicting dementia and its subtypes. However, direct comparisons are limited by the different methodologies used for tool development and by different target samples and outcomes. Few of these prediction models have been translated into practical tools. A tool intended to implement dementia risk reduction in practice should be designed to inform prevention strategies, not judged from a purely statistical perspective. Studies reporting high Area Under the Curve (AUC) values for risk tools typically include disease indicators that are not independent of disease onset, such as memory loss, imaging biomarkers, or genetics, which are not modifiable or widely accessible (9). In contrast, our aim was to develop a tool that informs preventive action and can be widely applied. Of the available risk assessment tools that similarly focus on providing preventive information, two have been developed from more than one dataset.
To incorporate the newest evidence on dementia risk factors into a practical tool, we recently developed the CogDrisk tool for predicting dementia, and a version of the tool specifically for predicting AD called CogDrisk-AD (14). This tool incorporates risk and protective factors identified through systematic synthesis of the latest evidence base, and selects predictors of dementia and AD based on strength of evidence as well as availability of measures that are practicable in a range of clinical and research contexts. To our knowledge, the CogDrisk includes the largest number of modifiable risk factors and incorporates age group and sex differences. Lack of external validation of risk tools is a major limitation for generalizability and translatability of prediction scores in clinical practice and research. Therefore, in this study, we aim to validate the CogDrisk tool in four international studies to evaluate how well it predicts Any Dementia and AD in different populations. Once validated, the tool may be used by clinicians, the public, and policy makers to assess level of risk in individuals or communities and to guide specific feedback for dementia risk reduction.

Validation cohorts
We shortlisted datasets through database searches, review of consortia, and consultation with experts, and then evaluated them in terms of the outcome measures, i.e., availability of a clinical diagnosis of dementia, inclusion of the majority of risk and protective factors measured in the CogDrisk assessment tool (refer to Supplementary Information 1 Part B), and long follow-up time (an average of more than 5 years) for AD and dementia. Four cohorts were available that met our criteria and are briefly described below. We also considered the Framingham Heart Study (FHS), but there was a large difference in age (midlife for FHS versus late life for the other studies) and a far earlier timeframe of baseline assessment (1975, compared with the late 20th to early 21st century) between this and the other studies. We excluded the FHS to avoid these differences confounding comparison of the validation results. Supplementary Information 1 Table A1 describes the study characteristics, number and age of study participants, follow-up scheme, and the criteria used for diagnosing dementia and AD.
The Swedish National study on Aging and Care in Kungsholmen (SNAC-K) (15)
The SNAC-K study was initiated in 2001-2004 (baseline) and comprised 3,363 participants aged 60 years and above. In our analysis, we included 3,122 participants after excluding those with dementia at baseline (n=241). At baseline, the mean age was 73·6 years and 36·6% of the participants were male.

The Health and Retirement Study - Aging, Demographics and Memory Study (HRS ADAMS) (16)
The HRS ADAMS is a supplementary study in the HRS (17) that conducted in-person clinical assessments to gather information on cognitive status. The study consisted of 856 community-based individuals aged 70 years and above who were assessed in 2001 (baseline) and followed through to 2008. The mean age of participants at baseline was 81·6 years and 41·5% were male.

The Cardiovascular Health Study Cognition Study (CHS-CS) (18)
The CHS-CS was an ancillary study of the main Cardiovascular Health Study. The CHS-CS was initiated in 1991-1994 and was followed up until 1999-2000 with 3,602 community-based participants who had a cerebral MRI and Modified Mini-Mental State Examination (3MSE) (19). In our analysis, participants with dementia at baseline were excluded (n=227) leaving 3,375 participants. At baseline, the mean age was 74·8 years and 40·9% were male.
The Rush Memory and Aging Project (MAP) (20,21)
The MAP comprised 2,184 participants aged 60 years and older who undertook the baseline examination in 1997-1998 and were followed annually for up to 22 years at the time of these analyses. The participants' mean age at baseline was 80·0 years and 26·5% were male.

Data harmonization and coding of predictors
Data harmonization was carried out across studies using a common scoring system to standardize the measure of the variables (refer to Supplementary Information 1 Tables B1-B17 for details). Selection of predictors and their definition and coding have been described previously (14). The predictors used in this validation study are based on the component variables included in the CogDrisk assessment tool, and details are described in Supplementary Information 1 Part B. Seventeen risk/protective factors were identified for inclusion in the algorithm to estimate the risk of Any Dementia: age, sex, education, mid-life obesity, high cholesterol, hypertension, diabetes, stroke, traumatic brain injury (TBI), atrial fibrillation, insomnia, depression, physical inactivity, cognitive and social engagement, fish intake, and smoking. The CogDrisk-AD tool included similar risk factors, omitting atrial fibrillation and insomnia owing to insufficient evidence that these factors are associated with an increased risk of AD, and adding pesticide exposure. We could not include pesticide exposure in the validation of the CogDrisk-AD model, however, as this variable was not available in the validation cohorts.
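As a sketch of this harmonization step, the snippet below maps cohort-specific codings of one predictor onto a common category set. The variable names, raw codes, and mappings are hypothetical; the actual coding rules are those given in Supplementary Information 1 Tables B1-B17.

```python
# Illustrative harmonization of one predictor (smoking status) across cohorts.
# The raw codes and mappings below are invented for illustration only.

SMOKING_MAP = {
    # cohort -> {cohort-specific raw value -> harmonized CogDrisk category}
    "SNAC-K": {0: "never", 1: "former", 2: "current"},
    "MAP": {"never smoked": "never", "quit": "former", "smokes": "current"},
}

def harmonize_smoking(cohort: str, raw_value) -> str:
    """Map a cohort-specific smoking code onto the common category set."""
    try:
        return SMOKING_MAP[cohort][raw_value]
    except KeyError:
        # Codes that do not match any mapping are treated as missing data.
        return "missing"
```

A recode table per predictor, with unmatched codes falling through to "missing", keeps the harmonization rules auditable in one place rather than scattered through analysis scripts.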

Statistical analysis
The accuracy of the statistical models in identifying participants at risk of dementia and AD using the CogDrisk score was quantified by calculating the AUC and the associated 95% confidence interval (22). To isolate the effect of modifiable risk factors, the predictive accuracy of the CogDrisk score was further evaluated without age and sex for both dementia and AD. We also evaluated the predictive ability of midlife risk factors for late-life dementia by including data from midlife participants (40 to 65 years) across all cohorts, where available.
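A minimal sketch of this discrimination analysis is shown below, using synthetic data and a generic nonparametric bootstrap for the confidence interval; the paper's exact CI method may differ, and all data and parameters here are invented.

```python
# Sketch: AUC of a risk score against observed dementia status, with a
# bootstrap 95% CI. Synthetic data only; not the study's actual analysis.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical risk scores and incident-dementia indicators: higher scores
# are simulated to carry a higher probability of dementia.
scores = rng.normal(10, 3, 500)
dementia = (rng.random(500) < 1 / (1 + np.exp(-(scores - 11) / 2))).astype(int)

auc = roc_auc_score(dementia, scores)

# Nonparametric bootstrap for the confidence interval.
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(scores), len(scores))
    if dementia[idx].min() == dementia[idx].max():
        continue  # a resample must contain both outcome classes
    boot.append(roc_auc_score(dementia[idx], scores[idx]))
low, high = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.2f} (95% CI {low:.2f}, {high:.2f})")
```

The same pattern applies to each cohort and to the reduced (age- and sex-free) models: only the score column changes, so AUCs remain directly comparable.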
The CogDrisk score was calculated by summing the points allocated to individual risk/protective factors. The methodology for the scoring system has been described previously (14). Briefly, risk algorithms were developed for dementia and AD by first converting relative risk ratios (described in detail in the CogDrisk development manuscript (14)) to points, which were then added to form a risk score. Conditional equations were specified for risk factors that have an effect only in midlife (high cholesterol, obesity and overweight, and hypertension).
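The point-summation scheme with midlife-only conditional factors can be sketched as follows. The factor names and point values are invented for illustration; the actual CogDrisk weights are those derived in the development study (14).

```python
# Sketch of a point-based risk score: each endorsed factor contributes points,
# and some factors only score if the respondent is in midlife (40-65 years).
# All point values below are hypothetical, not the actual CogDrisk weights.

MIDLIFE_ONLY = {"high_cholesterol", "obesity", "hypertension"}

POINTS = {
    "diabetes": 2,
    "stroke": 3,
    "physical_inactivity": 2,
    "high_cholesterol": 2,
    "hypertension": 2,
    "fish_intake": -1,  # protective factors subtract points
}

def cogdrisk_score(factors: dict, age: int) -> int:
    """Sum the points for the factors a person endorses."""
    midlife = 40 <= age <= 65
    total = 0
    for name, present in factors.items():
        if not present:
            continue
        if name in MIDLIFE_ONLY and not midlife:
            continue  # conditional equation: factor only counts in midlife
        total += POINTS.get(name, 0)
    return total

# A hypothetical 55-year-old with hypertension and regular fish intake:
print(cogdrisk_score({"hypertension": True, "fish_intake": True}, age=55))  # prints 1
```

Note how the same factor profile scores differently by age: for a 72-year-old, hypertension contributes nothing, reflecting the midlife-only evidence for that factor.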
Not all risk/protective factors were measured at baseline in the validation cohorts (Table 1). In such cases, measures were taken from the visits closest to baseline under the assumption that the specific characteristics were constant over time (see Supplementary Information 1 Table C1). To assess the impact of missing data we ran two different sensitivity analyses: (i) a reduced model that removed risk/protective factors with a large proportion of missing observations (ranging from 24% to 79·7%), which improved the sample sizes, and (ii) multiple imputation. To carry out multiple imputation, we excluded participants missing education status or with missing data for more than five covariates (exclusions: 30 in SNAC-K, five in HRS-ADAMS and CHS-CS, and eight in MAP). We considered 20 imputed datasets (matching the proportion of missing data in most covariates) following fully conditional specification/imputation by chained equations (23). In the multiple imputation models, appropriate covariates were chosen to ensure compatibility between the imputation and analysis models. All analyses were conducted using Stata statistical software: release 16·0 (StataCorp, College Station, Texas, USA).

Baseline characteristics of participants in the four studies
Table 1 shows the baseline characteristics of the four validation samples. Of the 17 risk/protective factors included in the CogDrisk score, SNAC-K had the most (16), followed by MAP (14) and CHS-CS (12), while HRS ADAMS had the fewest (11). Mid-life BMI and mid-life hypertension measurements were available in the MAP and SNAC-K studies. Age, sex, education, diabetes, stroke, and smoking status measures were available in all four cohorts. The average age of participants in all cohorts was 70 years and above. In all the studies, females comprised a greater proportion of the sample than males, especially in MAP (73·5%) and SNAC-K (63·4%). Tertiary education was lowest in HRS ADAMS (25·5%), followed by SNAC-K (34·1%), while MAP had the highest proportion of participants with tertiary education (92·8%). The studies differed in the prevalence of several risk factors. Missing data in risk/protective factors are reported in Table 1. The points attributed to each of the risk and protective factors are also reported in Table 1.
Overall, we observed good performance of the CogDrisk score when applied to SNAC-K, HRS ADAMS and CHS-CS.

Sensitivity analysis
To assess the impact of missing data on the discriminatory performance of the CogDrisk score, we evaluated the score on the reduced-variable model and on multiply imputed data with all available variables. Although the sample size improved, the resulting AUCs were similar when variables were dropped from studies, i.e., when physical activity was dropped from SNAC-K and when atrial fibrillation and insomnia were dropped (see Table 2).

To evaluate the application of CogDrisk to midlife, the tool was also validated in participants less than 65 years of age at baseline, where available (refer to Table 4). Obesity and hypertension were evaluated only for midlife participants, as the evidence for these risk factors relates only to midlife. Midlife participants were available in SNAC-K (n=736) and MAP (n=80), with six and two incident dementia cases, respectively. We applied the CogDrisk score to these subsets using the same methodology as in the full cohorts. As the number of incident dementia cases was low, the performance of the CogDrisk score for midlife was poor, i.e., 0·51 (95% CI: 0·27, 0·75) for SNAC-K. There was a slight improvement in AUCs with multiple imputation on the respective datasets (see Table 4).

Performance of the CogDrisk-AD score in predicting Alzheimer's disease across the studies
We also evaluated the validity of the CogDrisk-AD score in all four datasets to assess its predictive ability for Alzheimer's disease (using the diagnoses provided by the constituent cohorts) (refer to Table 5). Specifically, the CogDrisk-AD score performed best when applied to the HRS ADAMS and CHS-CS data, followed by SNAC-K and MAP. When the score was computed from a reduced set of variables and from multiply imputed datasets, the reduced model performed similarly to the full model that included covariates with missing data (refer to Table 5).

Discussion
In this study, we externally validated the CogDrisk on four high-quality cohort studies. Our results indicate that the CogDrisk has adequate predictive ability for dementia and AD in late-life adults and is of comparable accuracy to other risk tools that have been developed to inform preventive actions (4, 5, 24). Overall, our findings demonstrate that the CogDrisk tool can be used to assess individualized risk factors of dementia and AD in various population settings.
On externally evaluating the CogDrisk, we found that the AUCs varied between the studies, with the best predictive performance in SNAC-K and HRS ADAMS, followed by CHS-CS and MAP. There are two potential reasons for this variation. Firstly, there is a difference in the mean age of participants at baseline, with HRS ADAMS and MAP having a higher mean age than CHS-CS and SNAC-K. Secondly, the studies differed in the number and prevalence of predictors available in their datasets.
A study by Licher et al. (2018) compared different dementia risk models and identified age as a major contributing factor for dementia occurrence, while other risk factors made marginal contributions (25). When the model was analysed without age and sex, the AUCs were reduced for all studies, though HRS ADAMS, SNAC-K, and CHS-CS maintained adequate predictive ability. In clinical practice, knowledge of areas where individuals can reduce risk is more important than non-modifiable factors such as age and sex.
The CogDrisk did not perform well with midlife participants, probably due to the low number of incident dementia cases in these samples. Further testing of the tool on larger datasets with midlife participants is underway. Risk scores for mid- and late-life adults can be used as surrogate outcomes for mid- and late-life dementia preventive interventions.
We found that the CogDrisk was predictive of Any Dementia and that the CogDrisk-AD version was predictive of AD. The CogDrisk-AD may be useful for clinical trials focussing on AD, and to guide risk reduction advice for individuals at increased risk of AD due to family history or an APOE ε4 genotype.

This study has several strengths. The CogDrisk tool was assessed on four studies, for two outcomes, in both mid- and late-life participants. To our knowledge, this is the most extensive validation conducted on any dementia risk assessment tool. The external validation samples comprise four different datasets from two countries, bringing in different population characteristics and supporting the generalizability of the instrument. The tool is cost-effective because the measures are self-reported, which will enable it to be used in universal health initiatives and by clinicians. The CogDrisk tool also has the potential to help patients and health practitioners target specific risk factors, thus providing personalized advice in clinical settings. It includes a wider range of factors than those captured by tools that focus on cardiovascular risk factors, such as SCORE2. Given the overlap in risk and protective factors of dementia with other chronic diseases, a single predictive tool for dementia and other conditions may be efficient in clinical practice. Our team is working to evaluate whether such risk prediction models can be developed.
Our study has some limitations. Firstly, not all variables were available in the existing datasets to calculate the full CogDrisk score. In practice, the CogDrisk tool is available online, so all factors can be easily assessed. Secondly, though we harmonized the measures of the predictors between studies, there is some variability in the measurement of risk factors across studies, for example in the questions measuring moderate and vigorous physical activity. The addition of new literature to the field since earlier tools were developed has extended the number of modifiable risk factors but has not led to noticeably larger AUCs. Our interpretation is that these findings demonstrate the upper limit of what is possible for low-cost, convenient dementia risk assessment when validating using cohort study data. Measurement of individual risk factors involves a degree of error, and clinical thresholds lack specificity where patients or clinicians endorse binary responses. Co-occurrence of risk factors within individuals may also in part explain the limit on the AUC that can be reached in developing practical dementia risk tools. Finally, given the long prodromal period of dementia, unless a cohort study has followed every participant to completion, there will be undetected cases of preclinical dementia, or individuals who will ultimately develop dementia but do not obtain a diagnosis during the observation period. This reduces the accuracy of predictive models.
We conclude that the CogDrisk is a valid assessment tool that predicts dementia and AD. It incorporates a large number of modifiable risk factors for dementia that are available in a range of clinical and research contexts, making the tool practical and ready to use for dementia prevention interventions. The CogDrisk tool can be used by clinicians, researchers, policy makers and the public to identify individuals at risk of dementia and to monitor risk reduction efforts.