
Using Explainable AI in the Clinical Validation of MyCog: A Self-administered Cognitive Screener for Primary Care Settings
Aging, Dementia, and Behavioral Neurology
P1 - Poster Session 1 (8:00 AM-9:00 AM)
13-003
This study examined whether MyCog, a brief tablet-based cognitive screening application, could accurately discriminate between older adults with and without cognitive impairment using machine learning with explainable AI (XAI) methods to enhance clinical interpretability.
Primary care settings are optimal for early cognitive impairment detection but face significant barriers including time constraints and lack of minimally burdensome assessments. Traditional screeners require staff administration and cannot capture granular behavioral data. MyCog addresses these challenges as an EHR-integrated, self-administered tablet application featuring two validated cognitive tasks: Dimensional Change Card Sort (executive functioning) and Picture Sequence Memory (episodic memory). 
This cross-sectional validation study included 65 adults aged 65+ with diagnosed cognitive impairment and 80 cognitively normal controls. We employed ensemble modeling using five machine learning (ML) approaches (LASSO, Elastic Net, Random Forest, Bayesian Logistic Regression, Gradient Boosting) with nested cross-validation. XAI techniques included SHapley Additive exPlanations (SHAP) values for individual prediction explanations, feature importance rankings, and decision boundary visualization. Performance was evaluated using ROC AUC, sensitivity, specificity, and accuracy.
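The ensemble-with-nested-cross-validation design described above can be sketched as follows. This is a minimal illustration on synthetic data, not the study's actual pipeline: the feature set, hyperparameter grids, and model settings are assumptions, only three of the five model families are shown, and scikit-learn's permutation importance stands in for the SHAP analysis the authors report.

```python
# Sketch of an ensemble + explainability pipeline of the kind the abstract
# describes. All data, grids, and settings here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score
from sklearn.inspection import permutation_importance
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic cohort mirroring the study's size: 65 impaired + 80 controls = 145.
X, y = make_classification(n_samples=145, n_features=8, n_informative=4,
                           random_state=0)

outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

def nested(estimator, grid):
    # Inner CV for hyperparameter tuning, nested inside the outer folds.
    return GridSearchCV(estimator, grid, cv=3, scoring="roc_auc")

models = {
    "lasso": nested(
        make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l1", solver="liblinear")),
        {"logisticregression__C": [0.1, 1.0, 10.0]}),
    "random_forest": nested(
        RandomForestClassifier(random_state=0),
        {"n_estimators": [100, 300]}),
    "gradient_boosting": nested(
        GradientBoostingClassifier(random_state=0),
        {"learning_rate": [0.05, 0.1]}),
}

# Out-of-fold probabilities per model, then a simple consensus by averaging.
probs = {name: cross_val_predict(m, X, y, cv=outer_cv,
                                 method="predict_proba")[:, 1]
         for name, m in models.items()}
consensus = np.mean(list(probs.values()), axis=0)
auc = roc_auc_score(y, consensus)
print(f"consensus AUC: {auc:.3f}")

# Explainability stand-in: permutation importance on a refit forest.
# (The study used SHAP values; this is a simpler, model-agnostic proxy.)
rf = RandomForestClassifier(random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
print("top features by permutation importance:", ranking[:3])
```

Averaging out-of-fold probabilities is one simple way to form a consensus model; the abstract does not specify how the final consensus was built, so that choice here is an assumption.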
All models demonstrated strong diagnostic performance (AUC: 0.817-0.873). XAI analysis revealed memory accuracy (Picture Sequence Memory exact match) and executive functioning efficiency (Dimensional Change Card Sort rate-correct score) as the most predictive features. The final explainable consensus model achieved an AUC of 0.890, sensitivity of 72.3-83.1%, specificity of 78.8-91.2%, and accuracy of 80.7-82.8%. SHAP analysis provided individualized feature contribution scores to support clinical understanding.
MyCog demonstrates strong diagnostic accuracy through a parsimonious, clinically interpretable model enhanced with XAI capabilities. The integration of explainable AI gives clinicians transparent, individualized insights into cognitive screening results, addressing the interpretability challenges that hinder ML adoption in healthcare. As a validated, self-administered tool requiring under 7 minutes and offering seamless EHR integration, MyCog represents a practical solution combining diagnostic accuracy with clinical transparency.
Authors/Disclosures
Callie Jones
PRESENTER
Ms. Jones has nothing to disclose.
Stephanie Ruth Young, PhD Dr. Young has nothing to disclose.
Greg Byrne, MA Mr. Byrne has received personal compensation for serving as an employee of Northwestern University. The institution of Mr. Byrne has received research support from NIH.
Elizabeth Dworak, PhD Dr. Dworak has nothing to disclose.
Julia N. Yoshino Benavente, MPH The institution of Ms. Yoshino Benavente has received research support from NIH.
Richard Gershon, PhD The institution of Dr. Gershon has received research support from National Institutes of Health.
Michael Wolf (Northwestern University) No disclosure on file
Cindy Nowinski, MD PhD The institution of Dr. Nowinski has received research support from National Institutes of Health.