Digital Pens Provide New Insight into Cognitive Testing Results

12 July 2021

Stacy Andersen

Use of a digital pen during cognitive assessments allows researchers to identify patterns of test performance that correlate with different measures of cognitive and physical function.

Boston—During neuropsychological assessments, participants complete tasks designed to study memory and thinking. Based on their performance, the participants receive a score that researchers use to evaluate how well specific domains of their cognition are functioning.

Consider, though, two participants who achieve the same score on one of these paper-and-pencil neuropsychological tests. One took 60 seconds to complete the task and was writing the entire time; the other spent three minutes, and alternated between writing answers and staring off into space. If researchers analyzed only the overall score of these two participants, would they be missing something important?

“By looking only at the outcome, meaning what score someone gets, we lose a lot of important information about how the person performed the task that may help us to better understand the underlying problem,” explains lead author Stacy Andersen, PhD, assistant professor of medicine at Boston University School of Medicine (BUSM).

Researchers with the Long Life Family Study (LLFS) used digital pens and digital voice recorders to capture differences in study participants’ performance while completing a cognitive test and found that differences in ‘thinking’ versus ‘writing’ time on a symbol coding test might act as clinically relevant, early biomarkers for cognitive/motor decline.

Participants in the LLFS were chosen for having multiple siblings living to very old ages. Longevity has long been associated with an increased health span and thus these families are studied to better understand contributors to healthy aging. The participants were assessed on a number of physical and cognitive measures, including a symbol coding test called the Digit Symbol Substitution Test.

This timed test requires participants to fill in numbered boxes with corresponding symbols from a given key and assesses both cognitive (attention and processing speed) and non-cognitive factors (motor speed and visual scanning). To allow researchers to collect data about how a participant went about completing the task, the participants used a digital pen while completing the test. On the tip of this pen was a small camera that tracked what and when a participant wrote. The LLFS researchers divided the output from this digital pen into ‘writing time’ (the time the participant spent writing) and ‘thinking time’ (the time not spent writing) and looked at how these changed over the course of the 90-second test.
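The split described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual pipeline: the stroke intervals, function name, and 90-second window are assumptions based on the description of the test.

```python
# Hypothetical sketch: splitting digital-pen stroke timestamps into
# "writing time" (pen on paper) and "thinking time" (pauses between
# strokes) over a 90-second test window. Stroke data are illustrative.

TEST_DURATION = 90.0  # length of the symbol coding test, in seconds

def split_times(strokes, duration=TEST_DURATION):
    """strokes: list of (start, end) times, in seconds, during which
    the pen was writing. Returns (writing_time, thinking_time)."""
    writing = sum(end - start for start, end in strokes)
    thinking = duration - writing
    return writing, thinking

# Example: three strokes totaling 5 s of writing in a 90-s test
strokes = [(0.0, 2.0), (3.5, 5.0), (10.0, 11.5)]
writing, thinking = split_times(strokes)
print(writing, thinking)  # 5.0 85.0
```

In practice the researchers also tracked how these quantities changed over the course of the test, so the same split would be computed per symbol or per time segment rather than once overall.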

The researchers then identified groups of participants with similar patterns of writing time and thinking time across the course of the test. They found that although most participants had consistent writing and thinking times, some groups sped up or slowed down. “This method of clustering allowed us to look at other similarities among the participants in each group in terms of their health and function that may be related to differences in writing and thinking time patterns,” said coauthor and lead biostatistician Benjamin Sweigart, a biostatistics doctoral student at Boston University School of Public Health. The researchers found that participants who slowed down in writing the symbols during the test had poorer physical function on tests of grip strength and walking speed. In contrast, those whose thinking time changed speed had poorer scores on memory and executive function tests, suggesting that writing time and thinking time capture different contributors to overall performance on the test.
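The grouping idea can be illustrated with a toy trajectory classifier. The study used a clustering method; the simple least-squares slope rule below is only a sketch of the concept of labeling a participant's per-interval writing times as speeding up, slowing down, or staying consistent, and every name and threshold here is an assumption.

```python
# Hypothetical sketch: classify a participant's trajectory of writing
# times across successive test intervals by its least-squares slope.
# (The study used clustering; this slope rule is only illustrative.)

def trend(times, tol=0.05):
    """times: writing time per successive test interval (seconds).
    Returns 'faster', 'slower', or 'consistent'."""
    n = len(times)
    xs = range(n)
    x_mean = (n - 1) / 2
    y_mean = sum(times) / n
    # Ordinary least-squares slope of writing time vs. interval index
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, times))
    den = sum((x - x_mean) ** 2 for x in xs)
    slope = num / den
    if slope > tol:
        return "slower"   # writing takes longer each interval
    if slope < -tol:
        return "faster"
    return "consistent"

print(trend([2.0, 1.8, 1.5, 1.2]))  # faster
print(trend([1.0, 1.2, 1.5, 1.9]))  # slower
print(trend([1.5, 1.5, 1.5, 1.5]))  # consistent
```

Once each participant has a trajectory label (or cluster assignment), group membership can be compared against physical and cognitive measures, as the researchers did with grip strength, walking speed, and memory scores.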

According to the researchers, these findings show the importance of capturing additional facets of test performance beyond test scores. “Identifying whether poor test performance is related to impaired cognitive function as opposed to impaired motor function is important for choosing the correct treatment for an individual patient,” adds Andersen. “The incorporation of digital technologies amplifies our ability to detect subtle differences in test behavior and functional abilities, even on brief tests of cognitive function. Moreover, these metrics have the potential to be very early markers of dysfunction.”

These findings appear online in the Journal of Alzheimer’s Disease.

Other co-authors included Nancy Glynn of the University of Pittsburgh’s Department of Epidemiology; Mary Wojczynski of Washington University School of Medicine’s Department of Genetics; Bharat Thyagarajan of University of Minnesota School of Medicine’s Department of Laboratory Medicine and Pathology; Jonas Mengel-From of the University of Southern Denmark’s Institute of Public Health, Epidemiology, Biostatistics and Biodemography Unit; Stephen Thielke of Puget Sound VA Medical Center’s Geriatric Research, Education, and Clinical Center; Thomas Perls of BUSM’s Geriatrics Section; David Libon of Rowan University’s School of Osteopathic Medicine at the New Jersey Institute for Successful Aging; Rhoda Au of BUSM’s Department of Anatomy and Neurobiology and Neurology; Stephanie Cosentino of the Cognitive Neuroscience Division of the Department of Neurology at the Taub Institute for Research on Alzheimer’s Disease and the Aging Brain; and Paola Sebastiani of Tufts Medical Center’s Institute for Clinical Research and Health Policy Studies.

Funding for this study was provided by the National Institute on Aging (K01AG057798 to S.L.A., U01AG023749 to S.C., U01AG023755 to T.T.P., U01AG023712, U01AG023744, U01AG023746, U19AG063893); the National Institute of General Medical Sciences Interdisciplinary Training Grant for Biostatisticians (T32 GM74905) to B.S.; the Boston University School of Medicine Department of Medicine Career Investment Award to S.L.A.; and the Marty and Paulette Samowitz Foundation to T.T.P. Additionally, the Claude D. Pepper Older Americans Independence Center, Research Registry and Developmental Pilot Grant (NIH P30 AG024827), and the Intramural Research Program, National Institute on Aging supported N.W.G. to develop the Pittsburgh Fatigability Scale.

Contact: Gina DiGravio, 617-224-8962, ginad@bu.edu

Note to Editors
Stacy L. Andersen
• Consulting Fees: I have received $10,000 over the past two years for research consulting for Washington University, which is unrelated to the project in this manuscript.

Rhoda Au
• Consulting Fees: Signant Health, Scientific Advisory Board (1–2 times per year); Biogen, Scientific Advisor, Diversity Advisory Board (sporadically)
Grants
• Agency: Evidation Health
Dates: 3-1-18 to 3-1-21

Stephanie Cosentino
• Consulting Fees: I receive consulting fees from Sage and the Association for FTD

Nancy W. Glynn
• Nothing to Disclose

David J. Libon
• Patents/Royalties: Dr. Libon receives royalties from Oxford University Press