Machine Learning Puts New Lens on Autism Screening and Diagnostics

August 2, 2016

By Amy Blumenthal

Approximately one in 68 people is on the autism spectrum. Experts are unanimous on this point: early intervention is critical for improving communication skills and addressing behavioral issues. But how can researchers expedite the identification of children in need of help while providing a clearer map for intervention and support? Researchers from the USC Signal Analysis and Interpretation Laboratory (SAIL) in the Ming Hsieh Department of Electrical and Computer Engineering at the USC Viterbi School of Engineering, along with autism research leaders Catherine Lord (of Weill Cornell Medical College) and Somer Bishop (of the University of California, San Francisco), are now exploring whether machine learning might play an important role in helping screen for autism and guide caregiver and practitioner intervention.

Their newest interdisciplinary collaboration is documented in the paper “Use of Machine Learning to Improve Autism Screening and Diagnostic Instruments: Effectiveness, Efficiency, and Multi-Instrument Fusion,” published in the Journal of Child Psychology and Psychiatry.

Study authors Daniel Bone, Somer Bishop, Matthew P. Black, Matthew Goodwin, Catherine Lord and Shrikanth S. Narayanan looked at two established diagnostic instruments: the Autism Diagnostic Interview-Revised (ADI-R) and the Social Responsiveness Scale (SRS), both of which rely on parents’ reports of their children’s behaviors. The scholars then applied machine learning techniques to analyze how parents’ responses on individual items and combinations of items matched up with the child’s overall clinical diagnosis of ASD vs. non-ASD.
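As a rough, hypothetical sketch of this kind of analysis (not the authors’ actual pipeline), a classifier can be trained on coded parent-report items and evaluated on how well those items, in combination, separate ASD from non-ASD diagnoses. The file name and column names below are placeholders for illustration only:

```python
# Hypothetical sketch: predicting clinical diagnosis (ASD vs. non-ASD)
# from coded parent-report items. File and column names are placeholders,
# not the study's actual data.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Each row is one child; "item_*" columns hold coded questionnaire/interview
# responses, and "diagnosis" is 1 for ASD, 0 for non-ASD.
data = pd.read_csv("parent_report_items.csv")
X = data.filter(like="item_")
y = data["diagnosis"]

# Estimate how well combinations of items predict diagnosis,
# using cross-validated area under the ROC curve.
clf = LogisticRegression(max_iter=1000)
auc_scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Mean cross-validated AUC: {auc_scores.mean():.2f}")
```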

One of the fundamental questions that drove this research project, said co-author and SAIL Director Shri Narayanan, was, “How can we support and enhance experts’ decision-making beyond human capability? How can we make sense of data and patterns not able to be detected by a single person?” The researchers, eager to provide parents, caregivers and evaluators with “tools for better decision-making,” studied more than 1,500 individuals’ test scores, comparing the results of those with autism spectrum disorder to those with other, non-ASD diagnoses.

By using machine learning to analyze thousands of caregiver responses, the researchers were able to identify redundancies in the questions asked of caregivers. By eliminating these redundancies, the authors identified five ADI-R questions that appeared capable of maintaining 95% of the instrument’s performance. While it is unclear how these questions would function if administered separately from the overall interview, the findings suggest that certain diagnostic constructs, when reported by parents, may be particularly important for predicting clinical diagnosis.
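The paper describes its own modeling approach in detail; purely as a loose illustration, the snippet below shows one generic way to compare a full item set against a small, automatically selected subset using recursive feature elimination in scikit-learn. The data file, column names, and the five-item budget are assumptions for the example, not the authors’ method:

```python
# Hypothetical sketch of item reduction: compare the full item set with a
# five-item subset chosen by recursive feature elimination (RFE). This only
# illustrates the general idea of removing redundant questions.
import pandas as pd
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

data = pd.read_csv("parent_report_items.csv")  # placeholder file name
X, y = data.filter(like="item_"), data["diagnosis"]

# Baseline: classifier trained on all items.
full_model = LogisticRegression(max_iter=1000)
full_auc = cross_val_score(full_model, X, y, cv=5, scoring="roc_auc").mean()

# Reduced model: item selection happens inside the pipeline, so the
# five items are re-chosen within each cross-validation fold.
reduced_model = make_pipeline(
    RFE(LogisticRegression(max_iter=1000), n_features_to_select=5),
    LogisticRegression(max_iter=1000),
)
reduced_auc = cross_val_score(reduced_model, X, y, cv=5, scoring="roc_auc").mean()

print(f"Full item set AUC:    {full_auc:.2f}")
print(f"Five-item subset AUC: {reduced_auc:.2f}")
```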

Further clinical testing is needed to understand the practical utility of these particular results, but techniques like these could ultimately reduce administration time and allow questions to be tailored to the unique challenges that warrant intervention for a particular individual.

The authors also believe they can use machine learning to provide another lens on autism, offering a picture that is clearer, more distilled, and overall more data-informed for caregivers and practitioners. This, the authors believe, could be revolutionary in that it “takes out the guesswork or subjectivity involved even in trusted, industry-wide instruments.”

“Machine learning can make a diagnosis more effective, more systematic,” said USC’s Daniel Bone, the study’s lead author. Beyond enabling earlier intervention, the added detail could also reduce the frequency of misdiagnoses that deny individuals access to services from states or public schools, for example.

State-of-the-art computational techniques are emerging as scalable tools for clinical translation in human health and well-being.

Researchers at the Signal Analysis and Interpretation Laboratory at USC want to take a holistic approach to autism. Aside from targeting screening and diagnostic instruments with machine learning, the researchers are working to create quantitative measures of human behavior from audio, video, and physiological sensors through signal processing. One primary target has been to quantify what sounds atypical about the speech melody of many individuals with autism, since objective, computer-derived measures may support clinicians in making this difficult judgment. Eventually, the scholars would like to train specialists to use audio and signal processing tools on a more regular basis to identify and monitor specific behavioral patterns and to develop interventions that address those patterns.
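As a toy example of what a signal-derived measure of speech melody might look like (a sketch under assumed inputs, not the lab’s actual feature set), the snippet below estimates a pitch contour from a recording with the open-source librosa library and summarizes its variability; the audio file name is a placeholder:

```python
# Hypothetical sketch: quantifying "speech melody" (intonation) from an
# audio recording via simple pitch-contour statistics.
import numpy as np
import librosa

# Load a mono recording (placeholder file name) at 16 kHz.
y, sr = librosa.load("child_speech.wav", sr=16000)

# Estimate the fundamental-frequency (pitch) contour with the pYIN tracker.
f0, voiced_flag, _ = librosa.pyin(y, fmin=75, fmax=500, sr=sr)

# Keep only frames where voicing was detected and a pitch was estimated.
f0_voiced = f0[voiced_flag & ~np.isnan(f0)]

# Summary statistics of the intonation contour.
print(f"Median pitch:      {np.median(f0_voiced):.1f} Hz")
print(f"Pitch variability: {np.std(f0_voiced):.1f} Hz")
print(f"Pitch range:       {np.ptp(f0_voiced):.1f} Hz")
```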

In addition, one of the projects this multidisciplinary team of engineering scholars from USC and psychologists from UCSF is planning, along with Ryan Adams, an expert in adolescent social development at Cincinnati Children’s Hospital, will address the social challenges that people with autism may experience. The researchers will record an individual’s behavior to try to understand speech patterns or gestures that may unknowingly be off-putting to peers and strain friendships. The goal is to provide data-driven insights that can inform therapeutic interventions to improve the quality and quantity of friendships for children on the spectrum.

Narayanan said, “We are building the science first, then translating science back into useful technology—all through interdisciplinary partnerships.”
