Research

The CARE group aims to create computational systems that augment clinicians' analytical capabilities in the diagnosis and personalized treatment of neuro-cognitive social disorders, particularly autism spectrum disorder (ASD), the fastest-growing developmental disability in the United States, affecting 1 in 68 children (CDC, 2014). Autism research is a model for translational research on a psychiatric disorder: collaborating psychologists, engineers, and neurologists are translating findings about this complex, heterogeneous social-communicative disorder into mechanisms that will improve the lives of affected individuals. As such, we are proud to collaborate with many of the foremost autism researchers. Active projects include:



Developing Scalable Measures of Behavior Change for ASD Treatments

Current treatments of ASD symptoms involve adapting behavioral environments to support learning, or psycho-pharmacological treatment of related conditions such as anxiety and irritability. These treatments do not directly target the core ASD symptoms, owing to a lack of ASD-specific treatment-response measures that are sensitive enough to capture change. We develop and evaluate the scalability of a new instrument for measuring change in social communication behaviors, the Brief Observation of Social Communication Change (BOSCC), and investigate the possibility of developing automated methods of detecting these changes within a brief, standardized behavior sample. We extract and study objective signal descriptors to assess the feasibility of automating BOSCC coding by predicting BOSCC scores with machine learning methods, as sketched below. Certain behaviors that are difficult for humans to quantify (e.g., atypical prosody) can be used to augment human BOSCC coding, enhancing its sensitivity and reliability. By systematically analyzing the predictive power of signal information from various combinations of gold-standard ADOS subtasks, we determine their suitability for predicting BOSCC behavioral codes.
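A minimal sketch of what this prediction step could look like, assuming session-level acoustic-prosodic descriptors and clinician-assigned BOSCC scores are already available. The feature dimensions, regressor choice, and random data below are illustrative assumptions, not the project's actual pipeline.

```python
# Illustrative sketch: predicting BOSCC scores from signal descriptors
# with a standard regressor and cross-validation (assumed, simplified setup).
import numpy as np
from sklearn.model_selection import cross_val_score, KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Hypothetical data: one row per behavior sample (e.g., a BOSCC segment);
# columns stand in for descriptors such as pitch statistics, speaking rate,
# and turn-taking measures. y holds clinician-assigned BOSCC scores.
X = rng.normal(size=(120, 24))
y = rng.integers(0, 5, size=120).astype(float)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0),
                         scoring="neg_mean_absolute_error")
print("MAE per fold:", -scores)
```

With real data, the same evaluation loop lets us compare descriptor subsets (e.g., prosodic vs. lexical) by how closely their predictions track human BOSCC coding.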


People involved: Manoj Kumar, Daniel Bone (Alumni)


Robust Audio Processing for Extracting ASD Behavioral Informatics

Computational analysis of autism diagnostic sessions using automatically extracted features combined with machine learning has provided valuable insights. Specifically, speech and language features extracted from both the child and the interacting clinician have been shown to be significantly predictive of autism symptom severity. However, feature extraction often depends on the availability of speaker labels and transcripts, which can be time-consuming and expensive to obtain. As a first step towards fully automatic behavioral feature extraction, we explore feature subsets from multiple modalities (acoustic-prosodic, lexical, image) that are robust to errors from the speech processing pipeline. We investigate feature sensitivity to errors from multiple components of the pipeline, such as speaker diarization and speech recognition; a simple sensitivity sketch appears below. We perform a comprehensive failure analysis to identify critical points and develop novel machine learning algorithms while maintaining feature interpretability.
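One way to probe such sensitivity is to corrupt automatic speaker labels at controlled error rates and observe how much a downstream feature drifts. The segment data, feature, and error model below are illustrative assumptions, not the project's actual evaluation.

```python
# Illustrative sketch: how sensitive is a simple session-level feature
# (the child's share of speaking time) to speaker-diarization errors,
# simulated here by randomly flipping segment labels.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical diarization output: (duration_seconds, speaker) per segment.
durations = rng.uniform(0.5, 6.0, size=200)
speakers = rng.choice(["child", "clinician"], size=200)

def child_speech_fraction(durations, speakers):
    return durations[speakers == "child"].sum() / durations.sum()

reference = child_speech_fraction(durations, speakers)
for error_rate in (0.05, 0.10, 0.20):
    flips = rng.random(len(speakers)) < error_rate
    corrupted = np.where(flips,
                         np.where(speakers == "child", "clinician", "child"),
                         speakers)
    degraded = child_speech_fraction(durations, corrupted)
    print(f"label error {error_rate:.0%}: feature drifts by {abs(degraded - reference):.3f}")
```

Features whose values change little under realistic error rates are candidates for a fully automatic pipeline; those that drift sharply mark critical points in the failure analysis.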


People involved: Manoj Kumar


Coordination of Speech Production and Facial Expression in Autism

A core diagnostic criterion for Autism Spectrum Disorder is a persistent deficit in social communication and social interaction. This deficit affects both verbal and nonverbal communication in social interaction. Depending on the severity of the disorder, it ranges from poor integration of verbal and nonverbal communication, to abnormalities in eye contact and body language or deficits in understanding and using gestures, to a total lack of facial expressions and nonverbal communication. In high-functioning autism, abnormal integration of facial expressions and verbal communication contributes to a perceived awkwardness of communication. An ongoing study is quantifying deficits in the coordination of facial expression and speech production in high-functioning autistic children. The study uses a motion capture system to measure movement and deformation of the face during affective speech, extracts speech features from the audio to characterize emotion expression, and uses eye tracking to monitor gaze to visual stimuli as a measure of visual attention. The analysis investigates how autism influences the coordination of speech production and facial expression and how visual attention modulates this coordination; a simplified coordination measure is sketched below.
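As one simplified measure of audio-visual coordination, the peak cross-correlation between a facial motion-capture trajectory (e.g., lip aperture) and a speech feature (e.g., acoustic intensity) can be computed after both are resampled to a common frame rate. The synthetic signals and the specific feature pairing below are assumptions for illustration, not the study's actual analysis.

```python
# Illustrative sketch: peak lagged correlation between a facial trajectory
# and a speech feature as a coordination measure (assumed 100 Hz signals).
import numpy as np

def peak_crosscorr(face, speech, max_lag):
    """Return (best_lag, correlation) over lags in [-max_lag, max_lag] frames."""
    face = (face - face.mean()) / face.std()
    speech = (speech - speech.mean()) / speech.std()
    best = (0, -np.inf)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            r = np.corrcoef(face[lag:], speech[:len(speech) - lag])[0, 1]
        else:
            r = np.corrcoef(face[:lag], speech[-lag:])[0, 1]
        if r > best[1]:
            best = (lag, r)
    return best

# Hypothetical 5-second streams sampled at 100 Hz for one utterance.
rng = np.random.default_rng(2)
t = np.arange(500) / 100.0
speech_intensity = np.sin(2 * np.pi * 2.5 * t) + 0.3 * rng.normal(size=t.size)
lip_aperture = np.roll(speech_intensity, 8) + 0.3 * rng.normal(size=t.size)  # ~80 ms lag

lag, r = peak_crosscorr(lip_aperture, speech_intensity, max_lag=30)
print(f"peak correlation {r:.2f} at lag {lag} frames ({lag * 10} ms)")
```

Comparing such lag and correlation estimates across groups, and conditioning them on gaze measures from the eye tracker, is one way the coordination and its modulation by visual attention could be quantified.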


People involved: Tanner Sorenson, Tiantian Feng


Completed Projects:

  • Verbal/non-verbal Asynchrony in Adolescents with High-Functioning Autism (NIH/NIDCD R01)