Toward creating behavioral informatics for autism through rich and efficient audio processing

  • Awarded: 2019
  • Award Type: Director
  • Award #: 627148

Shrikanth (Shri) Narayanan’s overall goal is to create capabilities in robust and context-aware audio processing. This will be achieved through a three-year research and development program at the University of Southern California’s Signal Analysis and Interpretation Laboratory. The current effort, supported by a one-year SFARI Director Award, will focus on developing an audio-processing pipeline in support of behavioral analysis focusing on child-adult human interactions with direct relevance to autism spectrum disorder (ASD).

Such interactions can happen in a variety of settings, ranging from semi-structured/controlled ones (e.g., during administrations of clinical behavioral assessments, such as the Autism Diagnostic Observation Schedule [ADOS] or the Brief Observation of Social Communication Change [BOSCC]1, or during behavioral therapy) to free-living interaction settings (e.g., at home or school, where behavioral changes in response to a specific intervention can be observed).

The research will have a major focus on ‘transcription-free’ audio analytics based on signal processing and machine learning techniques2 (i.e., the extraction of behavioral information without textual transcription in a language-agnostic way). A secondary focus will be on illuminating how the automatic availability of spoken language information can further enhance the richness of the audio analytics.
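To make the idea of transcription-free analytics concrete, the sketch below computes two classic signal-level descriptors, short-time energy and zero-crossing rate, directly from a waveform with no speech recognition step. This is not the project's actual feature set, only a minimal illustration of language-agnostic audio features; the function name and parameters are hypothetical.

```python
import numpy as np

def frame_features(samples, sr, frame_ms=25, hop_ms=10):
    """Frame-level short-time energy and zero-crossing rate.

    Both descriptors are derived directly from the waveform,
    with no textual transcription, so they are language-agnostic.
    """
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n = 1 + max(0, (len(samples) - frame) // hop)
    energy = np.empty(n)
    zcr = np.empty(n)
    for i in range(n):
        w = samples[i * hop : i * hop + frame]
        energy[i] = np.mean(w ** 2)                        # short-time energy
        zcr[i] = np.mean(np.abs(np.diff(np.sign(w))) > 0)  # zero-crossing rate
    return energy, zcr

# Toy usage: one second of a synthetic 220 Hz tone at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
audio = 0.5 * np.sin(2 * np.pi * 220 * t)
energy, zcr = frame_features(audio, sr)
```

Richer transcription-free representations (prosody, voice quality, learned embeddings) build on exactly this kind of frame-level decomposition.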

Specifically, the project aims to:

(1) Create and curate rich, exemplary data resources, including annotations, reflecting the diverse contexts relevant to the ASD domain. Importantly, this includes the design and implementation of a pilot ‘guided’ audio behavioral collection in natural settings using simple phone devices (allowing for recording of either full speech or just the audio features in any setting).

(2) Develop and establish the baseline performance of state-of-the-art algorithms for speech activity detection, speaker diarization and the derivation of simple audio behavioral analytics.

(3) Design novel machine-learning algorithms that focus on optimizing performance for the specific behavioral characteristics of the target domains of interest.
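As a rough illustration of the first task in aim (2), the sketch below implements a toy energy-threshold speech activity detector: a frame is marked as speech when its energy exceeds a fixed level relative to the loudest frame. This is not the project's method; production systems use trained models, and the threshold value here is an arbitrary assumption chosen only to show the shape of the task (frame-level speech/non-speech decisions).

```python
import numpy as np

def detect_speech(samples, sr, frame_ms=25, hop_ms=10, threshold_db=-35.0):
    """Toy energy-based speech activity detection.

    Returns a boolean mask, one entry per frame, that is True
    where the frame's energy is within `threshold_db` dB of the
    loudest frame in the signal.
    """
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n = 1 + max(0, (len(samples) - frame) // hop)
    energy = np.array(
        [np.mean(samples[i * hop : i * hop + frame] ** 2) for i in range(n)]
    )
    ref = energy.max() + 1e-12          # reference: loudest frame
    db = 10 * np.log10(energy / ref + 1e-12)
    return db > threshold_db

# Toy usage: half a second of silence followed by a tone burst.
sr = 16000
sig = np.concatenate([
    np.zeros(sr // 2),
    0.5 * np.sin(2 * np.pi * 300 * np.arange(sr // 2) / sr),
])
mask = detect_speech(sig, sr)
```

Speaker diarization then segments the detected speech by who is talking (here, child vs. adult), which is where the domain-targeted modeling of aim (3) comes in.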

Narayanan’s group will perform this work in collaboration with Somer Bishop at the University of California, San Francisco. Bishop’s lab will use the newly engineered devices to collect high-quality audio recordings from social interactions that occur between children and adults in clinical and naturalistic settings. In addition to establishing the feasibility of use across different contexts, implementing the devices in multiple settings will yield critical information about best-practice procedures for the collection of audio data in ASD research. Data generated from this project will also provide numerous analytic opportunities for identifying audio-related phenotypes that can be reliably measured and that may have relevance for describing between-person variability and/or within-person change over time.

The resulting data resources and software tools will be shared. Furthermore, this project will help to coalesce and facilitate a broader community effort to develop audio-based analytics in service of behavioral research and practice in ASD.

References

1. Grzadzinski R. et al. J. Autism Dev. Disord. 46, 2464–2479 (2016)
2. Bone D. et al. IEEE Signal Processing Magazine 34, 189–196 (2017)