- Awarded: 2020
- Award Type: Pilot
- Award #: 733070
Among the most challenging aspects of autism are difficulties with auditory speech processing that deeply impact individuals’ language abilities. These differences may range from relatively low-level alterations in the encoding of acoustic features such as pitch and loudness to more speech-specific changes in phonological and prosodic processing. Crucially, there are conflicting results, primarily from behavioral studies, about whether individuals with autism face challenges with certain phonological tasks. While the key brain regions involved in both typical and atypical auditory and speech processing in autism have been identified, how the neural representations of speech themselves are affected remains unknown, limiting the potential for targeted remediation.
In the last decade, our understanding of the cortical mechanisms for encoding speech has blossomed, in part due to advances in human intracranial neurophysiology research [1]. Direct recordings from neural populations in human auditory speech cortex provide unparalleled spatial and temporal resolution and allow neural responses to be examined at the single-trial level. We now know that local neural populations in the human superior temporal gyrus (STG) encode the specific acoustic-phonetic features of speech that give rise to consonants and vowels [2]. These populations are scattered throughout the STG and together cover the entire inventory of speech sounds in a given language. We also now know that neural populations in the STG are highly selective for prosodic acoustic features such as pitch, amplitude envelope and onsets, which are known to be affected in autism. Furthermore, there is evidence that individuals with autism have difficulty processing speech in the presence of background noise, an ability that in typical individuals depends on the STG and ventrolateral prefrontal areas. Understanding how these well-characterized neural representations of speech-relevant acoustic features are altered in individuals with autism, at the level of specific populations of neurons, will provide mechanistic insight that may lead to behavioral, pharmacological and neuromodulatory therapies far more targeted than those currently possible.
In the current project, Edward Chang and colleagues propose to characterize auditory and speech processing in individuals with both epilepsy and autism, with the specific goal of understanding how auditory, phonological and linguistic representations are affected. They will achieve this by leveraging a large existing database of direct electrocorticography (ECoG) recordings from both adult and pediatric populations, in which 20 percent of individuals meet diagnostic criteria for autism (comorbid with epilepsy). Guided by analyses of these existing data, they will also implement new behavioral tasks in prospective cases to more directly probe the neural processes that Chang and his team hypothesize are affected.
These studies are designed to provide key data for a future larger project that will take an integrative approach to studying autism, encompassing genetics, cellular analyses of brain tissue, neurophysiology and naturalistic behavior.