AI detects speech patterns for autism in different languages

Summary: Machine learning algorithms are helping researchers identify speech patterns in autistic children that are consistent across different languages.

The source: Northwestern University

A new study by researchers at Northwestern University used machine learning – a branch of artificial intelligence – to identify speech patterns in children with autism that were consistent between English and Cantonese, suggesting that speech characteristics could be a useful tool for diagnosing the condition.

Conducted with collaborators in Hong Kong, the study has yielded information that could help scientists distinguish between genetic and environmental factors that shape the communication skills of people with autism, which could help them learn more about the origins of the disorder and to develop new treatments.

Children with autism often speak more slowly than typically developing children and show other differences in pitch, intonation, and rhythm. But these differences (which the researchers call “prosodic differences”) have proved surprisingly difficult to characterize consistently and objectively, and their origins have remained unclear for decades.

However, a team of researchers led by Northwestern scientists Molly Losh and Joseph C.Y. Lau, along with Hong Kong-based collaborator Patrick Wong and his team, successfully used supervised machine learning to identify speech differences associated with autism.

The data used to train the algorithm were recordings of English- and Cantonese-speaking young people with and without autism telling their own version of the story depicted in the wordless children’s picture book “Frog, Where Are You?”
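The supervised-learning setup described above can be sketched in a few lines. This is a minimal illustration only: the feature names and synthetic values below are invented placeholders, not the study’s actual data or pipeline, and the classifier choice is an assumption.

```python
# Hypothetical sketch: supervised classification of autism diagnosis from
# prosodic features extracted from narrative speech samples.
# All feature names and numbers are invented placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder prosodic features per narrative sample:
# [speech rate, pause ratio, pitch range, pitch variability]
n = 100
X_asd = rng.normal(loc=[3.5, 0.30, 80, 20], scale=[0.5, 0.05, 15, 5], size=(n, 4))
X_td  = rng.normal(loc=[4.2, 0.22, 95, 25], scale=[0.5, 0.05, 15, 5], size=(n, 4))
X = np.vstack([X_asd, X_td])
y = np.array([1] * n + [0] * n)  # 1 = autism, 0 = typically developing

# Standardize features, then fit a linear classifier with 5-fold cross-validation
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(round(scores.mean(), 2))
```

In practice the features would be measured from audio recordings rather than simulated, but the core idea is the same: labeled samples train a classifier, and cross-validated accuracy indicates how well the features predict diagnosis.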

The results were published in the journal PLOS ONE on June 8, 2022.

Losh is the Jo Ann G. and Peter F. Dolle Professor of Learning Disabilities at Northwestern University.

“But the variability we observed is also interesting; it may point to speech features that are more malleable and would potentially be good targets for intervention.”

Lau added that using machine learning to identify the key elements of speech that predict autism is an important breakthrough for researchers, who have been limited by the field’s bias toward English and by human subjectivity when it comes to classifying speech differences between autistic and non-autistic people.

“Using this method, we were able to identify speech features that can predict an autism diagnosis,” said Lau, a postdoctoral researcher working with Losh in the Roxelyn and Richard Pepper Department of Communication Sciences and Disorders at Northwestern.

“The most notable of these features is rhythm. We hope this study will form the basis of future autism work that draws on machine learning.”

The researchers believe their work has the potential to contribute to a better understanding of autism. Lau said AI could make autism diagnosis easier, helping to reduce the burden on healthcare professionals and making diagnosis accessible to more people. It could also provide a tool that might one day transcend cultures, thanks to a computer’s ability to quantitatively analyze words and sounds regardless of language.


Because the speech features identified through machine learning include both features common to English and Cantonese and language-specific ones, Losh said, machine learning could be useful for developing tools that not only identify aspects of speech suited to therapeutic intervention, but also measure the effect of those interventions by assessing a speaker’s progress over time.

Finally, the study’s results could inform efforts to identify and understand the role of specific genes and brain-processing mechanisms involved in genetic susceptibility to autism, the authors said. Ultimately, their goal is to paint a more complete picture of the factors underlying the speech differences of people with autism.

“One of the brain networks involved is the auditory pathway at the subcortical level, which is closely linked to differences in how speech sounds are processed in the brain by individuals with autism compared with those who are typically developing, across cultures,” Lau said.

“The next step will be to determine whether these differences in brain processing lead to the speech behavior patterns we observe here, and to identify the neurogenetics that underlie them. We’re excited about what’s ahead.”


About this AI and ASD research news

Author: Max Witynski
Source: Northwestern University
Contact: Max Witynski – Northwestern University
Image: The image is in the public domain

Original Research: Open access.
“Cross-linguistic patterns of speech prosodic differences in autism: A machine learning study” by Joseph C.Y. Lau et al. PLOS ONE


Summary

Cross-linguistic patterns of speech prosodic differences in autism: A machine learning study

Differences in speech prosody are a widely observed feature of autism spectrum disorder (ASD). However, it is unclear how prosodic differences in ASD manifest across different languages that show cross-linguistic variation in prosody.

Using a supervised machine learning approach, we examined acoustic features relevant to the rhythmic and intonational aspects of prosody, derived from narrative samples elicited in English and Cantonese, two typologically and prosodically distinct languages.

Our models revealed successful classification of ASD diagnosis using rhythm-relevant features within and across both languages. Classification with intonation-relevant features was significant for English but not for Cantonese.

The results highlight differences in rhythm as a key prosodic feature affected in ASD, and also show meaningful variation in other prosodic properties that appear to be shaped by language-specific differences, such as intonation.
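The cross-language finding above can be illustrated with a toy sketch: train a classifier on rhythm-relevant features from one language and evaluate it on samples from the other. Everything here is a hypothetical placeholder (synthetic data, invented feature values); it only demonstrates the shape of a cross-linguistic generalization test, not the study’s actual analysis.

```python
# Hypothetical sketch of a cross-linguistic test: fit on one language's
# rhythm features, score on the other's. Data are invented placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_samples(n, asd_shift):
    """Simulate two rhythm-relevant features (e.g., speech rate, pause ratio)."""
    td = rng.normal(loc=[4.0, 0.22], scale=[0.4, 0.04], size=(n, 2))
    asd = rng.normal(loc=[4.0 - asd_shift, 0.22 + asd_shift / 10],
                     scale=[0.4, 0.04], size=(n, 2))
    X = np.vstack([asd, td])
    y = np.array([1] * n + [0] * n)  # 1 = ASD, 0 = typically developing
    return X, y

# Assume the ASD-related rhythm shift is similar in both languages
X_en, y_en = make_samples(80, asd_shift=0.6)  # "English" samples
X_ct, y_ct = make_samples(80, asd_shift=0.6)  # "Cantonese" samples

clf = LogisticRegression().fit(X_en, y_en)  # train on one language
acc = clf.score(X_ct, y_ct)                 # evaluate on the other
print(round(acc, 2))
```

If a classifier trained on one language transfers to the other, that is evidence the feature reflects a language-general marker, which is the logic behind the rhythm result reported here.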
