Dr. B. Prabhakaran of the Computer Science Department at UT Dallas is part of the team giving a boost to high-tech speech treatments.
Dr. Thomas Campbell, executive director of the Callier Center, and his colleagues are using electromagnetic articulography data to create a real-time computerized representation of a tongue’s movements. Patients can see how they move their tongues when they talk, which could help them improve their speech.
Researchers at The University of Texas at Dallas have received three grants from the National Institute on Deafness and Other Communication Disorders aimed at treating a variety of speech disorders.
The multidisciplinary effort will focus primarily on improving speech communication, predicting verbal deficits in patients with amyotrophic lateral sclerosis and improving diagnostic testing for speech disorders in children.
Dr. Thomas Campbell, the Ludwig A. Michael, MD, Callier Center Executive Director and Sara T. Martineau Professor, is an investigator on all three projects.
The projects will be conducted through UT Dallas’ Communication Technology Center (CTech), which fosters interdisciplinary collaboration and research and serves as an incubator for technology projects focused on communication disorders.
“CTech is different from other programs because we develop the technologies to study communication and also because we combine the efforts of several specialties that could not accomplish these projects on their own,” Campbell said. “We are truly interdisciplinary in our approach to developing communication technology.”
CTech is made up of researchers from the School of Behavioral and Brain Sciences, the Erik Jonsson School of Engineering and Computer Science and the School of Arts and Humanities.
Campbell and his colleagues received a small-business grant to develop software that uses the data from electromagnetic articulography to create a computer-generated representation of a person’s tongue movements. Electromagnetic articulography detects the movements of oral sensors in an electromagnetic field. The display would provide patients with a real-time view of how they are moving their tongues when they talk, which could help them improve their speech.
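The article does not describe the software's internals, but the idea of turning a stream of sensor positions into a stable real-time display can be sketched in a few lines. This is a hypothetical illustration: the class name, the two-dimensional coordinates, and the smoothing window are all assumptions, not part of the team's actual system.

```python
from collections import deque

class TongueSensorSmoother:
    """Hypothetical sketch: smooth a stream of (x, y) tongue-sensor
    positions with a short moving average so an on-screen tongue
    model does not jitter as raw sensor readings arrive."""

    def __init__(self, window=5):
        # Keep only the most recent `window` samples.
        self.samples = deque(maxlen=window)

    def update(self, x, y):
        """Add one raw sensor reading; return the smoothed position."""
        self.samples.append((x, y))
        n = len(self.samples)
        return (sum(p[0] for p in self.samples) / n,
                sum(p[1] for p in self.samples) / n)

# Feed a few slightly noisy samples; the smoothed point is what a
# display loop would use to draw the tongue model each frame.
smoother = TongueSensorSmoother(window=3)
for sample in [(10.0, 5.0), (10.4, 5.2), (9.8, 4.9)]:
    point = smoother.update(*sample)
print(point)
```

In a real system the smoothed coordinates would drive a graphics frame redrawn many times per second; the averaging step is one common way to keep such feedback displays visually steady.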
Dr. William Katz, co-investigator and professor at the Callier Center for Communication Disorders, said that healthy speakers know how to control their tongues to make the right sounds. But people with apraxia of speech can have trouble with this process. They typically know what they want to say but have difficulty making their muscles perform correctly, causing sounds to come out wrong.
“It’d be like trying to grab for a cup without being able to see your hand. All you know is you didn’t get it,” Katz said about speech training without visual feedback. “Our approach is to give the patient additional real-time information about tongue movement during speech. The goal is to improve their tongue positioning behavior through self-correction and practice.”
A second grant will use the same electromagnetic articulography technology to predict the progression of ALS, also known as Lou Gehrig’s disease. ALS is a neurodegenerative disease that leads to paralysis and death.
As ALS progresses, the motor neurons enabling speech eventually die, robbing the patient of the ability to speak. Using electromagnetic articulography, researchers hope to improve their ability to reliably monitor changes in tongue movement in ALS patients. This information may allow future software to recognize intended words and create an avenue for continued communication.
The last grant focuses on developing new measures for diagnosing speech disorders in young children. Knowing specifically how a child is having difficulty producing speech may give clinicians a better chance to develop targeted treatments.
Campbell and his colleagues will test new software developed to aid speech-language pathologists in the diagnosis process. The main test for the software is whether it will detect the subtle speech characteristics that indicate different speech disorders, such as the length of pauses between words or the accuracy of diction.
Speech sounds recorded from 200 children will be submitted to the software, which will conduct spectral and temporal analyses on the sounds to predict disorders. By comparing the computer analysis to clinicians’ diagnoses, the researchers hope to refine the software to increase its reliability and accuracy. Eventually, they hope the software will automate the diagnosis process for clinicians.
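The comparison the researchers describe, software predictions checked against clinicians' diagnoses, can be sketched with toy data. Everything here is a hypothetical illustration: the pause-length measure, the threshold, and the function names are assumptions standing in for the study's actual spectral and temporal analyses.

```python
# Hypothetical sketch: one simple temporal measure (mean pause length
# between words) flags a possible disorder, and an agreement rate
# compares those flags to clinicians' diagnoses.

def predict_disorder(pause_lengths, threshold=0.6):
    """Flag a possible disorder when the mean between-word pause
    (in seconds) exceeds an assumed threshold."""
    mean_pause = sum(pause_lengths) / len(pause_lengths)
    return mean_pause > threshold

def agreement_rate(predictions, clinician_labels):
    """Fraction of cases where the software matches the clinician."""
    matches = sum(p == c for p, c in zip(predictions, clinician_labels))
    return matches / len(clinician_labels)

# Toy data for three children: measured pause lengths per recording,
# and the clinician's diagnosis (True = disorder identified).
recordings = [[0.3, 0.4, 0.2], [0.9, 1.1, 0.8], [0.5, 0.7, 0.6]]
clinician = [False, True, True]

predictions = [predict_disorder(r) for r in recordings]
rate = agreement_rate(predictions, clinician)
print(rate)
```

Refining the software, as the article describes, would mean adjusting measures and thresholds like these until the agreement rate with clinicians is high across the full set of 200 recordings.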
Other UT Dallas researchers and centers involved in the studies are Dr. Robert Rennaker II, Dr. William Katz, Dr. Jun Wang, Dr. Balakrishnan Prabhakaran, Eric Farrar and the Texas Biomedical Device Center.
Co-researchers on the projects will include Dr. Jordan Green from the MGH Institute of Health Professions; Dr. Lawrence Shriberg from the University of Wisconsin; and Vulintus, a biomedical research and development company.