Oct. 21, 2025

Chat- Can You Write Our Next Episode?

In this episode, we examine the transformative role of AI, specifically large language models (LLMs) like ChatGPT, in speech therapy and clinical practice. While we didn't have Chat write our episode, we do discuss how ChatGPT and other AI tools help our practice.

We delve into how these AI tools enhance clinical efficiency and support evidence-based practice, particularly in assessing dysarthria severity. Our discussion covers a systematic review showcasing AI's capability to classify speech disorders objectively, addressing the challenges of traditional assessment methods. We explore innovative technologies like VoiceIt that improve communication for patients with motor speech impairments and discuss a case study on generating customizable therapy materials through AI. Throughout, we highlight the efficiency gains from AI integration, urging clinicians to embrace these advancements to enhance patient engagement and therapy outcomes.

In this episode, we’re diving into the world of AI and how it’s showing up in speech-language pathology. We looked at two articles—one on using AI to rate dysarthria severity, and another on using ChatGPT to help make therapy materials. We’ll break down the basics of machine learning and deep learning, talk about what works (and what’s still kind of clunky), and share how we’ve been using these tools in real-life sessions. Whether you’re AI-curious or already experimenting, this one’s for you.

You’ll learn:

  • The difference between machine learning and deep learning in speech assessment

  • How AI models can rate dysarthria severity with up to 90% accuracy

  • Why acoustic features like pitch, jitter, and shimmer are key inputs in AI analysis

  • How SLPs can use ChatGPT to generate therapy prompts for speech, language, and cognition

  • The limitations of AI, including hallucinated references and lack of language comprehension

  • Practical ideas for applying AI-generated content to your caseload

  • Why AI won’t replace SLPs—but can absolutely make our jobs easier

Get in Touch: hello@speechtalkpod.com

Or Visit Us At: www.SpeechTalkPod.com

Instagram: @speechtalkpod

Part of the Human Content Podcast Network

 

Learn more about your ad choices. Visit megaphone.fm/adchoices

Speaker1:
[0:16] Hi, everyone. I'm Emily.

Speaker0:
[0:17] And this is Eva.

Speaker1:
[0:19] And you're listening to Speech Talk.

Speaker0:
[0:21] We're your research book club so you can do evidence-based practice in practice.

Speaker1:
[0:25] So let's start talking.

Speaker0:
[0:27] All right, you guys. ChatGPT and all of his buddies are like a thing now. And because we like learning about how to improve our clinical efficiency, working smarter, not harder, we thought we'd do an episode about LLMs, or large language models.

Speaker1:
[0:43] Yeah, well, there's a bunch of research I'm sure will be done in the coming years as AI becomes more mainstream in our field. We wanted to review a few things about AI in the field as it currently relates to evaluation and treatment.

Speaker0:
[0:55] And frankly, I feel like I should know a little bit more about this machine learning versus deep learning because my husband talks about it all the time. But apparently I'm not listening to him. Anywho.

Speaker1:
[1:06] No, and Eva, your husband is the one who, like, first introduced me to all of this stuff. You were like, oh, have you heard about AI? And I said no. And you were like, my husband, he knows all things. And I was like, okay, let me look into it.

Speaker1:
[1:22] If Eva's husband is into AI, then I need to look into this.

Speaker0:
[1:29] Yeah. So this week we looked at two articles that explore how AI is used in the field in very different ways. The first is about how researchers are using AI to assess dysarthria severity levels. And the second looks at how to use ChatGPT to create quick, customizable therapy activities.

Speaker1:
[1:47] So the first article: "Detection of Dysarthria Severity Levels Using AI Models: A Review," by Rashish Kumar et al.

Speaker0:
[1:54] So this is a systematic review covering 44 articles related to the classification of dysarthria based on severity and intelligibility.

Speaker1:
[2:03] I chose this article partially because rating dysarthria severity is challenging for me. The researchers first looked at several tests that clinicians use for diagnosing and assessing dysarthria, in order to provide context on the AI approach to dysarthria assessment. And one of the ones they talked about was the Frenchay. I think I've used that one time, but it is very time-consuming, and I end up writing goals based mostly on my perceptual observations anyway.

Speaker0:
[2:34] Yeah, and researchers agree on that. They point out that clinical techniques, which are really human methods, are both time-consuming and subjective, which led them to wonder whether AI methods of analysis could assess severity automatically and be diagnostically accurate, consistent, and objective. I know that for me, when I'm working with people with motor speech impairments, objectivity is so hard, particularly because when you are talking to someone, if they are really good at compensating for their motor speech, like they're really pragmatically engaged or they use gestures appropriately, that kind of supplements their intelligibility. Or, as you're getting used to hearing them, you start to think they are more intelligible than they are.

Speaker1:
[3:23] Yeah, dang, dang, dang. As you were saying that, I was thinking the exact same thing. I'm always like, do I just have a better ear for how these people talk? Or is it that I've worked with them so much that now they went from profound to mild? And then sometimes I'm like, well, you know, they're not that bad. And then I'm doing stuff and they're starting to get better. And I'm like, well, no, now they're not that bad; before, they were pretty bad. But I feel like we've made some progress, and now it's not that bad.

Speaker0:
[3:51] I had a patient with both aphasia and dysarthria, and we got in a great communication groove. I was doing her progress notes and I was like, oh, she's made some progress, which she had. In total fairness, she had made a ton of progress, but I was rating her intelligibility way higher. And then I saw her in the hallway talking to someone, and the person was so bewildered. They had no idea what she was talking about. And I was like, oh, yeah, this was an important moment for me to see, because nobody knows what she's saying. Yeah.

Speaker1:
[4:26] Yeah, it's like, it is hard to kind of tone down our own personal biases when it comes to the patients that we love, and they think we're doing so great. And then it's like, oh, no, that person does not know what's happening.

Speaker0:
[4:41] And people can make progress and it still doesn't necessarily have a lot of functional carryover in the day-to-day. And you're like, yes, you are hitting the initial consonant more frequently; on the other hand, everything else still sounds kind of like gibberish. So while that is a huge gain, there's still not a ton of functional carryover yet. Sad face.

Speaker1:
[5:04] Back to the article, though. In order to understand what they looked at with AI, we're going to take a second to describe the two types of AI that are getting traction in the field. The first is machine learning, which is a type of AI that mimics how humans learn by collecting information and using that information to make predictions. If you want to know more about these types, we'll throw the links in the descriptions.

Speaker0:
[5:28] Yeah. So the predictive models were trained on a dataset of acoustic features from dysarthric speech samples and the severity ratings those speech samples were given by SLPs. The predictive models learned the relationships between the acoustic features and the severity ratings, and then were able to predict the severity of new dysarthric speech samples. When Emily and I were in school together, we had to do all those MBSImP videos, and you just, like, watch two million swallow videos. And all of a sudden you start to get this kind of intuition of, was that really bad? Is that person likely going to get pneumonia, or are they just going to cough a little bit? So it's kind of like that. They just give it a ton of information, and it begins to recognize the pattern between severity ratings and what it's hearing in the acoustic samples. Right.
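
For the code-curious, here's a minimal sketch of what that machine-learning setup can look like in Python. The feature names, the toy random data, and the random-forest choice are ours for illustration; they are not the specific models or datasets from the review.

```python
# Minimal sketch of a machine-learning severity rater (illustrative, not the review's models):
# each row is a vector of acoustic features from one speech sample, and the label is the
# severity rating an SLP gave that sample. Toy random data stands in for a real dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # e.g. [mean_f0, jitter, shimmer, speech_rate, pause_ratio]
y = rng.integers(0, 3, size=200)     # severity class from the SLP: 0=mild, 1=moderate, 2=severe

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)                              # learn the feature -> severity mapping
print(accuracy_score(y_test, model.predict(X_test)))     # agreement with held-out SLP ratings

new_sample = rng.normal(size=(1, 5))                     # features from a new recording
print(model.predict(new_sample))                         # predicted severity class
```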

Speaker1:
[6:16] The second is deep learning, which is technically a subset of machine learning that uses many, potentially hundreds or thousands of, layers in its, quote, neural network in order to simulate complex decision-making like the human mind. With deep learning networks, researchers analyze speech signals to try and capture both temporal and spectral (remember those awful spectrograms?) patterns in speech. These models, according to the paper, produce more accurate severity assessments. The researchers note that because deep learning models can process raw speech data, this also opens the door for remote and self-administered assessments. That was a lot. So much.
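
And here's a similarly minimal sketch of the deep-learning version: a tiny convolutional network that maps a spectrogram "image" to severity classes. The architecture, layer sizes, and class count are placeholders of ours, not the networks from the papers in the review.

```python
# Toy deep-learning severity rater: a small CNN over a (mel) spectrogram.
# Architecture, sizes, and class count are illustrative only.
import torch
import torch.nn as nn

class SeverityCNN(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, n_mels, time) -- the spectrogram treated as an image
        return self.head(self.features(spec))

model = SeverityCNN()
fake_spectrograms = torch.randn(2, 1, 64, 128)   # two made-up spectrograms
print(model(fake_spectrograms).shape)            # torch.Size([2, 4]) -> severity logits
```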

Speaker0:
[7:02] You really got through that deep learning section, way to go. And if you're wondering what aspects of speech the AI models were trained on, the answer is everything. They looked at audio, images, video, and text, which includes, let me take a deep breath here: pitch, rate, pause, duration, stress, vocal shape, fundamental frequency, jitter, shimmer, spectrograms, movement of the lips during speech, and phonetic characteristics of speech. I did it. One breath.
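
If you're curious how a few of those features actually get pulled out of a recording, here's a short sketch using librosa and the parselmouth wrapper around Praat. The file path, pitch range, and Praat parameters are placeholders and common defaults we chose, not the settings used in the studies reviewed.

```python
# Sketch of extracting a few of the acoustic features listed above from one recording.
# "sample.wav" and the parameter values below are placeholder/common choices.
import librosa
import numpy as np
import parselmouth
from parselmouth.praat import call

y, sr = librosa.load("sample.wav", sr=None)

f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)            # fundamental frequency (pitch) track
mean_f0 = float(np.mean(f0))
loudness = float(np.mean(librosa.feature.rms(y=y)))      # rough energy/stress proxy
mel_spec = librosa.feature.melspectrogram(y=y, sr=sr)    # the spectrogram a deep model would "see"

# Jitter and shimmer are commonly computed in Praat, here via parselmouth.
snd = parselmouth.Sound("sample.wav")
points = call(snd, "To PointProcess (periodic, cc)", 75, 500)
jitter = call(points, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
shimmer = call([snd, points], "Get shimmer (local)", 0, 0, 0.0001, 0.02, 1.3, 1.6)

feature_row = [mean_f0, loudness, jitter, shimmer]       # one row for a classifier like the one above
print(feature_row)
```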

Speaker1:
[7:32] Yeah, and all of those words give me PTSD from that class. Like the pitch, the rate, the pause, the shimmer. Oh my gosh. I don't know. Jitter and shimmer. That class is hard.

Speaker0:
[7:41] Are those even real words? Shimmer and shine. It was so much, like...

Speaker1:
[7:45] Math within little graphs. That was confusing. So I'm glad AI is looking to do this for us. But of the articles examined, the most research came from audio, suggesting that researchers valued this type of information the most when examining dysarthria. So was AI able to accurately rate dysarthria and its severity level?

Speaker0:
[8:09] So, yes, different AI approaches, machine learning versus deep learning, using different combinations of speech characteristics, as well as different combinations of audio, image, and video, yielded different levels of classification accuracy. But some of the deep learning models were able to get above 90% accuracy. And this is really incredible. As we know from generally reading research, inter-clinician agreement can be very hard to achieve. And so if we can create, you know, a process where we are consistently able to rate severity levels, one that doesn't change between clinicians, then we are certainly making progress. But now let's look at some clinical applications. Okay.

Speaker1:
Right. And that's making progress. And it's also not biased. The models don't have our little inner voice saying, like, oh, Sally's doing so good.

Speaker0:
[9:10] Yeah. Or for things like an accent. Somebody may sound more dysarthric or less intelligible to me because I don't spend a lot of time in Appalachia, so that twang is really throwing me off.

Speaker1:
[9:22] Oh, my gosh. I actually have someone who is from Kentucky, and they are so very unintelligible. And sometimes it really is their dysarthria mixing with their Appalachian accent. And what does he say? He goes, "a wrestling song," and his "wrestling," it takes me like five seconds every time to figure out what is happening. So, having a clinical basis for the severity of dysarthria with voice samples can allow you to track progress or decline. This gives you a better understanding of your patient's baseline for accurate evaluations in half the time, and the benefit of clear evaluations is that they give you the insight for better treatments. And I will mention that while this is all really well and good, and the article talks about the ease of implementing this with just a microphone, this really isn't up and running as of yet.

Speaker0:
[10:19] Yeah, in the sense that all you have to have is a microphone and the ability to process your speech samples through AI. So you'd have to really invest in learning that process. The benefits and uses of AI in general don't stop with evaluations, though. AI is coming up with some pretty amazing compensations too. There are emerging technologies that are implementing AI for our motor speech patients. For example, VoiceIt is a software plugin for your computer that patients can download, which does voice-to-text transcription for people with motor speech impairments. More technologies are also being built every day. We recommend just Googling AI-supported technology to see what is available that might be a good fit for you as a clinician, such as Open Brain AI, or for your patients, depending on what setting you're in.

Speaker0:
[11:14] Okay, that was all Article 1, y'all. Let's go on to Article 2. We also looked at something from the ASHA Wire. It was written by Lori Price, Catherine Lubinowski, and Yao Duhul. And it was very directly titled, Using ChatGPT to Create Treatment Materials.

Speaker1:
[11:31] So this article takes a case study approach. They discuss using Chat for a reading prompt to target the sound combination /sk/ in paragraphs. And it worked, but with some difficulties, like not realizing /sk/ incorporates a variety of spellings, which is hard for learning readers. The article suggests that you learn to give specific prompts, read through the results, and provide it with feedback if there are errors. So, like, you, the clinician, are reading whatever the AI provides, and then you type in the feedback for the AI to fix it.

Speaker0:
[12:05] And you know what I love is that it's never defensive. When you do that, you're like, actually, this is, you know, incorrect. It'll be like, thank you. I appreciate your corrections. I'm like, oh, great. Don't have to argue with you.

Speaker1:
[12:20] So if you're like me and potentially just super lazy, you'll print out what the AI gives you, errors and all, and just go through it with your learning reader, turning the errors into an incidental learning task.

Speaker0:
[12:36] Yeah, Emily's the queen of being like, whoops, that's not a mistake, that's a learning opportunity. And the more specific you are when giving it feedback, the better. You can give it age-based reading levels, scenario prompts, really anything you can imagine. But also know that AI is limited. The database it pulls from is huge, but at the same time, it doesn't have the ability to analyze all text-based nuances the way that we do. And in part, that is because it doesn't actually understand language. It's just really good at recognizing probable strings of words.
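
If you ever want to script that prompt, review, and feedback loop instead of using the chat window, here's a rough sketch with the OpenAI Python client. The model name, the prompt wording, and the /sk/ constraints are our placeholders, not the article's exact prompts, and any chat-style LLM interface would work the same way.

```python
# Sketch of the prompt -> clinician review -> feedback loop described above.
# Model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

messages = [
    {"role": "system", "content": "You write reading materials for a speech-language pathologist."},
    {"role": "user", "content": (
        "Write a 5-sentence paragraph at a 2nd-grade reading level with at least 10 words "
        "containing the /sk/ sound spelled only as 'sk' (no 'sc' or 'sch' spellings)."
    )},
]

draft = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
text = draft.choices[0].message.content
print(text)  # the clinician reads this and checks the spellings

# Feed the correction back as the next turn, just like typing it into the chat box.
messages.append({"role": "assistant", "content": text})
messages.append({"role": "user", "content": "Two of those words use 'sc'. Replace them with 'sk'-spelled words."})
revised = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(revised.choices[0].message.content)
```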

Speaker1:
[13:14] And yeah, I think that's a part that is really important to hammer on a little bit. Like, it is still just software, right? It doesn't recognize language. So it really does still need us. This is the part where speech therapists are not losing their jobs to AI, because AI does not understand language. It needs you to go back through and sift through what's going on to make sure

Speaker0:
It's accurate. Yeah. And you can improve its accuracy. You can upload evidence, so give it research articles if you're using a subscription-based platform, and it will begin to incorporate that material. I think we've all heard by now that AI has made up research articles when tasked to cite evidence. So it's important, when using AI in treatment, that you know the model you are working from and are just using the tool for materials.

Speaker1:
[14:07] Right. So this article is focused on kids, aren't they all?

Speaker0:
[14:12] Everybody loves pediatrics. It's like people like kids or something. What is it, this whole next generation situation?

Speaker1:
[14:19] So we went through and applied the fundamental scenarios in the article to some situations and goals that we have used for our clients. So, safety: providing specific scenario situations, like a resident's space in a nursing home, and ways that they can solve them. Also creating signs. I had one client that was pretty unsafe, and we wanted to make sure that, if there were signs, she could, one, recognize the sign, and two, follow that incidental usage. I was able to pull lots of safety signs easily for her. Spoiler alert: she wasn't safe, and her recommendation was a 24-hour sitter, but it was something where we were able to say that, you know, we tried this too.

Speaker0:
[15:00] Yeah. And do not knock that part of the process. Part of what we do is creating those recommendations. We're here to trial, you know, are they stimulable for environmental cues? That's very important information in terms of their overall plan of care or a safe transition home. So even if you put up all the signs and it doesn't work, that's still useful. I did one for a patient who would wander into other people's bathrooms and made a sign for her bathroom. It didn't work, but now we know she's not visually stimulable. All right, so we did safety. Let's talk about language. Using the same framework as the article, we can create word lists or stories. I've used it for word lists for aphasia patients, for generative naming or categorization. So it'll be like, give me 10 words in different categories across things that you would see every day, and it'll be like truck, car, cab, train, so on and so forth. And I'm like, all right, what do these words have in common? Oh, they're all transportation. Or the inverse: I'll ask it for a bunch of category topics and then see if my patient can create semantically related words in a generative naming task.

Speaker1:
[16:13] Yeah, I've used apps like the Tactus app, which I do really love. But for their abstract concept tasks, I found myself having trouble thinking of what really is something that's useless, which is something that they really ask for. So then I go back to something like Chat for that, because you can't Google "what is something useless," but the AI produces a better, more thorough list for those things.

Speaker0:
[16:40] Yeah. And again, it comes back down to efficiency. We have such limited time for our patients. And so being able to say, hey, I know what my goal is for today, I need things to target this, and it comes up with it in a matter of seconds, whereas it would take me five to ten minutes to think of and write out lists of activities. And that's a large portion of their session time.

Speaker1:
[17:06] So, speech: we could target repetitions of sounds and phrases. So again, the same sound-structure prompts, or phrase-starter sentences for drill practice to elicit stabilization. So this could be used for our voice clients to create word lists targeting easy onsets, or for our dysarthria clients for word lists to drill over-articulation.

Speaker0:
[17:29] Yeah. And cognition: I use ChatGPT all the time to create, like, fake voicemails, calendar appointments, fake pill prescriptions, really anything else that is kind of ADL-related where I need, like, 10 examples that I do not have the time or creativity to write on my own. But actually, flipping back to something you said, Emily, about starter sentences, that's also a really great aphasia one. I have a patient who really struggles with getting out his needs, but the carrier phrase "I need" we've been training over and over again using icons, and it's really helped. Now, if I say, "I need," a lot of times that can kickstart what he is trying to say for himself. I think that sentence made sense.

Speaker1:
[18:24] It did. It made sense.

Speaker0:
[18:28] Yeah. So again, and I think the more that we use it, the more we realize the application. So if you haven't been using AI to generate treatment activities, just start playing around with it. I think you'll find that it starts to come to you pretty easily and you start getting more and more creative with it.

Speaker1:
[18:51] Yeah. And I mean, in general, the two applications we talked about, Chat and Gemini, are really super user-friendly. And you can type in whole sentences to bring about something. I know a lot of my therapy comes from my tablet because a lot of what I have on my tablet was gifted by my beautiful clinical, what is that called?

Speaker0:
[19:17] Clinical supervisor.

Speaker1:
[19:18] But sometimes it really, it still takes me time to go through all of the materials I have. So to just quickly type something into Chat saves that time too. So it really is just about saving time, being functional, and being flexible. Sometimes I'm in therapy and somebody's talking to me about something they are really struggling with, and then it totally flips whatever I was planning to do on its head. Having Chat, I can be like, okay, so we're not going to be doing these starter sentences today. Today you really need to tell people that, um, you need, like, really super functional sentences. So we're going to type into Chat: what are super functional sentences, five to six words, that we can say? And we can practice doing those functional sentences.

Speaker0:
[20:13] And because of how quick it is, you can really move through them. If your patient is like, I'm having trouble communicating my needs to nursing, you can be like, Chat, give me 10 highly functional phrases for a bedbound patient who needs to communicate hygiene needs. It gives you 10. You read them out to your patient. The patient can say yes or no to the phrases that they find helpful. If they want more, you can be like, give me 10 more, Chat. Give me 10 more. Or the patient can provide feedback and be like, oh, I need more help with ones related to toileting, I need more help with ones related to food. You can give it that feedback, and it'll give you 10 related to food, 10 related to toileting. And it just moves so fast that I think you can really make a ton of progress. All right.
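
That "give me 10, now 10 more" pattern is easy to wrap in a tiny helper if you ever want it outside the chat window. Here's a hedged sketch; again, the model name and the prompt wording are placeholders of ours.

```python
# Sketch of keeping one running conversation so in-session follow-ups stay fast.
# Model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You generate short, functional phrases for speech therapy."}]

def ask(request: str) -> str:
    """Send one request, remember both sides of the exchange, and return the reply text."""
    history.append({"role": "user", "content": request})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

print(ask("Give me 10 highly functional 5-6 word phrases for a bedbound patient communicating hygiene needs."))
print(ask("Give me 10 more, focused on toileting."))
print(ask("Now 10 more, focused on food and drink."))
```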

Speaker0:
[21:00] So we have looked at AI and its ability to diagnose the severity of dysarthria, and we've looked at a lot of clinical applications for creating customizable experiences in treatment sessions. And if you want to learn more, we'll have the ASHA symposium video linked here. So until next time, may your treatments be good and your documentation be automated.

Speaker1:
[21:27] You've been listening to Speech Talk.

Speaker0:
[21:29] Thank you, everyone, for coming to listen to our research book club. Until next time, keep learning and leading with research.

Speaker1:
[21:35] If you like this episode and you want to give us some love, please rate us on your favorite podcasting app. Leave a review and tell the world, because as podcasters, our love language is in positive affirmations.

Speaker0:
[21:47] If you have a research topic you want us to cover, or you have episode comments, clinical experience you want to share, or just want to send us some love letters, send us an email at hello@speechtalkpod.com.

Speaker1:
[22:00] If you want even more Speech Talk content, check out our website at speechtalkpod.com, where you can find all of the resources we made for you, copies of the articles covered, and Eva's blog following these topics and more. We're your hosts,

Speaker0:
[22:14] Eva Johnson and Emily Brady.

Speaker1:
[22:16] Our editor and engineer is Andrew Sims.

Speaker0:
[22:19] Our music is by Omar Benzvi.

Speaker1:
[22:20] Our executive producers are Erin Corney, Rob Goldman, and Shanti Brooke.

Speaker0:
[22:25] To learn about Speech Talk's program disclaimer and ethics policy, verification and licensing terms, and HIPAA release terms, you can go to speechtalkpod.com slash disclaimers.

Speaker1:
[22:37] Speech Talk is a proud member of the Human Content Podcast Network.