When Technology Doesn’t Listen: Improving Voice AI for the Stuttering Community

Hope Gerlach-Houck, Caryn Herring, Ann Marie Ryan, J. Scott Yaruss, Nihar Mahapatra, Dena Mujtaba, Megan Arney, Jia Bin, Chelsea Johnson


About the Authors: 

Dr. Hope Gerlach-Houck is an assistant professor at Western Michigan University. Her research explores communication stigma and ableism and focuses on making a meaningful impact on the lives of people who stutter. She serves as the Publicity and Marketing Lead for HeardAI.

Caryn Herring is a person who stutters, a speech-language pathologist, and a doctoral candidate at Michigan State University. Her research interests include the process of desensitization and reducing the adverse cognitive and affective components of stuttering. She serves as the Accessibility and Fairness Lead for HeardAI.

Dr. Ann Marie Ryan is a professor of Organizational Psychology at Michigan State University. Her research explores fairness in hiring processes and ways to foster workplace inclusion. She serves as the Policy and Advocacy Lead for HeardAI and brings insights on the use of automatic speech recognition in work contexts, as well as on workplace accommodations for those with speech differences.

Dr. J. Scott Yaruss is a professor of Communicative Sciences and Disorders at Michigan State University. His NIH- and NSF-funded research focuses on reducing the adverse impact that people who stutter experience in their lives. He serves as the Stuttering Data Lead for HeardAI.

Dr. Nihar Mahapatra is an associate professor at Michigan State University and the principal investigator for HeardAI. His research focuses on AI accessibility for disability inclusion and environmental sustainability, with a special emphasis on voice AI for people with speech differences and smart species ID for marine conservation. Nihar has also explored AI-driven recruitment to shape the future of work and modeled protein-ligand interactions for early-stage drug discovery.

Dena Mujtaba is a doctoral student at Michigan State University studying Electrical and Computer Engineering. Her research interests include AI fairness, automatic speech recognition, and natural language processing. She works as a research assistant for HeardAI.

Megan Arney is a speech-language pathologist and doctoral student at Michigan State University. Her research interests include stuttering, specifically its impact on a person’s life. She works as a research assistant for HeardAI.

Jia Bin is a doctoral student at Michigan State University. Her own journey as a person who stutters and an intercultural communicator motivates her to pursue a doctorate, to help people who stutter, and to bring people from different cultural backgrounds together. She works as a research assistant for HeardAI.

Dr. Chelsea Johnson is the project manager for HeardAI. She is passionate about increasing accessibility for people who stutter and strives to create an inclusive world where everyone has equal opportunities and access to the resources they need.

Listening is critical for meaningful interactions, building connections, and meeting personal needs. Good listeners can make all the difference in the day-to-day lives of people who stutter. Unfortunately, people are not always aware of how to be good listeners, particularly when speaking with someone who stutters. Poor listening can take many forms, including interrupting, finishing sentences, not attending to what someone is saying, and more. When asked to imagine a “bad listener,” most people would picture a person, but technology can also demonstrate poor listening skills that result in inconvenience, inaccessibility, and inequity for people who stutter. In the context of voice technology, the consequences of “bad listening” can include device activation failures, inaccurate transcriptions, misrecognized commands, and more.

Voice AI technology is integrated into billions of devices, revolutionizing life and work for many. Yet, for the more than 80 million people worldwide who stutter, along with over 180 million additional people with other communication differences, these technologies often fall short. Unfortunately, existing automatic speech recognition (ASR) systems, such as those found in our phones (e.g., Siri), in phone trees (e.g., “Say in a few words why you are calling”), and in the automated processing of job interview responses, cannot consistently understand stuttered speech. In the workplace, this can result in unfair screening of job applicants based on incorrect inferences drawn from poorly transcribed interviews. In other contexts, speech recognition failures can result in incorrect purchases or charges, wrong directions, and increased frustration for people who stutter. Thus, the power and benefits of improved listening are relevant not only for human listeners but also for digital ones.

We are a team of interdisciplinary researchers, people who stutter, speech therapists, engineers, and psychologists who are dedicated to solving this problem through our new initiative, HeardAI. Funded by the National Science Foundation, HeardAI is a technology and advocacy project with the mission of making voice AI more broadly usable and equitable. Our deep connection with the stuttering community is a key differentiator of our work. People who stutter serve as partners, advisors, and participants in design decisions. They contribute data for training our AI and provide feedback on all aspects of our project, ensuring that our solutions are effective and truly inclusive. Our leadership team and primary research assistants include people who stutter and allies, thereby reinforcing our commitment to the community and ensuring that their perspectives are represented at the highest level. Learn more about our vision and approach on our website: https://www.heardai.ai/.

In order to create accessible technology, stuttered voices need to be represented in the datasets that voice AI systems are trained on. If you are an adult (18+) who stutters, is proficient in English, and currently lives in the United States, you are invited to participate in our study, which involves collecting speech samples from people who stutter. Please contact us at the email below for more information. If you are interested but not located in the United States, please know that we plan to open this study to other countries soon.

Beyond participating in our study, we would also love to learn more about your personal experiences with voice technology. Using the discussion threads for this post, please tell us about your experiences with voice AI, and let’s get a conversation going. Consider the following:

  • Have you had any memorable experiences with voice AI, either positive or challenging? 
  • How does voice AI’s accessibility, or lack thereof, influence your daily life or work? 
  • How does the theme of “The power of listening” resonate with your experiences with voice AI? 
  • What features or improvements would you like to see in voice AI to better support your needs? 

If you have questions or would like to get in contact with us, email us at HeardAI.team@heardai.ai  



Comments


  1. I thoroughly enjoyed reading about this new innovative technology. What features has the team improved upon thus far to increase accessibility and inclusivity? Thank you!

    • Thank you so much, Lizprice, for your message! We’re thrilled to hear that you enjoyed learning about our project. At HeardAI, we’re committed to ensuring accessibility across all touchpoints (functionality, usability, equity, and efficiency), guided by the voices of people who stutter.

      In Phase 1, we hosted a symposium, conducted barrier studies, developed a guidance framework, created a data collection app, and built initial voice AI models designed specifically for people who stutter, laying a strong foundation for inclusivity. In Phase 2, we’re focused on refining the guidance framework, developing an evaluation testbed, and creating an adaptive voice AI app to address real-world needs. For example, our voice AI app allows users to customize speech recognition settings, enabling adjustments tailored to their individual speech patterns.

      Please keep up with our progress by visiting our website: https://www.heardai.ai/

  2. Hello team HeardAI

    This is such important work. I primarily want to express my gratitude that you are taking this challenge upon yourselves. I imagine that the technological challenge is significant, given the variety of stuttered speech and the fact that struggle is most often part of the experience of stuttering, which adds complexity to understanding and interpreting stuttered speech.

    The following situation, which I have experienced and which I am sure is common, intrigues me:
    – The voice system tells me to say something
    – I block, and feel stress
    – The system prompts me again, causing me more stress and more blocking
    – The system gives up, or I give up

    It will be fascinating to see how HeardAI handles this situation, and how it differentiates this from, say, the phone line actually being dropped or the phone connection actually being interrupted.

    Wishing you much success.

    Hanan

    • Hello, Hanan,

      Thank you so much for your kind words and for sharing your experience with us. Hearing about the challenges you’ve faced with current voice systems really resonates with us.

      Your experience with repeated prompts during a block highlights exactly why we’re developing HeardAI to be more responsive and supportive. Instead of interrupting or cutting off too quickly, HeardAI will allow for those natural pauses without adding extra prompts, giving you the time and space to speak comfortably at your own pace.

      We’re also working to train the AI to differentiate between a stuttered pause and an actual connection issue. With enough data, the system will be able to distinguish these moments, ensuring it doesn’t jump to conclusions or add unnecessary pressure. Insights like yours drive us to create a voice AI that genuinely respects and adapts to each person’s unique way of communicating.
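      To make the idea concrete, here is a rough sketch of what such a “patient listening” policy could look like in code. The names, thresholds, and interfaces below are made up purely for illustration and are not our actual implementation; they simply show the two behaviors described above: waiting through long pauses without re-prompting, and treating only a dead audio stream as a dropped connection.

      ```python
      # Illustrative sketch only: a stutter-aware end-of-turn policy.
      # The stream/recognizer interfaces, names, and thresholds are hypothetical
      # and do not reflect HeardAI's actual implementation.

      import time
      from dataclasses import dataclass


      @dataclass
      class ListenResult:
          status: str          # "speech_ended" or "connection_lost"
          transcript: str = ""


      def listen_with_patience(stream, recognizer,
                               silence_timeout_s=8.0,   # far longer than a typical 1-2 s cutoff
                               poll_interval_s=0.1):
          """Keep listening through long pauses (e.g., blocks) without re-prompting."""
          transcript = []
          last_speech_time = time.monotonic()

          while True:
              # Hypothetical stream API: returns audio frames, or None if the stream died.
              chunk = stream.read(poll_interval_s)
              if chunk is None:
                  # No audio frames arriving at all -> the call was dropped, not a pause.
                  return ListenResult("connection_lost", " ".join(transcript))

              # Hypothetical incremental recognizer: returns any newly recognized words.
              text = recognizer.accept_audio(chunk)
              if text:
                  transcript.append(text)
                  last_speech_time = time.monotonic()  # reset the pause clock on any speech

              # Silence alone never triggers a re-prompt; the turn only ends after a
              # generous timeout, giving the speaker room to move through a block.
              if time.monotonic() - last_speech_time > silence_timeout_s:
                  return ListenResult("speech_ended", " ".join(transcript))
      ```

      The key design choice in this sketch is that silence by itself never cuts the speaker off or triggers another prompt; only the absence of the audio stream itself is treated as a lost connection.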

      Thank you again, Hanan, for sharing this with us – it truly shapes our work in meaningful ways.

    • Hi Hanan! Thanks for sharing your experience. You’re really getting at the heart of the challenge here, which is that the issue is more than being “just” misunderstood. Just like listeners need to learn to be affirming and adjust to do their part when interacting with people who stutter, so does technology. Hopefully we will run into each other soon at a conference so we can discuss in more depth. Take care and congrats on your book!!! Hope.

  3. Thank you for the work you are doing with Heard AI. I have felt frustrated when stuck in loops of AI or automated systems that do not understand my stuttered speech. I look forward to hearing more about your work.
    Jeff

    • Thank you so much, Jeff, for sharing your experience and for your invaluable contribution in piloting our data collection app. We completely understand the frustration you’ve felt with current AI systems that don’t account for stuttered speech, leading to repetitive loops and miscommunication. Your involvement is helping us make HeardAI uniquely tailored to support stuttered speech – a differentiator that truly sets this project apart.

      Your support and feedback directly fuel our mission to make voice AI genuinely accessible for everyone. We’re excited to keep you updated on HeardAI’s progress and deeply appreciate your role in shaping this work. Please stay connected with us by visiting our website: https://www.heardai.ai/. Thank you again, Jeff!

  4. Hi all, and thank you for this exciting project. I’ve initiated a project in Sweden, “Speech as a digital tool,” as I was fed up with answering machines that don’t understand me or that don’t give me time to say what I want to say. Even technical devices such as Google/Siri, my GPS, and my TV either don’t understand my speech or give a reply that’s not at all what I wanted, simply because they didn’t let me finish. Because even technology needs to learn how to listen. 😉 And this is not only for PWS/PWC, but also for people with an accent, a different mother tongue, or even people who are temporarily hoarse. I’m interested to learn how your project will get AI to get us. 😉

    • Thank you so much, Anita, for your valuable insights and for being such a supportive friend. We wholeheartedly agree that enhancing voice AI to better serve not only people who stutter but anyone with a speech difference is crucial. Our initial focus is indeed on stuttered speech, allowing us to build a solid foundation that we can expand upon to include other speech differences as we advance.

      Your encouragement really motivates us. We’ll be sure to keep you updated every step of the way, and we’re looking forward to more great conversations and ideas together!

      Jia

  5. Hi, I am an SLP graduate student, and I wanted to say that I think it is so excellent that you are working directly with people who stutter to ensure that voice recognition becomes more inclusive for them! This also gave me important insight into how stuttering may impact the lives of my future clients who stutter – they will likely have negative experiences with voice recognition due to a lack of support and inclusivity. I need to be aware of that and always remember it. I am also a big fan of seeing how the developing field of AI can help advance assistive technology. I am interested in hearing more about the results of your study.

    • Thank you so much, Mick, for your thoughtful comment and feedback! It’s truly heartening to know that this project is helping to illuminate some of the challenges people who stutter face with voice recognition technology. We believe that the voices and experiences of people who stutter are essential to developing AI-based assistive technology that’s genuinely inclusive and effective.

      It’s also inspiring to hear your enthusiasm for AI’s potential to make a positive impact. Please feel free to stay connected with us and follow our journey on our website: https://www.heardai.ai/.

      We look forward to continuing the conversation on how AI can help create more inclusive and supportive tools for everyone.

      Jia