Sample React application for voice-guided conversation with an Amazon Bedrock large language model.
Chatbots are no longer a niche technology. They are now ubiquitous on customer service websites, providing 24/7 automated assistance. While AI chatbots have been around for years, it is recent advances in generative artificial intelligence (AI), in particular large language models (LLMs), that have enabled more natural conversations. Chatbots are proving useful across industries, handling both general and industry-specific questions. Voice-based assistants like Alexa demonstrate that we are entering an era of conversational interfaces. Typing a question may soon feel inconvenient when you can simply ask out loud instead.
We envision this solution helping people with disabilities who have difficulty typing. Voice interaction still calls for a small amount of manual input, because the application needs an explicit signal for when to start and stop listening. In this sample application, we solve this with a dedicated "Talk" button: transcription runs only while the button is held down, as sketched below. For people with severe disabilities, the same operation can be implemented with a dedicated physical button that can be pressed with a single finger or with the chin.
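The push-to-talk interaction could look roughly like the React component below. This is a minimal sketch rather than the code from this repository: it captures microphone audio with the browser's standard MediaRecorder API while the button is held, and hands the finished clip to a hypothetical `onAudioCaptured` callback. In the real application, that callback would feed a transcription step (for example, Amazon Transcribe) before sending the resulting text to the Bedrock model.

```tsx
import { useRef, useState } from "react";

// Hypothetical prop name for illustration only: the parent component would
// transcribe the audio and forward the text to the Bedrock conversation.
type TalkButtonProps = {
  onAudioCaptured: (audio: Blob) => void;
};

export function TalkButton({ onAudioCaptured }: TalkButtonProps) {
  const recorderRef = useRef<MediaRecorder | null>(null);
  const chunksRef = useRef<Blob[]>([]);
  const [recording, setRecording] = useState(false);

  // Start capturing microphone audio when the button is pressed.
  const startTalking = async () => {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    const recorder = new MediaRecorder(stream);
    chunksRef.current = [];
    recorder.ondataavailable = (e) => chunksRef.current.push(e.data);
    recorder.onstop = () => {
      // Release the microphone and hand the recorded clip to the caller.
      stream.getTracks().forEach((track) => track.stop());
      onAudioCaptured(new Blob(chunksRef.current, { type: recorder.mimeType }));
    };
    recorder.start();
    recorderRef.current = recorder;
    setRecording(true);
  };

  // Stop capturing when the button is released (or the pointer leaves it).
  const stopTalking = () => {
    const recorder = recorderRef.current;
    if (recorder && recorder.state !== "inactive") {
      recorder.stop();
    }
    setRecording(false);
  };

  return (
    <button
      onMouseDown={startTalking}
      onMouseUp={stopTalking}
      onMouseLeave={stopTalking}
      onTouchStart={startTalking}
      onTouchEnd={stopTalking}
    >
      {recording ? "Listening…" : "Talk"}
    </button>
  );
}
```

Because recording starts on press and stops on release, the same component works unchanged with any switch device that the operating system maps to a pointer press, which is what makes the single-finger or chin-operated button feasible.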

This project uses AWS Amplify. The installation and deployment steps for AWS Amplify are described here: https://docs.aws.amazon.com/amplify/latest/userguide/manual-deploys.html
You can view the accompanying AWS Blog post here.