
Revolutionizing Hands-Free Inspections: The Journey to Building Our Voice Assistant

Updated: Nov 20

In today's fast-paced world, efficiency is key. Imagine a scenario where hands-free inspection becomes a reality, allowing users to focus on capturing crucial moments through pictures rather than fumbling with buttons. This vision is no longer futuristic; it is within our grasp thanks to Large Language Model (LLM), Speech-to-Text, and Text-to-Speech technologies. In this blog post, we will take you through how we implemented a voice assistant, from the initial concept to a functioning tech demo.


Building the Foundation with OpenAI & Amazon Polly:

Our journey commenced with the integration of OpenAI's chat-completion model (gpt-3.5-turbo in this case) and Amazon Polly's text-to-speech service. The chat-completion API acts as the cognitive core of the system, comprehending and interpreting user input with remarkable precision, while Amazon Polly's simple API and natural voice rendering set the stage for an immersive, user-friendly spoken interaction.

 

In addition to answering users' queries, OpenAI's function calling brings a second capability to our application: executing commands within our system. We describe the commands the assistant may trigger, and when a spoken phrase matches one of them, the model returns a structured call that our application executes directly. (For an in-depth demonstration of how function calling works, we invite you to visit this link.) This streamlined approach keeps the assistant adaptable to a diverse range of user inputs: spoken intent is translated into practical execution, delivering a responsive, intuitive hands-free inspection experience.
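To make this concrete, below is a minimal sketch of function calling with the OpenAI Python SDK. The take_picture function and its count parameter are hypothetical stand-ins for our real inspection commands:

import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Describe the commands the assistant is allowed to trigger.
tools = [{
    "type": "function",
    "function": {
        "name": "take_picture",  # hypothetical command, for illustration only
        "description": "Capture an image with the device camera",
        "parameters": {
            "type": "object",
            "properties": {
                "count": {"type": "integer", "description": "How many pictures to take"},
            },
            "required": ["count"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Take three pictures of the anomaly"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:  # the model chose to trigger a command
    call = message.tool_calls[0]
    if call.function.name == "take_picture":
        args = json.loads(call.function.arguments)
        print(f"Taking {args['count']} picture(s)")  # dispatch to the camera handler here

The model decides from the user's phrasing whether to answer in text or return a structured call, so the application only has to dispatch on the function name.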

 

 

Amazon Polly Integration Example:
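A minimal sketch of the integration through boto3; the voice, region, and playback approach are illustrative assumptions rather than our exact configuration:

import boto3

polly = boto3.client("polly", region_name="us-east-1")

response = polly.synthesize_speech(
    Text="Wake word detected. What would you like to inspect?",
    OutputFormat="mp3",
    VoiceId="Joanna",  # any of Polly's natural-sounding voices works here
)

# AudioStream is a streaming body; write it out and hand it to any audio player.
with open("reply.mp3", "wb") as f:
    f.write(response["AudioStream"].read())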



Navigating the Challenge: Transcribing Speech for Offline Use

The next hurdle in our journey was transcribing speech back to text, particularly in offline scenarios. This challenge led us to Picovoice, a company specializing in voice AI. Picovoice's offerings, ranging from customizable wake word models to speech-to-intent capabilities, aligned well with our vision for an offline-ready voice assistant.

 

Porcupine Wake Word:

One of the pivotal components of our voice assistant is the wake word. A wake word acts as a trigger, signaling the voice assistant to activate and listen for further commands. In our case, we use Picovoice's Porcupine Wake Word model, which allows users to initiate hands-free inspection by uttering a customized wake word such as "Hey Tenera".

 

Porcupine Wake Word Integration Example:
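A minimal sketch using Picovoice's pvporcupine and pvrecorder packages. The AccessKey placeholder and the hey_tenera.ppn keyword file (a custom wake word trained on the Picovoice Console) are assumptions:

import pvporcupine
from pvrecorder import PvRecorder

porcupine = pvporcupine.create(
    access_key="${ACCESS_KEY}",          # your Picovoice AccessKey
    keyword_paths=["hey_tenera.ppn"],    # placeholder path to the custom wake word model
)

recorder = PvRecorder(frame_length=porcupine.frame_length)
recorder.start()

try:
    while True:
        frame = recorder.read()            # one frame of 16 kHz audio
        if porcupine.process(frame) >= 0:  # >= 0 means the keyword was spotted
            print("Wake word detected - start listening for a command")
finally:
    recorder.stop()
    porcupine.delete()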



Cheetah Speech-to-Text:

Once the wake word is detected and the voice assistant is active, the spoken command must be transcribed back to text. For online cases we leverage Picovoice's Cheetah Speech-to-Text model. This component converts spoken commands into text, which the OpenAI chat-completion model then handles as described above.

 

Cheetah Speech-to-Text Integration Example:
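A minimal sketch with the pvcheetah package, again with a placeholder AccessKey; endpointing (the pause that marks the end of an utterance) is left at the model's defaults:

import pvcheetah
from pvrecorder import PvRecorder

cheetah = pvcheetah.create(access_key="${ACCESS_KEY}")

recorder = PvRecorder(frame_length=cheetah.frame_length)
recorder.start()

transcript = ""
try:
    while True:
        partial, is_endpoint = cheetah.process(recorder.read())
        transcript += partial
        if is_endpoint:                # the speaker paused: the utterance is complete
            transcript += cheetah.flush()
            print(transcript)          # hand the text to the chat-completion model
            transcript = ""
finally:
    recorder.stop()
    cheetah.delete()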



Rhino Speech-to-Intent:

In offline scenarios, where connectivity may be compromised, we switch to Picovoice's Rhino Speech-to-Intent model. While it may exhibit slightly lower accuracy than OpenAI's function calling, Rhino proves invaluable in ensuring continuous functionality without an online connection. Rather than producing a transcript, it interprets spoken commands directly as intents: users can articulate inspection instructions such as "Capture images of the anomaly," and Rhino translates the phrase into an executable command for the application. This keeps the voice assistant working uninterrupted in offline settings while maintaining a balance between accuracy and accessibility.

 

Rhino Speech-to-Intent Integration Example:
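A minimal sketch with the pvrhino package. The inspection_commands.rhn context file, which encodes the supported commands and would be designed on the Picovoice Console, is a placeholder, as are the intent and slot names in the comment:

import pvrhino
from pvrecorder import PvRecorder

rhino = pvrhino.create(
    access_key="${ACCESS_KEY}",               # your Picovoice AccessKey
    context_path="inspection_commands.rhn",   # placeholder path to the context model
)

recorder = PvRecorder(frame_length=rhino.frame_length)
recorder.start()

try:
    while True:
        if rhino.process(recorder.read()):    # True once an utterance is finalized
            inference = rhino.get_inference()
            if inference.is_understood:
                # e.g. intent="captureImage", slots={"target": "anomaly"} (hypothetical)
                print(inference.intent, inference.slots)
finally:
    recorder.stop()
    rhino.delete()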



As we traverse the path toward hands-free inspections, the combined power of the OpenAI API, Amazon Polly, and Picovoice's Porcupine, Cheetah, and Rhino models has reshaped our approach to voice technology. Building a voice assistant is no longer a distant dream but a tangible reality. The seamless integration of these components lets users initiate and control inspections through natural spoken commands, offering a truly hands-free experience. Stay tuned for updates and improvements as we prepare our voice assistant for prime time!



