Bus signs are great at telling passengers which routes serve their stop, but they can't answer more complicated questions like, "Does bus X pass by stop Y?" Fortunately, with the advent of voice recognition technology, passengers are no longer limited to the information posted at traditional bus stops. We can now provide all the information a passenger would want through a voice-user interface.

Our entry for the 2016 DIY with TI Intern Edition competition was Darcy, a dedicated voice assistant that answers questions about Dallas Area Rapid Transit (DART). Darcy physically consists of a BeagleBone Black connected to a printed circuit board (PCB), a speaker, a microphone, and a Wi-Fi® adapter (see Fig. 1 and 2), and can be installed at bus stops or train stations. The heart of Darcy’s software is Amazon’s Alexa Voice Service (AVS), a cloud-based service that enables voice recognition for Internet-connected devices. Alexa is most commonly used in the Amazon Echo wireless speaker; however, the AVS API allows developers to utilize Alexa in their own products and create their own voice apps.

Darcy supports several commands, including:

  • “Where is my bus (or train)?” – Darcy’s main feature is telling passengers when the next bus or train will arrive. Once Darcy is told at which bus stop or train station she is installed, you can save your preferred route with Darcy so that she can tell you when the next bus or train on that route arrives.
  • “When is the next 582 bus going southbound?” – If you have already told Darcy your stop, you can ask her about any routes that service that stop, not just the one that she has saved for you.
  • “Does my route go to stop number 67890?” – Darcy can tell you whether the route she saved for you goes to a particular stop. This can be useful if you want to check whether your route will take you to your house, a museum, a restaurant, etc.
  • “How much is a ticket?” – If you’re unfamiliar with DART, Darcy can give you basic fare information, so you’re ready to pay when the bus finally arrives.

Figure 1: Darcy’s final design

Figure 2: Block diagram for Darcy

Having BeagleBone run as an Amazon Alexa client was straightforward, thanks to the Alexa BeagleBone open-source project on GitHub. The real programming challenge was developing Darcy as a custom Alexa skill. Creating a custom Alexa skill consists of two parts:

  • Designing the front-end interaction model
  • Writing the back-end code that processes user interactions

The interaction model describes how someone should use the skill. Any custom skill is programmed to listen for certain spoken phrases, called utterances, which correspond to the user’s intended actions, called intents. For example, the WheresMyBus intent is triggered when a user asks, “Where is the next bus?” Alexa handles the challenge of converting speech into text, but it is the responsibility of the developer to account for all the possible phrases that users can say when running their skill.
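To illustrate the idea, here is a minimal sketch in Python of how sample utterances resolve to intents. In practice the interaction model is declared as JSON in the Alexa developer console and Alexa does the matching in the cloud; the WheresMyBus intent name comes from the article, while the FareInfo intent and the specific phrases are illustrative assumptions.

```python
# Hypothetical interaction model: each intent lists the sample
# utterances that should trigger it. (In a real skill this lives
# in the Alexa developer console, not in the Lambda code.)
INTERACTION_MODEL = {
    "WheresMyBus": [
        "where is my bus",
        "where is the next bus",
        "when does my bus arrive",
    ],
    "FareInfo": [  # hypothetical intent name
        "how much is a ticket",
        "what is the fare",
    ],
}

def resolve_intent(utterance):
    """Return the intent whose sample utterances include the phrase,
    or None if no intent matches."""
    phrase = utterance.lower().strip("?!. ")
    for intent, samples in INTERACTION_MODEL.items():
        if phrase in samples:
            return intent
    return None

print(resolve_intent("Where is the next bus?"))  # WheresMyBus
```

The sketch also shows why the developer must enumerate phrasings: "when does my bus arrive" only matches because it was listed explicitly.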

The back-end code for Alexa skills is hosted through AWS Lambda, an Amazon Web Service that eliminates the hassle of maintaining a server. When a user utters a valid phrase to Darcy, Alexa packages the specified intent and any user-provided information into a JSON object and sends it to Darcy’s lambda function for processing. Darcy’s lambda function is a Python program containing the code required to validate user-provided information, communicate with DART’s servers, and formulate Alexa’s spoken responses.
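A stripped-down version of such a Lambda handler might look like the following. The event shape follows the Alexa Skills Kit request/response format; the intent names and spoken responses are illustrative, not Darcy's actual code.

```python
def lambda_handler(event, context):
    """Sketch of an AWS Lambda entry point for a custom Alexa skill.
    Alexa delivers the triggered intent inside a JSON request object;
    the handler returns JSON telling Alexa what to say."""
    intent = event["request"]["intent"]["name"]

    if intent == "WheresMyBus":
        # A real handler would query DART here and compute the arrival time.
        speech = "The next bus arrives in 5 minutes."
    elif intent == "FareInfo":  # hypothetical intent
        speech = "A local ticket costs 2 dollars and 50 cents."
    else:
        speech = "Sorry, I didn't understand that."

    # Alexa expects the spoken reply in an outputSpeech object.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```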

Providing Darcy with DART’s transit data was a particular challenge because DART’s real-time information is not available in the standard format that other transit agencies follow (GTFS-realtime). Instead, we developed a way to utilize DART’s mobile website. DART’s website returns information about bus and train arrivals in a simple and consistent HTML format, so sending an HTTP GET request and parsing the response is straightforward in Python. Because Darcy communicates directly with DART, users can trust that her information is accurate.
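The scraping step can be sketched with Python's standard library alone. The HTML layout below (arrivals as list items) and the page contents are assumptions for illustration; DART's actual mobile markup may differ, and in Darcy the HTML would come from an HTTP GET (e.g. via urllib.request) rather than a canned string.

```python
from html.parser import HTMLParser

class ArrivalParser(HTMLParser):
    """Collects the text of <li> elements, assuming the mobile site
    lists each upcoming arrival as a list item."""
    def __init__(self):
        super().__init__()
        self._in_item = False
        self.arrivals = []

    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self._in_item = True

    def handle_endtag(self, tag):
        if tag == "li":
            self._in_item = False

    def handle_data(self, data):
        if self._in_item and data.strip():
            self.arrivals.append(data.strip())

# Canned page standing in for the HTTP response body.
sample_html = "<ul><li>Route 582 - 3 min</li><li>Route 404 - 12 min</li></ul>"
parser = ArrivalParser()
parser.feed(sample_html)
print(parser.arrivals)  # ['Route 582 - 3 min', 'Route 404 - 12 min']
```

Because the site's format is consistent, a small parser like this is enough; there is no need for a heavyweight scraping framework on the device.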

To complete Darcy’s design, we created a custom circuit board and mounting structure. Darcy’s PCB consists of a pushbutton to start/stop voice recordings, as well as red and green LEDs to provide feedback to users. The mounting structure, which holds all the components together, was 3D printed on a MakerBot.

Creating Darcy was a great learning experience, and we believe that voice assistants like Darcy have the potential to improve public transportation and other services drastically. With some more time and development, we may soon see voice user interfaces installed in all kinds of places. Where do you think a voice assistant such as Darcy would be useful? Leave a comment below!

Related Content:

  • Learn more about Darcy and see it in action here.
  • Explore Darcy’s GitHub repo here.
  • Learn how to get started with BeagleBone.
  • Find out more about custom Alexa skills.

Anonymous