Voice UI: GasBuddy

Tags: UX, voice interaction, Siri, Alexa

This is a personal project that grew out of insights from other GasBuddy research and my interest in voice interaction.

If you would like to see and hear more, send me a note!


GasBuddy VUI Background

GasBuddy is an app that helps users find cheap gas nearby and offers additional ways to save. Users keep gas prices current by reporting them when they fill up or while driving around. Because users are on the road, safety is a primary concern. In my interviews, users repeatedly asked for built-in voice assistant features so they could find or update information quickly and safely, hands-free. With a voice assistant, GasBuddy users can check whether gas prices are about to rise, find nearby stations with the cheapest gas, and update prices at stations they pass.

Competitive Research

Google Maps and Waze are the primary competitors for GasBuddy. The primary use cases for voice in both apps are finding directions or searching for places nearby.

The Google Maps voice user interface (VUI) does a good job of indicating when the voice assistant is listening. However, it does not suggest acceptable commands, and it does not listen for very long, giving up after one to two seconds.

Users can activate the Waze VUI either by tapping a microphone icon or by saying “Ok, Waze,” which makes it very easy to start. Once triggered, the interface animates in response to commands and shows suggestions for what the user can say, though these suggestions are not interactive. The app responds with audio cues to confirm commands or request more information. Although it listens longer than Google Maps does, I could not tell whether the assistant had heard my commands; if it did not understand, the feature closed automatically. I anticipate users could grow frustrated, especially in a car, where there can be a lot of background noise. When it did understand, Waze showed appropriate animations and asked for follow-up information (slots) when necessary.

User Research

While performing user research for other GasBuddy projects and features, one thing kept surfacing: users want a way to interact with GasBuddy using AI. During interviews, many users explicitly requested the ability to use voice commands or use image recognition to update prices.

User Stories

Considering the context of using the app and the feedback from users, I came up with primary user stories to act as a guide for developing the voice assistant capabilities.

  • As a user, while I’m driving, I want the voice assistant to tell me if prices are going up or down, so I can decide whether to fill up while I am already out.
  • As a user, when I’m in the car, I want the voice assistant to list the closest gas stations, so I can decide where to go without having to look at my screen.
  • As a user, while I’m on the road, I want to update prices quickly, so I can submit a report before I pass the station(s).

Voice Assistant Scenarios

[Scenario storyboards: GasBuddy voice assistant interaction flows (images omitted)]

Voice Utterances

To make sure the voice user interface can understand a user’s intent, I created multiple prompts and scripts. Further testing with users will provide more data and guide development of additional ways users might phrase commands and requests.
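To illustrate how utterances map to intents, here is a minimal sketch in Python. The intent names, sample phrasings, and keyword rules are all hypothetical stand-ins derived from the three user stories above; a production VUI would use a real NLU service (such as SiriKit or the Alexa Skills Kit) rather than keyword matching.

```python
# Hypothetical sketch: routing sample utterances to the three primary intents.
# Intent names and phrasings are illustrative, not from a shipped GasBuddy skill.
import re
from typing import Optional

# Sample utterances per intent, derived from the user stories.
SAMPLE_UTTERANCES = {
    "CheckPriceTrend": [
        "are gas prices going up",
        "will prices go down this week",
    ],
    "FindCheapGas": [
        "find the cheapest gas near me",
        "list the closest gas stations",
    ],
    "ReportPrice": [
        "update the price at the station on main street",
        "report regular at three fifty nine",
    ],
}

def match_intent(utterance: str) -> Optional[str]:
    """Naive keyword matcher: a stand-in for a real NLU service."""
    text = utterance.lower()
    if re.search(r"\b(going up|go down|trend|rise|drop)\b", text):
        return "CheckPriceTrend"
    if re.search(r"\b(cheapest|closest|near me|find|list)\b", text):
        return "FindCheapGas"
    if re.search(r"\b(update|report)\b", text):
        return "ReportPrice"
    # Unrecognized: a real assistant should reprompt with suggested
    # commands rather than silently closing (the Waze failure mode above).
    return None
```

The explicit `None` branch reflects a design takeaway from the competitive research: when the assistant does not understand, it should offer suggestions instead of giving up.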

[Diagram: sample utterances mapped to voice interface intents (image omitted)]

Prototype

(Oh no! The files are too large! Send me a note to see them.)

Next Steps: Usability Testing & Refinement

I am planning to test the voice interface with users to learn whether it makes sense in the context of driving. Ideally, I would use a Wizard of Oz method, though I will need to recruit some help building that setup or learn a new tool first. Given my time and resource constraints, I will most likely start with paper or InVision prototypes and gather more data about the various utterances people use. I’ll also look for trends in the questions users ask, the actions they try to take, and what they want to do next.