Find the Right Use Case

In Getting Started, we illustrated the advantages of multi-modal interfaces. Some of these advantages are more relevant to certain types of smartphone apps than others. Before mapping out your voice interface, the next step in the design process is to narrow down your use cases.

Here are some key questions to keep in mind when planning to add voice to a mobile app:

  1. What are the problems or pain points you are trying to address with voice?
  2. How can a multi-modal solution solve or ease these problems?
  3. Will integrating voice provide a more effective way of accomplishing certain tasks?

Answering all of these questions is beyond the scope of this guide, but most of the answers boil down to two primary sources of information: your product team and your users.

Tactics for Identifying Voice Integration Points

Evaluate Product Objectives

Focus on measurable objectives that have been hard to solve with visual UI changes. For example, you may be looking to increase the accuracy of search results. A spoken natural language search can make this easier for some users. Or, you may notice users are having trouble finding a certain setting. Allowing a user to ask for it by voice can alleviate this issue.
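
As a concrete illustration, the sketch below routes a recognized voice request to an existing screen instead of forcing the user to hunt through menus. This is only a minimal sketch: the `VoiceIntent` type, `navigateTo()`, and `showSearchScreen()` are hypothetical placeholders, not the API of any particular speech SDK.

```kotlin
// Hypothetical sketch: routing recognized voice requests to existing app features.
// VoiceIntent, navigateTo(), and showSearchScreen() are illustrative placeholders.
sealed class VoiceIntent {
    data class Search(val query: String) : VoiceIntent()
    data class OpenSetting(val settingName: String) : VoiceIntent()
    object Unknown : VoiceIntent()
}

fun handleVoiceIntent(intent: VoiceIntent) {
    when (intent) {
        // "Search for waterproof hiking boots" reuses the existing search feature
        is VoiceIntent.Search -> navigateTo("search?query=${intent.query}")
        // "Where do I turn off notifications?" jumps straight to the setting
        is VoiceIntent.OpenSetting -> navigateTo("settings/${intent.settingName}")
        // Fall back to the visual UI when the request isn't understood
        VoiceIntent.Unknown -> showSearchScreen()
    }
}

// Placeholders standing in for your app's real navigation layer.
fun navigateTo(route: String) { /* wire this to your navigation component */ }
fun showSearchScreen() { /* show the regular visual search UI */ }
```

The point is that voice becomes another entry point into features you already have, so the integration cost is mostly in recognizing the request, not in building new screens.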

Consider App Context

Context is key with multi-modal interactions. What are people doing while using your app? Some examples include commuting (driving, biking, walking, etc.), exercising, cooking, and childcare. Your users may not always be able to touch their device. This is where voice input and output can be valuable.

What environment are people in while using your app? Are they somewhere the device might have trouble hearing them? If so, touch input may be more appropriate. Are they somewhere others might overhear them? If so, visual output may be more appropriate.
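
That decision logic can be summarized in a small sketch. The `AppContext` fields and `Modality` enum below are illustrative assumptions, not part of any SDK; in a real app, signals like ambient noise or connected audio devices would come from platform APIs.

```kotlin
// Hypothetical sketch: picking a default input/output modality from app context.
// AppContext and Modality are illustrative assumptions, not a real API.
enum class Modality { VOICE, TOUCH, AUDIO, VISUAL }

data class AppContext(
    val isNoisy: Boolean,       // e.g., high ambient noise detected by the microphone
    val othersPresent: Boolean, // e.g., no headphones connected in a shared space
    val handsFree: Boolean      // e.g., paired with a car system or wearable
)

// Prefer touch when the device is unlikely to hear the user.
fun preferredInput(ctx: AppContext): Modality =
    if (ctx.isNoisy) Modality.TOUCH else Modality.VOICE

// Prefer visual output when spoken responses could be overheard.
fun preferredOutput(ctx: AppContext): Modality =
    if (ctx.othersPresent && !ctx.handsFree) Modality.VISUAL else Modality.AUDIO
```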

Finally, what devices are your users pairing with their mobile device? Some examples to consider include AirPods, smartwatches, and other voice-activated wearables.