As digital experiences continue to expand beyond the screen into multi-modal interactions, our designs and prototypes need to keep pace. One method of interaction, voice, has gained significant popularity and makes experiences accessible to more people through hands-free operation. In this guide, you’ll learn how to use voice triggers, along with audio and speech playback, to prototype a voice assistant feature for a mobile app.
In this guide you’ll prototype voice interactions into an app designed to report snow conditions at ski resorts. To follow along in Adobe XD, download this starter file complete with assets and imagery for the snow report app. To access sounds to use within the prototype, download Sam Anderson’s sound design kit on Behance.
Setting up the document
Once the starter file is downloaded, open it to find the artboards already set up for you. This sample workflow is a simple example of what is possible using voice triggers, audio playback, and speech playback.
Several interactions are already set up in the document. In this guide, you’ll prototype the voice triggers and their responses to complete the experience. Many of these commands will be connected to a single component, which reduces the need to copy and paste interactions between artboards and makes voice commands accessible from anywhere in the app.
Start by locating the Main Component of the Voice Activator component. You can find this component on the first artboard in the starter file.
This Main Component will be the command centre for voice triggers. Only a single state is needed, but if you wish to customise the styling, do so now on the Main Component.
Creating voice triggers
Creating a voice trigger in Adobe XD follows the same process as any other trigger type. First, select the item you wish to use as the target. In this scenario you’ll be using the voice icon in the Voice Activator component as a common target. Voice triggers can be arbitrarily placed within an artboard, but to maintain a sense of order in the document, it is helpful to place them onto either the same target as a Tap trigger (for the same action) or a common element that represents the action being performed.
In Prototype mode, with the voice icon selected, drag the handle to the artboard titled Conditions in Whistler Today and set the trigger type to Voice in the Property Inspector. Use the Auto-Animate action to create a smooth transition. Set the command to something like “What are the snow conditions in Whistler?” to complete the interaction.
Next, return to the voice icon in the Main Component. This time, click the plus icon to add a new interaction on the same target and drag the handle to the Play my snow tunes artboard. Repeat the steps above, however this time use a command like “Play my snow tunes” or “Play my snowboarding music”. You can also stack multiple voice interactions with the same destination to capture multiple different phrases which can be helpful for handling different ways of speaking.
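Conceptually, stacking several voice interactions on one destination behaves like a small lookup from normalized phrases to a target artboard. The sketch below is a hypothetical model to illustrate the idea, not an Adobe XD API; the phrase and artboard names come from this guide.

```python
# Hypothetical model of voice-trigger routing: several spoken phrases
# can map to the same destination artboard, mirroring how stacked
# voice interactions behave in the prototype.
from typing import Optional


def normalize(phrase: str) -> str:
    """Lower-case and strip punctuation so small wording differences still match."""
    return "".join(ch for ch in phrase.lower() if ch.isalnum() or ch.isspace()).strip()


# Spoken phrase -> destination artboard (names taken from this guide).
VOICE_ROUTES = {
    "what are the snow conditions in whistler": "Conditions in Whistler Today",
    "play my snow tunes": "Play my snow tunes",
    "play my snowboarding music": "Play my snow tunes",
    "show me information on big white": "Big White Details",
}


def route(phrase: str) -> Optional[str]:
    """Return the artboard a spoken phrase navigates to, if any."""
    return VOICE_ROUTES.get(normalize(phrase))
```

Note that two different phrases (“Play my snow tunes” and “Play my snowboarding music”) resolve to the same artboard, which is exactly what stacking multiple voice interactions on one target achieves.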
Repeat the above steps for the Big White Details artboard and set the command to “Show me information on Big White” or something similar. Now all of the navigation voice commands are configured on this Main Component. With the component selected, one final trigger can be added to take you back to the main voice screen at any time.
Set a Tap trigger on the component and drag the arrow to set the Welcome Screen as the destination. Keep the action type as Auto-Animate. Now whenever this component is tapped, it will return the user to the main “How can I help you?” screen to start fresh.
Placing the command component
With the voice activator component set up, it needs to be placed into position on the artboards. Start by making a copy of the component (either with copy and paste or dragging an instance from the component assets) and placing it centred in the bottom half of the Welcome Screen.
Repeat the process on the other artboards (with the exception of the last artboard), but this time reduce the size of the component and place it lower on the screen, since it won’t be the main focus while the answers are being read out. To make this easier, set it up as desired on one artboard, then copy and paste it into position on the others.
Adding speech responses
With the voice commands in place, previewing the design should allow you to navigate the screens using your voice. To do so, hold the space bar on your keyboard and speak the commands as entered into Adobe XD.
The voice commands will get you to the right pages, but next, speech playback will add a spoken response and feedback once the navigation happens.
Reply with ski conditions
The first action asks for conditions on the mountain. Once a user asks for these conditions, the page turns over to show the conditions in text format, but it can also include a spoken read-out.
Select the Conditions in Whistler today artboard (in Prototype mode) and, in the Property Inspector on the right, add a Time trigger with the action of Speech Playback. Select the voice that fits your experience best, then enter the text you wish the voice to reply with. Depending on the feel of your application, the tone of voice you choose will affect how the application is perceived. Once the text is entered, preview the artboard to hear the voice in action.
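When drafting the response text, it can help to template it around the values you want the assistant to read out. The helper below is a hypothetical sketch showing one way to compose the string you might paste into the Speech Playback action; the parameter names and example values are illustrative, not from the starter file.

```python
# Hypothetical helper for drafting the Speech Playback response text.
# Adobe XD only needs the final string; this just shows one way to build it
# consistently from example snow-condition values.

def snow_report(resort: str, fresh_cm: int, base_cm: int, weather: str) -> str:
    """Compose a spoken snow report from example condition values."""
    return (
        f"Here are today's conditions at {resort}: "
        f"{fresh_cm} centimetres of fresh snow on a {base_cm} centimetre base, "
        f"with {weather} skies."
    )


# Example: the string you would paste into the Speech Playback action.
response = snow_report("Whistler", 12, 180, "clear")
```

Keeping the wording templated like this makes it easy to reuse the same sentence structure for other resorts, such as Big White, later in the prototype.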
Playing back audio
Just like speech can be read out following an action, an audio file can also be played after a time delay. Audio is great for playing back feedback like dings and chimes as users complete actions or for loading customised narrations and speech in the form of audio tracks.
In this scenario, the voice assistant is playing music requested by the user, so a music track can be uploaded and played back as a result of that action.
Start by selecting the Play my snow tunes artboard in prototype mode and creating a Time trigger. Set the action to Audio Playback and upload an .mp3 or .wav file to play. Always make sure you have the rights to use the audio if you’re distributing or using the content commercially.
Just like that, your prototype can now play a song when a user speaks the right command. If they want to pause it, however, you can set up that interaction using the Pause Tunes artboard.
Select the pause button on the Play my snow tunes artboard and link it to the Pause Tunes artboard. Set the trigger to Tap and the action to Auto-Animate. This time, however, an additional action will be added. Click the + icon next to Action and select Audio Playback once again. Use a short, simple sound like a tick — this sound stops the previous audio from playing and provides feedback that the music has been paused.
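The pause trick works because starting a new audio playback replaces the one in progress, so a short tick sound effectively silences the music. As a rough illustration, this behaviour can be modelled as a single audio channel; this is a hypothetical sketch of the idea, not a description of Adobe XD internals.

```python
# Toy model of the pause trick: starting a new audio playback replaces
# whatever was already playing, so a short "tick" sound on the Pause Tunes
# artboard effectively stops the music.

class AudioChannel:
    """Single audio channel: playing a new clip replaces the current one."""

    def __init__(self) -> None:
        self.current = None

    def play(self, clip: str) -> None:
        # New playback implicitly stops the previous clip.
        self.current = clip


channel = AudioChannel()
channel.play("snow_tunes.mp3")  # user asks to play their music
channel.play("tick.wav")        # tapping pause plays a short tick, stopping the song
```

After the tick plays, only the short sound occupies the channel, which is why the music no longer continues on the Pause Tunes artboard.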
Lastly, create a Tap interaction from the play button back to the Play my snow tunes artboard using Auto-Animate as the action.
Though you can use speech and audio playback in conjunction with voice triggers, you may simply want to use voice commands to navigate the application without auditory feedback. This is simple to do; in fact, you already set this up when you created a voice command to navigate to the Big White Details view. No further action is needed, but you can experiment with adding voice commands to that screen for navigating back to the home page.
In Prototype mode, select the back arrow at the top left of the page and click the + icon to add an additional interaction linking back to the home page. Set the trigger to Voice and enter a command of your choice. Use the action type of Transition and an animation of Slide Right to create a slide over effect between the screens.
Just like that, you have prototyped a voice assistant workflow using voice triggers, speech playback, and audio playback in Adobe XD. This is just the beginning of what can be done with these voice and sound features. Experiment with the starter file, see what you can create, and share it with us on Twitter using #AdobeXD.