📱 Voice+ experience

From start to finish, set up a multimodal journey in minutes with NLX

Checklist

You'll complete the steps below to launch your Voice+ experience. You may also add them to an existing voice assistant in your workspace, if desired.


Pre-setup: Integrations

Est. time to complete: ~5 minutes

A one-time integration of a Natural Language Processing (NLP) engine must be completed in your workspace.

A one-time integration of a voice-enabled communication channel must be completed in your workspace.

A one-time setup of an Action that sends an SMS:

  • Create a SendSMS Action

    • Be sure your Action has the following properties defined in the Request model schema (see the example payload after this list):

      • Message (string)

      • PhoneNumber (string)

      • URL (string)
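For reference, a request built from this model might look like the following. This is a minimal sketch assuming your SendSMS Action fronts your own SMS backend; the field names mirror the Request model properties above, while the values are illustrative placeholders.

```typescript
// Illustrative request shaped by the Request model above.
// Field names match the schema; the values are placeholders for
// whatever SMS backend your Action calls.
interface SendSmsRequest {
  Message: string;     // Text delivered to the user
  PhoneNumber: string; // Destination number, e.g. in E.164 format
  URL: string;         // Link to your digital asset included in the SMS
}

const examplePayload: SendSmsRequest = {
  Message: "Tap the link to continue on the web.",
  PhoneNumber: "+15551234567",
  URL: "https://example.com/start",
};
```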


Step 1: Create a Voice+ script

Est. time to complete: ~10 minutes

Voice+ experiences pair voice prompts from an AI assistant with a digital asset (website, mobile app, etc.) to help guide users through a self-service task. Begin by identifying the elements of your digital asset that need to be mapped to a voice line (pages, buttons, etc.).

  • Select Voice+ in your workspace > Click New script option > Name your Voice+ experience

  • Click Save

  • Click + Add step > Enter the bot's voice line in the message field

  • Repeat for each step

  • On the final step of your Voice+ script, enable the Action toggle and choose one of the following behaviors:

    • End: Terminates the phone call after the AI assistant delivers the voice step

    • Continue: Proceeds from the Continue edge of the Voice+ node in your intent flow (see Step 2)

  • Click Save

  • Download your steps to a .csv or .json file using the Download link; you'll need these step IDs when installing the SDK in Step 3 (see the sketch after this list)
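The downloaded file is where you get the step IDs you'll reference when installing the SDK in Step 3. One convenient pattern is to keep a small mapping from your frontend's pages or screens to those IDs; the sketch below is purely illustrative — substitute the real step IDs from your downloaded .csv or .json file.

```typescript
// Hypothetical mapping of frontend routes to Voice+ step IDs.
// Replace the placeholder values with the real IDs from the file
// you downloaded via the Download link.
const voicePlusSteps: Record<string, string> = {
  "/start": "STEP_ID_FOR_WELCOME_PAGE",
  "/account": "STEP_ID_FOR_ACCOUNT_PAGE",
  "/confirm": "STEP_ID_FOR_FINAL_PAGE",
};

export default voicePlusSteps;
```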


Step 2: Create a Voice+ flow

Est. time to complete: ~5 minutes

As all Voice+ experiences begin with a traditional voice experience (IVR), you'll construct an intent workflow that sends the SMS containing a link to your digital asset and initiates Voice+ mode:

  • Select Intents in your workspace menu > Create a new intent and set the flow to a voice channel you integrated in your workspace

  • Add training phrases to your AI assistant to match the user's intent, or enable the Skip training setting on the intent's Settings tab if your bot automates only this task

  • After adding a greeting with a Basic node to the Canvas, place an Action node that will send the SMS link (this requires the SendSMS Action you set up under Actions in your workspace during pre-setup). At a minimum, be sure your Action's Request model has two properties: PhoneNumber and URL

  • Within your intent flow, select the Action node > Use the system variable {system.userId} for the PhoneNumber field of your payload on the Action node's side panel

  • In your Action's URL payload field, enter your URL followed by the query parameter ?cid={system.conversationId} (see the sketch after this list)

  • Place and link a Basic node after the Action to indicate a text was successfully sent to the user

  • From the Basic node, place and link a Voice+ node > Click Save
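For reference, once NLX resolves the system variables on the Action node, the request your SMS backend receives is the pre-setup Request model filled in. The sketch below uses a hypothetical endpoint (api.example.com) and illustrative values; on voice channels, {system.userId} typically resolves to the caller's phone number, which is why it feeds the PhoneNumber field.

```typescript
// Illustrative payload sent by the SendSMS Action after NLX resolves
// the system variables on the Action node.
const resolvedPayload = {
  Message: "Here's your link to continue online.",
  PhoneNumber: "+15551234567",                       // from {system.userId}
  URL: "https://example.com/start?cid=abc123def456", // your URL + ?cid={system.conversationId}
};

// A hypothetical handler on your side could then forward it to any SMS provider.
export async function sendSms(payload: typeof resolvedPayload): Promise<void> {
  await fetch("https://api.example.com/sms", { // placeholder endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}
```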


Step 3: Deploy & install SDK

Est. time to complete: ~10 minutes

You'll need to install NLX's Voice+ SDK on each screen of your digital asset so the applicable API calls can trigger voice lines where you've defined them:

  • Select Voice+ in your workspace menu > Choose your Voice+ script > Click the Deployment tab of your Voice+ script

  • Choose Review & build > Click Create build

  • After a successful build, select Deploy from the Production column > Click Create deployment

  • Select Details link next to the Deployed status > Under Setup instructions, click Open Voice+ configurator

  • API key: You may auto-generate an API key under the Voice+ script's Settings tab, click Save, and then enter it in the configurator's field

  • Conversation ID: Dialog Studio dynamically generates this ID for each conversation session with a user; parse it from the cid query parameter you appended to the URL in Step 2 (see the sketch after this list). Sample code: https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams/get#examples

  • Install the code snippet with the applicable step IDs (downloaded in Step 1) on each page of your frontend
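Parsing the conversation ID is plain web-platform code. The sketch below assumes the ?cid= query parameter appended in Step 2; the initializeVoicePlus and sendStep names are illustrative placeholders, not the SDK's actual API — use the exact snippet shown in your configurator's Setup instructions.

```typescript
// Placeholder declaration standing in for whatever client the
// configurator's snippet provides; the real initialization code
// comes from the Setup instructions in your workspace.
declare function initializeVoicePlus(config: {
  apiKey: string;
  conversationId: string;
}): { sendStep: (stepId: string) => void };

// Read the conversation ID that Step 2 appended to the SMS link
// (see the MDN URLSearchParams example linked above).
const conversationId = new URLSearchParams(window.location.search).get("cid");

if (conversationId === null) {
  console.warn("No cid query parameter found; Voice+ steps will not fire.");
} else {
  const client = initializeVoicePlus({
    apiKey: "YOUR_VOICE_PLUS_API_KEY", // generated on the script's Settings tab
    conversationId,
  });

  // Trigger the step mapped to this page; step IDs come from the file
  // downloaded in Step 1.
  client.sendStep("STEP_ID_FOR_THIS_PAGE");
}
```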

To avoid CORS errors, make sure to whitelist your URL domain(s) in the Voice+ script's Settings.


Step 4: Set up bot & deploy

Est. time to complete: ~5 minutes

Now you'll create the AI assistant users will interface with. This step involves attaching all intent workflows you want your bot to access, defining flows to handle certain behaviors, setting up the voice channel your bot supports, and deploying your conversational AI assistant!

  • Select Bots from workspace menu > Choose New bot

  • Enter a descriptive name > Click Save

  • Click Intents tab of bot > Select + Add intents > Attach the intent containing the multimodal workflow from Step 2

  • Select Default behaviors and assign an intent to the Welcome behavior (if your multimodal flow is the only intent your bot handles, assign that intent to the Welcome behavior)

  • Select Channels tab of bot > Expand the voice channel your bot will support (e.g., Amazon Connect, Amazon Chime SDK, etc.). Be sure this is the same channel set on your intent flow(s) attached to the bot, or add the channel to your intent flows > Click + Create channel

  • Enter required details for voice > Click Create channel > Click Save

  • Click Deployment tab of bot > Select Create or Review & build

  • Wait for validation to complete > Select Create build

  • When satisfied with a successful build, click Deploy
