Charlie

A multimodal driving navigation assistant.

[Image: charlie-hero-lg.png]

The Project

Charlie is a mobile car navigation assistant that empowers drivers to safely move from location to location using multimodal input and feedback.

Project Details

  • Type: Graduate School Interaction Design Studio
  • Duration: 2 weeks
  • Teammates: Siyi Kou, Eugene Meng, Mai Bouchet & Shravya Neeruganti

The Challenge

Can we empower drivers to make safer route decisions with a multimodal navigation design?

My Role

Background Research

I conducted secondary research on VUIs, driver-device interaction and navigation challenges, distilling findings into insights and design principles.

Interaction Design

I created an interaction model highlighting Charlie’s input/output modalities and designed dialog prompts and feedback messages.

Video Prototype

I co-wrote the script for and narrated the concept envisionment video.

Background Research

To kick off the project, we immersed ourselves in academic research covering VUI design principles, the interaction weaknesses of existing navigation systems, the role of emotion while driving and design for high-stakes contexts.

We uncovered three problems with existing voice UIs in a driver/navigation context:

Communicating intentions is difficult

A driver can influence the route before a trip by choosing certain characteristics, but has far less control once the trip is underway. Communicating an intention mid-trip often means tapping through multiple menus to abort the routing process, a dangerous distraction behind the wheel.

Navigation devices don’t adapt well to change

This is particularly true when a driver deviates from a calculated route. The driver is rarely properly informed about route changes; recalculations are usually silent, accompanied only by brief visual cues like turning progress arrows.

Information exchange loops are inadequate

A 2010 study found that when drivers intentionally left their routes, navigation systems produced frequent false visual and auditory messages. The persistence of such messages increases stress for the driver in an already stressful situation.

Introducing Charlie

Charlie is a multimodal driving navigation assistant that uses voice control, haptic feedback and conversational language to help drivers make safe and stress-free decisions.


Guided Mode

In Guided Mode, Charlie prompts the driver for additional trip information before calculating a route. This helps ensure the selected route matches the driver’s intentions.
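
As an illustration of how Guided Mode's prompting might work, here is a minimal TypeScript sketch; the questions, function names and answer handling are hypothetical, not the project's actual dialog.

// Hypothetical sketch of Guided Mode's pre-trip dialog; the questions
// and identifiers are illustrative assumptions.
const guidedPrompts: string[] = [
  "Where are we headed?",
  "Do you want the fastest route, or one that avoids highways?",
  "Should I avoid tolls on this trip?",
];

// Charlie asks each question in turn, collecting answers before
// calculating a route that matches the driver's stated intentions.
function runGuidedDialog(answer: (prompt: string) => string): string[] {
  return guidedPrompts.map((prompt) => {
    console.log(`Charlie: ${prompt}`);
    return answer(prompt);
  });
}

// Canned replies stand in for speech-recognized driver answers.
runGuidedDialog(() => "yes");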


Learning Mode

Learning Mode tells Charlie that the driver wants to take control and teach Charlie a preferred route to remember for future trips. The driver initiates the mode with the command “Follow my lead.”


[Image: charlie-lead.png]

Assisted Mode

Charlie can sense when other passengers are present and will offer the option to temporarily mute audio feedback. This mode lets a passenger take control of the navigation, with visual map feedback communicated to the driver.
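
To make the three modes concrete, here is a minimal sketch of how mode selection could be modeled. “Follow my lead” is the documented Learning Mode trigger; everything else (the type names, the passenger check, the defaults) is an illustrative assumption.

type CharlieMode = "guided" | "learning" | "assisted";

// Hypothetical sketch of how Charlie might choose a mode.
function nextMode(utterance: string, passengersDetected: boolean): CharlieMode {
  if (utterance.trim().toLowerCase() === "follow my lead") {
    return "learning"; // driver takes control and teaches a route
  }
  if (passengersDetected) {
    return "assisted"; // offer to mute audio and hand off to a passenger
  }
  return "guided"; // default: prompt for trip details before routing
}

console.log(nextMode("Follow my lead", false)); // "learning"
console.log(nextMode("Navigate home", true));   // "assisted"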


The Journey

Design Principles

Following our research, the team aligned on design goals for the project. We wanted to focus on Charlie’s role as a contextually aware assistant that can be helpful in a variety of situations. To address the interaction pain points we identified earlier, we determined that Charlie would need to:

  1. Proactively learn a driver’s behavior and preferences
  2. Provide actionable feedback
  3. Adapt intelligently to changes during a trip

Activating Charlie

Charlie can be activated in two ways: by speaking “Hey Charlie” near the phone’s microphone or by double-tapping the steering wheel.

Providing both a voice and a tactile mode of input gives the user flexibility in their interactions. When background noise is present, for instance, the user may elect to wake Charlie with a double tap to avoid the risk of interference.
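
One way to think about this: both activation paths converge on a single handler, so downstream logic never needs to know which modality woke Charlie. The sketch below is hypothetical; the simulated event sources stand in for a real wake-word engine and steering-wheel sensor.

type WakeSource = "voice" | "tap";

function onWake(source: WakeSource): void {
  // A real implementation would play a chime or haptic pulse here
  // before opening the microphone for a command.
  console.log(`Charlie activated via ${source}; listening for a command.`);
}

// Stand-ins for the real event sources (wake-word engine, wheel sensor).
function simulateWakeWord(phrase: string): void {
  if (phrase.toLowerCase() === "hey charlie") onWake("voice");
}

function simulateSteeringWheelTaps(tapCount: number): void {
  if (tapCount === 2) onWake("tap");
}

simulateWakeWord("Hey Charlie"); // voice activation
simulateSteeringWheelTaps(2);    // tactile activation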


Limiting Interaction

When combining voice with other input and output modalities, it’s important to consider how different combinations produce different experiences. When designing Charlie, I mapped the input/output relationships, focusing on flexibility and simplicity.


[Image: charlie-modalities.png]
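
To make the model concrete, here is a minimal sketch of one way these relationships could be encoded; the specific input/output pairings below are illustrative assumptions, not the project's actual mapping.

type Input = "voice" | "steering-wheel-tap";
type Output = "speech" | "haptic" | "visual-map";

// Each input maps to a small, predictable set of outputs, keeping
// any one interaction from overwhelming the driver.
const responses: Record<Input, Output[]> = {
  "voice": ["speech", "visual-map"],          // spoken reply plus map update
  "steering-wheel-tap": ["haptic", "speech"], // confirm the tap, then speak
};

function outputsFor(input: Input): Output[] {
  return responses[input];
}

console.log(outputsFor("voice")); // ["speech", "visual-map"]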

Reflection

Future Directions

  • Driver microphone: A dedicated microphone for the driver would help in noisy conditions, reduce interruptions and cut recognition errors when passengers are present in the car.
  • More tactile input: The current design includes only one tactile function: activating a conversation with Charlie. Additional gestures could enable mode switching or serve as an on/off switch for the audio features.
  • Competitive analysis: I would conduct an in-depth study of existing navigation assistants both in and outside the automotive realm.

What I Learned

Voice user interfaces should be proactive

Natural language UIs often put the burden of deciding what to ask on the user. Unlike screens, which can provide contextual cues about available options, voice interfaces need to be proactive and inform the user of what’s possible. This also helps the user understand the mechanics of dialog with the interface.

Respect privacy

When other passengers are present in cars, audio devices can become unnecessary or even distracting. It’s important to design with a strong understanding of the broader social contexts in which the interface is being used.

Less interaction might be better

When designing for interactions that occur while driving, the goal is to decrease distraction. Yet there are many ways a conversational UI can produce the opposite: excessive feedback, unpredictable dialog or interference from background noise. It’s critical to design around future scenarios, especially failures and breakdowns. Rigorous testing can be effective at mitigating these risks.