Creating the Future of Autonomous Car Interfaces

Human + AI Collaborative Control

A digital interface that allows users to influence crucial elements of autonomous driving

 
 
 

SPONSORS

HARMAN, a Samsung Company

ROLE

Project Lead, Product Designer, Product Manager
[ Team of 6 ]

TIMELINE

January - August 2018

DELIVERABLES

Proof of concept prototype, Research methodology & insights, Interaction guidelines, Future roadmap

METHODS

Lo-fi & Hi-fi in Digital & Physical, UX Strategy, Product Strategy & Value, Market & Industry Analysis, Simulated Autonomy Experiment (Wizard of Oz), Generative & Discovery Research

 

Background

HARMAN (a leading supplier of audio and infotainment systems to car manufacturers) challenged our CMU MHCI Capstone team with an open-ended prompt: to envision the future of user experiences for autonomous vehicles.  

Autonomous technology is making amazing strides; however, people are still apprehensive about interacting with it directly because they don't know what it's doing or why.

Users of self-driving mode in cars experience anxiety and discomfort during the ride because the AI's driving style is too mechanical and robotic, not matching how users would handle driving scenarios themselves. If they want to change anything about the driving style, they have to go through a disjointed process of taking control back from the AI and then reactivating self-driving mode.

 

SOLUTION

Through hands-on research and a methodical prototyping process, we created a curved touchscreen interface that allows the user to control key aspects of the ride in a co-driving (Levels 2-4) environment without having to switch out of self-driving mode.

In designing this novel approach to human + AI collaboration, we discovered that users need to have the right amount of control over autonomy in order to have a positive and comfortable experience in self-driving mode, allowing them to stay in it longer. Asking users to make decisions about their ride keeps them alert and engaged, increasing their willingness to adopt self-driving technology.

Our process involved creating a Wizard of Oz experiment to observe how people interact with autonomy and how they react to being driven around by an AI. Insights from research led to ideation and the creation of increasingly high-fidelity prototypes using a self-driving simulator.

 
 

Giving Drivers Control Over Autonomy

Specific Moments, Handled Well, Define the Whole Experience

 

Users want autonomous vehicles to drive like they do, not like a machine

 

The best experience in a co-driving system is created when the user has agency throughout the drive

 
 

FINAL PROOF OF CONCEPT DEMONSTRATION

The final fully interconnected prototype was created with an autonomous vehicle simulator, a curved plexiglass surface with rear projection, and computer vision hand tracking.
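To make the setup concrete, here is a minimal sketch of how the pieces could be wired together. Every interface, type, and name below is illustrative, not taken from our actual build.

```typescript
// Illustrative sketch only: computer vision hand tracking feeds a gesture
// layer on the curved screen, which issues ride commands to the simulator.

// One tracked hand position on the curved surface, in normalized units.
interface HandSample {
  x: number; // lateral position (0..1)
  y: number; // vertical position (0..1)
  t: number; // timestamp (ms)
}

// Commands the curved screen can send to the simulator (assumed set).
type RideCommand =
  | { kind: "speed"; delta: number }       // speed up / slow down
  | { kind: "nudge"; delta: number }       // nudge left / right
  | { kind: "decision"; accept: boolean }; // binary go / don't go

// Hypothetical simulator hook: executes a command within its own limits.
interface Simulator {
  apply(cmd: RideCommand): void;
}

// The gesture layer buffers hand samples, recognizes a swipe when the
// hand lifts, and forwards the resulting command to the simulator.
class CurvedScreenController {
  private samples: HandSample[] = [];

  constructor(
    private sim: Simulator,
    private recognize: (samples: HandSample[]) => RideCommand | null
  ) {}

  onHandSample(s: HandSample): void {
    this.samples.push(s);
  }

  onHandLift(): void {
    const cmd = this.recognize(this.samples);
    if (cmd) this.sim.apply(cmd);
    this.samples = [];
  }
}
```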

 
 
Annotated Screens
 

Control Over Space & Distance


PROXIMITY PREFERENCE
How far the car is from objects on the road — distance to medians, other vehicles, oncoming traffic, lane keeping.

Speed Up | Slow Down

Nudge Left | Nudge Right

ALWAYS AVAILABLE
Users have the option to influence this at any point in their autonomous driving.

 
 

Agency in Decision Making


CONTEXTUAL DECISIONS
The AI of the car asks the human for a decision between binary options: going or not going.

Passing a Bus — Yes | No

Blind Turn — Yes | No

SITUATIONAL AWARENESS
Keeping the driver alert and engaged as these decisions demand an understanding of the environment and the situation at hand.

 
 

Human + AI Collaboration


BUILDING A RELATIONSHIP
The ultimate goal is to create a system of communication between the human and the AI. The curved interactive screen gives the human a dedicated space for input, and the AI responds back through a space of its own.

SINGLE POINT OF COMMUNICATION
A system that becomes a single point of communication will enable people to actually see themselves using a self-driving car and feel that they have agency over autonomy.

 

Creating Unique Interaction Patterns

Indirect, Loose-Rein Control

Swipes not taps

Manual driving today is a direct, 1:1 system of engagement (a tight rein). The curved screen is designed for loose-rein, indirect control, so we created an interaction pattern with no taps, buttons, or direct presses — only swipes.

Leveraging the driving metaphors

To create a familiar yet distinctive interaction pattern, we leveraged the metaphors of current driving tasks and designed the curved screen to mimic them. Swiping up to speed up mirrors pressing the gas pedal; swiping left or right mirrors turning the steering wheel.

 

Curved Control for Proximity Preference


GESTURAL CONTROL
Vertical swipes always indicate speed, momentum, and movement forward or back. Lateral swipes always indicate horizontal motion.

BLIND REACHING
The separation of these two continuous actions allows the user to know where they are reaching, and know what outcomes are available, without looking down.

Accurate touching isn’t required, especially to change speed, with the whole screen acting as an activation area.
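A rough sketch of how this classification could work; the threshold and names are stand-in assumptions rather than our tested parameters.

```typescript
// Hypothetical swipe classifier for the curved control. Vertical motion
// always maps to speed; lateral motion always maps to a nudge. Because
// classification depends only on the swipe vector, not on where the swipe
// starts, the whole screen acts as the activation area (blind reaching).

type SwipeCommand =
  | { kind: "speed"; delta: number }  // positive = speed up
  | { kind: "nudge"; delta: number }; // positive = nudge right

const MIN_TRAVEL = 0.08; // assumed dead zone: shorter motions carry no intent

function classifySwipe(dx: number, dy: number): SwipeCommand | null {
  if (Math.hypot(dx, dy) < MIN_TRAVEL) return null;
  // Disambiguate by dominant axis, echoing the driving metaphors:
  // swiping up is the gas pedal, swiping sideways is the steering wheel.
  if (Math.abs(dy) >= Math.abs(dx)) {
    return { kind: "speed", delta: -dy }; // screen y grows downward
  }
  return { kind: "nudge", delta: dx };
}
```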

 
 

Curved Control for Contextual Decisions


SIMPLE DECISION MAKING
In binary, go-or-don't-go decisions, the AI offers a prompt; the user can then physically push a confirmation toward the AI, or pull it back to dismiss the option.

ERROR PREVENTION
The accordion-style, collapsible animations have a high activation threshold that allows the user to change their mind mid-motion.

The shape of the curve and the swipe interaction itself ensure that an intention isn't triggered by accident.
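The error-prevention logic could be sketched along these lines, with assumed threshold values in place of our tested ones.

```typescript
// Hypothetical push/pull resolution for binary prompts. Nothing commits
// until the hand lifts past a high threshold, so a user can reverse course
// mid-motion and the accordion animation simply snaps back.

const CONFIRM_THRESHOLD = 0.6;  // push this far toward the AI to confirm
const DISMISS_THRESHOLD = -0.4; // pull this far back to dismiss

type DecisionState = "pending" | "confirmed" | "dismissed";

// `displacement` is the prompt's current offset along the push/pull axis;
// `released` is true once the user lifts their hand.
function resolveDecision(displacement: number, released: boolean): DecisionState {
  if (!released) return "pending"; // mid-motion: still fully revocable
  if (displacement >= CONFIRM_THRESHOLD) return "confirmed";
  if (displacement <= DISMISS_THRESHOLD) return "dismissed";
  return "pending"; // released inside the dead zone: snap back, no action
}
```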

 
 
 

AI Prompt & Feedback for Contextual Decisions


SECONDARY SCREEN PROMPTS:
DECLOAKING THE ‘BLACK BOX’ OF AI
The AI lives in the space of the secondary screen, through which it communicates that it is aware of the environment, and understands that the human may want to input their preference over the situation — providing transparency.

ENGAGEMENT THROUGH INVOLVEMENT
As much as we want users to be fully engaged in supervising the semi-autonomous process, we know that, inevitably, they will reach for their phones. Clear visual signifiers and affordances allow people to look up at either screen and know what options are available.

 
 

AI Prompt & Feedback for Proximity Preference


“FILL TO TARGET” FEEDBACK:
Indirect control means the car does not respond immediately to the user’s intention and input in their space. Before executing the user’s intention, the AI waits for the human to set their preference, visualized with a white target line that is synced to the curved control.

TRANSPARENCY & COMMUNICATION
The visual feedback shows how the car executes the command and catches up to the user’s setting with a progress fill, until the two meet, indicating completion of the intent.
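A simplified sketch of this feedback loop, with an assumed catch-up rate and invented names:

```typescript
// Hypothetical "fill to target" feedback. The user's swipe moves the white
// target line; the progress fill then catches up only as fast as the car
// actually executes the change, and the two meeting signals completion.

class FillToTarget {
  private fill = 0;   // what the car has executed so far (0..1)
  private target = 0; // the user's setting, synced to the curved control

  setTarget(value: number): void {
    // Indirect control: updating the target does not move the car instantly.
    this.target = Math.min(1, Math.max(0, value));
  }

  // Advance the fill each animation frame; returns true once intent is met.
  tick(dtSeconds: number): boolean {
    const RATE = 0.5; // assumed execution rate, in fill units per second
    const gap = this.target - this.fill;
    this.fill += Math.sign(gap) * Math.min(Math.abs(gap), RATE * dtSeconds);
    return Math.abs(this.target - this.fill) < 1e-3;
  }
}
```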

 

Solving a Wicked Problem:
Research & Analysis

Exploring the state of self-driving

Academic & Industry Research Conclusions

Interviewing stakeholders and industry experts, and diving into available research, informed our understanding of how people think about the spectrum of autonomy, where the industry is heading, and areas of opportunity.

 

A COMMON MISCONCEPTION IS THAT AUTONOMY IS BINARY

People’s immediate thoughts involve imagining a fully robotic “living room on wheels” scenario

 
 

AUTONOMY IS FLUID AND HUMAN SUPERVISION IS REQUIRED

The reality is somewhere in between, as self-driving cars still require drivers to take over control in specific instances.

 
 
 

TECH IS STRIVING FOR FULL AUTONOMY,
CURRENT STATE IS PARTIAL AUTONOMY

As technology improves, so will the AI, but realistically the next 5-10 years of production will be co-driving.

 

Understanding a Future User

Primary Research & Synthesis

With a comprehensive idea of the industry, the technology, and our area of opportunity, we had to start understanding our user. Our mission was undefined and unrestricted, which left us with many unanswered questions.

 

RESEARCH GOALS
How people interact with autonomy and self-driving cars, and how they react to being driven around by an AI. What makes people trust or distrust the AI. What people would miss about driving.

ANALYSIS
Contextual Inquiry, Coding + Affinity Diagram
In order to tackle an open-ended problem, keep an open mind, and create aggregated takeaways, we conducted a contextual inquiry analysis, leading to natural groupings and data-backed conclusions without preconceived notions.

 

METHODOLOGY
Wizard-of-Oz Ride Along Experiment
We created a custom experiment in which participants rode inside a vehicle that they believed was autonomous.

Tesla Owner Interviews
We needed to reach the population closest to experiencing autonomy — to understand the pain points with AutoPilot and why they do or do not use it.

Car Enthusiast Interviews
Diving into their relationship with their cars and what they will miss the most once driving is less manual.

 
We wrapped up the Discovery Phase by turning nearly 1,300 data points into themes and insights, leading to “How Might We” statements that drove us into ideation, prototyping, and design.


 
 

Observing HOW Users React to Autonomy

Creating a Custom Experiment

Participants were driven around in a car with simulated autonomy; most believed that they were being driven by an AI. Participants were a representative sample across demographics, tech familiarity, and drivers/non-drivers.

 

CAR INTERIOR EXPERIENCE

 

TURNING USER REACTIONS INTO DATA

 

Users Feel Powerless Over Autonomy

Data Points into Two Areas of UX Opportunity

Even though full autonomy is supposed to be infallible, our research participants were still visibly uncomfortable with self-driving operations, and Tesla owners reported anxiety when using AutoPilot.

 
 

TURNING KEY THEMES INTO OPPORTUNITY

 
 

PROXIMITY PREFERENCES

Users constantly care about a car’s position relative to other objects nearby and found themselves wanting to tweak that distance

 

CONTEXTUAL DECISIONS

Participants judged their experience by evaluating how the car handled specific moments in comparison with what they would do

 

DECISION MAKING IS FLUID

The decisions on how a car should act are dependent on many different personal preferences that are multiplied across different contexts in people’s lives

 
Driving preferences factor in someone’s usual driving style, weather, who is in the car, and mood. All of these are again multiplied by each individual driving scenario.


 

Prototyping & User Testing

 

HOW do we give users control?

Idea Generation & Validation

Following our discovery phase of research, we launched into generating far-out concepts, then reined them back in during feasibility and risk assessment discussions.

Starting with simple mockups on paper and simulating interactions allowed us to quickly validate our concepts and narrow our scope. We assessed the usefulness and viability of our product while generating new ideas for interaction methods.

 
Generating ideas using round-robin visioning, involving the whole team


 
Physical prototyping and bodystorming allowed for quick concept validations with real users at every iteration. Integrating physical low fidelity prototypes with an autonomous vehicle simulator added realism, and stronger insights.


 

Methodical user testing of independent variables

 

Prototyping and testing in stages

Knowing that designing a new product with interdependent variables is akin to the chicken-and-egg problem, we decided to split each piece into its own set of user tests. Each result would guide us in building the next element. At every step, real users (a representative sample) with varying degrees of self-driving car knowledge and experience were brought in for user testing.

 

Connecting Ideas to Users’ Conceptions


INITIAL DESIGN & PLACEMENT

Starting with simple interactions, we gauged users’ understanding of the concept

 
 

Physical Affordances Mapped to Mental Model


SHAPE TESTING

Creating a physical connection through the curve emphasized the level of control

 
 

Creating an Effective Interaction Pattern


TESTING INTERACTIONS

Simple visuals and clear feedback allowed users to quickly respond to the option — less thinking required

 
 

Giving the Right Amount of Transparency


DETAILED DESIGN OF FEEDBACK

Icon reinforcement did not improve comprehension of the car’s actions

 
 

Cohesive Form and Design Language


FULLY INTERACTIVE TESTING

Creating a progress fill helps users further understand how the AI is executing their command (beyond looking outside)

 
 
 
 

MOST USERS STATED THEY WOULD ABSOLUTELY USE THIS INSTEAD OF CURRENTLY AVAILABLE OPTIONS

“I don’t like having to intervene and then re-engage the car’s self-driving...I would rather make a decision and direct the car...I would definitely rather use [this] control.”

Tesla Autopilot User
Usability Testing Participant

 
 

Human + AI Relationship Through Design

 

Bringing Understanding and Comfort to AI Interactions

TRANSPARENCY THROUGH COMMUNICATION
No more feeling powerless during autonomous driving or confused by the AI’s actions.

REDUCED ANXIETY
Increased control imbues a sense of safety. Customizing the ride makes users feel significantly more comfortable and engaged.

MORE TRUST = MORE TIME IN AUTONOMY
As the car learns a user’s preferences, drivers will be far more inclined to use autonomous mode frequently.

 
 

Productization & Roadmap

Technical Feasibility & Integration

Intervention within the Sphere of Safety

Self-driving mode is safe, and the car will do what it thinks is appropriate for the situation. Letting the user have a degree of control over autonomy lives within that sphere and allows for interventions within the thresholds of safety and security.

 
 

Diagram showing the simplified decision flow between the AI interpreting its surroundings and executing the appropriate action.
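One way to picture this clamping in code; the envelope, bounds, and names are invented for illustration.

```typescript
// Hypothetical "sphere of safety" check: a user's preference is treated as
// a request and clamped to limits the AI derives from the current
// situation, so every intervention stays within the safety thresholds.

interface SafetyEnvelope {
  minSpeed: number; // m/s floor, e.g. derived from traffic flow (assumed)
  maxSpeed: number; // m/s ceiling, e.g. from limits and weather (assumed)
}

function applySpeedPreference(requested: number, env: SafetyEnvelope): number {
  // The AI honors the preference only as far as its envelope allows.
  return Math.min(env.maxSpeed, Math.max(env.minSpeed, requested));
}

// Example: a "speed up" request beyond the envelope is softly capped.
const limited = applySpeedPreference(35, { minSpeed: 10, maxSpeed: 30 }); // 30
```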

 
 

Creating a future experience in autonomy

We built a fully functional prototype, which allowed us to learn from real users and iterate on the form and function. However, we envision the final form being built from curved touchscreen monitors seamlessly integrated into the dashboard of the future.

Expanding the Set of Use Cases

Before this can be fully productized, the list of binary prompts needs to be expanded to cover the situations that users want control over. By leveraging the flexibility of a digital screen, new prompts can be incorporated quickly and easily.

Building User Profiles from Data

Data gathered from the next level of testing will inform an even richer set of prompts that can propagate to all drivers, learning and adjusting over time.

Single Point of Communication

As the list of possible areas of intervention with the AI expands, the newly created interaction patterns and methodologies for input and output can be leveraged for future experiences, even in Level 5 autonomy.

 
 
 
 

Next Steps & Takeaways

Going into production

In-Vehicle Integration

Our client, HARMAN, has accepted our project in full and plans to integrate it into their prototype vehicles in the coming months.

Unexplored Opportunities

This project was extremely exploratory, and throughout each phase we made research and design decisions that took us on a path towards a curved screen interface. However, there are many more avenues to explore with more resources.

PRODUCT
Physical responsiveness as an additional signifier: haptic feedback, responsive slope/curve, audio prompts and feedback tones.

USER TESTING
Testing in higher fidelity with self-driving operators from Uber, Waymo, Lyft.

RESEARCH
Human factors studies for the physical parameters. Mental model analysis (relatedness study) for the interaction patterns.

PROTOTYPING IS KEY TO FUTURE PRODUCTS

Asking “what do you want?” isn’t useful

When inventing future products that have limited existing analogies, it is not useful to ask people what they want. “What would you want to do in a car if you didn’t have to drive anymore?” will yield a limited set of answers that isn’t all that different from existing perceptions. The key is putting ideas in front of people and evaluating the reaction and usability — that is where the best ideas will emerge.

Trust your intuition and leverage different fidelities

There is a time and a place to user test everything; however, when designing products with no existing design patterns, it's important to trust your gut and know when usability testing is appropriate. Different levels of fidelity yield different results, and general users have quality thresholds that must be met before useful information can be gathered.