For a vehicle to drive itself, it needs to
know where it is in the world, and it also needs to know what’s around it. Based on these
factors, it needs to be able to make smart and safe driving decisions in the real world.
And once you get in the car, you start to get a real feel for the way the technology
works and what it’s like in real driving situations. So there are a few things that have to happen
before the car can safely drive itself. First, it has to figure out its location in
the world. So we use GPS, but GPS isn’t always that accurate, which is why we rely on our
other sensors, like the laser, which picks up on details in the environment that help
us identify a more precise location. So think of the sensors as the car’s eyes
and ears. But with eyes that can see far off into the distance and 360 degrees around the
car. And the great thing about having all of these sensors is that they can talk to
each other and get cross-checked information about the environment. So while we take in
a ton of information using our sensors, it’s our software that really processes all this
and differentiates between objects. All these objects are visible on the laptop that the
safety drivers use while testing the vehicles. Based on what the vehicle senses and processes,
these objects will be represented by different colored boxes. Cyclists will be red, pedestrians
yellow, and other vehicles will appear as either green or pink. These boxes demonstrate the
processing that takes place within the software. And think about the complexity here. People
look different, cars have different shapes and sizes. Yet despite these nuances, the
software needs to classify these objects appropriately based on factors like their shape, movement
pattern, or location. For example, if there’s a cyclist in the bike
lane, the vehicle understands that this is a cyclist, not another object like a car or
a pedestrian, so the cyclist appears as a red box on the safety driver’s laptop. And
the software can also detect the cyclist’s hand signal and yield to them appropriately.
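As a rough illustration, the color coding described above can be sketched as a lookup keyed by object class. The classes and colors follow the transcript, but the classifier here is a made-up placeholder using the kind of shape, movement, and location cues mentioned, not the actual perception software.

```python
# Toy sketch of the color coding described in the transcript. The class-to-
# color mapping matches the narration; the classifier itself is an invented
# placeholder, not the real software.

CLASS_COLORS = {
    "cyclist": "red",
    "pedestrian": "yellow",
    "vehicle": "green",  # the transcript says vehicles show as green or pink
}

def classify(obj):
    """Crude stand-in classifier using shape, movement, and location cues."""
    if obj["in_bike_lane"] and obj["width_m"] < 1.0:
        return "cyclist"
    if obj["speed_mps"] < 3.0 and obj["width_m"] < 1.0:
        return "pedestrian"
    return "vehicle"

def display_color(obj):
    """Color of the box drawn on the safety driver's laptop."""
    return CLASS_COLORS[classify(obj)]

rider = {"in_bike_lane": True, "width_m": 0.6, "speed_mps": 5.0}
print(display_color(rider))  # -> red
```

A narrow, slow object off the bike lane would map to "pedestrian" and show as yellow; anything wide maps to "vehicle" and shows as green.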
When our engineers think about where the car should drive and how, safety is always the
top priority. So the vehicle takes into account many things, like how close it is to other
objects, or matching speed with traffic, or anticipating other cars cutting in.
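The considerations just listed, closeness to other objects, matching speed with traffic, and anticipating cut-ins, can be pictured as terms in a cost function scored over candidate actions. This is a generic sketch with invented weights and features, not the project's actual planner.

```python
# Generic sketch of scoring candidate driving actions against the
# considerations mentioned above. Weights, features, and the candidate
# actions are all invented for illustration.

def trajectory_cost(clearance_m, speed_delta_mps, cut_in_risk):
    """Lower is better: penalize small clearance, speed mismatch, cut-in risk."""
    w_clear, w_speed, w_risk = 2.0, 1.0, 5.0
    clearance_penalty = w_clear / max(clearance_m, 0.1)
    return clearance_penalty + w_speed * abs(speed_delta_mps) + w_risk * cut_in_risk

candidates = {
    "keep_lane":   trajectory_cost(clearance_m=4.0, speed_delta_mps=0.0, cut_in_risk=0.1),
    "change_lane": trajectory_cost(clearance_m=1.5, speed_delta_mps=2.0, cut_in_risk=0.4),
}
best = min(candidates, key=candidates.get)
print(best)  # -> keep_lane
```

Here the lane change scores worse because it trades away clearance and speed match while raising cut-in risk, so the car keeps its lane.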
For example, as a passenger, it can feel a little uncomfortable passing by a large vehicle
on the road. Our engineers have taught the software to detect large vehicles, and
the laptop shows them as larger boxes on screen. As our vehicle passes a large truck, it
will actually keep to the farther side of the lane and give itself a little more
space. And we’ve also taught the vehicle to recognize and navigate through construction
zones. The vehicle’s sensors can spot the orange signs and cones early to alert the
car to any lane blockage ahead, so it can change lanes safely.
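The "keep to the farther side of the lane" behavior can be pictured as a small lateral offset that grows with the neighboring vehicle's width. The thresholds and numbers in this sketch are invented; the real planner is certainly more involved.

```python
# Hedged sketch of biasing lane position away from a large vehicle in the
# adjacent lane. The threshold, cap, and lane width are invented for
# illustration only.

def lateral_offset_m(neighbor_width_m, large_threshold_m=2.3, max_offset_m=0.5):
    """Meters to shift toward the far side of the lane when passing a wide vehicle."""
    if neighbor_width_m <= large_threshold_m:
        return 0.0
    # Scale the shift with how much wider the neighbor is than the threshold,
    # capped so the car stays well inside its own lane.
    extra = neighbor_width_m - large_threshold_m
    return min(max_offset_m, extra)

print(lateral_offset_m(2.0))  # ordinary car: stay centered -> 0.0
print(lateral_offset_m(2.6))  # wide truck: shift away, capped at 0.5 m
```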
Another thing that’s really important is for the vehicle to drive in a naturalistic way,
because when it’s natural, and the car abides by social norms on the road, it’s also safer.
For example, at four-way stops, people typically rely on eye contact to communicate whose turn
it is. And in our case, the vehicle inches forward into the intersection to indicate
its intent. So, my role as a safety driver is first and
foremost to keep the car, myself, and everyone around me safe. And in addition to keeping
the car safe, I also provide detailed feedback to the developers and let them know if the
car does anything that maybe I wouldn’t have done personally. Maybe the car wasn’t assertive
enough in a lane change, or wasn’t fast enough at a green light. We provide this detailed
feedback so they can fine-tune the whole driving experience.
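The four-way-stop behavior described earlier, stop fully, inch forward to signal intent, then proceed, can be pictured as a tiny state machine. The states, speeds, and checks here are assumptions for illustration, not the vehicle's actual logic.

```python
# Toy state machine for the four-way-stop "inch forward" behavior the
# transcript describes. States, target speeds, and the clear-intersection
# check are all illustrative assumptions.

def stop_sign_step(state, fully_stopped, intersection_clear):
    """Advance the stop-sign state; returns (new_state, target_speed_mps)."""
    if state == "STOPPING":
        return ("CREEPING", 0.5) if fully_stopped else ("STOPPING", 0.0)
    if state == "CREEPING":
        # Inch forward to signal intent; commit only once the way is clear.
        return ("PROCEEDING", 4.0) if intersection_clear else ("CREEPING", 0.5)
    return ("PROCEEDING", 4.0)

state = "STOPPING"
for stopped, clear in [(False, False), (True, False), (True, True)]:
    state, speed = stop_sign_step(state, stopped, clear)
    print(state, speed)
```

The trace walks through a full stop, a creep into the intersection while other cars may still have right of way, and a commit once the intersection is clear.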
By getting out there and driving in the real world, we’re getting a better understanding
of what exactly it’s going to take to improve the safety and comfort and ease of transportation.
And that’s really what our project’s all about.