
Cars, Computing and the Future of Work: Specific topics of mutual interest



>>We’re going to get
started because it’s 01:05, so we’re just a little bit late, and we’ll just wait
for people to come in. Let’s try and make sure we
are mindful of people’s time. So maybe all of you can sit
down while we get started. So actually Ed is going to make
just a quick announcement about what the plans are for
the boards and stuff.>>Oh sure. So we got a lot
of questions about this, so let’s just take one second
and give it up for Leah and Joe, who do an awesome job. They are so busy they
won’t even turn and look. We’re getting a lot of
questions about the boards. We think they’re fascinating. One of the things we’re
doing at the end of today is we’ll be taking high-res pictures of these and we’re happy to share
them back to you. So it's a good capture of your thoughts, and they're a good opportunity if you want to keep driving them forward for some of the research collaborations.>>Great. So with that we’re going
to go ahead and get started. So we have, just like
in the morning session, we have four presentations today. So John Lee is going to
kick it off and then Assa, Shamcey, and then myself. Then we’re going to again
take a 20 minute break, and then basically break
you out into groups again. So don’t forget that we have all
of these post-its on the table, so please use them as
freely as possible. We put up some clean sheets for
you to go ahead and put up. So with that, we’re going to
go ahead and get started. So our first speaker is John and I’m just going
to let him take it away.>>Great. Thank you. And it’s been just a pleasure
working with you today, and the rest of the conference
has been amazing. What I’ll be talking about
is trust and automation. There’s a paper I wrote with
a graduate student back about 15 years ago and some of the material I’ll
talk about here is from that. Unfortunately, that paper,
like most papers, has lots of problems, and graduate students
and I are working on an updated version that hopefully
addresses all of those problems. The title is important. We’re talking about
trust in automation but also trust through technology. So how you develop trust
in people that are represented or communicated to you through technology I
think is important. Research is sponsored by
NSF, NASA, Toyota CSRC. Also, a collaboration with
JD Powers figures in this talk. I’d like to acknowledge them. These are my students. They make this possible. This is my super cute dog here. In the textbook we wrote, we have a section on his
relationship with our Roomba, which is relevant here because sometimes the stakeholders
that you design for, the people whose trust matters, may not be the people you think. In this case, with the Roomba, the dog is somebody
they should design for, but I don’t think they do. What I’m going to talk about
first is why does trust matter. I think it has
a powerful influence on behavior particularly as technology is becoming increasingly
agentic and smart. Second, what is trust? It’s a multifaceted concept, an attitude and I think also a relationship. So it’s relational as
well as attitudinal. How much should we trust? More is not necessarily better as we heard from some
of Eric’s experience. Maybe over trusting
his Tesla autopilot. We want it to be calibrated with
the capability of the automation, the trustworthiness
of the automation. We also want it to be
aligned with goals. Is the goal of the automation
the same as the person’s? Then finally, who is trusted
and who is trusting? Who are the stakeholders
beyond the direct users? Incidental users like pedestrians
may not trust the automation. We’ve heard of a number of
people concerned about whether they would be able to keep
riding their bicycles. I ride my bike. I don’t ride my car. I don’t want to get killed. I want to trust the automation
to keep me safe. Here’s an example of trust
in automation gone bad. This is from the NTSB investigation
of the Tesla crash. This is a mundane picture, little scuff on the truck. The other one is too disturbing
to show because it took the top off the Tesla and
the head off the driver. What’s interesting
is this illustration of the degree of trust
in the technology. Autopilot was active during this gray period. Out of the 40 minutes or so, autopilot was active for 37 minutes; these are data points. The yellow is when there’s a visual warning. The green is when the hands were on the steering wheel. I think hands-on time was about seven seconds total. So he touches the steering wheel, then drops his hands back down. Worked beautifully for
37 minutes until it didn’t. The person trusted the automation, overtrusted the automation. So in terms of why trust matters, it guides behavior, as we see there. It guides behavior in a way that I think is underappreciated. Don Norman wrote a book on
the importance of emotion in how we relate to not just automation
but design in general. Antonio Damasio has written
a beautiful book on the influence of affect and emotion
on decision-making. Trust matters because
it influences behavior. It influences behavior
as we relate to technology as technology becomes
more human, more agentic. It is more active and
engaging with us. It might have a voice.
It might have eyes. When it gets like that
we tend to trust it, tend to respond to it
as if it was a person. A lot of work with
Cliff Nass shows that. Trust is active across
a broad range of relationships. It’s multifaceted as
I mentioned before. From the micro, how we trust
the defaults of our computer, the spell-checking and so on, to the meso, how we trust the brand. We believe Microsoft
is out to protect us, is on our side with respect
to privacy, that’s trust. But also at a macro-level, how we value money rests on trust. The paper in your back pocket
is worthless. It’s all about trust,
as is democracy. Democracy relies on trust. I thought it was interesting
Bill Gates was saying how education is the core flywheel to society, and then he backed off
and he said, “Well, actually trust might
be more important.” I agree with him. So what is trust? This comes out of
the collaboration with JD Powers. This is a quantitative
look at qualitative data. Looking at a network of topics, from comments on a survey of
people’s trust in automated driving. I’m not going to go
through this in detail. I’ve become fascinated with text analysis, but I think what’s interesting here is the cluster around trust, mature technology, and improving. It conveys the idea that it’s not just about the technology now; people also trust in the future. They trust that it’s going to get better. That it’s going to improve and
one day it will be sufficient. So that’s one view of
the complexity of trust. This is another view of
the complexity of trust, something I’m working on as we speak. This is a word cloud, but based on word embeddings. We extracted these words from subjective rating scales of trust: items from 16 different papers that present ways of measuring trust subjectively, so Likert-scale ratings. These are all the words from those scales, arrayed so that similar words are near each other using word embeddings, and then mapped to spread them across two dimensions. What you see here is a nice cluster of things related to trust: dependable, honest, reliable, sincere, timely, correct. Then interestingly, these are the sort of focus of the trust: the brand, the robot, the technology.
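A minimal sketch of the layout idea described above, assuming pretrained GloVe vectors via gensim and a placeholder word list rather than the actual scale items from the 16 papers:

```python
# Embed words taken from subjective trust scales and project them to 2D so that
# similar words land near each other. Word list and model choice are illustrative.
import gensim.downloader as api
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

trust_words = ["dependable", "honest", "reliable", "sincere", "timely",
               "correct", "predictable", "competent", "robot", "brand", "technology"]

vectors = api.load("glove-wiki-gigaword-100")            # pretrained word embeddings
kept = [w for w in trust_words if w in vectors]
X = [vectors[w] for w in kept]                           # one vector per scale word

xy = PCA(n_components=2).fit_transform(X)                # spread across two dimensions

fig, ax = plt.subplots()
ax.scatter(xy[:, 0], xy[:, 1])
for (x, y), word in zip(xy, kept):
    ax.annotate(word, (x, y))                            # label each point with its word
plt.show()
```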
So moving on to how much do we trust? This I thought was a great slide from the presentation yesterday. A quote from your CEO: “Building trust in
technology is crucial.” Completely agree with that. “It starts with us taking accountability for the
algorithms we make, the experiences we create,”
completely agree with that. “And ensuring that there is
more trust in technology each day.” I disagree with that last phrase. I would edit this to be
more trustworthy technology. You want to make sure
the technology has integrity, and that is the important point. Trust, ideally, should be calibrated with its capability
or trustworthiness. You don’t want people just to
trust your technology more, you want your technology to merit the trust and that’s why
I’d edit that phrase. Calibration, one way to do
it is through transparency. You make it more obvious what
the algorithms are doing, but there’s a surface and
depth component of this. The depth component is how
it’s actually behaving. The reliability, the
dependability of the system. The surface is what’s on the surface. The colors you use, the font choice that you use. Apparently lavender, if your
car is infused with lavender, you will trust the autopilot more. So that’s a surface feature. Obviously, that’s a way
of increasing trust without necessarily
increasing trustworthiness and that could be dangerous. Calibration through control. This is something missing
from that first paper. I think it’s really important. Being able to control and feel how the automation is responding to
you pushing it one way or another. Exploring it through that
is really important. So this shows the problem of
this trust and trustworthiness. Trust is your attitude, trustworthiness is the
capability of the system. Ideally, you’d be on that line
of appropriate trust. What can happen is
hopefully, you’ll be here, but you might be here where you trust things much
more than they merit, and this is the calibrated trust, you’re in the Tesla,
your hands are on your knees, your eyes on the road ready
to take over at any moment. Here is reality, I’ve got
to get a better picture, but that person’s dead
asleep in the car trusting way more
than is appropriate. Linda is standing up,
so it means I have three minutes, one
minute, two minutes?>>One minute.>>Varieties of control, a
really important point here. Obviously, there’s the pragmatic, you’re controlling to achieve a goal. It’s also communicative, you
control to signal to others, you also control to
learn about a system, you tap the brakes
to see if it’s icy. But control is for self-efficacy. Control is what makes us feel
like we matter in the universe. So control is really important. Springsteen put it maybe best, and in driving, this
is really important. People feel like they
want to be in control. So communicative control, this
is a paper by Josh Domeyer looking at the way vehicles
can signal to pedestrians; it’s through control,
not just lights, and arrows, and verbal signals. Who’s trusting? I think this is really critical because oftentimes
we’re just looking at this, a driver and some automated element, but really the driver
is in a network of elements that need to
be considered in total. So it’s not just the driver
trusting the technology, but it’s pedestrians
trusting the technology. So those incidental users
out in the environment, and it’s also not just
the vehicle technology, but the technology that’s
arranging the ride shares, and the people who are
riding in the car with you. If you’re a woman in Iran (I was talking to one of my students about this), she does not feel like she would trust a lot of the people that she may be paired with, guys who may be paired with her in the car. So who you get paired with, that network of your trusted riders
could be really important. So with that, one thing to give
you some food for thought: in 2001: A Space Odyssey, trust and capability started out good, appropriate trust in a highly capable system, and then Dave lost trust and unplugged HAL. But if you take the macro view, HAL was working for NASA, not for Dave. HAL was working fine; at the end it discovered that Dave was getting in the way and terminated Dave, or tried to, for the sake of the mission. So is that appropriate trust? Was HAL actually working the way it should? Did Dave lose trust
when he shouldn’t? Maybe he should have just died
quietly and let the mission go on. But with that, I think my time is up.>>Yeah, your time is up.>>Thank you for your time.>>Great. Awesome. Okay.
Yeah, we can clap. Sure. All right. So the mic is going to go around if you have a question. Here
and then Jesse afterwards.>>So one of the things
that I feel like we focus a lot on is capabilities, and then confidence, and capabilities of agents or automation in general. It was really
interesting to me to see your previous slide actually where you had capability and trustworthiness, where HAL’s capability never changed, it was highly capable. Yet there were other aspects
that changed the trust, and it was so great to see honesty
as well as one of the words, and really, really we
focus on capability, but I also want to know that my autonomous vehicle
is honest with me. It’s telling me that
it’s made some mistake, and it’s not quite good
at handling pedestrians. Being fair, and maybe
in many ways reflecting my morals, or perhaps the ethics
of a broader society, being benevolent, and we really
don’t focus enough on it. Trustworthiness, we very
often equate with capability. So I’m curious a little bit more, if you could comment
a little bit more on that.>>Yeah. Great point. This is something that
as I mentioned, that first paper equates
trustworthiness really with capability, and I think that’s not
a complete picture. I think there’s
multiple elements of that, but one that’s really
important is goal alignment. Here, in that classic case of HAL betraying Dave, the goal alignment was imperfect, to say the least. So trustworthiness, part of it is the alignment of the goals, and in the sense of aligning with NASA, HAL was great. Still highly capable, still
doing what it should. With driving, the goal
alignment is problematic. So the pedestrians, the vehicles
are not aligned with their goals. In fact, it’s a game situation
where the driver and the pedestrian are negotiating or competing for the right of way. More generally, when you optimize, who are you optimizing for? The individual, the traffic stream, or the traffic, and the pedestrians? So it’s complicated that way. So I think as we move into
these more network systems, the goal alignment becomes an element that
complexifies capability.>>So Jesse has
a question and then Dawn. But before I get started, so we have a few new people here. We have post-its on the table, so as you hear comments because our goal is to try to
generate research ideas, please jot down some notes
and plaster up on the thing. If I see that you haven’t
written anything, well, I’m going to call on you to just
plaster something on the wall. So Jesse, you and
then Dawn. Go ahead.>>So John, I want to hear about your idea regarding
the concept of calibration. So without gold standards, there is no calibration. So I am thinking about what is
the gold standard of trust? Say, previously, assume the Tesla autopilot works perfectly, but then suddenly there is an accident. So the person’s trust should decrease, but then on the x-axis, where is the gold standard? We know that we should not trust it completely. But where is the gold standard, does it move, and to what extent will it move?>>Yeah. So you’re not going to let me off
the hook with this, are you? We talked briefly before and I thought I sidestepped
that whole issue well. So it is really complicated. So this trustworthiness,
one element that I didn’t mention is that
there’s a time period to this, it’s a dynamic, it’s not the same. So with Tesla, it can be really good, and you can have
your eyes off the road. But then as we were
talking before lunch that car moves out and you’re
now the head of the queue, the capability of
that vehicle has dropped in some sense and it should demand
your attention to the road. So there’s a dynamic component that is really important to consider. So I think the trick and
there’s two parts that make it difficult to measure is
the timescale of trustworthiness, and how you estimate that, how you quantify that,
and that’s tricky. But then trust, how
do you measure that? Part of that measurement is maybe
through subjective scaling, and that’s why I’ve
pulled together 16 of these papers that have
different measures of trust. But also maybe more importantly, behavioral measures of trust, and we’ve got a couple
of papers looking at how people respond like with
vicarious steering, for example, so you get a sense of how engaged they are.>>So Dawn, did you have a comment?>>Several. First of all, I really liked
that diagram [inaudible]. Because actually, I think
the right-hand side, the two dots should be together. Because the real question is
who is viewing the diagram, or if you will, when it says trust, it’s trust by whom?>>Yes.>>So capability is
capability of the device, and trust is trust by Dave. So in that sense, the two diagrams are correct. But if you now say what it is about trust from the NASA’s point of view, the diagram on the right
should have the two blues. So actually, in other words, you maybe need three diagrams
to demonstrate that a point of view is critical
in assessing trust.>>Yeah.>>The second thing I wanted to say is, you can’t make a system that tells the driver what it’s not capable of doing, because it doesn’t know what
it is not capable of doing. We have unexpected things
that happen which well, the whole point about
unexpected things is, first of all, they happen a lot, and second of all,
they’re unexpected. But I think a really good
problem and I want to just say this to get it in the record
for the discussion period, there’s the problem of
trust and over trust, and over trust is probably
the more difficult problem anyways. Because we work hard to
cause people to trust, and as the Tesla example and medical examples show,
that’s a problem. But how do you design displays and I would like to recommend
not using the word trust, because that is a very loaded term, and I don’t know what
the right substitute is, but I’m going to say effectiveness. One thing we know is if you
have a map and you show a dot that says where you are, but the system doesn’t really know. So suppose you show a dot that’s shaded and blurred, and the radius is a function of its uncertainty; the same philosophy could be used for a lot of the information in the automobile.
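A minimal sketch of that blurred-dot suggestion, with made-up position and uncertainty values purely for illustration:

```python
# Render the estimated position as a point inside a translucent disc whose radius
# grows with the position uncertainty, so the display never claims more precision
# than the system actually has. The coordinates and sigma below are assumptions.
import matplotlib.pyplot as plt
from matplotlib.patches import Circle

x, y, sigma_m = 47.2, 12.6, 8.0     # estimated position and 1-sigma uncertainty (metres)

fig, ax = plt.subplots()
ax.add_patch(Circle((x, y), radius=2 * sigma_m, alpha=0.2))   # shaded uncertainty halo
ax.add_patch(Circle((x, y), radius=0.5, alpha=0.9))           # the position estimate itself
ax.set_xlim(x - 30, x + 30)
ax.set_ylim(y - 30, y + 30)
ax.set_aspect("equal")
plt.show()
```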
One of the things that people are trying is having the car, often for test purposes, show a picture of what it can see, with outlines around the objects it has identified. That’s often far too complex
for everyday people. But it’s a start, and I
wonder if there’s a way. Then the last comment is, I know you know this
is for the record. The famous Volvo accident, the car actually did
detect the pedestrian, but the certainty fell below
its threshold for reporting. But if he could instead give
us a certainty measure, one that’s not disturbing though because this is
going to happen a lot, but showing that, yeah, I think I detect something
but I’m not certain. Something that indicates
that not the words, etc. So this is for the further
research discussion.>>Okay. I’m going to
allow one more question and then we’ll [inaudible].>>Okay. I’m sorry. So there was
actually a couple of questions.>>Okay.>>The first one, I agree
with you completely. This problem of interpretation
here I think has to do with goal alignment, which was missing from that first paper and is a problem that I hope to rectify. Then on the point of uncertainty, one of my previous graduate students, Bobby Suplet, actually wrote a really nice paper on using
sonification to indicate uncertainty. So background awareness
supported through auditory cues of how the automation
is understanding the world. I’m not sure if that’s going
to work when you want to play your Bruce Springsteen
as you’re driving down the road, but it’s a start.>>So quick question.>>Okay. Quick question. For the trustworthiness
from the technical side, definitely we can make the software and the hardware more secure and more robust. But how does that really map back to how trustworthy it should be at a conceptual level?>>So as you make the hardware
and software more robust, how is it reflected?>>Right. Back to your graph
about trust and trustworthiness.>>Yeah. So I think that as you make the hardware
and software more robust, that’s going to increase the
trustworthiness of the system. Then hopefully, if that’s
represented well at the surface and depth
features of transparency, if you give people the right amount
of control so they feel engaged with it
and understand it, that should increase their trust to hopefully the level of capability that you’ve given
it, that you’ve improved.>>[inaudible] but that
implies that it doesn’t rain as the problem showing capability.>>Or resiliency. If I was Dave Woods, I’d say, “It’s supposed to be resilient,”
which I think is a good point. So resiliency,
robustness, capability, words we should think
about as we develop.>>So I think this is
a really great discussion that we’ll have as part of the breakout sessions when we talk about trust because it
sounds like a big topic. So let’s give John
a round of applause. Our next speaker is AJ, and she is going to be
talking for 10 minutes.>>I hope so. I’m going to
try to be really quick. I know Linda manages
a tight schedule. Hi, everybody. I work with Eric, and [inaudible], and with Gagan. So I’m going to try to cover
our joint work in the space. I think this is going to be a shift because I’m going to take
more of an AI perspective. Like as someone trying
to build AI systems, what is the role of
humans in these systems? What does that mean for the self-driving car experiences
we’ve been talking about? What kind of capabilities should we be putting into AI systems so that they can really utilize this human in the loop, having
a driver in the car, much better than what the current
systems are capable of? So to do that, I want to start with just an overview of what a machine
learning pipeline looks like. How people are building machine
learning systems today, and this actually includes the self-driving car
functionalities as well. If you ask a machine learning person how they are building these systems, they’re going to give you
a picture that’s on the top: I get my favorite data, I chunk it up into a training set and a test set, optimize the parameters of my model, and look at the accuracy. If I’m happy, if that’s better than the past accuracy, I’m very good.
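A minimal sketch of that "textbook" pipeline, with a placeholder dataset and model; the point is simply that no human appears anywhere in the loop:

```python
# Split the data, fit a model, check accuracy, ship it if the number went up.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # optimize parameters
accuracy = accuracy_score(y_test, model.predict(X_test))              # look at the accuracy

previous_accuracy = 0.95   # assumed accuracy of the currently deployed model
if accuracy > previous_accuracy:
    print(f"New model wins: {accuracy:.3f} > {previous_accuracy:.3f}, ship it.")
```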
However, as we are looking into real engineering pipelines, especially the work going on at Microsoft, we are seeing that humans are part of every step of
the development lifecycle, and also they are the users
of these systems. So humans are a big part of
how these systems are trained, developed, they give
the objective functions, they tell the machines
what they should be doing. They are part of the execution
of the system because every time a Tesla cars gives
a warning light and says, “Please help me,” it
is actually getting to human-in-the-loop for
reliability purposes. Finally, our real goal is not really getting the most accurate machine
out in the field, our real goal is really having the machine that provides
the most value for the human, and thus I think what the purpose
of AI development should be. I will just try to make a quick case for why we think about
human-in-the-loop so much. There are multiple reasons. They are quite general
but I think they apply to self-driving cars or semi
self-driving cars as well. First of all, unless AI
systems are perfect, there are actually particular complementary strengths we see from humans and machines. We see this in medicine. But I think the car settings are
really interesting for me because people and machines have different
sensors to perceive the world. So it is quite unlikely
that something that is fooling the sensors of a car
is going to fool the person, or when a lighting condition goes bad and the human
cannot really see much, it’s not necessarily going to be the same problem for the machine. So we really want to see what that complementary
strength looks like. John actually mentioned the ethics, the value judgments
question which is->>You’re not going to do the trolley problem, are you?>>No. I’m not going to. No. That’s not my favorite problem. I actually feel that that is
a superficial problem that just gets us out of the real
problems, but at the end->>You need to pay attention.>>-of the day though, we
still make value judgments. Every time an engineer puts
an objective function into a system, they are making a value judgment. It doesn’t look like
it’s really a problem, but they’re making value judgments about how fast the
car should be going. It may say that it’s
okay to override the traffic rules because maybe people are not following the traffic rules as they are written in the book. So all of those are actually value judgments that are going into the systems. We need people to debug these systems and figure out how
they can be improved, the data collection and so forth. But the main thing I
want to talk about today is the role of joint execution
for reliability. We know that the cars on
the streets today are not perfect, they actually fail a lot, and the only reason we can put
them into the world today is our reliance on the people as
correctors of these systems, and have that virtuous feedback
loop back to the company is because every time a human
corrects the Tesla car, that’s a signal that the
car or the bigger company can use to improve the
algorithms for these systems. So I just want to talk about
why these failures happen, and what is the reliance
on the human. With AI algorithms
requiring a lot of data, we rely on these kinds of platforms. This is AirSim from Microsoft. This is a simulation platform. It’s available for drones. It’s also available for cars. A lot of the car companies are using these kinds of
simulation platforms and reinforcement learning to build the algorithms that are
going into the cars today. When you look into it,
it is nicely lighted. There’s a street. The car bumps up here and there and then finally
learns how to drive. However, this is what the real world looks like. This is quite different than
the simulation platform. This is the screen from the Uber car. Unfortunately, the car killed
a pedestrian during the drive. So what you’re seeing
is that there is this mismatch between the simulation
platform and the real world. No simulation platform can capture the complexity
of the real world. Because of this mismatch, our current algorithms
have blind spots in them. They cannot really
learn all the features that are important to
function in the world, and when they get into a space where this outsider that is not represented in the simulation
is in the real world, they fail and they have
no idea that they are failing. So these confidence scores
being able to signal a person, they completely go out of
the window when we have these mismatches between
the training platforms and the real world. So what we did in
this particular research paper, which is just one step towards giving computer systems a capability to know what they don’t know, is we actually used human data, human demonstrations and corrections, to teach machines what they don’t know, build these maps of confidence, and then use them to be able to hand off decisions to humans.
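A minimal sketch of that hand-off idea, using placeholder data, features, and a 0.5 threshold rather than the actual method from the paper:

```python
# Use human corrections collected during execution as labels for a second "blind spot"
# model that predicts where the primary policy is likely to be wrong, then defer to
# the human when that predicted risk is high.
import numpy as np
from sklearn.linear_model import LogisticRegression

# state features observed while the agent acted, and whether a human had to correct it
states = np.random.rand(200, 4)                     # placeholder state features
human_corrected = (states[:, 0] > 0.8).astype(int)  # placeholder correction signal

blind_spot_model = LogisticRegression().fit(states, human_corrected)

def act(state, agent_action):
    """Return the agent's action, or hand off to the human in suspected blind spots."""
    risk = blind_spot_model.predict_proba(state.reshape(1, -1))[0, 1]
    return "HAND_OFF_TO_HUMAN" if risk > 0.5 else agent_action

print(act(np.array([0.9, 0.2, 0.1, 0.4]), "continue"))
```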
Of course, right now, this is all happening in small toy problems that we can run on our own machines. This is not real self-driving cars. But I think this is one
more about their capabilities. But the problems in this driving space are not
limited to machine blind spots. Humans have blind spots too. This is why we are excited about the prospect of self-driving cars. What you see here is
a human blind spot. So we should really think about
where machines have blind spots, where humans have blind spots, and really have algorithms
that can reason about both together to really think about who should be in control. So in our follow-up work, what we looked into is really how we can collect data both from
the machines and humans, bring it together so that we can map out the space of complementarity, really figure out who
is more reliable. I know you don’t like the word. I think robust was the word. Reliable is a good word? Who is a more reliable actor
in which situation, and we can manage
the control that way. So the last points I
want to make is that unless we can really make
those kind of algorithms work, and build agents that have a good understanding
of their capabilities, we rely on humans and
we rely on their trust. When I talked to Eric about
his experience with the Tesla, the way he describes his experience
is that he watches the car. He watches what the car
can do and cannot do. Through that, humans build
mental models of trust, and they say, “On this street, I can trust the car. I don’t have to watch over it a lot, but I know this exit is problematic and at this exit, I know I should be watching
over it very carefully.” However, all of these cars
like any software, get updated all the time. It is just one of the problems that we have when humans are in the loop, but our AI systems are not designed and optimized for having a human in the loop. These objective functions for updating models have no consideration for the human’s trust. They have no consideration of the human’s mental models. When that happens, an update can actually kill the mental
model of the human. So there are actually a lot of new insights we should be putting into the development
of AI systems, for them to be human-aware, human-considerate, to reason about new capabilities going beyond accuracy, that can sustain that partnership between the human and the machine. So we did a little bit
of work on this with Gagan and we are continuing
our collaboration right now, where we looked into the role of machine learning updates in
the human-AI collaboration. So the expectation is that
I have the blue agent, the human learned about the blue agent. The green agent is a better agent. I move to the green agent, and together we get better. If things are not compatible, if the updated agent breaks
the mental model of the human, this is the situation we get, and we actually verified this
situation with human experiments. What we can do is actually add a term for dissonance into the objective function of the machine learning model that penalizes these new errors that break the mental model of the human, and that can actually get machine learning models to be compatible with human expectations.
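A minimal NumPy sketch of the two quantities being discussed, with made-up predictions and a placeholder lambda weight; the real work uses trained models, but the bookkeeping is the same:

```python
# Dissonance: an extra penalty on "new" errors -- cases the old model got right but the
# update now gets wrong. Compatibility: the fraction of the old model's correct
# predictions that the update still gets right, i.e. how much learned trust survives.
import numpy as np

def dissonance_loss(y_true, old_pred, new_pred, base_loss, lam=1.0):
    """Base loss plus an extra penalty on errors the old model did not make."""
    new_errors = (old_pred == y_true) & (new_pred != y_true)   # trust-breaking mistakes
    return base_loss + lam * new_errors.mean()

def compatibility(y_true, old_pred, new_pred):
    """Share of the old model's correct predictions the new model still gets right."""
    old_correct = old_pred == y_true
    return ((new_pred == y_true) & old_correct).sum() / max(old_correct.sum(), 1)

# toy example
y_true   = np.array([1, 0, 1, 1, 0, 1])
old_pred = np.array([1, 0, 1, 0, 0, 1])   # the model the human has learned to trust
new_pred = np.array([1, 0, 0, 1, 0, 1])   # a more accurate update that breaks one old case

print(compatibility(y_true, old_pred, new_pred))            # 0.8: one trusted case now fails
print(dissonance_loss(y_true, old_pred, new_pred, 0.2))     # base loss plus the new-error penalty
```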
So this is just one way we can be more considerate, be more human-aware in the development of AI systems. This is important
because unless we can get perfect AI systems in
safety critical situations, we have to rely on human trust. They are our key for reliability. That is why we have
to really think about the human side in
the development of AI systems. That requires think about
the humans from design, to the development of
the objective function, thinking about
the improvement loop we develop in our engineering practices.>>Can you back up and explain
that graph? What are the axes?>>So what we are doing is that the original objective function for machine learning models only cares about the accuracy on a dataset. What we are putting in here is a term for dissonance that is actually
need to explain this first.>>Okay.>>Okay? So what this is saying
is that whenever you are making a mistake on something you were getting right before, that humans would trust you with, I’m giving you an additional penalty term. So that’s what the objective function does. This compatibility score is actually watching that. It’s watching how many of the things it was getting right before that it is getting right now. So how much of the trust is kept or broken? This is like a percentage. The y-axis is the accuracy
performance of the AI algorithm. The AI algorithm is going to be most accurate if it has no
compatibility to the past. It can just optimize
the hell out of the dataset. So that’s why those points are at the top. So the three lines are
different optimization functions, different dissonance terms we study in our paper, but focus on the blue one because that seems to be doing the best. What you see here is that you get the most accuracy when your compatibility is low, but you don’t have to sacrifice actually a lot of accuracy to get to a higher level of compatibility. You can continue along that curve quite a bit. But if you really want to get
to a lot of compatibility, you have to start
sacrificing accuracy. So our idea is that we
want to give these kinds of graphs to AI developers first, when they are going to
be updating their system. They can look at these graphs
and actually they can say, “This is the point
I want to be at. I have to be at this accuracy. That means I will have this much
compatibility with my past models and maybe I’m going
to have a strategy to communicate what has
changed to my human user.” We can talk more about
this at the break or something. So I’ll stop there.>>Okay, great. Thank you very much.>>Thank you.>>Thank you. All right. So we have some time for
some comments, discussions, questions, anything? Okay, John. Well, hold up a minute.>>Really great talk. You mentioned dissonance and
the degree to which it’s compatible. How do you quantify it? What constitutes a mental
model breaking change versus something that’s just
different that might not be noticed or might
not matter for the person?>>That’s a great question. We are taking a simplification
approach there. What we are doing in this
paper is actually saying, anytime you were
getting right before, I assume that the human has
learned to trust you with those. If you start making mistakes
on those instances, it’s going to be really problematic
because I had trusted you, now the machine is
making a mistake and I’m not going to be really
aware of that mistake. I’m not going to be
able to correct you. However, in many settings
and practice, people have personalized experiences. So they develop trust
in different ways. Their experiences dictate where
trusting more or less. So actually, the better way
of doing it in the future, which we don’t know how to do
yet because we usually don’t have computational models of
trust or mental models yet, would be really thinking about
the personal experiences of people, try to model what that
trust looks like and put that as a component into
backwards compatibility.>>Any other comments? I also want to again remind people
to go ahead and put up your Post-its and then Andrew will walk
around and take a look at them. So if you have Post-its and you
haven’t put it up yet please do. I guess no more questions. All right. Let’s give
AJ one more hand. Our next speaker now is Shamcey.>>Give me a few seconds
as I get settled.>>Well, you know what? Let’s take
this time to write some notes, like write some research
questions or topics.>>Use these micro moments
to get things done.>>Yeah. Go for it. Yeah.>>I know.>>Multitasking.>>Micro-tasking.>>Micro-tasking, yeah. Okay. All right. I think Shamcey is ready now.>>I’ll give people a couple of seconds to finish what
they’re doing so that we can->>Finish up your notes and
then we can let her to start.>>- start at a break-point.>>Yes. Okay. Ready? Let’s go.>>Okay. So I know that
Linda started her stopwatch.>>Added the time.>>Awesome, I already
lost two seconds. So we’re going to move from
automation and trust back to work. We already had some very
fruitful discussions during the first session
in the morning. I want to continue on that and talk about a couple
of projects that we have done about work in the car or in the context
of when you’re commuting. Oh, yeah, it’s nice to have this now. So again, getting things done is no longer confined to the desktop
anymore because we’re commuting, we have mobile devices that
have increasing capabilities. We can basically carry
work with us everywhere. It’s not necessarily a good thing, it’s just that we are starting
to have that capability more. The key point that also came
out today is that we are spending a substantial amount
of time commuting. Think about the cities of today: people are being pushed more and more out into the suburbs, which means that they are commuting more. Some people have remote working
use of that time. That’s the core of
this entire workshop. So we know that driving is no longer a single attention task
and I just wanted to point out like four scenarios which I had also briefly
brought up this morning. So I’m not going to
dwell on this. Primarily manual cars: continuous attention is on driving, and we could have opportunistic interleaving of other tasks, as we talked about this morning. Connected car, where you just have a broader range of multitasking capabilities
because now you can talk to the Internet and the Cloud. Semi autonomous and autonomous, so I know that someone
had talked about it being more about
self-driving cars versus non, but there’s this weird spot where self-driving cars are
not always self-driving. So that’s where driving becomes secondary and you have to interleave things like paying attention to the road so that you’re ready for takeover. Then finally, autonomous,
potentially full use of the commute time but there are other design considerations
around that. So this is this limited environment, it’s moving and all of that. So we talked about
this in the morning. We all know that engaging in something is
attentionally challenging. How do we design tasks so that it can deal with that limited
attention scenario? So there is a flip side. We know that we do a lot
of mind-wandering when you are driving because typically we
don’t have anything else to do. So driving sometimes
become so automated that you can allow yourself to
think about different things. But it also means that
mind-wandering can negatively impact
your focus on the road and it’s like how you’re able to control your conscious
thoughts that gets lost. So there has been
research that shows that strategically placed concordant tasks during your driving can
actually help people be more vigilant and focused
on the road better. So there’s some opportunity there. We listen to music, we try to talk to other people so that we are alert and
not falling asleep. So there are also, and I had hinted at this in the talk this morning, moments during driving where we feel that we might be able to handle things better, and other times during driving when we know, or at least we should know, that we can’t handle other tasks. Then thinking about new experiences for semi-autonomous and autonomous vehicles: what are some of the things that we can do in the car? So I like to think of this
as these four aspects. So one is that what are some of the non-driving tasks that
we can safely do in the car? Now, if I tell you that, yes, go ahead and write your CHI paper in a car which is not self-driving, that is probably not the right thing
to do but we are already doing some of the things in
limited attention environments. We have now assistants that are
making their ways into the car, that are potentially going to help
us with some of these things. Just because an
assistant can help you doesn’t necessarily
mean that the tasks are designed in a way that it should
be presented to you in the car. So I think that there’s
a design opportunity there, is that your assistant that helps
you at home like Alexa or Siri, the interaction there is very different than what the
interaction in the car should be. So that is also
another design opportunity. We talked a lot about
micro-task this morning, about thinking about tasks that do not require sustained
attention and how can we break it down to a level where it makes sense
for the user to do that test but also take away
that inter-dependency that, well, the next task that
I’m going to do needs this task to be completed before
I can move on to that one. Then finally, and this
is also an area that is ripe for research is
that were not only designing for drivers in the car, we’re designing for
sometimes the passengers. My Highlander does not let
the passenger actually put in or interact with the GPS system
when the car is driving. So I’m not driving so I should be able to do that but
it doesn’t let me. But also thinking about a driver in a self-driving car is like
a passenger and let’s assume that’s fully
self-driving but again, it’s a moving environment,
how do we design? Especially keeping in mind
things like motion sickness, keeping in mind the resource
constraints and all of those things. So I want to talk about two projects. The first one was not necessarily done with the current mind
though it was motivated. The second one was, we really wanted to push
the boundaries of thinking beyond just the
communication level tasks. So this is interesting
because I know this morning, some people talked about, “I want to not necessarily
carry on work all the time. I need to be able to detach from
work at the end of the day.” Particularly, people
look at cars being the place where people start
disengaging from work, and they start ramping up to home. So that was my motivation
about thinking about “Okay, so how do we use this time as people are transitioning from home
to work and work to home, to allow people to disengage and
reattach to work after a while?” There is evidence in occupational health therapy and organizational behavior
which shows that adequately being able to detach
at the end of the day, actually helps your productivity
in the long term. Reattachment when you
come into work, probably, everyone starts off with
checking their e-mail, getting caught up on the things that they want to do, maybe making their to-do list. That’s where you are spending maybe half an hour trying
to get ready for work. So we were thinking that
“Can we move some of those actions out from the desktop into when you
are coming into work?” So we developed
a conversational agent that asks only a couple of questions at the end of
the day to help people disengaged and then it brings back that information the next day as they
were walking into work. So to get them in that frame of mind. This agent was not used in the car, so I will make that claim right away. Part of the reason is that we didn’t want to do
that study in the car. We wanted to see how effective
these questions were. So it was set up as people could use Skype client to actually
interact with the agent. But it could definitely be used in the car and I’ll
show a video in a bit. These questions are super simple. What did you do today, what do you want to do tomorrow, and how do you feel about work today, and how do you want to
feel about work tomorrow? The next day, it just brings
back that information and asks you what is the first thing you want to do to be able to do that. So I am going to quickly switch
to a concept video which is, let me see, it’s not this one. Yeah, it is. I did open it, but I’ve lost my mouse. I don’t know what happened; it just.>>[inaudible].>>No, that’s fine. I will start from here. I thought I could open two instances. Okay.>>Hey Cortana.>>Hi Alex. I know you
had a big day today. How did it turn out?>>It turned out pretty stressful. I had a few things to do today. I made edits to my paper. I finalized the study design
for my study, and started the code base for the new project that
we’re working on.>>I might be able to help
with some of those things. If you had to pick one or
two things that you want to do first, what would they be?>>I would like to get the bugs
addressed by noon tomorrow. I also need to send Kelly an e-mail
with the latest code changes.>>Cool. I set a reminder to send the e-mail to Kelly
for tomorrow morning, and it looks like you
have an hour free between Nine O’clock and 10 O’clock. Do you want me to block that out
so you can work on the bugs?>>Sure. That’d be great.>>Done and done. We can revisit the
other things tomorrow.>>So again, this is a concept video. This does not exist. Cortana is not at this level, but this is to showcase what
we’re envisioning with this. This is Alex, the intern
who worked on this project. So we thought it would
be nice to showcase it.>>Good morning, Cortana.>>Good morning, Alex. Did you sleep well?>>Not too bad. Got a busy day ahead of me, and I’m
ready to get started.>>Let’s do it. First off, remember to e-mail Kelly
when you get inside.>>Okay, I’m done.
Thanks for the reminder.>>Anytime. Now’s the time you set
aside for those bugs. If you want, I can hold any e-mails that aren’t urgent
until after 10 O’clock.>>Would you? Thanks.>>Okay. So that showcased what we’re looking at in
terms of this experience. So you might notice that it
was totally speech-based. It was nice short interactions and it sounded like a conversation
that you would have in a car. Now, we then wanted to take this
a bit further and see that well, can we push the boundaries of
things that we can do in the car? I see Doug looking at me. So again, this is an experiment. This was particularly
motivated by the fact that we can take these bigger tasks and break them down into microtasks. So are there parts of a bigger task around PowerPoint or around document editing that we could actually present in the car in
intend to do and no. So when you think about
non driving tasks for the car, so again, this is a speech-based interaction
that we are looking at. Conversational agents,
they’re getting better, but you could also embed awareness
about safety because a lot of information that these agents
could use is coming from the car. So what is the car speed, what is the environment around it. We could use sensors
to be able to figure out what the cognitive load
of the user is. So we could use all of that
information to filter the type of task that we even
allow the user to get engaged in. So again, that’s where
microtasks come in and we have discussed what
we think about microtasks. So they’re not necessarily
at the very level, where it doesn’t make sense anymore, but things that people can
quickly do without having to depend on the task
before or the task after. Again, alerts were the other options that we are thinking in
terms of the safety. So I’m going to show another
concept video because I think that that showcases what
we were looking at. Let’s see. I don’t think we
need to have the whole thing.>>Nick is working on a presentation. He’s about to deliver in a meeting. Imagine, Cortana keeping
track of where he is in the presentation and helping him with the final touches as he
drives to his meeting, while also making sure
he drives safely.>>Hey, Cortana. Where were we?>>You were working on
the motivation slide.>>Perfect. Let me get through
this intersection first. [MUSIC]>>Okay. I’m ready.>>The title screen says motivation. Do you want to add text or
graphics on this slide?>>Yeah. Let’s put a picture
of an autonomous car on it.>>I found a picture and added it. There is a bicyclist to your right.>>Got it.>>Do you want any other
text on this slide?>>No, I’ll speak over it and make sure that the picture
is really large.>>Okay, that’s done. Check this slide before
your presentation. [MUSIC]>>In case you didn’t see, a picture of an autonomous car
was automatically inserted there. So again, we’re nowhere near this, but in the future, we could imagine our assistants being smart enough that they can do these things for us. But one of the other things that I wanted to point out is that if you notice, that
interaction is very fluid, it is allowing the driver to pause. It’s completely speech-based. Of course, there are
questions around, “Okay, so I am now working
on my presentation or even thinking about
my presentation and so now, I’m visually starting
to think about it, and how is that going to
conflict with my driving.” So of course, there
are those kinds of scenarios that we
have to think about. That’s why designing work for
the car is so interesting is that how do we suggest
these microtasks, and how do we get a measure of what the cognitive load
is going to be, and how is it going to
interfere with current work. So the research questions here, and I’m going to
quickly go over this, is that we wanted to see how
the secondary task structure or how the microtask is being
presented by the agent, how does that impact people’s performance, and the context support. So the support about the road, how does that influence the driver’s safety needs as well as their need to be productive?>>Okay.>>Okay. So very quickly, I’ll probably just go to here. Drivers seemed to be split on the task structure question. So some people liked the agent to be very directive and they would
answer the questions like, “Okay, does this slide have
a picture, yes or no?” So other people hated it and
they would rather prefer to just dictate something and have the agent go and make it work. The good thing is that the drivers did not think that they could create polished documents, nor did
they think that they should, but they felt that whatever
they were dumping in terms of thoughts would be
useful to carry on later. Many drivers said that
even though they might not do something that is coming from the office
productivity suite, they thought that
just thought-capture or creating to-dos would be
a good thing to do in the car. I’ll skip over this; the kind of implication for design is that support for safety is
important in these environments. So it’s not your regular desktop
environment, nor is it your mobile phone environment when you’re on the go. Tasks should interleave with driving, and so the tasks should be
designed in a way that they can be easily
not only engaged in, but you can also disengage
from them very easily. There are only some tasks that are going to be driving-friendly, and trying to put everything into the car, that’s not going to work,
and on that happy note, I am going to open
this up for questions.>>Great. Thank you, that was great. Does anybody have questions, discussions and don’t
forget to write. So quite a few, wow. So yeah let’s do that.
Let’s start with Duncan.>>Great. The PowerPoint thing
while driving, it’s a nice idea. The kind of reaction I had in the middle of going through the scenario you presented: there was a person driving to the meeting, and to me that’s like standing up and going for a walk, getting some distance from the work, and it’s interesting
that you chose to focus in on more edits being done. It was about adding content to the slides or that stuff that you would traditionally
do at the desk, I was having my
personal reflections on how that goes down usually; it’s when the talk is rehearsed in your head and you’re reconstructing that high level, and I’m getting nods, you’re going to construct that high-level narrative, and so it’s interesting that the interactions are all about put this figure in, add some text here, whereas you could imagine an
alternative here where you’re talking through conceptually what
you’re going to be talking about the structure of it and
things are being rearranged. The slides themselves
are not being edited but they’re being
rearranged conceptually.>>Right.>>So the the overall structure
of the talk is being [inaudible] so there’s
a different level of abstraction.>>It’s definitely that
and I think it boils down to where we felt at that point, would people have the more
cognitive load and from my personal experience
and I have used the card to actually rehearse stuff
and those are also the moments when I
have felt that I have had no awareness of my driving. I managed to reach from point A to point B. I have no recollection of how I got there, which is scary. But I think it’s
a good point thinking about what are the things that would not require me to have continuous attention on
a task that is not driving. If you go back to
the self-driving cars, I think that, that’s
actually a perfect scenario. It doesn’t require you to
visually get engaged in anything so there’s no onset
of motion sickness, but it’s a perfect thing to be able to do in those kinds
of scenarios, but I think it’s also
person dependent.>>Andrew.>>So this is really cool and I really like the fact that
you have an assistant who’s thinking about helping and
I wanted to point out that data, both in experiments as well
as looking at data about crashes shows that having a passenger decreases your chances
of getting into a crash. So is it because of what kind
of interactions going on? Of course you don’t quite know, but the fact is that having a passenger probably means
talking to the passenger, right? So one question would
be how would you do this task if it wasn’t
Cortana but it was, you and I are sitting there, you’re driving and you’re telling me what to do and it intuitively feels
like we’re pretty safe. So I’m really always curious
about how we might be able to learn from those human-human
interactions and improve. So I don’t know what
your thoughts are about.>>I think one of the
things that also came from the previous talks is
trust in the system. So right now I wouldn’t trust
Cortana or Siri or anyone, even I may or may not trust
a passenger in the car. So it’s a matter of, when you
are thinking about manual cars, you always need to be
vigilant yourself, but also learning from these interactions between
passengers and drivers. So passengers would
point you to things, I would say an alert passenger would point you to things in
the road, maybe say, “keep an eye on that” or maybe talk or maybe reduce the amount of cognitive load
that the conversation is having. So those kinds of things.
I think there are definitely things to
learn about there, but there’s also
some good points about systems, which is that they can take some measurements that maybe a human can’t. So if you’re looking at wearables, you’re looking at all the car sensors that could be put in a car, and awareness that there’s traffic coming up or the road has suddenly changed in terms of the density of cars. So that information, maybe a passenger may or may not
be able to pick up on, but a system might be.>>[inaudible]>>So one of the questions for
me is sort of what does it do to our workload as a whole? So I have done some of those
scenarios with a human assistant, on the motorway, going through my e-mail quite rapidly, and I find this, even with a human assistant who knows very well what I do, extremely tiring. So I usually arrive and I feel
I have done a lot of work. So I do this very often on
longer motorway drives, like one hour, and I arrive at work and I feel I have
worked for a number of hours. So for me the question is, is putting this into
that context of driving, has anybody looked at what it
does to our perceived workload? So I think these things, if you study them in the lab,
that’s a different thing. So it’s very easy to
study them [inaudible] , and my feel is, you feel at the beginning this is really
working extremely well, but once you arrive, in contrast when I have listened to music or to a podcast, whatever, I feel quite relaxed
when I arrive and so technically there is not
really a big difference, but I find it quite
taxing doing that. Did anybody study that?>>So we have a project where we’re hoping to get some answers.>>Yeah.>>Yes, so me, Andrew, [inaudible] and John, we actually have a National
Science Foundation project trying to look at, trying to understand how
much of a workload it is to actually do the work.>>[inaudible].>>But it’s the task.>>The residue.>>[inaudible]>>Yeah. Right, that’s a good point.>>So Linda, can you
repeat what he just said?>>What [inaudible]
was saying was that it’s not just about the moment
to moment interactions, but it’s more of the big picture
and then long-term what is the effect over all. Is that right?>>So I want to add one point
to that and so I mean, in case there is a misconception, so I am not proposing that we
do more work in the car. If you look at the first one,
people disengage from work so that on the way home you’re not really thinking
about work anymore. You can start ramping
up to getting to home. The other thing is that
sometimes there are work thoughts that would be there automatically and so
what we’re looking at, is that are there effective ways of getting those thoughts captured so that you can actually you are able to relax and you are
able to listen to music, rather than thinking about all, I have this meeting with Eric
and I have to think about all the points that I need to
make during those meetings. So that’s at least
one of my motivations to think about how can we use
the time in the car effectively, but definitely thinking about
the well-being of the person. That’s another key part of
the research that I do.>>So I think once we’ve created it, our employers will want us
to use it, and coming back to one of the talks yesterday on ethics, once we can work in the car, we're expected to work in the car. We had [inaudible].>>So just to repeat basically
what he said was that, once we have this feature
and we’re able to use it, our employees might
demand us to do it and is that a good thing or bad
thing from an ethical perspective. We’re just going to
take one more question, I’m sorry. Just go ahead.>>Okay, it’s Flora
from Melbourne RMIT. So thank you [inaudible], I wish my driving scenario looked like that. It's very relaxing.>>I think it's fine.>>But the thing is, I think
what is missing really is the lot of tasks happening especially in the first and last mile of the drive. So I'll give you a couple of scenarios. For example, I know the traffic's building up and I'm going to be late. I have to find a car park where I usually get my train to work, and I know if I'm not catching that train in five minutes, I have to write that e-mail to my 10 o'clock meeting, "Hey, I'm going to be late," and there'll be repercussions along the way, and I have to say to my [inaudible], "Hey, I will have to cancel my meeting with you because my 10 o'clock meeting is delayed," blah, blah, blah, a lot of things happening in my mind. So these are the micro tasks, and I personally think we're still far away from the scenario of drafting or documenting, but I think these are the low-hanging fruit that we should be tackling. So my question to you is, have you explored the taxonomies and categories of these micro tasks that people may be thinking about while they're driving? We've explored something like this in our other projects. Maybe it's a navigational task about where to find the closest car park. Is it a finding task? Is it searching for information? So these little micro tasks.>>So this morning we did have one group talk about what
those people want to do. No, that's fine. In terms of what people want to do in the car, I believe that some of these communication needs came up, and the scenario that you described, I think, is a perfect example of that. Yes, I do need to send that e-mail because I am worried that I'm going to be late and I need to get this information out somehow, and I suffer from that repeatedly. So when we think of productivity tasks, it's not only creating content or managing content, it's also about these communication needs and how we can design them to happen in a safe manner.>>One thing, I don't know
if it’s also discussed here, because I do work with a
lot of road safety experts. One way to actually measure how much you’re paying
attention to the road, the driving task is actually
even the way you drive. So for example, if you start
swerving around a lot and I guess that’s when your intelligence should ask you and
prompt you a question, “Are you okay and what are the things that are bothering
you that we can help?”>>Yeah.>>Absolutely. I will
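As a purely illustrative aside, a minimal sketch of that idea, detecting possible lapses from lane-keeping variability and triggering a check-in prompt, could look like the following. The sampling rate, window length, threshold, and function names are assumptions made up for illustration, not anything specified in this discussion.

```python
# Minimal sketch (illustrative assumptions only): flag possible attention
# lapses from lane-position variability and ask the driver a check-in question.
from collections import deque
import statistics

SAMPLE_HZ = 10                # assumed lane-position sampling rate
WINDOW_SECONDS = 10           # assumed rolling window for "swerving"
SWERVE_THRESHOLD_M = 0.35     # assumed std. dev. of lane offset that counts as swerving

window = deque(maxlen=SAMPLE_HZ * WINDOW_SECONDS)

def prompt_driver() -> None:
    # Placeholder: a real system would route this through the car's voice UI.
    print("Are you okay? Is there anything bothering you that I can help with?")

def on_lane_offset_sample(offset_m: float) -> None:
    """Called for each lane-offset sample (meters from lane center)."""
    window.append(offset_m)
    if len(window) == window.maxlen and statistics.pstdev(window) > SWERVE_THRESHOLD_M:
        prompt_driver()
```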
>>Yeah.>>Absolutely. I will switch over to Linda.>>I hope it's okay. Let's just
say thank you to [inaudible]. I just want to make
sure that you guys get coffee so that we can go ahead and do the break. These are really great comments, and for those of you who I didn't get to, please go ahead and write your comments on a piece of paper so that we can tack them up, and then we'll try to group people together to discuss. So I'm going to go ahead and do my last presentation. All of you can get even with me now and time me to find out whether I'm going to be able to keep within 10 minutes. I think I can, though.>>We have an hour now.>>Oh, yeah. That's true, right? No. Oh, God. Why didn't I think of that?
No, I’m just kidding. So I’m just going to wrap up by
talking about everything that we discussed with regard to trust when using our vehicles, and then what happens over time when we use automation. So I'm going to talk about adapting to technology in our cars. I've spent a lot of time looking at behavioral adaptation, which is, basically, what happens to the operator with extended use of a system and how their behavior may change based on that use. Oftentimes, it changes in ways that were unintended by the person who actually designed the system. This change can be based on many things in addition to the situation and the context: it's often based on how much experience we have, how familiar we are with the situation, as well as what our motivation is for driving to begin with. But what's very interesting is that, as technology has evolved, technology is right now adapting to the limitations of the human. But we're also adapting to the limitations of technology. So we're adapting to new technology, while technology is adapting to us. So from that, we see these different types
of implications, and there’s actually many types. But for this particular workshop, I just wanted to focus on
the perception of safe driving. What the driver’s think is actually safe when over time they’re able
to do more and more things, and they were able to do it without actually getting into
a safety critical incidents. So then therefore, the amount
of non-driving activities while driving then starts to seem
to seemingly increase. We actually saw this. We did a study where
we looked at people texting and reading
while they were driving, and you can see we did this just over three time periods
and over time. So we separated out these individuals based on the driver
performance measures. Based on risk and
more conservative driving. Based on how close they are, how was their speed. You’ll see that over time, there are people that are risky, actually we’re more
willing to do more texting and spend more time looking at their devices compared
to other things. So when we look at automation, it’s really nice because it can extend the capabilities
of the humans, but it can also make the humans more complacent during safety-critical situations. So they're more vulnerable to system failures, to unexpected events, as well as to cyber attacks. They may not understand as well what's going on. The operators in this case, based on all the previous talks, can actually experience degraded situation awareness of what's going on. They may experience deskilling, as well as mode confusion as to which system is actually on or off. Then, of course, there's overreliance on and overtrust in the system. So I've done this work where
I’ve looked at how it would actually adaptation impact
our ability to use things. So if we expect that
when we get a system, our goal is that that system
should actually help increase or enhance
our performance over time. Immediately, when we get it, we use in such way that, yeah, this is really cool. But then over time, we plateau off, and then we can’t actually
operate at some optimum, so we just start to
level off a little bit. We’re hoping that if we get
from our car to, let’s say, a rental car and another car
that doesn’t have that system, that we learn something from that system because
we’ve been using it. We understand when something
is going to occur. So we have this positive
transfer of behavior. But oftentimes, what really
happens is that we just go back to our normal driving mode
that what we had before. The worst-case
scenarios because we’ve had such high dependence
on the system, we actually have
a negative transfer of behavior. Then that’s a situation where
something actually bad can occur. We actually did a study
to look at this. We’ve actually done several studies. So I’m just going to report on one, where we looked at driver’s behavior
to lane keeping assist. So I think all of you
know what this is, but basically, keeps the lane. We actually wanted to see how
people would adapt over time. We did use a driving simulator study, but we did this basically over three days for
eight different drives. So we had them do
three different drives. We have 48 participants. A little bit over half used
a lane keeping assist, and then we had another group
that basically had no lane keeping assist so that
we can just do a comparison. We have them basically have a baseline drive with
no secondary task, no driver distractions, no anything. Then we had for those in
the automation group, we also had them just
do the secondary task, and then to try to compare. We looked at basically the
driving performance measures, as well as their
secondary task performance. We also collected a measure called the TDRT, which stands for the Tactile Detection Response Task. This is based on an ISO standard for assessing the attentional effects of cognitive load. So it's as if you're driving and doing a distracting task, and we ask how well you are able to detect pedestrians or other things on the road that might actually have an impact. The way this works is you attach a small vibrating device to the person, oftentimes on their collarbone. There's a microswitch, and you're supposed to press it whenever you feel a distinct buzz. So we actually measure cognitive workload based on the accuracy with which they perform this particular task, as well as how quickly they can respond to it. Then we also looked at their propensity to take risk: how often they are willing to engage in a secondary task while they're driving.
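As a rough, hedged sketch of how such a measure can be scored (this is not the study's analysis code), a DRT log can be reduced to the two numbers just described, hit rate and mean response time. The response window and the input format below are assumptions.

```python
# Minimal sketch (assumed data format): score a Detection Response Task log
# into hit rate (accuracy) and mean response time.
from statistics import mean

RESPONSE_WINDOW_S = 2.5  # assumed: a press later than this counts as a miss

def score_drt(stimulus_times, press_times):
    """stimulus_times: seconds at which the tactor buzzed; press_times: seconds at which the button was pressed."""
    rts = []
    for t in stimulus_times:
        # earliest press that falls inside the response window for this buzz
        candidates = [p for p in press_times if t < p <= t + RESPONSE_WINDOW_S]
        if candidates:
            rts.append(min(candidates) - t)
    hit_rate = len(rts) / len(stimulus_times) if stimulus_times else float("nan")
    mean_rt = mean(rts) if rts else float("nan")
    return hit_rate, mean_rt

# Example: three buzzes, the second one missed.
print(score_drt([10.0, 25.0, 40.0], [10.6, 41.1]))  # -> (0.666..., ~0.85)
```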
So what we see is this. The blue line here is the control group: these are people who have no lane keeping assist. They just drive like
what they normally will. So you can see that over time, as you would expect, they become more and more used to the system and they start missing fewer and fewer targets. For the treatment group, which had the lane keeping assist, we actually see that same thing: over time, their misses would decrease. But what's very interesting is that once the system is taken away from them, and we tell them that the system is taken away, they actually miss more, because they got used to the system basically operating for them. So even though their misses had gone down, the number of misses actually goes up, because now they have to actually take back control. We noticed this even for
secondary task performance, where if you look at the control group, both groups increase in terms of doing these types of secondary tasks because they get more engaged. But for the people that have the lane keeping assist, what happens is they actually engage in more secondary tasks. Then when the system is taken away, they actually decrease, because they're trying to still maintain control.
We also have measures on driver performance, and if you look at the paper, we have some more
information on that. But I guess, just to sum up, since I kept it really short: what I want to do is just get us to think about, when we model driver behavior, what are some of the things that we have to think about. For me, there are
five things. Number 1, we have to understand what the system is and what the system's limitations are. Then the user, and we talked a lot about that: there are not just age and gender differences, but cultural differences as well, as well as geographical differences. Then the context in which we're actually using the system. Then the tools that we are using to collect the data, and we talked about driving simulation studies, but also on-road studies, so there's actually a whole space of different types of studies. We have colleagues who actually do things like ghost-driver studies, where they hide somebody who is actually driving so that it appears to be an autonomous car. We have dyadic interactions where somebody's in a driving simulator in one place and somebody's in a pedestrian simulator in another, and they're trying to interact with each other, so there's this vibrant space of ways to look at it. Then the types of data that we get from it are also incredibly important, and how we actually
bring all that together. With that, I actually end the afternoon presentations
and I open up for questions. Good timing. Thank you. Thank you. Clapping for the timing, or for the talk? So I do want to open up for questions. Just, yeah. Shamcey.>>I will start. So you brought up this deskilling, which
is super interesting. Let’s assume
semi-autonomous vehicles, where people are driving less. Then when there's a takeover, or a handover, however you want to call it, you are requiring drivers to deal with situations that they're poorly prepared for, even less prepared for than before.>>That is correct. Yeah.>>So my question here is, can we design experiences, or require people to still have
to drive for X amount of time or design tasks in
a way that maybe causes them to drive, to make sure that they are keeping up their skills?>>That's a really
excellent question. So earlier today, somebody
talked about training, because training is going to
become a really big issue, in terms of what we are actually training people on, even just at a younger age, and then how much regular training we have. With pilots and commercial drivers, there is a set number of hours that they have to actually be trained for. So those are things that people are looking at. Another thing people are looking at is, given the fact that we're going to be using cars differently, does that mean that every time something is not working the way that we expect, we're just going to go out and buy a new car? We're getting closer and closer to that. I'll give you an example: right now, we all use washing machines. But there was a time when people just washed clothes by hand. Now, if my washing machine is broken, do we go back and wash our clothes by hand, or do we go out and buy a new washing machine? Seriously. So we very rarely wash our clothes by hand anymore. So just something to think about
as we’re moving forward. Yes.>>I’m just wondering if you
have also had a look at some of the naturalistic driving study data that has been collected by Virginia Tech as well as in Australia. There are about 300 to 400 drivers across more than four months. They actually had one where they installed speed warning, with data from before they had the speed warning device and after, so you can look at the treatment effect as well.>>So I've actually looked at lots
of naturalistic driving data. I didn’t show it here just for time. But I’ve actually looked
at the data from SHRP 2, from the 100-Car Study, and also from the University of Michigan Transportation Research Institute, which actually collects a lot of field operational tests. Their field operational tests include things like adaptive cruise control and lane departure warning, and they have before and after data. Eric mentioned earlier today that we met at a National Academies meeting, and at that National Academies meeting I actually talked about the naturalistic data looking at adaptive cruise control. We actually see information about how people adapt to that system over time, and we see differences in terms of people and
their propensity to use things.>>Just one thing to add as well. In the Australian context, we have the P-platers, for example, the 40 young drivers. The impact on the different age groups is quite different.>>Absolutely. Yes.>>Because when they actually
have the speed warning, they try to beat the system. So it’s completely different.>>I 100 percent agree. I’m just generalizing right now. But we’ve actually done-
actually, John Lee and I, we've actually done a lot of studies looking at teen drivers and young drivers. Oftentimes, we will see differences in how feedback works, like he talked about coaching earlier. We actually see that there are differences in how younger drivers retain feedback, and it varies based on the types of risks that they take and whether or not they're risk-aware or risk-seeking. So yes, I agree with you. Any other questions? Yeah, Albrecht. By the way, just how they adapt to things over time, too, is very different.>>I think the question
of gamification; we see this in many other areas. Coming to the deskilling, I think this is one of those things. We discussed whether gamification is one thing that could go in there. But at the same time, whenever you require people to do more, you lose time in which you could have driven automated. So the other question that comes in is, do we want to have the person in the car taking over? Or do you want to have a remote driver taking over? So I think that's something. Again, in Europe at the moment, with 5G coming and the high density of cities, we see the technical possibility that when your car is coming into a difficult situation, it is automatically just handed off to a professional driver who sits somewhere. My feeling, from what I have seen so far, is that this is a much more realistic hand-off than to the driver who has been doing e-mail. Is there something which goes in that direction?>>Right now, the research
that I’ve been doing is not really at that level of automation. I’m really focused more on
the driver being within the car. But there are other people that are doing research, and I don't know if anybody else can talk about that, where, basically, the car drives you, drops you off somewhere, and then actually goes back to your garage and waits for you until your work is done. I don't know if somebody else
has done some research on that. My research area has
not focused on that. Yeah. Jesse, did you want to comment?>>So we did something [inaudible] operation not in
the takeover scenario. There’s a remote driver
driving [inaudible] ground vehicle in
a military scenario. One of the biggest problems we have is delay. Because, usually, in some situations, the Wi-Fi is not good, there are huge delays between when you send the signal and when it reaches the vehicle. That delay makes the [inaudible] operator's task very, very demanding.>>I don't think that's-
well, at the moment there are new standards for 5G coming. There is some stuff in the new standards for 5G, which I think are now being rolled out over the next year, that has one operation mode where they say it's 50 milliseconds or less, to get around that.
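Purely as an illustrative sketch of the constraint being discussed here, a remote-driving control loop can be checked against a latency budget such as the roughly 50 millisecond figure mentioned for the newer 5G operation modes. The transport call below is a placeholder, not a real vehicle link.

```python
# Minimal sketch (placeholder transport): warn when a teleoperation command's
# round trip exceeds an assumed end-to-end latency budget.
import time

LATENCY_BUDGET_S = 0.050  # assumed budget, based on the ~50 ms figure mentioned above

def send_and_wait_for_ack(command: str) -> None:
    """Placeholder for sending a control command and waiting for the vehicle's acknowledgement."""
    time.sleep(0.012)  # stand-in for network plus vehicle processing delay

def command_within_budget(command: str) -> bool:
    start = time.perf_counter()
    send_and_wait_for_ack(command)
    round_trip = time.perf_counter() - start
    if round_trip > LATENCY_BUDGET_S:
        print(f"WARNING: {round_trip * 1000:.1f} ms round trip exceeds budget; remote control degraded")
        return False
    return True

print(command_within_budget("steer:+2deg"))
```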
But I think it's a question of whether we have coverage, or if the car can drive automatically [inaudible]>>I think the goal for me
was trying to understand, as we keep increasing automation, how do people adapt to the systems? What are the things
that they would do differently that they
would not do now? How does that impact us just
overall in terms of use? With that, I’m going to stop because I want to make
sure that you guys have coffee. I also want to make
sure that we have time to go through this stuff. So just like before, before you go for coffee, please write down one more thing, put it up on the whiteboard, and then we’ll keep going.
