NITRD Cybersecurity and CPS Panel, 2014 IEEE Symposium on Security and Privacy

OK. Ladies and gentlemen. We'll move on to the last session of this conference. You can have a seat. We'll get started. So this is a NITRD panel on cyber-physical systems and security. My name is Tomas Vagoun. I'm the Cyber Security R&D Coordinator at the National Coordination Office for Networking and IT R&D, a federal program that coordinates research investments and strategies in IT, including cyber security, across the federal government. So in the past, NITRD has participated
in this conference with panels and other events. And we appreciate being able to be here at
this conference and bring up a topic that’s of increasing importance within the federal
government research community. So I’ll start with the obvious, which is cyber-physical
systems are a big deal. And that’s why we’re here. We have a panel of experts assembled
that will give you perspectives on what are some of the latest activities within the federal
government with respect to research in CPS. We'll hear from NSF, from DHS, and from NIST
representatives, and also get other perspectives from industry, as well as from academia
in terms of objectives and priorities for research within this domain. So what we’ll do initially is each panelist
has a few slides for a couple of opening remarks to share with you what's important to them
and what’s on their mind for this panel. After we go through that, we will then proceed to
the open panel session, if you will, with opportunities for questions from the audience,
as well as questions among the panelists and from myself. So as the panelists give you their view of
some of the points they’d like to share, you can start sort of formulating your questions,
and we hope that you’ll take the opportunity to ask questions, make comments, and discuss
issues related to cyber-physical systems, cyber-physical systems and security, because
indeed, it’s a big deal. There is an increasing level of investments and coordination within
the federal government, which also translates to some of the work that you have the opportunity
to participate in. And with that, I will turn it over to our
first panelist, David Corman. He's the Program Officer and CPS Lead at the National Science Foundation, in the CISE directorate. And you can come here, or you can do it from there.
OK. Thank you, Tomas. So as he said, securing
cyber-physical systems is a really big deal. At the NSF, we’ve had a CPS program ongoing
for probably five or six years now. And for the last probably two years, we’ve been very
interested in technologies and research in the area of, how do we develop secure cyber-physical
systems? This year, if you’re following the solicitations,
we have a solicitation with proposals due June 2. And that includes some special attention
in the area of secure cyber-physical systems, and includes, for the first time in the CPS program, a joint solicitation with the Department of Homeland Security and the Department of Transportation. So just to talk very briefly, what
do we really mean by CPS? So cyber-physical systems have cyber, so that’s communications,
computation, and control. They interact with the physical world. They sense the physical
world, and they have actuation that acts on it. And the important thing that we have to remember is that, because they operate in the physical world, their behavior is really constrained by the laws of physics. So it's a very important characteristic, and one that we think is important as we start to talk about how we develop secure cyber-physical systems. So CPS put cyber capability in every physical
component. They’re tightly integrated capabilities, so the computing, the control, the sensing,
and the networking are very tightly coupled. The time scales can vary from very short time
scales, on the order of milliseconds or microseconds, to time scales when human operators are involved in the behavior, on the order of seconds or more. They're also complex in that they exist at both many temporal as well as many spatial scales. So if you look at systems that are deployed
on airplanes, or automobiles, or bridges for structural health monitoring, the capabilities
may exist at very short distances, or also at very long distances. There are high degrees of automation, and control loops are closed at many different levels. And they may have very different and
very unconventional substrates, including some biological, some nano, chemical, et cetera. And what’s very important, and one that was
pointed out earlier today in the discussion, for years, we’ve talked about safety of cyber-physical
systems. And there are all types of regulations with the FAA, the Department of Transportation,
on how to design and develop safe cyber-physical systems in those domains. What we haven’t really had is a lot of research,
a lot of technical discussion, on how to actually build secure cyber-physical systems. So a
couple of things that we’ll talk about throughout the discussion today is what makes things
different in the world of cyber-physical systems? So the solutions need to be lightweight. We're not dealing with very large enterprise systems; solutions that are too processor-intensive or too power-intensive are not going to be very applicable. The
systems that we talk about frequently have human operators that will interact with them, and
these human operators are people like pilots, drivers, doctors, nurses, people in the manufacturing
domain. The point is they’re not system administrators,
and that the security decisions have to be presented to them in a form that they can
understand, and not force them to be the administrators or do forensics on a false detection. So the
problem there is, of course, that the operators are doing other things, like driving a car, flying an airplane, or operating a CNC machine. When we look at today's systems, your own computer system is logged in and getting updates to antivirus and other malware detection capability. When you look at cyber-physical systems, they may be
embedded deeply into some systems, and it’s very difficult to provide updates to them.
So security solutions that require updates or interactions with a home base may not be
very practical. And one of the points that I particularly
am concerned about is, what do we do when we defend? How do we really defend these cyber-physical
systems? And what does the concept of resilience and mission assurance really mean? It’s easy
to develop some anecdotal definitions, but in terms of research, it’s hard to be very
precise. And finally, a last thought is when we talk
about cyber-physical systems, the laws of physics don’t change. And so the question
that I ask and I think is one of interest here, can we really treat the physical world
as our friend in the area of CPS security? Thank you. Thank you, David. [APPLAUSE] For the next presenter, for just opening statements,
we'll have Victoria Pillitteri. She's the Smart Grid Cyber Security and Cyber-Physical System Security Lead at NIST, and Chair of the SGIP (the Smart Grid Interoperability Panel) Cyber Security Committee. I think I said that right. So Vicki, would you
like to come here? Sure [INAUDIBLE]. OK. [INAUDIBLE]. Thank you, everyone. As Tomas
mentioned, my name is Vicki Pillitteri, and I'm from the Computer Security Division at the National Institute of Standards and Technology. So I hail from the division that's
developed such publications as "Guidelines for Smart Grid Cyber Security," "Guide to Industrial
Control System Security,” “Security and Privacy Controls for Federal Information Systems and
Organizations.” We do a lot of cyber security work. So I’m
here to talk a little bit about the NIST Cyber-Physical Systems Public Working Group, which is an effort that's being developed in partnership with NIST's Engineering Laboratory under the direction of Chris Greer, the Director of the Smart Grid and Cyber-Physical Systems Program Office. So the NIST initiative in CPS will provide
scalable design strategies based on new standards for integrating architectural layers in hypercomplex CPS, for connecting multiple CPS from the component level (for example, in composite medical systems) to the continental scale (the smart grid), and a focus on robust, science-based
metrics and agile research and testing platforms for integrated CPS performance measurement
and management. So the long and the short of it is we’re going
to be starting off an initiative to create a new forum to help break down the existing
silos of excellence between communities in order to promote cooperation and coordination
amongst the broad community of CPS designers and users across different sectors for better
connecting industries’ needs to research capabilities, and allowing researchers to begin tackling
those cross-domain CPS challenges. So as identified by the stakeholder community
through a series of workshops and workshop reports, including "Strategic R&D Opportunities for 21st Century CPS" and "Designed-in Cyber Security for Cyber-Physical Systems," there's
a critical need here for a common lexicon, taxonomy, and a common architectural vision
to help facilitate interoperability between the elements and systems, and promoting communication
across the broad breadth of stakeholders in the CPS community. This need was also identified internally at
NIST early in 2013, or late in 2012. And as a result, a cross-lab group was convened to
develop a draft notional CPS reference architecture. So now, before I get into any of the next
slides, I want to begin with a caveat that all the material here is draft, and we’re
really excited for the Public Working Group to take this definition and notional reference
architecture as a starting point, and one of many inputs to begin developing a consensus
definition and reference architecture. So the CPS notional definition. Very similar
to David's definition. CPS are hybrid networked cyber and engineered physical elements, co-designed to create adaptive and predictive systems that respond in real time to enhance performance,
with varying degrees of human interaction. So these systems, really, we’re interested
in them because they represent a key innovation-based growth engine for the industrial internet,
the US economy, and just society in general. A number of related CPS concepts come together
under this umbrella of CPS, including industrial internet, Smarter Planet, sustainable cities,
smart systems, machine-to-machine, and the Internet of Things. So CPS is really broad; that's the best way to summarize it. So again, similar to David's points, the essential characteristics of CPS that NIST identified are very similar to what NSF came to: co-design, treating the cyber, engineered, and human elements as integral components
of a functional whole system, the integration of physics-based and digital world models
to provide learning and predictive data analytic capabilities, system engineering-based architectures
and standards to provide for modularity and composability, reciprocal feedback loops between
the computational elements and the distributed sensing and actuation and monitoring control
elements to enable adaptive multi-objective performance. And of course, the networked, secure
cyber components, which provide the basis for scalability, complexity management, and
resilience. So this is the notional CPS reference architecture
that the NIST team developed, all within the context– it’s a functional multi-stack architecture,
and all the layers are co-designed in the context of the physical environment. A management
function, which is not depicted in the diagram, provides an oversight capability and ensures
coordination and composability. The horizontal layers of the stack depict
a hierarchy of functions, but do not imply that communication is limited to adjacent layers only. The physical environment comprises the aggregate of all connected conditions, influences, or surroundings. All of the layers,
the architecture layers which are horizontal and the cross-cutting functions which are
vertical, are co-designed in the context of the physical environment. The vertical cross-cutting functions show
the critical elements that connect the architecture layers. So these cross-cutting functions are
essential to ensure that each of the architecture layers can share and act on data from other
layers effectively and securely. And finally, the management function that I referenced
earlier provides the ability to oversee the complexity across the CPS. So again, this is a starting point, a notional
CPS reference architecture. So we’re really looking forward to the kick off of the NIST
Public Working Group. So looking ahead, this Public Working Group will consist of stakeholders
from government, academia, and industry, and is open to all. And we’ll be working to develop
definitions and taxonomies, requirements and use cases, cyber security and privacy considerations,
a reference architecture, and a technology roadmap. So having the breadth of representation from
all the stakeholder groups, government, academia, and industry is critical to the success of
this working group. So for each of the technical subgroups within this Public Working Group, there'll be co-chairs from NIST, academia, and industry. So what you'll notice here is none of the
subgroups have a specific scope, or objective, or goals yet. So the co-chairs are currently
meeting and coordinating to discuss and develop an initial scope, objective, and goals. And
those will continue to evolve as the Public Working Groups convene and kick off later
this summer. So for more information on the upcoming launch
of the Public Working Group, we anticipate the initial meetings to start in the June
and July time frame. Please contact Jerry Castellucci. He’s the overall PM of the project.
So thank you so much for your time, and I look forward to questions. Thank you, Vicki. [APPLAUSE] OK. So we’ve heard from NSF, from NISTS, representatives
of what’s on their mind. And to round up the government view, we will hear from Scott Tousley.
He’s the deputy. Director. within the Cyber Security Division of DHS S&T. Thank you, Tomas. I’ve got a few slides just
to sort of illustrate some things that I’ll talk through, but I won’t take a whole lot
of time on them. You can feel free to grab me afterwards if you have particular questions
that don’t come up here in the session itself, or the email is there. I help Doug Maughan run the Research Group
for Cyber Security inside the science and tech portion of DHS, so we’re different than
the CS&C, US-CERT sort of operating side of things at DHS, and different
from the CIO and CISO organization. We focus pretty much exclusively on research. Dr. Dan Massey joined us about eight or nine months ago from the faculty at Colorado State. In large measure to help us with parts
of this, he’s working for us for a couple years as an IPA. And so some of the material
I’ll show you here reflects what Dan’s put together in a program sense, and then you
can reach him also for discussions in this topic. Anytime we put money into a program area,
we sort of have to lay out for the decision-makers in our organization. Where are things today?
What are we proposing to do? How are we going to make an impact? And if you look at how
cyber-physical systems– and I’ll just use that as the term of reference for right now.
If you look at how it works, there’s an awful lot of stuff in academia. There’s an awful
lot of stuff in industry. There’s a lot of cross play left to right here, as you see
on the chart. The challenge, of course, is we don’t have
a strong architectural or systemic approach as all of these reasonably good well-meaning
individual efforts happen, and this whole thing, of course, is a large dynamic changing
shape of different technologies, different industries and sectors, different time frames,
and so on and so forth. What’s interesting– and Dan actually brought
this out in the discussion some months ago– is that it’s not like these are system areas
that you can opt out of. You're going to be operating in cyber-physical systems. And so everybody
has to sort of ride along– it’s a little bit like going to an amusement park, and you’re
going to go on the ride and take the roller coaster. Whether you realized that was the ride or not, you're going to go there. What we're proposing to do is sort of marked
out in red here. And it’s a little bit of a full court press. We’re trying to do some
things research-wise with academic organizations. We’re trying to engage industry fairly heavily,
and I’ll come back to that in a minute. We’re trying to actually advance some areas more
towards the front-end fundamental research. Also, more towards the components and then
the later systems design. One of the decisions we made was the system
is so large, so evolving in different areas, that trying to focus on just one or two specific
things was simply too narrow against the problem space. So we’re going to try and look at a
whole bunch of these different areas. See if we can’t help strengthen collaboration
across some of the boundaries inherent in a block diagram like this, and try and make
security a much more integral part throughout, which we think will help tie back into a lot
of industry interest right now in more systemically folding and risk analysis considerations as
a part of what they look at for investment, technology needs, deployment operations, and
so on and so forth. So our program basically has three elements
to it here in this triangle that Dan put together. The first thing we’re doing is we’re working
on a research consortium– in this case, with the automotive industry. This is something
we’ve done in some other areas that I’ll mention too in a little bit. Second is we’re working
through applied research with some solicitations that we've put together, partly on our
own here in the second area, and partly in collaboration with NSF and a number of the
partners that you see up here at the table. We think we can make some advancements in
almost building code type things, in the applied research area in particular. And we’re actually
going to be interested to see where the industry consortium takes the focus of their effort,
where we’re providing some funding to sort of underwrite the operation of it, but the
different automotive organizations bring their own direct funding into the effort, and we
see where they want to go in terms of their specific research investments. The consortium model is something that our particular
research division has used with great success. We’ve had a fairly good effort with the Department
of Energy in what's called TCIPG, Trustworthy Cyber Infrastructure for the Power Grid.
We’ve also had an effort with the oil and gas industry directly. Again in both of these
cases, we are providing funds to allow research collaboration to be administratively and program
management supported. But in both cases, the money that the government
has put into the effort is exceeded by a factor of five or 10 by the industry investment,
and we’re able to work through the antitrust questions, able to work through the intellectual
property questions, and it ends up sparking a lot of direct benefit to the industry, and
a lot of secondary indirect benefit. So this model that we’ve used in a couple
of areas looking back is the same sort of approach we’re going to try and use here in
the CPS area. We really have to also focus on transition into use. That's
something that we work on very hard. We’re actually pretty widely known for it. Again
at the Department of Homeland Security, we get very quickly asked, OK, so what’s going
to change on the ground when your research investment finishes out? We’re very heavily
oriented towards applied research. So in the CPS area, we are looking for how
we’re going to ultimately transition the different technology, knowledge products, particular
breakthroughs that might come about from it and focus on that. It’s quite a priority. In my case, I’m also heading up, along with
a colleague of mine at S&T, the work on a National Critical Infrastructure Security
and Resilience R&D plan. That’s a big mouthful. It’s the R&D plan that was tasked in the administration
guidance from a year and three months ago. The executive order and the presidential policy
directive. And we focused on CPS as a main core theme in this research plan because CPS
captures so many of the other elements that are called out in the tasking, and some of
the other things you see listed on there. So the national R&D plan that was called out–
and notice it’s a national plan, not a federal plan, so it’s even broader. And this is something
I’m actually going to talk a little bit with Vicki about, because we think there may be
some commonalities in approach between the working group in the public sense that you’re
doing and what we’re trying to do in a more traditional planning document here. And the last thing I’ll leave you with, this
is an interesting thing that was published a couple weeks ago. The administration put
a lot of effort into examining CPS and humans, meaning the privacy issues that are coming
out as the data systems get better and stronger, as things become more and more pervasive across
our society and our systems, and our technology systems, our human systems, and so on. And I’m not really somebody that can go through
the Podesta document in terms of some of the policy recommendations. But I think the PCAST report here that was published on the same day is good professional reading
for anybody in this room. Because I think five years from now, you’re going to see the
privacy questions inherently a full-blown element of the CPS questions that we
were asked to talk to here today. So I think I’ll leave this little one as a
little homework for all of us to think about as we go off and leave this excellent conference.
Tomas, I'll turn the floor back over. Great. Thank you. [APPLAUSE] So as you can see, just within the last 10-15 minutes, there's a lot happening in the federal government within different
agencies with respect to cyber-physical systems research, and then applying those fruits in
new technologies that can be used within society. And that's also why we're glad to have this
panel, is to be able to describe to you some of the priorities that are shaping the field,
whether it’s the basic research from NSF, or the view of the framework, and what are
all the parts to CPS that NIST is starting to work on, as well as some of the applied
activities, and putting it together into a national R&D plan for critical infrastructure. So that was a quick view from the government.
The next speaker is Ulf Lindqvist, Director of the Infrastructure Security Program at
SRI International. SRI International is involved both with government programs and industry
activity, so he will give us a brief perspective on that from the industrial side. Thank you, Tomas. And thanks to everyone for
being here this late in the afternoon to discuss this very important topic. As Tomas said,
I'm with SRI International, which is a non-profit, independent research institute. So there are
many kinds of cyber-physical systems and we’ve heard the very general definitions, but I’m
going to focus on transportation, also known as planes, trains, and automobiles. Just because
we see so much happening there right now. So much development. And keep in mind this, of course, includes
fully automated, unmanned vehicles. So why should we care about cyber-physical systems
in transportation? Well, these are systems where people often ride inside, or are in
the path of a very fast moving system, and a crash here can actually really mean a crash.
And the blue screen of death can be exactly that. So these are systems where consequences of
a failure or an attack really manifest in the physical world and can have a large impact
on human health and safety. So that’s why I think this is very important that we get
this right, and as was mentioned earlier, safety is important, but you cannot have safety
without security in these systems. If someone can break in, takeover the systems,
then of course, all bets are off when it comes to safety. So in terms of the threat, we know
that mass transit, especially rail and air, unfortunately are traditional targets for
terrorism. So there’s definitely a threat out there. We can imagine that there are people
who want to do bad things to these systems that we depend on, and, as Scott or Dan Massey at DHS would say, systems that we really bet our lives on. We know that there are many challenges for
a stationary cyber-physical system, but the transportation systems are mobile. They move
around, sometimes at a very high speed. And if you have a manufacturing plant, an oil refinery, or other situations where you have stationary CPS, those have challenges. But
when it comes to safety and safe shutdown, there are ways to do that. When something
goes wrong, there’s a safety system that can shut down the control system in a fail safe
manner so things don't get out of hand. When the pressure gets too high in the tank, we can turn off the heat. We can let off some of the product. We can make sure there's not an explosion, basically. Of course, there are issues when these safety systems are also
CPS, and connected, and can be hacked into and so forth. But generally, this is true. However, if you look at an aircraft flying
at cruising altitude, what’s the safe system shutdown at that point? Same issue when you’re
in a car going on the freeway at 65 miles an hour. So it’s important that a CPS is able
to maintain safety defined by that particular application, whatever that application is,
even when the system is under cyber attack. And of course, related to this is, as was
mentioned earlier, the human operators and interaction with these systems. And that becomes
especially prevalent when it comes to vehicles. You’ve probably seen in the news and in technical
forums the incredible development that’s happening on the vehicle front now. There’s a lot of
automation: connected vehicles, where vehicles communicate with each other, and with fixed
nodes in the infrastructure in the roads. And of course, what drives this is usually
safety, primarily. Also to some extent capacity on the highways and such. But we believe that
systems are much better than humans at making good decisions, timely decisions. When it
comes to driving, we already see some advanced driver assist systems in modern cars that
are on the market today. And of course, the next natural step is to relieve the driver
of some of the actual driving duties. And I know there are people from Google here
at the symposium. I happen to live in Mountain View, and my morning route seems to be one
that Google uses to test their self-driving cars. So about every morning, I find myself
next to one of these white SUVs, where there’s a person in the driver’s seat, but he’s not
doing anything. The car is driving itself. And of course for a long time, we’ll see a
mix of these automated vehicles and traditional old vehicles, smart vehicles, vehicles with
stupid drivers, and so forth. So this is all going to be challenging. And then we throw
into this mix cyber security. It's already been shown, as you've seen in
the past year or two, and thanks to the efforts of some people in the room, that we are able
to hack into traditional cars that are on the market today. And with more automation,
I’m afraid we’re just going to see more vulnerabilities. And then there’s a whole human interaction
element. How do you interact? When does the human have to do anything? And does the human
really understand the full automation? We can have this situation of mode confusion that has led to some very unfortunate accidents in aviation, for example, when the pilot has
one notion of what the system does and the designer had another. So we need to make sure that these systems
provide robustness and resilience. We know that development is fast-paced. It’s very
feature-focused. It’s typically not focused on security. It’s focused on getting the latest
and greatest features in. And we know, I think everyone in this room knows, that vulnerabilities
will be introduced. It’s inevitable. There will be vulnerabilities, and they will keep
being introduced, even if they are patched. Patching and updates are difficult in CPS,
but even if we can do that, there will be new vulnerabilities in the updated software. So our systems need to have strong robustness
and resilience properties, so that they can then operate under attack. That they can bounce
back after attacks, and limit the scope and effect of vulnerabilities in attacks. So some
of the things we want these systems to have are capabilities in self-healing. To be able
to do some isolation and repair. We know the concept that was introduced 10 to 15 years
ago, intrusion tolerance, that even if you attack some part of the system, that maybe
the important safety functions can still go on. So some of the research areas are verifiable security properties. What properties can be defined for a system, and how can we actually
verify that it has those properties? We want to look into automated attack detection, diagnosis,
and response on the fly. So just to wrap up my introduction and summarize
some of these challenge areas, metrics, for example, we don’t have a good system to measure
security of general systems. And we certainly don’t have it for cyber-physical systems.
How can we compare two systems? When we make a change to introduce some supposed security improvement, how can we verify that security actually was improved? I mentioned the human-computer interaction.
Testing, in general, for systems with very low failure rates. When we require a long time, a very long time, hopefully, between any catastrophic failures, how can we test and verify that? If you require a billion hours between failures, how can you test that? For 10 billion hours, with a number of systems, and so forth?
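To make the scale of that testing problem concrete, here is a rough back-of-the-envelope calculation, under the standard simplifying assumption of a constant failure rate. If the mean time between failures is $m$, the probability of observing zero failures in $T$ total test hours is $e^{-T/m}$, so supporting a claim of $m \ge 10^9$ hours at 90% confidence from a failure-free test requires

$$e^{-T/m} \le 0.1 \quad\Longrightarrow\quad T \ge 10^9 \ln 10 \approx 2.3 \times 10^9 \text{ hours},$$

roughly 260,000 device-years of accumulated testing before a single failure is even allowed to appear.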
What are the safe states? We already talked about what that means in transportation. Robustness and resilience, as I mentioned.
And also, CPS environments have very specific requirements, often when it comes to cost,
especially in the vehicle market. That includes weight of the system, in cars, in planes,
space vehicles, also known as satellites, and so forth. Power can be very limited. Connectivity can
be challenging. Sometimes you have connectivity in a mobile system. Sometimes, you’re out
of radio range, and the system still needs to be able to operate autonomously. So these
are just some of these challenge areas. And then finally, we talked all about technology,
but there are policy issues as well when it comes to modern cyber-physical systems. Who’s
really in charge, and who should get the speeding ticket when your automatic car goes too fast?
Thank you. [APPLAUSE] OK. Thank you, Ulf. So for our final panelist,
we will hear from Mark Tehranipoor. He's Associate Professor in the Electrical and Computer Engineering Department, University of Connecticut, and also Director of the UConn CHASE Center and the Comcast
Center of Excellence in Security Innovation. As an accomplished academic, he said he can
speak for five minutes or five hours. So we’ll see how much he can put into the next 5, 10
minutes. Thanks. Thanks, Tomas. So when Tomas asked
me to come and join the panel, I knew I was the only academic researcher here. And I was
thinking I have several hats, so which one should I take with me to come? And I realized
that I could actually use all three of them. So as a researcher myself, and then as director of two centers at UConn, I basically put my slides together. There are probably more than what everybody else has, but I'll try to finish in the 10 minutes that Tomas gave us. Being the last presenter, in some ways, things
are a little easier for you because now, all the backgrounds are covered. So a terrific
job on those presentations. I learned quite a bit as well. But in the meantime, it makes
it very difficult for you to present something new that is different and give a different
perspective. Having a background on hardware security,
when you look at the cyber-physical systems, very often we start talking about very high
levels of abstraction. And very often, we forget that what makes the systems are those
teeny tiny devices that you have to design. And we have to make sure that those devices actually work perfectly. So with all of the examples given by the panelists, I could go on and on and talk about different physical systems, and I could even think about the
Internet of Things, because that’s coming as well. A lot of devices that are connected
together, et cetera. But as I said, when you look at every system,
it starts with the integrated circuits. And I see some familiar faces in the room that
I know care so much about integrated circuits. Very often, we build a system later and we
go back and say, gee, is this secure? And what we wanted to do in this community of
hardware security, per se, is to start from looking at the circuit and say, well, is this secure, before I actually continue? One of the new projects that we have started at UConn, UMD, and Rice is just to answer that one question,
that when you start thinking about even a new device, let alone just integrated circuits
as you see over here, think about a transistor. And as you start thinking about power that
Ulf mentioned, and you bring it down to that small scale, and you think about reliability
and performance, et cetera, why don’t we start thinking about security at that level as well? Can I change the parameters in a way that
later on when I build my system, I actually have a much better time? Very often, we build
the system, and then we start asking that question. We want to start asking that question
at the very first step of the process when we start thinking about putting a system [INAUDIBLE]
basically a circuit together. The challenges that we now have come in different numbers: 800,000 different types of chips in the market. So you want to make
sure all of them are secure. They’re going to different systems. They’re going to PCBs.
You hear very often that these PCBs are being attacked, cloned, and then– let me just go backward. I'm used to Apple Macs. And then it goes to cars. Ulf gave an example of automotive. But many
of you may find it surprising to know that in 2014 cars, we may have up to 1,000 microcontrollers.
So 1,000 microcontrollers. Everything is being controlled. And many of these microcontrollers
are actually talking to each other. And many of them– I know because we work with [INAUDIBLE] semiconductor to design and test some of the microcontrollers– are the same microcontrollers being used everywhere. If one of them is compromised, that means all are compromised, and they can easily communicate with each other. Another major concern that I have actually
is the fact that we build systems based on many commercial-off-the-shelf components. We just
go to market and grab a system, and then use it. But the reality is, who built the
system? Who put them together from the transistor, to the chip, to the PCB, to your system that
now you’re using in the cyber-physical system? That is a major challenge. That takes me basically
to that big question that I often ask in the supply chain area, especially in the hardware community, where I often show this example and say, what [INAUDIBLE] circuit that dot
is? And that dot is– I’m not going to challenge
people here, but it's the answer to the question: who really designed those IPs for us, and the ICs, and then put the PCB together, and then made the whole system?
But if you look at the way things are done today, it’s very different. It’s globalization
that is impacting the way we build the system. So when you start thinking about that big
idea, it goes all over the place, and then in your RTL, and then design, and then the
fabrication, assembly, test, packaging, putting it into a PCB, and if somebody else in another
country does the PCB for us, is that trusted? Am I going to be able to build a CPS based on it, not knowing exactly what that system is? And a lot of other issues associated with
it. So one of the things we do in the center is basically do all sorts of vulnerability
analysis. I could give you several examples. I actually have two examples. One is the domain
of electronic components, where we look at the supply chain and then say, where are the
vulnerabilities associated with those? So if you look at the process from design, what I like to call from design to resign, from the beginning when you have the idea, all the way to when you throw away your system. You think that everything is fine, but devices actually find a way of coming back to the market. But in the process from the fabrication, assembly,
distribution, system integration, et cetera, there’s a lot of issues associated with it
that cause trust problems, security problems. I talk about cloning, et cetera. One of the examples is hardware trojans, which
could easily come into the system, including CPS. Basically what it does is it allows the
adversary to get access to our system. And examples of it, you probably read articles
about it. If not, go to a web page that is [INAUDIBLE], a page actually called trusthub.org, where you get a lot of stories about different types of hardware attacks that have been carried out on many systems. And they basically could get access to your
system and be able to control it, send information, receive information, et cetera. Another one
basically would be some sort of what we call a time bomb, which can again easily cause problems in your system. One of the things that really concerns me
a lot when I look at the way we design our systems is the concept of tampering and cloning.
Cloning is on the rise. You’re going to get a system you think is yours. But believe me,
you need to double check. I'll often give an example of a laser system [INAUDIBLE] company in the state of Connecticut. A laser system that could be as big as my office is being completely cloned, redesigned, and fabricated, with the same specification and everything, in China. And they're selling it under the name of that company. Now you go and get that system, put it into your own system, not knowing that it's been compromised,
and a lot of other things that come with it. So I take a different hat now. And one of
the things we do in the Comcast Center of Excellence is a vulnerability analysis. We
unfortunately can't name what devices, et cetera, we get, but you can kind of guess what Comcast is. And not many people know how much Comcast actually does. We always look at Comcast as a cable provider, but the reality is that Comcast has a set-top box, or router, or gateway,
or home security device in your home. So when you think about it, you’re talking
about millions and millions of devices that they have. And those devices could easily
be tampered. So when we get those devices in the center, we basically put them into
a system that Comcast allows us to get access through and do all sorts of vulnerability
analysis. We have a long list of vulnerabilities, and you basically go through that checklist.
And very often, we start adding more to it because there is a group of researchers at
UConn that are doing all sorts of stuff. Then we basically take them and then try to implement
it on Comcast devices. Just to name a few, we do a lot of tampering.
We do reverse engineering. We do test analysis. We do hardware security. We do a black box
test, as Ulf mentioned. We do a lot of tests on those devices to actually see whether they
can tolerate some of the attacks that we carry. And we do [INAUDIBLE]. We really go inside,
identify the chips, identify the system, get into the software, go through the firmware,
get them out, and analyze them, put them back again, act like a good device, et cetera.
So we do all of that, and see what the result looks like. Unfortunately, I was told to take all of that information out. But you can kind of guess what sort of vulnerabilities we're talking about. So
this is a system that we have to look at. This is Internet of Things or cyber-physical
system. You basically put all of these devices communicating with each other, and all we
do is we look at the vulnerabilities. It’s about 10-12 faculty working together on this,
and we analyze it at all levels of abstraction. I’m almost done. Another thing we do, this
is again part of the Comcast Center of Excellence, is supply chain assurance, where we're really now going down to the hardware level, analyzing their supply chain, and then basically developing new solutions. Working with industry, cost is key. We have to bring down the cost as much as possible. I want to give an example. In the bottom right,
we actually developed a technology that allows them to be able to keep track of every device
that leaves the inventory and does the authentication, identification, et cetera. It goes from inventory to inventory, installer to installer. They would know exactly where the device is and who had it the last time. It addresses their [INAUDIBLE] problem. It addresses their tampering problem, et cetera. And they wanted it, for example, at something below $0.50.
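As a rough illustration of the record-keeping side of that idea (the actual sub-$0.50 mechanism described here is hardware-based and not specified in the talk), a hash-chained custody log lets each hand-off be recorded so that earlier entries can't be silently rewritten. All names below are hypothetical:

```python
import hashlib, json, time

def append_transfer(log: list, device_id: str, holder: str) -> None:
    """Record a custody hand-off, chaining it to the previous entry's digest."""
    prev = log[-1]["digest"] if log else "0" * 64
    entry = {"device": device_id, "holder": holder, "time": time.time(), "prev": prev}
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify_chain(log: list) -> bool:
    """Return False if any entry was altered or removed after the fact."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "digest"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["digest"] != expected:
            return False
        prev = entry["digest"]
    return True

# Example: warehouse -> installer -> customer home
log = []
for holder in ["warehouse-7", "installer-42", "home-gateway"]:
    append_transfer(log, "STB-0001", holder)
assert verify_chain(log)
```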
Now you basically put it into the whole system, but you have to bring the cost down. I'm going to skip a few slides. For the work we do with Comcast, we need to also keep the NIST framework in mind. And a lot of projects
that we have in the center, basically we try to see which of those important functions are being addressed by the projects. I'm going to skip a few of them and get to
the couple of slides that I have on the challenges. Ulf helped me out here and talked
about metrics. Metrics are important. When you do vulnerability analysis, security analysis,
are there ways that you can actually evaluate to see how secure devices are in [INAUDIBLE]? It's a moving target, but what we do today,
it’s important to be able to tolerate the attacks. Very often, we talk about security,
but the reality is that security, reliability, quality, and resiliency all have some relationship here and there. Very much so, we think about security, but the reality is
that some security challenges could actually be called a quality problem, where they present
themselves as a quality problem, and you might have already some solutions there. So we may want to analyze those vulnerabilities
as well, and then focus on what the real security problems are actually [INAUDIBLE]. Security
assessment at all levels of abstraction: we do the [? voice, ?] we do the system, we put them into a bigger system, and then try to analyze them. That's what we need to do. I need to mention a couple of [INAUDIBLE]
[? worked ?] on. One is the supply chain vulnerability analysis. An interesting program is out from
DARPA called DARPA SHIELD. They're supposed to design a dielet that goes on every device, every chip, and allows us to be able to keep track of where it is. So it's a very interesting program. Another– I want to again go back to what
Ulf mentioned about the industry consortium, et cetera. As the speaker before mentioned, NSF started an interesting program that brought a lot of semiconductor companies and EDA companies together to really work on addressing this big problem. So it's important we look at consortium activities. At UConn and CHASE, we also have a consortium
of several companies, and those numbers are increasing. In terms of CPS, taxonomy was
mentioned earlier. Best practices, we really need to put best practices out there too so
people will see. If you look at Trust-Hub as an example, we have a lot of benchmarks, we
have a lot of tools. We have a lot of hardware platforms, et cetera, for people to try it
out, to use it. I think for CPS, we need to do the same as
well. And let's not– I'm from academia. I can't leave this podium and not talk about
education. We definitely need to put more focus on education at all levels, from the
science of security at the low level, to the higher level. I’ll leave a couple of items
up there, and I'll talk about it as we get to the questions. Thank you. [APPLAUSE] Thank you, Mark. So I hope you are all thinking
about some questions or comments that you would like to make. And in the meantime, I’ll
just put the names up again so that it’s clear. But I think the challenge is clear. We are
heading, in a big way, towards a CPS world. There are just so many opportunities in manufacturing,
in transportation, in health care, in agriculture– you name it– huge opportunities for improving
efficiencies, for new products, new services that will improve society. But the challenge is those systems have to also be secure, cyber secure. And so the challenge is, how does this community work on these problems with the traditional cyber-physical communities, which often speak or approach the problems in different ways? So it's a challenge collectively for
communities to come together and figure out ways to secure these systems. If you talk to the people who designed these
systems, traditionally, you’ll hear a lot about control loops and control theory. Well,
that's not really the language of the community here, but those two bodies have to come together
to solve these problems. So it looks like we have already some questions
in mind. We’ll start with the gentleman in the middle. Jim [INAUDIBLE], University of New Mexico.
Given all the threats that you guys covered very well from hardware to software, cyber-physical
systems, I’d like to get the opinion of each of the panelists in terms of what they think
the most important step, or component, or process would be for securing our current
cyber-physical systems and our future ones. Is it something that would be hardware-oriented?
Mark had mentioned a couple things regarding giving devices identities, much like we have
identities and DNA. Do we need to have that level of security to protect cars? Come on. I mean, you know, I'm not going to be one of the first adopters of the Google car that drives itself until people prove to me that it can't be hacked over the internet and have a massive accident that's caused
by some hacker or terrorist. Anyway, I'd like to get your opinion on that. OK. Should we start with Mark here? OK. It's a question that will get you a lot of
different answers. I think the one that I think about is supply chain. Supply chain
for hardware, supply chain for software, supply chain for many other components of the CPS.
If we take control of what's going on in the supply chain, I think it's going to have a
major impact on improving the security of our systems. That’s a very good question. It’s like picking
your favorite child. I’ll have to say that risk management, I think, is a key component.
Understanding that cyber security risk management is like any other business process. It depends
on what the threat is. It depends on what your assets are. So knowing, identifying– I guess as
Mark had in his slide from the cyber security framework, identifying, protecting, detecting,
responding, and recovering. Having that process in place for cyber security is really critical. So I’m actually glad you asked that question,
because listening to our academic, man, we're in trouble. We're bleeding everywhere, it sounds like. And so the question is really, where do you start? And in a sense, what you have to do is look at both what is practical and what can actually prevent issues. And so the supply chain is difficult because
it's such a large problem. And with the supply chain, frankly, you've got so many systems in place already, so you can look at, can you fix it going forward? So the question
in my mind is really taking advantage of the fact that these systems have to interact with
the physical world and looking at novel ways of detection, as well as adding some resilient
or other capabilities to define what the system can do. How can I achieve a safe landing
with this system? What kind of behaviors can I implement through software that essentially
add a level of safety and security into these systems? So in a sense, I'm looking at the detection side, which I think the physical world can help with, and I think we can make some progress in that direction.
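As a minimal sketch of the detection idea being described here, treating the physical world as a sanity check on reported data: the acceleration bound and speed values below are illustrative assumptions, not real vehicle specs.

```python
# Reject sensor reports whose implied dynamics violate physical limits.
MAX_ACCEL_MPS2 = 12.0  # assumed bound: no road vehicle accelerates or brakes harder

def plausible(prev_speed_mps: float, new_speed_mps: float, dt_s: float) -> bool:
    """Return False if the reported speed change exceeds what physics allows."""
    implied_accel = abs(new_speed_mps - prev_speed_mps) / dt_s
    return implied_accel <= MAX_ACCEL_MPS2

assert plausible(10.0, 11.0, 0.1)      # ordinary acceleration: accepted
assert not plausible(10.0, 60.0, 0.1)  # 500 m/s^2 is impossible: flag as suspect
```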
When it comes to the very first steps to take, they may not necessarily be so research-oriented. One technological thing is the low-hanging
fruit, which relates to the device authentication that you mentioned, for example. In a network,
in a car or a plane, it’s assumed that if the device is on the network, it’s authorized
to be there because there were no outside connections in the past. So that's something that is relatively
easy to fix. More of an engineering issue than research perhaps. Other things, as was illustrated in the recent
car hacking efforts by Charlie Miller and Chris Valasek. They used diagnostic messages that are meant to be used in a workshop, when the car's in for service; they sent those messages when the car was driving down the freeway, and it accepted those messages. Things like that. So there's some things that can be done there.
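A minimal sketch of what such a fix could look like, assuming a shared per-vehicle key and a frame counter for freshness (both real engineering problems this sketch ignores); the arbitration IDs and speed check are illustrative, not from any actual vehicle network:

```python
import hmac, hashlib

SHARED_KEY = b"per-vehicle-secret"   # placeholder; would be provisioned per ECU
DIAGNOSTIC_IDS = {0x7DF, 0x7E0}      # example diagnostic arbitration IDs

def frame_tag(can_id: int, payload: bytes, counter: int) -> bytes:
    """Compute a truncated HMAC over the frame ID, a freshness counter, and payload."""
    msg = can_id.to_bytes(4, "big") + counter.to_bytes(4, "big") + payload
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()[:8]

def accept_frame(can_id, payload, counter, tag, vehicle_speed_mph) -> bool:
    # 1. Authenticate: the frame must carry a valid tag.
    if not hmac.compare_digest(tag, frame_tag(can_id, payload, counter)):
        return False
    # 2. Context check: diagnostic/service messages only while stationary.
    if can_id in DIAGNOSTIC_IDS and vehicle_speed_mph > 0:
        return False
    return True
```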
But then also, we need to make sure that we as a security community involve and educate,
as Tomas mentioned, the people who are more on the automation side of things and build
these systems. And there are several efforts that have started to do that. There’s the
DHS effort with the automotive consortium that Scott mentioned. For aviation, the American
Institute of Aeronautics and Astronautics has just formed a cyber security working
group, for example, to educate their industry on what the cyber security issues are. So some efforts underway. But then in addition
to that, of course, all the really hard problems that we all listed and will take some more
research. I think it might be feedback. And I don’t
mean in a controlling sense, but in an awareness sense. If you look out there at some of the
best organizations in any field, there are organizations that somehow managed to couple
all of what they do and operate into sort of continuous improvement and adaptation things
going forward. The old sports team analogy: the really good coaches can take your team and beat you with your team, not just their team. Because they've simply got a way of
sort of going back and forth within the activities that they have now and continuously make them
better. And I think since we’re going to be leaking,
or bleeding, or faced with some of these challenges no matter what and where, it’s the organizations
that have feedback systems to sort of make them continuously better from a design, and
operations, and improvements sense, et cetera that may be the single best thing to focus
on. Great. Thank you. [INAUDIBLE] after that. With regard to hardware/software,
there’s a very simple observation that hardware devices being put into cars now will still
be on the road in 20 years. And with software, you can do a dynamic patch, and download it
or something. But if there’s a thousand chips on the car, there’s no way that you can economically
change that without just scrapping the car, essentially, and starting over. So it would seem that there’d be an urgent
priority that when new kinds of physical interactions, new kinds of physical devices, are added to
cars, the security considerations be given a much higher priority than they are
right now. And that should almost sort of preempt everything else going on, because
that’s going to be a legacy going forward for decades. So no matter how good a job we
do in software security, if it’s based on [INAUDIBLE] bus or something like that, which
has got inherent insecurities in it, there’s no way you can fix it. OK. So we have a vote here, strong vote, for
hardware security first. Mark, you're the hardware expert here. Want to take the comment? Well, I think you touched on a very, very important point here. As other speakers were talking about it, I actually wrote down here for myself "legacy," because that's exactly what it is. When we think about CPS,
very often we don’t think about the fact that CPS actually is usually targeted for systems
that have a very long lifetime. And to think about the automotive application, we work with Freescale semiconductor and their microcontrollers. And 20 years is their
lifetime, making sure devices are going to last that long. Now, that’s a reliability
problem. But when it comes to security issues, today’s parts are tomorrow’s legacy components.
If we don’t do it right now, we’re going to go back again and deal with these issues. We have to come up with ways to make them
more secure. The reason I answered that question by Jim [INAUDIBLE] with supply chain was just exactly that. We can't make every device that goes out to be very secure. The reason is because at time zero, I may think this device meets certain specifications or security metrics, if we have
that metric in place. But the reality is that five years down the road, that device may
actually be compromised, not necessarily with hard– see, the good thing about hardware
is that you can’t go back in and basically change it. However, it could be subjected to side channel
attacks, if side channel attacks do [INAUDIBLE]. Even though you think that you actually covered it, five years down the road, there are much better attacks that would do that. And
that’s where the software can come in to help hardware. You can’t go back and try to take
all of that hardware out, and then address the problem. So I agree with you. It's a major challenge.
The fact that you could do your best, whatever best means. You have to have a metric in place
to know what you're trying to protect those devices against. I showed that in the supply chain. There are costs associated with each of those vulnerabilities. Once you know what the mitigation processes are, you can address some of the issues. Believe me, we actually do some of the tests
on these devices that come to us. They have no security mechanism whatsoever. And forget
about even just an ID in it. There's nothing in there. And one could easily open the device, easily put anything in the device, and we're back again. Put it into the system. It could do a lot more than what it's supposed to do. So I agree with you. It's a major, major problem, and it comes to addressing hardware at time zero. Any other comments on hardware? I just want to add to Jim's comment about
systems being on the road for 20 years. It’s even worse because right now, the chips that
are being designed for cars are being designed for model year 2020. That’s the lead times
in development of the basic chips that go into cars. So they’re being designed now,
and then after that, they're going to be on the road for 10, 20 years. And when it comes to software, there's the chance where we can actually do upgrades and security patching. I'm not sure you want Patch Tuesday for your car, but that's going to happen. We need to make sure, of course, that those updates are secure, because that could otherwise be an excellent channel for an attacker to put something nasty in there.
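A minimal sketch of what such a secure update check could look like, using the third-party Python "cryptography" package for Ed25519 signatures; the key provisioning and version scheme are assumptions for illustration:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

VENDOR_PUBKEY = b"\x00" * 32  # placeholder; the real key is baked in at manufacture

def verify_update(image: bytes, signature: bytes,
                  version: int, installed_version: int) -> bool:
    """Accept an update only if it is newer and signed by the vendor."""
    # Reject rollbacks: an attacker must not reinstall an old, vulnerable image.
    if version <= installed_version:
        return False
    try:
        Ed25519PublicKey.from_public_bytes(VENDOR_PUBKEY).verify(
            signature, version.to_bytes(4, "big") + image)
    except InvalidSignature:
        return False
    return True
```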
Great. Greg, you've been patient. Over to you. So, Greg Shannon from Carnegie Mellon University's
Software Engineering Institute. Scott and Ulf, I’m particularly interested in your answer
to this question in the vein of what we do first. What have you seen as examples of hope,
promise, success, if you will, that you think that can be used as an example, a qualitatively
reproducible example of success in [INAUDIBLE]. For a physical system, a lot of times, people
will reference NASA, for example, or other systems where millions or billions of dollars
were invested, and that really doesn't work if $0.50 per part is your budget. So I'm curious.
Where have you seen possibilities of success? Because there’s, in my sense, too much doom
and gloom. What’s [INAUDIBLE]? [LAUGHING] Yeah, this is the opportunity to say that
the sky is falling. There is no hope and so forth. So let’s try to look at the positive.
So I think that there’s some promising developments when it comes to hardware solutions. Actually,
we've seen that in industrial control systems, for example. The so-called bump-in-the-wire
devices that you put between the legacy insecure system that was never designed to really communicate
with the outside world. And you put something in front of it, whether
it’s an encryption device, an application firewall, or something that really takes care
of most of the security issues, and you can still live with having that old insecure system
behind there. And there are similar things being developed actually for cars for example.
Encryption boxes that you can put in there. I think unfortunately, as we know, it’s so
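As a rough illustration of that bump-in-the-wire idea, here is a toy sketch: a small proxy terminates TLS from the outside world and relays plaintext to the legacy device behind it, so the old system never has to change. The addresses, ports, and certificate paths are invented for illustration, and a production unit would also authenticate clients (e.g., mutual TLS) and whitelist protocol commands rather than blindly relaying.

```python
# Toy "bump in the wire": terminate TLS in front of a legacy device that only
# speaks plaintext. Hosts, ports, and cert paths are made up; a real device
# would also authenticate clients and filter the industrial protocol itself.
import socket
import ssl

LEGACY_ADDR = ("192.168.0.10", 502)   # e.g., a legacy Modbus/TCP controller
LISTEN_ADDR = ("0.0.0.0", 8502)       # the TLS-protected front door

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("bump.crt", "bump.key")   # the bump device's own identity

with socket.create_server(LISTEN_ADDR) as server:
    while True:
        raw_conn, _ = server.accept()
        try:
            # Encrypted on the outside, plaintext only on the short legacy hop.
            with ctx.wrap_socket(raw_conn, server_side=True) as tls_conn, \
                 socket.create_connection(LEGACY_ADDR) as legacy:
                request = tls_conn.recv(4096)            # one request in...
                if request:
                    legacy.sendall(request)              # ...relayed in the clear
                    tls_conn.sendall(legacy.recv(4096))  # ...answer back over TLS
        except (ssl.SSLError, OSError):
            pass  # drop bad handshakes; the plant keeps running either way
```

I think unfortunately, as we know, it's so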
easy to build a system without security, and just make sure that it works, and worry about
security later, which is not good. The other, I guess, glimmer of hope is in the successful
partnerships that we've seen. Scott mentioned the LOGIIC effort, where DHS works with the oil and gas industry, for example. That's been going on now for more than five years, and they have done some successful projects in cyber-physical systems where they evaluated the security risks of various solutions and came up with recommendations for the industry on how to use, for example, the safety systems that I mentioned earlier, and wireless control systems, and so forth. So I think there's some hope in that we see that that kind of collaboration actually works, when we get the researchers together with the practitioners and those who own the systems and the problems, and together work on solutions.
Thank you. Let's go over to the gentleman
in the middle.
Hi. John [? Daly ?] from Akamai. I enjoyed the thoughts there. We definitely have complex systems with many complex interactions, going from hardware to software. And going back to something Scott said, I don't think security can be bolted on; I don't think anyone in this room would suggest that. It needs to be designed in from the start. Ulf, you made the comment that vulnerabilities will be added in, as I captured it, in a feature-focused design. And that's really the core of my question; we've kind of been talking around it a little bit already. There's a trade-off between the security of systems and the feature set, the time to market, and ultimately the cost of development. And I'd like to hear your thoughts on how we in industry, or in government, or elsewhere can balance the decision about the appropriate level of security. Especially in a low-cost-oriented, low-bid-wins model: consumers have cost as a metric, and in considering various types of quality, cost is absolutely comparable brand to brand, but other aspects like security or reliability are less tangible. So in a low-cost, low-bid, low-price model, how can we ensure that the investments are made appropriately and can be recouped by those who make them? In other words, how can we align the incentives of development to market? One possible answer may be we can't. [LAUGHING]
So who would like to start on the panel here?
I think it's an interesting conundrum you
bring up. I mean, you're really asking, what is the ROI of designing in cyber security? And that's a really tough question. I think we're in the initial stages: the fact that cyber security is on the front cover and being discussed is a first step in ensuring that cyber security is a design consideration, or even part of overall risk management within your organization, within your product, within your system. So it's a starting point. But when can we get to the ROI, where I can say I spent $5 on cyber prevention? Where's my $5 or $10 back? My $5.01 back? That is a really hard question that I don't think– at least I don't know the answer to.
One of your points is just the growth in complexity of features and systems. And to some degree, that's an area that has been looked at across many different programs, including things at DARPA and other military programs. In the consumer world, the real question is, what is the public demand? The public demands the features and presumes security. And the only way to– [INAUDIBLE].
That's right. It's just a risk. It's intangible.
Right. Because there's no perceived cost to it till you have some actual incident. In which case, what you almost need to do is create incentives, or create disincentives, on the production of insecure systems. That's maybe one way to deal with it.
It almost comes back to metrics and our ability
or difficulty in measuring risk and the return on investment for security. If we look at
what drives the whole vehicle automation trend, it’s the fact that we have 30,000 to 40,000
deaths on our highways in the US every year. People have a vision of a zero-fatality highway system, where everything's automated. That's fantastic, but can we measure the risk
of new fatalities due to problems with those automation systems or cyber attacks on them? And we know how difficult it has been all
along to get a real estimate of the security threat and the consequences. But I think one
thing that’s really come out of what we have said so far in the panel is that cyber-physical
systems are not IT systems. They should not be treated as such, and not in the development
either. I mean, we know about the very rapid development
of IT systems, and an app or your computer crashes, and that happens. It’s just annoying.
But as I mentioned, it’s very different when it’s a system that you bet your life on. And then of course, no one really likes to
mention it, but we know that for automotive, regulation was what got the manufacturers to focus on safety back in the late '60s, early '70s. And maybe something like that is required for this field as well.
I think one combination of things may help us over time in this space. What's going on now that's different than five, 10, 15 years ago is that you've got major companies, all across the industry, seriously trying to figure out risk in a connected world, whereas before they sort of talked about it in specialty areas, but it was not a boardroom discussion. And now it is. And that's new. There is more and more ability to, to some
extent, analyze the complexities of the system you’re talking about, what you’re operating,
and to learn from it. And hopefully, we will keep learning from our own operating experience.
I know the insurance industry is working extremely hard to pull data out of that operating history to underwrite, and to justify the market they see in providing a much more effective insurance outcome, like you have seen in other industries in prior decades, before we all got wired in and connected. So if we're fortunate: the networked world, more awareness, more data, more business interest in it. And given that a system crash of some kind in any industry could clearly be much larger in scope than, say, 10 or more years ago, hopefully that combination of things will make the system more resilient than some of our technical forecasting is suggesting now. Because we obviously see all the problems and the technical issues.
If I may add to the discussion, we need to
also consider the adversarial model when we talk about this. The sophistication of the
adversary is very important. For any system, we look at the specification, and we do vulnerability
analysis. And then given the vulnerability analysis, you try to address some of the threats.
I'll give an example from the work we do with Comcast. We don't want an average user to know how to break into a system. So you look at the knowledge that adversary has, whether it's a user, a consumer, or a bad guy, and [? what ?] the impact is. And that goes back to the discussion you mentioned about risk. If the impact is high but the attack is easy to learn and apply, that's where the industry wants a solution, and those are usually the ones that end up being low-cost solutions.
I use the term 95 plus 5: I address 95% of the problems and leave the other 5% out. If the adversary has all the resources to be able to attack my device, then so be it. I wouldn't say it that way, but many companies tend to put it that way. But very often, you find very low-cost solutions that address major security threats that we never thought would be addressed that easily. Because security was never taken into account during design, you see examples like yours, where somebody was able to get access to somebody else's camera while the person is [? in, ?] and could easily send a message, et cetera. Those could be addressed a lot more easily, but it was never taken into account before.
Thank you. Let's go over to this side, to
the lady.
Sorry. My name is [INAUDIBLE]. I work at CAVE, which is the Center for Advanced Vehicular Environments. As a security practitioner who is now in the automotive domain, I wanted to share a perspective and encourage everybody here to think about the automotive space. It is, as everybody said, a very unique space. It is a cyber-physical system, but within cyber-physical systems it has its own uniqueness. For example, every car is owned by a different individual, so it's under a different administration, if you want to think of it like that, as opposed to a fleet of aircraft or other cyber-physical systems. That is a very different constraint compared to the others.
Also, think of the adversary model. When we typically talk about an adversarial model, we're thinking of all the attack surfaces for a given entity and how we can penetrate it. But along with that, for an automotive system, think about the uniqueness of the user and how that marries into your adversary model. You cannot lock out a legitimate user from using his car. And by using his car, I also mean that if something's wrong with your car and you need to get a part changed, you don't necessarily have to go to the dealer. You can go to Joe the car mechanic at the end of the street. Or if you're handy enough, you can make changes yourself. It's called the right to repair, and our government gives us that; there's an SAE regulation that allows you to do that. So think of the legitimate user who is plopping parts in and out of his car to make it work, or who, after buying a car, is trying to make it beefier. We've seen that. You have to incorporate those legitimate cases in your adversary model too. So the adversary model is far more complicated than your typical bad-guy-is-trying-to-penetrate-your-system. It's also: don't keep the good guy out from his ownership of whatever the resource is. It's the car in this case.
James pointed out something; I wanted to touch
base on that as well. Cars have a very long lifetime, and they're on the road for a very long time. But don't think of a car as a single entity. A connected car may be one entity from a macro perspective, but from a micro perspective it's got multiple interconnected devices inside it that are communicating. And each of those parts has a different lifetime, so they can be taken out and put back in. So when we say we want to do hardware-based security here, think of the provisioning that goes into a hardware device. A system is fully working, and the manufacturer gives it to you with all the keys installed, a TPM, [? whatever it's going to be. ?] They work together fine. Then you pluck one thing out because its lifetime is over and you put something else in, and now that new part has to be able to work with the co-existing parts. So these are challenges. I mean, they have
to be incorporated into the picture. So I'm actually just dissecting all the different layers here and telling– sorry. It's a cross-functional approach: we're talking physical sensors, we're talking legislation, we're talking about the adversary model. Very, very different from typical systems. And I wanted to introduce this great audience to that kind of thinking process. Also, when we say security, we think, oh, we can do encryption, we can do crypto, we can do access control, we can do hardware security, all these things. But also think of security as robustness. So without incorporating any of these traditional
cryptographic mechanisms, can we harden the vehicle so that we can actually make it less
penetrable to an adversary, so that it strengthens the case for a legitimate user to continue
to use this car without being blocked out, and all these other cases of popping parts
in and out. So think of casting the security problem as a robustness problem, from a control
systems theory point of view. So I just wanted to add that, and share that, and get the panel's perspective. When you have an overarching theme like this, where the government wants to come in and help solve this problem, that is fantastic. The auto industry is a very, very slow-moving industry, with a lot of inertia. So like you said, when legislation comes in, they start looking at safety. Traditionally, that's how they do things, and with cyber security that's also how they're going to start doing things. So when you look at the problem and all the different layers here, how do you propose that we go about it constructively? How do we– It's very complicated, even just for this one cyber-physical system, cars.
Great. Thank you. Any reactions? Or comments?
I think it was a great set of points. And
in fact, I like the control system analogy in terms of robustness that you’re addressing.
When you start to look at CPS security, you start to look at, what are different avenues
of approaching the problem? And robustness of control systems is certainly one.
I also agree completely with what [? Olivia ?] said. And there's the aspect, of course, as she said, that every car is owned and operated by a different person, and we can't think in traditional security terms. How do you alert the driver to an ongoing cyber attack? Do you pop something up on the dashboard? What are they supposed to do about that? Questions like that really turn our traditional view of how we deal with security around.
Great. Thank you. Let's go to the lady in
the middle, Cynthia.
Hi. I'm Cynthia Irvine from the Naval Postgraduate School. In the physical world, time and location are two of the most important measurables. To what extent, especially in this transportation context, are secure time and secure location, i.e., secure GPS, taken either as axioms or as part of the context of the research?
I think those would make very interesting
research challenges that could be part of a CPS security solution.
I think in general, it's about securing the information you get from sensors in some way. All these automated or driver-assist vehicles depend on a number of sensors to look at the environment, and there are ways, of course, to trick those sensors. I heard a talk at DEFCON where someone analyzed this and talked about things like painting a hole on the road to make the car believe that there is a hole in the road when there actually isn't, and things like that. And we've seen GPS spoofing and so forth. I think this relates to what David said about relying on properties of the physical world: we need to make sure that the information the system gets about the physical world is trustworthy. And look at things like orthogonality of sensing: sensing in different domains and fusing that information together to create a more reliable or more robust capability.
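One hedged sketch of that orthogonality idea: compare the speed implied by GPS against wheel odometry, and only distrust the sensors on sustained disagreement. The 2 m/s tolerance and three-sample window below are made-up illustration values, not tuned numbers from any real vehicle.

```python
# Sketch: cross-check two "orthogonal" speed estimates. GPS spoofing or a
# faulty sensor tends to show up as a sustained mismatch between them.
from collections import deque

class SensorCrossCheck:
    def __init__(self, tolerance_mps: float = 2.0, window: int = 3):
        self.tolerance = tolerance_mps
        self.disagreements = deque(maxlen=window)

    def update(self, gps_speed_mps: float, wheel_speed_mps: float) -> bool:
        """Return True while the two estimates remain mutually plausible."""
        self.disagreements.append(
            abs(gps_speed_mps - wheel_speed_mps) > self.tolerance
        )
        # Alarm only on sustained disagreement, to ride out single glitches.
        sustained = (len(self.disagreements) == self.disagreements.maxlen
                     and all(self.disagreements))
        return not sustained

check = SensorCrossCheck()
print(check.update(13.0, 13.4))   # True: estimates agree
print(check.update(30.0, 13.5))   # True: a single glitch is tolerated
print(check.update(30.0, 13.5))   # True: still within the window
print(check.update(30.0, 13.5))   # False: sustained mismatch, distrust GPS
```

OK. Over to you, gentlemen.
This is Jim McDonald again from Kestrel Institute.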
I was struck by the [INAUDIBLE] comment about TPMs. There might be an excellent analog in the hardware system if, when you turned the key in your car, the first thing that happened was a TPM-like process, so that every component of the car was verified before the car would actually fire up and start moving. That's the kind of thing that could be enforced by regulation. And then if Joe goes to the guy down the street and swaps in a part, the car won't start unless the part satisfies the constraints that are dictated by the TPM mechanism, the trusted platform. So each component, as it fired up, would have to provide some assurance that it was doing the right thing, or that it had the right [INAUDIBLE], or any other kind of security consideration that you would want to build into it.
Sounds like digital rights management. [LAUGHING]
Yeah, maybe. But the point is, if you had a counterfeit chip, or a defective chip, or if it was miswired or something, the system would catch that at the earliest possible moment. [INAUDIBLE].
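A minimal sketch of that key-on attestation idea, assuming each ECU holds a secret provisioned when the part was paired to the car and proves it via challenge-response; a counterfeit or unpaired replacement fails, and the car refuses to start. The ECU names, key store, and pairing flow are hypothetical.

```python
# Sketch of "verify every component at key-on": the gateway broadcasts a fresh
# challenge; each ECU answers with an HMAC under its provisioned secret.
import hashlib
import hmac
import os

provisioned_keys = {            # filled in when parts are provisioned/paired
    "engine_ecu": os.urandom(32),
    "brake_ecu":  os.urandom(32),
}

def ecu_response(key: bytes, challenge: bytes) -> bytes:
    """What a legitimate ECU computes, ideally inside a secure element."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def attest_all(responses: dict, challenge: bytes) -> bool:
    """Allow the car to start only if every known component answers correctly."""
    return all(
        hmac.compare_digest(responses.get(name, b""),
                            ecu_response(key, challenge))
        for name, key in provisioned_keys.items()
    )

challenge = os.urandom(16)      # fresh every key-on, so replays don't work
good = {n: ecu_response(k, challenge) for n, k in provisioned_keys.items()}
print(attest_all(good, challenge))           # True: all parts check out, start
good["brake_ecu"] = b"counterfeit reply"     # a swapped-in, unkeyed part
print(attest_all(good, challenge))           # False: refuse to start
```

Note this still leaves room for the right to repair discussed earlier: a legitimate replacement part would get its key through the pairing step, whatever form regulation gives that step, rather than being locked to one vendor.

Just a comment on that. When you turn on your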
car, and when you turn off your car, there are a few milliseconds during which the car actually tests itself. I can't tell you the numbers, but it runs a test on every chip in every system. And when you turn off the car, the test actually runs longer. But it does that test. You bring up an interesting point, though: you don't want to do that just for quality purposes. You can also do it for reliability and security purposes, which is actually very interesting.
Great. Let's go over to this side.
Peter Chen from Carnegie Mellon University
Software Engineering Institute and CERT. Basically, I'd like to have each panelist predict the future, particularly what kind of disaster is most likely to happen for cyber-physical systems in the next five years. The reason for my question is that we always want to get funding, and the funding agency will usually say, your scenario will never happen. To give some examples, like biological attacks, people say it'll never happen, even [INAUDIBLE] serious. Now [INAUDIBLE] if you may, predict which one you think would likely happen in the next five years, anywhere in the world, that would have a serious impact on people's lives or whatever. Anybody want to try that? Five years later, we come back to see whether your prediction's correct or not.
Well, I'll take a shot at it, but make a joke
out of it too. You watch Star Wars, right? And the clones? Very often I raise this in the hardware community: what really keeps me up at night is actually clones, especially when it comes to chips and systems, where we make the same device over and over. In our center right now, we're actually testing some of these cloned chips. They have the same look, same [? pins, ?] same package, same specification, but they could do something more than what they're supposed to do. And then they build these; they make millions of them. Think of devices that go into very critical applications such as the [? power grid, ?] and they're able to get access to it, not necessarily through the network and software. [? Rather, ?] there's hardware in there that is [INAUDIBLE] helping them out. And that's really a major concern: you go and buy a chip from company X, but the reality is that it's a clone, and there's malicious circuitry on it as well. I can give you a lot of examples of dealing with these clones in our center. One interesting example, actually, was a chip that– this one is not cloned, but when we did the analysis, we actually saw another die on top of the actual die. In other words, the chip was cut open, they put an additional die on top of it, and then they repackaged it again. Those are the chips that could really hurt our systems, because now you have a microprocessor on top of another microprocessor doing malicious things. So that keeps me up at night when I think
about it.
Any other doomsday prophecies?
Obviously, I can't suggest where it's going to happen. But if you just look at the history of electric power generation and failures in places in the system, the joints between different parts of the grid and how they're managed, you know things will happen, and you could envision that happening. What I hope is that we're able to spend some time going through no-fault after-action analysis, so that we keep learning from not just failures but also near misses. We don't really do a good job of studying things that almost failed. And yet there are a lot of professionals throughout industries and sectors who have a pretty good feel for how close things sometimes came.
One of the things– I have some military background,
and one of the things that goes unrecognized in a lot of cases is that a lot of the military improvement in the 1980s, '90s, and so forth was not in the hardware; it was in the organizational software, if you will. A big part of that was going through, from low-level units all the way up, relatively low-threat after-action analysis, so people kept learning from the mistakes they made, or their colleagues made, or something in the system, or whatever it happened to be. So I think we'll see some breakdowns, just because the numbers would suggest that. But I hope we can learn from them in a good, smart way.
OK. We're coming to the end, but let's take
a last question over there.
Thanks. I am [INAUDIBLE] with the University of Texas at Dallas, and I wanted to come back a little bit to the question of incentives. Because at the end of the day, I think the most pressing problem we need to solve in the short term is that the current state of cyber-physical systems reflects this lack of incentives for investing in security. That's the reason why, if you look at the state of power grid security, or the state of [? industrial control ?] system networks or [INAUDIBLE] networks, you find vulnerabilities everywhere. So I wanted to pose two questions, an optimistic one and a pessimistic one. The optimistic one: as an academic, I obviously would like to encourage more people to give funding to this type of research, to understand [INAUDIBLE] economics, or interdependent games, or correlated risks, to try to understand what the current state is, what failures are happening, and how we can ideally find ways to solve them. The pessimistic view, at the end of the day, is probably, as I think [? Peter ?] [INAUDIBLE] mentioned, that maybe we can't. Maybe we can't do this security analysis under skewed incentives. So I was thinking there are maybe some analogies, for example, to homeland security. At the end of the day, homeland security is a public good, and most of the time whoever ends up paying for a public good is the government, because it's a market failure: the other parties who would invest don't receive the benefits of their investment. So I was wondering if there are some potential analogies. Like, who pays for protection against terrorism? Right? It's the government; it's not the industry developing new security mechanisms.
So one thing I would add is that we do, at
NSF, we do have a Frontiers project called FORCES. It's a first activity that's starting to look at the role of incentives in the development of resilient systems. It's looking at technologies involving the electric power grid and also the transportation industry. So it's a first start at a science of security with incentives, but it's only a first step.
I also want to mention a program that's part of
DHS S&T. Our program manager Joe Kielman is running a program on cyber economic incentives. I don't think it's specifically focused on cyber-physical systems, but it's investigating various incentives for individuals and organizations to apply cyber security measures.
I just want to make an observation about the
legal incentives. The risk to a corporation is limited by the value of that corporation.
So if I’m putting a product out in the market– say I’m worth a billion dollars. If I introduce
a risk to the public that’s $2 billion or $20 billion, that makes zero difference to
me. Because the most I can lose is $1 billion, I have no incentive to choose the $2 billion risk over the $20 billion one. They're exactly equivalent.
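The arithmetic behind this point can be made explicit. The sketch below uses the firm value and risk sizes from this example, plus the profit figures the speaker gives a moment later; the 1% incident probability is an assumed number purely for illustration.

```python
# Limited liability caps a firm's loss at its own value, so a $2B and a $20B
# public harm cost the shareholders exactly the same. P_INCIDENT is assumed.
FIRM_VALUE = 1_000_000_000
P_INCIDENT = 0.01

def expected_value(profit: float, public_harm: float) -> float:
    loss_to_firm = min(public_harm, FIRM_VALUE)   # the limited-liability cap
    return profit - P_INCIDENT * loss_to_firm

safe  = expected_value(profit=100e6, public_harm=2e9)    # the $2B-risk strategy
risky = expected_value(profit=120e6, public_harm=20e9)   # the $20B-risk strategy
print(f"safe:  ${safe:,.0f}")    # $100M - 1% of $1B = $90,000,000
print(f"risky: ${risky:,.0f}")   # $120M - 1% of $1B = $110,000,000

# The capped downside is identical, so the riskier strategy always wins.
# A mandatory arm's-length insurance premium of roughly P_INCIDENT * harm
# ($20M vs. $200M) would put the full risk back on the bottom line.
```

And even in a strong legal sense, because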
if I can get the better return to my shareholders by taking on the $20 billion than the $2 billion
risk, I'm not a lawyer, but I think I'm legally required to do that. That's my fiduciary responsibility: to go for the profit for the shareholders.
So you need to somehow internalize that risk
in a way that they can’t just sort of fall back on the fact that corporations have limited
liability. So it seems like one solution would be to require insurance for security issues, or for these very low probability events to be handled through an arm's-length transaction.
The company would not be allowed to internalize the risk; they would have to get an insurance company to acquire that risk for them. And then the challenge becomes: let the market– you know, let them convince some insurance company
that they’re doing a good enough job that the insurance company, which has got $100
million worth of assets, should take on this $20 billion risk, where it really makes a
difference to the insurance company. But I think at the same time, there’s also
a fiduciary responsibility on the company to not put themselves at the risk of losing
the entire business.
Yeah. But my point is that it's already taking that into account. In other words, there might be two competing strategies, where in one of them I make $100 million profit with a $2 billion risk, and in the other I can make $120 million profit with a $20 billion risk. I'm going to go for the extra profit, because the extra risk–
I would challenge that. The board of directors of that company would say that scenario would place them in a position of jeopardy, so that they would not be forced to make that kind of decision.
No, but I'm saying if those are the only two
alternatives they have.
I do think one of the reasons to be somewhat optimistic, though, is that unlike even five years ago, and certainly before then, I'm seeing major companies looking at the insurance equation and calculation as part of their planning for systems that they're now recognizing are more substantial in terms of risk than they calculated before. So the market is perhaps beginning to self-correct in the ways that we would like to see it go.
Yeah. My argument is simply that if you force
the companies to make that visible, in a way– they can hide it if it’s internal to the company.
But if they’re forced to go through an external transaction with another company, then that
risk becomes visible, and it has to appear on the bottom line in that quarter. They have
to pay for it.
As if we hadn't already said this was complicated, now we have the lawyers involved as well. [LAUGHING]
Let's end on that note. But before we give
a warm round of applause, I want to mention– remember, there's a reception in the Crystal Room. It's a joint reception between the symposium, the NITRD program, and SRC, and the workshop that they will be hosting tomorrow at the same place. So please, when you leave, make sure you stop at the reception. I want to thank the panel, and thank you for staying and having this discussion. Hopefully we can continue at the reception. And with that, join me in welcoming and thanking everyone.
Apparently, one more announcement.
So, some closing comments. I guess at this
point everybody's papered out and has PowerPoint poisoning, so nobody wants any more slides. I was asked to review and revisit all the people that have helped with the conference. We could go through all these slides, but the point is there's really a huge amount of effort that people have been putting in. It's really an incredible amount of work by the technical committee and the program committee. We estimated each program committee member performed about 50 to 70 hours of work reviewing papers and attending the meeting. It's really a huge amount of work. So hopefully you have all enjoyed this, including the workshops, and I'll just briefly go through this. I wanted to say thanks for attending, and have a safe trip home. Thanks, everyone.
