Dave Monnier


November 2013

Tech and Law Center interviews Dave Monnier, Fellow and Director of Threat Intelligence and Outreach at Team Cymru. With over fifteen years of experience in a wide range of technologies, Dave brings a wealth of knowledge and understanding to every situation. Dave began his career performing UNIX and Linux administration in academic and high-performance computing environments, where he helped build some of the most powerful computational systems of their day. From systems administration Dave moved into Internet security, serving as a Lead Security Engineer for a Big Ten university and later helping to launch the Research and Education Networking ISAC, part of the formal U.S. ISAC community. Dave joined Team Cymru in 2007, where he has served as their Senior Engineer and later as a Security Evangelist. In 2010, Dave was granted the title of Team Cymru Fellow, the highest honor Team Cymru bestows.

Twitter @dmonnier

What is the single most challenging thing we deal with in information
security today?

Detection. Every day the opportunity to master detection and remediation
gets harder. As network capacity increases, network utilization
increases hand in hand. This means a massive and continuous rate of
growth in signal, and that makes it harder to detect when something bad
has happened. The later you decide to try, the harder the problem will
be. Aside from the CPU power required to do line-rate detection on
multi-gigabit networks, if you haven’t grappled with determining a state
of “normal,” your chances of detecting “abnormal” are ever decreasing.
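The point about baselining “normal” can be illustrated with a minimal sketch (made-up traffic numbers and a hypothetical function name, not any particular product): measure normal first, then flag what deviates sharply from it.

```python
# Minimal sketch: establish a baseline of "normal" traffic volume,
# then flag samples that deviate sharply from it. The data and the
# function name are illustrative, not from any real tool.
from statistics import mean, stdev

def find_anomalies(samples, window=20, threshold=3.0):
    """Flag sample indexes whose value deviates from the trailing
    window's mean by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(samples)):
        base = samples[i - window:i]          # the recent "normal"
        mu, sigma = mean(base), stdev(base)
        if sigma and abs(samples[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# Steady traffic with one sudden spike at index 25.
traffic = [100 + (i % 5) for i in range(25)] + [900] + [100] * 10
print(find_anomalies(traffic))  # → [25]
```

The spike is only detectable because “normal” was measured first; without that baseline there is nothing to compare against, which is the heart of the detection problem described above.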

Will the shift into more embedded systems present new challenges for
security, or will old problems just continue as they have in the past?

Embedded technology has simply invited variation and new attacks. Most
embedded systems allow updates to the FPGA, and some ROM-based systems do
as well. This means it’s still possible to compromise these devices in
ways similar to non-embedded systems. And that’s aside from the issue of
default accounts and other “out of the box” or design-related issues
that tend to plague embedded systems.

Looking forward, what security trends, offensive or defensive, scare you
the most?

The trend we’re seeing of motivation being treated as a major factor worries
me. People discuss “APT” as if it warrants more concern than another
threat actor. People discuss PRISM as if it warrants more concern than
another threat actor. I won’t disagree that there’s a potential
resource advantage to a well-sponsored threat, but it’s critical that we
not lose sight of the fundamental aspects of our trade. Good technique
should survive contact with any adversary, regardless of motivation or
resource availability.

On the flip side, what trends, if any, in information security give you
the most hope?

People are starting to join the chorus that detection, or situational
awareness, matters. There are a lot of great data management tools out
now, and they get better every day. Many of these are open source and
available at no cost. I’ve preached, and sometimes pushed, this idea
for a very long time. My approach has always been not to go hunting for
a specific explanation after an event. Log absolutely everything, even kernel
and system calls if you can, and let what happened present itself.
Your goal should be to replay time. This ensures you don’t introduce
bias as well. If you’re looking for technique A or B and don’t think to
look for technique C, you’re going to miss it. I’m glad it’s catching on.
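The “replay time” idea can be sketched in a few lines (the event names and helper functions here are hypothetical, chosen only to illustrate the approach): record every event verbatim at capture time, then run detectors over the stored history afterward, including detectors written long after the events happened.

```python
# Sketch of "log everything, replay later". Events and names are
# illustrative; a real deployment would use a durable log store.
import json, time

LOG = []  # stand-in for an append-only log store

def record(event):
    """Log everything as-is; no filtering, so no bias at capture time."""
    LOG.append(json.dumps({"ts": time.time(), **event}))

def replay(detector):
    """Re-run the full history through any detector function."""
    return [e for raw in LOG for e in [json.loads(raw)] if detector(e)]

record({"type": "login", "user": "alice"})
record({"type": "exec", "cmd": "nc -l 4444"})

# A detector written after the fact still sees the complete history --
# nothing was discarded because no one thought to look for it earlier.
hits = replay(lambda e: e["type"] == "exec" and "nc" in e["cmd"])
print(len(hits))  # → 1
```

Because capture is unconditional, a technique nobody anticipated (“technique C”) is still in the record when someone finally thinks to look for it.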

For people who work in security, is the existence of PRISM surprising?
Which aspects of it are routine or expected or even necessary, and which
are genuinely dangerous?

It absolutely should not be surprising. Nations do these things. They
always have and always will. What is more surprising to me is the lack
of fundamental security and service understanding that’s fueling the
outrage. For example, data protection is key to our trade. Why are
people in our industry shocked that someone was able to read unencrypted
traffic? Also surprising about that specific news is the shock that the
free services listed were being read by a third party. The terms of
service for most of them state that this will happen. All kinds of
investment, marketing, and sales efforts are derived from the same
function on the same information, and no one is upset by that.

There seems to be a great deal of fear and hyperbole about potentially
catastrophic cyberattacks against critical infrastructure such as the
power grid. How do we clear away the hype and determine what threats
realistically exist and what should the industry consider doing about them?

We have protective policy in place to protect all kinds of data and
devices. In the US there are things like HIPAA, SOX, etc. Perhaps we
need something similar. If you make a remote switching device for
high-capacity electrical systems, perhaps you should be required to help
secure it somehow. Perhaps the installation engineers should be
required to ensure that it’s properly secured. It’s hard to say what
will work best. Government regulation isn’t always a good thing,
but we have to remember that these technologies are being adopted at a
massive rate. They save a lot of time and money at the operation stage,
and as a result the pressure to adopt them may invite too much error
without some kind of oversight. As to fear and hyperbole, we
absolutely should be fearful. Infrastructure is massively relied upon.
We’ve seen natural disasters show that society falls
apart very quickly when basic needs are not available. A technical
disaster won’t be any different.

What limits could the United States v. Jones ruling place on how drones might be used for surveillance?

Technically none, since the decision was based on the officers’ trespassing on the car to attach the GPS device and drones need not be attached. But in Jones, five justices worried aloud that at least “electronic” surveillance was getting too easy, and might eventually violate the citizen’s reasonable expectation of privacy. I think it is too soon to apply this concern to drones, but could imagine one day, when drones can stay up in the air, autonomous and silent for a long time, that courts would invoke Jones in relation to drone surveillance.

What kinds of incentives can organizations put into place to 1) decrease
the effectiveness of social engineering, and 2) persuade individuals to
take an appropriate level of concern with respect to organizational
security? Are you aware of any particularly creative solutions to these
problems?

I’ve always been a fan of the “everyone is on the security team”
approach. Did you get a strange email? Let someone know! Did you see
someone in the hall without a badge, or whom you’ve not seen before? Stop
and ask, or at least report it! Just as with networks, the more
sensors you have the better. If you can get every staff member to
become a “sensor,” your security will be that much stronger. Obviously,
just as with technical sensors, there will be false positives. That’s
okay; that’s always going to happen. As for incentive? Maybe a treat of
some sort is worthwhile, or perhaps a genuine reward. Catch a spear-phishing
attempt that the IDS missed? You just won a free meal or,
better, a paid day off!