Discussion:
[Dailydave] Improvements
Dave Aitel
2017-02-15 15:59:39 UTC
Permalink
http://www.securityweek.com/crowdstrike-sues-nss-labs-prevent-publication-test-results

One thing I've had problems with is learning that people can "get gud".
It's one of the reasons I always cringe at the inevitable policy trope of
"Cyber war is easier for attackers than defenders." Yesterday I was talking
to a professional CISO - one of the ones I've known for years out of the
NYC scene. He's like, "Yes, individually none of the stuff anyone sells you
works at all. But once you connect, say, Bromium, to the BlueCoat API with
a bit of analysis glue you can have five-minute response metrics, where
once you find any anomaly, you can do memory searches for that running
anywhere in your org, then automatically stuff those machines on their own
VLANs.

"When I join a new org, whatever random vendors they've bought into, I can
make that really work. It doesn't really matter what they have, as long as
they have something."
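
In rough glue form that loop is something like the sketch below; the three
vendor calls are stand-ins, not the real Bromium or Blue Coat APIs.

# Hypothetical glue: anomaly -> org-wide memory sweep -> quarantine VLAN.
# All three vendor calls are stubs, not real product APIs.
import time

def fetch_anomalies():
    # pull new alerts from whatever endpoint/proxy product is in place (stub)
    return [{"ioc": "badproc.exe", "host": "host-17"}]

def memory_search(ioc):
    # ask every endpoint agent to scan memory for the IOC (stub)
    return ["host-17", "host-92"]

def quarantine(host):
    # move the host onto an isolated VLAN via the switch/NAC API (stub)
    print("quarantining", host)

while True:
    for alert in fetch_anomalies():
        for host in memory_search(alert["ioc"]):
            quarantine(host)
    time.sleep(300)  # the five-minute response metric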

Automated response has always been the real market. I can see people
actually DOING it now, even though no product vendor wants to talk about
it. And it's one of the few things that actually scares me as an attacker.

-dave
Jordan Wiens
2017-02-15 16:46:34 UTC
Permalink
When I last played defender over a decade ago at a large university, we
built what sounds like exactly the same sort of system. It was an ugly mess
of perl and it worked fantastically. The rules were crude and we didn't have
nearly enough visibility into the network (partially because the host
inspection technologies didn't exist and partially because as a university
security engineer you often don't have permission to touch most of the
endpoints on your network), but we were wiring up the more reliable IDS
signatures, DNS queries, and flow data indicators to:

- our campus captive portal to de-auth
- automatic emails to users and network administrators with specific
remediation information
- blackhole routes for managed machines until the local admin
self-certified the host was cleaned
- or in some cases, disable the user's login for repeat offenders of
non-university machines until they visited the helpdesk to get cleaned

At the time the signatures that were effective were mostly super dumb.
Stuff like visiting known IRC C&C servers and channels, but it worked. It
required manual effort to constantly tune actions and inputs, but it was a
heck of a lot easier than trying to fight that flood by hand.
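
In today's terms (Python rather than the original Perl, with the alert fields
and the response hooks invented for illustration), the dispatcher was roughly:

# Toy dispatcher in the spirit of the Perl glue described above. The alert
# format and the four response hooks are made up for illustration.
REPEAT_THRESHOLD = 3
offense_count = {}

def deauth_captive_portal(ip): print("de-auth", ip)
def email_user_and_admin(user, sig): print("mail", user, "- remediation for", sig)
def add_blackhole_route(ip): print("blackhole", ip)
def disable_login(user): print("disable login for", user)

def handle_alert(alert):
    ip, user, managed = alert["ip"], alert["user"], alert["managed"]
    offense_count[user] = offense_count.get(user, 0) + 1

    deauth_captive_portal(ip)                       # kick the host off the network
    email_user_and_admin(user, alert["signature"])  # include remediation steps

    if managed:
        add_blackhole_route(ip)                     # until the local admin re-certifies
    elif offense_count[user] >= REPEAT_THRESHOLD:
        disable_login(user)                         # repeat offenders visit the helpdesk

handle_alert({"ip": "10.1.2.3", "user": "jdoe", "managed": False,
              "signature": "known IRC C&C channel join"})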

It sounds like the specific actions and data ingests might be different,
but the idea of rolling your own automated system hasn't changed a bit in
ten years. Surprised to not hear more about the approach, but agree
completely that no one vendor does it, and yet every vendor can easily be a
part of it.
Wim Remes
2017-02-15 18:59:22 UTC
Permalink
Isn't this what Phantom and other "security orchestration" companies are
pushing right now?

The biggest roadblock is that every traditional security vendor is trying
to be the "data hub", hoarding information. Badly constructed and horribly
documented APIs, stupid myopic dashboards, rate limiting on APIs, etc. etc.
are the trademarks of those data hoarders. I wonder how long it takes
before they realize they're contributing more by becoming data providers.
Hell, every RFP for security products should score their ability to provide
data.

Cheers,
Wim
J. Oquendo
2017-02-16 18:49:24 UTC
Permalink
Post by Wim Remes
Isn't this what Phantom and other "security orchestration" companies are
pushing right now?
The biggest roadblock is that every traditional security vendor is trying
to be the "data hub", hoarding information. Badly constructed and horribly
documented APIs, stupid myopic dashboards, rate limiting on APIs, etc. etc.
are the trademarks of those data hoarders. I wonder how long it takes
before they realize they're contributing more by becoming data providers.
Hell, every RFP for security products should score their ability to provide
data.
Cheers,
Wim
While bored (which is often) I rigged together quite a few
applications into a suite of my own to go out, aggregate,
then correlate, then go back out and see what exactly is a
threat and what is not. E.g., how many of us have tried
to ping a site, or ssh somewhere, and fat-fingered (sorry
all, couldn't find a politically correct term) an address?
E.g., ssh 19.0.0.1 when it should have been 10.0.0.1. Now
imagine the amounts of data caught in the "cross fire."

What I sought to do was take data and find out what exactly
is causing, say, 8.8.8.8 (as an example) to be re-aggregated into
threat lists. There are too many "threat" lists with little info
to go by. What I found over time was even stranger... Not
naming names, but 90+% of "threat" vendors cross-correlate
the same nonsense/pollution into a smorgasbord of "OMG
your mom is a threat" alerting.

Hoarding data is meaningless if terabytes of the data being
captured are insignificant. I have been playing with IBM's
Watson, so sooner or later, when I am even more bored than I
am now, I will dump terabytes and say: "Go make sense of this."
To be honest, the Watson Analytics side could not do this
as well as I could by connecting my own dots with i2 Analyst's Notebook,
so who knows what AI Watson will push out. (Maybe Grugq is
responsible for 97% of traffic to my Alexa Echo). Data is
becoming too polluted over time (IMHO).
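
A crude version of that cross-feed sanity check, with made-up feed contents
and allowlist, looks something like:

# Cross-correlate several "threat" feeds and flag entries that look like
# pollution: indicators only one feed reports, or well-known benign
# infrastructure that keeps getting re-aggregated.
from collections import Counter

feeds = {
    "feed_a": {"8.8.8.8", "203.0.113.7", "198.51.100.9"},
    "feed_b": {"8.8.8.8", "203.0.113.7"},
    "feed_c": {"8.8.8.8", "192.0.2.55"},
}
KNOWN_BENIGN = {"8.8.8.8"}  # e.g. Google DNS and other obvious anchors

seen = Counter(ip for ips in feeds.values() for ip in ips)

for ip, count in seen.items():
    if ip in KNOWN_BENIGN:
        print(ip, "- in", count, "feeds but allowlisted; likely pollution")
    elif count == 1:
        print(ip, "- single-source; needs more context before acting on it")
    else:
        print(ip, "- corroborated by", count, "feeds")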
--
=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
J. Oquendo
SGFA, SGFE, C|EH, CNDA, CHFI, OSCP, CPT, RWSP, GREM

"Where ignorance is our master, there is no possibility of
real peace" - Dalai Lama

0B23 595C F07C 6092 8AEB 074B FC83 7AF5 9D8A 4463
https://pgp.mit.edu/pks/lookup?op=get&search=0xFC837AF59D8A4463
Oliver Friedrichs
2017-02-23 17:54:10 UTC
Permalink
Since I’m on this list and rarely get to contribute it seems like a good time to jump in (although Phantom coincidentally almost started by focusing on offense – google “Phantom Access” if you are curious where the name came from): https://en.wikipedia.org/wiki/Phantom_Access. I’m sure Dave is happy about that since who needs more offense vendors. :-)

Obviously I am biased, but IMO automation and orchestration is one of the few new technologies to arrive in our industry in quite some time. Is it new? Obviously not... in fact, back at McAfee in 1999 we tried to build something like this; funny enough, it was covered in an article by Stuart McClure on adaptive security back then: https://books.google.com/books?id=2lEEAAAAMBAJ&pg=PA78.

That being said, the industry “forgot” about this stuff for almost 15 years. It took NSA and DHS to resurrect this through a project they funded at JHU APL called IACD: https://secwww.jhuapl.edu/iacdcommunityday/PreviousEventMaterial. At the same time, everyone was cobbling together scripts to do most of this themselves without much formality.

Anyways, enough of a history lesson. The fact is that everything has gotten worse to the point where automating standard operating procedures in order to augment human analysts is now a necessity, no longer optional. Whether you do it by building or buying, it's clear that tying together the hundreds of discrete security products into a cohesive system is an obvious and natural evolution of the industry.

We run into companies all of the time who are deciding whether to build or buy. Usually web-scale companies decide to build, because they have the engineers to do so and can put a team together in days, but most "normal" commercial enterprises don't have this luxury. That being said, a COTS solution lets you get straight to building your Playbooks instead of becoming a plumber. Nothing against plumbers, but who wants to write API integrations for hundreds of security products, maintain them, keep them up to date, etc.? In addition there is all of the typical enterprise stuff: reporting, AD integration, secure credential storage, penetration testing the solution, scalability, auditing, out-of-the-box connectors, RBAC, TFA integration, revision control, an IDE, human prompting, and so on.
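
The split is roughly the following; the connector classes are stand-ins for
the per-vendor plumbing, not anyone's actual integration API.

# The "plumbing" is a pile of per-vendor connectors; the playbook is the logic
# you actually care about. Both connectors below are stubs, not real APIs.
class FirewallConnector:
    def block_domain(self, domain):
        print("firewall: block", domain)

class EDRConnector:
    def hunt_hash(self, sha256):
        print("edr: hunt for", sha256)
        return ["host-7"]  # hosts where the hash was seen

def containment_playbook(alert, edr, firewall):
    # the part analysts should spend their time writing and tuning
    hits = edr.hunt_hash(alert["sha256"])
    if hits:
        firewall.block_domain(alert["c2_domain"])
    return hits

containment_playbook({"sha256": "e3b0c442...", "c2_domain": "c2.example.net"},
                     EDRConnector(), FirewallConnector())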

After connecting with over 120 other security products now, I've found that most vendors are open and easy to deal with, and dozens are now even writing Apps to contribute to the platform. Frankly I've only run into a few, who are also trying to build products in this category, who are protective of their APIs (FireEye and Proofpoint). Their belief is that by being closed with their APIs they can somehow sell more product, since open APIs make their products too replaceable. It's interesting to see that behavior, and the belief that protecting "APIs" is some kind of competitive advantage. In those cases it's usually users who write the Apps to connect to their APIs anyway, since the user is the one who needs it.

Oliver
Chris Kuethe
2017-02-16 19:46:12 UTC
Permalink
Post by Wim Remes
The biggest roadblock is that every traditional security vendor is trying
to be the "data hub", hoarding information. Badly constructed and horribly
documented APIs, stupid myopic dashboards, rate limiting on APIs, etc. etc.
are the trademarks of those data hoarders. I wonder how long it takes
before they realize they're contributing more by becoming data providers.
Hell, every RFP for security products should score their ability to provide
data.
They'll realize it when you specifically tell them that data hoarding is
costing them the sale: "you don't provide us with an API to build our own
custom integrations, a real-time event feed, or machine-readable bulk
history download. Your product may look shiny but until we can hook it up
to our own existing systems we won't give you any money. Having it on the
roadmap doesn't count - come back when the PoC can talk to our splunk."
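
For reference, getting a vendor's alerts into Splunk is one HTTP POST per
event against the HTTP Event Collector; the host and token below are
placeholders for your own deployment.

# Forward a vendor alert to Splunk's HTTP Event Collector (HEC).
# SPLUNK_HEC and HEC_TOKEN are placeholders.
import json
import requests

SPLUNK_HEC = "https://splunk.example.internal:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def send_to_splunk(alert):
    resp = requests.post(
        SPLUNK_HEC,
        headers={"Authorization": "Splunk " + HEC_TOKEN},
        data=json.dumps({"sourcetype": "vendor:alert", "event": alert}),
        verify=False,  # lab only; use a real CA bundle in production
        timeout=10,
    )
    resp.raise_for_status()

send_to_splunk({"signature": "beacon to known C2", "host": "host-42"})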
--
GDB has a 'break' feature; why doesn't it have 'fix' too?
Tracy Reed
2017-02-16 07:47:15 UTC
Permalink
Post by Jordan Wiens
It sounds like the specific actions and data ingests might be different,
but the idea of rolling your own automated system hasn't changed a bit in
ten years. Surprised to not hear more about the approach, but agree
completely that no one vendor does it, and yet every vendor can easily be a
part of it.
In the industry that I see there is huge pressure from the c-suite to
buy a pre-packaged product (aka silver bullet) and strong disincentive
to spend time rolling your own custom franken-solution which the
management will have no faith in because one of their own employees
built it instead of a big name which can boast about magic quadrants and
such.
--
Tracy Reed
Andrew Becherer
2017-02-16 19:00:39 UTC
Permalink
Post by Tracy Reed
In the industry that I see there is huge pressure from the c-suite to
buy a pre-packaged product (aka silver bullet) and strong disincentive
to spend time rolling your own custom franken-solution which the
management will have no faith in because one of their own employees
built it instead of a big name which can boast about magic quadrants and
such.
To Wim's point I have people who can, and do, design and implement the
described automation from scratch. I hate the pain and inefficiency of
my current and potential future vendors' integration patterns. In
Wim's words, "hoarding information. Badly constructed and horribly
documented APIs, stupid myopic dashboards, rate limiting on APIs, etc.
etc." I'm not expecting a silver bullet, and I have incredible faith
in my employees, but I'd like to share the burden of integration
implementation across the entire customer base of a Phantom.us or
Komand or other "security orchestration" company. My people can then
focus on writing and debugging the automation logic. I have little
faith that, in any reasonable timeframe, vendors will emphasize data
interchange over features with broader market appeal.

--
Andrew Becherer
Andre Gironda
2017-02-16 19:23:35 UTC
Permalink
Post by Tracy Reed
In the industry that I see there is huge pressure from the c-suite to
buy a pre-packaged product (aka silver bullet) and strong disincentive
to spend time rolling your own custom franken-solution which the
management will have no faith in because one of their own employees
built it instead of a big name which can boast about magic quadrants and
such.
Want to echo what Jordan, Wim, and Tracy are saying loudly.

We need a platform for Security Operations Automation, but only if
it's a subcomponent of a larger Security Operations Management
Platform -- https://blog.rooksecurity.com/security-operations-management-7c444cf2c33f

The focus, of course, is optimization of process, people, and tools
(in that order).

I think the first things we should automate away are the
decision-making low-value input chains (i.e., management, people
leadership) in order to solve for stronger DFIR professional-led
high-value output chains (i.e., people with hands-on skills,
problem-solving capabilities, critical-thinking skills, creativity, et
al.).

dre
Jimmy D
2017-02-16 21:55:09 UTC
Permalink
That pressure isn’t just from the C-suite. Many of us have been burned (at least indirectly) by a tool author who either abandoned locally built tools or who tried to use their knowledge of one as a form of blackmail in salary negotiations or promotions. Add to that the fact that I pay people to perform specific functions usually aligned with their core skills. I’ve generally had tremendous respect for my team members (else they’d be elsewhere) and no real love of vendors or “big names”, but I know that isn’t the case for everyone. Obviously, this is completely different for a team in an actual software company.

At the C level, I’ve also heard some pretty appalling stories of vendors (FireEye came up multiple times) threatening to alert regulators and media if a company had an incident and didn’t buy their product.

My point is that these issues are often less straightforward than they might appear and that you shouldn’t infer a lack of faith/love/respect when your execs don’t let you write enterprise tools.

P.S.: We used Hexadite at a former employer to eliminate the need for about 1.5 FTEs just by automating our process for responding to suspected phishing emails. Improved efficiency, 1/3 the cost, built-in metrics, 7x24x365 coverage, no real estate costs, and no HR complaints. There was much to be admired about that specific scenario for us. YMMV.

Jim
Dominique Brezinski
2017-02-17 04:39:53 UTC
Permalink
All the notable, large tech companies and cloud providers roll their own everything. Most of the hyperscale companies buy very little third-party security product. The things they build are everything from a little python glue to massive analytics systems backed by software development teams running on tens of thousands of cores, tens of terabytes of ram, and tens of petabytes of storage.

Automating as much detection through response as possible is the name of the game for both practical and theoretical reasons. Walking the RSA expo floor, I can attest that there are fewer than half a dozen companies that have any understanding of what it actually looks like and what it takes to be effective at scale. All the ones that do get it do so because the founders had some exposure to these environments or to people that worked in them. If your durable data store is Elasticsearch or MongoDB, you are doing it wrong. Sorry LogRhythm, your choice of datastore and product packaging does not work at cloudscale. You won't find it in Google, Amazon, Facebook, or even Yahoo. Look at what AirBNB just open sourced. That is an example of what a small, but cloud and scale aware, team did to solve some of their monitoring and response problems.

If you don't get that the most secure places to build your systems are AWS's or Google's clouds, then you don't have any idea about what problems need to be solved to effectively monitor and respond to threats. I will leave that as a thought exercise, though I am happy to elaborate if anyone honestly cares to hear the answers.

Dom
Dominique Brezinski
2017-02-23 22:50:08 UTC
Permalink
inline...
Hi List, Dom,
Post by Dominique Brezinski
Automating as much detection through response as possible is the name of
the game for both practical and theoretical reasons. Walking the RSA expo
floor, I can attest that there are fewer than half a dozen companies
that have any understanding of what it actually looks like and what it
takes to be effective at scale. All the ones that do get it do so because
the founders had some exposure to these environments or to people that
worked in them. If your durable data store is Elasticsearch or MongoDB,
you are doing it wrong. Sorry LogRhythm, your choice of datastore and
product packaging does not work at cloudscale. You won't find it in
Google, Amazon, Facebook, or even Yahoo. Look at what AirBNB just open
sourced. That is an example of what a small, but cloud and scale
aware, team did to solve some of their monitoring and response
problems.
What did AirBNB just open source?
https://github.com/airbnb/streamalert

There is a lot more that needs to be done to cover the broad range of
capabilities needed for detection and response, but StreamAlert achieves
something very important even for huge companies -- it radically lowers the
operational overhead of maintaining and scaling the infrastructure. We
really want our human capital investment concentrated on the analysis and
response phases of the process; the places where the human brain still
exceeds automated reasoning.
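
The pattern that keeps it cheap to operate is that detection logic reduces to
small pure functions over parsed log records, roughly in the spirit of
StreamAlert's rules; the decorator below is a simplified stand-in, not the
project's exact interface.

# A StreamAlert-flavoured sketch: each rule is a pure function over a parsed
# log record; the surrounding service (Lambda plus Kinesis, in StreamAlert's
# case) handles ingest, fan-out, and alert delivery.
RULES = []

def rule(func):
    RULES.append(func)
    return func

@rule
def ssh_root_login(rec):
    return (rec.get("log_type") == "auth"
            and rec.get("user") == "root"
            and rec.get("event") == "ssh_login_success")

@rule
def osquery_new_listener(rec):
    return (rec.get("log_type") == "osquery"
            and rec.get("action") == "added"
            and rec.get("columns", {}).get("port") not in (22, 443))

def evaluate(record):
    return [r.__name__ for r in RULES if r(record)]

print(evaluate({"log_type": "auth", "user": "root", "event": "ssh_login_success"}))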
Post by Dominique Brezinski
If you don't get that the most secure places to build your systems are
AWS's or Google's clouds, then you don't have any idea about what
problems need to be solved to effectively monitor and respond to
threats. I will leave that as a thought exercise, though I am happy to
elaborate if anyone honestly cares to hear the answers.
I honestly care to hear the answers.
Probably the best way to think about risk mitigation -- or defense -- is in
terms of assets, threats and controls. Assets are the hosts, applications,
systems, data stores, and specific data that compose our computing
environments and are at risk. Threats constitute all forms of exploitation,
loss, disclosure, manipulation, and unavailability that affect our assets.
Controls are all the available mechanisms we can apply to our assets to
eliminate, reduce the frequency of or reduce the impact of threats. I also
like to think of threats as static -- patch state, access control, network
accessibility, etc. -- or dynamic, as in adversarial activity.

The huge advantages of operating your systems in mature cloud environments
predominantly center around complete visibility and malleability of your
assets and controls, and centralization of security expertise and headcount
on deeply technical and high-scale problems. To really cover these topics
would take a book or ten, but I will try to hit the salient points.

In AWS [using AWS as example, because I am most familiar with
the primitives] you can enumerate all your assets and their current state
through API. You can also enumerate and manipulate much of your security
control state through API. The security control gaps are the controls you
apply at the OS and application level that the cloud provider does not have
visibility into. The logical progression is to move from polling for asset
inventory and control state to an event model. AWS Config and Amazon
Cloudwatch Events are great examples of services that receive events for
asset state changes and allow those events to trigger code that evaluates
them. Having programmatic access to all your asset inventories, security
controls and overall computing environment composition is something that is
extremely difficult and costly outside cloud environments. In fact, the
only way to achieve it is to run your own cloud. Your compute, network and
storage must all be virtualized and/or distributed to achieve the necessary
visibility and malleability. It is this visibility and malleability that
remove the asymmetry between offense and defense. More on that in a minute.
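
As a trivial illustration of that programmatic visibility, a few boto3 calls
walk every instance and every security group rule in an account (assuming
boto3 and credentials in the environment):

# Enumerate assets (EC2 instances) and a slice of control state (security
# group rules) entirely through the AWS API.
import boto3

ec2 = boto3.client("ec2")

# Asset inventory: every instance, its AMI, and its attached security groups.
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            print(inst["InstanceId"], inst["ImageId"],
                  [sg["GroupId"] for sg in inst["SecurityGroups"]])

# Control state: flag any security group rule open to the world on a port
# other than 443.
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for perm in sg["IpPermissions"]:
        world = any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", []))
        if world and perm.get("FromPort") != 443:
            print("world-reachable:", sg["GroupId"], perm.get("FromPort"))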

What we see of the cloud is a service view. Underneath is obviously real
hardware, software layers, control-plane services etc. Somebody has to
worry about the security of this stuff too. If you deploy your own private
cloud to achieve the visibility and malleability of your application and
service assets, you are responsible for the security of the underlying
hardware, software and control plane. However, Amazon and Google are
amongst a very small set of organizations that have quietly hired a
majority of the best security people in the world and focused them on
securing the hardware, software and control-plane services that make up
their data centers and the cloud. Want to know who employs, either directly
or through contract, the best virtualization security people? Yup. Security
and hardware designers to build security coprocessors? Indeed. Firmware
integrity verification? Yes. Secure SDN hardware and software? This is
getting boring, and you get the point. How many Xen vulnerabilities have
there been? How many of those affected EC2? It is a subset, largely because
AWS knows Xen deeply and makes good choices that restrict attack surface.
Such expertise and resourcing extends through the entire cloud substrate,
and just as importantly, the operational processes used to manage and
secure. Chances are very good that the expertise and resourcing applied to
the security of everyone else's data centers and infrastructure don't come
anywhere close. This is essentially the security corollary to the 0.01%
wealthiest.

We all know there will be vulnerabilities in hardware, in ring -N to 3, in
control planes, in operating systems, and in applications. From the offense
side, the questions are whether the attack surface is reachable, if
exploitation is possible given the operating conditions, and if
post-exploitation actions can reach targets and are visible. From the
defense side, the questions are whether we have visibility into the state
and behavior of our assets and if we can manipulate a sufficient set of
controls to prevent or degrade the adversary's impact. Much of the
asymmetry attributed to offense actually stems from defense's lack of
asset visibility, understanding of attack trees, and ability to apply and
manipulate security controls. At a theoretic level, I dare say there is
nothing inherently asymmetric about offense. The asymmetry is only in
practice, and the cloud can change the defender's practices. If defenders
leverage the visibility provided by the cloud, they can go as far as
applying automated reasoning and automated changes to security controls to
defeat adversaries. I would yield that offense still has an OODA loop speed
advantage if defense is always reactionary. If you can execute action on
objectives in my log collection latency...but said visibility, automated
reasoning and automated control changes can be used by defenders
asynchronously. Defenders can enumerate attack paths manually or
automatically and reason about control changes that would mitigate an
attack path. Defenders can hypothesize the application of known tactics and
techniques to determine outcome and make changes accordingly. Defenders can
manipulate their environment to confuse or deceive adversaries. All these
things can be done when you have programmatic visibility and malleability
over your environment and synchronously or asynchronously. Attackers have
inverse and proportional work items.
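
Concretely, "manipulate a sufficient set of controls" can be as blunt as an
event-triggered function that swaps a suspect instance onto a quarantine
security group; the group ID and the event shape below are placeholders.

# Sketch of an automated control change: on an alert, isolate the instance by
# replacing its security groups with a deny-all quarantine group.
# QUARANTINE_SG is a placeholder; the event shape depends on the alert source.
import boto3

QUARANTINE_SG = "sg-0123456789abcdef0"

def lambda_handler(event, context):
    instance_id = event["detail"]["instance-id"]
    ec2 = boto3.client("ec2")
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[QUARANTINE_SG])
    # Snapshot volumes / capture memory here before further action, so that
    # containment doesn't destroy the evidence needed for analysis.
    return {"quarantined": instance_id}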

The cloud frees defenders from attacker asymmetry...in theory and in
practice for some. Make yourself one of the some.
Dom
Laurens Vets
2017-02-28 19:37:39 UTC
Permalink
See inline.
Post by Dominique Brezinski
https://github.com/airbnb/streamalert
There is a lot more that needs to be done to cover the broad range of
capabilities needed for detection and response, but StreamAlert achieves
something very important even for huge companies -- it radically lowers
the operational overhead of maintaining and scaling the infrastructure.
Thanks, I didn't know about StreamAlert. A cool feature would be to make
this cloud provider independent. I think both Google and Microsoft
provide (sort of) the same functions/features as Amazon.

Very interesting & thank you, there's a bunch of items there I didn't
even think of...
