Discussion:
[Dailydave] Equitablefax
dave aitel
2017-09-27 15:13:28 UTC
Permalink
So I assume most people skim any news reports of big breaches in the
same way these days. Was this predictable? Was it preventable? Do we
know who did it? Did they do anything new to attack or defend?

In Equifax's case, the reportable information clearly is the alleged
trading anomalies, rather than the hack itself. But the third question
is interesting to a point. I've been trying to write a keynote for T2
for the past few weeks, and while my muse is clearly on an extended
vacation, there are some interesting generational changes afoot with
regards to these questions.

At some level, in a world where vulnerabilities are super rare,
governments dominate the discussion of malicious actors. I think there's
a lot of news chaff about every little 20-something hacker or aspiring
malware businessman who gets caught. Filtering those out, there are
relatively few reports of hacking groups with high skill levels. And
because of our assumptions that "Governments" are behind everything now,
I think we naturally err towards flinching at boogeymen who...wield SQLi
and Phishing with .jar files.

But when you look at the accomplishments of truly skilled hackers,
they're amazing. And the environment we live in is not one where major
vulnerabilities are rare. The environment is such that any specialized
extremophile
can penetrate and persist all of cyberspace. In a sense, the entire bug
bounty market is a breeding ground for a species that can collect
extremely low impact web vulnerabilities into a life sustaining nutrient
cycle, like the crabs on volcanic plumes in the depths of the Pacific.
Likewise, learning everything about RMI is enough to be everywhere, or
.Net serialization, or CCleaner. In cyber, where there's a way there's a
will.

It used to be we would be more afraid if it was China or Russia or Iran
or whoever. But these days I like to annoy people by asking what if it's
not?

Also, does anyone know how often Equifax did their penetration testing?
My new rule is that if you only do it in Q4 you are unlikely to have a
mature security program. :)

-dave
Steve R. Smith
2017-09-27 16:00:21 UTC
Permalink
Was this predictable: probably
I would be surprised if the PCI assessors (and therefore leadership) didn't know about some of the control environment deficiencies. Typically you get - "that's not a priority", "it was designed that way", "we need to update to the next version first", or even "we don't have the budget to fix that". In some cases, if you think it's an issue - you have to rationalize, push, and play politics to get it addressed. Maybe even threaten to escalate the issue. I've had IT VPs that I worked with refuse to fix something because it was a revenue generating system and they didn't want to risk business objectives.    
Was it preventable: unlikely
I think based on historical trends and what we see in the wild, we can predict with confidence that many companies are and/or will be at risk for compromise. IT environments were complicated 18 years ago when I first got into security and they've become even more complicated with the evolution of technology.
Do we know who did it: maybe
Mandiant is very good at what they do but sometimes attribution just isn't possible because of all the hops the attackers may have taken to get to their final target. The other compromised systems sometimes live in countries that won't help us investigate cyber crimes.  
Did they do anything new to attack or defend: unlikely
As you point out above, there are many vulnerabilities that go unpatched and unaddressed. Combine that with IT operational mistakes and you may have a large environment susceptible to compromise. This could be a misconfiguration (TFTP with / access, world readable/writeable cron scripts owned by root), a purposeful change that introduces a weakness (open NFS shares combined with availability of r-services, open X display), trust relationships, shared passwords across the environment - you name it.
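A rough sketch of how you might spot a couple of those classics on a typical Linux host (the paths and tools below are the usual Linux defaults and are assumptions on my part; adjust for your environment):

#!/bin/sh
# Quick-and-dirty checks for a few of the misconfigurations mentioned above.

# World-writable files in the cron directories (anyone can edit what root runs)
find /etc/cron.d /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.monthly \
    -type f -perm -0002 -ls 2>/dev/null

# NFS exports open to the world or exported with no_root_squash
grep -E '(\*|no_root_squash)' /etc/exports 2>/dev/null

# X servers listening on TCP port 6000 (an open X display)
ss -ltn 2>/dev/null | awk '$4 ~ /:6000$/ { print "listening X display: " $4 }'
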
My rule is if all you're doing are the bare minimums and/or you have leadership pushing back in the form of not providing executive level support, determining your strategy or tactics, or limiting your budget - you are unlikely to have an effective security program.

By the way - I think you're right. We focus way too much on claiming these compromises are caused by nation states. It very well could be one person or a small team of opportunists. 
No, I have no clue how, or how often, they did their penetration testing. Considering that it's been reported that web portals with easily guessable usernames/passwords were used for data exfiltration, their competence is questionable.
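For what it's worth, a rough sketch of a sanity check for that class of problem, assuming the portals use HTTP basic auth (a form-based login would need a POST with the form fields instead) and a hypothetical portals.txt with one hostname per line:

#!/bin/sh
# Try admin/admin against each of your own portals and flag anything that
# doesn't reject it. Only run this against systems you are authorized to test.
while read -r host; do
    code=$(curl -sk -o /dev/null -w '%{http_code}' -u admin:admin "https://$host/")
    case "$code" in
        401|403) ;;  # rejected, as it should be
        *) echo "$host: HTTP $code with admin/admin (check manually)" ;;
    esac
done < portals.txt
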
Kind regards, ~steve 


Kristian Erik Hermansen
2017-09-27 16:30:40 UTC
Permalink
If Equifax had a public bug bounty program, someone would have reported the
Java RCE in March 2017 and picked up $10K or more for it. But no, Equifax
did not have a public bug bounty program. Say what you will about the pros
and cons of a bug bounty program, especially for financial institutions
which "know better than the public how to protect themselves", but at least
in this case a known issue would have been well documented much earlier. We
should encourage other credit and financial companies to consider public or
at the very least private bug bounty programs. It's a mess to operate them,
but not patching a known critical web flaw ASAP that allows RCE is
precisely the legal definition of negligence. Equifax should pay dearly for
it.

Perhaps it's time to consider federal Cyber Security Insurance laws for
such companies which force them to pay fees to operate on the Internet
just like everyone that drives a car on the road? If you crash your car
every time you get on the highway, or you damaged 140 million cars while
driving, you would lose your license for some time. Why hasn't Equifax lost
their license to operate on the internet for some time? How about a 2 year
hiatus on their annual revenue to punish them? Just a thought. Maybe Halvar
can chime in on why Cyber Security Insurance regulation like that is OR is
not the answer. He has been working on that lately...
Chuck McAuley
2017-09-27 18:00:50 UTC
Permalink
In the US, the roads are owned by someone (Private Individual, Town, State, Country). They can set the rules for driving on them as they see fit.

Who owns the Internet? In the US, definitely not the government. I guess you could argue it would be ISPs. They could govern who peers. But why would they care?

More noise should be made that the current credit scoring model cannot be trusted after this PII data has been leaked. I can't see a reliable means to protect 'your' score after this breach.

-chuck

Kristian Erik Hermansen
2017-09-27 18:06:39 UTC
Permalink
But clearly Equifax didn't know ALL public facing attack surfaces
controlled by Equifax which were affected by that vulnerability. A bug
bounty likely would have surfaced those missing attack surfaces. Internal
folks always make assumptions about their own network, which is biased and
almost never reality.

From the Equifax blog post:

- Based on the company's investigation, Equifax believes the
unauthorized accesses to certain files containing personal information
occurred from May 13 through July 30, 2017.
- The particular vulnerability in Apache Struts was identified and
disclosed by U.S. CERT in early March 2017.
- Equifax's Security organization was aware of this vulnerability at
that time, and took efforts to identify and to patch any vulnerable systems
in the company's IT infrastructure.
- While Equifax fully understands the intense focus on patching efforts,
the company's review of the facts is still ongoing. The company will
release additional information when available.

There is also no mention of the other International systems that had
"admin/admin" as the portal credentials to some customer data.

Just like when Yahoo was affected by Heartbleed in 2014 and went on to
write a blog post about "all systems being fully patched and heartbleed no
longer being on the Yahoo network" (months later), I disclosed numerous
additional systems that Yahoo operated that were still unpatched and
leaking private data. It's hard to identify ALL attack surfaces. And even
if Equifax thought they were well patched, maybe they forgot to reload the
application / libraries or reboot the systems.
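A rough sketch of what that kind of sweep can look like on a single Linux host, assuming a hypothetical /srv/www deployment root; it only inventories what is present, and leaves the version judgement to the reader and the advisory:

#!/bin/sh
# Inventory Struts 2 core jars on one host, including copies bundled inside
# deployed .war files, and flag old jars still mapped into a running JVM.
WEBROOT=${1:-/srv/www}   # hypothetical path; pass your real deployment root

# Jars sitting directly on disk
find "$WEBROOT" -name 'struts2-core-*.jar' 2>/dev/null

# Jars packed inside .war archives
find "$WEBROOT" -name '*.war' 2>/dev/null | while read -r war; do
    unzip -l "$war" 2>/dev/null | grep -o 'struts2-core-[0-9.]*\.jar' | sed "s|^|$war: |"
done

# A patched jar on disk doesn't help if the old one is still loaded
lsof 2>/dev/null | grep 'struts2-core.*deleted'
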

Anyone that has run a full entity Internet facing penetration test knows
that there is the list that you get from the client that they THINK is the
attack surface...and that list is almost always incomplete. It's the duty
of a pentester to fill in those gaps, validate if the list is complete, and
suggest additional targets for inclusion if appropriate. External attackers
don't have that internal organizational bias and that's why you should
consult wide external expertise for something so important.

I still stand by the claimed benefits of such a bug bounty system. It's
clear that Equifax hadn't patched enough systems quickly enough...well into
March and beyond. What if I told you Equifax still has at least one
publicly facing system still vulnerable to that March Struts bug? Would
that change your mind?
Katie M
2017-09-27 20:07:20 UTC
Permalink
I actually tried helping coordinate one of the new bugs that someone found
and wanted to report to Equifax. Unfortunately, before they had time to
even look up from their current conflagration, eyebrows still singed, a
reporter published it.

At this instant, even one bug report, while completely helpful in the
micro-sense, is process-wise another tax on the resources they have working
on the big breach. It still has to go into the queue, on top of the
existing technical debt they are already clearly struggling with.

Not to say don't report it - definitely do and I can help if that's the
issue. But that is very different than recommending a bug bounty to them
right now.

But a homeowner currently putting out a fire on their house shouldn't be
simultaneously setting up a bug bounty program to pay for folks to point
out that each blade of dry grass on their lawn is also flammable and could
cause another fire.

-K8e

Katie M
2017-09-27 19:38:25 UTC
Permalink
Having a bug bounty program wouldn't have helped Equifax. Only Equifax
could have helped Equifax. The root cause of the problem wasn't that they
didn't know about the bug, it was that they face the same patch
prioritization risk vs resource balance that all orgs gamble with. They
lost that gamble, which is what every breach represents: a lost bet on the
tradeoffs. Simply knowing about a bug, via a bug bounty or otherwise, is
just that. And knowing is at best half the battle.

But to return to Dave's assertion about the bug bounty ecosystem itself and
what it currently is good for and what it's used for - I have many
thoughts. And even more songs.

"In a sense, the entire bug bounty market is a breeding ground for a
species that can collect extremely low impact web vulnerabilities into a
life sustaining nutrient cycle, like the crabs on volcanic plumes in the
depths of the Pacific. "

https://en.m.wikipedia.org/wiki/Mariana_Trench

Agreed that the bug bounty market has evolved in this *particular stage of
its growth* through its own complex system, the dynamics of which are heavily
influenced by factors like:

1. the types of organizations who have been adopting these incentives so
far (mostly tech companies),
2. the typical targets (mostly web sites), and
3. the types of vulnerabilities they tend to use bug bounties to find
(mostly low hanging fruit that could have been found using common free
tools & techniques).

Also a factor in this ecosystem is the geolocation and socioeconomic status
of the script kiddie bug hunting masses, who, unlike the early professional
penetration testers like us, don't have to adapt their techniques to find
more interesting, higher quality bugs to continue to be paid relatively
small amounts that are worth much more to them in their part of the world.

That's good for the bug hunters in this category. It's actually bad for the
evolution of the bug bounty ecosystem, and Dave's characterization of what's
happening *right now* is accurate.

The upside effect though is that the bug hunter masses can now access a
safe marketplace for their skills regardless of where they are and whether
they could ever become a "security consultant". That's generally good, but
the *dominant* "species" of bug hunter, as Dave accurately points out, will
remain relatively unskilled if we don't act with higher-order outcomes in
mind.

It will be like an attempt at brewing beer that gets taken over and soured
by undesirable flora before the brewers yeast kicks in and creates the
desired effect. And I've been brewing the defensive market for
vulnerabilities far too long to watch idly and let the batch sour.

We ideally want to create an upward trend in bug hunter population skills,
as well as move the bug hunter targets themselves, towards more
sophisticated bugs. We are not raising the tide, and we are not causing all
ships to rise with it. Just by slapping a bug bounty or vuln disclosure
program on something, we are missing the point.

One of the papers we produced from the MIT Sloan School visiting-scholar
systems modeling work I did will come out sometime this fall (2017) as a
chapter in an MIT Press book. That paper looks specifically at
bug bounty participant data at a specific point in the development of this
economy. Bet you're curious about that supply side snapshot of the bug
bounty Mariana Trench. :) Look for that book with our research paper when
it's out.

Bug bounties as they have mostly manifested *right now, at this specific
stage in that ecosystem's development,* are a cheap, shiny thing to do,
with few exceptions.

And no, the exceptional bug bounties are not the ones that pay the most;
more on that later. The presence of a bug bounty program is currently being
used by organizations to virtue signal that they take security seriously by
paying for web bugs, but often missing or ignoring aggregate threats, and
ignoring their internal failing processes to fix bugs.

It matters very much what's on the inside, versus the superficial, shiny,
bug bounty exterior.

Shiny (but still very insecure):

http://youtu.be/93lrosBEW-Q
The alliterative buzz word "bug bounty", deceptively simple and so very
misunderstood, needs to evolve as an accepted concept into the more
accurate, more strategic "incentives".

Straight cash as the only lever for bringing all the (good) bugs to the
yard is short-sighted & pollutes the entire defensive reward ocean in this
evolution of the vulnerability and exploit markets. Cash is only one lever
in this system, and it isn't the most effective one if you're buying bugs
for defense purposes, as I've been saying for several years.

Perhaps if a strapping demigod of security would just repeat this for me,
it would replace the econ 101 BS that has plagued the emerging bug bounty
market. Of course, I'm sure they'd happily forget where they heard it first.

http://youtu.be/79DijItQXMM


Just kidding, I'll speak for myself, as always:

https://www.rsaconference.com/writable/presentations/file_upload/ht-t08-the-wolves-of-vuln-street-the-1st-dynamic-systems-model-of-the-0day-market_final.pdf

Better-than-a-bug-bounty incentives that are much more effective for
improving defense may not be direct cash, may not be rewards at all.
Instead they might be a much harder deep introspective process, to examine
what drives the heart of an organization, what they are doing to defend
what's important to them, and whether the security choices, tradeoffs,
resources, and budgets are actually working for them. What incentives can
they use to tease out real risk, rather than being lazy and trendy and
calling it a success?

No, a bug bounty would not have helped Equifax prevent what happened, and
we need to seriously stop the VC-backed tsunami of propaganda that says
that it would. That stupid marketing trick employed by at least one of the
bug bounty platform vendors should be beneath the critically thinking
readers of this list to entertain, given how obviously it oversimplifies a
non-trivial problem.

I'm not even going to address the cyber insurance idea on this, and by now
in this long operetta of a post, it should be obvious as to why.

Bug bounties and cyber insurance are not a remedy for a fundamentally
unscalable remediation model that most orgs and governments face today.
That's precisely why 94% of the Forbes Global 2000 in 2015 didn't even have
a front door to report a vulnerability, let alone a bug bounty, and it's
not much better now.

They struggle to fix the bugs they already know about, and the bottlenecks
in that *internal* process are what need work. Putting up a front door to
receive bug reports, even without a bug bounty, when there's nothing
operationally sufficient inside that org to address what comes through the
door, is not the chaos an org needs in the midst of drowning in technical
debt.

It's time to return the heart of the bug bounty ocean to stop the spread of
this intellectual and ecosystem-poisoning darkness. I've been staring at
the edge of the water, long as I can remember...

http://youtu.be/GeIHvhnQbbI

- k8eM0ana

http://youtu.be/Lg_cweoJXyo

🏝👩‍💻🐞🌋🌺 @k8em0 @lutasecurity @k8eM0ana 🏝👩‍💻🐞🌋🌺



the grugq
2017-09-28 23:12:34 UTC
Permalink
I’m not going to address any of the points in the excellent post by Katie but rather put some facts together in a timeline so people can see the Equihax event better. The “if only bug bounty” claptrap is, as Katie points out (much more politely), complete bullshit.


Timeline of events:


2017-03-06: Apache announces struts bug
2017-03-07: PoC exploit released to public

2017-03-10: Equihax compromised via struts exploit. Genius hackers use super elite hacker command “whoami” during their sophisticated hacking session. [0]

2017-03-13: Equihax genius elite hackers install 30 webshells to allow traversing all the different compromised hosts to pass data out of the company

2017-04-xx: Oracle releases quarterly bundle of patches, including the Struts patch. (They actually crow about this while blasting Equihax for being slow to apply the patch) [1]

2017-06-30: Equihax patches their struts installs, no longer vulnerable to the struts exploit. They patch the boxes that got popped and almost certainly had webshells installed but notice nothing. [2]

2017-07-29: Equihax discovers they have been compromised by super elite awesome hackers using one webshell for every day of the month (spares in Feb.)
2017-07-30: Equihax evicts the elite hackers and their 30 webshells from their systems

2017-08-01: Equihax CFO sells $1mm stock, US President of Information sells $600k stock, President of Workforce Solutions sells $250k stock [3]

2017-09-05: FireEye registers the Equihax domain name as part of a broader PR damage control move, which Equihax will do everything it can to sabotage

2017-09-07: Equihax mentions that maybe there might have been some sort of hack or something but definitely not a big deal unless you're an American adult with a credit record.

2017-09-08: Equihax offers an opportunity to sign away your right to sue Equihax in exchange for waiting a week and getting yet another year of free credit reporting. (If you don’t already have 3-5 years of free credit reporting by now, are you even using the Internet??) [4]

2017-09-11: FireEye (owner of Mandiant, who did the IR + PR for Equihax) quietly pulls the case study white paper about how FireEye 0day protection technology is keeping Equihax safe from unknown threats and "up to 29 webshells”


What this looks like to me is a bunch of web app hackers who used a fresh PoC exploit to mass hack everything they could find. Then, while going through their hacked logs, they discover they have an interesting victim. They turn their attention to it and start working on getting deeper into the environment (this is around the 13th, so a couple days after they popped a shell). I’m guessing that what happened was they went on a bit of a rampage inside the DMZ area popping all the shells they could. Then assembled some Rube Goldberg webshell machine to exfil data from the various databases, including, apparently, legacy databases.

I’m calling this mostly a problem with Equihax architecture. This isn’t about a struts bug, this is about a terrible network design that allows random kiddies to scrape the data store clean via a single shell (well, 30, but still). Equihax was focussing on buying boxes to protect against 0day, and (from stories I’ve read circa 2015) working on ensuring employee phones are compartmented for BYOD. Well, they were clearly spending money out of the security budget. And it wasn’t trivial sums either, FireEye boxes aren’t exactly free. But from the looks of it, the problem wasn’t that they got compromised, the problem was that they couldn’t detect a compromise and prevent it from becoming a breach (seriously: 30 webshells exfiltrating data on 143 million people would have left some pretty hefty “access.log” files).
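For scale, a rough sketch of the sort of check that would have tripped on those access.log files, assuming an Apache combined-format log and a made-up threshold of 1 GB per client per day:

awk '{
    split($4, t, ":"); day = substr(t[1], 2)     # field 4 is "[dd/Mon/yyyy:hh:mm:ss"
    bytes[day "," $1] += ($10 == "-" ? 0 : $10)  # field 10 is the response size in bytes
}
END {
    for (k in bytes)
        if (bytes[k] > 1e9)                      # flag > ~1 GB to one client in one day
            printf "%s  %.1f GB\n", k, bytes[k] / 1e9
}' /var/log/httpd/access.log                     # example path; use your own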

This is not a “bug” issue, it is an architecture issue. You know, if they threw a canary.io tool into that DMZ and configured it to look like a database, they’d have known about the hack during that first week. If they monitored their logs for unusual activity, such as the installation of 30 webshells, and gigabytes of data going the wrong way. If they had an architecture that prevented a compromise of a web server enabling access to sensitive company data. If they had asset management and decommissioned legacy databases, rather than leaving them in the DMZ.

There are a lot of things here which would have prevented this compromise from becoming a disastrous breach, but spending money on a bug bounty program or FireEye silver bullet boxes, or mobile device management systems — none of those would, or did, help.


The important things are always simple. The simple things are always hard. The easy way is always mined.
— Murphy’s Laws of Enterprise Information Security.


—gq

[0]: https://arstechnica.com/information-technology/2017/09/massive-equifax-hack-reportedly-started-4-months-before-it-was-detected/
[1]: https://threatpost.com/oracle-patches-apache-struts-reminds-users-to-update-equifax-bug/128151/
[2]: a cron job running `find` for new files, or AIDE (or Tripwire), would trivially notice the modifications to the file system and alert (a rough sketch follows these footnotes).
[3]: https://www.bloomberg.com/news/articles/2017-09-07/three-equifax-executives-sold-stock-before-revealing-cyber-hack
[4]: https://www.cnbc.com/2017/09/08/were-you-affected-by-the-equifax-data-breach-one-click-could-cost-you-your-rights-in-court.html
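A rough sketch of the footnote [2] idea, with made-up paths; a real deployment would use AIDE or Tripwire with a proper baseline, but even this would light up on 30 new webshells:

#!/bin/sh
# Run from cron (e.g. */15 * * * *): report anything created or modified under
# the web root since the last run -- dropped webshells, unexpected edits, etc.
WEBROOT=/var/www                      # hypothetical web root
STAMP=/var/lib/webwatch/last-run      # timestamp file marking the previous run

mkdir -p "$(dirname "$STAMP")"
[ -f "$STAMP" ] || touch "$STAMP"

REPORT=$(mktemp)
find "$WEBROOT" -type f -newer "$STAMP" -ls > "$REPORT" 2>/dev/null

if [ -s "$REPORT" ]; then
    # assumes a local mail(1); swap in whatever alerting you actually have
    mail -s "webwatch: files changed under $WEBROOT" security@example.com < "$REPORT"
fi

rm -f "$REPORT"
touch "$STAMP"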
Arrigo Triulzi
2017-09-29 15:31:46 UTC
Permalink
Post by the grugq
This is not a “bug” issue, it is an architecture issue. You know, if they threw a canary.io tool into that DMZ and configured it to look like a database, they’d have known about the hack during that first week. If they monitored their logs for unusual activity, such as the installation of 30 webshells, and gigabytes of data going the wrong way. If they had an architecture that prevented a compromise of a web server enabling access to sensitive company data. If they had asset management and decommissioned legacy databases, rather than leaving them in the DMZ.
Just in passing: "Equifax is ISO/IEC 27001:2013 certified by a reputable independent third party." [0] Asset management is a core part of ISO 27001:2013.

Cheers,

Arrigo

[0] https://www.equifax.com/assets/WFS/the_work_number_best_practices_in_data_security.pdf (1st page)
s***@spacerogue.net
2017-09-29 16:15:27 UTC
Permalink
Thank you for this timeline because honestly I haven't been paying that
close attention.

Based on this it looks like Equifax did actually patch, just not fast
enough, and by the time they got around to it the bad guys were already
inside. Based on this list the delta from patch release to install was
<91 days. Am I reading this correctly?

If so then the absolute shit ton of criticism heaped on Equifax for not
patching is IMO unwarranted. While a 91 day patch cycle for a huge
enterprise isn't great it's a hell of a lot better than most other orgs.

Granted, your points about bad network design and inability to detect a
breach are well made, but based on this list the community's criticism of
Equifax's inability to patch is inappropriate.

- SR
the grugq
2017-09-29 18:08:39 UTC
Permalink
Hey
Thank you for this timeline because honestly I haven't been paying that close attention.
I wasn’t either since it doesn’t impact me, but I had to research it for this week’s news segment on Risky.Biz ==> https://risky.biz/RB471/

During the research it became clear that the public narrative and the facts were diverging quite a bit. In particular this “failure to patch” story line. Yes, they were slow to patch. However, their upstream provider didn’t even make the patch available until weeks after the compromise had already happened.

The time from bug -> exploit -> the 1st of 30 web shells was about 3-4 days. There is no major enterprise that is able to consistently deploy patches same day -- as would be necessary in this case.

Same day deployment of patches just isn’t possible at scale. Business units have to schedule a service window, test the patch against their applications, allocate resources for deploying and testing the patch in place. Even if the enterprise has automated pretty much all of this, they would still have to wait for a maintenance window to minimise business risk.

So, yeah, Equifax failing to apply, a mere four days after the announcement, a patch that wasn’t even available from their upstream at the time is basically a “dog bites man” story.
Based on this it looks like Equifax did actually patch, just not fast enough, and by the time they got around to it the bad guys where already inside. Based on this list the delta from patch release to install was <91 days. Am I reading this correctly?
Yes, they did actually apply the patch. Eventually. Mid-April release, and deployed to live about 5-6 weeks later. That's not terribly fast nor terribly slow for a large company. Granted, they’d been breached for over a month by the time the patch was released, so it is a bit of a moot point, but we can take the “failure to patch” narrative behind the shed and put it to sleep.

There are things that concern me more than the delay with deploying the patch. Realistically, every organisation has to assume that a system will be compromised. Unless they are built to be resilient after an attacker gains a foothold on a system, they are living in a fantasy world and banking on borrowed time. Compromise is inevitable. Whether the systems and networks remain hostile to attackers even after they get a shell is the sign of a mature security posture.

I know of companies that do rolling automated re-installs of systems so that the infrastructure is factory fresh daily. This makes persistence really annoying for attackers, who have to migrate onto a more stable system or else run the noisy exploit all over again. Attackers tend to give up in frustration.
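A minimal sketch of that "factory fresh daily" approach, in Python, assuming some external provisioning system; list_web_hosts() and rebuild() here are invented placeholders for whatever inventory and reimaging tooling an org actually has, not a real API.

#!/usr/bin/env python3
"""Toy sketch: cycle any web host whose image is older than a day, so a dropped
webshell dies with the box. Inventory and reimage hooks are placeholders."""
import time

MAX_AGE_SECONDS = 24 * 3600   # arbitrary policy: nothing in the web tier lives past a day

def list_web_hosts():
    """Placeholder inventory: (hostname, unix timestamp the host was last built)."""
    return [("web-01", time.time() - 90000), ("web-02", time.time() - 3600)]

def rebuild(host):
    """Placeholder: kick off a reimage from the golden image via real tooling
    (Terraform, a cloud API, PXE, ...)."""
    print(f"reimaging {host} from the golden image...")

def main():
    now = time.time()
    for host, built_at in list_web_hosts():
        if now - built_at > MAX_AGE_SECONDS:
            rebuild(host)   # anything an attacker persisted on this box is gone

if __name__ == "__main__":
    main()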
If so then the absolute shit ton of criticism heaped on Equifax for not patching is IMO unwarranted. While a 91 day patch cycle for a huge enterprise isn't great it's a hell of a lot better than most other orgs.
Agreed.

This is where things get kinda interesting. The breach was discovered on July 29th. For people lacking calendar technology, this is July 2017:

     July 2017
Su Mo Tu We Th Fr Sa
                   1
 2  3  4  5  6  7  8
 9 10 11 12 13 14 15
16 17 18 19 20 21 22
23 24 25 26 27 28 29  <== not a work day
30 31

Note: the breach was discovered on a Saturday. That, to me, reads like a maintenance window was open and the web servers were being serviced, possibly because of a marketing campaign or something else that would coincide with an “end of July” time frame. What this says to me is that someone was doing work on a compromised server and uttered that phrase dreaded by hackers everywhere: “huh? that's weird...” I wouldn't be at all surprised if someone was updating a website with new content for a marketing campaign and stumbled over one of the thirty webshells installed all over the place.

Regardless of the circumstances surrounding the breach discovery (my money's on a web admin updating the site for a fall marketing campaign), one thing seems certain: this breach wasn't uncovered by the cybersecurity team in the regular course of their day-to-day activities. If it had been found by the infosec team as part of their regular day job via threat hunting, log monitoring, deception tools, investigating alerts from tripwire, etc., then the discovery would’ve been earlier, and during business hours.

If it had been found by the infosec team as part of their regular job, it would've been discovered on a weekday during any of the preceding four months.

Based on this data, I assess that the breach was discovered purely by accident -- sheer chance. The Equifax infosec team failed to notice exfiltration of hundreds of millions of records, and they failed to detect the active operational use of 30 web shells. This is far more damning than any patch-related tardiness.

Was the breach discovery accidental? Was it by a member of the infosec team, or someone else? What systems are being deployed to detect modification to the web servers, notice exfiltration of gigs of data, and, might I suggest, asset tracking to decommission legacy databases? Maybe rather than shifting the security paradigm to focus on next-gen unknown malware threats [0], Equifax should focus on implementing the bare minimum required to detect known malware and techniques.
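To make "notice exfiltration of gigs of data" concrete, here is a minimal Python sketch that tallies bytes served per client IP from a combined-format access log and flags outliers. The log path and the 500 MB threshold are invented for illustration; this is the kind of cheap after-the-fact check the post is pointing at, not anything Equifax is known to have run.

#!/usr/bin/env python3
"""Toy sketch: flag clients pulling unusually large volumes out of a web tier."""
import re
from collections import defaultdict

LOG_PATH = "access.log"              # assumed combined-format Apache/Tomcat access log
BYTES_ALARM = 500 * 1024 * 1024      # arbitrary: alert on >500 MB served to one client

# combined log format: host ident user [time] "request" status bytes "referer" "agent"
LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3}) (\d+|-)')

def bytes_per_client(path):
    totals = defaultdict(int)
    with open(path, errors="replace") as fh:
        for line in fh:
            m = LINE.match(line)
            if not m:
                continue
            host, status, size = m.groups()
            if size != "-" and status.startswith("2"):
                totals[host] += int(size)
    return totals

if __name__ == "__main__":
    for host, total in sorted(bytes_per_client(LOG_PATH).items(),
                              key=lambda kv: kv[1], reverse=True):
        flag = "  <== review me" if total > BYTES_ALARM else ""
        print(f"{host:15s} {total / 1e6:10.1f} MB{flag}")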
Granted, your points about bad network design and the inability to detect a breach are well made, but based on this list the community's criticism of Equifax's inability to patch is inappropriate.
Correct.



—gq

[0]: http://www.cnmeonline.com/myresources/fireeye/fireeye-cso-less-secure-than-you-think.pdf
- SR
Post by the grugq
I’m not going to address any of the points in the excellent post by Katie but rather put some facts together in a timeline so people can see the Equihax event better. The “if only bug bounty” claptrap is, as Katie points out (much more politely), complete bullshit.
2017-03-06: Apache announces struts bug
2017-03-07: PoC exploit released to public
2017-03-10: Equihax compromised via struts exploit. Genius hackers use super elite hacker command “whoami” during their sophisticated hacking session. [0]
2017-03-13: Equihax genius elite hackers install 30 webshells to allow traversing all the different compromised hosts to pass data out of the company
2017-04-xx: Oracle releases quarterly bundle of patches, including the Struts patch. (They actually crow about this while blasting Equihax for being slow to apply the patch) [1]
2017-06-30: Equihax patches their struts installs, no longer vulnerable to the struts exploit. They patch the boxes that got popped and almost certainly had webshells installed but notice nothing. [2]
2017-07-29: Equihax discovers they have been compromised by super elite awesome hackers using one webshell for every day of the month (spares in Feb.)
2017-07-30: Equihax evicts the elite hackers and their 30 webshells from their systems
2017-08-01: Equihax CFO sells $1mm stock, US President of Information sells $600k stock, President of Workforce Solutions sells $250k stock [3]
2017-09-05: FireEye registers the Equihax domain name as part of a broader PR damage control move, which Equihax will do everything it can to sabotage
2017-09-07: Equihax mentions that maybe there might have been some sort of hack or something but definitely not a big deal unless you're an American adult with a credit record.
2017-09-08: Equihax offers an opportunity to sign away your right to sue Equihax in exchange for waiting a week and getting yet another year of free credit reporting. (If you don’t already have 3-5 years of free credit reporting by now, are you even using the Internet??) [4]
2017-09-11: FireEye (owner of Mandiant, who did the IR + PR for Equihax) quietly pulls the case study white paper about how FireEye 0day protection technology is keeping Equihax safe from unknown threats and "up to 29 webshells”
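To put numbers on the gaps in that timeline, a few lines of Python date arithmetic; every date comes from the list above except the Oracle bundle, which the post only dates as "2017-04-xx", so 2017-04-18 is used here purely as an assumed stand-in.

from datetime import date

# Dates taken from the timeline above; the Oracle bundle is only dated
# "2017-04-xx" in the post, so 2017-04-18 is an assumed stand-in.
announced   = date(2017, 3, 6)    # Apache announces the Struts bug
poc_public  = date(2017, 3, 7)    # PoC exploit released
compromised = date(2017, 3, 10)   # first "whoami" on an Equihax box
oracle_cpu  = date(2017, 4, 18)   # assumed date of the quarterly patch bundle
patched     = date(2017, 6, 30)   # Struts installs patched
discovered  = date(2017, 7, 29)   # breach discovered

print("announce   -> compromise:", (compromised - announced).days, "days")
print("PoC        -> compromise:", (compromised - poc_public).days, "days")
print("bundle     -> patched   :", (patched - oracle_cpu).days, "days")
print("compromise -> discovery :", (discovered - compromised).days, "days")
# prints 4, 3, 73, and 141 days respectively: roughly four days to get popped,
# under 91 days from vendor bundle to patch, and about 4.5 months of undetected access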
What this looks like to me is a bunch of web app hackers who used a fresh PoC exploit to mass hack everything they could find. Then, while going through their hacked logs, they discover they have an interesting victim. They turn their attention to it and start working on getting deeper into the environment (this is around the 13th, so a couple days after they popped a shell). I’m guessing that what happened was they went on a bit of a rampage inside the DMZ area popping all the shells they could. Then assembled some Rube Goldberg webshell machine to exfil data from the various databases, including, apparently, legacy databases.
I’m calling this mostly a problem with Equihax architecture. This isn’t about a struts bug, this is about a terrible network design that allows random kiddies to scrape the data store clean via a single shell (well, 30, but still). Meanwhile Equihax was focussing on buying boxes to protect against 0day and (from stories I’ve read circa 2015) on ensuring employee phones are compartmented for BYOD. Well, they were clearly spending money out of the security budget. And it wasn’t trivial sums either; FireEye boxes aren’t exactly free. But from the looks of it, the problem wasn’t that they got compromised, the problem was that they couldn’t detect a compromise and prevent it from becoming a breach (seriously: 30 webshells exfiltrating data on 143 million people would have left some pretty hefty “access.log” files).
This is not a “bug” issue, it is an architecture issue. You know, if they threw a canary.io tool into that DMZ and configured it to look like a database, they’d have known about the hack during that first week. If they monitored their logs for unusual activity, such as the installation of 30 webshells, and gigabytes of data going the wrong way. If they had an architecture that prevented a compromise of a web server enabling access to sensitive company data. If they had asset management and decommissioned legacy databases, rather than leaving them in the DMZ.
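As an illustration of the "monitor for 30 new webshells" point (and the cron-plus-`find` idea in footnote [2] below), here is a minimal Python sketch that baselines a webroot and complains about anything new or changed on the next run. The paths are made up, and a real deployment would use AIDE or Tripwire and an alert that actually pages someone.

#!/usr/bin/env python3
"""Toy sketch: snapshot a webroot, compare against a saved baseline, and
report new or modified files (e.g. freshly dropped webshells)."""
import hashlib
import json
import os
import sys

WEBROOT  = "/var/www"                    # assumed webroot to watch
BASELINE = "/var/lib/fim/baseline.json"  # assumed location for the baseline

def snapshot(root):
    """Map every file under root to a SHA-256 of its contents."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    state[path] = hashlib.sha256(fh.read()).hexdigest()
            except OSError:
                continue
    return state

def main():
    current = snapshot(WEBROOT)
    if not os.path.exists(BASELINE):
        os.makedirs(os.path.dirname(BASELINE), exist_ok=True)
        with open(BASELINE, "w") as fh:
            json.dump(current, fh)
        print("baseline written; schedule this script from cron")
        return
    with open(BASELINE) as fh:
        known = json.load(fh)
    new_files = sorted(set(current) - set(known))
    changed   = sorted(p for p in current if p in known and current[p] != known[p])
    if new_files or changed:
        # placeholder alert: a real setup would page someone, not just print
        print("ALERT: webroot modified", file=sys.stderr)
        for p in new_files:
            print("  new file:", p, file=sys.stderr)
        for p in changed:
            print("  modified:", p, file=sys.stderr)

if __name__ == "__main__":
    main()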
There are a lot of things here which would have prevented this compromise from becoming a disastrous breach, but spending money on a bug bounty program or FireEye silver bullet boxes, or mobile device management systems — none of those would, or did, help.
The important things are always simple. The simple things are always hard. The easy way is always mined.
— Murphy’s Laws of Enterprise Information Security.
—gq
[0]: https://arstechnica.com/information-technology/2017/09/massive-equifax-hack-reportedly-started-4-months-before-it-was-detected/
[1]: https://threatpost.com/oracle-patches-apache-struts-reminds-users-to-update-equifax-bug/128151/
[2]: a cron job running `find` for new files, or AIDE (or Tripwire), would trivially notice the modifications to the file system and alert.
[3]: https://www.bloomberg.com/news/articles/2017-09-07/three-equifax-executives-sold-stock-before-revealing-cyber-hack
[4]: https://www.cnbc.com/2017/09/08/were-you-affected-by-the-equifax-data-breach-one-click-could-cost-you-your-rights-in-court.html
Having a bug bounty program wouldn't have helped Equifax. Only Equifax could have helped Equifax. The root cause of the problem wasn't that they didn't know about the bug, it was that they face the same patch prioritization risk vs resource balance that all orgs gamble with. They lost that gamble, which is what every breach represents: a lost bet on the tradeoffs. Simply knowing about a bug, via a bug bounty or otherwise, is just that. And knowing is at best half the battle.
But to return to Dave's assertion about the bug bounty ecosystem itself and what it currently is good for and what it's used for - I have many thoughts. And even more songs.
"In a sense, the entire bug bounty market is a breeding ground for a species that can collect extremely low impact web vulnerabilities into a life sustaining nutrient cycle, like the crabs on volcanic plumes in the depths of the Pacific."
https://en.m.wikipedia.org/wiki/Mariana_Trench
That picture is accurate right now because of three things:
1. the types of organizations who have been adopting these incentives so far (mostly tech companies),
2. the typical targets (mostly web sites), and
3. the types of vulnerabilities they tend to use bug bounties to find (mostly low-hanging fruit that could have been found using common free tools & techniques).
Also a factor in this ecosystem is the geolocation and socioeconomic status of the script-kiddie bug-hunting masses, who, unlike the early professional penetration testers like us, don't have to adapt their techniques toward more interesting, higher-quality bugs: they can keep getting paid relatively small amounts that are worth much more to them in their part of the world.
That's good for the bug hunters in this category. It's actually bad for the evolution of the bug bounty ecosystem, and it's why Dave's characterization of what's happening *right now* is accurate.
The upside, though, is that the bug hunter masses can now access a safe marketplace for their skills regardless of where they are and whether they could ever become a "security consultant". That's generally good, but the *dominant* "species" of bug hunter, as Dave accurately points out, will remain relatively unskilled if we don't act with higher-order outcomes in mind.
It will be like an attempt at brewing beer that gets taken over and soured by undesirable flora before the brewer's yeast kicks in and creates the desired effect. And I've been brewing the defensive market for vulnerabilities far too long to watch idly and let the batch sour.
We ideally want to create an upward trend in bug hunter population skills, as well as move the bug hunter targets themselves towards more sophisticated bugs. We are not raising the tide, and we are not causing all ships to rise with it. Just slapping a bug bounty or vuln disclosure program on something misses the point.
One of the papers produced from the systems-modeling work I did as a visiting scholar at the MIT Sloan School will come out sometime this fall (2017) as a chapter in an MIT Press book. That paper looks specifically at bug bounty participant data at a specific point in the development of this economy. Bet you're curious about that supply-side snapshot of the bug bounty Mariana Trench. :) Look for that book with our research paper when it's out.
Bug bounties as they have mostly manifested *right now, at this specific stage in that ecosystem's development,* are a cheap, shiny thing to do, with few exceptions.
And no, the exceptional bug bounties are not the ones that pay the most; more on that later. The presence of a bug bounty program is currently being used by organizations to virtue-signal that they take security seriously because they pay for web bugs, while often missing or ignoring aggregate threats and ignoring their own failing internal processes for fixing bugs.
It matters very much what's on the inside, versus the superficial, shiny, bug bounty exterior.
http://youtu.be/93lrosBEW-Q
The alliterative buzz word "bug bounty", deceptively simple and so very misunderstood, needs to evolve as an accepted concept into the more accurate, more strategic "incentives".
Straight cash as the only lever for bringing all the (good) bugs to the yard is short-sighted and pollutes the entire defensive reward ocean in this evolution of the vulnerability and exploit markets. Cash is only one lever in this system, and it isn't the most effective one if you're buying bugs for defense purposes, as I've been saying for several years.
Perhaps if a strapping demigod of security would just repeat this for me, it would replace the econ 101 BS that has plagued the emerging bug bounty market. Of course, I'm sure they'd happily forget where they heard it first.
http://youtu.be/79DijItQXMM
https://www.rsaconference.com/writable/presentations/file_upload/ht-t08-the-wolves-of-vuln-street-the-1st-dynamic-systems-model-of-the-0day-market_final.pdf
Better-than-a-bug-bounty incentives that are much more effective for improving defense may not be direct cash; they may not be rewards at all. Instead they might be a much harder, deeply introspective process: examining what drives the heart of an organization, what it is doing to defend what's important to it, and whether its security choices, tradeoffs, resources, and budgets are actually working. What incentives can organizations use to tease out real risk, rather than being lazy and trendy and calling it a success?
No, a bug bounty would not have helped Equifax prevent what happened, and we need to seriously stop the VC-backed tsunami of propaganda that says it would. That stupid marketing trick, employed by at least one of the bug bounty platform vendors, should be beneath the critically-thinking readers of this list to entertain; it grossly oversimplifies a non-trivial problem.
I'm not even going to address the cyber insurance idea on this, and by now in this long operetta of a post, it should be obvious as to why.
Bug bounties and cyber insurance are not a remedy for a fundamentally unscalable remediation model that most orgs and governments face today. That's precisely why 94% of the Forbes Global 2000 in 2015 didn't even have a front door to report a vulnerability, let alone a bug bounty, and it's not much better now.
They struggle to fix the bugs they already know about, and the bottlenecks in that *internal* process are what need work. Putting up a front door to receive bug reports, even without a bug bounty, when there's nothing operationally sufficient inside the org to address what comes through that door, just adds chaos to an org already drowning in technical debt.
It's time to return the heart of the bug bounty ocean to stop the spread of this intellectual and ecosystem-poisoning darkness. I've been staring at the edge of the water, long as I can remember...
http://youtu.be/GeIHvhnQbbI
- k8eM0ana
http://youtu.be/Lg_cweoJXyo
If Equifax had a public bug bounty program, someone would have reported the Java RCE in March 2017 and picked up $10K or more for it. But no, Equifax did not have a public bug bounty program. Say what you will about the pros and cons of a bug bounty program, especially for financial institutions which "know better than the public how to protect themselves", but at least in this case a known issue would have been well documented much earlier. We should encourage other credit and financial companies to consider public or at the very least private bug bounty programs. It's a mess to operate them, but not patching a known critical web flaw ASAP that allows RCE is precisely the legal definition of negligence. Equifax should pay dearly for it.
Perhaps it's time to consider federal Cyber Security Insurance laws for such companies, which would force them to pay fees to operate on the Internet, just like everyone that drives a car on the road. If you crashed your car every time you got on the highway, or you damaged 140 million cars while driving, you would lose your license for some time. Why hasn't Equifax lost their license to operate on the internet for some time? How about a 2-year hiatus on their annual revenue to punish them? Just a thought. Maybe Halvar can chime in on why Cyber Security Insurance regulation like that is OR is not the answer. He has been working on that lately...
_______________________________________________
Dailydave mailing list
https://lists.immunityinc.com/mailman/listinfo/dailydave