Discussion:
Graphing: Don't believe everything you see.
Dave Aitel
2007-02-07 00:57:07 UTC
Permalink
Graphs can be quite misleading. They make people think they see
something which is blindingly obvious, but totally wrong.

http://blogs.zdnet.com/threatchaos/?p=311
(Check out the pictures.)

"""
Windows is inherently harder to secure than Linux. There I said it.
The simple truth.

Many millions of words have been written and said on this topic. I
have a couple of pictures. The basic argument goes like this. In its
long evolution, Windows has grown so complicated that it is harder to
secure. Well these images make the point very well. Both images are a
complete map of the system calls that occur when a web server serves
up a single page of html with a single picture. The same page and
picture. A system call is an opportunity to address memory. A hacker
investigates each memory access to see if it is vulnerable to a buffer
overflow attack. The developer must do QA on each of these entry
points. The more system calls, the greater potential for
vulnerability, the more effort needed to create secure applications.

"""

As soon as I saw those pictures, I was like "Hey, Sana Security guys
spend hours staring at this stuff" and lo and behold, that's where
they come from. The more system calls, the harder to secure with
Sana's particular flavor of HIDS. But not "the greater potential for
vulnerability".

You don't get to see the syscall names here, but there are a few large
segments of IIS that you don't see anywhere in Apache (I've read the source
code for both, so bear with me):
1. The metabase - essentially a registry of configuration data that
works on a per-directory or per-page basis. This is rather complex
stuff, requiring MSRPC calls and all sorts of craziness. But it
doesn't necessarily add to the insecurity of the product.
2. Threading and impersonation. My bet is that the syscall graph he
generated for Apache was in forking mode. No need to thread or handle
asynchronous operations at all. Just read(data); handle(data), as in the
sketch below.
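
A toy sketch of that forking mode (my own illustration, not Apache's actual
code; the port and the canned response are made up):

    /* Toy sketch of a forking server's per-request path: accept, fork,
       read, write, close.  Illustrative only, not Apache source. */
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int srv, c;
        struct sockaddr_in sa;
        char buf[4096];

        srv = socket(AF_INET, SOCK_STREAM, 0);
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(8080);
        sa.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(srv, (struct sockaddr *)&sa, sizeof(sa));
        listen(srv, 16);
        signal(SIGCHLD, SIG_IGN);       /* let the kernel reap the children */

        for (;;) {
            c = accept(srv, NULL, NULL);
            if (c < 0)
                continue;
            if (fork() == 0) {          /* child: this is the whole "graph" */
                read(c, buf, sizeof(buf));                   /* read(data)  */
                /* handle(data) would parse the request in a real server */
                write(c, "HTTP/1.0 200 OK\r\n\r\nhi\n", 22); /* respond    */
                close(c);
                _exit(0);
            }
            close(c);                   /* parent closes its copy and loops */
        }
    }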

Complexity only correlates with insecurity; it doesn't let you make
order-of-magnitude judgment calls. Especially not based on graphs like
that.

For the record, or at least, as a reminder to the record, anything
based solely on system call ordering is going to have a bugger of a
time dealing with CreateThread(). On Windows you might be better off
ignoring system call ordering entirely and dealing only with system
call arguments. Having more system calls might make the entropy of the
arguments of any one system call much smaller (ioctl() has very high
argument entropy). Based on that, Windows might actually be MORE
secure, just looked at from a different angle than the call graph he
chooses to represent.
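
To make "argument entropy" concrete, here's a toy sketch (made-up numbers,
not from any real HIDS) that estimates the Shannon entropy of the values
seen for a single syscall argument; a low number means the argument is
predictable enough to whitelist easily:

    /* Toy sketch: Shannon entropy of one syscall argument across a trace.
       The traces in main() are made up for illustration.  Build: cc x.c -lm */
    #include <math.h>
    #include <stdio.h>

    static double arg_entropy(const unsigned *vals, int n)
    {
        double h = 0.0;
        int i, j, count;

        for (i = 0; i < n; i++) {
            for (j = 0; j < i && vals[j] != vals[i]; j++)
                ;
            if (j < i)
                continue;               /* this value was already counted */
            for (count = 0, j = 0; j < n; j++)
                if (vals[j] == vals[i])
                    count++;
            h -= ((double)count / n) * log2((double)count / n);
        }
        return h;                       /* bits per observation */
    }

    int main(void)
    {
        /* hypothetical traces: CreateFile access masks vs. ioctl request codes */
        unsigned create_access[] = { 0x80000000, 0x80000000, 0x80000000,
                                     0x40000000, 0x80000000, 0x80000000 };
        unsigned ioctl_request[] = { 0x5401, 0x8912, 0x541b,
                                     0x5413, 0x8927, 0x5452 };

        printf("CreateFile access entropy: %.2f bits\n",
               arg_entropy(create_access, 6));
        printf("ioctl request entropy:     %.2f bits\n",
               arg_entropy(ioctl_request, 6));
        return 0;
    }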

- -dave
Felix von Leitner
2007-02-07 02:58:54 UTC
Permalink
Post by Dave Aitel
Complexity only correlates with insecurity; it doesn't let you make
order-of-magnitude judgment calls. Especially not based on graphs like
that.
Actually, an asynchronous webserver needs these syscalls to handle the
two requests:

GetQueuedCompletionStatus returns
[socket+AcceptEx+CreateIoCompletionPort to queue the next request]
CreateFile on the file to be served
GetFileSize et al to get header data (optional)
TransmitFile to send the response
CloseHandle to close the file
ReadFile to read the second request

GetQueuedCompletionStatus returns again
CreateFile on the file to be served
GetFileSize et al to get header data (optional)
TransmitFile to send the response
CloseHandle to close the file
closesocket

That's it. No, really. Sprinkle in some VirtualAlloc and friends for
malloc and free, but that's it.
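
To make the sequence concrete, here's a rough sketch of the serve-one-file
step (purely illustrative, not anyone's production code; error handling is
mostly omitted, and a real server would use AcceptEx on a completion port
instead of a blocking accept):

    /* Illustrative only: serve one file on an already-accepted socket with
       CreateFile + GetFileSizeEx + TransmitFile + CloseHandle.
       Build on Windows: cl serve_one.c ws2_32.lib mswsock.lib */
    #include <winsock2.h>
    #include <mswsock.h>
    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    static int serve_one(SOCKET s, const char *path)
    {
        HANDLE f;
        LARGE_INTEGER size;
        char hdr[128];
        int hdrlen;
        TRANSMIT_FILE_BUFFERS tfb;
        BOOL ok;

        f = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                        OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
        if (f == INVALID_HANDLE_VALUE)
            return -1;
        GetFileSizeEx(f, &size);        /* "GetFileSize et al" for the header */
        hdrlen = _snprintf(hdr, sizeof(hdr),
                           "HTTP/1.0 200 OK\r\nContent-Length: %I64d\r\n\r\n",
                           size.QuadPart);
        tfb.Head = hdr;                 /* header and file go out in one call */
        tfb.HeadLength = hdrlen;
        tfb.Tail = NULL;
        tfb.TailLength = 0;
        ok = TransmitFile(s, f, 0, 0, NULL, &tfb, 0);
        CloseHandle(f);                 /* the "close the file" step */
        return ok ? 0 : -1;
    }

    int main(void)
    {
        WSADATA wsa;
        SOCKET srv, c;
        struct sockaddr_in sa;

        WSAStartup(MAKEWORD(2, 2), &wsa);
        srv = socket(AF_INET, SOCK_STREAM, 0);
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(8080);
        sa.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(srv, (struct sockaddr *)&sa, sizeof(sa));
        listen(srv, 1);
        c = accept(srv, NULL, NULL);    /* blocking accept, for brevity only */
        serve_one(c, "index.html");
        closesocket(c);
        closesocket(srv);
        WSACleanup();
        return 0;
    }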

So if you see a fine-print graph showing a couple hundred syscalls being
made by a web server, that's a pretty good indicator that there's something
wrong with it.

Keep things simple.

That said: this particular troll is from mid-2006 and was on Slashdot back
then, too. There is no reason to get worked up about it
now.

Felix

PS: Apache is a bloated pig. People use it because so many other
people are using it, not because there are any actual rational reasons
to use it. IIS is a pig, too. People use it because it comes with
Windows, and because it cheats (so it's faster than a pure user space
web server can be).
d***@geer.org
2007-02-07 07:35:38 UTC
Permalink
If anyone wants to argue about whether complexity
and security are negatively correlated, then let's
get to it.

--dan, resisting burning bandwidth unasked
Adam Shostack
2007-02-07 18:39:26 UTC
Permalink
Speaking for myself, I think there are much more interesting questions
than looking at correlations between defects and complexity. For
example, we could look at correlations between failures in the real
world and training/education.

The breach notices that Attrition is accumulating
(http://attrition.org/dataloss) give us a set of real world failure
data. That's something we've never really had. Now we can start
mining it and learning things. For example, does the number of CISSPs
employed by an organization correlate with the reports of failures
compared to other similar orgs? Is that correlation positive or
negative? Does "user education" have an effect?

There's a huge amount of data in the attrition data set, and it all
involves real pain that real organizations are feeling as they try to
secure their data. It's worth studying.

Adam

On Wed, Feb 07, 2007 at 02:35:38AM -0500, ***@geer.org wrote:
|
| If anyone wants to argue about whether complexity
| and security are negatively correlated, then let's
| get to it.
|
| --dan, resisting burning bandwidth unasked
Adam Shostack
2007-02-08 19:37:14 UTC
Permalink
Avery,

I'll know it when I see it. :)

I was really excited to see "Is There a Cost to Privacy Breaches? An
Event Study," Alessandro Acquisti, Allan Friedman, and Rahul
Telang. WEIS 2006 and ICIS 2006.
(http://www.heinz.cmu.edu/~acquisti/papers/acquisti-friedman-telang-privacy-breaches.pdf)
This study debunked the idea that breach notices hurt the company's
shareholders in the long run. It's an important misconception, and
I'm glad to have data to show that it's wrong.

Similarly, I was pleased to see my co-blogger Chris Walsh refute a
claim about 'the industry's dumbest practice' by looking at data.
(http://www.emergentchaos.com/archives/2006/12/lets_look_at_some_data.html)

So I don't know what I want to see in detail. But what I want to see,
in a broad sense, is that we get over our shame over having made
mistakes, and start discussing what really goes wrong. I want to see
us discussing it in a data driven fashion. Data is not the plural
of anecdote. Data comes from having a consistent sampling method.
"Compelled by law to disclose, and unable to find a loophole" is
admittedly not the best sampling method, but it's better than
anecdote, and it's better than voluntary anonymous survey. I hope
that by understanding that the sky isn't falling, we can evolve better
sampling and disclosure, and start to make real progress by studying
problems.

I'll get off my soapbox before Dave kills me now.

Adam


On Wed, Feb 07, 2007 at 09:15:14PM -0500, Avery Sawaba wrote:
| I'm actually doing some analysis on this data right now (I'm
| ***@attrition.org). Is there anything in particular you'd like to see?
| Perhaps I already have some of what you're looking for, but I haven't posted
| any of my metrics. I can drop a note to the list if/when something is posted.
|
| --Sawaba
Douglas F. Calvert
2007-02-09 17:57:06 UTC
Permalink
Post by Adam Shostack
Avery,
I'll know it when I see it. :)
I was really excited to see "Is There a Cost to Privacy Breaches? An
Event Study," Alessandro Acquisti, Allan Friedman, and Rahul
Telang. WEIS 2006 and ICIS 2006.
(http://www.heinz.cmu.edu/~acquisti/papers/acquisti-friedman-telang-privacy-breaches.pdf)
This study debunked the idea that breach notices hurt the company's
shareholders in the long run. It's an important misconception, and
I'm glad to have data to show that it's wrong.
Why wouldn't you want the market to punish actors with security lapses? Economic incentives are
the only way security will be taken seriously.


In related news:

"Mutually Assured Protection: Toward Development of Relational
Internet Data Security and Privacy Contracting Norms"
SECURING PRIVACY IN THE INTERNET AGE, Radin et al., eds.,
Stanford University Press, 2006


Contact: ANDREA M. MATWYSHYN
University of Florida, University of Cambridge
Email: ***@ufl.edu
Auth-Page: http://ssrn.com/author=627948

Full Text: http://ssrn.com/abstract=914420

ABSTRACT: This paper empirically and normatively explores the
current data security contracting regime that exists online.
Using an analytical lens from complexity theory, this article
presents an empirical study of 75 websites of publicly traded
companies across time, tracking legal emergence of data security
contracting practices. It then argues that a new legal
construction for data security contracting is needed to replace
the current regime of terms of use and privacy policies; current
internet data security contracting structures do not facilitate
building of commercial trust.
--
Douglas F. Calvert -/- ***@anize.org
0xC9541FB2 / 0817 30D4 82B6 BB8D 5E66 06F6 B796 073D C954 1FB2
George Ou
2007-02-07 06:27:38 UTC
Permalink
OK, this is really stupid. Why is it, then, that Apache has so many more
critical flaws than IIS 6.0?

IIS 6.0
http://secunia.com/product/1438/?task=advisories

Apache 2.0
http://secunia.com/product/73/?task=advisories

Note that a lot of those Apache advisories are MULTIPLE exploits.


Also note that Windows Server 2003 has had a fairly solid track record on
security when you count the number of critical exploits over its lifetime
compared to Linux.

Take a look at Microsoft SQL 2005 and you'll see that's been ROCK SOLID with
ZERO vulnerabilities.
http://secunia.com/product/6782/?task=advisories
Compare that to the mess of Oracle over the same time period.


So let's not base our analysis on some stupid trumped-up diagram, and let's
not make stupid generalizations about platforms. Let's try to be objective
and factual. There are times one can bash Microsoft, but this so-called
picture "analysis" just isn't one of them.


George Ou

Robert E. Lee
2007-02-07 16:05:46 UTC
Permalink
Post by George Ou
Take a look at Microsoft SQL 2005 and you'll see that's been ROCK SOLID with
ZERO vulnerabilities.
http://secunia.com/product/6782/?task=advisories
Compare that to the mess of Oracle over the same time period.
So let's not base our analysis on some stupid trumped up diagram and let's
not make stupid generalizations about platforms. Let's try and be objective
and factual.
In the spirit of "[silly] generalizations".... the number of
vulnerabilities publicly disclosed for a product doesn't seem to be a
valid metric for measuring security between products. There are different
disclosure policies for every organization/product. Some applications
are just going to get more attention than others.

Closed source vs. open source changes the methods available to an outside
researcher for testing. For results to be compared, the same tests have to
be run equally for both projects.

Comparing the end result (vulnerability count) without taking into account
how we got to the end result (testing methodology) reminds me a bit of:

"If... she... weighs... the same as a duck,... she's made of wood. And
therefore? A witch!!!"

Cheers :),

Robert
--
Robert E. Lee
Chief Information Officer
http://www.dyadsecurity.com

phone: +46-708-474-320
fax : +46-0455-13960
email: ***@dyadsecurity.com
jf
2007-02-08 03:03:05 UTC
Permalink
Really, almost all of these metrics are flawed. Of the critical
vulnerabilities listed, many are things like a critical bug in OpenSSL, or
problems in an ftp proxy with IPv6 sockets, et cetera; which, I guess,
depending on who you are, may or may not be critical, but to most of us who
aren't using any type of proxy or IPv6 sockets, it's not so important.

This is important to take into account when reviewing those comparisons of
critical bug counts. If we compare MS Office to OpenOffice in this light, it
would show that OO is greatly superior in security to MS Office because of
the number of critical flaws found, but I'd be willing to bet that many of
us would not necessarily agree with that conclusion. The number of reported
bugs is just that, and shouldn't be used as a metric to determine whether a
product is secure or not. (However, when a bug is reported and then, some
time later, another similar bug turns up in the same region of code, that
does indicate a failure on the vendor's part to really care at all, which
IMHO is a much better metric.)

Then we have things like 'time to patch' metrics, which are also flawed.
For instance, does MS release patches for third-party products? Or rather,
if there is (yet another) bug in a CA product and MS doesn't patch it, do
we count that against them? Why do we do that for Redhat? Maybe that isn't
the best example, as Redhat did indeed ship the product, but where does
responsibility lie? What if the bug is on the 'extras' CD in an unstable
directory - do we count that? How about if it took organization Y several
weeks to produce a patch for their product, and then in less than Z hours
the OS vendor provides the patch to their customers - do we count the time
as 'several weeks' or as Z hours? All that said, because of the different
models, comparing time to patch for Windows against Linux/BSD/any of the
OSs that consist mostly of third-party applications gives a false view of
the situation.

As for the graphs, they give an idea of the potential number of bugs, but
provide no real firm data - speaking in terms of probability, of course. To
declare, however, that one product is more secure than another simply based
on a graph like that is absurd and silly, and I think everyone realizes
this.
--
Success is not final, failure is not fatal:
it is the courage to continue that counts.

-- Sir Winston Churchill
LMH
2007-02-07 18:15:39 UTC
Permalink
Post by George Ou
Ok this is really stupid. Why is it that Apache has so many more critical
flaws than IIS 6.0 then?
IIS 6.0
http://secunia.com/product/1438/?task=advisories
Apache 2.0
http://secunia.com/product/73/?task=advisories
Note that a lot of those Apache advisories are MULTIPLE exploits.
http://secunia.com/product/4661/

lighttpd "just" has 3 known "advisories" released there. And well,
running lighttpd on a production system and being concerned about
security is pretty much like walking nude in a donkey farm, fully
covered with pheromones.

Any 'study' built upon known vulnerability statistics rests on a flawed
assumption to begin with.

Not that I'm doing propaganda for Apache. Given that nowadays people try to
promote mod_security and friends as the way to improve the security of
their 'web applications', the situation isn't really nice for them.

Cheers.
Alexander Sotirov
2007-02-09 01:49:30 UTC
Permalink
Post by Dave Aitel
For the record, or at least, as a reminder to the record, anything
based solely on system call ordering is going to have a bugger of a
time dealing with CreateThread().
What is the problem with CreateThread? You just need to look at the syscall
ordering per thread, not per process, and everything will be fine.

Alex
Dave Aitel
2007-02-09 21:17:23 UTC
Permalink
In the famous Buffy episode "Hush", Joss Whedon demonstrates through a
creative plot device - removing the voices from the entire town - that
often talking is the opposite of communication. But I don't have time to
draw pretty pictures, so here goes.

Imagine a simple host intrusion protection device that makes a graph of the
system call chains of a process as it runs normally, and then in the future
restricts the process to those system call chains. These chains start with a
CreateThread() and can end at any point, but typically with an ExitThread().
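
To be concrete, here's a toy sketch of that kind of model - the classic
short-sequence approach, not any particular vendor's code, with made-up
syscall numbers - which learns the 3-grams of syscalls seen during training
and flags anything it hasn't seen before:

    /* Toy syscall-sequence model: learn the 3-grams seen in a training trace,
       then flag the first 3-gram in a new trace that was never seen.
       Illustrative only; the syscall numbers are made up. */
    #include <stdio.h>
    #include <string.h>

    #define NGRAM  3
    #define MAXSEQ 1024

    struct model {
        int seq[MAXSEQ][NGRAM];   /* known-good 3-grams */
        int count;
    };

    static void learn(struct model *m, const int *trace, int len)
    {
        int i, j, k, dup;

        for (i = 0; i + NGRAM <= len; i++) {
            for (j = 0, dup = 0; j < m->count && !dup; j++) {
                for (k = 0; k < NGRAM && m->seq[j][k] == trace[i + k]; k++)
                    ;
                dup = (k == NGRAM);
            }
            if (!dup && m->count < MAXSEQ)
                memcpy(m->seq[m->count++], &trace[i], sizeof(int) * NGRAM);
        }
    }

    static int check(const struct model *m, const int *trace, int len)
    {
        int i, j, k;

        for (i = 0; i + NGRAM <= len; i++) {
            for (j = 0; j < m->count; j++) {
                for (k = 0; k < NGRAM && m->seq[j][k] == trace[i + k]; k++)
                    ;
                if (k == NGRAM)
                    break;
            }
            if (j == m->count)
                return i;         /* offset of the first unknown 3-gram */
        }
        return -1;                /* trace matches the learned model */
    }

    int main(void)
    {
        /* hypothetical syscall numbers: 1=accept 2=recv 3=open 4=write 5=close */
        int normal[] = { 1, 2, 3, 4, 5 };
        int attack[] = { 1, 2, 3, 3, 4 };   /* same calls, different ordering */
        struct model m;

        m.count = 0;
        learn(&m, normal, 5);
        printf("normal trace: first unknown 3-gram at %d\n", check(&m, normal, 5));
        printf("attack trace: first unknown 3-gram at %d\n", check(&m, attack, 5));
        return 0;
    }

The hooking trick below works precisely because a model like this never looks
at the arguments.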

Given this simple system, we can defeat it with a "hooker shellcode" which
hooks the functions our shellcode wants to call. For example, "accept()",
"recv()", "CreateFile", "Write()" and so on. Because system call arguments
are not looked at, we replace the original arguments with the arguments we
would prefer, and then let the process continue. Each system call may happen
in a completely different thread, but it will happen exactly as the HIPS
thinks it should, just with different arguments.

Essentially the problem is that the HIPS models on a per-thread basis, and
there is no per-thread memory isolation. Of course to do the hooks
themselves you'll want to call VirtualProtect, but we can do something more
invasive to take over every thread's exception handler and play our little
reindeer games. We can, after all, write into every thread's stack.

And of course, it may be that statistically, CreateThread() branches quite
predictably. So if we can call CreateThread, we might be able to do anything
we want after that point.
CreateThread(DoAcceptData()); CreateThread(DoWriteDataToFile());
CreateThread(DoExecFile()); and so on.

Today I played a lot more with Vista. It turns out it DOES have the
10-half-open TCP connection limit. And there's no way to shut that off. I
take back what I said about it being better than XP SP2.

-dave
Post by Alexander Sotirov
Post by Dave Aitel
For the record, or at least, as a reminder to the record, anything
based solely on system call ordering is going to have a bugger of a
time dealing with CreateThread().
What is the problem with CreateThread? You just need to look at the syscall
ordering per thread, not per process, and everything will be fine.
Alex
Ed Ray
2007-02-12 16:14:57 UTC
Permalink
Post by Dave Aitel
Today I played a lot more with Vista. It turns out it DOES
have the 10-half-open TCP connection limit. And there's no
way to shut that off. I take back what I said about it being
better than XP SP2.
-dave
Yeah, just noticed this myself. Seems the lvllord patch to up the
half-open connections from 10 to 50 does not work on Vista.

:(

Edward Ray