List:       full-disclosure
Subject:    Re: [Full-disclosure] Two MSIE 6.0/7.0 NULL pointer crashes
From:       Christian Sciberras <uuf6429 () gmail ! com>
Date:       2010-02-28 20:51:18
Message-ID: 3af3d47c1002281251u75940ccra94d98fb1a83e160 () mail ! gmail ! com

"Sometimes the vulnerability itself is a functional requirement (or
considered to be one of them). Has anyone mentioned ActiveX?"
Or NPAPI, for that matter. Really, other than the automated-after-user-accepts
installation, they're both the same.

On Sun, Feb 28, 2010 at 9:22 PM, Pavel Kankovsky <
peak@argo.troja.mff.cuni.cz> wrote:

> On Sun, 24 Jan 2010, Dan Kaminsky wrote:
>
> It took me more than one month to write this response? Ouch!
>
> > >  When you discover the program is designed too badly to be
> > > maintained, the best strategy is to rewrite it.
> > No question.  And how long do you think that takes?
>
> It depends. Probably in the order of several years for a big application.
>
> On the other hand, existing code is not always so bad that one has to throw
> it all out and rewrite everything from scratch in one giant step.
>
> > Remember when Netscape decided to throw away the Navigator 4.5
> > codebase, in favor of Mozilla/Seamonkey?  Remember how they had to do
> > that *again* with Mozilla/Gecko?
>
> Mozilla (even the old Mozilla Application Suite known as Seamonkey today)
> has always been based on Gecko (aka "new layout", "NGLayout").
>
> The development of Gecko started in 1997 as an internal Netscape project.
> Old Netscape Communicator source (most of it) was released in March 1998.
> The decision not to use it was made in October 1998. Gecko source was
> released in December 1998. Mozilla 0.6 was released in December 2000,
> 0.9 in May 2001 and 1.0 in June 2002. This makes approximately 5 years.
>
> Firefox started as a "mozilla/browser" branch approximately in April 2002
> (the idea is probably dating back to mid 2001). The first public version
> known as Phoenix 0.1 was released in September 2002, 0.9 was released in
> June 2004, 1.0 in November 2004. 2.5 years.
>
> To put things into a broader perspective: MSIE 5.0 was released in March
> 1999, 6.0 in August 2001, 7.0 in October 2006, and 8.0 in March 2009.
> This makes 2.5 years from 5.0 to 6.0, 5 years to 7.0 and 2.5 years to 8.0.
> The development of Google Chrome is reported to have started in spring
> 2006 and 1.0 was released in December 2008. 2.5 years again (but they
> reused WebKit and other 3rd party components).
>
> > "Hyperturing computing power" Not really sure what that means,
>
> The ability to solve problems of Turing degree [1] greater than zero.
> "Superturing" is probably a more common term although various terms
> starting with "hyper-"  are used as well [2].
>
> (Alternatively, it can relate to a certain kind of AIs in Orion's Arm
> universe [3] but that meaning is not relevant here. <g>)
>
> For the most part it is a purely theoretical notion but there is at least
> one kind of oracle that is more or less physically feasible: a hardware
> random number generator--such an oracle might look pointless but quite a
> lot of cryptography relies on the ability to generate numbers that
> cannot be guessed by an adversary.
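>
> (To make that concrete: a minimal C sketch of tapping such an oracle on a
> typical Unix system, assuming the kernel exposes it as /dev/urandom; the
> device name and its hardware backing vary by platform.)
>
>     /* Read bytes an adversary cannot guess from the kernel RNG, the kind
>      * of more-or-less physically feasible oracle described above. */
>     #include <stdio.h>
>     #include <stdlib.h>
>
>     int main(void)
>     {
>         unsigned char key[16];
>         FILE *f = fopen("/dev/urandom", "rb");
>
>         if (f == NULL || fread(key, 1, sizeof key, f) != sizeof key) {
>             perror("/dev/urandom");
>             return EXIT_FAILURE;
>         }
>         fclose(f);
>
>         /* 'key' now holds 128 bits unpredictable to an attacker, assuming
>          * the kernel pool is properly seeded from hardware events. */
>         for (size_t i = 0; i < sizeof key; i++)
>             printf("%02x", key[i]);
>         putchar('\n');
>         return EXIT_SUCCESS;
>     }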
>
> Anyway, real computers are not true Turing machines and they are not Turing
> complete. The point of my comment, translated into a more realistic
> setting, is as follows: one must assume the attacker can wield much more
> computing power than the defender.
>
> [1] <http://en.wikipedia.org/wiki/Turing_degree>
> [2] <http://en.wikipedia.org/wiki/Hypercomputation>
> [3] <http://www.orionsarm.com/eg-topic/45c54923c3496>
>
> > > But I do not think this case is much different from the previous one:
> > > most, if not all, of those bugs are elementary integrity violations
> > > (not prevented because the boundary between trusted and untrusted data
> > > is not clear enough) and race conditions (multithreading with locks is
> > > an idea on the same level as strcpy).
> > Nah, it's actually a lot worse. You have to start thinking in terms of
> > state explosion -- having turing complete access to even some of the
> > state of a remote system creates all sorts of new states that, even if
> > *reachable* otherwise, would never be *predictably reachable*.
>
> I dare say it can make the analysis more complicated if the
> ill-defined difficulty of exploitation is taken into consideration.
>
> In many cases the ability to execute a predefined sequence of operations
> is everything you need to reach an arbitrary state of the system (from a
> known initial state). You do not need anything as strong as a Turing
> machine; even a finite state machine is too powerful. A single finite
> sequence of operations (or perhaps a finite set of them) is sufficient.
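>
> (A toy C illustration of that point, with made-up state and operation
> names: a single straight-line sequence of inputs, replayed from the known
> initial state, reaches the target state every time; the "exploit" needs
> no loops or decisions at all.)
>
>     #include <stdio.h>
>
>     enum state { INIT, AUTH_BYPASSED, ADMIN };
>
>     /* Deliberately buggy transition function standing in for the target
>      * system; 'X' and 'Y' are hypothetical operations. */
>     static enum state step(enum state s, char op)
>     {
>         if (s == INIT && op == 'X') return AUTH_BYPASSED;
>         if (s == AUTH_BYPASSED && op == 'Y') return ADMIN;
>         return s;
>     }
>
>     int main(void)
>     {
>         const char exploit[] = "XY";  /* the whole "program": one fixed sequence */
>         enum state s = INIT;
>
>         for (const char *p = exploit; *p; p++)
>             s = step(s, *p);
>
>         printf("reached ADMIN: %s\n", s == ADMIN ? "yes" : "no");
>         return 0;
>     }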
>
> > I mean, use-after-free becomes ludicrously easier when you can grab a
> > handle and cause a free.
>
> I admit use-after-free does not fit well into the two categories I
> mentioned. But it is still a straightforward violation of a simple
> property (do not deallocate memory as long as any references to it exist)
> and it is quite easy to avoid it (e.g. use a garbage collector).
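>
> (For concreteness, a minimal C sketch of the pattern under discussion, all
> names made up: the object is freed through one reference while a second,
> stale "handle" survives and is used afterwards. Enforcing the invariant
> above, or letting a garbage collector manage lifetimes, rules this out.)
>
>     #include <stdio.h>
>     #include <stdlib.h>
>     #include <string.h>
>
>     struct session { char user[32]; };
>
>     int main(void)
>     {
>         struct session *s = malloc(sizeof *s);
>         if (s == NULL)
>             return EXIT_FAILURE;
>         strcpy(s->user, "alice");
>
>         struct session *handle = s;  /* the grabbed second reference */
>
>         free(s);                     /* "cause a free" via the first one */
>
>         /* The invariant is violated: the memory is gone but a reference
>          * remains; whatever now occupies that storage is read as valid. */
>         printf("user: %s\n", handle->user);  /* undefined behaviour */
>         return 0;
>     }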
>
> > Sure.  But we're not talking about what should be done before you
> > write.  We're talking about what happens when you screw up.
>
> I do not think it is reasonable to separate these two questions.
> After all people are supposed to learn from their mistakes and avoid them
> in the future.
>
> > > (An interesting finding regarding the renegotiation issue: [...]
> > Eh.  This was a subtle one, [...]
>
> I do not want to downplay the ingenuity of Marsh Ray and Steve Dispensa
> (and Martin Rex) but...
>
> Any attempt to formalize the integrity properties SSL/TLS is supposed to
> guarantee would inevitably lead to something along the lines of "all data
> sent/received by a server within the context of a certain session must
> have been received/sent by the same client". And I find it rather
> implausible that the problem with renegotiation would have escaped
> detection if those properties had been checked thoroughly.
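>
> (Purely as a schematic C illustration of that property, not actual TLS
> code and with every name hypothetical: a checker would pin whatever peer
> identity the initial handshake established to the session and refuse any
> renegotiated handshake that does not carry it forward.)
>
>     #include <stdbool.h>
>     #include <stdio.h>
>     #include <string.h>
>
>     struct session {
>         char peer_id[64];   /* identity fixed by the initial handshake */
>         bool established;
>     };
>
>     /* Called for the initial handshake and for every renegotiation. */
>     bool accept_handshake(struct session *s, const char *peer_id)
>     {
>         if (!s->established) {
>             /* First handshake: bind the peer identity to the session. */
>             strncpy(s->peer_id, peer_id, sizeof s->peer_id - 1);
>             s->peer_id[sizeof s->peer_id - 1] = '\0';
>             s->established = true;
>             return true;
>         }
>         /* Renegotiation: accept only a provable continuation of the same
>          * peer's session; the check whose absence caused the trouble. */
>         return strcmp(s->peer_id, peer_id) == 0;
>     }
>
>     int main(void)
>     {
>         struct session sess = {0};
>         printf("initial handshake (client-A): %d\n", accept_handshake(&sess, "client-A"));
>         printf("renegotiation by attacker-B:  %d\n", accept_handshake(&sess, "attacker-B"));
>         return 0;
>     }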
>
> > >> c) The system needs to work entirely the same after.
> > > Not entirely. You want to get rid of the vulnerability.
> > I wouldn't consider being vulnerable "working" :)  But point taken.
> > The system needs to meet its functional requirements entirely the same
> > after.
>
> Sometimes the vulnerability itself is a functional requirement (or
> considered to be one of them). Has anyone mentioned ActiveX?
>
> --
> Pavel Kankovsky aka Peak                          / Jeremiah 9:21        \
> "For death is come up into our MS Windows(tm)..." \ 21st century edition /
>

_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.grok.org.uk/full-disclosure-charter.html
Hosted and sponsored by Secunia - http://secunia.com/
