
List:       owasp-washington
Subject:    Re: [Owasp-washington] Web App Scanner Shootout
From:       "Alejandro (Alex) Buschel" <alex.buschel () gmail ! com>
Date:       2010-02-05 9:24:34
Message-ID: 8d2b616d1002050124i791295e8uc2babf3d6810baab () mail ! gmail ! com

Matt, thanks for the lengthy reply, much appreciated.

Regarding your last point: in order to have time to do manual testing,
you have to have a 'target population'. That is what the scanners
provide. Now, if you have to spend too much time vetting results, at
some point it becomes very expensive. If you are charging on an open
contract, you may get away with it, although it will drive you crazy.
If, on the other hand, you are working on a fixed rate basis, and you
want to do a thorough job, vetting of false results becomes a drag.

We have seen interesting logic flaws that were found only by manual
testing, things like captcha bypass and caching issues.

I think the level of effort in terms of qualified personnel and
tracking tools is not fully realized by most organizations. I know we
cover many different customer bases, but it is rare to see web app sec
teams provided with the tools needed to do an outstanding job. For
many, the goal is to 'pass PCI'.

I guess the season is wearing on me....

Hope to see you guys at some point, won't make it to shmoocon, but
will be at RSA.

Alex

On Thu, Feb 4, 2010 at 8:41 PM, Matt Fisher <matt@piscis-security.com> wrote:
> Hey Alex,
> 
> I'd say we're quite in agreement ... really this is all just reinforcement of my
> scan-monkey talk.
> > Testing should be done against a known set of issues
> Yes, and a broad and deep set of issues.  Testing against an input field that
> allows a <script> tag doesn't help you determine whether the scanner can find an
> XSS that requires some crazy escaped evasive string; your instrumented app has to
> have increasing levels of defenses and complexity added to it so that you can
> tell where things break down.
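The escalating-defense idea can be sketched with a toy set of filter tiers. This is purely illustrative: the tiers, payloads, and `apply_filter` helper are hypothetical and don't reflect any particular scanner's instrumented test app.

```python
import html

# Hypothetical filter tiers an instrumented test app might apply to input.
# Tier 0: no filtering; tier 1: naive <script> tag strip; tier 2: full HTML escape.
def apply_filter(payload: str, tier: int) -> str:
    if tier == 0:
        return payload
    if tier == 1:
        return payload.replace("<script>", "").replace("</script>", "")
    return html.escape(payload)

# A naive payload is caught by the tier-1 strip, but an evasive payload
# (no literal <script> tag) slips straight past it.
naive = "<script>alert(1)</script>"
evasive = "<img src=x onerror=alert(1)>"

assert "<script>" not in apply_filter(naive, 1)   # naive strip catches this
assert "onerror" in apply_filter(evasive, 1)      # ...but misses this
assert "<" not in apply_filter(evasive, 2)        # full escape neutralizes both
```

A scanner that "passes" only at tier 0 tells you nothing about its evasion logic; you learn where it breaks down by walking it up the tiers.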
> > Accuracy is more important than speed. Speed may mean less test cases...
> It absolutely is.  Besides, if it does a decent job of holding state then you're
> talking about computer hours anyhow, not human hours.  Unfortunately, though, I
> often find myself babysitting scans on tricky sites due to bizarre auth issues,
> network anomalies, etc.
> > and enforcement of domain restrictions are critical.
> Agreed, although I haven't seen any problems with domain restrictions in any of
> the scanners I've used.  They all seem quite good about recognizing "this link
> isn't our app" and not trying to scan 'the internet'.  Have you seen issues with
> that? I'd love to hear about them.
> > with third-party authentication providers, readability and flexibility
> of results, export features and more.
> Agreed, and dealing with tricky auth scenarios is pretty important in the
> enterprise, where you could have some seriously wacky stuff going on.
> Readability and flexibility of results are important as well, particularly if
> you're the guy who just got 20 scans from the team and needs to analyze and
> correlate them ;)  Fortunately it seems many of them export to XML, which your
> local perl wizard can likely parse out into anything you like.  At least one
> scanner provides merge fields into Word so you can easily build customer report
> templates (there are limitations to this feature, though).
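As a sketch of the XML-export point: vendor schemas differ, and the `<finding>` element and its attributes below are entirely made up, but boiling many scans down into one comparable tally takes only a few lines (in Python here, though perl would do just as well):

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Hypothetical scanner export; real export schemas vary by vendor.
SAMPLE = """<scan>
  <finding severity="high" type="xss"  url="/search"/>
  <finding severity="high" type="sqli" url="/login"/>
  <finding severity="low"  type="xss"  url="/about"/>
</scan>"""

def summarize(xml_text: str) -> Counter:
    """Tally findings by (type, severity) so results from many scans
    can be merged and compared on one axis."""
    root = ET.fromstring(xml_text)
    return Counter((f.get("type"), f.get("severity"))
                   for f in root.iter("finding"))

print(summarize(SAMPLE))
```

Run the same summarizer over each scanner's export and the correlation work becomes a merge of Counters rather than a pile of per-tool reports.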
> > I read the previous iteration of this report and used it to test some software. I \
> > wasted tons of time and effort based on the recommendations provided in that \
> > report.
> I would suggest that the methodology greatly improved between reports.  I read
> the original one and, while I can't recall the details, I do recall the overall
> impression of being rather flabbergasted by it.  This one seems considerably
> more defensible.  Of course, it's the job of everyone who didn't come in first
> place to shoot it down and argue its merits (or, alternatively, get back to
> fixing bugs and improving the product).
> > I wish the service provided by WhiteHat security was included, for completeness.
> Yes, it's interesting that they opted out.  I don't think the scanner industry
> has a lot of love for the author.  However, isn't WhiteHat an MSP?  Comparing
> service providers against ISVs, even if they provide the same functionality, may
> be considered somewhat apples to oranges (even if the differences really are in
> deployment alone).
> > Bottom line: it is far more important in the price/benefit function to employ \
> > qualified, resourceful testers.
> Absolutely.  Given that some of these products cost over twenty thousand dollars
> per user, it would be insane to give them to a scan monkey.  Thing is, if you
> have a decent-sized, skilled testing team, how much value do these things really
> add to them?  Do 10 scanner licenses - which essentially cost a house to buy and
> a car to maintain every year - really add that much benefit anyhow? Sure,
> there's the scalability argument, but what if you aren't doing continual
> operational assessments and are doing point validations? I'd like to hear your
> thoughts here.
> Good point here, Alex; having a scanner is something like having a nice CNC
> router in a guitar factory.  Now that you have your shiny new robot, now what?
> You've transferred the quality input from the person who used to route bodies to
> the person who programs the router.  A scanner can be very useful, but only if
> employed by someone who really knows what they're doing with it, just as a poor
> engineer driving that router will result in lots of unplayable guitars.
> So out of curiosity, does this statement "of course, the most interesting
> findings I'm creating and seeing from others continue to be those found
> manually" hit home with you as well?
> 
> 
> ________________________________________
> From: Alejandro (Alex) Buschel [alex.buschel@gmail.com]
> Sent: Thursday, February 04, 2010 10:43 PM
> To: Matt Fisher
> Cc: Doug Wilson; Owasp-Washington
> Subject: Re: [Owasp-washington] Web App Scanner Shootout
> 
> After having my team analyze the performance and coverage of most of
> these tools against real sites, I have a few comments to make:
> 
> Testing should be done against a known set of issues, and all scanners
> should be configured against the same baseline, noting the level of effort
> needed to accomplish this.
> 
> Accuracy is more important than speed. Speed may mean fewer test cases...
> 
> When doing blackbox testing against an application, accurate spidering
> and enforcement of domain restrictions are critical.
> 
> This kind of research provides some starting point for buyers.
> However, it leaves a good amount left to cover, such as integration
> with third-party authentication providers, readability and flexibility
> of results, export features and more.
> 
> I read the previous iteration of this report and used it to test some
> software. I wasted tons of time and effort based on the
> recommendations provided in that report.
> 
> I wish the service provided by WhiteHat security was included, for completeness.
> 
> Bottom line: it is far more important in the price/benefit function to
> employ qualified, resourceful testers.
> 
> I welcome all rants and arguments :-)
> 
> Alex
> 
> On Thu, Feb 4, 2010 at 9:42 AM, Matt Fisher <matt@piscis-security.com> wrote:
> > So to continue this thread:
> > 
> > Re the 'death by acquisition' note: yep, it sucks to see something you put a
> > lot of blood, sweat and tears into turn into a bucket o' bugs in the latest
> > release.  I honestly couldn't tell you how stable WI is now, as with the
> > release of version 8 it was time to move on.
> > I skimmed the PDF and I have to say I really like it.  I can't say I really
> > analyzed it in depth, as I'm just not all that concerned about web scanners
> > anymore, but the takeaways I got were:
> > - this was a somewhat defensible approach to testing,
> > - the scanners were run against their own dummy sites, so they knew the
> >   answers ahead of time,
> > - performance apparently has nothing to do with market position.  That's
> >   unsurprising, though; the most successful company isn't always the one
> >   with the best product (just as the most popular music isn't always the
> >   best music),
> > - significant effort and expertise required.
> > 
> > All points I was already familiar with.
> > 
> > I have to concur that NTO is completely underrated.  I had the pleasure of
> > getting slightly familiar with their product (just slightly), and the
> > impressions I got were that a) they're doing some very clever things, and
> > b) they're very dedicated and eager.  Suto's notes on them jibe with that.
> > Burp's performance (or lack thereof) as a *scanner* doesn't really bother
> > me; I consider Burp to be great for manual testing or automated "sniping" -
> > automating single isolated tasks such as fuzzing a single parameter.  I
> > would never use Burp as a "spray n' pray" scanner.
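That "sniping" idea - fuzzing one isolated parameter rather than spraying a whole site - can be sketched as follows. Everything here is hypothetical: `target()` is a toy stand-in for an HTTP request (a real run would swap in an actual request to the endpoint under test), and the payload list is just a sample.

```python
# A minimal single-parameter "sniping" fuzzer, in the spirit of pointing
# an intercepting proxy's intruder tooling at one isolated input.
PAYLOADS = ["'", '"', "<script>alert(1)</script>", "../../etc/passwd"]

def target(value: str) -> str:
    # Toy endpoint: escapes single quotes but reflects everything else
    # verbatim -- i.e. it is vulnerable to reflected XSS.
    return "<p>You searched for " + value.replace("'", "&#39;") + "</p>"

def fuzz_one_param(payloads):
    # Flag payloads reflected verbatim: candidates for manual follow-up,
    # not confirmed findings -- the human still has to vet each one.
    return [p for p in payloads if p in target(p)]

print(fuzz_one_param(PAYLOADS))
```

The point of the sketch is the shape of the workflow: a tight loop over one parameter, with a cheap heuristic that hands a short candidate list back to a human, rather than a crawl-everything scan.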
> > I think this document really opens eyes, and frankly it makes a nice
> > follow-on to some of the talks I've been giving lately, as a lot of the
> > points overlap.  If you're in the market for a scanner, this should be
> > mandatory reading - not for the purposes of purchasing, but rather for the
> > enlightenment as to the nuances of web scanning.  Of course, the most
> > interesting findings I'm creating and seeing from others continue to be
> > those found manually.
> > 
> > 
> > ________________________________________
> > From: owasp-washington-bounces@lists.owasp.org
> > [owasp-washington-bounces@lists.owasp.org] On Behalf Of Doug Wilson
> > [doug.wilson@owasp.org]
> > Sent: Wednesday, February 03, 2010 5:26 PM
> > To: Owasp-Washington
> > Subject: [Owasp-washington] Web App Scanner Shootout
> > 
> > Don't know if anyone has seen this yet:
> > http://ha.ckers.org/blog/20100203/accuracy-and-time-costs-of-web-application-security-scanner-report/
> >  
> > Comments? The report is available as a PDF link in the article.
> > 
> > My thoughts:
> > 
> > I'm impressed that NTOSpider manages to get that high a success rate --
> > debunking the commonly held belief that scanners can't get above a certain
> > success rate. And yes, test app, not a real site, yadda yadda -- it still
> > cleaned the clock of everything else out there. Definitely an impressive
> > showing, especially for a tool that is not a "market leader" (though
> > apparently this is the engine that is under the hood of eEye's web
> > scanner?)
> > 
> > I'm saddened that Burp didn't do better, but you know what? This is not
> > what Burp is designed for . . . and it still beat out several of the
> > "big names," including WebInspect. And considering the "scope" of Burp
> > compared to the other products (price, dev team, etc), it's still an
> > impressive showing.
> > 
> > And, finally, the fact that WebInspect came in pretty much dead last . . .
> > well, HP has only themselves to blame for what they've done there . . .
> > note that the study says it was hard to find anyone to get support from,
> > etc. Death by acquisition is never pretty.
> > 
> > 
> > 
> > Doug
> > 
> > 
> > --
> > 
> > Doug Wilson
> > 
> > dougDOTwilsonATowaspDOTorg
> > 
> > --
> > 
> > OWASP DC Chapter Co-Chair
> > 
> > https://www.owasp.org/index.php/Washington_DC
> > 
> > AppSec US 09 Organizer
> > 
> > http://appsecdc.org
> > _______________________________________________
> > Owasp-washington mailing list
> > Owasp-washington@lists.owasp.org
> > https://lists.owasp.org/mailman/listinfo/owasp-washington
_______________________________________________
Owasp-washington mailing list
Owasp-washington@lists.owasp.org
https://lists.owasp.org/mailman/listinfo/owasp-washington

