
List:       focus-ids
Subject:    Re: Generating Traffic to Stress Test IDS
From:       "Samuel f. Stover" <sstover () enterasys ! com>
Date:       2002-01-25 18:40:42

<Vendor opinion alert Warning>

> - It's important to distinguish between BACKGROUND traffic, and ATTACK
> traffic.
> 

I cannot stress enough how important this is from our point of view.  I 
have to groan when I talk to a potential third-party tester who says 
they are going to use SmartBits to generate background traffic.  I just 
had this happen yesterday, and fortunately the tester was open-minded 
enough to listen to my objections in that area.  The problem (I'm 
restating Greg, I know) is the dearth of these kinds of products.  
Another product that hadn't been mentioned at the time of my post is 
Antara's FlameThrower.  I'm heading to their lab in a couple of weeks 
to see what they have to offer, so we'll see...
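To make the SmartBits objection concrete: "legitimate" background traffic means real TCP sessions carrying well-formed application data, not stateless packet blasting.  A hypothetical sketch of the idea (the target address is a TEST-NET placeholder for a lab web server, and `lab.example` is a made-up hostname):

```python
# Hypothetical sketch: background traffic as real HTTP-over-TCP sessions,
# rather than the stateless frames a hardware load generator emits.
import socket
import time

TARGET = ("192.0.2.10", 80)   # placeholder (TEST-NET); point at a lab server

def build_request(host: str) -> bytes:
    """A complete, valid HTTP/1.1 request -- something a protocol-aware
    sensor can actually parse, unlike arbitrary filler bytes."""
    return (f"GET / HTTP/1.1\r\nHost: {host}\r\n"
            f"Connection: close\r\n\r\n").encode()

def http_session(n_requests: int = 5) -> None:
    """Open real TCP connections (handshake, data, teardown) and issue
    complete requests, paced roughly like an interactive user."""
    for _ in range(n_requests):
        with socket.create_connection(TARGET, timeout=5) as s:
            s.sendall(build_request("lab.example"))
            while s.recv(4096):      # drain the response
                pass
        time.sleep(0.1)

# Usage (in a lab): run many http_session() calls in parallel threads
# or processes to put realistic session churn in front of the sensor.
```

Nothing fancy - the point is only that every byte on the wire belongs to a valid session, which is exactly what a SmartBits stream doesn't give you.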

> If I were tucked away
> in a vendor QA lab somewhere and was tasked with making sure 1000+ NIDS
> signatures worked correctly, I might be singing a different tune.  But
> I'm like you - I'm looking for thresholds and breaking points.  If
> you're after finding breaking points, you probably only need 5-10
> attacks to determine whether the IDS is failing or not.  Pick those
> 5-10 wisely, but the real trick IMHO is the background traffic, which
> leads me to:
I guess I qualify as a tucked-away vendor QA guy, but breaking points 
are just as important to me as they are to you.  Maybe more, because 
if you are aware of those breaking points and I'm not, then I look 
dumber than I am (and I try to minimize that when possible).  I think 
everyone is in a similar position, whether they want to admit it or 
not.  Vendors, testers, and customers all want a good, solid way to 
run an IDS through its paces and see how it measures up.  Do I hear 
unified testing strategy???

> - Legitimate background traffic is important for a number of reasons, one
> of them is due to the fact that not all NIDS sensors have the same engine
> design "under the hood."  *Most* NIDS engines are based on a general
> "packet grep" model (SNORT, Enterasys' Dragon, etc.), while others
> have a more protocol-aware approach (Intrusion.com's SNP, ISS'
> BlackICE, etc.)  [Note:  many of the packet-greppers are building in
> protocol pre-processor engines/normalizers/whatchamacallits, so these
> distinctions might blur over time.]
Don't you think this undermines a unified testing strategy?  I mean, 
real-world traffic is real-world traffic - to that end, all NIDS 
should be able to look at that traffic and alert.  Obviously some will 
do a better job than others, but if I'm a customer, the last thing I 
expect to hear from a vendor is "Well, considering your traffic 
composition, Biglizard won't work as well as GreySLUSH."

> Some NIDS vendors groan when I draw this analogy,
> but it's kind of like comparing a stateful packet filtering firewall to a
> proxy-based one - one model cares about the application layer, one
> doesn't.  (and one checks it, one doesn't)
> 
[Groan] - now you make it sound like Dragon and Snort are just network 
greppers, and that's not the case, as both do some pretty deep 
protocol analysis; and everyone's doing some type of pattern matching, 
just from different angles.
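For anyone who hasn't stared at both engine models, here's a toy contrast of the two approaches being argued about.  The signature and payloads are made up for illustration; real engines are vastly more involved:

```python
# Toy contrast: pure "packet grep" matching vs. a protocol-aware check.
# Signature and payloads below are invented for illustration only.

SIGNATURE = b"/etc/passwd"          # classic content signature

def packet_grep(payload: bytes) -> bool:
    """Grep model: alert if the byte pattern appears anywhere in the
    payload, whether or not the payload is valid protocol traffic."""
    return SIGNATURE in payload

def protocol_aware(payload: bytes) -> bool:
    """Protocol-aware model: parse the payload as an HTTP request line
    first; only then look for the pattern, and only in the URI field."""
    try:
        request_line = payload.split(b"\r\n", 1)[0]
        method, uri, version = request_line.split(b" ")
    except ValueError:
        return False                # not well-formed HTTP: nothing to inspect
    return version.startswith(b"HTTP/") and SIGNATURE in uri

# A well-formed attack trips both models...
attack = b"GET /cgi-bin/view?file=/etc/passwd HTTP/1.0\r\n\r\n"
# ...but non-HTTP garbage containing the pattern only trips the grepper.
noise = b"\x00\x01/etc/passwd\xff\xfe"

print(packet_grep(attack), protocol_aware(attack))   # True True
print(packet_grep(noise), protocol_aware(noise))     # True False
```

Which is exactly why the two models react differently to the same background traffic - and why products like Dragon and Snort bolt protocol decoding onto the matching.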

> 
> ANYWAY, if you've got one NIDS that's checking protocol compliance and one
> that's just packet grepp'ing, they are going to be affected differently by
> different types of traffic.
> 
True, I suppose, but back to my earlier point:  the ultimate goal is to 
handle whatever traffic the customer has on their network.  That's why 
we HAVE to find ways to emulate real-world traffic.  I'm agreeing with 
Greg, but for a different reason.

> This is why many of us testing d00dz have
> gotten upset in the past when organizations use silly traffic going over
> silly ports - it reduces the legitimacy of the tests and can serve as
> points of confusion.  But I'll spare everyone from that soapbox
> speech.  : )
> 
Look at it this way:  We try very hard to make an IDS that can handle 
real-world traffic (a big task, I know).  We don't try very hard to 
make an IDS that can handle SmartBits traffic because no one has a 
need for that.

> - Concerning packet dropping stats: I'll let someone like Marty or Elliot
> take this one on for more/accurate details, but packet dropping stats have
> always been a problem.  In order for them to be accurate you have to
> assume that a) the NIC keeps those stats, b) the NIC can report those
> stats *accurately*, c) the driver knows how to get those stats from the
> NIC and report it to, d) the kernel, that may or may not present an easy
> or accurate way for e) the user, you, to get that info.  I've heard horror
> stories about hard-coded settings of "0" being found in some driver code,
> and that's just the start.
> 
> So in short, who bloody knows on that one.  What I've always tried to do
> is double-check things by counting stats on the sending machines, the
> switches, and the receiving machines.  Not pretty, or fun.
> 
The only way to fly, in my book.  Many times mirroring/SPANning issues 
can cause an IDS to miss packets, which is easy to construe as a 
missed attack.  You always have to keep an eye on all the details.
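On the stats themselves: one way to do the cross-checking Greg describes is to snapshot interface counters on each box before and after the run and compare the deltas, rather than trusting a single drop counter.  A Linux-only sketch (the interface name is an assumption, and as Greg notes, the numbers are only as honest as the driver):

```python
# Sketch of cross-checking drop stats: snapshot /proc/net/dev counters
# before and after a test run and compare deltas across machines.
# Linux-specific; "eth1" below is a hypothetical sensor interface.

def read_counters(iface: str, path: str = "/proc/net/dev") -> dict:
    """Return rx/tx packet and drop counters for one interface.
    /proc/net/dev field order: bytes packets errs drop ... (rx, then tx)."""
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue                      # skip the two header lines
            name, data = line.split(":", 1)
            if name.strip() == iface:
                v = [int(x) for x in data.split()]
                return {"rx_packets": v[1], "rx_dropped": v[3],
                        "tx_packets": v[9], "tx_dropped": v[11]}
    raise ValueError(f"interface {iface!r} not found")

def delta(before: dict, after: dict) -> dict:
    """Counter movement between two snapshots."""
    return {k: after[k] - before[k] for k in before}

# Usage (hypothetical): on the sensor,
#   before = read_counters("eth1")
#   ... run the traffic ...
#   after = read_counters("eth1")
#   print(delta(before, after))
# then compare rx_packets here against tx_packets on the sending box
# and the port counters on the switch.  Not pretty, or fun - but honest.
```

Remember the caveat from above, though: these counters pass through the NIC, the driver, and the kernel before you ever see them, so the cross-check against sender and switch counts is the whole point.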

> P.S. Sam, I don't want to hear any crap out of you for this being so
> long.  : )
> 

Hah, hah - that would be the pot calling the kettle black, eh?

Samuel f. Stover
Director of IDS QA
Enterasys Networks/NSW
sstover@enterasys.com


