List: pfsense-support
Subject: Re: [pfSense Support] Benchmarking and import Rules?
From: Bill Marquette <bill.marquette () gmail ! com>
Date: 2005-03-10 1:11:46
Message-ID: 55e8a96c05030917113f935e78 () mail ! gmail ! com
On Wed, 9 Mar 2005 16:52:17 -0800, Scott Nasuta <tcslv@cox.net> wrote:
> I have the machine all ready with latest version of each and my two
> client PCs running QCheck. But I wonder if there are any of you that
> can submit large rule sets for me to import for my testing. Also, is
> this possible to do, or will I have to manually add 25, 50, or 100 rules?
Pull down /cf/conf/config.xml and create your rules there. It should be
pretty easy to generate rules there programmatically using your
favorite scripting language. I can't vouch for performance on
FreeBSD, but using Smartbits against an OpenBSD 3.5 box running on a
2.8GHz Xeon machine w/ a 400MHz FSB, PC2100 ECC RAM, and a dual Intel
Pro/1000 card, I've been able to push 600+Mbit through the box and
create/delete 1000 states/second. This closely mirrors my production
load. During my testing of PF, as long as all rules used keep state, I
didn't see a performance difference between 1 rule and 100,000. W/out
keep state, the box bombed at approx. 200 rules.
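As a rough sketch of the scripted approach: the snippet below builds a batch of pass rules as XML elements. The exact <rule> layout here is an assumption for illustration; check the actual structure inside your own /cf/conf/config.xml before merging anything in, and keep a backup of the file.

```python
# Sketch: generate a batch of pfSense-style filter rules as XML.
# NOTE: the <rule> element layout below is a simplified assumption --
# compare against a rule you created in the GUI before trusting it.
import xml.etree.ElementTree as ET

def make_rules(count, iface="lan"):
    """Return a <filter> element containing `count` pass rules."""
    filt = ET.Element("filter")
    for i in range(count):
        rule = ET.SubElement(filt, "rule")
        ET.SubElement(rule, "type").text = "pass"
        ET.SubElement(rule, "interface").text = iface
        ET.SubElement(rule, "protocol").text = "tcp"
        src = ET.SubElement(rule, "source")
        ET.SubElement(src, "network").text = iface
        dst = ET.SubElement(rule, "destination")
        ET.SubElement(dst, "any")
        ET.SubElement(dst, "port").text = str(1024 + i)
        ET.SubElement(rule, "descr").text = "generated rule %d" % i
    return filt

if __name__ == "__main__":
    # Dump 100 rules; paste/merge into config.xml by hand after review.
    print(ET.tostring(make_rules(100), encoding="unicode"))
```

Bumping `count` from 25 to 50 to 100 gives you the graduated rule sets for the benchmark runs without typing anything into the GUI.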
> I am also open to suggestions on what to test/change, better benchmark
> software, etc. QCheck seems like a good utility for being free. I plan
> on testing throughput of course, plus latency and UDP streaming. My
> guess is that there will be little difference between the combatants
> since I am hardly able to tax the systems (unless someone has 1000
> rules or something), but I am curious nonetheless if there are
> differences between IPFilter & PF.
It really depends on what you want to test. Testing throughput using
1500 byte packets is a good start to get max. throughput, but is
nowhere near a real world test. In my case max throughput was less
important than being able to create and delete 1000+ states/second. A
good benchmark utility should be able to (reproducibly) generate
traffic at different byte loads (which creates traffic at different
packets/second) and use varying/random addresses and ports to help
stress the hashing algorithms. A good tester will also take into
account that the various OSes handle hardware differently, and that
some hardware fares better than others on a given OS. A fair benchmark
would require multiple hardware platforms.
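To make the "varying/random addresses and ports, reproducibly" point concrete, here is a toy sketch that builds randomized flow tuples from a fixed seed. It only produces the parameter list; the address ranges, field layout, and feeding it into an actual traffic generator (Smartbits, or whatever you have) are all assumptions/left to you.

```python
# Toy sketch: reproducible randomized flow parameters for a benchmark run.
# Random src/dst/ports spread load across state-table hash buckets; a
# fixed seed makes consecutive runs comparable.
import random

PACKET_SIZES = [64, 512, 1500]  # small packets => more packets/second

def random_flows(n, seed=42):
    """Return n (src, dst, sport, dport, size) tuples, reproducibly."""
    rng = random.Random(seed)  # fixed seed -> identical list every run
    flows = []
    for _ in range(n):
        src = "10.%d.%d.%d" % (rng.randrange(256), rng.randrange(256),
                               rng.randrange(1, 255))
        dst = "192.168.%d.%d" % (rng.randrange(256), rng.randrange(1, 255))
        flows.append((src, dst,
                      rng.randrange(1024, 65536),  # ephemeral source port
                      rng.randrange(1, 1024),      # well-known dest port
                      rng.choice(PACKET_SIZES)))
    return flows

if __name__ == "__main__":
    for flow in random_flows(5):
        print(flow)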
All the same, it would be interesting to see if there are any
differences in the platforms.
--Bill
[prev in list] [next in list] [prev in thread] [next in thread]
Configure |
About |
News |
Add a list |
Sponsored by KoreLogic