List: oisf-users
Subject: Re: [Oisf-users] Fast replay of pcap files
From: Dave Remien <dave.remien () gmail ! com>
Date: 2011-07-16 16:03:01
Message-ID: CAD8uqfBak-CkQQsmuk=PwoD8ofss4w8eZ1NQnPr2hmuFKJCwxQ () mail ! gmail ! com
Gene,
For example, here's what I get on my desktop:
dave:/local/try7> time dd if=/v/110421.pcap of=/dev/null bs=128k
335693+1 records in
335693+1 records out
44000013514 bytes (44 GB) copied, 195.631 s, 225 MB/s
real 3m15.634s
user 0m0.216s
sys 0m24.287s
dave> ll /v/110421.pcap
-rw-r--r-- 1 dave users 44000013514 Jul 7 08:17 /v/110421.pcap
/v in this case is an ext4 FS on a fairly spiffy SSD drive....
dave:/local/try7> lsscsi
[3:0:0:0] disk ATA ATP Velocity SII 1002 /dev/sdc
YMMV 8-)
Cheers,
Dave
On Sat, Jul 16, 2011 at 9:53 AM, Dave Remien <dave.remien@gmail.com> wrote:
> Gene,
>
> Try:
>
> time dd if=file of=/dev/null bs=128k
>
> That'll tell you how fast your I/O is. If it's less than 70-80 MB a second,
> I doubt that you're exercising Suricata to its capacity.
>
> There are a few small system changes that can improve I/O performance to a
> small degree; try
>
> find /sys | grep read_ahead_kb
>
> The typical value in these is 128 (KB); you can set it up to 512 or so.
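A minimal sketch of that readahead tuning (the device name is an example; writing to sysfs needs root, so those steps are shown commented out):

```shell
# Inspect current readahead values (safe to run anywhere).
cat /sys/block/*/queue/read_ahead_kb 2>/dev/null || true

# As root, bump readahead for an example device sdc to 512 KB:
#   echo 512 > /sys/block/sdc/queue/read_ahead_kb
# Equivalent via blockdev; its argument is in 512-byte sectors,
# so 1024 sectors = 512 KB:
#   blockdev --setra 1024 /dev/sdc
```

Re-run the `dd` timing after changing it to see whether sequential read throughput actually moved.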
>
> Here's the thing about doing this in a VM - you're reading from a file,
> which is living in a file (your OS on the VM's file system) under another
> OS. Everything you're reading is going through two FS reads, and it happens as
> each OS "gets there". You'd be far better off to be testing this on a system
> directly on the hardware. Unless your intention is to prove that everything
> runs more slowly in a VM, of course 8-). Those of us who enjoy this kind of
> stuff do a similar thing by creating a large, empty file, doing a mke2fs on
> it, and loop mounting it as a partition, then copying the file you want to
> read into the loop-mounted partition. Voila, slow FS access...
> Realistically, you can do this several times. Eventually the file reading
> will slow to a crawl.
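The loop-mount trick described above can be sketched as follows. Paths and sizes are examples, and `/v/110421.pcap` is just the sample pcap path from earlier in the thread; the mutating steps need root and loop-device support, so they are guarded:

```shell
# "Slow FS" sketch: a filesystem inside a file, mimicking a VM disk image.
if [ "$(id -u)" -eq 0 ] && command -v mke2fs >/dev/null; then
    dd if=/dev/zero of=/tmp/slowfs.img bs=1M count=1024 &&  # 1 GB empty file
    mke2fs -F -q /tmp/slowfs.img &&        # put an ext2 filesystem inside it
    mkdir -p /mnt/slowfs &&
    mount -o loop /tmp/slowfs.img /mnt/slowfs &&  # loop-mount it as a partition
    cp /v/110421.pcap /mnt/slowfs/ ||      # copy in the file you want to read
        echo "skipped: needs loop-device support and a real pcap path"
fi
# Reads from /mnt/slowfs now traverse two filesystems, roughly like reading
# a file inside a VM's disk image; nest images to slow things down further.
```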
>
> Cheers,
>
> Dave
>
> On Fri, Jul 15, 2011 at 11:00 AM, Gene Albin <gene.albin@gmail.com> wrote:
>
>> Victor,
>> Added the --runmode=autofp switch, and while the CPU utilization across all
>> four cores did increase to a range between 8 and 25 percent, the overall time
>> to complete the pcap run was only marginally better, at around 3:40.
>>
>> I'm looking over the disk stats to try and determine if I'm I/O limited.
>> I'm getting average rates of 28MB/sec read and about 250 I/O reads/sec.
>> (minimal writes/sec) I'll check with the sysadmin guys to see if this is
>> high for this box, but I don't think it is.
>>
>> Other than that I'm not sure where the bottleneck could be.
>>
>> As a side note I did try uncommenting the "#runmode: auto" line in the
>> suricata.yaml file yesterday and found it made no apparent difference.
>>
>> Thanks,
>> Gene
>>
>>
>> On Thu, Jul 14, 2011 at 11:44 PM, Victor Julien <victor@inliniac.net> wrote:
>>
>>> Gene, can you try adding the option "--runmode=autofp" to your command
>>> line? It does about the same, but with a different threading
>>> configuration (runmode).
>>>
>>> Cheers,
>>> Victor
>>>
>>> On 07/15/2011 04:24 AM, Gene Albin wrote:
>>> > Dave,
>>> > Thanks for the reply. It's possible that I'm I/O limited. Quite simply I
>>> > drew my conclusion from the fact that when I run the same 6GB pcap file
>>> > through Suricata via tcpreplay the CPU utilization rises up to between 13
>>> > and 22 percent per core (4 cores). It completes in just over 2 minutes.
>>> > Once complete it drops back down to 0%. Looking at the processes during
>>> > the run I notice that Suricata and tcpreplay are both in the 60% range
>>> > (using top the process table shows the average across all CPUs, I think).
>>> > However, when I run Suricata with the -r <filename> option the CPU
>>> > utilization on all 4 CPUs barely increases above 1%, which is where it
>>> > usually sits when I run a live capture on this interface, and the run
>>> > takes around 4 minutes to complete.
>>> >
>>> > As for the hardware, I'm running this in a VM hosted on an ESX server. OS
>>> > is CentOS 5.6, 4 cores and 4GB RAM. Pcaps are on a 1.5TB drive attached
>>> > to the server via Fibre Channel (I think). Not sure how I can measure the
>>> > latency, but up to this point I haven't had an issue.
>>> >
>>> > For the ruleset I'm using just the open ET ruleset optimized for
>>> > Suricata. That's 46 rule files and 11357 rules loaded. My suricata.yaml
>>> > file is for the most part stock. (attached for your viewing pleasure)
>>> >
>>> > So I'm really at a loss here why the -r option runs slower than tcpreplay
>>> > --topspeed. The only explanation I see is that -r replays the file at the
>>> > same speed it was recorded.
>>> >
>>> > Appreciate any insight you could offer...
>>> >
>>> > Gene
>>> >
>>> >
>>> > On Thu, Jul 14, 2011 at 6:50 PM, Dave Remien <dave.remien@gmail.com> wrote:
>>> >
>>> >>
>>> >>
>>> >> On Thu, Jul 14, 2011 at 7:14 PM, Gene Albin <gene.albin@gmail.com> wrote:
>>> >>
>>> >>> Hi all,
>>> >>> I'm experimenting with replaying various pcap files in Suricata. It
>>> >>> appears that the pcap files are replaying at the same speed they were
>>> >>> recorded. I'd like to be able to replay them faster so that 1) I can
>>> >>> stress the detection engine, and 2) expedite post-event analysis.
>>> >>>
>>> >>> One way to accomplish this is by using tcpreplay -t, but when running
>>> >>> on the same machine that takes lots of cycles away from Suricata and
>>> >>> sends the recorded pcap traffic onto an interface that already has
>>> >>> live traffic.
>>> >>>
>>> >>> Is there some other way to replay captured traffic through Suricata at
>>> >>> an accelerated speed?
>>> >>>
>>> >>
>>> >> Hmm - I've done pretty extensive replay of pcaps with Suricata. I have
>>> >> a 750GB pcap that was recorded over a 9 hour time range, and takes
>>> >> about 3.5 hours to be replayed through Suricata. The alerts generated
>>> >> show the pcap time (i.e., over the 9 hour range). The machine replaying
>>> >> the pcap is a 16 core box with a RAID array.
>>> >>
>>> >> Is it possible that you're I/O limited?
>>> >>
>>> >> So... I guess I'd ask about your configuration - # of CPUs, disk
>>> >> speeds, proc types, rule set, suricata.yaml?
>>> >>
>>> >> Cheers,
>>> >>
>>> >> Dave
>>> >>
>>> >>
>>> >>> --
>>> >>> Gene Albin
>>> >>> gene.albin@gmail.com
>>> >>> gene_albin@bigfoot.com
>>> >>>
>>> >>> _______________________________________________
>>> >>> Oisf-users mailing list
>>> >>> Oisf-users@openinfosecfoundation.org
>>> >>> http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
>>> >>>
>>> >>>
>>> >>
>>> >>
>>> >> --
>>> >> "Of course, someone who knows more about this will correct me if I'm
>>> >> wrong, and someone who knows less will correct me if I'm right."
>>> >> David Palmer (palmer@tybalt.caltech.edu)
>>> >>
>>> >>
>>> >
>>> >
>>> >
>>> >
>>>
>>>
>>> --
>>> ---------------------------------------------
>>> Victor Julien
>>> http://www.inliniac.net/
>>> PGP: http://www.inliniac.net/victorjulien.asc
>>> ---------------------------------------------
>>>
>>>
>>
>>
>>
>> --
>> Gene Albin
>> gene.albin@gmail.com
>> gene_albin@bigfoot.com
>>
>>
>>
>
>
> --
> "Of course, someone who knows more about this will correct me if I'm
> wrong, and someone who knows less will correct me if I'm right."
> David Palmer (palmer@tybalt.caltech.edu)
>
>
--
"Of course, someone who knows more about this will correct me if I'm
wrong, and someone who knows less will correct me if I'm right."
David Palmer (palmer@tybalt.caltech.edu)