List:       oisf-users
Subject:    Re: [Oisf-users] Fast replay of pcap files
From:       Dave Remien <dave.remien@gmail.com>
Date:       2011-07-16 16:03:01
Message-ID: <CAD8uqfBak-CkQQsmuk=PwoD8ofss4w8eZ1NQnPr2hmuFKJCwxQ@mail.gmail.com>

Gene,

For example, here's what I get on my desktop:

dave:/local/try7> time dd if=/v/110421.pcap of=/dev/null bs=128k
335693+1 records in
335693+1 records out
44000013514 bytes (44 GB) copied, 195.631 s, 225 MB/s

real    3m15.634s
user    0m0.216s
sys     0m24.287s
dave> ll /v/110421.pcap
-rw-r--r-- 1 dave users 44000013514 Jul  7 08:17 /v/110421.pcap

/v in this case is an ext4 FS on a fairly spiffy SSD drive....

dave:/local/try7> lsscsi

[3:0:0:0]    disk    ATA      ATP Velocity SII 1002  /dev/sdc

YMMV 8-)

Cheers,

Dave

On Sat, Jul 16, 2011 at 9:53 AM, Dave Remien <dave.remien@gmail.com> wrote:

> Gene,
>
> Try:
>
>      time dd if=file of=/dev/null bs=128k
>
> That'll tell you how fast your I/O is. If it's less than 70-80 MB a
> second, I doubt that you're exercising Suricata to its capacity.
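>
> (One caveat: if the file was read recently, some of it may still be in
> the page cache, and the numbers will look better than the disk really
> is. To be safe, drop the caches first, as root:
>
>     sync
>     echo 3 > /proc/sys/vm/drop_caches
>
> and then run the dd test.)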
>
> There are a few small system changes that can improve I/O performance
> somewhat; try
>
>     find /sys | grep read_ahead_kb
>
> The typical value in these files is 128 (KB); you can set it up to 512 or so.
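>
> For example, a minimal sketch of raising read-ahead on one block device
> (sdb here is just a placeholder - pick your device from the find output
> above; the setting doesn't survive a reboot):
>
>     # show the current value
>     cat /sys/block/sdb/queue/read_ahead_kb
>     # raise it to 512 KB (as root)
>     echo 512 > /sys/block/sdb/queue/read_ahead_kb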
>
> Here's the thing about doing this in a VM - you're reading from a file
> which itself lives in a file (your VM's file system is a file under the
> host OS). Everything you read goes through two FS reads, and it happens
> as each OS "gets there". You'd be far better off testing this on a system
> running directly on the hardware. Unless your intention is to prove that
> everything runs more slowly in a VM, of course 8-). Those of us who enjoy
> this kind of stuff do a similar thing by creating a large, empty file,
> running mke2fs on it, loop-mounting it as a partition, and then copying
> the file we want to read into the loop-mounted partition.  Voila, slow FS
> access... You can even do this several times, nesting loop mounts inside
> each other; eventually the file reading will slow to a crawl.
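>
> If you want to try it, a rough sketch (sizes and paths are just examples):
>
>     # create a 10 GB empty file to act as a "disk"
>     dd if=/dev/zero of=/tmp/slowdisk.img bs=1M count=10240
>     # put an ext2 FS on it (-F since it's a plain file, not a device)
>     mke2fs -F /tmp/slowdisk.img
>     # loop-mount it and copy the pcap into it
>     mkdir -p /mnt/slow
>     mount -o loop /tmp/slowdisk.img /mnt/slow
>     cp /v/110421.pcap /mnt/slow/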
>
> Cheers,
>
> Dave
>
> On Fri, Jul 15, 2011 at 11:00 AM, Gene Albin <gene.albin@gmail.com> wrote:
>
>> Victor,
>>   Added the --runmode=autofp switch, and while the CPU usage across all
>> four cores did increase to between 8 and 25 percent, the overall time to
>> complete the pcap run was only marginally better, at around 3:40.
>>
>>   I'm looking over the disk stats to try and determine if I'm I/O limited.
>>  I'm getting average rates of 28MB/sec read and about 250 I/O reads/sec.
>>  (minimal writes/sec)  I'll check with the sysadmin guys to see if this is
>> high for this box, but I don't think it is.
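>>
>> (For reference, numbers like these are easy to pull with iostat from the
>> sysstat package, e.g.:
>>
>>     iostat -xm 5
>>
>> which prints extended per-device stats, in MB/s, every 5 seconds.)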
>>
>>   Other than that I'm not sure where the bottleneck could be.
>>
>>  As a side note I did try uncommenting the "#runmode: auto" line in the
>> suricata.yaml file yesterday and found it made no apparent difference.
>>
>> Thanks,
>> Gene
>>
>>
>> On Thu, Jul 14, 2011 at 11:44 PM, Victor Julien <victor@inliniac.net> wrote:
>>
>>> Gene, can you try adding the option "--runmode=autofp" to your command
>>> line? It does about the same, but with a different threading
>>> configuration (runmode).
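>>>
>>> Something along these lines (paths are just examples):
>>>
>>>     suricata -c /etc/suricata/suricata.yaml -r /path/to/file.pcap --runmode=autofp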
>>>
>>> Cheers,
>>> Victor
>>>
>>> On 07/15/2011 04:24 AM, Gene Albin wrote:
>>> > Dave,
>>> >   Thanks for the reply.  It's possible that I'm I/O limited.  Quite
>>> > simply, I drew my conclusion from the fact that when I run the same
>>> > 6GB pcap file through Suricata via tcpreplay, the CPU utilization
>>> > rises to between 13 and 22 percent per core (4 cores).  It completes
>>> > in just over 2 minutes.  Once complete it drops back down to 0%.
>>> > Looking at the processes during the run I notice that Suricata and
>>> > tcpreplay are both in the 60% range (using top; the process table
>>> > shows the average across all CPUs, I think).  However, when I run
>>> > Suricata with the -r <filename> option, the CPU utilization on all 4
>>> > CPUs barely increases above 1%, which is where it usually sits when I
>>> > run a live capture on this interface, and the run takes around 4
>>> > minutes to complete.
>>> >
>>> >   As for the hardware, I'm running this in a VM hosted on an ESX
>>> > server.  OS is CentOS 5.6, 4 cores and 4GB RAM.  Pcaps are on a 1.5TB
>>> > drive attached to the server via Fibre Channel (I think).  Not sure
>>> > how I can measure the latency, but up to this point I haven't had an
>>> > issue.
>>> >
>>> >   For the ruleset I'm using just the open ET ruleset optimized for
>>> > Suricata.  That's 46 rule files and 11357 rules loaded.  My
>>> > suricata.yaml file is for the most part stock.  (attached for your
>>> > viewing pleasure)
>>> >
>>> >   So I'm really at a loss here as to why the -r option runs slower
>>> > than tcpreplay --topspeed.  The only explanation I see is that -r
>>> > replays the file at the same speed it was recorded.
>>> >
>>> >   Appreciate any insight you could offer...
>>> >
>>> > Gene
>>> >
>>> >
>>> > On Thu, Jul 14, 2011 at 6:50 PM, Dave Remien <dave.remien@gmail.com>
>>> > wrote:
>>> >
>>> >>
>>> >>
>>> >> On Thu, Jul 14, 2011 at 7:14 PM, Gene Albin <gene.albin@gmail.com>
>>> >> wrote:
>>> >>
>>> >>> Hi all,
>>> >>>   I'm experimenting with replaying various pcap files in Suricata.
>>> >>> It appears that the pcap files are replaying at the same speed they
>>> >>> were recorded.  I'd like to be able to replay them faster so that
>>> >>> 1) I can stress the detection engine, and 2) expedite post-event
>>> >>> analysis.
>>> >>>
>>> >>>   One way to accomplish this is by using tcpreplay -t, but when
>>> >>> running on the same machine that takes lots of cycles away from
>>> >>> Suricata and sends the recorded pcap traffic onto an interface that
>>> >>> already has live traffic.
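>>> >>>
>>> >>> (For concreteness, that invocation looks something like this, with
>>> >>> the interface name just an example:
>>> >>>
>>> >>>     tcpreplay -t -i eth1 capture.pcap
>>> >>>
>>> >>> where -t is short for --topspeed.)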
>>> >>>
>>> >>>   Is there some other way to replay captured traffic through
>>> >>> Suricata at an accelerated speed?
>>> >>>
>>> >>
>>> >> Hmm - I've done pretty extensive replay of pcaps with Suricata. I
>>> >> have a 750GB pcap that was recorded over a 9 hour time range, and
>>> >> takes about 3.5 hours to be replayed through Suricata. The alerts
>>> >> generated show the pcap time (i.e., over the 9 hour range).  The
>>> >> machine replaying the pcap is a 16 core box with a RAID array.
>>> >>
>>> >> Is it possible that you're I/O limited?
>>> >>
>>> >> So... I guess I'd ask about your configuration - # of CPUs, disk
>>> >> speeds, proc types, rule set, suricata.yaml?
>>> >>
>>> >>  Cheers,
>>> >>
>>> >> Dave
>>> >>
>>> >>
>>> >>> --
>>> >>> Gene Albin
>>> >>> gene.albin@gmail.com
>>> >>> gene_albin@bigfoot.com
>>> >>>
>>> >>> _______________________________________________
>>> >>> Oisf-users mailing list
>>> >>> Oisf-users@openinfosecfoundation.org
>>> >>> http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
>>> >>>
>>> >>>
>>> >>
>>> >>
>>> >> --
>>> >> "Of course, someone who knows more about this will correct me if I'm
>>> >> wrong, and someone who knows less will correct me if I'm right."
>>> >> David Palmer (palmer@tybalt.caltech.edu)
>>> >>
>>> >>
>>> >
>>> >
>>> >
>>> >
>>> > _______________________________________________
>>> > Oisf-users mailing list
>>> > Oisf-users@openinfosecfoundation.org
>>> > http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
>>>
>>>
>>> --
>>> ---------------------------------------------
>>> Victor Julien
>>> http://www.inliniac.net/
>>> PGP: http://www.inliniac.net/victorjulien.asc
>>> ---------------------------------------------
>>>
>>> _______________________________________________
>>> Oisf-users mailing list
>>> Oisf-users@openinfosecfoundation.org
>>> http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
>>>
>>
>>
>>
>> --
>> Gene Albin
>> gene.albin@gmail.com
>> gene_albin@bigfoot.com
>>
>> _______________________________________________
>> Oisf-users mailing list
>> Oisf-users@openinfosecfoundation.org
>> http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users
>>
>>
>
>
> --
> "Of course, someone who knows more about this will correct me if I'm
> wrong, and someone who knows less will correct me if I'm right."
> David Palmer (palmer@tybalt.caltech.edu)
>
>


-- 
"Of course, someone who knows more about this will correct me if I'm
wrong, and someone who knows less will correct me if I'm right."
David Palmer (palmer@tybalt.caltech.edu)

_______________________________________________
Oisf-users mailing list
Oisf-users@openinfosecfoundation.org
http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-users

