List: oisf-devel
Subject: Re: [Oisf-devel] <Error> (ReceivePfring) -- [ERRCODE:
From: Will Metcalf <william.metcalf () gmail ! com>
Date: 2011-08-04 19:52:14
Message-ID: CAO0nrJbWhXOr_SZcvetfh+Cyp+to+jkR2zAEXBaE=4AoKAF-iw () mail ! gmail ! com
Since VictorJ is on vacation, if somebody wants an early fix to test, here
you go... Also included is a new PF_RING "single" run mode which, at least
in my testing, performs better than autofp when the threads setting for
PF_RING is > 1. You can test it by setting the runmode to single in
suricata.yaml.
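For reference, enabling it would look roughly like this in suricata.yaml (a minimal sketch; exact key names and placement may differ between Suricata versions, and eth1/cluster-id values here are just examples):

```yaml
# Top-level runmode setting; "single" is the new PF_RING mode added by the patch.
runmode: single

pfring:
  interface: eth1
  threads: 4          # with threads > 1, "single" outperformed autofp in my testing
  cluster-id: 99
  cluster-type: cluster_round_robin
```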
Regards,
Will
On Thu, Aug 4, 2011 at 11:49 AM, Chris Wakelin
<c.d.wakelin@reading.ac.uk> wrote:
> On 04/08/11 17:36, Peter Manev wrote:
>> Hi,
>> Can you please try the following:
>> 1. Increase the MTU to 1522
>
> Yes, trying that now with native PF_RING, but doesn't seem to make any
> difference.
>
>> 2. Can you try to point suricata to listen to the VLAN interface directly
>> for example: suricata -c /etc/suricata/yaml -i eth0.15
>
> Only inbound packets are VLAN-tagged, e.g. ARGUS ratop shows
>
>>        StartTime      Flgs  Proto              SrcAddr  Sport   Dir              DstAddr  Dport  TotPkts   TotBytes  State  sVlan   dVlan
>>   17:38:42.013921  M s       tcp      xxx.xxx.216.22.22.22      <?>    134.225.yyy.yyy.60262      187072  234694540    E    0x0fa1
>>   17:38:43.533109  M s       tcp      xxx.xxx.216.23.22.22      <?>    134.225.yyy.yyy.58316       86514  112270100    E    0x0fa1
>>   17:38:42.749149  M *       tcp    134.225.uuu.uuu.36552       ->       vvv.vvv.134.84.80.80      82389   84852685   sSE   0x0fa1
>
> I think if I tried -i eth1.64001 I'd miss half the traffic?
>
>> 3. is there any difference?
>> 4. A pcap would be helpful to further explore the issue (should you
>> consider).
>
> Most of the packets aren't flagging errors, so it's a bit of a needle in
> a haystack. I have a couple that I sent to Will that gave AppLayerParse
> errors in "http" when using native PF_RING but not PF_RING-enabled
> libpcap. Increasing MTU from the default (1514 presumably) to 1515 fixed
> them :)
>
> Best Wishes,
> Chris
>
> --
> --+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+-
> Christopher Wakelin,                             c.d.wakelin@reading.ac.uk
> IT Services Centre, The University of Reading,   Tel: +44 (0)118 378 2908
> Whiteknights, Reading, RG6 6AF, UK               Fax: +44 (0)118 975 3094
> _______________________________________________
> Oisf-devel mailing list
> Oisf-devel@openinfosecfoundation.org
> http://lists.openinfosecfoundation.org/mailman/listinfo/oisf-devel
>
["0001-Fix-PF_RING-off-by-one-error-when-dealing-with-a-ful.patch" (application/octet-stream)]
From cf6c1631788109384457db857692a3d711806b93 Mon Sep 17 00:00:00 2001
From: William <wmetcalf@qualys.com>
Date: Thu, 4 Aug 2011 14:43:03 -0500
Subject: [PATCH] Fix PF_RING off-by-one error when dealing with a full frame.
 Add PF_RING single runmode, i.e. everything contained in a single thread.
 Seems to perform worse with a single thread but better with many threads
 and clustering.
---
src/runmode-pfring.c | 108 ++++++++++++++++++++++++++++++++++++++++++++++++++
src/runmode-pfring.h | 1 +
src/source-pfring.c | 12 ++++--
3 files changed, 117 insertions(+), 4 deletions(-)
diff --git a/src/runmode-pfring.c b/src/runmode-pfring.c
index a3e01f8..a83b380 100644
--- a/src/runmode-pfring.c
+++ b/src/runmode-pfring.c
@@ -40,6 +40,7 @@
static const char *default_mode_auto = NULL;
static const char *default_mode_autofp = NULL;
+static const char *default_mode_single = NULL;
const char *RunModeIdsPfringGetDefaultMode(void)
{
@@ -68,6 +69,10 @@ void RunModeIdsPfringRegister(void)
"from the same flow can be processed by any "
"detect thread",
RunModeIdsPfringAutoFp);
+ default_mode_single = "single";
+ RunModeRegisterNewRunMode(RUNMODE_PFRING, "single",
+                              "All processing contained in a single thread, but you can spawn many of them",
+                              RunModeIdsPfringSingle);
return;
}
@@ -461,3 +466,106 @@ int RunModeIdsPfringAutoFp(DetectEngineCtx *de_ctx)
return 0;
}
+
+/**
+ * \brief Single thread version of the PF_RING live processing.
+ */
+int RunModeIdsPfringSingle(DetectEngineCtx *de_ctx)
+{
+ SCEnter();
+
+/* We include only if pfring is enabled */
+#ifdef HAVE_PFRING
+
+ char tname[12];
+ uint16_t cpu = 0;
+
+ RunModeInitialize();
+
+ TimeModeSetLive();
+
+ /* Available cpus */
+ uint16_t ncpus = UtilCpuGetNumProcessorsOnline();
+
+ /* start with cpu 1 so that if we're creating an odd number of detect
+ * threads we're not creating the most on CPU0. */
+ if (ncpus > 0)
+ cpu = 1;
+
+ int thread;
+
+ int pfring_threads = PfringConfGetThreads();
+ if (pfring_threads == 0) {
+ pfring_threads = 1;
+ }
+ /* create the threads */
+ for (thread = 0; thread < pfring_threads; thread++) {
+        snprintf(tname, sizeof(tname), "Pfring%d", thread+1);
+ char *thread_name = SCStrdup(tname);
+
+ /* create the threads */
+ ThreadVars *tv = TmThreadCreatePacketHandler(thread_name,
+ "packetpool", "packetpool",
+ "packetpool","packetpool",
+ "varslot");
+ if (tv == NULL) {
+ printf("ERROR: TmThreadsCreate failed\n");
+ exit(EXIT_FAILURE);
+ }
+
+ TmModule *tm_module = TmModuleGetByName("ReceivePfring");
+ if (tm_module == NULL) {
+            printf("ERROR: TmModuleGetByName failed for ReceivePfring\n");
+ exit(EXIT_FAILURE);
+ }
+ TmVarSlotSetFuncAppend(tv, tm_module, thread_name);
+
+ tm_module = TmModuleGetByName("DecodePfring");
+ if (tm_module == NULL) {
+ printf("ERROR: TmModuleGetByName DecodePfring failed\n");
+ exit(EXIT_FAILURE);
+ }
+ TmVarSlotSetFuncAppend(tv, tm_module, NULL);
+
+ tm_module = TmModuleGetByName("StreamTcp");
+ if (tm_module == NULL) {
+ printf("ERROR: TmModuleGetByName StreamTcp failed\n");
+ exit(EXIT_FAILURE);
+ }
+ TmVarSlotSetFuncAppend(tv, tm_module, NULL);
+
+ tm_module = TmModuleGetByName("Detect");
+ if (tm_module == NULL) {
+ printf("ERROR: TmModuleGetByName Detect failed\n");
+ exit(EXIT_FAILURE);
+ }
+ TmVarSlotSetFuncAppend(tv, tm_module, (void *)de_ctx);
+
+ SetupOutputs(tv);
+
+ if (threading_set_cpu_affinity) {
+ TmThreadSetCPUAffinity(tv, (int)cpu);
+ /* If we have more than one core/cpu, the first PF_RING thread
+ * (at cpu 0) will have less priority (higher 'nice' value)
+ * In this case we will set the thread priority to +10 (default is 0)
+ */
+ if (cpu == 0 && ncpus > 1) {
+ TmThreadSetThreadPriority(tv, PRIO_LOW);
+ } else if (ncpus > 1) {
+ TmThreadSetThreadPriority(tv, PRIO_MEDIUM);
+ }
+ }
+
+ if (TmThreadSpawn(tv) != TM_ECODE_OK) {
+ printf("ERROR: TmThreadSpawn failed\n");
+ exit(EXIT_FAILURE);
+ }
+
+ if ((cpu + 1) == ncpus)
+ cpu = 0;
+ else
+ cpu++;
+ }
+#endif /* HAVE_PFRING */
+ return 0;
+}
diff --git a/src/runmode-pfring.h b/src/runmode-pfring.h
index 8296039..aeb999b 100644
--- a/src/runmode-pfring.h
+++ b/src/runmode-pfring.h
@@ -27,6 +27,7 @@
int RunModeIdsPfringAuto(DetectEngineCtx *);
int RunModeIdsPfringAutoFp(DetectEngineCtx *de_ctx);
+int RunModeIdsPfringSingle(DetectEngineCtx *de_ctx);
void RunModeIdsPfringRegister(void);
const char *RunModeIdsPfringGetDefaultMode(void);
diff --git a/src/source-pfring.c b/src/source-pfring.c
index 64f6400..03ba568 100644
--- a/src/source-pfring.c
+++ b/src/source-pfring.c
@@ -218,15 +218,19 @@ TmEcode ReceivePfring(ThreadVars *tv, Packet *p, void *data, PacketQueue *pq, Pa
         SCReturnInt(TM_ECODE_FAILED);
     }
-    /* Depending on what compile time options are used for pfring we either return 0 or -1 on error and always 1 for success */
+    /*
+     * Need to use default_packet_size as buff len; PF_RING does bounds checking here and truncates when the src buffer is larger than the user supplied buffer.
+     * Calling GET_PKT_DIRECT_MAX_SIZE() results in an off-by-one error, with the last byte in the payload being overwritten by the NULL terminator on a full frame.
+     */
#ifdef HAVE_PFRING_RECV_UCHAR
int r = pfring_recv(ptv->pd, (u_char**)&GET_PKT_DIRECT_DATA(p),
- (u_int)GET_PKT_DIRECT_MAX_SIZE(p),
+ (uint32_t)default_packet_size,
&hdr,
LIBPFRING_WAIT_FOR_INCOMING);
#else
int r = pfring_recv(ptv->pd, (char *)GET_PKT_DIRECT_DATA(p),
- (u_int)GET_PKT_DIRECT_MAX_SIZE(p),
+ (uint32_t)default_packet_size,
&hdr,
LIBPFRING_WAIT_FOR_INCOMING);
#endif /* HAVE_PFRING_RECV_UCHAR */
@@ -358,7 +362,7 @@ void ReceivePfringThreadExitStats(ThreadVars *tv, void *data) {
         SCLogInfo("(%s) Pfring Total:%" PRIu64 " Recv:%" PRIu64 " Drop:%" PRIu64 " (%02.1f%%).", tv->name,
             (uint64_t)pfring_s.recv + (uint64_t)pfring_s.drop, (uint64_t)pfring_s.recv,
-            (uint64_t)pfring_s.drop, ((float)pfring_s.drop/(float)(pfring_s.drop + pfring_s.recv))*100);
+            (uint64_t)pfring_s.drop, ((float)pfring_s.drop/(float)((uint64_t)pfring_s.drop + (uint64_t)pfring_s.recv))*100);
    }
}
--
1.7.1