
List:       openbsd-pf
Subject:    scrub in and out, IP fragmentation and mismatched MTUs
From:       Jon Hart <warchild () spoofed ! org>
Date:       2005-09-23 20:36:26
Message-ID: 20050923203626.GF12388 () spoofed ! org

Greetings,

This was mostly my fault, but I saw some behavior in pf that I could not
explain.  This is with two OpenBSD 3.8 snapshots from about 8/23 and a
number of Linux clients hanging off four different subnets on the
firewalls.

The application in question is JBoss.  By default, it will wait until
N bytes have been accumulated, then cram those into a UDP packet and
send it on its way.  As expected, this packet is often larger than the
MTU, so fragmentation occurs.  This by itself is not a problem with pf
-- enable scrub and the fragments get reassembled, and stateful filtering
can still happen.
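
To put rough numbers on it (purely illustrative figures, not
measurements from my setup): an 8000-byte UDP payload plus the 8-byte
UDP header is 8008 bytes of IP payload.  On a 1500-byte MTU link each
fragment can carry at most 1480 bytes of that (1500 minus the 20-byte
IP header), so the datagram leaves as

   8008 bytes = 5 fragments x 1480 bytes + 1 fragment x 608 bytes

i.e. 6 fragments, whereas behind a 9000-byte MTU the same datagram fits
in a single packet.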

The problem I was seeing was that especially large UDP packets were not
necessarily coming out the other side of the firewall the way I had
expected.  Some came through whole, others were still fragmented.

Here is where my screwups came into play.  First, I'm using:

   scrub all no-df random-id fragment reassemble reassemble tcp

This reassembles fragments in both directions, right?  So after a packet
has been reassembled and run through stateful filtering, what happens?
Does it get sent out over the other interface and on to the next hop
fragmented or unfragmented?  Does this depend on the MTU?  Is there any
particular reason to only 'scrub in'?  If 'scrub out' is on, does that
simply mean that the fragments are *again* reassembled before being
applied to stateful outbound rules, or are they also sent out
reassembled if possible?
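
For concreteness, the 'scrub in' only variant I have in mind would be
something like the following (just a sketch; $ext_if is a made-up macro
for the external interface):

   # reassemble only inbound traffic; leave outbound alone
   scrub in all no-df random-id fragment reassemble reassemble tcp
   # or, restricted to one interface
   scrub in on $ext_if all fragment reassemble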

To further complicate things, the firewalls had jumbo frames enabled
with a 9000-byte MTU, as did the switches, but the clients were stuck at
1500 because of switch issues.  Matching the MTUs and leaving pf.conf
as-is fixed the problem immediately.
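
For reference, the knob involved there is just the per-interface MTU,
something along the lines of (em0 is a stand-in for the real interface
name):

   # set the client-facing interface to the clients' 1500-byte MTU
   ifconfig em0 mtu 1500

plus the corresponding entry in /etc/hostname.em0 so it sticks across
reboots.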

Some things make sense here, some do not.  Thoughts?  Did pf perform as
expected?

-jon



