
List:       openbsd-misc
Subject:    Re: relayd for TLS termination
From:       David Higgs <higgsd@gmail.com>
Date:       2018-05-30 13:29:46
Message-ID: CAFQvAdfnAaimC2-+3hkaN1JdLyAWAmMTiPiOi=0PWzueaM+pLA@mail.gmail.com

On Sat, Apr 28, 2018 at 2:12 PM, David Higgs <higgsd@gmail.com> wrote:
> On Sat, Apr 28, 2018 at 11:33 AM, David Higgs <higgsd@gmail.com> wrote:
>> On Sat, Apr 28, 2018 at 9:58 AM, Claudio Jeker <cjeker@diehard.n-r-g.com> wrote:
>>> On Sat, Apr 28, 2018 at 09:39:56AM -0400, David Higgs wrote:
>>>> I run several services on the same host and would like to consolidate
>>>> certificate management with the help of relayd.
>>>>
>>>> Before:
>>>> - acme-client generates certificates via LE
>>>> - kibana running https on port 5601
>>>> - unifi running https on port 8443
>>>> - httpd running http+https on port 80
>>>> - daily.local script to install new certs and restart all services
>>>> when LE updates
>>>>
>>>> After:
>>>> - register new LE domains for kibana and unifi
>>>> - switch kibana and unifi back to running http on localhost
>>>> - relayd transparently terminates all https and demuxes to http
>>>> service based on Host header
>>>> - daily.local has far fewer services to manage
>>>>
>>>> First off, is this even possible with relayd?
>>>
>>> More or less. relayd does not do SNI, so you need a different IP per
>>> hostname, or rather per certificate. Complicated rule-based relays do
>>> not work all that well, so try to keep it simple: one port, one
>>> service.
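
(For illustration, a rough sketch of that per-address layout; the names,
addresses and backend ports below are made up, and each relay loads the
keypair for its own listen address, i.e. /etc/ssl/192.0.2.10.crt and
/etc/ssl/private/192.0.2.10.key for the first one:)

http protocol sharedproto {
        # one protocol block can be referenced by several relays
        return error
}

relay sitea {
        listen on 192.0.2.10 port 443 tls
        protocol sharedproto
        forward to 127.0.0.1 port 8080
}

relay siteb {
        listen on 192.0.2.11 port 443 tls
        protocol sharedproto
        forward to 127.0.0.1 port 8081
}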
>>
>> Single IP, one hostname per port (and thereby service), in 1:1
>> correspondence.  Hostnames will all be aliases on the same LE cert, so
>> it seems like SNI is not a problem.
>>
>>>> Second, I am having difficulty grokking how to structure my
>>>> relayd.conf.  Will I need one relay and protocol block for EACH
>>>> service?  Do I need a pf.conf anchor if I am only using relay
>>>> behavior?
>>>
>>> Depends. You may be able to just use one 'http protocol' block that is
>>> referenced by multiple relays. It depends on the config.
>>> I think the pf.conf anchor is required even if you are not using
>>> redirects (I assume that relayd would even refuse to start without the
>>> anchor).
>>
>> My pf.conf is a bit complex with tag usage, but I definitely wasn't
>> using the pf anchor.  (Not sure if this is a bug?)
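
(For reference, the anchor relayd looks for is the standard one; a single
line in pf.conf covers it:)

# pf.conf: give relayd its anchor so it can insert rules when needed
anchor "relayd/*"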
>>
>>>> Lastly, and perhaps indicative of my difficulties, I am having
>>>> trouble building (or debugging) even a single host as a
>>>> proof of concept using the config below.  The relayd daemon starts
>>>> just fine, loading symlinked <addr>.crt and <addr>.key files.  (Should
>>>> I be using the fullchain.pem instead?)
>>>
>>> Yes, you should use a full-chain certificate, otherwise there is no
>>> trust anchor for the clients.
>>
>> I was concerned that relayd might not grok PEM files - all the examples
>> use .crt extensions.
>>
>>>> Behavior seems to vary based on client / environment - I have seen
>>>> both wget and curl complain about certificate verification (relaying
>>>> to :80), while curl on a different box reported an empty reply from
>>>> the server after timeout (relaying to 127.0.0.1:80).
>>>>
>>>> Hints or clue sticks would be most appreciated.
>>>>
>>>> --david
>>>>
>>>> ### relayd.conf
>>>>
>>>> http protocol wwwproto {
>>>>         tcp { nodelay, sack, socket buffer 65536, backlog 128 }
>>>
>>> Honestly most of this tuning is not helpful. sack and backlog may be OK,
>>> but changing the socket buffer in particular will disable the automatic
>>> socket buffer scaling and leave you with a much smaller buffer than the
>>> default.
>>
>> I'm not concerned about scale or performance; it was just present in
>> the example/relayd.conf.
>>
>>>>         # seen in example, not sure of purpose
>>>>         match request header set "Connection" value "close"
>>>
>>> This tells the server to close the connection after each request, so no
>>> keep-alive happens. In some cases this is needed, especially when
>>> multiple backends are used in match or pass rules.
>>
>> Same as above.
>>
>>>>         # notify client if relay failed
>>>>         return error
>>>>         # reject unknown hosts by default
>>>>         block
>>>>         # traffic for httpd, forward
>>>>         pass request header "Host" value "example.com"
>>>>         pass request header "Host" value "www.example.com"
>>>
>>> I'm not sure why you do this. In general I leave the Host parsing to the
>>> backend servers. Also I think Host may include the port number if it is
>>> not a default port.
>>
>> This is because I want relayd to demux the service/port based on the
>> "Host" header.  I mainly hope to accomplish something like the
>> following, since httpd(8) doesn't support proxying.
>>
>> tls on port 443 w/ "Host: unifi.example.com" => localhost port 8443, no tls
>> tls on port 443 w/ "Host: kibana.example.com" => localhost port 5601, no tls
>> tls on port 443 w/ "Host: www.example.com" => localhost port 80, no tls
>> anything else => error
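
(Sketching that out, roughly; the tables, backend ports and relay name are
assumptions, and since 443 is the default https port the clients should be
sending bare hostnames in Host here:)

table <unifi> { 127.0.0.1 }
table <kibana> { 127.0.0.1 }
table <www> { 127.0.0.1 }

http protocol demuxproto {
        # reject anything that does not match a known Host
        return error
        block
        pass request header "Host" value "unifi.example.com" forward to <unifi>
        pass request header "Host" value "kibana.example.com" forward to <kibana>
        pass request header "Host" value "www.example.com" forward to <www>
}

relay demuxrelay {
        listen on em1 port 443 tls
        protocol demuxproto
        forward to <unifi> port 8443
        forward to <kibana> port 5601
        forward to <www> port 80
}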
>>
>>>> }
>>>>
>>>> relay wwwrelay {
>>>>         listen on em1 port 443 tls
>>>>         protocol wwwproto
>>>>         transparent forward to lo port http
>>>
>>> On high-volume servers I would not use transparent forwarding but instead
>>> set the X-Forwarded-For header. Also, transparent forwarding needs help
>>> from pf.
>>
>> I was mainly looking to use default log configuration on my services.
>>
>> This gives me plenty to work with; will experiment and report back, thanks.
>
> I should have known that .crt and .pem files are the same format.
> Softlinking to the fullchain file worked just great.
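
(For anyone else doing the same, the links end up looking roughly like
this; the source paths are hypothetical and depend on where your
acme-client.conf puts the full chain, while relayd picks the keypair by
listen address:)

ln -sf /etc/ssl/example.com.fullchain.pem /etc/ssl/10.0.0.1.crt
ln -sf /etc/ssl/private/example.com.key /etc/ssl/private/10.0.0.1.key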
>
> After more searching I stumbled upon this post, which was very similar
> to my goal.
> https://marc.info/?l=openbsd-misc&m=150660134614388&w=2
>

Just to follow up, below is my current approach - I found it a bit
surprising that the port number was required to correctly match the
"Host" header.

I ended up not proxying httpd(8), since I still need it to 301
redirect actual port 80 traffic to 443.
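
(The port 80 side stays a plain httpd(8) server block, roughly the usual
redirect; the acme-challenge location here is an assumption about the
existing setup:)

# httpd.conf
server "www.example.com" {
        listen on * port 80
        location "/.well-known/acme-challenge/*" {
                root "/acme"
                request strip 2
        }
        location "*" {
                block return 301 "https://$HTTP_HOST$REQUEST_URI"
        }
}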

And after sthen@ noted that he is using nginx rather than relayd with
unifi, I haven't bothered proxying that service.  Google tells me that
it uses websockets, and that relayd(8) doesn't support proxying those
(yet).  Maybe someday? :)

Thanks again.

--david

# relayd.conf
table <kibana> { lo }
http protocol tlsproto {
        match request header append \
                "X-Forwarded-For" value "$REMOTE_ADDR"
        match request header append \
                "X-Forwarded-By" value "$SERVER_ADDR:$SERVER_PORT"
        return error
        block
        pass request header "Host" value \
                "kibana.example.com:5601" forward to <kibana>
}
relay tlsrelay {
        listen on 10.0.0.1 port 5601 tls
        listen on fc00::1 port 5601 tls
        protocol tlsproto
        forward to <kibana> port 5601
}
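
(After any edit, the usual check-then-reload:)

# verify the config parses, then load it into the running daemon
relayd -n
relayctl reload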
