
List:       ceph-users
Subject:    [ceph-users] How to enable jumbo frames on IPv6 only cluster?
From:       wido@42on.com (Wido den Hollander)
Date:       2017-10-30 10:29:29
Message-ID: 2018567548.3855.1509359369873@ox.pcextreme.nl


> On 30 October 2017 at 11:15, Félix Barbeira <fbarbeira at gmail.com> wrote:
> 
> 
> Oh BTW, I had to change back MTU to 1500 on the ceph-monitors because they
> didn't work with 9000. This is the output of the ansible-playbook:
> 
> TASK [ceph-mon : put initial mon keyring in mon kv store] ******************
> fatal: [ceph-monitor01]: FAILED! => {"changed": false, "cmd": ["ceph",
> "--cluster", "ceph", "config-key", "put", "initial_mon_keyring",
> "xxxxxxxxxxxxxxxxxxxxxxxxxx=="], "delta": "0:05:00.159094", "end":
> "2017-10-30 09:48:10.425012", "failed": true, "msg": "non-zero return
> code", "rc": 1, "start": "2017-10-30 09:43:10.265918", "stderr":
> "2017-10-30 09:48:10.395156 7fd314408700  0 monclient(hunting):
> authenticate timed out after 300\n2017-10-30 09:48:10.395197 7fd314408700
> 0 librados: client.admin authentication error (110) Connection timed
> out\n[errno 110] error connecting to the cluster", "stderr_lines":
> ["2017-10-30 09:48:10.395156 7fd314408700  0 monclient(hunting):
> authenticate timed out after 300", "2017-10-30 09:48:10.395197
> 7fd314408700  0 librados: client.admin authentication error (110)
> Connection timed out", "[errno 110] error connecting to the cluster"],
> "stdout": "", "stdout_lines": []}
> 
> Summarizing: the gateways and OSDs run with jumbo frames, the monitors do
> not. Maybe this isn't a problem, because the servers that handle most of
> the traffic are the OSDs and gateways.
> 

Seems like a different problem. I'm running multiple Ceph clusters on IPv6
with jumbo frames.

If the network is configured properly, an application running on top of TCP
doesn't know anything about the MTU of the link below.
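As a footnote on the SLAAC behaviour described further down in this thread:
changing the router's advertised MTU is one fix, but a Linux client can also
be told to ignore the MTU option in Router Advertisements. A sketch, assuming
the interface is eno1 and the kernel is 3.18 or newer (which is when the
accept_ra_mtu sysctl was added):

```shell
# Ignore the MTU option in Router Advertisements so SLAAC cannot
# override a locally configured MTU (requires kernel >= 3.18).
sysctl -w net.ipv6.conf.eno1.accept_ra_mtu=0

# Re-apply the jumbo MTU and verify the value the IPv6 stack uses.
ip link set dev eno1 mtu 9000
cat /proc/sys/net/ipv6/conf/eno1/mtu
```

When verifying with ping, mind the header overhead: IPv6 plus ICMPv6 headers
take 40 + 8 = 48 bytes, so '-s 8952' probes a 9000-byte MTU, while IPv4 plus
ICMP take 20 + 8 = 28 bytes, hence '-s 8972' on IPv4.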

Wido

> 
> 
> 2017-10-30 10:50 GMT+01:00 Félix Barbeira <fbarbeira at gmail.com>:
> 
> > Thanks Wido, it's fixed. I'll post the explanation here in case somebody
> > runs into the same error.
> > 
> > The MTU was set to 9000 on the client side. 'ifconfig' shows the
> > configured value, but querying the /proc filesystem directly shows the
> > following:
> > 
> > root at ceph-node03:~# cat /proc/sys/net/ipv6/conf/eno1/mtu
> > 1500
> > root at ceph-node03:~#
> > 
> > If I restart the interface it shows 9000 for a while and then changes
> > back to 1500. After some research it turns out that the router advertises
> > an MTU of 1500 in the SLAAC parameters, so when the Router Advertisement
> > is refreshed, the client applies the wrong value (1500).
> > 
> > The network guys changed the MTU parameter offered via SLAAC and now it's
> > working:
> > 
> > root at ceph-node03:~# cat /proc/sys/net/ipv6/conf/eno1/mtu
> > 9000
> > root at ceph-node03:~# ping6 -c 3 -M do -s 8952 ceph-node01
> > PING ceph-node01(2a02:x:x:x:x:x:x:x) 8952 data bytes
> > 8960 bytes from 2a02:x:x:x:x:x:x:x: icmp_seq=1 ttl=64 time=0.271 ms
> > 8960 bytes from 2a02:x:x:x:x:x:x:x: icmp_seq=2 ttl=64 time=0.216 ms
> > 8960 bytes from 2a02:x:x:x:x:x:x:x: icmp_seq=3 ttl=64 time=0.280 ms
> > 
> > --- ceph-node01 ping statistics ---
> > 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
> > rtt min/avg/max/mdev = 0.216/0.255/0.280/0.033 ms
> > root at ceph-node03:~#
> > 
> > 
> > 2017-10-27 16:02 GMT+02:00 Wido den Hollander <wido at 42on.com>:
> > 
> > > 
> > > > On 27 October 2017 at 14:22, Félix Barbeira <fbarbeira at gmail.com> wrote:
> > > > Hi,
> > > > 
> > > > I'm trying to configure a Ceph cluster using IPv6 only, but I can't
> > > > enable jumbo frames. I set the MTU in the 'interfaces' file and the
> > > > value seems to be applied, but when I test it, it only works on IPv4,
> > > > not IPv6.
> > > > 
> > > > It works on IPv4:
> > > > 
> > > > root at ceph-node01:~# ping -c 3 -M do -s 8972 ceph-node02
> > > > 
> > > > PING ceph-node02 (x.x.x.x) 8972(9000) bytes of data.
> > > > 8980 bytes from ceph-node02 (x.x.x.x): icmp_seq=1 ttl=64 time=0.474 ms
> > > > 8980 bytes from ceph-node02 (x.x.x.x): icmp_seq=2 ttl=64 time=0.254 ms
> > > > 8980 bytes from ceph-node02 (x.x.x.x): icmp_seq=3 ttl=64 time=0.288 ms
> > > > 
> > > 
> > > Verify with Wireshark/tcpdump if it really sends 9k packets. I doubt it.
> > > 
> > > > --- ceph-node02 ping statistics ---
> > > > 3 packets transmitted, 3 received, 0% packet loss, time 2000ms
> > > > rtt min/avg/max/mdev = 0.254/0.338/0.474/0.099 ms
> > > > 
> > > > root at ceph-node01:~#
> > > > 
> > > > But *not* in IPv6:
> > > > 
> > > > root at ceph-node01:~# ping6 -c 3 -M do -s 8972 ceph-node02
> > > > PING ceph-node02(x:x:x:x:x:x:x:x) 8972 data bytes
> > > > ping: local error: Message too long, mtu=1500
> > > > ping: local error: Message too long, mtu=1500
> > > > ping: local error: Message too long, mtu=1500
> > > > 
> > > 
> > > Like Ronny already mentioned, check the switches and the receiver. There
> > > is a 1500 MTU somewhere configured.
> > > 
> > > Wido
> > > 
> > > > --- ceph-node02 ping statistics ---
> > > > 4 packets transmitted, 0 received, +4 errors, 100% packet loss, time
> > > 3024ms
> > > > 
> > > > root at ceph-node01:~#
> > > > 
> > > > 
> > > > 
> > > > root at ceph-node01:~# ifconfig
> > > > eno1      Link encap:Ethernet  HWaddr 24:6e:96:05:55:f8
> > > > inet6 addr: 2a02:x:x:x:x:x:x:x/64 Scope:Global
> > > > inet6 addr: fe80::266e:96ff:fe05:55f8/64 Scope:Link
> > > > UP BROADCAST RUNNING MULTICAST  *MTU:9000*  Metric:1
> > > > RX packets:633318 errors:0 dropped:0 overruns:0 frame:0
> > > > TX packets:649607 errors:0 dropped:0 overruns:0 carrier:0
> > > > collisions:0 txqueuelen:1000
> > > > RX bytes:463355602 (463.3 MB)  TX bytes:498891771 (498.8 MB)
> > > > 
> > > > lo        Link encap:Local Loopback
> > > > inet addr:127.0.0.1  Mask:255.0.0.0
> > > > inet6 addr: ::1/128 Scope:Host
> > > > UP LOOPBACK RUNNING  MTU:65536  Metric:1
> > > > RX packets:127420 errors:0 dropped:0 overruns:0 frame:0
> > > > TX packets:127420 errors:0 dropped:0 overruns:0 carrier:0
> > > > collisions:0 txqueuelen:1
> > > > RX bytes:179470326 (179.4 MB)  TX bytes:179470326 (179.4 MB)
> > > > 
> > > > root at ceph-node01:~#
> > > > 
> > > > root at ceph-node01:~# cat /etc/network/interfaces
> > > > # This file describes network interfaces available on your system
> > > > # and how to activate them. For more information, see interfaces(5).
> > > > 
> > > > source /etc/network/interfaces.d/*
> > > > 
> > > > # The loopback network interface
> > > > auto lo
> > > > iface lo inet loopback
> > > > 
> > > > # The primary network interface
> > > > auto eno1
> > > > iface eno1 inet6 auto
> > > > post-up ifconfig eno1 mtu 9000
> > > > root at ceph-node01:#
> > > > 
> > > > 
> > > > Please help!
> > > > 
> > > > --
> > > > Félix Barbeira.
> > > > _______________________________________________
> > > > ceph-users mailing list
> > > > ceph-users at lists.ceph.com
> > > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > > 
> > 
> > 
> > 
> > --
> > Félix Barbeira.
> > 
> 
> 
> 
> --
> Félix Barbeira.


