List: quagga-dev
Subject: [quagga-dev 8959] Re: Strange logs
From: Sami Halabi <sodynet1 () gmail ! com>
Date: 2011-11-27 22:52:00
Message-ID: CAEW+ogapQwC2p0CSqp54wiJXHjvyLOeK0sciLE5FKravfbBWZg () mail ! gmail ! com
Hi,
I don't have the "interrupts" file or the "irq" dir in my /proc.
Actually /proc wasn't there at all; I mounted it manually:
mount -t procfs procfs /proc
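For reference, FreeBSD's procfs does not provide Linux-style /proc/interrupts or /proc/irq entries even once mounted; the rough FreeBSD equivalents are `vmstat -i` (interrupt counters) and `cpuset -l <cpu> -x <irq>` (pin an interrupt to a CPU). A sketch below runs the sorting step on hardcoded sample lines, since the real commands are FreeBSD-only; the irq numbers and device names are made-up examples:

```shell
# Two sample lines in the shape of `vmstat -i` output: source, device,
# total count, rate. On a real box, pipe `vmstat -i` instead.
vmstat_sample='irq257: ix0 1234567 950
irq258: ix1 234567 180'

# Sort numerically on the total-count column to find the hottest source.
printf '%s\n' "$vmstat_sample" | sort -k3 -rn | head -n 1
# -> irq257: ix0 1234567 950

# To move that interrupt to CPU 2 (FreeBSD, as root):
#   cpuset -l 2 -x 257
```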
Sami
On Sun, Nov 27, 2011 at 10:26 PM, Stig <stig@ubnt.com> wrote:
> On Sun, Nov 27, 2011 at 2:39 AM, Sami Halabi <sodynet1@gmail.com> wrote:
> > Hi Stig,
> > how do I do that?
>
> Well, my home router isn't the best example since it doesn't have
> multi-queue NICs, but if you look at the NICs in /proc/interrupts I
> have:
>
> root@v600:/home/vyatta# cat /proc/interrupts
>          CPU0       CPU1       CPU2       CPU3
>  43:        1          1        619          1   PCI-MSI-edge   eth1
>  44:        3     104898          1          2   PCI-MSI-edge   eth2
>  45:        0          0          1        576   PCI-MSI-edge   eth3
>  46:      575          1          1          1   PCI-MSI-edge   eth4
>  47:        7          9      16016          7   PCI-MSI-edge   eth5
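As a quick way to total those per-CPU columns, a one-liner sketch assuming the Linux /proc/interrupts layout shown above (IRQ number, one count per CPU, then chip and device name as the last two fields):

```shell
# Sum the per-CPU counts for each eth* interrupt line; fields 2..NF-2
# are the counters, the last field is the device name.
awk '/eth/ { s = 0; for (i = 2; i <= NF-2; i++) s += $i; print $NF, s }' /proc/interrupts
```

On the eth1 row above this prints `eth1 622` (1 + 1 + 619 + 1).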
>
>
> You can see the interrupts are spread across the 4 CPUs. To see the
> smp_affinity for each interrupt:
>
> root@v600:/home/vyatta# cat /proc/irq/43/smp_affinity
> 4
> root@v600:/home/vyatta# cat /proc/irq/44/smp_affinity
> 2
> root@v600:/home/vyatta# cat /proc/irq/45/smp_affinity
> 8
> root@v600:/home/vyatta# cat /proc/irq/46/smp_affinity
> 1
> root@v600:/home/vyatta# cat /proc/irq/47/smp_affinity
> 4
>
> To change the smp_affinity, you can echo whatever CPU bitmask you
> want into the same proc entry.
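The mask is a hexadecimal number with bit N set for each CPU N allowed to service the IRQ. A sketch of computing one (IRQ 43 is taken from the listing above; the write itself is Linux-only and needs root):

```shell
# Bit N set => CPU N may service the IRQ.
# CPU 2 alone: 1 << 2 = 0x4.  CPUs 0 and 1 together: 0x3.
mask=$(printf '%x' $((1 << 2)))
echo "$mask"   # -> 4

# Apply it (Linux, as root):
#   echo "$mask" > /proc/irq/43/smp_affinity
```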
>
> stig
>
>
>
>
> > Thanks in advance,
> > Sami
> >
> >
> > On Sun, Nov 27, 2011 at 1:33 AM, Stig <stig@ubnt.com> wrote:
> >>
> >> If you have a multi-core system, you could change the interrupt
> >> smp-affinity so that some core/thread is not used for handling network
> >> interrupts.
> >>
> >> On Sat, Nov 26, 2011 at 11:35 AM, Sami Halabi <sodynet1@gmail.com>
> wrote:
> >> > Hi,
> >> > Lately I see strange messages in my /var/log; I use FreeBSD 8.1-RELEASE
> >> > with Quagga 0.99.17:
> >> > Nov 26 19:28:50 bgpServer watchquagga[1394]: zebra state -> unresponsive : no response yet to ping sent 10 seconds ago
> >> > Nov 26 19:28:57 bgpServer zebra[88251]: SLOW THREAD: task zebra_client_read (102aae0) ran for 21313ms (cpu time 1ms)
> >> > Nov 26 19:28:57 bgpServer watchquagga[1394]: zebra: slow echo response finally received after 16.449412 seconds
> >> > Nov 26 19:29:12 bgpServer watchquagga[1394]: zebra state -> unresponsive : no response yet to ping sent 10 seconds ago
> >> > Nov 26 19:29:12 bgpServer watchquagga[1394]: bgpd state -> unresponsive : no response yet to ping sent 10 seconds ago
> >> > Nov 26 19:29:15 bgpServer zebra[88251]: SLOW THREAD: task zebra_client_read (102aae0) ran for 17260ms (cpu time 0ms)
> >> > Nov 26 19:29:15 bgpServer watchquagga[1394]: zebra: slow echo response finally received after 13.707425 seconds
> >> > Nov 26 19:29:15 bgpServer bgpd[79850]: SLOW THREAD: task bgp_read (10675d0) ran for 17095ms (cpu time 5ms)
> >> > Nov 26 19:29:15 bgpServer watchquagga[1394]: bgpd: slow echo response finally received after 13.700411 seconds
> >> > Nov 26 19:29:30 bgpServer watchquagga[1394]: bgpd state -> unresponsive : no response yet to ping sent 10 seconds ago
> >> > Nov 26 19:29:30 bgpServer watchquagga[1394]: zebra state -> unresponsive : no response yet to ping sent 10 seconds ago
> >> > Nov 26 19:29:34 bgpServer bgpd[79850]: SLOW THREAD: task bgp_read (10675d0) ran for 16281ms (cpu time 0ms)
> >> > Nov 26 19:29:34 bgpServer zebra[88251]: SLOW THREAD: task zebra_client_read (102aae0) ran for 16276ms (cpu time 0ms)
> >> > Nov 26 19:29:34 bgpServer watchquagga[1394]: bgpd: slow echo response finally received after 13.585467 seconds
> >> > Nov 26 19:29:34 bgpServer watchquagga[1394]: zebra: slow echo response finally received after 13.496174 seconds
> >> > Nov 26 19:29:49 bgpServer watchquagga[1394]: bgpd state -> unresponsive : no response yet to ping sent 10 seconds ago
> >> > Nov 26 19:29:49 bgpServer watchquagga[1394]: zebra state -> unresponsive : no response yet to ping sent 10 seconds ago
> >> > Nov 26 19:29:54 bgpServer watchquagga[1394]: Forked background command [pid 79871]: /usr/local/etc/quagga/service bgpd stop
> >> > Nov 26 19:30:00 bgpServer watchquagga[1394]: bgpd state -> down : read returned EOF
> >> > Nov 26 19:30:00 bgpServer watchquagga[1394]: Forked background command [pid 79875]: /usr/local/etc/quagga/service zebra restart
> >> > Nov 26 19:30:01 bgpServer watchquagga[1394]: Forked background command [pid 79879]: /usr/local/etc/quagga/service bgpd start
> >> > Nov 26 19:30:01 bgpServer watchquagga[1394]: Phased global restart has completed.
> >> > Nov 26 19:30:01 bgpServer zebra[88251]: SLOW THREAD: task zebra_client_read (102aae0) ran for 24850ms (cpu time 0ms)
> >> > Nov 26 19:30:01 bgpServer watchquagga[1394]: zebra: slow echo response finally received after 21.684171 seconds
> >> > Nov 26 19:30:01 bgpServer bgpd[79885]: BGPd 0.99.17 starting: vty@2605, bgp@<all>:179
> >> > Nov 26 19:30:05 bgpServer watchquagga[1394]: bgpd state -> up : connect succeeded
> >> > Nov 26 19:30:06 bgpServer watchquagga[1394]: zebra state -> up : echo response received after 0.065503 seconds
> >> > Nov 26 19:30:21 bgpServer watchquagga[1394]: zebra state -> unresponsive : no response yet to ping sent 10 seconds ago
> >> > Nov 26 19:30:35 bgpServer zebra[88251]: SLOW THREAD: task zebra_client_read (102aae0) ran for 24767ms (cpu time 43ms)
> >> > Nov 26 19:30:35 bgpServer watchquagga[1394]: zebra: slow echo response finally received after 24.512305 seconds
> >> > Nov 26 19:30:40 bgpServer watchquagga[1394]: zebra state -> up : echo response received after 0.004419 seconds
> >> >
> >> > Lately I pass 2-3 GB of traffic daily; maybe that's related? Moreover, I have
> >> > a dual-port 10G card based on the Intel 82599EB, connected to a 10G switch.
> >> > Why do these messages appear? Does it mean something is wrong?
> >> > /etc/sysctl.conf
> >> > net.inet.flowtable.enable=0
> >> > net.inet.ip.fastforwarding=1
> >> > kern.ipc.somaxconn=8192
> >> > kern.ipc.shmmax=2147483648
> >> > kern.ipc.maxsockets=204800
> >> > kern.ipc.maxsockbuf=262144
> >> > kern.maxfiles=256000
> >> > kern.maxfilesperproc=230400
> >> > net.inet.ip.dummynet.pipe_slot_limit=1000
> >> > net.inet.ip.dummynet.io_fast=1
> >> > #10Gb sysctls
> >> > hw.intr_storm_threshold=9000
> >> > kern.ipc.nmbclusters=262144
> >> > kern.ipc.nmbjumbop=262144
> >> > dev.ix.0.rx_processing_limit=4096
> >> > dev.ix.1.rx_processing_limit=4096
> >> >
> >> > Thanks in advance,
> >> > --
> >> > Sami Halabi
> >> > Information Systems Engineer
> >> > NMS Projects Expert
> >> >
> >> > _______________________________________________
> >> > Quagga-dev mailing list
> >> > Quagga-dev@lists.quagga.net
> >> > http://lists.quagga.net/mailman/listinfo/quagga-dev
> >> >
> >> >
> >
> >
> >
> >
> > --
> > Sami Halabi
> > Information Systems Engineer
> > NMS Projects Expert
> >
>
--
Sami Halabi
Information Systems Engineer
NMS Projects Expert
_______________________________________________
Quagga-dev mailing list
Quagga-dev@lists.quagga.net
http://lists.quagga.net/mailman/listinfo/quagga-dev