List: security-onion
Subject: [security-onion] Re: Unexpected shutdown, everything working except elsa main::_sql_error_handler
From: Wes <wlambertts () gmail ! com>
Date: 2017-04-28 19:22:49
Message-ID: fd61fd5d-27f0-4a70-b965-c7b9471ca5b6 () googlegroups ! com
On Wednesday, April 26, 2017 at 2:56:11 PM UTC-6, Will B wrote:
> Server suffered a power outage
> Everything came back up normally
> Elsa has not logged since.
>
> Below is my sostat output.
>
> Have a lot of ELSA data... would rather not have to use elsa-reset if not
> needed. Wondering if anyone else has had this issue.
>
> Thanks!
>
>
>
>
> Errors from /nsm/elsa/data/elsa/log/node.log:
>
>
> /opt/elsa/node/elsa.pl (214) main::_sql_error_handler 12078 [undef]
> FATAL: failed to load header: failed to open /nsm/elsa/data/sphinx/temp_101.sph: No such file or directory.
> (The same FATAL error repeats for several different temp_*.sph files.)
>
> WARNING: failed to scanf pid from pid_file '/var/run/sphinxsearch/searchd.pid'.
> WARNING: indices NOT rotated.
> /opt/elsa/node/Indexer.pm (350) Indexer::initial_validate_directory 2025 [undef]
> * ERROR /opt/elsa/node/Indexer.pm (3060) Indexer::_get_index_schema 2025 [undef]
>
>
> =========================================================================
> CPU Usage
> =========================================================================
> Load average for the last 1, 5, and 15 minutes:
> 2.50 2.71 2.81
> Processing units: 16
> If load average is higher than processing units,
> then tune until load average is lower than processing units.
>
> top - 20:51:44 up 6:25, 2 SO-users, load average: 2.50, 2.71, 2.81
> Tasks: 446 total, 3 running, 436 sleeping, 7 stopped, 0 zombie
> %Cpu(s): 16.0 us, 4.1 sy, 0.0 ni, 79.4 id, 0.2 wa, 0.0 hi, 0.2 si, 0.0 st
> KiB Mem: 98993424 total, 97121960 used, 1871468 free, 268044 buffers
> KiB Swap: 10064793+total, 0 used, 10064793+free. 64502416 cached Mem
>
>
> /nsm/sensor_data/SO-server-eth5/snort-6.stats last reported pkt_drop_percent as 0.000
> /nsm/sensor_data/SO-server-eth7/snort-1.stats last reported pkt_drop_percent as 0.000
> /nsm/sensor_data/SO-server-eth7/snort-2.stats last reported pkt_drop_percent as 0.000
> /nsm/sensor_data/SO-server-eth7/snort-3.stats last reported pkt_drop_percent as 0.000
> /nsm/sensor_data/SO-server-eth7/snort-4.stats last reported pkt_drop_percent as 0.000
> /nsm/sensor_data/SO-server-eth7/snort-5.stats last reported pkt_drop_percent as 0.000
> /nsm/sensor_data/SO-server-eth7/snort-6.stats last reported pkt_drop_percent as 0.000
> /nsm/sensor_data/SO-server-eth9/snort-1.stats through snort-6.stats last reported
> pkt_drop_percent as long runs of NUL bytes (^@), i.e. those stats files are
> truncated/corrupted.
>
> -------------------------------------------------------------------------
>
> Bro:
>
> Average packet loss as percent across all Bro workers: 0.000001
>
> SO-server-eth3-1: 1493239905.512813 recvd=31445514 dropped=0 link=31445514
> SO-server-eth3-2: 1493239905.712769 recvd=63963465 dropped=0 link=63963465
> SO-server-eth3-3: 1493239905.912785 recvd=22599650 dropped=0 link=22599650
> SO-server-eth5-1: 1493239906.116805 recvd=33574297 dropped=1 link=33574297
> SO-server-eth5-2: 1493239906.316814 recvd=37678472 dropped=1 link=37678472
> SO-server-eth5-3: 1493239906.516817 recvd=27992660 dropped=0 link=27992660
> SO-server-eth9-1: 1493239906.720766 recvd=54694960 dropped=0 link=54694960
> SO-server-eth9-2: 1493239906.916776 recvd=17556132 dropped=0 link=17556132
> SO-server-eth9-3: 1493239907.116790 recvd=14473974 dropped=0 link=14473974
>
> Capture Loss:
>
> SO-server-eth3-1 0.0
> SO-server-eth3-1 0.001555
> SO-server-eth3-2 0.0
> SO-server-eth3-2 0.001712
> SO-server-eth3-2 0.002506
> SO-server-eth3-3 0.0
> SO-server-eth3-3 0.004229
> SO-server-eth5-1 0.673911
> SO-server-eth5-1 0.849238
> SO-server-eth5-1 8.479934
> SO-server-eth5-2 0.677586
> SO-server-eth5-2 1.069891
> SO-server-eth5-2 3.123642
> SO-server-eth5-3 0.177534
> SO-server-eth5-3 0.270247
> SO-server-eth5-3 1.483378
> SO-server-eth9-1 1.876353
> SO-server-eth9-1 3.121698
> SO-server-eth9-1 8.019882
> SO-server-eth9-2 2.124475
> SO-server-eth9-2 3.978378
> SO-server-eth9-2 9.460837
> SO-server-eth9-3 1.16879
> SO-server-eth9-3 1.575765
> SO-server-eth9-3 7.083802
>
> If you are seeing capture loss without dropped packets, this
> may indicate that an upstream device is dropping packets (tap or SPAN port).
>
> -------------------------------------------------------------------------
> Netsniff-NG:
> File: /var/log/nsm/SO-server-eth5/netsniff-ng.log.20170425190331 Processed: +253313 Lost: -1893
> File: /var/log/nsm/SO-server-eth7/netsniff-ng.log Processed: +224808 Lost: -5102
> File: /var/log/nsm/SO-server-eth7/netsniff-ng.log.20160305000005 Processed: +401335 Lost: -2981
> =========================================================================
> PF_RING
> =========================================================================
> PF_RING Version : 6.4.1 (unknown)
> Total rings : 21
>
> Standard (non ZC) Options
> Ring slots : 65500
> Slot version : 16
> Capture TX : Yes [RX+TX]
> IP Defragment : No
> Socket Mode : Standard
> Total plugins : 0
> Cluster Fragment Queue : 0
> Cluster Fragment Discard : 0
>
> =========================================================================
> Log Archive
> =========================================================================
> /nsm/sensor_data/SO-server-eth0/dailylogs/ - 0 days
> 4.0K .
>
> /nsm/sensor_data/SO-server-eth1/dailylogs/ - 0 days
> 4.0K .
>
> /nsm/sensor_data/SO-server-eth2/dailylogs/ - 0 days
> 4.0K .
>
> /nsm/sensor_data/SO-server-eth3/dailylogs/ - 3 days
> 731G .
> 273G ./2017-04-24
> 234G ./2017-04-25
> 225G ./2017-04-26
>
> /nsm/sensor_data/SO-server-eth4/dailylogs/ - 0 days
> 4.0K .
>
> /nsm/sensor_data/SO-server-eth5/dailylogs/ - 3 days
> 436G .
> 161G ./2017-04-24
> 155G ./2017-04-25
> 121G ./2017-04-26
>
> /nsm/sensor_data/SO-server-eth6/dailylogs/ - 0 days
> 4.0K .
>
> /nsm/sensor_data/SO-server-eth7/dailylogs/ - 0 days
> 4.0K .
>
> /nsm/sensor_data/SO-server-eth8/dailylogs/ - 0 days
> 4.0K .
>
> /nsm/sensor_data/SO-server-eth9/dailylogs/ - 0 days
> 4.0K .
>
> /nsm/bro/logs/ - 5 days
> 3.4G .
> 378M ./2017-04-22
> 354M ./2017-04-23
> 671M ./2017-04-24
> 304M ./2017-04-25
> 630M ./2017-04-26
> 1.1G ./stats
> =========================================================================
> Last update
> =========================================================================
>
> Start-Date: 2017-04-25 21:03:21
> Commandline: apt-get -y remove --purge linux-image-3.13.0-95-generic \
> linux-headers-3.13.0-95-generic
> Purge: linux-image-extra-3.13.0-95-generic:amd64 (3.13.0-95.142), \
> linux-image-3.13.0-95-generic:amd64 (3.13.0-95.142), \
> linux-headers-3.13.0-95-generic:amd64 (3.13.0-95.142)
> End-Date: 2017-04-25 21:03:41
>
> Start-Date: 2017-04-26 16:23:50
> Commandline: apt-get -y dist-upgrade
> Upgrade: chromium-codecs-ffmpeg:amd64 (53.0.2785.143-0ubuntu0.14.04.1.1145, \
> 58.0.3029.81-0ubuntu0.14.04.1172), chromium-browser-l10n:amd64 \
> (53.0.2785.143-0ubuntu0.14.04.1.1145, 58.0.3029.81-0ubuntu0.14.04.1172), \
> chromium-browser:amd64 (53.0.2785.143-0ubuntu0.14.04.1.1145, \
> 58.0.3029.81-0ubuntu0.14.04.1172)
> End-Date: 2017-04-26 16:24:07
>
> =========================================================================
> ELSA
> =========================================================================
> Syslog-ng
> Checking for process:
> 2611 /usr/sbin/syslog-ng -p /var/run/syslog-ng.pid
> Checking for connection:
> Connection to localhost 514 port [tcp/shell] succeeded!
>
> MySQL
> Checking for process:
> 2482 /usr/sbin/mysqld
> Checking for connection:
> Connection to localhost 3306 port [tcp/mysql] succeeded!
>
> Sphinx
> Checking for process:
> 2763 su -s /bin/sh -c exec "$0" "$@" sphinxsearch -- /usr/bin/searchd --nodetach
> 2771 /usr/bin/searchd --nodetach
> Checking for connection:
> Connection to localhost 9306 port [tcp/*] succeeded!
>
> ELSA Buffers in Queue:
> 1063
> If this number is consistently higher than 20, please see:
> https://github.com/Security-Onion-Solutions/security-onion/wiki/FAQ#why-does-sostat-show-a-high-number-of-elsa-buffers-in-queue
>
> ELSA Directory Sizes:
> 1.9T /nsm/elsa/data
> 556M /var/lib/mysql/syslog
> 8.4M /var/lib/mysql/syslog_data
>
> ELSA Index Date Range
> If you don't have at least 2 full days of logs in the Index Date Range,
> then you'll need to increase log_size_limit in /etc/elsa_node.conf.
> MIN(start) MAX(end)
> 2017-03-09 15:34:40 2017-04-19 01:42:25
>
>
> =========================================================================
> Version Information
> =========================================================================
> Ubuntu 14.04.5 LTS
> securityonion-sostat 20120722-0ubuntu0securityonion69
Will,
Have you tried running fsck to check the filesystem, and/or mysqlcheck to ensure the databases associated with Security Onion are not in an inconsistent state?
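For reference, here is a minimal sketch of those two checks, written as a dry run that only prints the commands rather than executing them; the device path and MySQL credential flags are placeholders, not values taken from your sostat:

```shell
#!/bin/sh
# Dry-run sketch of post-power-loss integrity checks.
# The device path and credentials below are placeholders -- substitute the
# real values for your install before running anything for real.

recovery_plan() {
  # fsck must run against an unmounted (or read-only) filesystem,
  # typically from a recovery boot:
  echo "fsck -f /dev/sdXN   # the device backing /nsm"
  # mysqlcheck can check and auto-repair every database, including the
  # syslog/syslog_data schemas ELSA uses:
  echo "mysqlcheck -u root -p --all-databases --auto-repair"
}

recovery_plan
```

Either way, stop the NSM services first so nothing is writing to the tables or filesystem mid-check.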
You may also want to follow the guidance provided in sostat to see if it helps:
ELSA Buffers in Queue:
1063
If this number is consistently higher than 20, please see:
https://github.com/Security-Onion-Solutions/security-onion/wiki/FAQ#why-does-sostat-show-a-high-number-of-elsa-buffers-in-queue
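To watch whether that backlog of 1063 buffers is actually draining, something like the following can be polled; the default path here is an assumption, so confirm the real buffer directory configured in /etc/elsa_node.conf:

```shell
#!/bin/sh
# count_elsa_buffers: print how many on-disk buffer files are still waiting
# to be loaded. The default path is an assumption -- check the buffer
# directory set in /etc/elsa_node.conf for the real location.

count_elsa_buffers() {
  dir=${1:-/nsm/elsa/data/elsa/tmp/buffers}
  find "$dir" -type f 2>/dev/null | wc -l
}

# On a healthy node this number should trend toward zero between runs:
count_elsa_buffers
```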
Thanks,
Wes