List:       npaci-rocks-discussion
Subject:    [Rocks-Discuss] Re: Mysql database error when removing host from database
From:       François Thieuleux <francois.thieuleux@univ-lille1.fr>
Date:       2017-07-28 7:09:29
Message-ID: 20cb7a99-c962-3778-387e-2730aab9ef14@univ-lille1.fr

On 07/28/2017 01:26 AM, Cooper, Trevor wrote:
> > On Jul 27, 2017, at 1:18 PM, François Thieuleux <francois.thieuleux@univ-lille1.fr> wrote:
> > On 07/27/2017 09:48 PM, Cooper, Trevor wrote:
> > > > On Jul 27, 2017, at 12:25 PM, François Thieuleux <francois.thieuleux@univ-lille1.fr> wrote:
> > > > On 07/27/2017 06:53 PM, Cooper, Trevor wrote:
> > > > > > On Jul 27, 2017, at 4:06 AM, François Thieuleux <francois.thieuleux@univ-lille1.fr> wrote:
> > > > > > It seems I wasn't checking the correct database (shouldn't it be
> > > > > > /var/opt/rocks/mysql/cluster)?
> > > > > > How can I proceed to check the cluster database's "nodes" table?
> > > > > Rocks uses its own custom install of MySQL in /opt/rocks, NOT the system/distro mysql-server install.
> > > > > As a general rule, all mysql'ish commands to be run against the Rocks DB will be found in /opt/rocks/mysql/bin, and the DB password is stored in /root/.rocks.my.cnf.
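> > > > > For example, to open an interactive session against that DB (a sketch
> > > > > using the same bundled client and credentials file):
> > > > > 
> > > > > # /opt/rocks/mysql/bin/mysql --defaults-extra-file=/root/.rocks.my.cnf cluster
> > > > > 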
> > > > > Try...
> > > > > 
> > > > > [root@sidewinder-fe1 ~]# /opt/rocks/mysql/bin/mysqlcheck \
> > > > >                 --defaults-extra-file=/root/.rocks.my.cnf -c cluster
> > > > > /opt/rocks/mysql/bin/mysqlcheck: Unknown OS character set 'ISO-8859-15'.
> > > > > /opt/rocks/mysql/bin/mysqlcheck: Switching to the default character set 'latin1'.
> > > > > cluster.aliases                                    OK
> > > > > cluster.appliance_routes                           OK
> > > > > cluster.appliances                                 OK
> > > > > cluster.attributes                                 OK
> > > > > cluster.boot                                       OK
> > > > > cluster.bootaction                                 OK
> > > > > cluster.bootflags                                  OK
> > > > > cluster.categories                                 OK
> > > > > cluster.catindex                                   OK
> > > > > cluster.distributions                              OK
> > > > > cluster.firewalls                                  OK
> > > > > cluster.global_routes                              OK
> > > > > cluster.memberships                                OK
> > > > > cluster.networks                                   OK
> > > > > cluster.node_rolls                                 OK
> > > > > cluster.node_routes                                OK
> > > > > cluster.nodes                                      OK
> > > > > cluster.os_routes                                  OK
> > > > > cluster.partitions                                 OK
> > > > > cluster.public_keys                                OK
> > > > > cluster.resolvechain                               OK
> > > > > cluster.rolls                                      OK
> > > > > cluster.sec_global_attributes                      OK
> > > > > cluster.sec_node_attributes                        OK
> > > > > cluster.subnets                                    OK
> > > > > cluster.vm_disks                                   OK
> > > > > cluster.vm_nodes                                   OK
> > > > > 
> > > > > -- Trevor
> > > > > 
> > > > Thanks Trevor,
> > > > 
> > > > # ls -ltr /root/.rocks.my.cnf
> > > > -r-------- 1 root root 37 Sep  4  2013 /root/.rocks.my.cnf
> > > > 
> > > > I get this:
> > > > 
> > > > # /opt/rocks/mysql/bin/mysqlcheck
> > > > --defaults-extra-file=/root/.rocks.my.cnf -c cluster
> > > > cluster.aliases
> > > > warning  : 3 clients are using or haven't closed the table properly
> > > > status   : OK
> > > > cluster.app_globals                                OK
> > > > cluster.appliance_attributes
> > > > warning  : 1 client is using or hasn't closed the table properly
> > > > status   : OK
> > > > cluster.appliance_firewall                         OK
> > > > cluster.appliance_routes                           OK
> > > > cluster.appliances
> > > > warning  : 1 client is using or hasn't closed the table properly
> > > > status   : OK
> > > > cluster.attributes                                 OK
> > > > cluster.boot
> > > > warning  : 22 clients are using or haven't closed the table properly
> > > > status   : OK
> > > > cluster.bootaction
> > > > warning  : 2 clients are using or haven't closed the table properly
> > > > status   : OK
> > > > cluster.bootflags                                  OK
> > > > cluster.categories
> > > > warning  : 1 client is using or hasn't closed the table properly
> > > > status   : OK
> > > > cluster.catindex
> > > > warning  : 4 clients are using or haven't closed the table properly
> > > > status   : OK
> > > > cluster.distributions
> > > > warning  : 1 client is using or hasn't closed the table properly
> > > > status   : OK
> > > > cluster.firewalls
> > > > warning  : 2 clients are using or haven't closed the table properly
> > > > status   : OK
> > > > cluster.global_attributes
> > > > warning  : 1 client is using or hasn't closed the table properly
> > > > status   : OK
> > > > cluster.global_firewall                            OK
> > > > cluster.global_routes
> > > > warning  : 1 client is using or hasn't closed the table properly
> > > > status   : OK
> > > > cluster.hostselections
> > > > warning  : 3 clients are using or haven't closed the table properly
> > > > status   : OK
> > > > cluster.memberships
> > > > warning  : 1 client is using or hasn't closed the table properly
> > > > status   : OK
> > > > cluster.networks
> > > > warning  : 7 clients are using or haven't closed the table properly
> > > > status   : OK
> > > > cluster.node_attributes
> > > > warning  : 5 clients are using or haven't closed the table properly
> > > > status   : OK
> > > > cluster.node_firewall                              OK
> > > > cluster.node_rolls                                 OK
> > > > cluster.node_routes
> > > > warning  : 1 client is using or hasn't closed the table properly
> > > > status   : OK
> > > > cluster.nodes
> > > > warning  : 5 clients are using or haven't closed the table properly
> > > > status   : OK
> > > > cluster.os_attributes
> > > > warning  : 1 client is using or hasn't closed the table properly
> > > > status   : OK
> > > > cluster.os_firewall                                OK
> > > > cluster.os_routes                                  OK
> > > > cluster.partitions
> > > > warning  : 10 clients are using or haven't closed the table properly
> > > > status   : OK
> > > > cluster.public_keys                                OK
> > > > cluster.resolvechain
> > > > warning  : 1 client is using or hasn't closed the table properly
> > > > status   : OK
> > > > cluster.rolls
> > > > warning  : 14 clients are using or haven't closed the table properly
> > > > status   : OK
> > > > cluster.sec_global_attributes                      OK
> > > > cluster.sec_node_attributes                        OK
> > > > cluster.subnets
> > > > warning  : 3 clients are using or haven't closed the table properly
> > > > status   : OK
> > > > cluster.vm_disks                                   OK
> > > > cluster.vm_nodes                                   OK
> > > > 
> > > > May I run the previous command with the --auto-repair option, or
> > > > --analyze first?
> > > > (I have already backed up the /var/opt/rocks/mysql/cluster folder.)
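> > > > That is, something like the following, with nothing else touching the
> > > > DB (a sketch; --auto-repair is a standard mysqlcheck flag):
> > > > 
> > > > # /opt/rocks/mysql/bin/mysqlcheck \
> > > >     --defaults-extra-file=/root/.rocks.my.cnf --auto-repair -c cluster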
> > > > 
> > > > -- 
> > > > ___
> > > > François Thieuleux
> > > > Laboratoire d'Optique Atmosphérique - Bât.P5 Bur.325
> > > > UMR8518 CNRS/Lille1
> > > > (+33) 3 20 33 61 89
> > > > 
> > > I have never used that command option or seen the problem you are showing us.
> > Me neither :-)
> > > You might want to figure out what clients (i.e. other Rocks commands) may be running or hung while trying to read from or modify the DB. Any found should probably be terminated.
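> > > A quick way to see what is connected (a sketch; SHOW FULL PROCESSLIST is
> > > standard MySQL, run here through the bundled client):
> > > 
> > > # /opt/rocks/mysql/bin/mysql --defaults-extra-file=/root/.rocks.my.cnf \
> > >     -e "SHOW FULL PROCESSLIST;"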
> > OK, I just did:
> > 
> > # /etc/init.d/mysqld stop
> > 
> > and the previous warning messages disappeared:
> > 
> > # /opt/rocks/mysql/bin/mysqlcheck
> > --defaults-extra-file=/root/.rocks.my.cnf -A
> > cluster.aliases                                    OK
> > cluster.app_globals                                OK
> > cluster.appliance_attributes                       OK
> > cluster.appliance_firewall                         OK
> > cluster.appliance_routes                           OK
> > cluster.appliances                                 OK
> > cluster.attributes                                 OK
> > cluster.boot                                       OK
> > cluster.bootaction                                 OK
> > cluster.bootflags                                  OK
> > cluster.categories                                 OK
> > cluster.catindex                                   OK
> > cluster.distributions                              OK
> > cluster.firewalls                                  OK
> > cluster.global_attributes                          OK
> > cluster.global_firewall                            OK
> > cluster.global_routes                              OK
> > cluster.hostselections                             OK
> > cluster.memberships                                OK
> > cluster.networks                                   OK
> > cluster.node_attributes                            OK
> > cluster.node_firewall                              OK
> > cluster.node_rolls                                 OK
> > cluster.node_routes                                OK
> > cluster.nodes                                      OK
> > cluster.os_attributes                              OK
> > cluster.os_firewall                                OK
> > cluster.os_routes                                  OK
> > cluster.partitions                                 OK
> > cluster.public_keys                                OK
> > cluster.resolvechain                               OK
> > cluster.rolls                                      OK
> > cluster.sec_global_attributes                      OK
> > cluster.sec_node_attributes                        OK
> > cluster.subnets                                    OK
> > cluster.vm_disks                                   OK
> > cluster.vm_nodes                                   OK
> > mysql.columns_priv                                 OK
> > mysql.db                                           OK
> > mysql.event                                        OK
> > mysql.func                                         OK
> > mysql.general_log                                  OK
> > mysql.help_category                                OK
> > mysql.help_keyword                                 OK
> > mysql.help_relation                                OK
> > mysql.help_topic                                   OK
> > mysql.host                                         OK
> > mysql.ndb_binlog_index                             OK
> > mysql.plugin                                       OK
> > mysql.proc                                         OK
> > mysql.procs_priv                                   OK
> > mysql.servers                                      OK
> > mysql.slow_log                                     OK
> > mysql.tables_priv                                  OK
> > mysql.time_zone                                    OK
> > mysql.time_zone_leap_second                        OK
> > mysql.time_zone_name                               OK
> > mysql.time_zone_transition                         OK
> > mysql.time_zone_transition_type                    OK
> > mysql.user                                         OK
> > wordpress.wp_commentmeta                           OK
> > wordpress.wp_comments                              OK
> > wordpress.wp_links                                 OK
> > wordpress.wp_options                               OK
> > wordpress.wp_postmeta                              OK
> > wordpress.wp_posts                                 OK
> > wordpress.wp_term_relationships                    OK
> > wordpress.wp_term_taxonomy                         OK
> > wordpress.wp_terms                                 OK
> > wordpress.wp_usermeta                              OK
> > wordpress.wp_users                                 OK
> > 
> > 
> > But when I try to remove another compute node, the error still persists:
> > 
> > # rocks remove host compute-0-28
> > rockscommand[3961793]: pbs: deleting node compute-0-28
> > Traceback (most recent call last):
> >   File "/opt/rocks/bin/rocks", line 301, in <module>
> >     command.runWrapper(name, args[i:])
> >   File "/opt/rocks/lib/python2.6/site-packages/rocks/commands/__init__.py", line 2194, in runWrapper
> >     self.run(self._params, self._args)
> >   File "/opt/rocks/lib/python2.6/site-packages/rocks/commands/remove/host/__init__.py", line 119, in run
> >     self.runPlugins(host)
> >   File "/opt/rocks/lib/python2.6/site-packages/rocks/commands/__init__.py", line 1937, in runPlugins
> >     plugin.run(args)
> >   File "/opt/rocks/lib/python2.6/site-packages/rocks/commands/remove/host/plugin_host.py", line 109, in run
> >     Host=mapCategoryIndex('host','%s')""" % host)
> >   File "/opt/rocks/lib/python2.6/site-packages/rocks/commands/__init__.py", line 1237, in execute
> >     return self.link.execute(command)
> >   File "/opt/rocks/lib/python2.6/site-packages/MySQLdb/cursors.py", line 174, in execute
> >     self.errorhandler(self, exc, value)
> >   File "/opt/rocks/lib/python2.6/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
> >     raise errorclass, errorvalue
> > _mysql_exceptions.OperationalError: (1728, 'Cannot load from mysql.proc. The table is probably corrupted')
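> > (Side note: MySQL error 1728 on mysql.proc classically appears when the
> > server binaries were upgraded without migrating the system tables, so
> > before assuming real corruption it may be worth trying the bundled
> > upgrade/repair tools. A sketch, assuming mysql_upgrade ships in the same
> > bin directory as the other Rocks MySQL tools:
> > 
> > # /opt/rocks/mysql/bin/mysql_upgrade --defaults-extra-file=/root/.rocks.my.cnf
> > # /opt/rocks/mysql/bin/mysql --defaults-extra-file=/root/.rocks.my.cnf \
> >     -e "CHECK TABLE mysql.proc; REPAIR TABLE mysql.proc;"
> > 
> > REPAIR TABLE only applies to MyISAM tables, which mysql.proc normally is.)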
> > 
> > Moreover (sorry in advance if these aren't linked), when I run, for
> > example, the following command, I get:
> > 
> > # rocks list host interface compute-0-25
> > /opt/rocks/lib/python2.6/site-packages/rocks/commands/__init__.py:1237:
> > Warning: Truncated incorrect DOUBLE value: '10.1.255.228'
> > return self.link.execute(command)
> > /opt/rocks/lib/python2.6/site-packages/rocks/commands/__init__.py:1237:
> > Warning: Truncated incorrect DOUBLE value: '10.10.255.228'
> > return self.link.execute(command)
> > /opt/rocks/lib/python2.6/site-packages/rocks/commands/__init__.py:1237:
> > Warning: Truncated incorrect DOUBLE value: '10.2.255.228'
> > return self.link.execute(command)
> > SUBNET  IFACE MAC                                                          IP            NETMASK     MODULE   NAME         VLAN OPTIONS CHANNEL
> > private em1   7c:d3:0a:ce:09:9a                                            10.1.255.228  255.255.0.0 -------- compute-0-25 ---- ------- -------
> > ibnet   ib0   80:00:00:03:fe:80:00:00:00:00:00:00:00:11:75:00:00:6f:1a:76  10.2.255.228  255.255.0.0 ip_ipoib compute-0-25 ---- ------- -------
> > admin   ipmi  7c:d3:0a:ce:09:9e                                            10.10.255.228 255.255.0.0 -------- ipmi-0-25    ---- ------- 1
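> > (Those "Truncated incorrect DOUBLE value" messages are MySQL warning 1292,
> > raised when a string is implicitly cast to a number in a comparison; the IP
> > strings suggest some query compares an address column against a numeric
> > value. A minimal reproduction in any MySQL client:
> > 
> > mysql> SELECT '10.1.255.228' = 0;
> > -- emits Warning 1292: Truncated incorrect DOUBLE value: '10.1.255.228')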
> > 
> > I then ran:
> > 
> > # /etc/init.d/foundation-mysql status
> > MySQL running (3716549)                                    [  OK  ]
> > # /etc/init.d/foundation-mysql restart
> > Shutting down MySQL.                                       [  OK  ]
> > Starting MySQL..                                           [  OK  ]
> > 
> > But the message still persists:
> > 
> > # rocks remove host compute-0-27
> > rockscommand[3965603]: pbs: deleting node compute-0-27
> > Traceback (most recent call last):
> >   File "/opt/rocks/bin/rocks", line 301, in <module>
> >     command.runWrapper(name, args[i:])
> >   File "/opt/rocks/lib/python2.6/site-packages/rocks/commands/__init__.py", line 2194, in runWrapper
> >     self.run(self._params, self._args)
> >   File "/opt/rocks/lib/python2.6/site-packages/rocks/commands/remove/host/__init__.py", line 119, in run
> >     self.runPlugins(host)
> >   File "/opt/rocks/lib/python2.6/site-packages/rocks/commands/__init__.py", line 1937, in runPlugins
> >     plugin.run(args)
> >   File "/opt/rocks/lib/python2.6/site-packages/rocks/commands/remove/host/plugin_host.py", line 109, in run
> >     Host=mapCategoryIndex('host','%s')""" % host)
> >   File "/opt/rocks/lib/python2.6/site-packages/rocks/commands/__init__.py", line 1237, in execute
> >     return self.link.execute(command)
> >   File "/opt/rocks/lib/python2.6/site-packages/MySQLdb/cursors.py", line 174, in execute
> >     self.errorhandler(self, exc, value)
> >   File "/opt/rocks/lib/python2.6/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
> >     raise errorclass, errorvalue
> > _mysql_exceptions.OperationalError: (1728, 'Cannot load from mysql.proc. The table is probably corrupted')
> > 
> > (And the output from: rocks list host interface compute-0-25 is unchanged)
> > 
> > F.T.
> > 
> > > If the check still finds issues after you're sure nothing is running and trying to access the DB, you could give --auto-repair a try. If that doesn't work, restart the Rocks DB (the service name is foundation-mysql) and recheck.
> > > 
> > > Assuming there isn't some underlying issue with the DB, there should be daily backups of the Rocks DB SQL checked into RCS in /var/db.
> > > For example...
> > > 
> > > [root@sidewinder-fe1 ~]# cd /var/db
> > > 
> > > [root@sidewinder-fe1 db]# ls -l
> > > total 108
> > > -rw-r--r-- 1 root root  4588 Apr 29  2001 Makefile
> > > -rw------- 1 root root 90280 Jul 27 03:24 mysql-backup-cluster
> > > drwx------ 2 root root  4096 Jul 27 03:24 RCS
> > > drwx------ 9 root root  4096 Jun 22 13:05 sudo
> > I get something slightly different:
> > 
> > # ls -l /var/db
> > total 24
> > drwxrwxrwt  2 root root 4096 Oct  6  2015 clck
> > -rw-r--r--  1 root root 4588 Apr 30  2001 Makefile
> > -rw-------  1 root root    0 Jul 27 03:35 mysql-backup-cluster
> > drwxr-xr-x  2 root root 4096 Apr 15  2014 nscd
> > drwx------  2 root root 4096 Jul 27 03:35 RCS
> > drwx------ 37 root root 4096 Jun 22 22:05 sudo
> > 
> > 
> > > [root@sidewinder-fe1 db]# head mysql-backup-cluster
> > > -- MySQL dump 10.13  Distrib 5.6.15, for Linux (x86_64)
> > > --
> > > -- Host: localhost    Database: cluster
> > > -- ------------------------------------------------------
> > > -- Server version	5.6.15
> > > 
> > > /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
> > > /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
> > > /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
> > > /*!40101 SET NAMES utf8 */;
> > > 
> > > [root@sidewinder-fe1 db]# rlog mysql-backup-cluster | head
> > > 
> > > RCS file: RCS/mysql-backup-cluster,v
> > > Working file: mysql-backup-cluster
> > > head: 1.228
> > > branch:
> > > locks: strict
> > > 	root: 1.228
> > > access list:
> > > symbolic names:
> > > keyword substitution: kv
> > > 
> > > Read the rcs man pages to learn how to use rcs commands to work with the backups if you need to fix things.
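> > > For instance, to pull an earlier revision of the dump out of RCS and
> > > reload it (a sketch; the revision number 1.227 here is hypothetical, pick
> > > a good one from the rlog output):
> > > 
> > > [root@sidewinder-fe1 db]# co -p1.227 mysql-backup-cluster > /tmp/cluster-restore.sql
> > > [root@sidewinder-fe1 db]# /opt/rocks/mysql/bin/mysql \
> > >     --defaults-extra-file=/root/.rocks.my.cnf cluster < /tmp/cluster-restore.sql
> > > 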
> > > - Trevor
> > > 
> > 
> > -- 
> > ___
> > François Thieuleux
> > Laboratoire d'Optique Atmosphérique - Bât.P5 Bur.325
> > UMR8518 CNRS/Lille1
> > (+33) 3 20 33 61 89
> François,
Trevor and all,
> 
> From the looks of the output it appears your system uses Torque/PBS, and your attempts to remove a host are triggering the Rocks plugin for Torque/PBS, which should add/remove nodes from the scheduler configuration.
Right, but when I attempt to remove a host, for example with the command
'rocks remove host compute-0-27', everything seems to be OK from the
Torque/PBS point of view, i.e. the file
/var/spool/torque/server_priv/nodes is correctly updated.
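For reference, checks along these lines on the Torque side come back as
expected after the removal (a sketch; pbsnodes ships with Torque):

# pbsnodes compute-0-27
# grep compute-0-27 /var/spool/torque/server_priv/nodes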
> 
> I don't have experience with Rocks and Torque/PBS integration but I'm sure others here may be able to chime in to help with this specific problem.
> Here are some more questions/guesses?
> 
> Are the nodes you are trying to remove from Rocks correctly set up in Torque/PBS?
In fact, my main problem dates from the last update, from CentOS 6.8 to
6.9, on our Rocks cluster (currently Rocks 6.1.1); every update from 6.3
to 6.8 had gone fine since June 2013!
Before upgrading the frontend, I was able to build a full CentOS 6.9
based image (rocks create distro) and install it on part of our compute
nodes (about half).
Everything was fine with those installed nodes, so I decided to upgrade
the frontend.

I followed the recommendations from Philip P. on this list:
http://lists.sdsc.edu/pipermail/npaci-rocks-discussion/attachments/20170522/78805d3d/attachment.html

(points 0, 1, 2, 3), but I did not do a yum upgrade, because we are
currently on Rocks 6.1.1 and I think (maybe I am wrong?) we can't easily
upgrade to Rocks 6.2 that way.

The system THEN suffered a power outage, and since the reboot I can't
install that image onto the nodes.
The nodes are no longer kickstarting: they find the PXE entry point,
anaconda starts, for about 2 seconds I see the text-mode progress dialog
for the kickstart download, and then I'm dropped to the language
selection screen for a manual install.

I first ruled out syntax problems in the
/export/rocks/install/sites/6.1/nodes/*.xml files with xmllint.
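That is, something along these lines (--noout makes xmllint parse and
report errors without printing the documents):

# xmllint --noout /export/rocks/install/sites/6.1/nodes/*.xml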
> 
> Do other Rocks commands not related to hosts and interfaces work without errors (eg. rocks list attr)?
Yes. For example:

# rocks list attr
ATTR                               VALUE                              
Kickstart_PublicHostname:          loa720.univ-lille1.fr              
Kickstart_PublicAddress:           134.206.50.2                       
Kickstart_PrivateKickstartBasedir: install                            
Kickstart_DistroDir:               /export/rocks                      
Kickstart_PrivateNetmask:          255.255.0.0                        
Info_ClusterLatlong:               N50.3 W3.14                        
Kickstart_PublicBroadcast:         134.206.255.255                    
Kickstart_PrivateDNSServers:       10.1.1.1                           
Kickstart_Langsupport:             en_US                              
Kickstart_PrivateNTPHost:          10.1.1.1                           
Kickstart_PrivateNetwork:          10.1.0.0                           
Kickstart_PublicInterface:         p5p2                               
Kickstart_PrivateHostname:         loa720                             
Info_ClusterName:                  LOACLuster                         
Server_Partitioning:               manual                             
Kickstart_PrivateAddress:          10.1.1.1                           
Kickstart_PrivateKickstartHost:    10.1.1.1                           
Kickstart_PrivateDNSDomain:        local                              
Kickstart_PrivateBroadcast:        10.1.255.255                       
Kickstart_Timezone:                Europe/Paris                       
Kickstart_PublicDNSServers:        134.206.1.4,134.206.1.15,134.206.1.3
Kickstart_Multicast:               239.62.105.154                     
Kickstart_PrivateSyslogHost:       10.1.1.1                           
Kickstart_PrivateInterface:        p5p1                               
Kickstart_PublicNTPHost:           ntp.univ-lille1.fr                 
Info_CertificateState:             NPDC                               
Kickstart_PublicGateway:           134.206.3.1                        
Info_CertificateOrganization:      Laboratoire d Optique Atmospherique
Kickstart_PublicDNSDomain:         univ-lille1.fr                     
Kickstart_PublicKickstartHost:     central.rocksclusters.org          
Kickstart_PrivateKickstartCGI:     sbin/kickstart.cgi                 
Info_CertificateLocality:          Lille                              
Kickstart_PublicNetmaskCIDR:       16                                 
Kickstart_Lang:                    en_US                              
Info_CertificateCountry:           FR                                 
Kickstart_PrivateGateway:          10.1.1.1                           
Kickstart_PrivateNetmaskCIDR:      16                                 
Kickstart_Keyboard:                us                                 
Kickstart_PublicNetwork:           134.206.0.0                        
Info_ClusterContact:               loa-server@univ-lille1.fr          
Kickstart_PublicNetmask:           255.255.0.0                        
rocks_version:                     6.1                                
rocks_version_major:               6                                  
ssh_use_dns:                       true                               
vm_mac_base_addr:                  2:32:ce:80:00:00                   
vm_mac_base_addr_mask:             ff:ff:ff:c0:00:00                  
Condor_Master:                     loa720.univ-lille1.fr              
Condor_Network:                    private                            
Condor_Daemons:                    MASTER, STARTD                     
Condor_PortLow:                    40000                              
Condor_PortHigh:                   50000                              
Condor_HostAllow:                  +                                  
Condor_PasswordAuth:               no                                 
Condor_EnableMPI:                  no                                 
port411:                           372                                
ganglia_address:                   224.0.0.3                          
Info_GoogleOTPUsers:               yes                                
Info_GoogleOTPRoot:                yes                                
tripwire_mail:                     root@loa720.univ-lille1.fr


# rocks list host attr

works fine as well.
> 
> Have you tried running your Rocks commands with debugging turned on?
Yes. For example:

# ROCKSDEBUG=true rocks list host profile

The last lines are:

</kickstart>Traceback (most recent call last):
  File "/opt/rocks/bin/rocks", line 301, in <module>
    command.runWrapper(name, args[i:])
  File "/opt/rocks/lib/python2.6/site-packages/rocks/commands/__init__.py", line 2194, in runWrapper
    self.run(self._params, self._args)
  File "/opt/rocks/lib/python2.6/site-packages/rocks/commands/list/host/profile/__init__.py", line 310, in run
    'os=%s' % self.os,
  File "/opt/rocks/lib/python2.6/site-packages/rocks/commands/__init__.py", line 1872, in command
    o.runWrapper(name, args)
  File "/opt/rocks/lib/python2.6/site-packages/rocks/commands/__init__.py", line 2194, in runWrapper
    self.run(self._params, self._args)
  File "/opt/rocks/lib/python2.6/site-packages/rocks/commands/list/host/xml/__init__.py", line 215, in run
    xml = self.command('list.node.xml', args)
  File "/opt/rocks/lib/python2.6/site-packages/rocks/commands/__init__.py", line 1872, in command
    o.runWrapper(name, args)
  File "/opt/rocks/lib/python2.6/site-packages/rocks/commands/__init__.py", line 2172, in runWrapper
    if tokens[0] == 'select':
IndexError: list index out of range

> 
> For example...
> 
> [root@sidewinder-fe1 ~]# ROCKSDEBUG=true rocks list distribution
> Database connection URL: mysql+mysqldb://root:[BEWARE_YOUR_PASSWORD_WILL_BE_HERE]@localhost/cluster?unix_socket=/var/opt/rocks/mysql/mysql.sock
> INFO:sqlalchemy.engine.base.Engine:SHOW VARIABLES LIKE 'sql_mode'
> INFO:sqlalchemy.engine.base.Engine:()
> DEBUG:sqlalchemy.engine.base.Engine:Col ('Variable_name', 'Value')
> DEBUG:sqlalchemy.engine.base.Engine:Row ('sql_mode', 'NO_ENGINE_SUBSTITUTION')
> INFO:sqlalchemy.engine.base.Engine:SELECT DATABASE()
> INFO:sqlalchemy.engine.base.Engine:()
> DEBUG:sqlalchemy.engine.base.Engine:Col ('DATABASE()',)
> DEBUG:sqlalchemy.engine.base.Engine:Row ('cluster',)
> INFO:sqlalchemy.engine.base.Engine:show collation where `Charset` = 'utf8' and `Collation` = 'utf8_bin'
> INFO:sqlalchemy.engine.base.Engine:()
> DEBUG:sqlalchemy.engine.base.Engine:Col ('Collation', 'Charset', 'Id', 'Default', 'Compiled', 'Sortlen')
> DEBUG:sqlalchemy.engine.base.Engine:Row ('utf8_bin', 'utf8', 83L, '', 'Yes', 1L)
> INFO:sqlalchemy.engine.base.Engine:SELECT CAST('test plain returns' AS CHAR(60)) AS anon_1
> INFO:sqlalchemy.engine.base.Engine:()
> INFO:sqlalchemy.engine.base.Engine:SELECT CAST('test unicode returns' AS CHAR(60)) AS anon_1
> INFO:sqlalchemy.engine.base.Engine:()
> INFO:sqlalchemy.engine.base.Engine:SELECT CAST('test collated returns' AS CHAR CHARACTER SET utf8) COLLATE utf8_bin AS anon_1
> INFO:sqlalchemy.engine.base.Engine:()
> INFO:sqlalchemy.engine.base.Engine:select name from distributions where name like '%%'
> INFO:sqlalchemy.engine.base.Engine:()
> DEBUG:sqlalchemy.engine.base.Engine:Col ('name',)
> DEBUG:sqlalchemy.engine.base.Engine:Row ('rocks-dist',)
> NAME
> rocks-dist
> 
> The output is VERY verbose but might help with figuring out the problem.
I just get this?!

# ROCKSDEBUG=true rocks list distribution
NAME     
rocks-dist

> 
> Not sure I'll be much more help... sorry.
> 
> -- Trevor
> 
> 
F.T.



-- 
___
François Thieuleux
Laboratoire d'Optique Atmosphérique - Bât.P5 Bur.325
UMR8518 CNRS/Lille1
(+33) 3 20 33 61 89

