
List:       evms-devel
Subject:    [Evms-devel] Fwd: Max number of supported EVMS volumes
From:       Srini <v.vvsrini@gmail.com>
Date:       2007-08-08 16:34:54
Message-ID: <25b3869e0708080922i727bd6dbg8bd8dbad7940b99b@mail.gmail.com>

Hi all,

We went ahead and debugged the issue further; our findings are below.
Since we have not yet found a solution, we would greatly appreciate
any input on why the MD region goes corrupt when close to 230 regions
are created. We have been grappling with this problem for the last
two weeks.

In the following environment (Linux kernel 2.6.20-rc6, EVMS 2.5.4,
Intel hardware / P4 D processor / 2 GB RAM), we get the error message
"MDRaid0RegMgr: RAID0 region md/md0 is corrupt.  Too many disks (8)
are active.  Whereas the number of raid disks is 4."

We captured the evms-engine 'details' debug logs and have attached
them to this email. evms-engine.good.log is the debug log for a
correctly created region, while evms-engine-bad.log is the log for
the corrupted one.

In the log extract below, the error comes from the sb0_analyze_sb
function:

MDRaid1RegMgr: md_discover_volumes: Searching for MD Super Blocks.
MDRaid1RegMgr: raid1_discover: PV discovery complete.
MDRaid1RegMgr: raid1_discover: RAID1 volume discovery complete.
MDRaid0RegMgr: md_discover_volumes: Searching for MD Super Blocks.
MDRaid0RegMgr: raid0_discover: PV discovery complete.
MDRaid0RegMgr: sb0_analyze_sb: MD region md/md0 is corrupt
MDRaid0RegMgr: raid0_discover: RAID0 volume discovery complete.
MDRaid5RegMgr: md_discover_volumes: Searching for MD Super Blocks.
MDRaid5RegMgr: raid5_discover: PV discovery complete.
MDRaid5RegMgr: raid5_discover: RAID4/5 volume discovery complete.
MD Multipath: md_discover_volumes: Searching for MD Super Blocks.
LvmRegMgr: lvm_discover_volume_groups:Searching for PVs in the object list.
LvmRegMgr: lvm_discover_volume_groups:Container discovery complete.

However, while there are actually only 4 disks, the extended info for
this region reports a total of 8: each of the 4 members appears
twice, once as a device-mapper node (dm-0 through dm-3) and once as a
raw partition (sda1 through sdd1). This duplication appears to be
what triggers the error (a cross-check sketch follows the extended
info output below). The command input and output in this situation is:

"q:ei,md/md0" evms command output:

EVMS Command Line Interpreter Version 2.5.4

MDRaid0RegMgr: RAID0 region md/md0 is corrupt.  Too many disks (8) are
active.  Whereas the number of raid disks is 4.
EVMS:
Field Name: name
Title: Name
Description: MD volume name
The value of this field is: md/md0

Field Name: state
Title: State
Description: State of the MD region
The value of this field is: Discovered, Corrupt

Field Name: personality
Title: Personality
Description: MD personality
The value of this field is: RAID0

Field Name: superblock
Title: Working SuperBlock
Description: Copy of SuperBlock that is most up to date
The value of this field is: (null)
This field has additional information available.  To access this
additional information, specify the name of this field as part of the
extended info query command.

Field Name: nr_disks
Title: Number of disks
Description: Number of disks found by EVMS that comprise this volume
The value of this field is: 8

Field Name: child_object0
Title: Disk 0
Description: Disk that belongs to this raid volume set
The value of this field is: dm-0
This field has additional information available.  To access this
additional information, specify the name of this field as part of the
extended info query command.

Field Name: child_object0
Title: Disk 0
Description: Disk that belongs to this raid volume set
The value of this field is: sdd1
This field has additional information available.  To access this
additional information, specify the name of this field as part of the
extended info query command.

Field Name: child_object1
Title: Disk 1
Description: Disk that belongs to this raid volume set
The value of this field is: dm-1
This field has additional information available.  To access this
additional information, specify the name of this field as part of the
extended info query command.

Field Name: child_object1
Title: Disk 1
Description: Disk that belongs to this raid volume set
The value of this field is: sdc1
This field has additional information available.  To access this
additional information, specify the name of this field as part of the
extended info query command.

Field Name: child_object2
Title: Disk 2
Description: Disk that belongs to this raid volume set
The value of this field is: dm-2
This field has additional information available.  To access this
additional information, specify the name of this field as part of the
extended info query command.

Field Name: child_object2
Title: Disk 2
Description: Disk that belongs to this raid volume set
The value of this field is: sdb1
This field has additional information available.  To access this
additional information, specify the name of this field as part of the
extended info query command.

Field Name: child_object3
Title: Disk 3
Description: Disk that belongs to this raid volume set
The value of this field is: dm-3
This field has additional information available.  To access this
additional information, specify the name of this field as part of the
extended info query command.

Field Name: child_object3
Title: Disk 3
Description: Disk that belongs to this raid volume set
The value of this field is: sda1
This field has additional information available.  To access this
additional information, specify the name of this field as part of the
extended info query command.

------------End of Extended info command --------------------
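
As a way to cross-check what is actually on disk against what EVMS
discovered, the sketch below uses mdadm and dmsetup (both assumed to
be installed; the device names are taken from the extended info
above, so adjust as needed):

    # Examine the on-disk MD 0.90 superblock of each member partition.
    # "Raid Devices" should report 4 for md0.
    for part in /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1; do
        mdadm --examine "$part"
    done

    # List the device-mapper nodes and their tables. If dm-0 through
    # dm-3 map onto the same four partitions, EVMS is likely
    # discovering each member twice (once via the raw partition, once
    # via the dm node), which would explain nr_disks = 8 against
    # raid_disks = 4.
    dmsetup ls
    dmsetup table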

Can someone help us understand why this MD region gets corrupted when
we create this many segments?

Thanks in advance for all your inputs.

Srini

---------- Forwarded message ----------
From: Srini <v.vvsrini@gmail.com>
Date: Aug 2, 2007 10:03 PM
Subject: Max number of supported EVMS volumes
To: evms-devel@lists.sourceforge.net


Hi all,

We are trying to create more than 230 volumes on an EVMS RAID 0 device.
Here are the details:

Linux kernel 2.6.20-rc6
EVMS 2.5.4
Intel hardware / P4 D processor / 2 GB RAM

The RAID 0 region has been created with 4 disks. When we try to
create around 230 volumes, the following message gets printed:

"MDRaid0RegMgr: RAID0 region md/md0 is corrupt.  Too many disks (8) are
active.  Whereas the number of raid disks is 4."

The following EVMS commands have been used to create the region / volume:

Region creation:
       Create:Region,LVM2={name=Regionname,size=1GB},lvm2/<container name>/Freespace

Volume creation:
       Create:volume,lvm2/<container name>/Regionname,name=volume_name
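
Since creating around 230 volumes by hand is impractical, here is a
sketch of one way to script the creation (region, volume, and
container names such as "mycontainer" are placeholders; the -f
command-file option is our understanding of the EVMS CLI, so please
check man evms):

    # Generate one Create command pair per volume.
    i=1
    while [ "$i" -le 230 ]; do
        echo "Create:Region,LVM2={name=region$i,size=1GB},lvm2/mycontainer/Freespace"
        echo "Create:volume,lvm2/mycontainer/region$i,name=volume$i"
        i=$((i + 1))
    done > evms_commands.txt

    # Feed the generated command file to the EVMS CLI.
    evms -f evms_commands.txt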

Is there a maximum number of volumes that can be created in EVMS? If
so, please suggest what we need to do to increase the number of
supported volumes.

Thanks in advance for your help,

Srini
