
List:       lustre-announce
Subject:    Re: [Lustre-discuss] compiling lustre
From:       cliff white <cliffw@clusterfs.com>
Date:       2006-01-19 17:50:18
Message-ID: 43CFD15A.90908@clusterfs.com

Geoffrey Chisnall wrote:
> Thanks for the information, guys  :-)
> 
> I just would like to know two more things....
> What exactly is the MDS (metadata server), and
> what is an OST (object storage target)?
> 
> Is the MDS kind of like the filesystem (root /),
> and the OST the partition where the actual data that you will be "sharing"
> lives, similar to an NFS export?

Somewhat, but not quite.
Metadata (http://en.wikipedia.org/wiki/Metadata_(computing)) is
'data about data'. In a filesystem, metadata includes the directory 
structure, timestamps, mode bits, and a few other things.

The object store holds the actual file data. With Lustre, multiple servers
can provide object storage targets (OSTs), which are combined into a single 
filesystem image.
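
For example, from a client that has already mounted the filesystem (using the
/mnt/lustre mount point from the example config quoted below), a rough sketch
of what that aggregation looks like:

# one mount point whose size is roughly the sum of all the OSTs
df -h /mnt/lustre
# per-OST usage breakdown, if your copy of the lfs utility supports it
lfs df /mnt/lustre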

Another way of looking at this: when you create, open, or destroy a 
file, you affect metadata. When you are reading from or writing to the file,
you are using the object store.
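
A rough illustration with ordinary commands (the /mnt/lustre path is just
the example mount point from the config below):

# create, stat, rename, unlink - these are metadata operations (MDS)
touch /mnt/lustre/foo
stat /mnt/lustre/foo
mv /mnt/lustre/foo /mnt/lustre/bar
rm /mnt/lustre/bar
# reading and writing file contents goes to the object store (OSTs)
dd if=/dev/zero of=/mnt/lustre/bigfile bs=1M count=10
cat /mnt/lustre/bigfile > /dev/null
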
hope this helps
cliffw


> 
> 
> thanks again!
> 
> 
> 
> cliff white wrote:
> 
>> Geoffrey Chisnall wrote:
>>
>>>
>>> I have a local.sh which contains this...
>>> <snip>
>>> #!/bin/sh
>>>
>>> # local.sh
>>>
>>> # Create node
>>> rm -f local.xml
>>> lmc -m local.xml --add node --node localhost
>>> lmc -m local.xml --add net --node localhost --nid localhost --nettype \
>>> tcp
>>>
>>> # Configure MDS
>>> lmc -m local.xml --format --add mds --node localhost --mds mds-test \
>>> --fstype ldiskfs --dev /tmp/mds-test --size 50000
>>>
>>> # Configure OSTs
>>> lmc -m local.xml --add lov --lov lov-test --mds mds-test --stripe_sz \
>>> 1048576 --stripe_cnt 0 --stripe_pattern 0
>>> lmc -m local.xml --add ost --node localhost --lov lov-test --ost \
>>> ost1-test --fstype ldiskfs --dev /tmp/ost1 --size 100000
>>> lmc -m local.xml --add ost --node localhost --lov lov-test --ost \
>>> ost2-test --fstype ldiskfs --dev /tmp/ost2-test --size 100000
>>>
>>> # Configure client
>>> lmc -m local.xml --add mtpt --node localhost --path /mnt/lustre --mds \
>>> mds-test --lov lov-test
>>> </snip>
>>>
>>> It makes a /mnt/lustre with only 190MB (where does that number come from?)
>>>
>>>
>>> On the machine I have set up a separate partition, /dev/hda4, just for 
>>> Lustre... now I want to mount Lustre on that partition.
>>> So I have tried changing --dev /tmp/ost1 to --dev /dev/hda4, 
>>> but to no avail.
>>>
>>> Even with the current setup (local.sh) I tried mounting the Lustre 
>>> filesystem with this command:
>>> [root@test13 lustre]# mount -t lustre localhost:/mds-test/client 
>>> /mnt/test
>>> localhost:/mds-test/client: mount(/mnt/test, mount.lustre) failed: 
>>> Input/output error
>>>
>>> I'm not sure what I'm doing wrong...
>>
>>
>>
>>>
>>> thanks!
>>>
>>
>> First, you need either two machines with one partition each, or one 
>> machine with two partitions: one partition for the MDS, one (or more) for 
>> the OST. Then you need another node to be the client, which will mount the
>> filesystem. The client is _not_ named 'client'; we use a generic label, 
>> which means you can use any number of clients without altering the 
>> config. When you use real disks, do not use the --size parameter; we
>> want the whole partition.
>>
>> So: nodeA (mds) /dev/sda1 - supplies fs metadata
>>     nodeB (ost) /dev/sda3 - supplies fs data
>>     nodeC (client) - mounts the filesystem
>>
>> Then,
>>
>> lmc -m local.xml --add node --node nodeA
>> lmc -m local.xml --add node --node nodeB
>> lmc -m local.xml --add node --node client
>>
>> lmc -m local.xml --add net --node nodeA --nid nodeA --nettype tcp
>> lmc -m local.xml --add net --node nodeB --nid nodeB --nettype tcp
>> # generic client
>> lmc -m local.xml --add net --node client --nid '*' --nettype tcp
>>
>>  # Configure MDS
>>  lmc -m local.xml --format --add mds --node nodeA --mds mdsA \
>>  --fstype ldiskfs --dev /dev/sda1
>>
>>  # Configure LOV
>> lmc -m local.xml --add lov --lov lov-test --mds mdsA \
>> --stripe_sz 1048576 --stripe_cnt 0 --stripe_pattern 0
>>
>> # Configure OST
>> lmc -m local.xml --add ost --node nodeB --lov lov-test \
>> --ost ost1-test --fstype ldiskfs --dev /dev/sda3
>>
>> # Configure mountpoint
>> lmc -m local.xml --add mtpt --node client --path /mnt/lustre \
>> --mds mdsA --lov lov-test
>>
>> To start Lustre (first time only) on the OST:
>> lconf --reformat local.xml
>> Normal OST start:
>> lconf local.xml
>> First-time start of the MDS:
>> lconf --reformat local.xml
>> Normal MDS start:
>> lconf local.xml
>>
>> After both the MDS and OST are started, you mount the filesystem on
>> the client node like this (remember, the client node does not have to be
>> named 'client'; that is just the generic label from the config):
>> mount -t lustre nodeA:/mdsA/client /mnt/lustre
>>
>> hope this helps
>> cliffw
>>
>> [snip]
>>
>>
> 

_______________________________________________
Lustre-discuss mailing list
Lustre-discuss@lists.clusterfs.com
https://lists.clusterfs.com/mailman/listinfo/lustre-discuss