List:       lustre-discuss
Subject:    [Lustre-discuss] usage by user, "du -s", and lack of quotas
From:       Evan.Felix () pnl ! gov (Felix, Evan J)
Date:       2006-05-19 7:36:50
Message-ID: 2A474706F1CA9B4C9170EF8FB2031BAB012EACE9 () pnlmse27 ! pnl ! gov

OK, sorry for the delay; it took a while to get this past the IP system.
Attached are the two files needed for my parallel-Python disk usage
program.  It is also designed to expire old files, but only if you tell
it to.
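
A minimal sketch of the age-based expiration idea, assuming an mtime
cutoff in days and an explicit delete flag; the function name and the
--delete flag are illustrative, not taken from evanlib.py or dude:

import os
import sys
import time

def expire_old_files(root, max_age_days, really_delete=False):
    # Walk `root` and report (or, only when asked, delete) files whose
    # mtime is older than `max_age_days`.
    cutoff = time.time() - max_age_days * 86400
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue                # vanished or unreadable; skip it
            if st.st_mtime < cutoff:
                if really_delete:
                    os.unlink(path)
                else:
                    print("would expire: " + path)

if __name__ == "__main__":
    # e.g.:  expire.py /scratch 90 --delete
    root, days = sys.argv[1], int(sys.argv[2])
    expire_old_files(root, days, really_delete="--delete" in sys.argv[3:])

Run without --delete it only reports candidates, which matches the
"only if you tell it" behaviour described above.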

Send any comments or improvements to the list.

Evan

-----Original Message-----
From: Terry Heidelberg [mailto:th@llnl.gov] 
Sent: Monday, April 10, 2006 2:18 PM
To: Felix, Evan J
Subject: Re: [Lustre-discuss] usage by user, "du -s", and lack of quotas

Hi Evan,
This program sounds interesting.    Is there any chance you
could make it available to LLNL?   

It takes us many hours to scan our filesystems using our current
locally written, non-threaded tools.  How many inodes were in use on
the 300TB filesystem you scanned in a few hours?  Maybe inodes per hour
would be a useful metric for this situation?

Thanks,
Terry

Felix, Evan J wrote:

>Here at PNNL we have two fairly large file systems, and we have
>created a multi-threaded directory scanner that gives us an accounting
>of the number of files, directories, and total file size that each
>userid owns.  It still takes a few hours to run on our 300TB system,
>but on the smaller 54TB system (5-10% full) it takes less than 20
>minutes.  Both programs have specific uses for their file system, such
>as deleting old files (scratch file system), modifying a database, or
>checking stripe information.
>The parallel nature of the system makes it fairly quick.  Essentially 
>each thread does this:
>
>While there are dirs on the stack:
>  pop a directory off the stack
>  scan the directory; for each entry:
>    directory: collect statistics, push it on the stack
>    regular file: collect statistics, do file-specific work
>    anything else: collect statistics
>  merge thread-private statistics into the global stats
>
>It works well.  We have tried thread counts from 1 to 32; going past
>32 actually slows things down, and one of the two does not do well
>beyond 16.
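
A minimal sketch of the per-thread loop just described, assuming a
shared queue of directories and per-uid counters merged into global
totals under a lock; the names (worker, merge, scan, NTHREADS) are
illustrative, and this is not the actual code from evanlib.py or dude:

import os
import stat
import threading
try:
    import queue              # Python 3
except ImportError:
    import Queue as queue     # Python 2

NTHREADS = 16                 # 16-32 worked best in the tests noted above

work = queue.Queue()          # directories waiting to be scanned
lock = threading.Lock()
global_stats = {}             # uid -> [file count, dir count, total bytes]

def merge(local):
    # Save thread-private statistics into the global totals.
    with lock:
        for uid, (files, dirs, size) in local.items():
            g = global_stats.setdefault(uid, [0, 0, 0])
            g[0] += files
            g[1] += dirs
            g[2] += size

def worker():
    while True:
        d = work.get()                  # pop a directory off the shared stack
        local = {}
        try:
            names = os.listdir(d)
        except OSError:
            names = []
        for name in names:
            path = os.path.join(d, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue
            s = local.setdefault(st.st_uid, [0, 0, 0])
            if stat.S_ISDIR(st.st_mode):
                s[1] += 1
                work.put(path)          # push subdirectory for some thread
            else:
                s[0] += 1
                s[2] += st.st_size      # file-specific work would go here
        merge(local)
        work.task_done()

def scan(root):
    work.put(root)
    for _ in range(NTHREADS):
        t = threading.Thread(target=worker)
        t.daemon = True
        t.start()
    work.join()                         # returns once every directory is done
    return global_stats

Calling scan("/misc/el1") would then return per-uid totals that can be
printed much like a per-user "du -s".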
>
>Evan Felix
>Pacific Northwest National Laboratory
>
>
>-----Original Message-----
>From: lustre-discuss-bounces@clusterfs.com
>[mailto:lustre-discuss-bounces@clusterfs.com] On Behalf Of Nathan 
>Dauchy
>Sent: Thursday, April 06, 2006 8:08 AM
>To: lustre-discuss@clusterfs.com
>Subject: [Lustre-discuss] usage by user, "du -s", and lack of quotas
>
>Greetings,
>
>Due to the lack of quota support in Lustre, we have to periodically
>monitor filesystem usage by user.  Fortunately, we can do this by
>project directory and don't have to run "find" on the whole thing.
>Unfortunately, even running "du -s /el1/projects/*" takes a LONG time.
>
>Does anyone have suggestions for alternative ways to implement quotas?
>Is there a more efficient way in Lustre to determine the filesystem
>utilization of a directory?
>
>We are currently running lustre-1.4.4, linux-2.6.5-7.191, on SuSE 9.1 
>for x86_64, with 4 OSS nodes.  We are in the middle of upgrading to 
>lustre-1.4.6.1.  The filesystem is reasonably full, but not huge by my 
>understanding of what Lustre is capable of:
>
># df -Ph /el*
>Filesystem            Size  Used Avail Use% Mounted on
>l0009-m:/mds0/client0   12T  8.4T  2.6T  77% /misc/el0
>l0009-m:/mds1/client1   12T  9.6T  1.4T  88% /misc/el1
>
>Thanks for any suggestions!
>
>-Nathan
>_______________________________________________
>Lustre-discuss mailing list
>Lustre-discuss@clusterfs.com
>https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
>  
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: evanlib.py
Type: application/octet-stream
Size: 2262 bytes
Desc: evanlib.py
Url : http://mail.clusterfs.com/pipermail/lustre-discuss/attachments/20060510/8d9df55a/evanlib.obj
-------------- next part --------------
A non-text attachment was scrubbed...
Name: dude
Type: application/octet-stream
Size: 14615 bytes
Desc: dude
Url : http://mail.clusterfs.com/pipermail/lustre-discuss/attachments/20060510/8d9df55a/dude.obj
