List:       hpux-admin
Subject:    [HPADM] SUMMARY:  LTO-2 and Omniback/data protector drive setup suggestions
From:       Dan Zucker <daniz () netvision ! net ! il>
Date:       2003-11-29 19:49:08

I thank
Mike Lavery
Mike White
Tom Myers

The query:

 I have just finished setting up a new library with LTO2 drives for use
 with Omniback.

 I want to ask if anyone has suggestions for buffers, segsize and blksize
 when defining the drives to Omniback?

 At least for the next 3 months the library, although SAN-attached, will
 work only with the main media server, so I also need suggestions for
 kernel parms - if any - to adjust.


reply 1:
I would look to do the following:

1. Make sure you have the latest SCSI patches, particularly SCSI tape.
2. Latest Omniback patches. I hope you are on 4.x?
3. Increase disk agent buffers to 32.
4. Keep block size at the default.
5. For the LTO2 devices in Omniback, increase the concurrency to around 12 to get the performance. Just keep an eye on the server's resources.
6. Make sure shared memory parms are increased from the default settings.
7. Finally, if you are on a SAN, make sure EMS SCSI tape monitoring is disabled and the st_ats_enabled kernel parm is set to 0.


Reply 2:
We have been using the default values.  No changes.  Let us know if you 
find out there is a better way.  What type of library do you have?
 

Reply 3:
Since they are LTO (gen1 or gen2) the buffer sizes aren't as critical as if
you were using DLT-8000 drives.

Unless you have any Solaris clients to back up, I would set everything
towards the top end.  On my LTO (gen1 or gen2) "devices", I set block size
to 256K, leave the segment size at the default of 2000 and raise disk agent
buffers to 20.  If all the clients and the media server have plenty of RAM,
you could push the DA buffers all the way up to 32 and kick block size up to
512K or 1024K.

I've seen backup speeds up to 59GB/hr using my settings, although 50GB/hr
seems to be typical for high-end clients like RPxxxx servers.

Note: For Solaris clients, at least with OB/DP 4.10, I haven't been able to
make it work with block size larger than 128K or DA buffers higher than 6-8.
If you exceed some threshold, the Solaris clients will randomly fail
reporting RPC errors, mostly on Full sessions.
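For context, the GB/hr figures quoted above work out to modest per-second rates (simple integer arithmetic, taking 1 GB = 1024 MB as was conventional at the time):

```shell
# Convert the quoted backup rates from GB/hr to MB/s (1 GB = 1024 MB).
echo "59 GB/hr = $((59 * 1024 / 3600)) MB/s"   # best case reported
echo "50 GB/hr = $((50 * 1024 / 3600)) MB/s"   # typical high-end client
# LTO-2 is rated around 35 MB/s native, so a single stream at these
# rates cannot keep one drive busy by itself - hence the advice to
# raise concurrency on the device.
```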

What I have done so far.........

The setup as of today:
One 2 Gb Fibre Channel HBA for the 7 drives.  Two HBAs are used by the
    DLT7000s on this machine, and one HBA on a different machine is also
    used by the DLT7000s.
(I have 9 DLT7000 drives on the media/cell server.)

OB4.1 is limited to 5 DA per MA.  DP5.1 permits 32 DA per MA.

I set the drives to 32 buffers, 64 blksize, 2000 segments.
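As a rough sanity check on reply 1's shared-memory advice, the media agent's buffer footprint scales with drives x disk agent buffers x block size. A back-of-the-envelope in shell, assuming the "64 blksize" figure above is in KB (the unit is not stated in the original):

```shell
# Rough media-agent buffer footprint: drives x DA buffers x block size.
# Assumes blksize is 64 KB per buffer (an assumption, not confirmed above).
DRIVES=7; BUFFERS=32; BLKSIZE_KB=64
echo "approx $((DRIVES * BUFFERS * BLKSIZE_KB)) KB of buffer memory"
# At reply 3's aggressive settings (256 KB blocks) the same 7 drives
# would want roughly four times as much - which is why the default
# shared-memory parms may need raising.
```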

I have found that some machines backed up via LAN show only a slight
change in total backup time. Some file systems are running at 200%
of the speed of previous backups; others show only a 1-2% improvement.

Until now I have not used 'compress' on local file systems, but that will be
my next test.

If I find a magic bullet, I will send a second summary. I am attempting
to set up a test machine with DP5.1 at the DRP site, but it means spending
at least a day in the desert.

Hopefully your mileage will vary.

DZ



--
             ---> Please post QUESTIONS and SUMMARIES only!! <---
        To subscribe/unsubscribe to this list, contact majordomo@dutchworks.nl
       Name: hpux-admin@dutchworks.nl     Owner: owner-hpux-admin@dutchworks.nl

 Archives:  ftp.dutchworks.nl/pub/digests/hpux-admin       (FTP, browse only)
            http://www.dutchworks.nl/htbin/hpsysadmin   (Web, browse & search)

