
List:       aix-l
Subject:    Re: What about GPFS
From:       Ferenc Gyurcsan <fgyurcsa () AVAILANT ! COM>
Date:       2002-09-30 23:42:00

HACMP/ES 4.5 is supported with GPFS. Essentially, GPFS uses node-bound
service labels for its communication, and those labels have to be
configured in HACMP.
There is a Redbook on HACMP and GPFS, and you can also find information
in the HACMP/ES 4.5 guides.
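For reference, the basic setup sequence looks roughly like the following.
This is a sketch only: the command names (mmcrcluster, mmcrfs, mmstartup)
are real GPFS utilities, but the exact flags and the disk-preparation step
vary by GPFS release, and the node names and input files here are
hypothetical -- verify against the Redbook and the GPFS manuals for your
level.

```shell
# Sketch of adding GPFS on top of an existing HACMP/ES cluster.
# Node names (nodeA, nodeB) and input files are hypothetical.

# 1. Define the GPFS cluster over the HACMP nodes
#    (/tmp/gpfs.nodes lists one node per line)
mmcrcluster -n /tmp/gpfs.nodes -p nodeA -s nodeB

# 2. Prepare the shared (e.g. SSA) disks for GPFS use;
#    the command for this step depends on the GPFS release,
#    driven by a disk descriptor file such as /tmp/gpfs.disks

# 3. Create the shared file system, start GPFS, and mount it on all nodes
mmcrfs /gpfsarch archfs -F /tmp/gpfs.disks -A yes
mmstartup -a
mount /gpfsarch   # run on each node
```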

--Ferenc

-----Original Message-----
From: B.N.Sarma [mailto:bsarma@BASIT.COM]
Sent: Monday, September 30, 2002 3:45 PM
To: aix-l@Princeton.EDU
Subject: What about GPFS


Greetings,

What about GPFS? Here is the link:
http://www-1.ibm.com/servers/eserver/pseries/software/whitepapers/gpfs_primer.html

Does anyone have experience with this? What do I need to set it up? We
have SSA disks and HACMP/ES.

Regards & Thanks
BN

"B.N.Sarma" wrote:

> Greetings,
>
> We have IBM RS/6000 servers on AIX 4.3.3 and a 2-node HACMP/ES cluster
> running Oracle 8.1.7 OPS.
>
> I have set up the Oracle database on raw devices, and everything is
> fine. I want to set up a shared file system for archived redo logs, so
> that the Oracle instance on each node can archive its redo logs to one
> shared file system.
> A shared FS would make backups, and recovery after an Oracle instance
> crash, easier.
>
> But IBM tech support informed us that it is not possible to set up a
> shared file system that both nodes can read and write simultaneously.
> Does HANFS need a separate license, or is it a different product?
>
> The HACMP documentation also talks about HANFS, which is limited to 2
> nodes; that limit is fine for us, but I want to know the pros and cons
> of such a setup. Oracle 8.1.7 can also write a copy of the archived
> logs to remote servers.
>
> I want to weigh both options.
>
> I appreciate your suggestions and comments.
>
> Regards & Thanks
> BN
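The Oracle 8.1.7 remote-archiving feature mentioned above is configured
through the LOG_ARCHIVE_DEST_n init.ora parameters (Enterprise Edition,
which OPS requires). A rough sketch, where the archive directory and the
TNS service name are hypothetical:

```
# init.ora sketch (Oracle 8.1.7 EE / OPS); paths and service name are
# hypothetical placeholders.
log_archive_start  = true
log_archive_format = arch_%t_%s.arc
log_archive_dest_1 = 'LOCATION=/u01/oradata/arch MANDATORY'
log_archive_dest_2 = 'SERVICE=standby_srv OPTIONAL REOPEN=60'
```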