List: openldap-technical
Subject: Re: Multimaster ldap related questions
From: Jarbas Peixoto Júnior <jarbas.junior@gmail.com>
Date: 2011-03-25 20:13:38
Message-ID: AANLkTimo8FLumrRP2PTTQrExit3GQh2Y1ms02zvXqy81@mail.gmail.com
2011/3/25 Cannady, Mike <mike.cannady@htcinc.net>:
>
> > -----Original Message-----
> > From: Buchan Milne [mailto:bgmilne@staff.telkomsa.net]
> > Sent: Friday, March 25, 2011 6:17 AM
> > To: Cannady, Mike
> > Cc: openldap-technical@openldap.org
> > Subject: Re: Multimaster ldap related questions
> >
> > ----- "Mike Cannady" <mike.cannady@htcinc.net> wrote:
> >
> > > I have implemented a multi-master, two-node LDAP setup with OpenLDAP
> > > 2.4.22 and Berkeley DB 4.8.26 on Red Hat Enterprise 5.4, with
> > > several read-only replicas off of the masters.
> > >
> > > I need to add several optional attributes to a schema, and I should
> > > probably upgrade to 2.4.24 as well. If this were a single-master
> > > server it would be easy: slapcat the database, update the software,
> > > change the schema, slapadd the database back, and resume slapd.
> >
> > Why would you need to slapcat/slapadd to "add several optional
> > attributes"?
>
> While testing the two multi-master nodes, I made identical changes
> (added an optional attribute to a schema) by stopping the two slapd
> daemons, changing the schema, and starting the daemons again. After a
> lot of adding and deleting of entries, it was apparent that something
> was wrong with the data and the syncs. I stopped both daemons, did a
> dump and restore on one, purged the database on the other, and started
> them back up. I didn't have any problems after that. I know I didn't
> have issues prior to the changes, so I assumed the S.O.P. for schema
> changes is dump, change, restore.
>
> Are you indicating that isn't the case? What about newly defined
> required attributes in a schema, or an attribute that was optional and
> is now required, where all instances already have the optional value
> specified?
>
> > > I'm not sure how to do that with multi-master. One reason for
> > > using multi-master was that if one master was down, the other
> > > would keep running. One should be able to upgrade one server, have
> > > it catch up with the changes the other master made while the first
> > > master was down, and then repeat for the second master.
> >
> > Well, it would apply if you weren't modifying data offline on the 1st
> > master.
> >
> > > Is this possible? Has anyone done this, and how was it done?
> > >
> > > I know that in the near future, a high-level branch of my DIT will
> > > be purged and bulk reloaded.
> >
> > I can't think of a strategy where a bulk load will have neither:
> > - write downtime
> > - inconsistency (changes made in the window between the bulk
> >   generation and the startup of the server after import will be lost)
> >
> > You aren't clear which of these you want/prefer/require.
>
> I want to minimize the time that the data in the store is unavailable.
> If the delete and bulk load take 10 hours to do online, then the
> service is effectively down for 10 hours. The offline version (dump,
> delete what is to be deleted, slapadd the results, then slapadd the
> new info) takes only 30-60 minutes, so the offline method is the one I
> would choose based on time. The amount of time was my next question:
>
> > > I have tested the load with a test setup of multi-master LDAP. If
> > > I do it via ldapadd, it takes over 6 hours to load. With slapadd
> > > (and slapd down) it only takes 25 minutes, plus the time for the
> > > other master to get up-to-date.
> >
> > What is tool-threads set to? Which interesting slapadd options (e.g.
> > -q) did you use?
>
> Tool-threads is not specified, so I guess it's one. The test box is a
> single hyperthreaded CPU.
>
> Slapadd command (for master #1):
> slapadd -c -f ldapfile -n 1 -S 001 -w
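For reference, the two knobs Buchan is asking about would look roughly
like this (an illustrative sketch, not taken from Mike's config).
tool-threads is a global slapd.conf directive controlling how many
threads slapadd may use for indexing, and -q puts slapadd in "quick"
mode, skipping consistency checks, so it should only be fed an LDIF
you trust:

```
# slapd.conf -- illustrative value; the default is 1
tool-threads  4

# slapadd quick mode; -l names the input LDIF
# (-f, by contrast, names the slapd config file)
slapadd -q -w -n 1 -S 001 -l backup.ldif
```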
>
> > > Is there any way that I can speed up the update with ldapadd?
> >
> > ldapadd will never be as fast as slapadd.
>
> Granted. What I'm asking is whether there is a way to speed up slapd
> for the duration of the mass deletes and bulk loads. In normal
> circumstances the syncs and such are fine for normal processing, but
> for bulk changes I would like the daemon to run in such a way that
> syncs are not done and everything is done in memory. My tests seem to
> indicate that disk I/O is the bottleneck. I know that if there is a
> failure during this time the database may be corrupted, but that is
> acceptable for the duration of the bulk change.

Hello,

I have a machine on which restoring a base LDIF with slapadd was very
time-consuming. My problem was I/O (a very slow disk).

I resolved it with a small shell script:

#!/bin/bash

/etc/init.d/slapd stop
slapcat -l /tmp/backup.ldif

dir_ldap="/var/lib/ldap"

rm -f /var/lib/ldap/{alock,__db.*,*bdb,log.*}

mv $dir_ldap $dir_ldap.real
mkdir $dir_ldap
mount -t tmpfs tmpfs $dir_ldap -o nr_inodes=3000000,size=4G

cp $dir_ldap.real/DB_CONFIG $dir_ldap

slapadd -c -q -w -b "dc=example,dc=com" -l /tmp/backup.ldif
rsync -a --delete $dir_ldap/ $dir_ldap.real/
umount $dir_ldap
rmdir $dir_ldap
mv $dir_ldap.real $dir_ldap
chown openldap:openldap $dir_ldap -R

/etc/init.d/slapd start

With this, there is no disk I/O during the slapadd, because all the
files are in memory. The time difference with tmpfs is striking.

Regards,
Jarbas
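One guard worth adding to a script like this (a sketch; the function
name is made up, and the 4G limit matches the size= option used in the
mount above): if the database is larger than the tmpfs, the slapadd
will fail partway through with ENOSPC, so check the size first:

```shell
# Sketch: succeed only if the database directory fits in the 4G tmpfs
# used above. du -sb (byte totals) is GNU coreutils.
fits_in_tmpfs() {
    local db_bytes tmpfs_bytes
    db_bytes=$(du -sb "$1" | cut -f1)
    tmpfs_bytes=$((4 * 1024 * 1024 * 1024))
    [ "$db_bytes" -lt "$tmpfs_bytes" ]
}

# usage, before the mv/mount:  fits_in_tmpfs "$dir_ldap" || exit 1
```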
>
> > > I have pieces of my slapd.conf for the 1st master at the end of
> > > this email.
> > >
> > > Slapadd has two options that appear to be needed when dealing with
> > > multi-master or replicated nodes. The first is the "-S sid"
> > > option, the second is "-w". I'm a little confused about what is
> > > used where. If you are doing a dump and restore operation
> > > (slapcat, delete database, slapadd), the only option you need is
> > > "-w"? If you are adding new entries offline, do both options need
> > > to be specified?
> >
> > Adding, *or* modifying.
>
> I don't understand this answer.
>
> > > Is there a multi-master best practice guide somewhere?
> >
> > A good start is to never lie to slapd. If you have changed the
> > contents of an entry, the entryCSN should not be retained.
> >
> > I also prefer to avoid non-restore bulk-loading.
>
> Me too, but if it is a decision of 10 hours vs. 1 hour, the 1 hour
> will win.
>
> > Regards,
> > Buchan
>
> Thanks,
>
> Mike
>
> **********************************************************************
> HTC Disclaimer: The information contained in this message may be
> privileged and confidential and protected from disclosure. If the
> reader of this message is not the intended recipient, or an employee
> or agent responsible for delivering this message to the intended
> recipient, you are hereby notified that any dissemination,
> distribution or copying of this communication is strictly prohibited.
> If you have received this communication in error, please notify us
> immediately by replying to the message and deleting it from your
> computer. Thank you.
> **********************************************************************
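On Buchan's point that a changed entry's entryCSN should not be
retained: when bulk-loading an LDIF whose entries were modified
outside slapd, one way (a sketch; it assumes attribute values are not
folded across LDIF continuation lines) is to strip the CSN attributes
so slapadd generates fresh ones:

```shell
# Sketch: filter an LDIF on stdin, dropping entryCSN/contextCSN lines
# so slapadd regenerates them for modified entries. Assumes values are
# not wrapped onto continuation lines.
strip_csns() {
    grep -v -E '^(entryCSN|contextCSN):'
}
```

e.g. strip_csns < modified.ldif > clean.ldif before running slapadd.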