
List:       kde-core-devel
Subject:    Re: Reusing of KIO slave instances
From:       "Dawit A." <adawit () kde ! org>
Date:       2009-11-19 23:14:55
Message-ID: 200911191814.55642.adawit () kde ! org

Here is the patch as promised...

Note that I have not even tested whether or not this patch compiles since I do 
not yet have Qt 4.6 installed. I have made my best effort to ensure that it is 
an exact replica of the one I have in the 4.3 branch, which does compile and 
work fine. Also, when applying the patch, make sure you do it from the 
top level of kdelibs, since it contains changes in three separate folders 
within kdelibs.

Anyhow, here are a few notes about the patch itself:

#1. With this patch the 'maxInstances' property, which has been around since 
the creation of KIO or thereabouts, will all of a sudden be enforced! This 
means that all protocol files will need to be checked to ensure they contain 
this particular property with a sensible default, or else only a single 
ioslave instance will be spawned for them by default! The patch itself 
contains changes to the protocol files included in kdelibs (ftp/file/http), but 
not kdebase or other modules...
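For illustration, a protocol file outside kdelibs would then want an entry along these lines (a hypothetical example.protocol; the key names are the real ones, the values and slave name are made up):

```ini
# Hypothetical fragment of an example.protocol file.
[Protocol]
protocol=example
exec=kio_example
# Enforced once this patch is in; without it only one slave is spawned.
maxInstances=20
# Optional; 0 / missing means no per-host limit (see #2 below).
maxInstancesPerHost=5
```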

#2. The 'maxInstancesPerHost' property is optional and will not be enforced if 
it is missing for a particular protocol. This was purposefully done for the 
sake of backwards compatibility.

#3. I dealt with the deadlock conditions dfaure raised by simply refusing to 
schedule a request whose other end also needs to be scheduled (e.g. copying a 
file from ftp to ftp or ftp to sftp) if an ioslave for the other end cannot be 
reserved. This is a non-intrusive and much simpler solution than the ones 
suggested in those earlier discussions. It is nevertheless very effective at 
combating deadlock conditions as far as I can tell... 

If you want to test for deadlock conditions, you can change the values of the 
maxInstances/maxInstancesPerHost properties in any non-local (e.g. 
ftp/sftp/http) protocol file and start copying files between two different 
servers. If you make the value of those properties low enough, you would 
normally reach a deadlock condition, but that should never happen with this 
patch. To actually see deadlocks, remove the patch to kdelibs/kio/scheduler.cpp, 
compile, and run the same tests again.
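For instance, temporarily lowering the limits in kioslave/ftp/ftp.protocol to something like the following (hypothetical test values, not the defaults shipped by the patch) and then starting two server-to-server copies should force the contention described above:

```ini
# Hypothetical test values -- restore the originals afterwards.
maxInstances=2
maxInstancesPerHost=1
```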

#4. The entire change needs to be stress tested under heavy request load to 
see if it works reliably. I have attempted to stress it by opening multiple 
tabs and visiting several complex sites, but some sort of automated stress 
testing would be nice...

Thanks...


On Thursday 19 November 2009 10:42:39 Dawit A. wrote:
> On Thursday 19 November 2009 04:11:15 Sebastian Trueg wrote:
> > Did I understand correctly from reading the thread: except for the
> > changes in job.cpp you committed your patch? I.e. maxInstancesPerHost
> > support is already implemented if an app uses the scheduler?
> 
> Unfortunately I did not commit any portion of my patch yet for lack of
> testing. Instead I opted to use the patch locally for an extended period of
> time to see if there are any negative side effects, and so far I have had
>  none. I was being overly cautious because this portion of the code affects
>  every single application that accesses the network or the file system.
>
> > So what I would test is the part of the patch that potentially leads to
> > dead-locks in file_copy and friends?
> 
> Yes, but in addition someone else besides me has to test whether or not the
> default limits used for maxInstances and maxInstancesPerHost are actually
> sufficient, and generally test to see if there are any problems such as
>  performance degradation etc etc...
> 
> > If so, yes, of course I will test that one.
> 
> Great, then I will adapt these changes to trunk and post a patch later on
> today.
> 

["kio_scheduler_request_queue.patch" (text/x-patch)]

Index: kioslave/file/file.protocol
===================================================================
--- kioslave/file/file.protocol	(revision 1051604)
+++ kioslave/file/file.protocol	(working copy)
@@ -13,6 +13,6 @@
 moving=true
 opening=true
 deleteRecursive=true
-maxInstances=4
+maxInstances=50
 X-DocPath=kioslave/file/index.html
 Class=:local
Index: kioslave/http/http.protocol
===================================================================
--- kioslave/http/http.protocol	(revision 1051604)
+++ kioslave/http/http.protocol	(working copy)
@@ -9,6 +9,7 @@
 defaultMimetype=application/octet-stream
 determineMimetypeFromExtension=false
 Icon=text-html
-maxInstances=3
+maxInstances=20
+maxInstancesPerHost=5
 X-DocPath=kioslave/http/index.html
 Class=:internet
Index: kioslave/http/webdavs.protocol
===================================================================
--- kioslave/http/webdavs.protocol	(revision 1051604)
+++ kioslave/http/webdavs.protocol	(working copy)
@@ -14,5 +14,7 @@
 determineMimetypeFromExtension=false
 Icon=folder-remote
 config=webdav
+maxInstances=18
+maxInstancesPerHost=3
 X-DocPath=kioslave/webdav/index.html
 Class=:internet
Index: kioslave/http/https.protocol
===================================================================
--- kioslave/http/https.protocol	(revision 1051604)
+++ kioslave/http/https.protocol	(working copy)
@@ -9,6 +9,8 @@
 defaultMimetype=application/octet-stream
 determineMimetypeFromExtension=false
 Icon=text-html
+maxInstances=18
+maxInstancesPerHost=3
 config=http
 X-DocPath=kioslave/http/index.html
 Class=:internet
Index: kioslave/http/webdav.protocol
===================================================================
--- kioslave/http/webdav.protocol	(revision 1051604)
+++ kioslave/http/webdav.protocol	(working copy)
@@ -13,6 +13,7 @@
 defaultMimetype=application/octet-stream
 determineMimetypeFromExtension=false
 Icon=folder-remote
-maxInstances=3
+maxInstances=18
+maxInstancesPerHost=3
 X-DocPath=kioslave/webdav/index.html
 Class=:internet
Index: kioslave/ftp/ftp.protocol
===================================================================
--- kioslave/ftp/ftp.protocol	(revision 1051604)
+++ kioslave/ftp/ftp.protocol	(working copy)
@@ -12,6 +12,7 @@
 deleting=true
 moving=true
 Icon=folder-remote
-maxInstances=2
+maxInstancesPerHost=2
+maxInstances=10
 X-DocPath=kioslave/ftp/index.html
 Class=:internet
Index: kio/kio/job.cpp
===================================================================
--- kio/kio/job.cpp	(revision 1051604)
+++ kio/kio/job.cpp	(working copy)
@@ -306,6 +306,7 @@
     }
 
     Scheduler::doJob(q);
+    Scheduler::scheduleJob(q);
 }
 
 
@@ -625,6 +626,7 @@
         // Return slave to the scheduler
         d->slaveDone();
         Scheduler::doJob(this);
+        Scheduler::scheduleJob(this);
     }
 }
 
@@ -843,6 +845,7 @@
         // Return slave to the scheduler
         d->slaveDone();
         Scheduler::doJob(this);
+        Scheduler::scheduleJob(this);
     }
 }
 
@@ -1026,6 +1029,7 @@
         // Return slave to the scheduler
         d->slaveDone();
         Scheduler::doJob(this);
+        Scheduler::scheduleJob(this);
     }
 }
 
@@ -1633,6 +1637,7 @@
         // Return slave to the scheduler
         d->slaveDone();
         Scheduler::doJob(this);
+        Scheduler::scheduleJob(this);
     }
 }
 
@@ -1839,6 +1844,8 @@
    else
    {
       startDataPump();
+      if (m_putJob)
+        SimpleJobPrivate::get(m_putJob)->m_pairedUrl = m_src;
    }
 }
 
@@ -2456,6 +2463,7 @@
         // Return slave to the scheduler
         d->slaveDone();
         Scheduler::doJob(this);
+        Scheduler::scheduleJob(this);
     }
 }
 
@@ -2707,6 +2715,7 @@
         d->m_url = d->m_waitQueue.first().url;
         d->slaveDone();
         Scheduler::doJob(this);
+        Scheduler::scheduleJob(this);
      }
   }
 }
Index: kio/kio/job_p.h
===================================================================
--- kio/kio/job_p.h	(revision 1051604)
+++ kio/kio/job_p.h	(working copy)
@@ -101,6 +101,13 @@
         KUrl m_url;
         KUrl m_subUrl;
         int m_command;
+        /*
+          In high-level jobs such as FileCopyJob, this variable represents the
+          source (GET) url and is used by KIO::Scheduler to avoid deadlock conditions
+          when scheduling jobs with two remote ends, e.g. copying file from one ftp
+          server to another.
+         */
+        KUrl m_pairedUrl;
 
         // for use in KIO::Scheduler
         QString m_protocol;
Index: kio/kio/scheduler.cpp
===================================================================
--- kio/kio/scheduler.cpp	(revision 1051604)
+++ kio/kio/scheduler.cpp	(working copy)
@@ -43,6 +43,11 @@
 
 using namespace KIO;
 
+typedef QPointer<Slave> SlavePtr;
+typedef QList<SlavePtr> SlaveList;
+typedef QMap<SlavePtr, QList<SimpleJob *> *> CoSlaveMap;
+
+
 #ifndef KDE_USE_FINAL // already defined in job.cpp
 static inline Slave *jobSlave(SimpleJob *job)
 {
@@ -55,13 +60,26 @@
     return SimpleJobPrivate::get(job)->m_command;
 }
 
+static inline KUrl pairedRequest(SimpleJob *job)
+{
+    return SimpleJobPrivate::get(job)->m_pairedUrl;
+}
+
 static inline void startJob(SimpleJob *job, Slave *slave)
 {
     SimpleJobPrivate::get(job)->start(slave);
 }
 
-typedef QList<Slave *> SlaveList;
-typedef QMap<Slave *, QList<SimpleJob *> * > CoSlaveMap;
+static void reparseConfiguration(const SlaveList& list)
+{
+    QListIterator<SlavePtr> it (list);
+    while (it.hasNext())
+    {
+      Slave* slave = it.next();
+      slave->send(CMD_REPARSECONFIGURATION);
+      slave->resetHost();
+    }
+}
 
 class KIO::SchedulerPrivate
 {
@@ -133,8 +151,8 @@
     void publishSlaveOnHold();
     void registerWindow(QWidget *wid);
 
-    Slave *findIdleSlave(ProtocolInfo *protInfo, SimpleJob *job, bool &exact);
-    Slave *createSlave(ProtocolInfo *protInfo, SimpleJob *job, const KUrl &url);
+    Slave *findIdleSlave(ProtocolInfo *protInfo, SimpleJob *job, bool &exact, bool enforceLimits = false);
+    Slave *createSlave(ProtocolInfo *protInfo, SimpleJob *job, const KUrl& url, bool enforceLimits = false);
 
     void debug_info();
 
@@ -156,38 +174,133 @@
     void slotUnregisterWindow(QObject *);
 };
 
+
 class KIO::SchedulerPrivate::ProtocolInfo
 {
 public:
-    ProtocolInfo() : maxSlaves(1), skipCount(0)
+    ProtocolInfo() : maxSlaves(1), maxSlavesPerHost(0), skipCount(0)
     {
     }
 
     ~ProtocolInfo()
     {
-        qDeleteAll(allSlaves());
+        qDeleteAll(activeSlaves.begin(), activeSlaves.end());
+        qDeleteAll(idleSlaves.begin(), idleSlaves.end());
+        qDeleteAll(coIdleSlaves.begin(), coIdleSlaves.end());
+
+        SlaveList list (coSlaves.keys());
+        qDeleteAll(list.begin(), list.end());
     }
 
-    // bad performance, but will change this later
-    SlaveList allSlaves() const
+    Slave* findJobCoSlave(SimpleJob* job) const
     {
-        SlaveList ret(activeSlaves);
-        ret.append(idleSlaves);
-        ret.append(coSlaves.keys());
-        ret.append(coIdleSlaves);
-        return ret;
+        Slave* slave;
+
+        // Search all slaves to see if job is in the queue of a coSlave
+        QListIterator<SlavePtr> it (activeSlaves);
+        while(it.hasNext())
+        {
+            slave = it.next();
+            JobList *list = coSlaves.value(slave);
+            if (list && list->removeAll(job))
+              return slave;
+        }
+
+        it = idleSlaves;
+        while(it.hasNext())
+        {
+            slave = it.next();
+            JobList *list = coSlaves.value(slave);
+            if (list && list->removeAll(job))
+              return slave;
+        }
+
+        it = coIdleSlaves;
+        while(it.hasNext())
+        {
+            slave = it.next();
+            JobList *list = coSlaves.value(slave);
+            if (list && list->removeAll(job))
+              return slave;
+        }
+
+        it = coSlaves.keys();
+        while(it.hasNext())
+        {
+            slave = it.next();
+            JobList *list = coSlaves.value(slave);
+            if (list && list->removeAll(job))
+              return slave;
+        }
+
+        return 0;
     }
 
+    void reparseSlaveConfiguration() const
+    {
+        reparseConfiguration(activeSlaves);
+        reparseConfiguration(idleSlaves);
+        reparseConfiguration(coIdleSlaves);
+        reparseConfiguration(coSlaves.keys());
+    }
+
+    int activeSlaveCountFor(SimpleJob* job)
+    {
+        int count = 0;
+        QString host = job->url().host();
+
+        if (!host.isEmpty())
+        {
+            QListIterator<SlavePtr> it (activeSlaves);
+            while (it.hasNext())
+            {
+                if (host == it.next()->host())
+                    count++;
+            }
+
+            QString url = job->url().url();
+
+            if (reserveList.contains(url)) {
+                kDebug() << "*** Removing paired request for: " << url;
+                reserveList.removeOne(url);
+            } else {
+                count += reserveList.count();
+            }
+        }
+
+        return count;
+    }
+
+    QStringList reserveList;
     QList<SimpleJob *> joblist;
     SlaveList activeSlaves;
     SlaveList idleSlaves;
     CoSlaveMap coSlaves;
     SlaveList coIdleSlaves;
     int maxSlaves;
+    int maxSlavesPerHost;
     int skipCount;
     QString protocol;
 };
 
+static inline bool checkLimits(KIO::SchedulerPrivate::ProtocolInfo *protInfo, SimpleJob *job)
+{
+  const int numActiveSlaves = protInfo->activeSlaveCountFor(job);
+
+#if 0
+    kDebug() << job->url() << ": ";
+    kDebug() << "    protocol :" << job->url().protocol()
+             << ", max :" << protInfo->maxSlaves
+             << ", max/host :" << protInfo->maxSlavesPerHost
+             << ", active :" << protInfo->activeSlaves.count()
+             << ", idle :" << protInfo->idleSlaves.count()
+             << ", active for " << job->url().host() << " = " << numActiveSlaves;
+#endif
+
+  return (protInfo->maxSlavesPerHost < 1 || protInfo->maxSlavesPerHost > numActiveSlaves);
+}
+
+
 KIO::SchedulerPrivate::ProtocolInfo *
 KIO::SchedulerPrivate::ProtocolInfoDict::get(const QString &protocol)
 {
@@ -197,6 +310,7 @@
         info = new ProtocolInfo;
         info->protocol = protocol;
         info->maxSlaves = KProtocolInfo::maxSlaves( protocol );
+        info->maxSlavesPerHost = KProtocolInfo::maxSlavesPerHost( protocol );
 
         insert(protocol, info);
     }
@@ -345,10 +459,7 @@
     ProtocolInfoDict::ConstIterator endIt = proto.isEmpty() ? protInfoDict.constEnd() : it + 1;
     for (; it != endIt; ++it) {
-        foreach(Slave *slave, (*it)->allSlaves()) {
-            slave->send(CMD_REPARSECONFIGURATION);
-            slave->resetHost();
-        }
+        (*it)->reparseSlaveConfiguration();
     }
 }
 
@@ -372,13 +483,14 @@
 
 void SchedulerPrivate::scheduleJob(SimpleJob *job)
 {
-    newJobs.removeOne(job);
-    QString protocol = SimpleJobPrivate::get(job)->m_protocol;
-//    kDebug(7006) << "protocol=" << protocol;
+    const QString protocol = SimpleJobPrivate::get(job)->m_protocol;
+    //kDebug(7006) << "protocol=" << protocol;
     ProtocolInfo *protInfo = protInfoDict.get(protocol);
-    protInfo->joblist.append(job);
-
-    slaveTimer.start(0);
+    if (!protInfo->joblist.contains(job)) { // scheduleJob already called for this job?
+        newJobs.removeOne(job);
+        protInfo->joblist.append(job);
+        slaveTimer.start(0);
+    }
 }
 
 void SchedulerPrivate::cancelJob(SimpleJob *job) {
@@ -391,17 +503,12 @@
         ProtocolInfo *protInfo = protInfoDict.get(SimpleJobPrivate::get(job)->m_protocol);
         protInfo->joblist.removeAll(job);
 
-        // Search all slaves to see if job is in the queue of a coSlave
-        foreach(Slave* coSlave, protInfo->allSlaves())
-        {
-           JobList *list = protInfo->coSlaves.value(coSlave);
-           if (list && list->removeAll(job)) {
-               // Job was found and removed.
-               // Fall through to kill the slave as well!
-               slave = coSlave;
-               break;
-           }
-        }
+        KUrl pairedUrl = pairedRequest(job);
+        if (pairedUrl.isValid())
+          protInfo->reserveList.removeAll(pairedUrl.url());
+
+        slave = protInfo->findJobCoSlave(job);
+
         if (!slave)
         {
            return; // Job was not yet running and not in a coSlave queue.
@@ -487,15 +594,15 @@
 
     SimpleJob *job = 0;
     Slave *slave = 0;
-
+    
     if (protInfo->skipCount > 2)
     {
        bool dummy;
        // Prevent starvation. We skip the first entry in the queue at most
        // 2 times in a row. The
        protInfo->skipCount = 0;
-       job = protInfo->joblist.at(0);
-       slave = findIdleSlave(protInfo, job, dummy );
+       job = protInfo->joblist.at(0);     
+       slave = findIdleSlave(protInfo, job, dummy, true);
     }
     else
     {
@@ -505,7 +612,8 @@
        for(int i = 0; (i < protInfo->joblist.count()) && (i < 10); i++)
        {
           job = protInfo->joblist.at(i);
-          slave = findIdleSlave(protInfo, job, exact);
+          slave = findIdleSlave(protInfo, job, exact, true);
+
           if (!firstSlave)
           {
              firstJob = job;
@@ -528,22 +636,32 @@
 
     if (!slave)
     {
-       if ( protInfo->maxSlaves > static_cast<int>(protInfo->activeSlaves.count()) )
-       {
+       slave = createSlave(protInfo, job, job->url(), true);
+       if (slave)
           newSlave = true;
-          slave = createSlave(protInfo, job, job->url());
-          if (!slave)
-             slaveTimer.start(0);
-       }
+       else
+          slaveTimer.start(0);
     }
 
     if (!slave)
     {
-//          kDebug(7006) << "No slaves available";
-//          kDebug(7006) << " -- active: " << protInfo->activeSlaves.count();
+       //kDebug(7006) << "No slaves available";
+       //kDebug(7006) << " -- active: " << protInfo->activeSlaves.count();
        return false;
     }
 
+    // Check whether the scheduling of this job depends on another job request
+    // that has yet to arrive and, if it does, add the url of the new job to
+    // the reserve list. This is done to avoid the potential deadlock
+    // conditions that might occur as a result of scheduling high level jobs,
+    // e.g. copying a file from one ftp server to another one.
+    KUrl url = pairedRequest(job);
+    if (url.isValid())
+    {
+        kDebug() << "*** PAIRED REQUEST: " << url;
+        protInfoDict.get(url.protocol())->reserveList << url.url();
+    }
+
     protInfo->activeSlaves.append(slave);
     protInfo->idleSlaves.removeAll(slave);
     protInfo->joblist.removeOne(job);
@@ -600,8 +718,10 @@
     QString user = url.user();
     exact = true;
 
-    foreach( Slave *slave, idleSlaves )
+    Q_FOREACH( Slave *slave, idleSlaves )
     {
+//       kDebug() << "job protocol: " << protocol << ", slave protocol: " << slave->slaveProtocol();
+//       kDebug() << "job host: " << host << ", slave host: " << slave->host();
        if ((protocol == slave->slaveProtocol()) &&
            (host == slave->host()) &&
            (port == slave->port()) &&
@@ -619,78 +739,95 @@
     return 0;
 }
 
-Slave *SchedulerPrivate::findIdleSlave(ProtocolInfo *protInfo, SimpleJob *job, bool &exact)
+Slave *SchedulerPrivate::findIdleSlave(ProtocolInfo *protInfo, SimpleJob *job,
+                                       bool &exact, bool enforceLimits)
 {
     Slave *slave = 0;
-    KIO::SimpleJobPrivate *const jobPriv = SimpleJobPrivate::get(job);
 
-    if (jobPriv->m_checkOnHold)
+    if (!enforceLimits || checkLimits(protInfo, job))
     {
-       slave = Slave::holdSlave(jobPriv->m_protocol, job->url());
-       if (slave)
-          return slave;
+        KIO::SimpleJobPrivate *const jobPriv = SimpleJobPrivate::get(job);
+
+        if (jobPriv->m_checkOnHold)
+        {
+           slave = Slave::holdSlave(jobPriv->m_protocol, job->url());
+           if (slave)
+              return slave;
+        }
+        if (slaveOnHold)
+        {
+           // Make sure that the job wants to do a GET or a POST, and with no offset
+           bool bCanReuse = (jobCommand(job) == CMD_GET);
+           KIO::TransferJob * tJob = qobject_cast<KIO::TransferJob *>(job);
+           if ( tJob )
+           {
+              bCanReuse = (jobCommand(job) == CMD_GET || jobCommand(job) == CMD_SPECIAL);
+              if ( bCanReuse )
+              {
+                KIO::MetaData outgoing = tJob->outgoingMetaData();
+                QString resume = (!outgoing.contains("resume")) ? QString() : outgoing["resume"];
+                kDebug(7006) << "Resume metadata is" << resume;
+                bCanReuse = (resume.isEmpty() || resume == "0");
+              }
+           }
+    //       kDebug(7006) << "bCanReuse = " << bCanReuse;
+           if (bCanReuse)
+           {
+              if (job->url() == urlOnHold)
+              {
+                 kDebug(7006) << "HOLD: Reusing held slave for" << urlOnHold;
+                 slave = slaveOnHold;
+              }
+              else
+              {
+                 kDebug(7006) << "HOLD: Discarding held slave (" << urlOnHold << ")";
+                 slaveOnHold->kill();
+              }
+              slaveOnHold = 0;
+              urlOnHold = KUrl();
+           }
+           if (slave)
+              return slave;
+        }
+
+        slave = searchIdleList(protInfo->idleSlaves, job->url(), jobPriv->m_protocol, exact);
+    }
-    if (slaveOnHold)
-    {
-       // Make sure that the job wants to do a GET or a POST, and with no offset
-       bool bCanReuse = (jobCommand(job) == CMD_GET);
-       KIO::TransferJob * tJob = qobject_cast<KIO::TransferJob *>(job);
-       if ( tJob )
-       {
-          bCanReuse = (jobCommand(job) == CMD_GET || jobCommand(job) == CMD_SPECIAL);
-          if ( bCanReuse )
-          {
-            KIO::MetaData outgoing = tJob->outgoingMetaData();
-            QString resume = (!outgoing.contains("resume")) ? QString() : outgoing["resume"];
-            kDebug(7006) << "Resume metadata is" << resume;
-            bCanReuse = (resume.isEmpty() || resume == "0");
-          }
-       }
-//       kDebug(7006) << "bCanReuse = " << bCanReuse;
-       if (bCanReuse)
-       {
-          if (job->url() == urlOnHold)
-          {
-             kDebug(7006) << "HOLD: Reusing held slave for" << urlOnHold;
-             slave = slaveOnHold;
-          }
-          else
-          {
-             kDebug(7006) << "HOLD: Discarding held slave (" << urlOnHold << ")";
-             slaveOnHold->kill();
-          }
-          slaveOnHold = 0;
-          urlOnHold = KUrl();
-       }
-       if (slave)
-          return slave;
-    }
 
-    return searchIdleList(protInfo->idleSlaves, job->url(), jobPriv->m_protocol, exact);
+    return slave;
 }
 
-Slave *SchedulerPrivate::createSlave(ProtocolInfo *protInfo, SimpleJob *job, const KUrl &url)
+Slave *SchedulerPrivate::createSlave(ProtocolInfo *protInfo, SimpleJob *job,
+                                     const KUrl &url, bool enforceLimits)
 {
-   int error;
-   QString errortext;
-   Slave *slave = Slave::createSlave(protInfo->protocol, url, error, errortext);
-   if (slave)
+   Slave *slave = 0;
+   const int slavesCount = protInfo->activeSlaves.count() + protInfo->idleSlaves.count();
+
+   if (!enforceLimits ||
+       (protInfo->maxSlaves > slavesCount && checkLimits(protInfo, job)))
    {
-      protInfo->idleSlaves.append(slave);
-      q->connect(slave, SIGNAL(slaveDied(KIO::Slave *)),
-                 SLOT(slotSlaveDied(KIO::Slave *)));
-      q->connect(slave, SIGNAL(slaveStatus(pid_t,const QByteArray&,const QString &, bool)),
-                 SLOT(slotSlaveStatus(pid_t,const QByteArray&, const QString &, bool)));
-   }
-   else
-   {
-      kError() << "couldn't create slave:" << errortext;
-      if (job)
+      int error;
+      QString errortext;
+      slave = Slave::createSlave(protInfo->protocol, url, error, errortext);
+
+      if (slave)
       {
-         protInfo->joblist.removeAll(job);
-         job->slotError( error, errortext );
+         protInfo->idleSlaves.append(slave);
+         q->connect(slave, SIGNAL(slaveDied(KIO::Slave *)),
+                    SLOT(slotSlaveDied(KIO::Slave *)));
+         q->connect(slave, SIGNAL(slaveStatus(pid_t,const QByteArray&,const QString &, bool)),
+                    SLOT(slotSlaveStatus(pid_t,const QByteArray&, const QString &, bool)));
+      }
+      else
+      {
+         kError() << "couldn't create slave:" << errortext;
+         if (job)
+         {
+            protInfo->joblist.removeAll(job);
+            job->slotError( error, errortext );
+         }
+      }
    }
+
    return slave;
 }
 
@@ -751,7 +888,7 @@
 
 void SchedulerPrivate::slotCleanIdleSlaves()
 {
-    foreach (ProtocolInfo *protInfo, protInfoDict) {
+    Q_FOREACH (ProtocolInfo *protInfo, protInfoDict) {
         SlaveList::iterator it = protInfo->idleSlaves.begin();
         for( ; it != protInfo->idleSlaves.end(); )
         {
@@ -775,7 +912,7 @@
 
 void SchedulerPrivate::scheduleCleanup()
 {
-    foreach (ProtocolInfo *protInfo, protInfoDict) {
+    Q_FOREACH (ProtocolInfo *protInfo, protInfoDict) {
         if (protInfo->idleSlaves.count() && !cleanupTimer.isActive()) {
             cleanupTimer.start(MAX_SLAVE_IDLE * 1000);
             break;
@@ -848,7 +985,7 @@
 SchedulerPrivate::slotScheduleCoSlave()
 {
     slaveConfig = SlaveConfig::self();
-    foreach (ProtocolInfo *protInfo, protInfoDict) {
+    Q_FOREACH (ProtocolInfo *protInfo, protInfoDict) {
         SlaveList::iterator it = protInfo->coIdleSlaves.begin();
         for( ; it != protInfo->coIdleSlaves.end(); )
         {
@@ -921,16 +1058,15 @@
 {
 //    kDebug(7006) << "_assignJobToSlave( " << job << ", " << slave << ")";
     QString dummy;
-    if ((slave->slaveProtocol() != KProtocolManager::slaveProtocol( job->url(), dummy ))
-        ||
-        (!newJobs.removeAll(job)))
+    ProtocolInfo *protInfo = protInfoDict.get(slave->protocol());
+    if ((slave->slaveProtocol() != KProtocolManager::slaveProtocol( job->url(), dummy )) ||
+        !(protInfo->joblist.removeAll(job) > 0 || newJobs.removeAll(job) > 0))
     {
         kDebug(7006) << "_assignJobToSlave(): ERROR, nonmatching or unknown job.";
         job->kill();
         return false;
     }
 
-    ProtocolInfo *protInfo = protInfoDict.get(slave->protocol());
     JobList *list = protInfo->coSlaves.value(slave);
     assert(list);
     if (!list)
Index: kdecore/sycoca/kprotocolinfo_p.h
===================================================================
--- kdecore/sycoca/kprotocolinfo_p.h	(revision 1051604)
+++ kdecore/sycoca/kprotocolinfo_p.h	(working copy)
@@ -58,6 +58,7 @@
   //KUrl::URIMode uriMode;
   QStringList capabilities;
   QString proxyProtocol;
+  int maxSlavesPerHost;
 };
 
 
Index: kdecore/sycoca/kprotocolinfo.cpp
===================================================================
--- kdecore/sycoca/kprotocolinfo.cpp	(revision 1051604)
+++ kdecore/sycoca/kprotocolinfo.cpp	(working copy)
@@ -69,6 +69,7 @@
   m_icon = config.readEntry( "Icon" );
   m_config = config.readEntry( "config", m_name );
   m_maxSlaves = config.readEntry( "maxInstances", 1);
+  d->maxSlavesPerHost = config.readEntry( "maxInstancesPerHost", 0);
 
   QString tmp = config.readEntry( "input" );
   if ( tmp == "filesystem" )
@@ -151,7 +152,7 @@
         >> d->capabilities >> d->proxyProtocol
         >> i_canRenameFromFile >> i_canRenameToFile
         >> i_canDeleteRecursive >> i_fileNameUsedForCopying
-        >> d->archiveMimetype;
+        >> d->archiveMimetype >> d->maxSlavesPerHost;
 
    m_inputType = (Type) i_inputType;
    m_outputType = (Type) i_outputType;
@@ -230,7 +231,7 @@
         << capabilities << proxyProtocol
         << i_canRenameFromFile << i_canRenameToFile
         << i_canDeleteRecursive << i_fileNameUsedForCopying
-        << archiveMimetype;
+        << archiveMimetype << maxSlavesPerHost;
 }
 
 
@@ -282,6 +283,15 @@
   return prot->m_maxSlaves;
 }
 
+int KProtocolInfo::maxSlavesPerHost( const QString& _protocol )
+{
+  KProtocolInfo::Ptr prot = KProtocolInfoFactory::self()->findProtocol(_protocol);
+  if ( !prot )
+    return 0;
+
+  return prot->d_func()->maxSlavesPerHost;
+}
+
 bool KProtocolInfo::determineMimetypeFromExtension( const QString &_protocol )
 {
   KProtocolInfo::Ptr prot = KProtocolInfoFactory::self()->findProtocol( _protocol );
Index: kdecore/sycoca/ksycoca.cpp
===================================================================
--- kdecore/sycoca/ksycoca.cpp	(revision 1051604)
+++ kdecore/sycoca/ksycoca.cpp	(working copy)
@@ -55,7 +55,7 @@
  * If the existing file is outdated, it will not get read
  * but instead we'll ask kded to regenerate a new one...
  */
-#define KSYCOCA_VERSION 161
+#define KSYCOCA_VERSION 162
 
 /**
  * Sycoca file name, used internally (by kbuildsycoca)
Index: kdecore/sycoca/kprotocolinfo.h
===================================================================
--- kdecore/sycoca/kprotocolinfo.h	(revision 1051604)
+++ kdecore/sycoca/kprotocolinfo.h	(working copy)
@@ -218,7 +218,21 @@
    */
   static int maxSlaves( const QString& protocol );
 
+
   /**
+   * Returns the limit on the number of slaves for this protocol per host.
+   *
+   * This corresponds to the "maxInstancesPerHost=" field in the protocol description file.
+   * The default is 0 which means there is no per host limit.
+   *
+   * @param protocol the protocol to check
+   * @return the maximum number of slaves per host, or 0 if not specified
+   *
+   * @since 4.4
+   */
+  static int maxSlavesPerHost( const QString& protocol );
+
+  /**
    * Returns whether mimetypes can be determined based on extension for this
    * protocol. For some protocols, e.g. http, the filename extension in the URL
    * can not be trusted to truly reflect the file type.


