
List:       kde-core-devel
Subject:    Re: RFC: KIO::Scheduler changes... [PATCH included]
From:       "Dawit A." <adawit () kde ! org>
Date:       2009-08-31 0:38:15
Message-ID: 200908302038.15982.adawit () kde ! org

New patch that combines all the individual patches I posted before (to be 
applied from kdelibs level) with the following additional changes:

1.) Improved the scheduler patch by moving the enforcement of the ioslave
limits into two functions (createSlave and findIdleSlave). Limits are still
only enforced when jobs are scheduled. (A condensed sketch of the check
follows this list.)

2.) Fixed a logic error in the modification to
KIO::Scheduler::assignJobToSlave(...).

3.) Modified the "maxInstances" and "maxInstancesPerHost" entries for the
file, http and ftp ioslaves to make it easier to test the change itself.
Note that I picked these values arbitrarily; they are simply limits I thought
would be sufficient and are not set in stone...
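
For reference, here is roughly the check the two functions now perform. This
is a condensed view of the scheduler.cpp hunk below, not standalone code; the
helper is the new ProtocolInfo::activeSlavesCountFor():

    // In createSlave(): only spawn a new ioslave if the per-protocol limit
    // and, when set, the per-host limit have not been reached yet.
    const int slavesCount = protInfo->activeSlaves.count() + protInfo->idleSlaves.count();
    if (!enforceLimits ||
        (protInfo->maxSlaves > slavesCount &&
         (protInfo->maxSlavesPerHost < 1 ||
          protInfo->maxSlavesPerHost > protInfo->activeSlavesCountFor(url.host()))))
    {
        // ... create the new ioslave ...
    }

    // In findIdleSlave(): only hand out an idle/held ioslave if the per-host
    // limit (0 means "no limit") has not been reached for the job's host.
    if (!enforceLimits || protInfo->maxSlavesPerHost < 1 ||
        protInfo->maxSlavesPerHost > protInfo->activeSlavesCountFor(job->url().host()))
    {
        // ... reuse an idle or held ioslave ...
    }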

Now if someone else could apply these patches and provide feedback, that would 
be great...


On Friday 28 August 2009 17:25:54 Dawit A. wrote:
> This is a follow-up to the recent discussions about what KIO::Scheduler
> does by default and how we can go about improving the current default
> behavior to resolve some long-standing issues in KDE as well as limit the
> number of new ioslaves (read: new connections) that get spawned. For further
> details please read the threads linked below or the explanation at the
> bottom of this email.
>
> The patches provided here are my attempt to address the current
> shortcomings of the scheduler and are based on a patch David posted in [1],
> with the following additions/modifications:
>
> 1.) Add support for optional host-based limits to the .protocol files
> (maxInstancesPerHost) and, if set, use them in KIO::Scheduler::scheduleJob.
>
> 2.) Modify job.cpp to ensure all redirected requests are re-scheduled (see
> the snippet after this list).
>
> 3.) Remove scheduled jobs from the queue if they are to be used in
> connection-oriented scheduling mode (KIO::Scheduler::assignJobToSlave).
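>
> For reference, the job.cpp change in item 2 boils down to adding one call
> after each existing Scheduler::doJob() in the redirection handlers (taken
> from the patch below):
>
>         // Return the ioslave to the scheduler and re-register the job...
>         d->slaveDone();
>         Scheduler::doJob(this);
>         // ...and (new) queue it again so the per-protocol/per-host limits
>         // also apply to the redirected request.
>         Scheduler::scheduleJob(this);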
>
> ---------------------------------------------------------------------------
>
> What follows, for those who do not want to read the other threads linked
> below, is a brief explanation of how jobs are currently handled by the
> scheduler and how they will be handled once the attached patches are
> applied.
>
> First, for the benefit of those not familiar with the inner workings of
> KIO, here is roughly how a job is handled by KIO internally:
>
> 1.) A call to KIO::get, KIO::post, KIO::mimetype, etc. comes in...
>
> 2.) A job is created and, if it is of type SimpleJob, it is handed over to
> KIO::Scheduler...
>
> 3.) KIO::Scheduler simply checks to see if there is an idle ioslave that
> can be reused to handle the request. If one is found, it is reused.
>
> 4.) If none is found, the scheduler will always attempt to create a brand
> new ioslave; there is no limit on the number of ioslaves it will create.
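>
> To make the above concrete, here is a minimal application-side sketch (my
> own illustration, not part of the patch); the application only calls
> KIO::get(), and the job registers itself with the scheduler internally:
>
>     #include <kio/job.h>
>     #include <kio/jobclasses.h>
>     #include <kurl.h>
>
>     void startDownload()
>     {
>         // Steps 1 and 2: KIO::get() creates a TransferJob (a SimpleJob) and
>         // hands it to KIO::Scheduler via Scheduler::doJob() internally.
>         KIO::TransferJob *job = KIO::get(KUrl("http://www.kde.org/index.html"),
>                                          KIO::NoReload, KIO::HideProgressInfo);
>         // Steps 3 and 4 happen inside the scheduler: reuse an idle ioslave
>         // if one matches, otherwise spawn a brand new one with no limit.
>         // A real application would connect to the job's data()/result()
>         // signals here.
>         Q_UNUSED(job);
>     }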
>
> The issue with this is that the scheduler by default does not have any
> limits on the number of ioslaves (read: new connections) that it spawns.
> Instead it forces you to call an additional function,
> KIO::Scheduler::scheduleJob, if you want to limit the number of ioslave
> instances.
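>
> For example, today an application that wants its jobs queued and the
> "maxInstances" limit honored has to opt in explicitly, along these lines
> (my own illustration of the existing API):
>
>     #include <kio/job.h>
>     #include <kio/scheduler.h>
>
>     void startThrottledDownload()
>     {
>         KIO::TransferJob *job = KIO::get(KUrl("ftp://ftp.kde.org/pub/kde/README"));
>         // Without this extra call the scheduler will spawn a new ioslave
>         // (and thus a new connection) for every job it cannot serve with an
>         // idle one.
>         KIO::Scheduler::scheduleJob(job);
>     }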
>
> In practice the problems with this default behavior only affect users
> under certain circumstances, namely where servers actually put a limit on
> the number of connections they allow from a given address. This is
> especially true for more than a few ftp servers out there. In fact, there
> is a long-standing FTP-related bug against Konqueror caused by this very
> issue. However, just because users do not directly see the side effects
> does not mean there is no problem with what the scheduler does by default.
> Creating too many connections by spawning many ioslaves is costly for both
> the client and server machines since it unnecessarily results in the
> creation of many sockets. Hence, we end up using too many resources.
>
> Okay, so what does the attached patch do?
>
> 1.) Schedule all simple jobs created through KIO::<get/post/..> by default.
>
> 2.) Add an optional "maxInstancesPerHost" property to the *.protocol files
> that is honored when jobs are scheduled, which is now the default.
> KIO::Scheduler::scheduleJob currently honors the existing "maxInstances"
> property set in each *.protocol file as a 'soft limit'. However, if #1 is
> made the default, this will cause a bottleneck at the scheduler since the
> "maxInstances" property is per protocol (ftp, http). For example, for the
> ioslaves included in kdelibs, this value is set to 4, 3 and 2 in the file,
> http and ftp protocol files respectively. Having only this many instances
> of ioslaves KDE-wide is not sufficient; hence the new property. After that
> it is only a matter of setting both the "maxInstances=" and
> "maxInstancesPerHost=" properties to some sane default values so that the
> scheduler can do a better job of scheduling requests per host.
>
> For example, for the http protocol the "maxInstances" property can be
> raised to "15", 5x its original value, and "maxInstancesPerHost" set to
> "3". This means that there cannot be more than 15 instances of the http
> ioslave at any given time and no more than 3 connections to any given
> host. All jobs will be queued until these criteria are fulfilled...
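>
> In *.protocol terms that example would amount to entries along these lines
> (illustrative values taken from the paragraph above; the attached patch
> actually uses 12 and 3 for http to keep testing simple):
>
>     maxInstances=15
>     maxInstancesPerHost=3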
>
> What issues remain?
>
> 1.) Unfortunately the patch to job.cpp will break applications that want
> direct control over the number of ioslaves (read: connections), e.g. a
> download manager. The change to scheduling jobs by default makes direct
> control impossible since all jobs will be queued. So either an additional
> function is required in KIO::Scheduler to remove a scheduled job from the
> queue, or such applications need to call KIO::Scheduler::doJob again and we
> add the remove-from-queue check there. Somehow the latter seems very
> inefficient to me: doJob(...), scheduleJob(...) and then doJob(...) again...
> (A sketch of the first option follows these two points.)
>
> 2.) Should the other scheduling modes, besides "scheduled" mode, be forced
> to honor the "maxInstances" property? My own view is no, since the default
> mode does and the other modes are meant to give users of KIO complete
> control to schedule their jobs as they see fit.
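>
> To illustrate the first option from point 1: a download manager would need
> something along these lines. Note that the function name below is
> hypothetical; no such call exists yet:
>
>     #include <kio/job.h>
>     #include <kio/scheduler.h>
>
>     void startManagedDownload(const KUrl &url)
>     {
>         KIO::TransferJob *job = KIO::get(url, KIO::NoReload, KIO::HideProgressInfo);
>         // Hypothetical new API: pull the job back out of the scheduler's
>         // queue so the application can manage its own connection count.
>         KIO::Scheduler::unscheduleJob(job);
>     }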
>
> Anyhow, comments, suggestions, improvements and everything in between are
> welcome...
>
> [1] http://lists.kde.org/?t=124531188100002&r=1&w=2
> [2] http://lists.kde.org/?t=125042610900001&r=1&w=2

["scheduler.patch" (text/x-patch)]

Index: kio/kio/scheduler.cpp
===================================================================
--- kio/kio/scheduler.cpp	(revision 1016843)
+++ kio/kio/scheduler.cpp	(working copy)
@@ -136,8 +136,8 @@
     void publishSlaveOnHold();
     void registerWindow(QWidget *wid);
 
-    Slave *findIdleSlave(ProtocolInfo *protInfo, SimpleJob *job, bool &exact);
-    Slave *createSlave(ProtocolInfo *protInfo, SimpleJob *job, const KUrl &url);
+    Slave *findIdleSlave(ProtocolInfo *protInfo, SimpleJob *job, bool &exact, bool enforceLimits = false);
+    Slave *createSlave(ProtocolInfo *protInfo, SimpleJob *job, const KUrl& url, bool enforceLimits = false);
 
     void debug_info();
 
@@ -162,7 +162,7 @@
 class KIO::SchedulerPrivate::ProtocolInfo
 {
 public:
-    ProtocolInfo() : maxSlaves(1), skipCount(0)
+    ProtocolInfo() : maxSlaves(1), maxSlavesPerHost(0), skipCount(0)
     {
     }
 
@@ -181,12 +181,30 @@
         return ret;
     }
 
+    int activeSlavesCountFor(const QString& host) const
+    {
+        int count = 0;
+
+        if (!host.isEmpty())
+        {
+            QListIterator<Slave *> it (activeSlaves);
+            while (it.hasNext())
+            {
+                if (host == it.next()->host())
+                    count++;
+            }
+        }
+
+        return count;
+    }
+
     QList<SimpleJob *> joblist;
     SlaveList activeSlaves;
     SlaveList idleSlaves;
     CoSlaveMap coSlaves;
     SlaveList coIdleSlaves;
     int maxSlaves;
+    int maxSlavesPerHost;
     int skipCount;
     QString protocol;
 };
@@ -200,6 +218,7 @@
         info = new ProtocolInfo;
         info->protocol = protocol;
         info->maxSlaves = KProtocolInfo::maxSlaves( protocol );
+        info->maxSlavesPerHost = KProtocolInfo::maxSlavesPerHost( protocol );
 
         insert(protocol, info);
     }
@@ -382,9 +401,9 @@
     }
 }
 
-void SchedulerPrivate::doJob(SimpleJob *job) {
+void SchedulerPrivate::doJob(SimpleJob *job) {  
     JobData jobData;
-    jobData.protocol = KProtocolManager::slaveProtocol(job->url(), jobData.proxy);
+    jobData.protocol = KProtocolManager::slaveProtocol(job->url(), jobData.proxy);
 //    kDebug(7006) << "protocol=" << jobData->protocol;
     if (jobCommand(job) == CMD_GET)
     {
@@ -401,15 +420,17 @@
 }
 
 void SchedulerPrivate::scheduleJob(SimpleJob *job) {
+
     newJobs.removeOne(job);
     const JobData& jobData = extraJobData.value(job);
 
     QString protocol = jobData.protocol;
 //    kDebug(7006) << "protocol=" << protocol;
     ProtocolInfo *protInfo = protInfoDict.get(protocol);
-    protInfo->joblist.append(job);
-
-    slaveTimer.start(0);
+    if (!protInfo->joblist.contains(job)) { // scheduleJob already called for this job?
+        protInfo->joblist.append(job);
+        slaveTimer.start(0);
+    }
 }
 
 void SchedulerPrivate::cancelJob(SimpleJob *job) {
@@ -520,15 +541,15 @@
 
     SimpleJob *job = 0;
     Slave *slave = 0;
-
+    
     if (protInfo->skipCount > 2)
     {
        bool dummy;
        // Prevent starvation. We skip the first entry in the queue at most
        // 2 times in a row. The
        protInfo->skipCount = 0;
-       job = protInfo->joblist.at(0);
-       slave = findIdleSlave(protInfo, job, dummy );
+       job = protInfo->joblist.at(0);     
+       slave = findIdleSlave(protInfo, job, dummy, true);
     }
     else
     {
@@ -538,7 +559,8 @@
        for(int i = 0; (i < protInfo->joblist.count()) && (i < 10); i++)
        {
           job = protInfo->joblist.at(i);
-          slave = findIdleSlave(protInfo, job, exact);
+          slave = findIdleSlave(protInfo, job, exact, true);
+
           if (!firstSlave)
           {
              firstJob = job;
@@ -561,13 +583,11 @@
 
     if (!slave)
     {
-       if ( protInfo->maxSlaves > static_cast<int>(protInfo->activeSlaves.count()) )
-       {
-          newSlave = true;
-          slave = createSlave(protInfo, job, job->url());
-          if (!slave)
-             slaveTimer.start(0);
-       }
+      slave = createSlave(protInfo, job, job->url(), true);
+      if (slave)
+        newSlave = true;
+      else
+        slaveTimer.start(0);
     }
 
     if (!slave)
@@ -635,6 +655,8 @@
 
     foreach( Slave *slave, idleSlaves )
     {
+//       kDebug() << "job protocol: " << protocol << ", slave protocol: " << slave->slaveProtocol();
+//       kDebug() << "job host: " << host << ", slave host: " << slave->host();
        if ((protocol == slave->slaveProtocol()) &&
            (host == slave->host()) &&
            (port == slave->port()) &&
@@ -652,82 +674,106 @@
     return 0;
 }
 
-Slave *SchedulerPrivate::findIdleSlave(ProtocolInfo *protInfo, SimpleJob *job, bool &exact)
+Slave *SchedulerPrivate::findIdleSlave(ProtocolInfo *protInfo, SimpleJob *job,
+                                       bool &exact, bool enforceLimits)
 {
     Slave *slave = 0;
     JobData jobData = extraJobData.value(job);
 
-    if (jobData.checkOnHold)
+    if (!enforceLimits || protInfo->maxSlavesPerHost < 1 ||
+        protInfo->maxSlavesPerHost > protInfo->activeSlavesCountFor(job->url().host()))
     {
-       slave = Slave::holdSlave(jobData.protocol, job->url());
-       if (slave)
-          return slave;
+        if (jobData.checkOnHold)
+        {
+           slave = Slave::holdSlave(jobData.protocol, job->url());
+           if (slave)
+              return slave;
+        }
+        if (slaveOnHold)
+        {
+           // Make sure that the job wants to do a GET or a POST, and with no offset
+           bool bCanReuse = (jobCommand(job) == CMD_GET);
+           KIO::TransferJob * tJob = qobject_cast<KIO::TransferJob *>(job);
+           if ( tJob )
+           {
+              bCanReuse = (jobCommand(job) == CMD_GET || jobCommand(job) == CMD_SPECIAL);
+              if ( bCanReuse )
+              {
+                KIO::MetaData outgoing = tJob->outgoingMetaData();
+                QString resume = (!outgoing.contains("resume")) ? QString() : outgoing["resume"];
+                kDebug(7006) << "Resume metadata is" << resume;
+                bCanReuse = (resume.isEmpty() || resume == "0");
+              }
+           }
+//           kDebug(7006) << "bCanReuse = " << bCanReuse;
+           if (bCanReuse)
+           {
+              if (job->url() == urlOnHold)
+              {
+                 kDebug(7006) << "HOLD: Reusing held slave for" << urlOnHold;
+                 slave = slaveOnHold;
+              }
+              else
+              {
+                 kDebug(7006) << "HOLD: Discarding held slave (" << urlOnHold << ")";
+                 slaveOnHold->kill();
+              }
+              slaveOnHold = 0;
+              urlOnHold = KUrl();
+           }
+           if (slave)
+              return slave;
+        }
+        slave = searchIdleList(protInfo->idleSlaves, job->url(), jobData.protocol, exact);
     }
-    if (slaveOnHold)
+
+    return slave;
+}
+
+Slave *SchedulerPrivate::createSlave(ProtocolInfo *protInfo, SimpleJob *job,
+                                     const KUrl &url, bool enforceLimits)
+{  
+#if 0
+    kDebug() << "max allowed : " << protInfo->maxSlaves
+             << ", max per/host allowed: " << protInfo->maxSlavesPerHost
+             << ", total active now: " << protInfo->activeSlaves.count()
+             << ", total active for " << url.host() << ": "
+             << protInfo->activeSlavesCountFor(url.host());
+#endif
+    Slave *slave = 0;
+    const int slavesCount = protInfo->activeSlaves.count() + protInfo->idleSlaves.count();
+
+    if (!enforceLimits ||
+        (protInfo->maxSlaves > slavesCount && (protInfo->maxSlavesPerHost < 1 ||
+         protInfo->maxSlavesPerHost > protInfo->activeSlavesCountFor(url.host()))))
     {
-       // Make sure that the job wants to do a GET or a POST, and with no offset
-       bool bCanReuse = (jobCommand(job) == CMD_GET);
-       KIO::TransferJob * tJob = qobject_cast<KIO::TransferJob *>(job);
-       if ( tJob )
-       {
-          bCanReuse = (jobCommand(job) == CMD_GET || jobCommand(job) == CMD_SPECIAL);
-          if ( bCanReuse )
-          {
-            KIO::MetaData outgoing = tJob->outgoingMetaData();
-            QString resume = (!outgoing.contains("resume")) ? QString() : outgoing["resume"];
-            kDebug(7006) << "Resume metadata is" << resume;
-            bCanReuse = (resume.isEmpty() || resume == "0");
-          }
-       }
-//       kDebug(7006) << "bCanReuse = " << bCanReuse;
-       if (bCanReuse)
-       {
-          if (job->url() == urlOnHold)
-          {
-             kDebug(7006) << "HOLD: Reusing held slave for" << urlOnHold;
-             slave = slaveOnHold;
-          }
-          else
-          {
-             kDebug(7006) << "HOLD: Discarding held slave (" << urlOnHold << ")";
-             slaveOnHold->kill();
-          }
-          slaveOnHold = 0;
-          urlOnHold = KUrl();
-       }
-       if (slave)
-          return slave;
+        int error;
+        QString errortext;
+        slave = Slave::createSlave(protInfo->protocol, url, error, errortext);
+
+        if (slave)
+        {
+            protInfo->idleSlaves.append(slave);
+            q->connect(slave, SIGNAL(slaveDied(KIO::Slave *)),
+                       SLOT(slotSlaveDied(KIO::Slave *)));
+            q->connect(slave, SIGNAL(slaveStatus(pid_t,const QByteArray&,const QString &, bool)),
+                       SLOT(slotSlaveStatus(pid_t,const QByteArray&, const QString &, bool)));
+        }
+        else
+        {
+            kError() << "couldn't create slave:" << errortext;
+            if (job)
+            {
+                protInfo->joblist.removeAll(job);
+                extraJobData.remove(job);
+                job->slotError( error, errortext );
+            }
+        }
     }
 
-    return searchIdleList(protInfo->idleSlaves, job->url(), jobData.protocol, exact);
+    return slave;
 }
 
-Slave *SchedulerPrivate::createSlave(ProtocolInfo *protInfo, SimpleJob *job, const KUrl &url)
-{
-   int error;
-   QString errortext;
-   Slave *slave = Slave::createSlave(protInfo->protocol, url, error, errortext);
-   if (slave)
-   {
-      protInfo->idleSlaves.append(slave);
-      q->connect(slave, SIGNAL(slaveDied(KIO::Slave *)),
-                 SLOT(slotSlaveDied(KIO::Slave *)));
-      q->connect(slave, SIGNAL(slaveStatus(pid_t,const QByteArray&,const QString &, bool)),
-                 SLOT(slotSlaveStatus(pid_t,const QByteArray&, const QString &, bool)));
-   }
-   else
-   {
-      kError() << "couldn't create slave:" << errortext;
-      if (job)
-      {
-         protInfo->joblist.removeAll(job);
-         extraJobData.remove(job);
-         job->slotError( error, errortext );
-      }
-   }
-   return slave;
-}
-
 void SchedulerPrivate::slotSlaveStatus(pid_t, const QByteArray&, const QString &, bool)
 {
 }
@@ -955,16 +1001,15 @@
 {
 //    kDebug(7006) << "_assignJobToSlave( " << job << ", " << slave << ")";
     QString dummy;
-    if ((slave->slaveProtocol() != KProtocolManager::slaveProtocol( job->url(), dummy ))
-        ||
-        (!newJobs.removeAll(job)))
+    ProtocolInfo *protInfo = protInfoDict.get(slave->protocol());
+    if ((slave->slaveProtocol() != KProtocolManager::slaveProtocol( job->url(), dummy )) ||
+        !(protInfo->joblist.removeAll(job) > 0 || newJobs.removeAll(job) > 0))
     {
         kDebug(7006) << "_assignJobToSlave(): ERROR, nonmatching or unknown job.";
         job->kill();
         return false;
     }
 
-    ProtocolInfo *protInfo = protInfoDict.get(slave->protocol());
     JobList *list = protInfo->coSlaves.value(slave);
     assert(list);
     if (!list)
Index: kio/kio/job.cpp
===================================================================
--- kio/kio/job.cpp	(revision 1016843)
+++ kio/kio/job.cpp	(working copy)
@@ -308,6 +308,7 @@
     }
 
     Scheduler::doJob(q);
+    Scheduler::scheduleJob(q);
 }
 
 
@@ -627,6 +628,7 @@
         // Return slave to the scheduler
         d->slaveDone();
         Scheduler::doJob(this);
+        Scheduler::scheduleJob(this);
     }
 }
 
@@ -834,6 +836,7 @@
         // Return slave to the scheduler
         d->slaveDone();
         Scheduler::doJob(this);
+        Scheduler::scheduleJob(this);
     }
 }
 
@@ -999,6 +1002,7 @@
         // Return slave to the scheduler
         d->slaveDone();
         Scheduler::doJob(this);
+        Scheduler::scheduleJob(this);
     }
 }
 
@@ -1597,6 +1601,7 @@
         // Return slave to the scheduler
         d->slaveDone();
         Scheduler::doJob(this);
+        Scheduler::scheduleJob(this);
     }
 }
 
@@ -2420,6 +2425,7 @@
         // Return slave to the scheduler
         d->slaveDone();
         Scheduler::doJob(this);
+        Scheduler::scheduleJob(this);
     }
 }
 
@@ -2671,6 +2677,7 @@
         d->m_url = d->m_waitQueue.first().url;
         d->slaveDone();
         Scheduler::doJob(this);
+        Scheduler::scheduleJob(this);
      }
   }
 }
Index: kdecore/sycoca/kprotocolinfo_p.h
===================================================================
--- kdecore/sycoca/kprotocolinfo_p.h	(revision 1016843)
+++ kdecore/sycoca/kprotocolinfo_p.h	(working copy)
@@ -58,6 +58,7 @@
   //KUrl::URIMode uriMode;
   QStringList capabilities;
   QString proxyProtocol;
+  int maxSlavesPerHost;
 };
 
 
Index: kdecore/sycoca/kprotocolinfo.cpp
===================================================================
--- kdecore/sycoca/kprotocolinfo.cpp	(revision 1016843)
+++ kdecore/sycoca/kprotocolinfo.cpp	(working copy)
@@ -69,6 +69,7 @@
   m_icon = config.readEntry( "Icon" );
   m_config = config.readEntry( "config", m_name );
   m_maxSlaves = config.readEntry( "maxInstances", 1);
+  d->maxSlavesPerHost = config.readEntry( "maxInstancesPerHost", 0);
 
   QString tmp = config.readEntry( "input" );
   if ( tmp == "filesystem" )
@@ -151,7 +152,7 @@
         >> d->capabilities >> d->proxyProtocol
         >> i_canRenameFromFile >> i_canRenameToFile
         >> i_canDeleteRecursive >> i_fileNameUsedForCopying
-        >> d->archiveMimetype;
+        >> d->archiveMimetype >> d->maxSlavesPerHost;
 
    m_inputType = (Type) i_inputType;
    m_outputType = (Type) i_outputType;
@@ -230,7 +231,7 @@
         << capabilities << proxyProtocol
         << i_canRenameFromFile << i_canRenameToFile
         << i_canDeleteRecursive << i_fileNameUsedForCopying
-        << archiveMimetype;
+        << archiveMimetype << maxSlavesPerHost;
 }
 
 
@@ -282,6 +283,15 @@
   return prot->m_maxSlaves;
 }
 
+int KProtocolInfo::maxSlavesPerHost( const QString& _protocol )
+{
+  KProtocolInfo::Ptr prot = KProtocolInfoFactory::self()->findProtocol(_protocol);
+  if ( !prot )
+    return 0;
+
+  return prot->d_func()->maxSlavesPerHost;
+}
+
 bool KProtocolInfo::determineMimetypeFromExtension( const QString &_protocol )
 {
   KProtocolInfo::Ptr prot = KProtocolInfoFactory::self()->findProtocol( _protocol );
Index: kdecore/sycoca/ksycoca.cpp
===================================================================
--- kdecore/sycoca/ksycoca.cpp	(revision 1016843)
+++ kdecore/sycoca/ksycoca.cpp	(working copy)
@@ -55,7 +55,7 @@
  * If the existing file is outdated, it will not get read
  * but instead we'll ask kded to regenerate a new one...
  */
-#define KSYCOCA_VERSION 143
+#define KSYCOCA_VERSION 144
 
 /**
  * Sycoca file name, used internally (by kbuildsycoca)
Index: kdecore/sycoca/kprotocolinfo.h
===================================================================
--- kdecore/sycoca/kprotocolinfo.h	(revision 1016843)
+++ kdecore/sycoca/kprotocolinfo.h	(working copy)
@@ -218,7 +218,19 @@
    */
   static int maxSlaves( const QString& protocol );
 
+
   /**
+   * Returns the limit on the number of slaves for this protocol per host.
+   *
+   * This corresponds to the "maxInstancesPerHost=" field in the protocol description file.
+   * The default is 0 which means there is no per host limit.
+   *
+   * @param protocol the protocol to check
+   * @return the maximum number of slaves per host, or 0 if unknown
+   */
+  static int maxSlavesPerHost( const QString& protocol );
+
+  /**
    * Returns whether mimetypes can be determined based on extension for this
    * protocol. For some protocols, e.g. http, the filename extension in the URL
    * can not be trusted to truly reflect the file type.
Index: kioslave/http/http.protocol
===================================================================
--- kioslave/http/http.protocol	(revision 1016843)
+++ kioslave/http/http.protocol	(working copy)
@@ -9,6 +9,7 @@
 defaultMimetype=application/octet-stream
 determineMimetypeFromExtension=false
 Icon=text-html
-maxInstances=3
+maxInstancesPerHost=3
+maxInstances=12
 X-DocPath=kioslave/http/index.html
 Class=:internet
Index: kioslave/http/https.protocol
===================================================================
--- kioslave/http/https.protocol	(revision 1016843)
+++ kioslave/http/https.protocol	(working copy)
@@ -7,6 +7,8 @@
 defaultMimetype=application/octet-stream
 determineMimetypeFromExtension=false
 Icon=text-html
+maxInstancesPerHost=3
+maxInstances=12
 config=http
 X-DocPath=kioslave/http/index.html
 Class=:internet
Index: kioslave/file/file.protocol
===================================================================
--- kioslave/file/file.protocol	(revision 1016843)
+++ kioslave/file/file.protocol	(working copy)
@@ -13,6 +13,6 @@
 moving=true
 opening=true
 deleteRecursive=true
-maxInstances=4
+maxInstances=10
 X-DocPath=kioslave/file/index.html
 Class=:local
Index: kioslave/ftp/ftp.protocol
===================================================================
--- kioslave/ftp/ftp.protocol	(revision 1016843)
+++ kioslave/ftp/ftp.protocol	(working copy)
@@ -12,6 +12,7 @@
 deleting=true
 moving=true
 Icon=folder-remote
-maxInstances=2
+maxInstancesPerHost=2
+maxInstances=10
 X-DocPath=kioslave/ftp/index.html
 Class=:internet


