From kde-core-devel Wed Apr 28 05:06:35 2010
From: Dawit A
Date: Wed, 28 Apr 2010 05:06:35 +0000
To: kde-core-devel
Subject: [PATCH] Request for adding internal meta-data support to KIO...
Message-Id: <201004280106.36237.adawit () kde ! org>
X-MARC-Message: https://marc.info/?l=kde-core-devel&m=127243120606811

Purpose:
======

Allow io-slaves to send meta-data to one another that is not accessible
from the application level.

Reason:
=====

This patch grew from a need to fix authentication caching issues in
kio_http. Let me explain the problem a little, to make it crystal clear
why such a solution is essential.

Like every other ioslave, kio_http uses kpasswdserver to store and
retrieve authentication information, which is currently accessed from
KIO over D-Bus. Nothing wrong with that, except that kio_http needs to
query this service every time it processes a request. It needs to do
this because it wants to preemptively send the authentication header
and avoid extra round trips to sites the user has already authenticated
to. However, doing so would mean sending a huge number of requests over
D-Bus. That brings us to the problem...

When kio_http was rewritten for KDE 4.x, the idea of preemptively
sending the authentication header was put aside to avoid unnecessary
abuse of D-Bus. Instead, kio_http was changed to always wait for a
401/407 response from the server before attempting to send any
authentication header. Unfortunately, that breaks things in more ways
than one.
Not only does waiting for a response from the server now cause three or
more round trips to access every resource on a protected website, it
also breaks sites that rely, correctly I might add (see RFC 2617), on
the fact that once authenticated, the client will cache the credentials
and preemptively send the authentication header whenever appropriate.
In other words, those sites do not expect to have to challenge the
client again. See bug# 181833 or the others marked duplicate of it.

Solution:
======

The obvious solution to the problem described above is to have the
first ioslave that successfully authenticates against a protected
website somehow communicate this fact to all other io-slaves that are
going to access a resource from the same site. That way, only those
io-slaves would know they need to check for cached authentication
information and send it preemptively, solving both of the problems
for us...

What was surprising here is that the above solution can be implemented
very easily. With only one additional requirement to qualify meta-data
as internal, we can reuse the existing mechanism that ioslaves use to
send meta-data back to applications. What is this requirement? We
simply state that any meta-data whose key starts with the prefix
"_kio_internal_" is treated as internal meta-data and handled
separately from the regular meta-data container that holds values
slated to be sent to applications.

You can read the details of how this is supposed to work in the
attached patch, or simply in the changes I made to the DESIGN.metadata
document, which is included with the patch.

Comments, objections, suggestions are all welcome...
[Attachment: kio_internal_metadata.patch]

Index: kio/DESIGN.metadata
===================================================================
--- kio/DESIGN.metadata	(revision 1119700)
+++ kio/DESIGN.metadata	(working copy)
@@ -5,7 +5,23 @@
 the behavior of a slave and is usally protocol dependent. MetaData
 consists of two strings: a "key" and a "value".
 
+Any meta-data whose "key" starts with the prefix "_kio_internal_[0-1]_"
+is marked as internal meta-data and will not be accessible at the
+application level. Internal meta-data key/value pairs are simply stored
+and sent back to all ioslaves of the same type, depending on the [0-1]
+flag.
+
+The flag in the internal meta-data prefix indicates the scope of
+applicability. If "1" is used, the meta-data is re-sent to all ioslaves
+of the same type. If "0" is used, the meta-data is only re-sent to
+ioslaves that are not only of the same type but are also tasked to
+retrieve data from the very same host as the one that originally sent
+the internal meta-data.
+
+When internal meta-data values are re-sent to the ioslaves, the prefix
+used to mark them internal is removed from the key name. Also note that
+internal meta-data is internal to KIO and is not accessible at the
+application level.
+
 The following keys are currently in use:
 
 Key             Value(s)        Description

Index: kio/kio/job_p.h
===================================================================
--- kio/kio/job_p.h	(revision 1119700)
+++ kio/kio/job_p.h	(working copy)
@@ -55,6 +55,7 @@
     Job* m_parentJob;
     int m_extraFlags;
     MetaData m_incomingMetaData;
+    MetaData m_internalMetaData;
     MetaData m_outgoingMetaData;
 
     inline KIO::JobUiDelegate *ui() const

Index: kio/kio/job.cpp
===================================================================
--- kio/kio/job.cpp	(revision 1119700)
+++ kio/kio/job.cpp	(working copy)
@@ -588,7 +588,15 @@
 void SimpleJob::slotMetaData( const KIO::MetaData &_metaData )
 {
     Q_D(SimpleJob);
-    d->m_incomingMetaData += _metaData;
+
+    QMapIterator<QString,QString> it (_metaData);
+    while (it.hasNext()) {
+        it.next();
+        if (it.key().startsWith(QLatin1String("_kio_internal_"), Qt::CaseInsensitive))
+            d->m_internalMetaData.insert(it.key(), it.value());
+        else
+            d->m_incomingMetaData.insert(it.key(), it.value());
+    }
 }
 
 void SimpleJob::storeSSLSessionFromJob(const KUrl &redirectionURL)

Index: kio/kio/scheduler_p.h
===================================================================
--- kio/kio/scheduler_p.h	(revision 1119700)
+++ kio/kio/scheduler_p.h	(working copy)
@@ -36,6 +36,7 @@
     // remove slave from keeper
     bool removeSlave(KIO::Slave *slave);
     QList<KIO::Slave *> allSlaves() const;
+    QList<KIO::Slave *> slavesFor(const QString &host) const;
 
 private:
     void scheduleGrimReaper();
@@ -146,7 +147,8 @@
     KIO::Slave *createSlave(const QString &protocol, KIO::SimpleJob *job, const KUrl &url);
     bool removeSlave (KIO::Slave *slave);
     QList<KIO::Slave *> allSlaves() const;
-    ConnectedSlaveQueue m_connectedSlaveQueue;
+    void updateSlaveConfigFor(const QString &host);
+    ConnectedSlaveQueue m_connectedSlaveQueue;
 
 private slots:
     // start max one (non-connected) job and return

Index: kio/kio/scheduler.cpp
===================================================================
--- kio/kio/scheduler.cpp	(revision 1119700)
+++ kio/kio/scheduler.cpp	(working copy)
@@ -133,6 +133,10 @@
     return m_idleSlaves.values();
 }
 
+QList<Slave *> SlaveKeeper::slavesFor(const QString &host) const
+{
+    return m_idleSlaves.values(host);
+}
 
 void SlaveKeeper::scheduleGrimReaper()
 {
@@ -570,7 +574,16 @@
     return ret;
 }
 
+void ProtoQueue::updateSlaveConfigFor(const QString &host)
+{
+    QList<Slave *> slaveList (m_slaveKeeper.slavesFor(host));
+    Q_FOREACH(Slave *slave, slaveList) {
+        slave->setConfig(SlaveConfig::self()->configData(slave->protocol(), host));
+    }
+}
+
 //private slot
 void ProtoQueue::startAJob()
 {
@@ -948,17 +961,41 @@
     }
 
     KIO::SimpleJobPrivate *const jobPriv = SimpleJobPrivate::get(job);
+
+    // Preserve all internal meta-data so it can be sent back to the
+    // ioslaves as needed...
+    const KUrl jobUrl = job->url();
+    QMapIterator<QString,QString> it (jobPriv->m_internalMetaData);
+    while (it.hasNext()) {
+        it.next();
+        if (it.key().startsWith(QLatin1String("_kio_internal_0_"), Qt::CaseInsensitive)) {
+            SlaveConfig::self()->setConfigData(jobUrl.protocol(), jobUrl.host(), it.key().mid(16), it.value());
+        } else if (it.key().startsWith(QLatin1String("_kio_internal_1_"), Qt::CaseInsensitive)) {
+            SlaveConfig::self()->setConfigData(jobUrl.protocol(), QString(), it.key().mid(16), it.value());
+        }
+    }
+
     // make sure that we knew about the job!
     Q_ASSERT(jobPriv->m_schedSerial);
 
     ProtoQueue *pq = m_protocols.value(jobPriv->m_protocol);
     pq->removeJob(job);
 
     if (slave) {
+        // Reconfigure the slave that sent the original internal meta-data
+        // and all other idle slaves for the same host...
+        if (jobPriv->m_internalMetaData.count()) {
+            kDebug(7006) << "Updating already running slaves with new configuration data";
+            slave->setConfig(SlaveConfig::self()->configData(slave->protocol(), slave->host()));
+            m_protocols.value(slave->protocol())->updateSlaveConfigFor(slave->host());
+        }
         slave->setJob(0);
         slave->disconnect(job);
     }
     jobPriv->m_schedSerial = 0; // this marks the job as unscheduled again
     jobPriv->m_slave = 0;
+
+    // Clear the internal meta-data container since its values have
+    // already been taken care of above...
+    jobPriv->m_internalMetaData.clear();
 }
 
 // static