
List:       kde-core-devel
Subject:    Re: more thoughts about the new kio
From:       Stephan Kulow <coolo () kde ! org>
Date:       2000-01-28 19:02:59

David Faure wrote:
> 
> This is about kio in the make_it_cool_branch, so mainly
> for Stephan and Waldo.
> 
> But I thought I would make it possible for anybody else to step in
> and contribute, instead of keeping the discussions private.
> [skip if you don't care about kio internals]
> 
> Currently DeleteJob and CopyJob constructors do :
> 
> CopyJob::CopyJob( const KURL::List& src, ...
> {
>     for (KURL::List::ConstIterator it = src.begin(); it != src.end(); it++) {
>         ListJob *job = listRecursive(*it);
>         connect(job, SIGNAL(entries( KIO::Job *,
>                                      const KIO::UDSEntryList& )),
>                 SLOT( slotEntries( KIO::Job*,
>                                    const KIO::UDSEntryList& )));
>         addSubjob(job);
>     }
> 
> and slotEntries adds the entries to the private members "dirs" and "files."
> 
> The problem is that this mixes everything up when there is more than one
> src url. All listings get merged into dirs and files.
> 
> I have thought about a quick solution : keeping a dict with
> job <-> struct { dirs, files }, to correctly associate the data coming in.
> Roughly:
>     typedef struct { UDSEntryList files; UDSEntryList dirs; long int totalSize; } listing;
>     typedef QDict< Job, listing > jobListingDict;
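A stand-alone sketch of that per-job association might look like the
following, with std::map standing in for the Qt dictionary and stripped-down
stand-ins for UDSEntry and the job classes (all names are illustrative, not
actual branch code):

    #include <map>
    #include <string>
    #include <vector>

    struct UDSEntry { std::string name; bool isDir; long size; };
    typedef std::vector<UDSEntry> UDSEntryList;

    class Job;                              // one listing subjob per src url

    struct Listing {
        UDSEntryList files;
        UDSEntryList dirs;
        long totalSize;
        Listing() : totalSize(0) {}
    };

    typedef std::map<Job *, Listing> JobListingDict;

    class CopyJob {
    public:
        // Key the incoming entries on the subjob that emitted them, so each
        // src url keeps its own dirs/files/totalSize instead of being merged.
        void slotEntries(Job *job, const UDSEntryList &entries) {
            Listing &l = m_listings[job];
            for (UDSEntryList::const_iterator it = entries.begin();
                 it != entries.end(); ++it) {
                if (it->isDir)
                    l.dirs.push_back(*it);
                else
                    l.files.push_back(*it);
                l.totalSize += it->size;
            }
        }
    private:
        JobListingDict m_listings;
    };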
> 
> But this means that ONE job still deals with N copyings, so :
> 
> 1 - the progress information is going to be weird.
> emit TotalDirs(dirs.count()) will be done for each base url ?!
> It seems that the design is broken in this case.
> Or each subjob will handle the progress information itself (a dialog box per
> src url seems right anyway), but then the caller sees nothing.
> [ We need help for the progress info stuff ! Matt, where are you ? :-) ]
> 
> 2 - In fact it goes further than that. Even the "state" (listing or deleting)
> is subjob-dependent. Unless we want to wait for all src urls to be listed,
> and then move on to copying/deleting, but that seems stupid. We could add
> the state to the struct... but it is getting hairy for nothing.
> 
> It really looks like we should split those jobs.
> 
> Either by having a metajob that does nothing but spawn N DeleteJobs or
> CopyJobs, one per src url, and wait for their completion. This way we can
> still return a Job * to the caller.
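A rough stand-alone sketch of that meta-job idea, without the Qt signal/slot
machinery, just the bookkeeping (MultiCopyJob and all other names here are
hypothetical):

    #include <string>
    #include <vector>

    class Job {                              // minimal stand-in for KIO::Job
    public:
        virtual ~Job() {}
    };

    // Stand-in CopyJob: handles exactly one source URL.
    class CopyJob : public Job {
    public:
        explicit CopyJob(const std::string &src) : m_src(src) {}
    private:
        std::string m_src;
    };

    // Meta-job: does nothing itself except spawn one CopyJob per source URL
    // and report completion once every child has finished.  The caller still
    // gets a single Job* back.
    class MultiCopyJob : public Job {
    public:
        explicit MultiCopyJob(const std::vector<std::string> &srcs) {
            for (size_t i = 0; i < srcs.size(); ++i)
                m_children.push_back(new CopyJob(srcs[i]));
            m_pending = m_children.size();
        }
        ~MultiCopyJob() {
            for (size_t i = 0; i < m_children.size(); ++i)
                delete m_children[i];
        }
        // Would be hooked up to each child's result() signal in real code.
        void childFinished() {
            --m_pending;
            if (m_pending == 0) {
                // real code: emit result(this) so the caller sees one job
            }
        }
    private:
        std::vector<CopyJob *> m_children;
        size_t m_pending;
    };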
> 
> Or we just remove the KURL::List methods from the API and let the caller
> create N jobs, one per src url. In the case of konqy it's not really a
> problem, since they are distinct operations anyway. One may fail while the
> other may not...
> Copying N src urls is very much like starting a copy for each by hand.
> 
> Is there a case I am overlooking ?
> Any other app that might use the copy with N src urls and would have to
> treat it as a single job ?
> 
> We are globally moving from huge loops of synchronous calls (copy, wait,
> copy, wait) towards jobs made of smaller subjobs, in a very asynchronous
> way. This changes quite a lot of things :-)
> 
> Like : how do we sort the listings so that we create dirs before files ?
> (or for delete, so that we delete nested dirs before toplevel dirs) ?
> Since we append in whatever order the subjobs send the listings... it's
> rather messy :-) Or is the order enforced by the way listRecursive is
> implemented ?
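If the order is not guaranteed by listRecursive, one option would be to sort
the collected dirs by path depth before processing them; a minimal
stand-alone sketch (not what the branch does today):

    #include <algorithm>
    #include <string>
    #include <vector>

    // Depth = number of path separators; "/a/b/c" is deeper than "/a/b".
    static long depth(const std::string &path) {
        return std::count(path.begin(), path.end(), '/');
    }

    static bool shallowerFirst(const std::string &a, const std::string &b) {
        return depth(a) < depth(b);
    }

    static bool deeperFirst(const std::string &a, const std::string &b) {
        return depth(a) > depth(b);
    }

    // For copying: create parent dirs before anything inside them.
    void sortForCopy(std::vector<std::string> &dirs) {
        std::sort(dirs.begin(), dirs.end(), shallowerFirst);
    }

    // For deleting: remove nested dirs before their (then empty) parents.
    void sortForDelete(std::vector<std::string> &dirs) {
        std::sort(dirs.begin(), dirs.end(), deeperFirst);
    }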
> 
> Too long mail, sorry.
> 
Well, two things. First, the code in the constructor is just a reminder.
What I wanted to implement is that one job starts when the previous one
finishes: I put them all in a queue in the constructor, and whenever I reach
a finished state, I take the next one from the queue, create a job, switch
to the listdir state, and so on.
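In stand-alone form, that queue-driven approach might look roughly like this
(plain C++ instead of the real signal-driven job code; the states and names
are illustrative):

    #include <queue>
    #include <string>

    class DeleteJob {
    public:
        explicit DeleteJob(const std::queue<std::string> &srcs)
            : m_srcs(srcs), m_state(STATE_IDLE) { startNextSrc(); }

    private:
        enum State { STATE_IDLE, STATE_LISTING, STATE_DELETING, STATE_DONE };

        // Take the next source URL off the queue and start listing it.
        void startNextSrc() {
            if (m_srcs.empty()) { m_state = STATE_DONE; return; }
            m_current = m_srcs.front();
            m_srcs.pop();
            m_state = STATE_LISTING;
            // real code: create a listRecursive() subjob for m_current here
        }

        // Would be called from the subjob's result() signal.
        void slotSubjobFinished() {
            if (m_state == STATE_LISTING) {
                m_state = STATE_DELETING;
                // real code: start deleting what was listed for m_current
            } else if (m_state == STATE_DELETING) {
                startNextSrc();   // one URL fully done, move on to the next
            }
        }

        std::queue<std::string> m_srcs;
        std::string m_current;
        State m_state;
    };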

The other thing is that listRecursive is implemented so that parent dirs are
always listed before subdirectories, as long as you don't mix subjobs, of
course.
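Given that guarantee, deletion can simply walk the collected dirs list back
to front to get nested-before-toplevel order; an illustrative sketch only:

    #include <string>
    #include <vector>

    static void removeDir(const std::string & /*path*/) {
        // stand-in for spawning the real rmdir job
    }

    // dirs is assumed to be in listing order: parents before children, as
    // listRecursive produces it.  Walking back to front therefore removes
    // nested dirs before their (by then empty) parents.
    void deleteDirs(const std::vector<std::string> &dirs) {
        for (std::vector<std::string>::const_reverse_iterator it = dirs.rbegin();
             it != dirs.rend(); ++it)
            removeDir(*it);
    }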

I don't see the point of spawning N jobs at the same time. It's very likely
that these URLs are from the same directory, so more than one job working
there doesn't improve performance (skipping the very unlikely case of
RAID-5 :), so better not even try it.

Greetings, Stephan

-- 
It said Windows 95 or better, so in theory Linux should run it
                                                GeorgeH on /.
