KIO: Mass Copy of Files from Different Sources to Different Destinations

Dawit A. adawit at kde.org
Thu Sep 17 16:10:43 BST 2009


Updated patch. 

* Replace the use of the meta-data container with a new member variable in the
SimpleJobPrivate class.
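
In code terms the change amounts to roughly the sketch below; the member name
is made up here for illustration, the actual name and type in the attached
kio.patch may differ:

// Sketch only -- an illustrative member added to SimpleJobPrivate;
// the real declaration in the patch may look different.
#include <kurl.h>

class SimpleJobPrivate
{
public:
    // URL of the request this job is paired with (e.g. the GET side of a
    // FileCopyJob whose PUT this job is), replacing the old "paired-request"
    // meta-data entry. The scheduler reads it to reserve a slave slot.
    KUrl m_pairedRequestUrl;
};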

On Wednesday 16 September 2009 18:44:49 Dawit A. wrote:
> Okay, I have run some tests on my approach to address the potential
>  deadlock issue when scheduling high-level jobs. The idea is as follows:
> 
> 1.) A FileCopyJob is created as a result of a call such as KIO::file_copy
>  by the application.
> 
> 2.) Step #1 results in the creation of a "PUT" job.
> 
> 3.) A meta-data entry named "paired-request" containing the source URL is
>  set on the "PUT" job from step #2, in FileCopyJob::startBestCopyMethod.
> 
> 4.) When the scheduler is asked to create a slave for this "PUT" job, it
>  will save any url specified in the "paired-request" meta-data in a reserve
>  list.
> 
> 5.) The scheduler always takes into account the number of items in the
> reserve list before attempting to assign an ioslave to a job.
> 
> The above process ensures that no two "PUT" jobs are assigned an ioslave
> before a "PUT/GET" pair.
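
To make steps 3-5 above concrete, here is a rough, standalone sketch of the
reserve-list bookkeeping; the class and method names are invented for
illustration and are not the actual scheduler code in the patch:

// Sketch only: invented names, not the KIO scheduler code from the patch.
#include <QStringList>

class PairReservations
{
public:
    // Step 4: a "PUT" job carrying a paired-request URL reserves a slot
    // for its "GET" counterpart.
    void reserve(const QString &pairedUrl) { m_reserved.append(pairedUrl); }

    // Step 5: decide whether a job may be handed an ioslave right now.
    bool tryAssign(const QString &jobUrl, int freeSlots)
    {
        const int idx = m_reserved.indexOf(jobUrl);
        if (idx >= 0) {
            // This is a paired "GET": a slot was held back for it.
            if (freeSlots > 0) {
                m_reserved.removeAt(idx);
                return true;
            }
            return false;
        }
        // Any other job must leave room for every outstanding reservation.
        return freeSlots > m_reserved.count();
    }

private:
    QStringList m_reserved; // the "reserve list" from step 4
};

In other words, an ioslave is never handed to an unrelated job if that would
leave a paired "GET" with nothing to run on.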
> 
> I tested it with as many scenarios as I could think of, but it is entirely
>  possible I completely missed something. Anyhow, all feedback is welcome...
> 
> NOTE: I do not like to use the meta-data system for this, but this is a quick
> proof of concept. Perhaps there is a better way to do this...
> 
> On Wednesday 16 September 2009 13:22:41 Dawit A. wrote:
> > David,
> >
> > Yes, now that I understand how the high-level jobs work, I fully see
> >  your concern about the potential for a deadlock.
> >
> > Right now I am working on a solution to prevent this deadlock condition
> >  from occurring in the scheduler. There is a way to do this by pairing the
> >  requests from high-level jobs so that the scheduler can take that into
> >  account when it is scheduling jobs.
> >
> > More about that once I refine and test the solution to see whether or not
> >  it is viable and does solve the deadlock problem...
> >
> > On Wednesday 16 September 2009 11:52:44 David Faure wrote:
> > > On Tuesday 08 September 2009, Dawit A. wrote:
> >
> > [snipped]
> >
> > > >  Can you give an example of how to trigger this
> > > >  deadlock? I suppose I can simply start copying files from remote
> > > >  locations (sftp/ftp) until the max instances limit is reached, no?
> > >
> > > Maybe. I admit I didn't actually try, but it seems logical to me, with
> > > the above reasoning. To get many filecopyjobs started, I recommend
> > > copying a whole directory of files. That gives time to start another
> > > directory copy while it's happening. Each file being copied will start
> > > a FileCopyJob.
> >
> > Just for clarification, the deadlock condition can only occur if both
> > ends of the high-level job are remote URLs, correct? That is, both the
> > PUT and GET operations must be remote; otherwise they are handled
> > differently and the scheduling does not come into the equation... Or did
> > I get that wrong?
> 
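
For what it's worth, a test along the following lines (placeholder hosts and
paths, kdelibs 4 style boilerplate, not part of the patches) should exercise
the scenario David describes, since both ends are remote and each FileCopyJob
therefore needs both a GET and a PUT slave:

// Reproduction sketch with placeholder URLs.
#include <KApplication>
#include <KCmdLineArgs>
#include <KLocalizedString>
#include <KUrl>
#include <kio/job.h>

int main(int argc, char **argv)
{
    KCmdLineArgs::init(argc, argv, "pairtest", "", ki18n("pairtest"), "0.1");
    KApplication app;

    // Start more remote-to-remote copies than the per-protocol instance
    // limit allows, so several FileCopyJobs compete for ioslaves at once.
    for (int i = 0; i < 10; ++i) {
        KIO::file_copy(KUrl(QString("sftp://host1/src/file%1").arg(i)),
                       KUrl(QString("ftp://host2/dst/file%1").arg(i)));
    }

    return app.exec();
}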
-------------- next part --------------
A non-text attachment was scrubbed...
Name: kio.patch
Type: text/x-patch
Size: 16352 bytes
Desc: not available
URL: <http://mail.kde.org/pipermail/kde-core-devel/attachments/20090917/37db9320/attachment.bin>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: kioslaves.patch
Type: text/x-patch
Size: 2396 bytes
Desc: not available
URL: <http://mail.kde.org/pipermail/kde-core-devel/attachments/20090917/37db9320/attachment-0001.bin>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: kprotocolinfo.patch
Type: text/x-patch
Size: 3346 bytes
Desc: not available
URL: <http://mail.kde.org/pipermail/kde-core-devel/attachments/20090917/37db9320/attachment-0002.bin>
