Dolphin file copy question

Duncan 1i5t5.duncan at cox.net
Fri Sep 23 03:49:29 BST 2011


James Colby posted on Thu, 22 Sep 2011 15:03:57 -0400 as excerpted:

> On Thu, Sep 22, 2011 at 2:30 PM, Nikos Chantziaras <realnc at arcor.de>
> wrote:
> 
>> On 09/22/2011 09:23 PM, James Colby wrote:
>>
>>> List Members -
>>>
>>> I am trying to use dolphin to copy a rather large directory (approx.
>>> 22 GB, 2000 files, 500 directories) from my laptop to a server using
>>> the fish:// protocol.  When I first attempted the copy it ran for a
>>> few hours and then died due to a network disconnect.  Now when I try
>>> to do the copy again, I am getting an error saying that the directory
>>> already exists, which is true, as it looks like dolphin created the
>>> directory structure first, and then started copying the files.  Does
>>> anyone know of a way to resume this copy or is it possible to tell
>>> dolphin to just skip files and directories that already exist at the
>>> destination?  If this is not possible with dolphin does anyone have a
>>> suggestion as to a better way to do this copy?
>>>
>>>
>> Use rsync.  It was made for exactly this kind of job.
>>
>>
> Sounds like we have a consensus.  :)  I've always been a little
> intimidated by rsync but I guess now is the time to man up (man rsync) 
> </end bad pun>

Yet another vote for rsync.  Note that once you have ssh set up, as it 
appears you already do, rsync over ssh is quite easy indeed.
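
For the copy in question, something like the following (the hostname and 
paths are placeholders, of course) should pick up roughly where dolphin 
left off, since by default rsync skips files that already match at the 
destination by size and timestamp:

    # -a = archive mode (recursive; preserves perms, times, symlinks)
    # --partial = keep partially transferred files across disconnects
    rsync -av --partial --progress /path/to/dir/ user@server:/path/to/dir/

Mind the trailing slash on the source: with it, rsync copies the 
directory's contents; without it, it copies the directory itself into 
the destination.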

My personal usage?  I run Gentoo on both my 64-bit workstation and my 
32-bit-only netbook.  Gentoo is of course built from source, but since my 
workstation is 64-bit while my netbook is 32-bit, I can't just build once 
for both, I have to build separately for each.  And the netbook being a 
netbook, I'm not particularly eager to do my building on it.  So I have a 
32-bit build-image chroot on my 64-bit machine, set up specifically to 
handle the building and maintenance for the 32-bit netbook.
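
For what it's worth, entering such a chroot from the 64-bit host looks 
roughly like this (the path is illustrative, not necessarily mine; 
linux32 simply makes uname report a 32-bit arch inside the chroot):

    # bind the host's /proc and /dev into the 32-bit image, then enter it
    mount --bind /proc /mnt/gentoo32/proc
    mount --rbind /dev /mnt/gentoo32/dev
    linux32 chroot /mnt/gentoo32 /bin/bash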

Originally, I copied the image to a USB thumbdrive (while it was plugged 
into the workstation, of course), booted the thumbdrive on the netbook, 
then copied from it to the netbook's hard drive so I could native-boot 
from the netbook's own drive.

To that point I had never used ssh before, as I had always had only the 
single machine.  So after getting the netbook installed and running in 
general, I learned how to set up sshd securely and did so on the netbook 
(while still using the thumbdrive to sync between the workstation and 
the netbook, rsyncing twice per transfer: once from the workstation to 
the thumbdrive, once from the thumbdrive to the netbook).  Then I 
learned the ssh client-side setup and did that on the workstation.
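
"Securely" mostly came down to a few sshd_config knobs.  Illustrative 
only, not necessarily my exact config:

    # /etc/ssh/sshd_config
    PermitRootLogin no           # no direct root logins
    PasswordAuthentication no    # key-based auth only; set up
                                 # authorized_keys before enabling this
    AllowUsers youruser          # limit which accounts may log in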

It was then only a matter of a couple of adjustments to the rsync 
scripts that had been syncing to and from the thumbdrive, to use direct 
rsync over the LAN instead.
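
The adjustment really is that small.  Schematically (paths and hostname 
made up for illustration):

    # before: staging through the mounted thumbdrive
    rsync -aH --delete /mnt/image32/ /mnt/thumb/image32/
    # after: one direct hop; rsync uses ssh as its transport whenever
    # the destination is written as host:path
    rsync -aH --delete /mnt/image32/ netbook:/mnt/image32/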

Learning how to handle ssh for both server and client was *MUCH* harder 
than learning, then scripting, the rsync solution: first to/from the 
thumbdrive, then with the script tweaked only slightly to use direct 
rsync via ssh between the hosts themselves, instead of the thumbdrive 
"sneakernet".

So if you already have ssh set up, server on one end and client on the 
other, as it sounds like you do, learning to handle rsync over the 
existing ssh setup should be no sweat at all, comparatively.

FWIW, the rsync manpage is a bit of a monster, but most of it deals 
with exclusion file formats, etc., that you can probably do without, at 
least at first.  There are a few command line switches that are useful 
for handling symlinks if you're dealing with them (as I was), but other 
than that, it's pretty basic.
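
For the record, the symlink switches I mean are along these lines (see 
the manpage for the full story):

    rsync -a src/ dest/               # -a implies -l: symlinks stay symlinks
    rsync -aL src/ dest/              # -L: copy whatever the links point to
    rsync -a --safe-links src/ dest/  # skip links pointing outside the tree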

The one hint that I WILL leave you with is to *ALWAYS* use the --dry-run 
switch the first time around.  In my scripts, I actually made that the 
default, and then go back and run the command again, adding a LIVE 
parameter, to actually run it.  That way, if you screwed up the command, 
you can catch the error from the output before you actually do something 
stupid like swap the source and destination, or forget to mount the 
source so it copies an empty dir, or some such.

A dry run first will SEEM to take a lot of time and might SEEM a waste, 
because of all the drive seeking it does, but in reality it's caching 
all those files, so the system doesn't have to work nearly as hard doing 
the compares the second time around; since the data is already cached by 
then, you'd have spent most of that time anyway had you skipped the dry 
run.  (Tho you may wish to do subdirs or other chunks of around the same 
size as the memory on the source machine, rather than the whole 22 gig 
at once, unless you have 22+ gig of RAM for it to cache to, of course.)  
Using dry-run has SAVED MY BACON a few times, when I fat-fingered or fat-
neuroned something, so I'd definitely recommend ALWAYS using it first.  
Far better safe than sorry, as they say.
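
A minimal sketch of that dry-run-by-default pattern (illustrative, not 
my actual script; the paths and hostname are placeholders):

    #!/bin/sh
    # rsync wrapper sketch: dry-run by default; pass LIVE as the
    # first argument to really copy.
    DRY="--dry-run"
    [ "$1" = "LIVE" ] && DRY=""
    rsync $DRY -av --partial /path/to/source/ user@server:/path/to/dest/

Run it bare to preview, then run it again with LIVE once the output 
looks sane.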

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman
