<br><br><div class="gmail_quote">On Fri, Sep 23, 2011 at 10:18 AM, James Colby <span dir="ltr"><<a href="mailto:jcolby@gmail.com">jcolby@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div><div></div><div class="h5"><br><br><div class="gmail_quote">On Thu, Sep 22, 2011 at 10:49 PM, Duncan <span dir="ltr"><<a href="mailto:1i5t5.duncan@cox.net" target="_blank">1i5t5.duncan@cox.net</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
James Colby posted on Thu, 22 Sep 2011 15:03:57 -0400 as excerpted:<br>
<div><div></div><div><br>
> On Thu, Sep 22, 2011 at 2:30 PM, Nikos Chantziaras <<a href="mailto:realnc@arcor.de" target="_blank">realnc@arcor.de</a>><br>
> wrote:<br>
><br>
>> On 09/22/2011 09:23 PM, James Colby wrote:<br>
>><br>
>>> List Members -<br>
>>><br>
>>> I am trying to use dolphin to copy a rather large directory (approx.<br>
>>> 22 GB, 2000 files, 500 directories) from my laptop to a server using<br>
>>> the fish:// protocol. When I first attempted the copy it ran for a<br>
>>> few hours and then died due to a network disconnect. Now when I try<br>
>>> to do the copy again, I am getting an error saying that the directory<br>
>>> already exists, which is true, as it looks like dolphin created the<br>
>>> directory structure first, and then started copying the files. Does<br>
>>> anyone know of a way to resume this copy or is it possible to tell<br>
>>> dolphin to just skip files and directories that already exist at the<br>
>>> destination? If this is not possible with dolphin does anyone have a<br>
>>> suggestion as to a better way to do this copy?<br>
>>><br>
>>><br>
>> Use rsync. It was made for exactly this kind of job.<br>
>><br>
>><br>
> Sounds like we have a consensus. :) I've always been a little<br>
> intimidated by rsync but I guess now is the time to man up (man rsync)<br>
> </end bad pun><br>
<br>
</div></div>Yet another vote for rsync. Note that once you have ssh set up, as it<br>
appears you already do, rsync over ssh is quite easy indeed.<br>
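The whole thing is really a one-liner; something like this (host and paths made up, of course):

```shell
# -a  archive mode: recurse and preserve permissions, times, and symlinks
# -v  verbose: list each file as it is transferred
rsync -av /path/to/src/ user@server:/path/to/dest/

# Note the trailing slash on the source: "src/" copies the *contents* of
# src into dest, while "src" (no slash) would create dest/src.
```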
<br>
My personal usage? I run gentoo on both my 64-bit workstation and my 32-<br>
bit-only netbook. Gentoo is of course build-from-source, but since my<br>
workstation is 64-bit while my netbook is 32-bit, I can't just build once<br>
for both, I have to build separately for each. And the netbook being a<br>
netbook, I'm not particularly eager to do my building on it. So I have a<br>
32-bit build-image chroot on my 64-bit machine, set up specifically to<br>
handle the building and maintenance for the 32-bit netbook.<br>
<br>
Originally, I copied the image to USB thumbdrive (while it was plugged<br>
into the workstation, of course), booted the thumbdrive on the netbook,<br>
then copied from it to the netbook's harddrive so I could native-boot<br>
from the netbook's own drive.<br>
<br>
To that point I had never used ssh before, as I had always had only the<br>
single machine, so after getting the netbook installed and running in<br>
general, I learned how to set up sshd securely and did so, on the netbook<br>
(while still using the thumbdrive to sync between the workstation and the<br>
netbook, rsyncing twice per transfer, once to the thumbdrive from the<br>
workstation, once from the thumbdrive to the netbook). Then I learned<br>
the ssh client side setup and did that on the workstation.<br>
<br>
It was then only a matter of a couple adjustments to the rsync scripts<br>
that had been syncing to and from the thumbdrive, to use direct rsync on<br>
the LAN, instead.<br>
<br>
Learning how to handle ssh for both server and client was *MUCH* harder<br>
than learning, then scripting, the rsync solution, first to/from the<br>
thumbdrive, then script tweaked only slightly to use direct rsync via ssh<br>
between the hosts themselves, instead of the thumbdrive "sneakernet".<br>
<br>
So if you already have ssh set up, one end server, one end client, as it<br>
sounds like you do, learning to handle rsync over the existing ssh setup<br>
should be no sweat at all, comparatively.<br>
<br>
FWIW the rsync manpage is a bit of a monster, but most of it is dealing<br>
with exclusion file formats, etc., that you can probably do without, at<br>
least at first. There are a few command line switches that are useful<br>
for handling symlinks if you're dealing with them (as I was), but other<br>
than that, it's pretty basic.<br>
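For reference, the symlink switches I mean, straight from the manpage:

```shell
#   -l, --links          copy symlinks as symlinks (already implied by -a)
#   -L, --copy-links     follow symlinks and copy what they point to instead
#   --copy-unsafe-links  follow only symlinks that point outside the copied tree
#   --safe-links         ignore symlinks that point outside the copied tree
```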
<br>
The one hint that I WILL leave you with is to *ALWAYS* use the --dry-run<br>
switch the first time around. In my scripts, I actually made that the<br>
default, and then go back and run the command again, adding a LIVE<br>
parameter, to actually run it. That way, if you screwed up the command,<br>
you can catch the error from the output, before you actually do something<br>
stupid like switch the source and destination, or forget to mount the<br>
source so it copies an empty dir, or some such. Dry-run first will SEEM<br>
to take a lot of time and might SEEM a waste, because of all the drive<br>
seeking it does, but in reality it's caching all those files, so the<br>
system doesn't have to work nearly as hard doing the compares the second<br>
time around; the data is already cached, and most of the time the dry<br>
run took is simply skipped on the real run. (Tho you may wish to do<br>
subdirs or other chunks of around the same size as your memory on the<br>
source machine, rather than the whole 22 gig at once, unless you have<br>
22+ gig of RAM for it to cache to, of course.) Using dry-run has SAVED<br>
MY BACON a few times when I fat-fingered or fat-neuroned something, so<br>
I'd definitely recommend ALWAYS using it first. Far better safe than<br>
sorry, as they say.<br>
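The pattern my scripts use boils down to something like this (a sketch only; the paths and hostname are placeholders you'd replace with your own):

```shell
#!/bin/sh
# Dry-run by default; pass LIVE as the first argument to do a real run.
build_rsync_args() {
    if [ "$1" = "LIVE" ]; then
        echo "-av"
    else
        echo "-av --dry-run"
    fi
}

# The real invocation would then be (placeholders, not real paths):
#   rsync $(build_rsync_args "$1") /path/to/src/ desthost:/path/to/dest/
```

That way the safe behavior is what you get when you forget the extra parameter, not the other way around.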
<font color="#888888"><br>
--<br>
Duncan - List replies preferred. No HTML msgs.<br>
"Every nonfree program has a lord, a master --<br>
and if you use the program, he is your master." Richard Stallman<br>
</font><div><div></div><div><br>
___________________________________________________<br>
This message is from the kde mailing list.<br>
Account management: <a href="https://mail.kde.org/mailman/listinfo/kde" target="_blank">https://mail.kde.org/mailman/listinfo/kde</a>.<br>
Archives: <a href="http://lists.kde.org/" target="_blank">http://lists.kde.org/</a>.<br>
More info: <a href="http://www.kde.org/faq.html" target="_blank">http://www.kde.org/faq.html</a>.<br>
</div></div></blockquote></div><br></div></div><div>Duncan - </div><div><br></div><div>Thanks for your answer. I ran my first rsync session overnight last night (rsync -av &lt;src_dir&gt; &lt;dest_host&gt;:&lt;dest_dir&gt;) and it seemed to work OK. I was a little disappointed that it only copied about 2 GB over 6 hours. I am not sure if that is due to network congestion or rsync overhead, as the day before I was able to copy approx. 7 GB over the same 6 hours.</div>
<div><br></div><div>Thanks for the --dry-run tip, I'll be sure to use that in the future.</div><div><br></div><div>Regards,</div><div>James</div>
</blockquote></div><br><div>Another question that I can't seem to find the answer to: is it possible to set up rsync to auto-retry on failure? The resume option works great, but I would like rsync to retry automatically after a network failure. The other night my connection dropped out shortly after I started my rsync session, so an entire night was wasted; I would like to avoid that in the future.</div>
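One idea I'm considering is just wrapping the call in a retry loop, something like this (an untested sketch; the attempt limit, delay, and paths are arbitrary placeholders):

```shell
#!/bin/sh
# Re-run a command until it exits 0, up to a maximum number of attempts,
# sleeping between tries.
retry() {
    max=$1; delay=$2; shift 2
    n=0
    until "$@"; do
        n=$((n + 1))
        if [ "$n" -ge "$max" ]; then
            echo "giving up after $n failed attempts" >&2
            return 1
        fi
        echo "attempt $n failed; retrying in ${delay}s" >&2
        sleep "$delay"
    done
}

# --partial keeps half-transferred files so the next attempt can resume them:
#   retry 10 60 rsync -av --partial /path/to/src/ desthost:/path/to/dest/
```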
<div><br></div><div>Thanks,</div><div>James</div>