Grabbing web sites
john_82 at tiscali.co.uk
Sun Apr 9 20:34:38 BST 2006
Have solved it myself. It seems the httrack install was at fault: I used an
RPM for SUSE 10 and it is working now. I would still like to know why the
source install won't work, though.
On Sunday 09 April 2006 20:12, John wrote:
> Has anybody managed to get these to work? I've installed from source.
> Khttrack doesn't run even though it shows up in the menu; it tries but just
> times out. I have no luck running httrack either. The desktop files
> just time out.
> httrack-3.40-2.tar.gz from
> Khttrack-0.10.tar.bz2 from
> I just did configure, make, make install and make clean on each. There were
> several warnings about unused items when building httrack. Newer versions
> will not compile.
> On Thursday 06 April 2006 07:44, John Layt wrote:
> > On Wednesday 05 April 2006 17:30, Avuton Olrich wrote:
> > > On 4/4/06, Malcolm Fitzgerald <scrpt at catholic.org> wrote:
> > > > How can I grab a web site? Is there a tool for this job?
> > > > --
> > > > Malcolm Fitzgerald
> > >
> > > One way is to use the 'Archive Web Page' tool in Konqueror; it's a
> > > plugin, IIRC.
> > > --
> > > avuton
> > That just saves the current page intact, which is good if that's what you
> > want. If you want to archive a whole multi-page website, then
> > httrack (http://www.httrack.com/) and its KDE GUI khttrack
> > (http://www.nongnu.org/khttrack/) are the tools for you.
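[For context, the core step a site mirrorer like httrack performs is: fetch a page, extract its links, and queue same-site pages for download. A minimal sketch of that link-extraction step, using Python's standard library and a hard-coded page instead of a real HTTP fetch (the example URL and helper names are illustrative, not httrack's actual internals):]

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collect href values from all <a> tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def same_site_links(base_url, html):
    """Resolve hrefs against base_url and keep only same-host pages,
    the way a mirrorer restricts its crawl to one site."""
    parser = LinkExtractor()
    parser.feed(html)
    host = urlparse(base_url).netloc
    resolved = (urljoin(base_url, href) for href in parser.links)
    return sorted({u for u in resolved if urlparse(u).netloc == host})

# Demo page: one internal link, one external link that should be skipped.
page = '<a href="/about.html">About</a> <a href="http://other.example/">x</a>'
print(same_site_links("http://www.example.com/index.html", page))
# -> ['http://www.example.com/about.html']
```

[A real mirrorer then downloads each queued page, rewrites its links to point at the local copies, and repeats until no new same-site pages remain.]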
KDE 3.4.2
This message is from the kde mailing list.
Account management: https://mail.kde.org/mailman/listinfo/kde.
More info: http://www.kde.org/faq.html.