List:       wget
Subject:    Rejecting recursive HTML download
From:       Joerg Behrend <jbe@MI.Uni-Koeln.DE>
Date:       1998-07-06 9:24:08

After browsing a WWW page with Netscape, I often want to save it exactly
as it appears in the browser (including images).

For this, I am using the command

wget -r -l 1 -R html $url
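
As I read the manual, the options do the following (my reading, so please
correct me if I have misread it):

    # -r       recursive retrieval
    # -l 1     limit the recursion depth to one level
    # -R html  reject files matching "html"; they seem to be fetched
    #          anyway so their links can be followed, and deleted afterwards
    wget -r -l 1 -R html $url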

Is there a better alternative?

Unfortunately, it then downloads all the referenced HTML pages as well.
I have read the note in the info page that "Wget must load all the HTMLs to
know where to go at all", but I don't understand why this is necessary for
the purpose described above. It would be very nice to have an option that
saves exactly, and only, the information needed to reproduce a page in
Netscape.
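
To make it concrete, here is a sketch of the kind of invocation I have in
mind; I do not know whether any wget version actually offers such options,
and the names --page-requisites and --convert-links are only meant to
illustrate the idea:

    # hypothetical invocation: fetch only the page itself plus the inline
    # images it needs, and rewrite links so the saved copy displays
    # locally as in Netscape
    wget --page-requisites --convert-links $url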


Joerg Behrend
Mathematisches Institut Universitaet Koeln
