Closed
Bug 40255
Opened 25 years ago
Closed 25 years ago
Execute the download of all links
Categories
(SeaMonkey :: General, enhancement, P3)
SeaMonkey
General
Tracking
(Not tracked)
People
(Reporter: netdragon, Assigned: asa)
Details
I think the browser, on request, should be able to download all the pages on a
given site while the user goes and eats something, etc.
The question is: how will the browser know where one site ends and others
begin? If the browser weren't given limits, it could download the WHOLE
WEB! It might also download the same page more than once.
Obviously, a user would have to limit (a) the number of pages downloaded, (b) how
many levels of links to follow, (c) which domains are allowed, or some combination
of the three. Obviously, you would be able to stop it at any time, and it would
recover gracefully from broken pages.
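For illustration only, here is a rough sketch (in Python, not actual browser code;
every function and parameter name in it is made up) of that kind of bounded,
breadth-first crawl with a page limit, a depth limit, and an allowed-domain list:

# Hypothetical sketch of the bounded crawl described above; not Mozilla code.
from collections import deque
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    """Collects the href targets of <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl_site(start_url, max_pages=100, max_depth=2, allowed_domains=None):
    """Breadth-first crawl limited by page count, link depth, and domain."""
    allowed_domains = allowed_domains or {urlparse(start_url).netloc}
    seen = set()                        # avoids downloading the same page twice
    queue = deque([(start_url, 0)])
    pages = {}
    while queue and len(pages) < max_pages:
        url, depth = queue.popleft()
        if url in seen or urlparse(url).netloc not in allowed_domains:
            continue
        seen.add(url)
        try:
            html = urlopen(url).read().decode("utf-8", errors="replace")
        except Exception:
            continue                    # "recover gracefully from broken pages"
        pages[url] = html
        if depth < max_depth:
            parser = LinkParser()
            parser.feed(html)
            for link in parser.links:
                queue.append((urljoin(url, link), depth + 1))
    return pages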
Predownloaded pages would be stored in a special cache directory and could be
copied to another part of the disk to keep them. All images, etc. would be
downloaded along with the pages, so you could copy a whole site to the hard disk,
with certain restrictions of course (e.g. you couldn't copy CGIs).
Another idea I have is that someone could post a site map file on the site. The
browser could then open this file and download the pages in the order the sitemap
file lists them. The user could even view the sitemap file and select which parts
he/she wants to download. The sitemap file would contain the data to construct a
tree; each node could have a name, description, size info, and URL.
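A sketch of what such a sitemap tree might look like in memory (again hypothetical
Python; the field names are invented and no real sitemap format is implied):

# Hypothetical in-memory form of the sitemap tree described above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SitemapNode:
    name: str                # human-readable title shown in the tree view
    url: str                 # page this node points to
    description: str = ""
    size_bytes: int = 0      # lets the user estimate download cost up front
    children: List["SitemapNode"] = field(default_factory=list)

def selected_urls(node, wanted_names):
    """Collect URLs for the parts of the tree the user ticked."""
    urls = []
    if node.name in wanted_names:
        urls.append(node.url)
    for child in node.children:
        urls.extend(selected_urls(child, wanted_names))
    return urls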
*** This bug has been marked as a duplicate of 40253 ***
Status: UNCONFIRMED → RESOLVED
Closed: 25 years ago
Resolution: --- → DUPLICATE
Updated 20 years ago
Product: Browser → Seamonkey