I recently had to download large files (see post).
Before I used a download helper, I used
curl. It is a standard tool for downloading
files. But there is another standard tool:
wget. Let's see what I find in the
first 10 Google hits about their differences.
|           | curl | wget |
|-----------|------|------|
| Protocols | FTP, FTPS, Gopher, HTTP, HTTPS, SCP, SFTP, TFTP, TELNET, DICT, LDAP, LDAPS, FILE, POP3, IMAP, SMB/CIFS, SMTP, RTMP, RTSP | HTTP, HTTPS, FTP |
- curl supports many more protocols and platforms (OS/400, TPF: I had never heard of them before)
- curl supports more authentication methods
- curl supports gzip and deflate Content-Encoding and does automatic decompression
- Recursive! Wget's major strength compared to curl is its ability to download recursively, or even just download everything that is referred to from a remote resource, be it an HTML page or an FTP directory listing.
- wget can recover from a prematurely broken transfer and continue downloading.
- Wget enables more features by default: cookies, redirect-following, timestamping from the remote resource, etc. With curl, most of those features need to be enabled explicitly (see the sketch below).
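A minimal sketch of the last two points (the URL and filename are placeholders): wget resumes and follows redirects out of the box, while curl needs the matching flags spelled out.

```sh
# wget: follows redirects by default; -c resumes a previously broken download
wget -c https://example.com/big-file.tar.gz

# curl: the same behaviour has to be requested explicitly
curl -L -C - --compressed -o big-file.tar.gz https://example.com/big-file.tar.gz
#    -L            follow redirects
#    -C -          continue an interrupted download where it left off
#    --compressed  request gzip/deflate and decompress automatically
```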
Interesting wget options
-b: Put the download in the background. Interesting for large downloads.
-i [filename]: Specify a file with newline-separated URLs to download
--mirror -p: Download a complete webpage or site; -p also fetches page requisites such as images and CSS
--convert-links: Convert links for offline viewing
-P ./LOCAL-DIR: Directory to save the downloaded files in
--reject=gif: Don't download gif files
-Q5m: Stop downloading once the total downloaded data exceeds 5 MB (a quota; it applies to recursive downloads and URL lists, not to single files)
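Combining a few of these options (example.com, urls.txt and ./LOCAL-DIR are placeholders):

```sh
# mirror a site for offline viewing into ./LOCAL-DIR, skipping GIF files
wget --mirror -p --convert-links --reject=gif -P ./LOCAL-DIR https://example.com/

# download a list of URLs in the background, stopping after a 5 MB total quota
wget -b -i urls.txt -Q5m
```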
Use wget when you want to download a single file or a website. Use curl for more fancy stuff.
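As one example of the "fancy stuff", curl is handy for talking to HTTP APIs directly; the endpoint and payload below are made up:

```sh
# POST a small JSON payload and include the response headers in the output
curl -sS -i -X POST https://api.example.com/v1/items \
     -H 'Content-Type: application/json' \
     -d '{"name": "test"}'
```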