How to use wget (Direct download files from the internet) – Examples, Scripts

wget is a command-line utility for downloading files from the internet. It is non-interactive, which makes it well suited to scripts and automated processes, and it supports several protocols, including HTTP, HTTPS, and FTP, along with a wide range of options for controlling downloads.

Common uses include one-off file downloads, website mirroring, and recursive downloads. wget is frequently called from shell scripts and automation jobs to fetch files from remote servers, as the short example below shows. It is available for Linux, macOS, and Windows (through Cygwin).
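
Because wget is non-interactive and returns a meaningful exit status, scripts can react when a download fails. A minimal sketch (the URL and filename are placeholders; wget exits 0 on success and non-zero on failure):

#!/bin/bash
# Download quietly and act on wget's exit status.
if wget -q -O report.csv "http://example.com/report.csv"; then
    echo "Download succeeded"
else
    echo "Download failed" >&2
    exit 1
fi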

Official page of wget: https://www.gnu.org/software/wget/

wget is written in the C programming language and is maintained as an open-source project by the GNU Project.

How to Install wget on Supported Operating Systems

Installing wget on different operating systems:

Ubuntu/Debian

sudo apt-get install wget

CentOS/RHEL

sudo yum install wget

macOS

brew install wget

Windows (Cygwin)

  1. Download and install Cygwin from https://www.cygwin.com/
  2. During the installation, select wget from the list of packages to install.
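
After installation, verify that wget is available by checking its version:

wget --version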

wget Command Examples

  • Basic Download: Download a file from a URL:
    wget http://example.com/file.zip
  • Download in the Background: Run wget in the background (progress is logged to wget-log):
    wget -b http://example.com/file.zip
  • Specify Download Filename: Save the downloaded file with a specific name:
    wget -O myfile.zip http://example.com/file.zip
  • Download Multiple URLs: Download multiple files:
    wget http://example.com/file1.zip http://example.com/file2.zip
  • Resume an Unfinished Download: Continue getting a partially-downloaded file:
    wget -c http://example.com/file.zip
  • Limit Download Speed: Restrict the download speed to 100 KB/s:
    wget --limit-rate=100k http://example.com/file.zip
  • Download a Website Recursively: Download a website's contents up to 3 levels deep:
    wget -r -l 3 http://example.com
  • Ignore SSL Certificates: Skip SSL certificate validation (insecure; use only for hosts you trust):
    wget --no-check-certificate https://example.com
  • Use Proxy for Downloading: Set HTTP proxy:
    wget -e use_proxy=yes -e http_proxy=192.168.0.1:8080 http://example.com
  • Download All PDFs Linked from a Page: Follow the links on a page, one level deep and across hosts, keeping only PDF files:
    wget -r -H -l1 -nd -A.pdf -e robots=off http://example.com
  • Download in Quiet Mode: Suppress all output:
    wget -q http://example.com/file.zip
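
Several of these options combine well in practice. As a sketch (the URL and filename are placeholders), the following resumes a partial download, retries transient failures up to 3 times with a 30-second network timeout, and caps the transfer rate at 500 KB/s:

wget -c --tries=3 --timeout=30 --limit-rate=500k -O ubuntu.iso http://example.com/ubuntu.iso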

wget Script Examples

Backup Script: This script downloads a file and saves it with a timestamp, making it useful for creating backups.

#!/bin/bash
# Set the URL and destination folder
url="http://example.com/data.zip"
destination="/path/to/backup/folder"

# Create a timestamp
timestamp=$(date +"%Y-%m-%d_%H-%M-%S")

# Download the file with a timestamp
wget -O "${destination}/backup_${timestamp}.zip" "$url"

Scheduled Download Script: This script is meant to be run as a cron job to download files at regular intervals; a sample crontab entry follows the script.

#!/bin/bash
# URL of the resource to be downloaded
url="http://example.com/daily-report.csv"

# Directory to store the downloaded file
output_directory="/path/to/download/folder"

# Current date as filename
filename=$(date +"%Y-%m-%d_daily-report.csv")

# Full path for output
full_path="${output_directory}/${filename}"

# Download the file
wget -O "$full_path" "$url"
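
A sample crontab entry to run the script every day at 06:00 (paths are placeholders; the script is assumed to be saved as /path/to/scheduled-download.sh and marked executable):

# m h dom mon dow  command
0 6 * * * /path/to/scheduled-download.sh >> /var/log/scheduled-download.log 2>&1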

Website Mirroring Script: This script uses wget to mirror a website for offline viewing.

#!/bin/bash
# Website to mirror
website_url="http://example.com"

# Location to store the mirrored website
output_dir="/path/to/mirror/folder"

# Run wget to mirror the website
wget --mirror --convert-links --adjust-extension --page-requisites --no-parent -P "$output_dir" "$website_url"
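
For reference, --mirror is shorthand for -r -N -l inf --no-remove-listing, so the script enables recursion, timestamp checking, and unlimited depth; --convert-links and --page-requisites then make the local copy browsable offline, while --no-parent keeps wget from climbing above the starting directory.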

Bulk Download Script: This script reads URLs from a text file and downloads each one.

#!/bin/bash
# File containing URLs
url_file="urls.txt"

# Directory where files will be downloaded
download_dir="/path/to/download/folder"

# Read each URL from the file and download it
while IFS= read -r url; do
    wget -P "$download_dir" "$url"
done < "$url_file"
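
A matching urls.txt file simply lists one URL per line (these addresses are placeholders):

http://example.com/file1.zip
http://example.com/file2.zip
https://example.com/images/photo.jpg

As an alternative to the loop, wget can read the list directly: wget -i urls.txt -P /path/to/download/folder achieves the same result in a single call.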


