Complete Guide: Downloading Files with cURL

cURL (Client URL) is a powerful command-line tool and library designed for transferring data across networks using various protocols including HTTP, HTTPS, FTP, and more. Pre-installed on most Unix-based systems and widely available across platforms, cURL has become the go-to solution for file downloads in server environments and automated workflows.

When you need to download files efficiently from the command line, cURL provides unmatched flexibility and reliability. This guide will walk you through everything you need to know about downloading files with cURL, from basic operations to advanced techniques. Whether you're working with APIs, setting up automated scripts, or managing server operations, mastering cURL downloads will prove invaluable for your workflow automation and data management needs.

Remember to verify the source of files before downloading and check their integrity afterward. Only download from trusted sources and scan files for malware when necessary.

Basic File Retrieval with cURL

The simplest way to download and view a file's content is to use cURL's default behavior, which displays the retrieved data directly in your terminal. This basic approach lets you quickly inspect content without saving it to disk first, making it perfect for testing endpoints or previewing file contents before committing to a full download.

curl https://example.com/sample.txt

This command fetches the file and prints its contents to stdout (standard output). You'll see the file's text displayed in your terminal window.

For viewing HTTP headers along with the content:

curl -i https://example.com/sample.txt

The -i flag includes HTTP response headers, which can help you understand the server's response and file metadata.

Note: When downloading large files this way, the entire content will scroll through your terminal, which isn't practical for binary files or large text files.

Saving Remote Files to Your Local System

To save downloaded files to your local filesystem instead of displaying them, use the -o (output) flag:

curl -o downloaded_file.txt https://example.com/sample.txt

This downloads the remote file and saves it locally as "downloaded_file.txt". The -o flag requires you to specify the exact filename you want to use.

For multiple file downloads with specific names:

curl -o file1.txt https://example.com/data1.txt \
     -o file2.txt https://example.com/data2.txt

Note: If the specified file already exists, cURL will overwrite it without warning.
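If overwriting is a concern, you can guard against it in a small wrapper. The sketch below is illustrative (the `safe_fetch` name and flag choices are ours, not part of cURL); recent cURL releases (7.83+) also provide a `--no-clobber` flag for the same purpose.

```shell
#!/bin/bash

# Hypothetical wrapper: refuse to overwrite an existing file.
safe_fetch() {
  local url=$1 out=$2
  if [ -e "$out" ]; then
    echo "skipping: $out already exists" >&2
    return 0
  fi
  curl -fL --retry 3 -o "$out" "$url"
}

# Usage: safe_fetch https://example.com/sample.txt downloaded_file.txt
```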

Preserving Original Filenames

When you want to keep the original filename from the URL, use the -O (uppercase O) flag:

curl -O https://example.com/important-document.pdf

This saves the file with its original name "important-document.pdf" in your current directory. This approach is particularly useful when dealing with files that have meaningful names in their URLs.

For downloading multiple files while preserving their names:

curl -O https://example.com/file1.zip \
     -O https://example.com/file2.tar.gz \
     -O https://example.com/file3.pdf

Handling URL Redirects

Many web servers use redirects to point to the actual file location. By default, cURL doesn't follow redirects, which can result in downloading redirect pages instead of the actual files.

curl -L -O https://example.com/download-link

The -L flag tells cURL to follow redirects automatically. This is essential when downloading from services like GitHub releases, cloud storage links, or any URL shorteners.

You can limit the number of redirects to prevent infinite redirect loops:

curl -L --max-redirs 5 -O https://example.com/download-link

Note: Some download services use multiple redirects, so allowing 3-5 redirects is generally safe and sufficient.

Downloading Files with Authentication

Many servers require authentication before allowing file downloads. cURL supports various authentication methods:

1. Basic Authentication

curl -u username:password -O https://example.com/protected/file.zip

2. Prompt for Password (More Secure)

curl -u username -O https://example.com/protected/file.zip

This prompts you to enter the password, preventing it from being stored in your command history.
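Another way to keep passwords out of your command line and shell history is a netrc file, which cURL reads with `-n`/`--netrc` or `--netrc-file`. A minimal sketch, with placeholder path and credentials:

```shell
#!/bin/bash

# Create a credentials file readable only by you (placeholder values).
netrc_file=$(mktemp)
cat > "$netrc_file" <<'EOF'
machine example.com
login username
password secret
EOF
chmod 600 "$netrc_file"

# cURL picks the "machine" entry matching the host it connects to:
# curl --netrc-file "$netrc_file" -O https://example.com/protected/file.zip
```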

3. Using API Keys

curl -H "Authorization: Bearer YOUR_API_TOKEN" -O https://api.example.com/files/data.json

4. Custom Headers

curl -H "X-API-Key: your_api_key" \
     -H "User-Agent: MyApp/1.0" \
     -O https://example.com/api/download

Managing Timeouts, Retries, and Resume Downloads

1. Setting Connection and Transfer Timeouts

curl --connect-timeout 10 --max-time 300 -O https://example.com/largefile.zip

--connect-timeout: Maximum time for connection establishment

--max-time: Maximum total time for the entire operation

2. Retry Failed Downloads

curl --retry 3 --retry-delay 5 -O https://example.com/unstable-source.tar.gz

3. Resume Interrupted Downloads

curl -C - -O https://example.com/large-file.iso

The -C - flag tells cURL to automatically determine where to resume the download based on the existing partial file.

Note: Resume functionality only works if the server supports HTTP range requests.
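Resuming pairs naturally with a retry loop so that a flaky connection eventually completes the transfer. The helper below is a sketch, not part of cURL; the `download_resumable` name and attempt limit are invented for illustration:

```shell
#!/bin/bash

# Keep resuming with -C - until curl succeeds or the attempts run out.
download_resumable() {
  local url=$1 out=$2 max_attempts=${3:-5} attempt=0
  until curl -fL -C - -o "$out" "$url"; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "giving up after $max_attempts attempts" >&2
      return 1
    fi
    echo "transfer interrupted, resuming (attempt $attempt)..." >&2
    sleep 1
  done
}

# Usage: download_resumable https://example.com/large-file.iso large-file.iso
```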

Automating Downloads with Shell Scripts

Create powerful download scripts for batch operations:

#!/bin/bash

# Download multiple files from a list
urls=(
  "https://example.com/file1.pdf"
  "https://example.com/file2.zip"
  "https://example.com/file3.tar.gz"
)

for url in "${urls[@]}"; do
  echo "Downloading: $url"
  if curl -L --retry 3 -O "$url"; then
    echo "✓ Successfully downloaded: $(basename "$url")"
  else
    echo "✗ Failed to download: $url"
  fi
done

Progress Monitoring Script

#!/bin/bash

download_with_progress() {
  local url="$1"
  local filename="$2"

  curl -L --retry 3 \
       --progress-bar \
       -o "$filename" \
       "$url"
}

download_with_progress "https://example.com/large-file.zip" "downloaded.zip"

Troubleshooting Common Download Issues

1. SSL Certificate Problems

# Skip SSL verification (use cautiously)
curl -k -O https://self-signed-site.com/file.zip

# Specify custom CA certificate
curl --cacert /path/to/certificate.pem -O https://example.com/file.zip

2. Debugging Connection Issues

# Verbose output for troubleshooting
curl -v -O https://example.com/file.zip

# Fetch only the headers -- useful for testing connectivity without downloading
# (-I is the short form of --head)
curl -I https://example.com/file.zip

3. Handling Rate Limits

# Throttle the transfer rate to stay within server limits
curl --limit-rate 100k -O https://example.com/file.zip

# Custom user agent to avoid blocking
curl -A "Mozilla/5.0 (compatible; MyDownloader/1.0)" -O https://example.com/file.zip

4. Common Error Solutions

"curl: (6) Could not resolve host"

  • Check your internet connection
  • Verify the URL is correct
  • Check DNS settings

"curl: (7) Failed to connect"

  • Verify the server is accessible
  • Check firewall settings
  • Try using a different port if applicable

"curl: (28) Operation timed out"

  • Increase timeout values
  • Check network stability
  • Try downloading during off-peak hours
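The errors above can also be handled programmatically: cURL exits with a distinct code for each failure class (6, 7, and 28 correspond to the messages listed). A sketch of a wrapper that maps them to friendlier diagnostics (the function name and flag choices are illustrative):

```shell
#!/bin/bash

fetch_with_diagnostics() {
  local url=$1 out=$2 rc
  curl -fsSL --connect-timeout 10 --max-time 300 -o "$out" "$url"
  rc=$?
  case $rc in
    0)  echo "downloaded $out" ;;
    6)  echo "could not resolve host -- check the URL and DNS" >&2 ;;
    7)  echo "failed to connect -- check the server and firewall" >&2 ;;
    28) echo "operation timed out -- raise --max-time or retry later" >&2 ;;
    *)  echo "curl failed with exit code $rc" >&2 ;;
  esac
  return "$rc"
}

# Usage: fetch_with_diagnostics https://example.com/file.zip file.zip
```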

Using wget as an Alternative

While cURL is excellent for most tasks, wget offers some advantages for specific scenarios:

Basic wget Download

wget https://example.com/file.zip

Recursive Directory Downloads

wget -r -np -nd https://example.com/files/

Continue Partial Downloads

wget -c https://example.com/large-file.iso

Key Differences

  • wget supports recursive downloads natively; cURL does not.
  • cURL supports many more protocols and is also available as a library (libcurl) for use inside applications.
  • wget retries and resumes by default; with cURL you opt in via --retry and -C -.
  • cURL is better suited to API work, with fine-grained control over headers, methods, and request bodies.

File Validation & Integrity

Ensuring downloaded files are complete and uncorrupted is crucial for production environments. cURL provides several mechanisms for validation. For checksum verification, you can combine downloads with hash checking:

# Download and verify MD5 checksum
curl -O https://example.com/file.zip
curl -s https://example.com/file.zip.md5 | md5sum -c

# One-liner with automatic verification
curl -O https://example.com/file.zip && \
curl -s https://example.com/file.zip.sha256 | sha256sum -c
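The `-c` mode of md5sum/sha256sum expects a checksum file in the format `<hash>  <filename>`, which is exactly what the tools emit. The round trip can be demonstrated entirely offline:

```shell
#!/bin/bash

# Generate a file, record its checksum, then verify it.
workdir=$(mktemp -d)
cd "$workdir"

printf 'example payload\n' > file.bin
sha256sum file.bin > file.bin.sha256   # writes "<hash>  file.bin"
sha256sum -c file.bin.sha256           # prints "file.bin: OK"
```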

To validate file sizes and detect partial downloads, compare the Content-Length reported by the server against the size of the file on disk:

#!/bin/bash

expected_size=1048576  # Expected size in bytes

actual_size=$(curl -sI https://example.com/file.zip | grep -i content-length | awk '{print $2}' | tr -d '\r')

if [ "$actual_size" -eq "$expected_size" ]; then
  curl -O https://example.com/file.zip
  downloaded_size=$(stat -f%z file.zip 2>/dev/null || stat -c%s file.zip)
  if [ "$downloaded_size" -eq "$expected_size" ]; then
    echo "✓ Download verified: Complete and correct size"
  else
    echo "✗ Warning: Downloaded size mismatch"
  fi
fi
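The `stat -f%z … || stat -c%s` fallback handles the BSD/GNU split; wrapping it in a helper keeps scripts readable. A small sketch (the `file_size` name is ours):

```shell
#!/bin/bash

# Return a file's size in bytes on both GNU (Linux) and BSD/macOS stat.
file_size() {
  stat -c%s "$1" 2>/dev/null || stat -f%z "$1"
}

# Example:
tmp=$(mktemp -d)
printf '12345' > "$tmp/sample.bin"
file_size "$tmp/sample.bin"   # prints 5
```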

For content-type validation, ensure you're receiving the expected file format:

content_type=$(curl -sI https://example.com/file.pdf | grep -i content-type | awk '{print $2}' | tr -d '\r')

if [ "$content_type" = "application/pdf" ]; then
  curl -O https://example.com/file.pdf
else
  echo "Error: Expected PDF, got $content_type"
fi

Performance Optimization

Maximizing download efficiency involves several strategies. For parallel downloads, combine cURL with GNU parallel or xargs:

# Download multiple files in parallel (4 simultaneous connections)
echo -e "https://example.com/file1.zip\nhttps://example.com/file2.zip\nhttps://example.com/file3.zip" | \
parallel -j4 curl -L --retry 3 -O {}

# Using xargs for parallel downloads
cat urls.txt | xargs -n1 -P4 curl -L --retry 3 -O
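The fan-out mechanics are easy to see offline by substituting `echo` for `curl`: each input line becomes one invocation, with up to 4 running at once (output order is therefore not guaranteed):

```shell
#!/bin/bash

# Offline illustration of xargs parallelism; echo stands in for curl -O.
printf 'file1.zip\nfile2.zip\nfile3.zip\n' | xargs -n1 -P4 echo would-download
```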

For bandwidth management and connection optimization:

# Limit bandwidth to avoid overwhelming the network
curl --limit-rate 1000k -O https://example.com/largefile.iso

# Enable compression for text files
curl --compressed -O https://example.com/data.json

# Reuse one connection for several downloads (keep-alive is cURL's default)
curl --keepalive-time 60 \
     -O https://example.com/file1.txt \
     -O https://example.com/file2.txt

For connection reuse in scripts, use a single cURL process for multiple downloads:

#!/bin/bash

# Efficient batch downloading with connection reuse
{
  echo "url = https://example.com/file1.pdf"
  echo "output = file1.pdf"
  echo "url = https://example.com/file2.zip"
  echo "output = file2.zip"
  echo "url = https://example.com/file3.tar.gz"
  echo "output = file3.tar.gz"
} | curl --parallel --parallel-max 3 --config -
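For longer lists, the config stanzas can be generated from a plain text file of URLs instead of hand-written. The generator below is a sketch (the `gen_config` name and the urls.txt layout of one URL per line are assumptions):

```shell
#!/bin/bash

# Emit "url = ..." / "output = ..." pairs for each URL in a list file.
gen_config() {
  local url
  while IFS= read -r url; do
    [ -n "$url" ] || continue
    printf 'url = "%s"\noutput = "%s"\n' "$url" "$(basename "$url")"
  done < "$1"
}

# Usage:
# gen_config urls.txt | curl --parallel --parallel-max 3 --config -
```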

To monitor and optimize performance, use detailed timing information:

curl -w "Download completed in %{time_total}s\nAverage speed: %{speed_download} bytes/s\nSize: %{size_download} bytes\n" \
     -o file.zip https://example.com/file.zip

These techniques can dramatically improve download speeds, especially when dealing with multiple files or unreliable connections.

Best Practices and Final Tips

When downloading files, prioritize security by using HTTPS connections whenever possible. Always verify file integrity through checksums provided by the source, and monitor download progress for large files using progress bars. Set appropriate timeouts to prevent connections from hanging indefinitely, and implement retry mechanisms to handle unreliable network conditions. For security, store credentials safely and avoid embedding passwords directly in scripts. Before deploying any automated download processes, thoroughly test them in development environments to ensure they work as expected and handle edge cases properly.

cURL's versatility makes it an essential tool for system administrators, developers, and anyone working with automated file transfers. Master these techniques, and you'll be well-equipped to handle any download scenario efficiently and securely.

We hope this guide has empowered you to handle file downloads confidently and efficiently. Happy downloading!

About the author

Oleksandr Vlasenko, Head of Growth at Host-World, is an experienced SEO and growth strategist with over 10 years of expertise in driving organic traffic and scaling businesses in hosting, e-commerce, and technology.
