Resume interrupted downloads with cURL in Linux
I have recently been trying to download an ISO of a certain Linux distro, but I found myself at the wrong end of a flaky connection. I tried downloading it with both Firefox and Chrome, but both choked very early in the process: the transfer speed would drop to 0 and, after a while, the connection would simply cut off, with no way to resume the download from where it left off.
Fear not, because [cURL][curl] comes to the rescue. After typing Up-Arrow and Enter more times than I care to admit, I decided to let the computer do the hard work and automate this thing. So I created a small Bash script called `acurl.sh` (for Automated cURL) that looks like this:
```bash
#!/bin/bash
# Keep retrying until curl exits cleanly, resuming (-C -) from where
# the previous attempt left off.
ec=1
while [ $ec -gt 0 ]; do
    /usr/bin/curl -O -C - "$1"
    ec=$?   # capture curl's exit code
done
if [ $ec -eq 0 ]; then
    echo "Downloaded $1"
fi
```
What's going on above is that a `while` loop keeps checking the exit code of `curl`, which is captured by assigning the value of `$?` to the variable `$ec`. As long as that exit code is greater than 0, we keep trying to download the file (the URL passed as parameter `$1`) from where the last attempt left off, which is what the `-C -` option tells `curl` to do.
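As a minimal illustration of the `$?` mechanism the script relies on:

```shell
#!/bin/sh
# $? holds the exit code of the most recently executed command.
false                  # a command that conventionally exits with code 1
ec=$?                  # capture it before running anything else
echo "exit code: $ec"  # prints "exit code: 1"
```

Note that `$?` is overwritten by every command, which is why the script stores it in `$ec` immediately after `curl` returns.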
Assuming that the script above is located somewhere in the `$PATH`, then using it is as simple as this:

```bash
acurl.sh http://distrodomain.org/downloads/linux-distro-version-platform.iso
```
This is definitely a brute-force approach, but it gets the job done.
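If you want to tame the brute force a little, here is a sketch under my own assumptions (a capped number of attempts, a short pause between tries, and a hypothetical `retry` helper that is not part of the original script):

```shell
#!/bin/sh
# retry MAX CMD [ARGS...]: run CMD until it succeeds or MAX attempts
# have been made, sleeping briefly between failures.
retry() {
    max=$1; shift
    attempt=1
    while ! "$@"; do
        if [ "$attempt" -ge "$max" ]; then
            echo "giving up after $attempt attempts" >&2
            return 1
        fi
        attempt=$((attempt + 1))
        sleep 1
    done
    return 0
}

# Hypothetical usage, mirroring acurl.sh:
# retry 100 /usr/bin/curl -O -C - "$1"
```

Depending on your version, `curl` also ships a built-in `--retry` option that may cover much of this for you.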
[curl]: http://curl.haxx.se/ "cURL"