I am trying to make a program that can convert a series of manga scans into one pdf file, and I don't want to have to attempt to download the picture to determine if I have the right url. Is there a shell scripting command that I can use to just check if a web page exists?
The existence of a URL can be checked by looking at the status code in the response headers: status code 200 is the standard response for a successful HTTP request, while 404 means the URL doesn't exist. In PHP, the get_headers() function fetches all the headers sent by the server in response to an HTTP request.
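As a minimal sketch in PHP (the URL is a placeholder and error handling is kept to a bare minimum), the check could look like this:
<?php
// Fetch the response headers; @ suppresses the warning if the host is unreachable.
$headers = @get_headers("http://myurl/");
// $headers[0] is the status line, e.g. "HTTP/1.1 200 OK".
if ($headers && strpos($headers[0], "200") !== false) {
    echo "URL exists\n";
} else {
    echo "URL doesn't exist\n";
}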
Under a *NIX system, you can use curl to issue a simple HEAD request (HEAD only asks for the headers, not the page body):
curl --head http://myurl/
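For an existing page, the output begins with the status line, something like the following (the exact headers vary by server; these lines are illustrative):
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 5120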
Then you can take only the first line, which contains the HTTP status code (200 OK, 404 Not Found, etc.):
curl -s --head http://myurl/ | head -n 1
And then check if you got a decent response (status code is 2** or 3**):
curl -s --head http://myurl/ | head -n 1 | grep "HTTP/1.[01] [23].."
This will output the first line if the status code is okay, or nothing if it isn't. You can also redirect that to /dev/null to get no output, and use $? to determine whether it worked or not:
curl -s --head http://myurl/ | head -n 1 | grep "HTTP/1.[01] [23].." > /dev/null
# on success (page exists), $? will be 0; on failure (page does not exist or
# is unreachable), $? will be 1
EDIT: -s simply tells curl not to show a "progress bar".
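Putting it all together for the original use case, here is a minimal sketch, assuming the scans live at sequentially numbered URLs (the url_exists helper, the page count, and the URL layout are all hypothetical, not part of the question):
#!/bin/bash
# url_exists: succeeds (exit status 0) if the URL answers with a 2xx/3xx
# status line, fails (exit status 1) otherwise.
url_exists() {
    curl -s --head "$1" | head -n 1 | grep "HTTP/1.[01] [23].." > /dev/null
}

# Hypothetical URL scheme and page count, for illustration only.
for i in $(seq 1 50); do
    url="http://myurl/scan-$i.jpg"
    if url_exists "$url"; then
        curl -s -o "scan-$i.jpg" "$url"   # download only pages that exist
    else
        break   # stop at the first missing page
    fi
done
Note that the grep pattern assumes an HTTP/1.x status line; if curl negotiates HTTP/2, the status line reads "HTTP/2 200" instead, so you may need to loosen the pattern (e.g. "HTTP/[0-9.]* [23]").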