I need to get 2 values from a web page header that I am getting using curl. I have been able to get the values individually using:
response1=$(curl -I -s http://www.example.com | grep HTTP/1.1 | awk {'print $2'})
response2=$(curl -I -s http://www.example.com | grep Server: | awk {'print $2'})
But I cannot figure out how to grep the values separately using a single curl request like:
response=$(curl -I -s http://www.example.com)
http_status=$response | grep HTTP/1.1 | awk {'print $2'}
server=$response | grep Server: | awk {'print $2'}
Every attempt either leads to an error message or empty values. I am sure it is just a syntax issue.
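It is indeed a syntax issue: writing http_status=$response | grep ... runs the assignment as one side of a pipeline (so grep sees no input and the value comes out empty) instead of feeding the saved text to grep. A minimal sketch of the single-request version, reusing the placeholder URL from the question, sends the captured headers back through a here-string:

response=$(curl -I -s http://www.example.com)
# Reuse the captured headers for each field instead of calling curl twice.
http_status=$(grep 'HTTP/1.1' <<< "$response" | awk '{print $2}')
server=$(grep 'Server:' <<< "$response" | awk '{print $2}')
echo "$http_status"
echo "$server"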
Full bash solution. It also demonstrates how to easily parse other headers without requiring awk:
shopt -s extglob # Required to trim whitespace; see below

while IFS=':' read key value; do
    # trim whitespace in "value"
    value=${value##+([[:space:]])}; value=${value%%+([[:space:]])}

    case "$key" in
        Server) SERVER="$value"
                ;;
        Content-Type) CT="$value"
                ;;
        HTTP*) read PROTO STATUS MSG <<< "$key${value:+:$value}"
                ;;
    esac
done < <(curl -sI http://www.google.com)
echo $STATUS
echo $SERVER
echo $CT
Producing:
302
GFE/2.0
text/html; charset=UTF-8
According to RFC-2616, HTTP headers are modeled as described in the "Standard for the Format of ARPA Internet Text Messages" (RFC-822), which states clearly in section 3.1.2:
The field-name must be composed of printable ASCII characters (i.e., characters that have values between 33. and 126., decimal, except colon). The field-body may be composed of any ASCII characters, except CR or LF. (While CR and/or LF may be present in the actual text, they are removed by the action of unfolding the field.)
So the above script should catch any RFC-[2]822 compliant header with the notable exception of folded headers.
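If folded headers ever matter, a minimal sketch of a pure-bash unfolding pass could be put in front of the loop above (unfold_headers is just an illustrative name, not part of the original solution); it joins continuation lines, which begin with a space or tab, onto the previous header:

unfold_headers () {
    # Join obsolete folded continuation lines (they start with SP or TAB)
    # onto the previous header, then emit each complete header once.
    local line prev=
    while IFS= read -r line; do
        line=${line%$'\r'}                      # drop the trailing CR, if any
        if [[ $line == ' '* || $line == $'\t'* ]]; then
            prev+=" ${line##+([[:space:]])}"    # extglob trim, as above
        else
            [[ -n $prev ]] && printf '%s\n' "$prev"
            prev=$line
        fi
    done
    [[ -n $prev ]] && printf '%s\n' "$prev"
}
# Usage: done < <(curl -sI http://www.example.com | unfold_headers)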
If you wanted to extract more than a couple of headers, you could stuff all the headers into a bash associative array. Here's a simple-minded function which assumes that any given header only occurs once. (Don't use it for Set-Cookie; see below.)
# Call this as: headers ARRAY URL
headers () {
    {
        # (Re)define the specified variable as an associative array.
        unset $1;
        declare -gA $1;
        local line rest
        # Get the first line, assuming HTTP/1.0 or above. Note that these fields
        # have Capitalized names.
        IFS=$' \t\n\r' read $1[Proto] $1[Status] rest
        # Drop the CR from the message, if there was one.
        declare -gA $1[Message]="${rest%$'\r'}"
        # Now read the rest of the headers.
        while true; do
            # Get rid of the trailing CR if there is one.
            IFS=$'\r' read line rest;
            # Stop when we hit an empty line
            if [[ -z $line ]]; then break; fi
            # Make sure it looks like a header
            # This regex also strips leading and trailing spaces from the value
            if [[ $line =~ ^([[:alnum:]_-]+):\ *(( *[^ ]+)*)\ *$ ]]; then
                # Force the header to lower case, since headers are case-insensitive,
                # and store it into the array
                declare -gA $1[${BASH_REMATCH[1],,}]="${BASH_REMATCH[2]}"
            else
                printf "Ignoring non-header line: %q\n" "$line" >> /dev/stderr
            fi
        done
    } < <(curl -Is "$2")
}
Example:
$ headers so http://stackoverflow.com/
$ for h in ${!so[@]}; do printf "%s=%s\n" $h "${so[$h]}"; done | sort
Message=OK
Proto=HTTP/1.1
Status=200
cache-control=public, no-cache="Set-Cookie", max-age=43
content-length=224904
content-type=text/html; charset=utf-8
date=Fri, 25 Jul 2014 17:35:16 GMT
expires=Fri, 25 Jul 2014 17:36:00 GMT
last-modified=Fri, 25 Jul 2014 17:35:00 GMT
set-cookie=prov=205fd7f3-10d4-4197-b03a-252b60df7653; domain=.stackoverflow.com; expires=Fri, 01-Jan-2055 00:00:00 GMT; path=/; HttpOnly
vary=*
x-frame-options=SAMEORIGIN
Note that the SO response includes one or more cookies, in Set-Cookie headers, but we can only see the last one because the naive script overwrites entries with the same header name. (As it happens, there was only one, but we can't know that.) While it would be possible to augment the script to special-case Set-Cookie, a better approach would probably be to provide a cookie-jar file, and use the -b and -c curl options in order to maintain it.
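A minimal sketch of that cookie-jar approach (the jar filename is arbitrary): -c writes any cookies from Set-Cookie headers into the jar after a request, and -b sends them back on the next one:

jar=cookies.txt
# First request: -c stores whatever Set-Cookie headers come back.
curl -sI -c "$jar" http://stackoverflow.com/ > /dev/null
# Later requests: -b replays the stored cookies; -c keeps the jar up to date.
curl -sI -b "$jar" -c "$jar" http://stackoverflow.com/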