I am grepping an XML file, which gives me output like this:
<tag>data</tag>
<tag>more data</tag>
...
Note: this is a flat file, not an XML tree. I want to remove the XML tags and display just the data in between. I'm doing all this from the command line, and I was wondering if there is a better way than piping the output into awk twice:
cat file.xml | awk -F'>' '{print $2}' | awk -F'<' '{print $1}'
Ideally, I would like to do this in one command.
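For concreteness, here is what that pipeline produces on the sample above (a sketch assuming each line contains exactly one <tag>...</tag> pair):

$ printf '<tag>data</tag>\n<tag>more data</tag>\n' | awk -F'>' '{print $2}' | awk -F'<' '{print $1}'
data
more data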
If your file looks just like that, then sed can help you:
sed -e 's/<[^>]*>//g' file.xml
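For example, against the sample from the question:

$ printf '<tag>data</tag>\n<tag>more data</tag>\n' | sed -e 's/<[^>]*>//g'
data
more data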
Of course, you should not use regular expressions to parse real XML; it only works here because the input is this simple and regular.
Using awk:
awk '{gsub(/<[^>]*>/,"")};1' file.xml
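Here gsub() deletes every tag from the current line, and the trailing 1 is the usual awk idiom for "print the line". For example:

$ printf '<tag>data</tag>\n<tag>more data</tag>\n' | awk '{gsub(/<[^>]*>/,"")};1'
data
more data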
Give this a try:
grep -Po '<.*?>\K.*?(?=<.*?>)' inputfile
Explanation:
Using Perl Compatible Regular Expressions (-P) and outputting only the matched parts (-o):
- <.*?> : non-greedy match of any characters within angle brackets
- \K : don't include the preceding match in the output (resets the match start; similar to a positive look-behind, but it works with variable-length matches)
- .*? : non-greedy match stopping at the next match (this part will be output)
- (?=<.*?>) : non-greedy match of any characters within angle brackets, not included in the output (positive look-ahead; works with variable-length matches)
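Applied to the sample from the question (note that -P requires GNU grep built with PCRE support):

$ printf '<tag>data</tag>\n<tag>more data</tag>\n' | grep -Po '<.*?>\K.*?(?=<.*?>)'
data
more data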
Use the html2text command-line tool, which converts HTML into plain text.
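For example (assuming html2text is installed; the exact output formatting depends on the implementation):

$ html2text file.xml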
Alternatively, you may try the ex way:
ex -s +'%s/<[^>].\{-}>//ge' +%p +q! file.txt
or:
cat file.txt | ex -s +'%s/<[^>].\{-}>//ge' +%p +q! /dev/stdin
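For example, on the sample input (this relies on vim providing the ex command, since the non-greedy \{-} atom is a vim extension):

$ printf '<tag>data</tag>\n<tag>more data</tag>\n' | ex -s +'%s/<[^>].\{-}>//ge' +%p +q! /dev/stdin
data
more data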