How to remove XML tags from the Unix command line?

I am grepping an XML file, which gives me output like this:

<tag>data</tag>
<tag>more data</tag>
...

Note that this is a flat file, not an XML tree. I want to remove the XML tags and just display the data in between. I'm doing all this from the command line and was wondering if there is a better way than piping it into awk twice:

cat file.xml | awk -F'>' '{print $2}' | awk -F'<' '{print $1}'

Ideally, I would like to do this in one command.
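The same field-splitting approach also fits in a single awk call if both angle brackets are treated as field separators (a minimal sketch, assuming every line really has the flat <tag>data</tag> shape shown above):

# split on '<' or '>' so the text between the tags lands in field 3
awk -F'[<>]' '{print $3}' file.xml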

Asked Mar 21 '11 by Tarski


4 Answers

If your file looks just like that, then sed can help you:

sed -e 's/<[^>]*>//g' file.xml
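For example, fed the two sample lines from the question, it leaves only the data (the printf here just stands in for the real file):

$ printf '<tag>data</tag>\n<tag>more data</tag>\n' | sed -e 's/<[^>]*>//g'
data
more data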

Of course, you should not use regular expressions to parse XML in general (nested or multi-line markup will break this); it only works here because the input is a flat list of tags, one per line.

Answered by Johnsyweb


Using awk:

awk '{gsub(/<[^>]*>/,"")};1' file.xml
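The trailing 1 is an always-true pattern whose default action is to print the (now tag-free) line; written out explicitly, an equivalent program is:

# strip every <...> token from the line, then print what is left
awk '{ gsub(/<[^>]*>/, ""); print }' file.xml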
Answered by dogbane


Give this a try:

grep -Po '<.*?>\K.*?(?=<.*?>)' inputfile

Explanation:

Using Perl Compatible Regular Expressions (-P) and outputting only the specified matches (-o):

  • <.*?> - Non-greedy match of any characters within angle brackets
  • \K - Discard everything matched so far from the output (resets the start of the reported match; similar to a positive look-behind, but it works with variable-length patterns)
  • .*? - Non-greedy match stopping at the next match (this part will be output)
  • (?=<.*?>) - Require the next tag to follow, without consuming it or including it in the output (positive look-ahead; works with variable-length patterns)
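To see what \K buys you, compare the output with and without it (assuming inputfile contains the two sample lines from the question):

$ grep -Po '<.*?>\K.*?(?=<.*?>)' inputfile
data
more data
$ grep -Po '<.*?>.*?(?=<.*?>)' inputfile    # without \K the opening tag stays in the match
<tag>data
<tag>more data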
Answered by Dennis Williamson


Use the html2text command-line tool, which converts HTML into plain text.

Alternatively, you can try the ex way:

ex -s +'%s/<[^>].\{-}>//ge' +%p +q! file.txt

or:

cat file.txt | ex -s +'%s/<[^>].\{-}>//ge' +%p +q! /dev/stdin
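A rough breakdown of that invocation, assuming Vim's ex (the non-greedy \{-} quantifier is a Vim extension, so a strictly POSIX ex may reject it):

# -s                      run silently in batch mode
# +'%s/<[^>].\{-}>//ge'   on every line, delete each <...> token; 'g' = all occurrences, 'e' = no error if none found
# +%p                     print the whole buffer to standard output
# +q!                     quit without writing the file
ex -s +'%s/<[^>].\{-}>//ge' +%p +q! file.txt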
Answered by kenorb