I want to run ack or grep on HTML files that often have very long lines. I don't want to see very long lines that wrap repeatedly. But I do want to see just that portion of a long line that surrounds a string that matches the regular expression. How can I get this using any combination of Unix tools?
The grep command has an -m (or --max-count) option, which can help here, although it may not work the way you'd expect. It makes grep stop reading after it finds N matching lines, so -m 1 limits the output to a single line, always the first match; note, though, that the matching line itself is still printed in full.
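For example, a minimal sketch assuming a file named page.html and a search string of href= (both hypothetical names):

    grep -m 1 'href=' page.html    # stop after the first matching line; that line is still printed whole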
If you run grep with a pattern but no file name, it is waiting for you to type lines on your keyboard. (Normally you would use that form with a pipe from another command, e.g. ls -l | grep one.) The -l (or --files-with-matches) option will "Suppress normal output; instead print the name of each input file from which output would normally have been printed."
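As a sketch, assuming the HTML files sit in the current directory:

    grep -l 'href=' *.html    # print only the names of the files that contain a match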
Use grep's -A n option to print n lines after each match, -B n to print n lines before it, and -C n to print n lines of context both above and below the match.
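For instance, assuming the same hypothetical page.html:

    grep -C 2 'href=' page.html    # show each match with two lines of context above and below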
You could use the grep option -o, possibly in combination with changing your pattern to ".{0,10}<original pattern>.{0,10}", in order to see some context around the match:

-o, --only-matching
        Show only the part of a matching line that matches PATTERN.
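A minimal sketch, assuming the original pattern is href= and the file is page.html (both hypothetical); -E is used so the {0,10} repetition works without backslashes:

    grep -oE '.{0,10}href=.{0,10}' page.html    # print only each match plus up to 10 characters on either side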
...or -c:

-c, --count
        Suppress normal output; instead print a count of matching lines for each input file. With the -v, --invert-match option (see below), count non-matching lines.
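For example, again assuming page.html:

    grep -c 'href=' page.html    # print only the number of matching lines, not the lines themselves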
Pipe your results through cut. I'm also considering adding a --cut switch so you could say --cut=80 and only get 80 columns.
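As a sketch of the piping approach, assuming ack and the hypothetical page.html:

    ack 'href=' page.html | cut -c 1-80    # trim every matching line to its first 80 columns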