I have a question about sed efficiency in bash. I have a pipelined series of sed statements, e.g.:
var1="Some string of text"
var2=$(echo "$var1" | sed 's/pattern1/replacement1/g' | sed 's/pattern2/replacement2/g' | sed 's/pattern3/replacement3/g' | sed 's/pattern4/replacement4/g' | sed 's/pattern5/replacement5/g')
Assuming no inputs depend on edited output from an earlier sed pipe, am I better off scripting the above with expression statements instead? For example:
var2=$(echo "$var1" | sed -e's/pattern1/replacement1/g' -e's/pattern2/replacement2/g' -e's/pattern3/replacement3/g' -e's/pattern4/replacement4/g' -e's/pattern5/replacement5/g')
Is there any efficiency to be gained here?
Using multiple expressions will be faster than using multiple pipelines, because there's additional overhead in creating the pipes and forking extra sed processes. However, it's rarely enough of a difference to matter in practice.
Using multiple expressions is faster than multiple pipelines, but probably not enough to matter for the average use case. Using your example, the average difference in execution speed was only two-thousandths of a second, which is not enough to get excited about.
# Average run with multiple pipelines.
$ time {
echo "$var1" |
sed 's/pattern1/replacement1/g' |
sed 's/pattern2/replacement2/g' |
sed 's/pattern3/replacement3/g' |
sed 's/pattern4/replacement4/g' |
sed 's/pattern5/replacement5/g'
}
Some string of text
real 0m0.007s
user 0m0.000s
sys 0m0.004s
# Average run with multiple expressions.
$ time {
echo "$var1" | sed \
-e 's/pattern1/replacement1/g' \
-e 's/pattern2/replacement2/g' \
-e 's/pattern3/replacement3/g' \
-e 's/pattern4/replacement4/g' \
-e 's/pattern5/replacement5/g'
}
Some string of text
real 0m0.005s
user 0m0.000s
sys 0m0.000s
Granted, this isn't testing against a large input file, thousands of input files, or running in a loop with tens of thousands of iterations. Still, it seems safe to say that the difference is small enough to be irrelevant for most common situations.
Uncommon situations are a different story. In such cases, benchmarking will help you determine whether replacing pipes with in-line expressions is a valuable optimization for that use case.
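A minimal benchmark sketch for one such uncommon situation, a tight loop with many iterations (the patterns and the iteration count here are illustrative placeholders, not the original poster's):

```shell
#!/bin/sh
# Compare many-pipeline sed vs. single-process sed across 1000 iterations.
var1="Some string of text"

# Variant 1: a separate sed process per substitution.
time for i in $(seq 1000); do
    echo "$var1" | sed 's/Some/Any/g' | sed 's/text/prose/g' >/dev/null
done

# Variant 2: one sed process with multiple expressions.
time for i in $(seq 1000); do
    echo "$var1" | sed -e 's/Some/Any/g' -e 's/text/prose/g' >/dev/null
done
```

At this scale the per-invocation fork/exec cost is paid a thousand times over, so the gap between the two variants becomes visible in a way a single run hides.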
Most of the overhead in sed tends to be in processing regular expressions, but you're processing the same number of regular expressions in each of your examples.
Consider that the operating system needs to construct stdin and stdout for each element of the pipeline. sed also takes memory in your system, and the OS must allocate that memory for each instance of sed -- whether that's one instance or four.
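A quick way to see the per-stage forking is bash's $BASHPID, which (unlike $$) reports the PID of the current process rather than the parent shell, so it differs inside a pipeline stage. This sketch is bash-specific:

```shell
#!/usr/bin/env bash
# Each stage of a pipeline is forked into its own process.
# $$ is the parent shell's PID; $BASHPID is the executing process's
# PID, so the two differ when printed from inside a pipeline stage.
echo "parent shell:   $$"
echo x | { echo "pipeline stage: $BASHPID"; } | cat
```

Running it prints two different PIDs, one per process, which is exactly the fork/setup cost the pipeline version pays once per sed.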
Here's my assessment:
$ jot -r 1000000 1 10000 | time sed 's/1/_/g' | time sed 's/2/_/g' | time sed 's/3/_/g' | time sed 's/4/_/g' >/dev/null
2.38 real 0.84 user 0.01 sys
2.38 real 0.84 user 0.01 sys
2.39 real 0.85 user 0.01 sys
2.39 real 0.85 user 0.01 sys
$ jot -r 1000000 1 10000 | time sed 's/1/_/g;s/2/_/g;s/3/_/g;s/4/_/g' >/dev/null
2.71 real 2.57 user 0.02 sys
$ jot -r 1000000 1 10000 | time sed 's/1/_/g;s/2/_/g;s/3/_/g;s/4/_/g' >/dev/null
2.71 real 2.56 user 0.02 sys
$ jot -r 1000000 1 10000 | time sed 's/1/_/g;s/2/_/g;s/3/_/g;s/4/_/g' >/dev/null
2.71 real 2.57 user 0.02 sys
$ jot -r 1000000 1 10000 | time sed 's/1/_/g;s/2/_/g;s/3/_/g;s/4/_/g' >/dev/null
2.74 real 2.57 user 0.02 sys
$ dc
.84 2* .85 2* + p
3.38
$
And since 3.38 > 2.57, less total CPU time is consumed if you use a single instance of sed.
Yes. You'll avoid the overhead of starting sed anew each time.
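For reference, the single-process version can be written in several equivalent forms: separate -e expressions, a semicolon-separated script (as in the timings above), or one script with embedded newlines. The patterns here are placeholders:

```shell
#!/bin/sh
var1="Some string of text"

# Multiple -e expressions.
echo "$var1" | sed -e 's/Some/Any/' -e 's/text/prose/'

# Semicolon-separated script.
echo "$var1" | sed 's/Some/Any/;s/text/prose/'

# One script with embedded newlines.
echo "$var1" | sed '
s/Some/Any/
s/text/prose/'
```

All three run a single sed process and apply the substitutions in order, so later expressions see the output of earlier ones, just as the pipelined version would.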