I was wondering how bad the performance impact would be for a program migrated from C to a shell script.
I have intensive I/O operations.
For example, in C I have a loop that reads from one file and writes into another, taking parts of each line at offsets that have no consistent relation to each other. I'm doing this using pointers. A really simple program.
In the shell script, to move through a line, I'm using ${var:(char):(num_bytes)}. After I finish processing each line, I just append it to another file with:
echo "$out" >> "$outFileName"
The program does something like:
while read line; do
out="$out${line:10:16}.${line:45:2}"
out="$out${line:106:61}"
out="$out${line:189:3}"
out="$out${line:215:15}"
...
echo "$out" >> "outFileName"
done < "$fileName"
The problem is, C takes like half a minute to process a 400MB file and the shell script takes 15 minutes.
I don't know if I'm doing something wrong or not using the right operator in the shell script.
Edit: I cannot use awk since there is no pattern to process the line.
I tried commenting out the echo "$out" >> "$outFileName" but it doesn't get much better. I think the problem is the ${line:106:61} operation. Any suggestions?
Thanks for your help.
C is by far the fastest of them all. Bash (the Bourne Again SHell) is itself a program written in C that has to interpret your script as it runs, and that extra layer of interpretation reduces speed. The same goes for any other shell.
How fast is Bash compared with C? Bash will be slower than C for the actual runtime. However, the use case for Bash isn't execution speed; it's the ease of gluing together other system commands and components.
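For example, a one-line pipeline that glues existing tools together does a surprising amount of work with almost no code (the file name and column position here are hypothetical):

# count how many records of each one-character type appear at column 80 of a fixed-width file
cut -c80 records.txt | sort | uniq -c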
Shell is interpreted, and that by itself means it cannot be as fast as a specially coded application, provided both are written by equally competent programmers. Beyond that, the answer depends on how you count the time: you should count development time as well as the execution time of a script.
I suspect, based on your description, that you're spawning off new processes in your shell script. If that's the case, then that's where your time is going. It takes a lot of OS resources to fork/exec a new process.
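That said, the loop in the question uses only bash builtins, so no process is forked per line; what it does do is reopen the output file on every iteration. A minimal sketch of a cheaper variant, reusing the offsets and variable names from the question (which are assumptions about the record layout), builds each output line from scratch and redirects once for the whole loop:

while IFS= read -r line; do
    # build one output record from fixed-width slices of the input line
    out="${line:10:16}.${line:45:2}${line:106:61}${line:189:3}${line:215:15}"
    printf '%s\n' "$out"
done < "$fileName" > "$outFileName"    # open the output file once, not per line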
As donitor and Dietrich suggested, I did a little research on the AWK language and, again, as they said, it was a total success. Here is a little example of the AWK program:
#!/bin/awk -f
# Runs once per input line: pick fixed-width fields and append the matching
# records to the file named by the second command-line argument.
{
    option = substr($0, 5, 9);
    if (option == "SOMETHING") {
        # map the one-character type code at column 80 to a two-digit code
        type = substr($0, 80, 1)
        if (type == "A") {
            type = "01";
        } else if (type == "B") {
            type = "02";
        } else if (type == "C") {
            type = "03";
        }
        print substr($0, 7, 3) substr($0, 49, 8) substr($0, 86, 8) type \
            substr($0, 568, 30) >> ARGV[2]
    }
}
And it works like a charm. It takes barely a minute to process a 500 MB file.
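For reference, assuming the script above is saved as process.awk (the script and file names here are placeholders), it can be made executable and run with the input file first and the output file second, since the print statement appends to the file named by ARGV[2]:

chmod +x process.awk
./process.awk inFileName outFileName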