I need to parse a large pipe-delimited file and count the records whose 5th column does and does not meet my criteria.
PS C:\temp> gc .\items.txt -readcount 1000 | `
? { $_ -notlike "HEAD" } | `
% { foreach ($s in $_) { $s.split("|")[4] } } | `
group -property {$_ -ge 256} -noelement | `
ft -autosize
This command does what I want, returning output like this:
  Count Name
  ----- ----
1129339 True
2013703 False
However, for a 500 MB test file this command takes about 5.5 minutes to run, as measured by Measure-Command. A typical file is over 2 GB, for which a 20+ minute wait is undesirably long.
Do you see a way to improve the performance of this command?
For example, is there a way to determine an optimal value for Get-Content's ReadCount? Without it, the same file takes 8.8 minutes to process.
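One way to probe this empirically might be to time a few candidate ReadCount values with Measure-Command over the read-and-split portion of the pipeline. A rough sketch; the candidate values are arbitrary and the pipeline is simplified for timing only:

# Time the read-and-split stage for several ReadCount values.
foreach ($rc in 100, 1000, 10000, 100000) {
    $t = Measure-Command {
        gc .\items.txt -ReadCount $rc |
            % { foreach ($s in $_) { $s.Split("|")[4] } } | Out-Null
    }
    "ReadCount {0,7}: {1:N1} s" -f $rc, $t.TotalSeconds
}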
Have you tried StreamReader? Get-Content adds considerable per-line object overhead, which makes it slow on files this large; reading the file with a raw StreamReader avoids most of that cost.
StreamReader class
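A minimal sketch of that approach, carrying over two assumptions from your pipeline: header records start with "HEAD", and the 5th column is numeric (your pipeline compares strings with -ge, so the [int] cast here may classify some values differently):

$reader = New-Object System.IO.StreamReader("C:\temp\items.txt")
$over = 0    # records whose 5th column is -ge 256
$under = 0   # everything else
try {
    while (($line = $reader.ReadLine()) -ne $null) {
        if ($line.StartsWith("HEAD")) { continue }   # skip header records (assumed prefix)
        # Compare numerically; [int] is an assumption about the column's contents.
        if ([int]($line.Split("|")[4]) -ge 256) { $over++ } else { $under++ }
    }
}
finally {
    $reader.Dispose()
}
"True: $over"
"False: $under"

This should run substantially faster than the pipeline version, since each line is handled as a plain .NET string instead of flowing through cmdlet binding.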