I have just found out that my script gives me a fatal error:
Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 440 bytes) in C:\process_txt.php on line 109
That line is this:
$lines = count(file($path)) - 1;
So I think it is having difficulty loading the file into memory and counting the number of lines. Is there a more efficient way I can do this without having memory issues?
The text files that I need to count the number of lines for range from 2MB to 500MB. Maybe a Gig sometimes.
Thanks all for any help.
If you are on a *nix system, you can call the command wc -l, which gives the number of lines in a file. wc ("word count") is one of the easiest and fastest ways to get the number of lines, words, and characters in a text file; run with no options it prints all three counts (the -l, -w, and -c options, respectively).
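If you want to call it from PHP, a minimal sketch might look like this (the file name is illustrative, and shelling out via shell_exec() is my own suggestion rather than part of the original answer):

// Sketch: shell out to wc -l and parse the count from its output.
// escapeshellarg() protects against special characters in the path.
$path = 'largefile.txt';
$output = shell_exec('wc -l ' . escapeshellarg($path));
$linecount = (int) trim($output); // wc prints e.g. "3550388 largefile.txt"
echo $linecount;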
This will use less memory, since it doesn't load the whole file into memory:
$file="largefile.txt"; $linecount = 0; $handle = fopen($file, "r"); while(!feof($handle)){ $line = fgets($handle); $linecount++; } fclose($handle); echo $linecount;
fgets() loads a single line into memory (if the second argument $length is omitted, it will keep reading from the stream until it reaches the end of the line, which is what we want). This is still unlikely to be as quick as using something other than PHP, if you care about wall time as well as memory usage.
The only danger with this is if any lines are particularly long (what if you encounter a 2GB file without line breaks?). In that case you're better off reading it in chunks and counting end-of-line characters:
$file="largefile.txt"; $linecount = 0; $handle = fopen($file, "r"); while(!feof($handle)){ $line = fgets($handle, 4096); $linecount = $linecount + substr_count($line, PHP_EOL); } fclose($handle); echo $linecount;
Using a loop of fgets() calls is a fine solution and the most straightforward to write, however:
- even though the file is read internally using an 8192-byte buffer, your code still has to call that function once for each line;
- it's technically possible that a single line may be bigger than the available memory, if you're reading a binary file.
This code reads the file in chunks of 8kB each and counts the number of newlines within each chunk.
function getLines($file)
{
    $f = fopen($file, 'rb');
    $lines = 0;
    while (!feof($f)) {
        // count the newlines in each 8kB chunk
        $lines += substr_count(fread($f, 8192), "\n");
    }
    fclose($f);
    return $lines;
}
If the average length of each line is at most 4kB, you will already start saving on function calls (each 8kB fread() then covers at least two lines on average), and those savings add up when you process big files.
I ran a test with a 1GB file; here are the results:
+-------------+-------------+------------------+---------+
|             | This answer | Dominic's answer | wc -l   |
+-------------+-------------+------------------+---------+
| Lines       | 3550388     | 3550389          | 3550388 |
+-------------+-------------+------------------+---------+
| Runtime (s) | 1.055       | 4.297            | 0.587   |
+-------------+-------------+------------------+---------+
Runtime is measured in seconds of real (wall-clock) time, i.e. the "real" figure reported by the Unix time command.
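If you want to reproduce this kind of comparison yourself, a rough harness using microtime() might look like the sketch below; timeIt() and the file name are my own illustration, and getLines() is the function defined above:

// Sketch: time a line-counting function on a given file.
function timeIt(callable $fn, string $file): array
{
    $start = microtime(true);
    $lines = $fn($file);
    return [$lines, microtime(true) - $start];
}

[$lines, $seconds] = timeIt('getLines', 'largefile.txt');
printf("getLines: %d lines in %.3f s\n", $lines, $seconds);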
While the above works well and returns the same results as wc -l, if the file ends without a newline the count will be off by one; if you care about this particular scenario, you can make it more accurate with this logic:
function getLines($file)
{
    $f = fopen($file, 'rb');
    $lines = 0;
    $buffer = '';
    while (!feof($f)) {
        $buffer = fread($f, 8192);
        $lines += substr_count($buffer, "\n");
    }
    fclose($f);
    // if the last chunk doesn't end with a newline, its final line
    // hasn't been counted yet
    if (strlen($buffer) > 0 && $buffer[-1] != "\n") {
        ++$lines;
    }
    return $lines;
}
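Usage is then just echo getLines('largefile.txt'); with your own path. One caveat: the $buffer[-1] negative string offset requires PHP 7.1 or later; on older versions, substr($buffer, -1) performs the same check.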