I have a big (300 kB) text file containing words delimited by spaces. Now I want to open this file and process every word in it one by one.
The problem is that Perl reads the file line by line, and since my file has no newlines it effectively reads the entire file at once, which gives me strange results. I know the normal way is to do something like
open(my $inFile, '<', 'tagged.txt') or die $!;
$_ = <$inFile>;
my @splitted = split(' ', $_);
print $#splitted; # last index, not the element count
But this gives me a faulty word count (is the array too large?).
Is it possible to read the text file word by word instead?
Instead of reading it in one fell swoop, try the line-by-line approach, which is also easier on your machine's memory (although 300 kB is not large for a modern computer).
use strict;
use warnings;
open (my $inFile, '<', 'tagged.txt') or die $!;
while (<$inFile>) {
    chomp;
    my @words = split ' ';
    foreach my $word (@words) {
        # process $word here
    }
}
close ($inFile);
To read the file one word at a time, change the input record separator ($/) to a space:
local $/ = ' ';
Example:
#!/usr/bin/perl
use strict;
use warnings;
use feature 'say';

{
    local $/ = ' ';
    while (<DATA>) {
        say;
    }
}
__DATA__
one two three four five
Output:
one
two
three
four
five