I have a requirement for which I have not yet found a solution or assistance on the forums...
The issue: I monitor real-time systems that generate a number of very large log files which roll over daily; 1 GB is not uncommon. What I do is look through the tail end of the logs for known error conditions or other strings that I want to monitor for, which I may need to take action on.
As this sort of work is both time-consuming and tedious, and it is easy for human error to miss issues within the logs, I have been automating this log file monitoring. I utilise a product called Servers Alive to perform scheduled checks, and I write scripts that monitor the log files for occurrences of log entries that may indicate problems with services; I can then call other scripts to restart services if necessary to resolve the issue that has been encountered.
I have previously done the scripting for these log monitoring tasks in Perl, and these scripts are very quick, but not necessarily the cleanest way to do this; I'm an admin rather than a programmer, so I don't have the developer methodologies or experience to rely on.
The Perl code snippet below shows that I open a log file $logfile, seek backwards from the end of the file by a given amount of data, and then search through the data from that point to the end of the file for the log entry I am interested in monitoring; in this example the log entry is "No packet received from EISEC Client".
This log entry indicates an issue with the EISEC service, and a simple restart of the service normally resolves it, all of which I do automatically utilising Servers Alive as the scheduled check and alerting mechanism.
Perl script function
sub checkEisecSrvloggedon {
    print "$logfile\n";
    if (open (EISECSRV_LOGGEDON, $logfile)) {
        # Position 40,000 bytes before the end of the file (whence 2 = SEEK_END)
        seek (EISECSRV_LOGGEDON, -40000, 2);
        # Discard the (probably partial) line we landed in the middle of
        $line = <EISECSRV_LOGGEDON>;
        $eisecsrvloggedon_ok = 0;
        while ($line = <EISECSRV_LOGGEDON>) {
            if ($line =~ /No packet received from EISEC Client/) {
                # Increment the match counter
                ++$eisecsrvloggedon_ok;
            }
        }
        close (EISECSRV_LOGGEDON);
    }
}
I would like to implement a solution for this in PowerShell, if possible, now that we have moved on to Windows Server 2008 R2 and Windows 7 clients, but I cannot find details of how I could do this efficiently: quickly and without a large memory overhead.
I have tried Get-Content based solutions, but the need to read the whole log file makes these types of solutions unusable, as it takes far too long to query the file. I need to be able to check a number of these large log files on a very regular basis, once every few minutes in some cases. I have seen tail-type solutions that are great for tailing the end of log files, and those scripts use System.IO.File methods. This does give the performance / speed I would like to achieve in my scripts, but I am not familiar enough with PowerShell to know how to use this approach to get quickly to the end of a large log file, then read backwards for a given amount of data, and then search that section for the relevant strings.
Does anyone have any ideas?
If you use PowerShell 3.0 or newer, you can use a combination of the get-content and select-string cmdlets to get the same functionality. Since version 3.0, get-content supports a -Tail parameter that returns only the last n lines of a file in an efficient way. Using this, you can reimplement the Perl script above as follows (searching the last 1000 lines):
# Returns the number of occurrences; @(...) forces an array so .Count is
# correct even when there are zero or one matches
@(get-content logfile.txt -Tail 1000 | select-string -pattern "No packet received from EISEC Client").Count
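The one-liner above can also be wrapped in a small function so the same check can be reused across several log files. This is a sketch only: the function name, log path, service name, and threshold below are illustrative assumptions, not part of any existing script.

```powershell
# Count occurrences of a pattern in the last $TailLines lines of a log file.
# Requires PowerShell 3.0+ for Get-Content -Tail.
function Get-LogTailMatchCount {
    param(
        [string]$Path,
        [string]$Pattern,
        [int]$TailLines = 1000
    )
    # @(...) forces an array so .Count is correct for 0 or 1 matches
    return @(Get-Content -Path $Path -Tail $TailLines |
             Select-String -Pattern $Pattern).Count
}

# Hypothetical scheduled check: restart the service when the error appears.
# 'EISECService' and the log path are assumed names.
# if ((Get-LogTailMatchCount -Path 'C:\EISEC\eisecsrv.log' `
#         -Pattern 'No packet received from EISEC Client') -gt 0) {
#     Restart-Service -Name 'EISECService'
# }
```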
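For the byte-based approach described in the question (seeking backwards a fixed number of bytes from the end, like the Perl seek call), the same System.IO methods the tail scripts use can be applied directly. The sketch below mirrors the Perl function under assumed names; Get-TailMatchCount is not an existing cmdlet.

```powershell
# Sketch: count occurrences of a pattern in roughly the last $TailBytes bytes
# of a large log file, without reading the whole file.
function Get-TailMatchCount {
    param(
        [string]$Path,
        [string]$Pattern,
        [int]$TailBytes = 40000
    )
    # Open with ReadWrite sharing so the logging process can keep writing
    $stream = [System.IO.File]::Open($Path,
        [System.IO.FileMode]::Open,
        [System.IO.FileAccess]::Read,
        [System.IO.FileShare]::ReadWrite)
    try {
        # Seek backwards from the end, like Perl's seek(FH, -40000, 2)
        $offset = [Math]::Max(0, $stream.Length - $TailBytes)
        $null = $stream.Seek($offset, [System.IO.SeekOrigin]::Begin)

        $reader = New-Object System.IO.StreamReader($stream)
        # Discard the (probably partial) line we landed in the middle of
        if ($offset -gt 0) { $null = $reader.ReadLine() }

        $count = 0
        while ($null -ne ($line = $reader.ReadLine())) {
            if ($line -match $Pattern) { $count++ }
        }
        return $count
    }
    finally {
        $stream.Dispose()
    }
}
```

Only the last $TailBytes of the file are ever read, so the cost per check stays constant no matter how large the log grows.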