I have a Java application that runs in AWS Elastic Container Service (ECS). The application polls a queue periodically. Sometimes there is no response from the queue and the application hangs forever. I have wrapped the methods in try-catch blocks and log any exceptions, yet after that point there is nothing in CloudWatch: no logs, no exceptions, no errors. Is there a way to identify this situation (no logs arriving in CloudWatch), for example by filtering on an error log pattern, so that I can restart the service? Any trick or solution would be appreciated.
public void handleProcess() {
    try {
        while (true) {
            Response response = QueueUitils.pollQueue(); // poll the queue
            QueueUitils.processMessage(response);
            TimeUnit.SECONDS.sleep(WAIT_TIME); // WAIT_TIME = 20
        }
    } catch (Exception e) {
        LOGGER.error("Data Queue operation failed", e); // pass the exception so the stack trace is logged
        throw e;
    }
}
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. In the navigation pane, choose Instances. Select the instance and choose Actions, Monitor and troubleshoot, Manage CloudWatch alarms. On the Manage CloudWatch alarms detail page, under Add or edit alarm, select Create an alarm.
To view your alarm history, log in to CloudWatch in the Amazon Web Services Management Console, choose Alarms from the menu at left, select your alarm, and click the History tab in the lower panel. There you will find a history of any state changes to the alarm as well as any modifications to the alarm configuration.
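The same history is also available programmatically. A minimal sketch with the AWS SDK for Java v2, assuming an alarm name of app-log-silence (a placeholder):

import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.AlarmHistoryItem;
import software.amazon.awssdk.services.cloudwatch.model.DescribeAlarmHistoryRequest;
import software.amazon.awssdk.services.cloudwatch.model.HistoryItemType;

public class AlarmHistoryExample {
    public static void main(String[] args) {
        try (CloudWatchClient cw = CloudWatchClient.create()) {
            // Fetch recent state transitions for the alarm (name is a placeholder)
            DescribeAlarmHistoryRequest request = DescribeAlarmHistoryRequest.builder()
                    .alarmName("app-log-silence")
                    .historyItemType(HistoryItemType.STATE_UPDATE)
                    .maxRecords(10)
                    .build();
            for (AlarmHistoryItem item : cw.describeAlarmHistory(request).alarmHistoryItems()) {
                System.out.println(item.timestamp() + " " + item.historySummary());
            }
        }
    }
}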
How alarm state is evaluated when data is missing: whenever an alarm evaluates whether to change state, CloudWatch attempts to retrieve a higher number of data points than the number specified as Evaluation Periods.
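For a log-silence alarm this setting matters: if the application stops logging entirely, there may be no data points at all, so missing data should be treated as breaching. A minimal sketch of the relevant setting, assuming the AWS SDK for Java v2 (the helper method and names are illustrative only; the remaining alarm settings are omitted here):

import software.amazon.awssdk.services.cloudwatch.model.PutMetricAlarmRequest;

public class MissingDataExample {
    // Valid values for treatMissingData are "breaching", "notBreaching", "ignore" and "missing".
    // "breaching" makes the alarm fire even when no data points arrive at all,
    // which is exactly the "no logs" situation described in the question.
    static PutMetricAlarmRequest.Builder treatMissingDataAsBreaching(PutMetricAlarmRequest.Builder alarm) {
        return alarm.treatMissingData("breaching");
    }
}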
Log in to the AWS Management Console and choose AWS Chatbot. Choose Configure new client and select either Amazon Chime or Slack. Select Slack and choose Configure. You will be asked to sign in to your workspace if you haven't already.
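For the notification path, AWS Chatbot forwards messages published to an Amazon SNS topic to the chosen Slack channel, so the alarm needs such a topic as an alarm action. A minimal sketch of creating the topic with the AWS SDK for Java v2 (the topic name is a placeholder):

import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.CreateTopicRequest;

public class AlarmTopicExample {
    public static void main(String[] args) {
        try (SnsClient sns = SnsClient.create()) {
            // Create the notification topic (the call returns the existing topic if the name is already taken)
            String topicArn = sns.createTopic(CreateTopicRequest.builder()
                    .name("ops-alerts")
                    .build())
                    .topicArn();
            // This ARN is what goes into the alarm's alarmActions(...), and it is the topic
            // that AWS Chatbot would be configured to forward to Slack.
            System.out.println("Alarm action topic: " + topicArn);
        }
    }
}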
AWS::CloudWatch::Alarm. The AWS::CloudWatch::Alarm type specifies an alarm and associates it with the specified metric or metric math expression. When this operation creates an alarm, the alarm state is immediately set to INSUFFICIENT_DATA. The alarm is then evaluated and its state is set appropriately.
If you omit the DatapointsToAlarm property, CloudWatch uses the same value that you set for EvaluationPeriods, and the alarm goes to ALARM state if that many consecutive periods are breaching. The Dimensions property lists the dimensions for the metric associated with the alarm.
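In other words, EvaluationPeriods and DatapointsToAlarm together give "M out of N" behaviour. A small illustrative sketch with the AWS SDK for Java v2 (the values and the helper method are examples only):

import software.amazon.awssdk.services.cloudwatch.model.Dimension;
import software.amazon.awssdk.services.cloudwatch.model.PutMetricAlarmRequest;

public class MOutOfNExample {
    // Fire when 2 of the last 3 periods breach; leaving datapointsToAlarm out
    // would require all 3 consecutive periods to breach.
    static PutMetricAlarmRequest.Builder twoOutOfThree(PutMetricAlarmRequest.Builder alarm) {
        return alarm
                .evaluationPeriods(3)
                .datapointsToAlarm(2)
                // Dimensions identify the specific metric, e.g. the log group for AWS/Logs metrics
                .dimensions(Dimension.builder()
                        .name("LogGroupName")
                        .value("/aws/lambda/test-log-silence")
                        .build());
    }
}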
An alarm in INSUFFICIENT_DATA state might simply reflect the normal behavior of a metric. There are two types of metrics based on how they are pushed to CloudWatch: period-driven and event-driven. Some services send periodic data points to their metrics, but specific metrics might have periods without data points.
An alarm can watch a metric in the same account. If you have enabled cross-account functionality in your CloudWatch console, you can also create alarms that watch metrics in other AWS accounts. Creating cross-account composite alarms is not supported.
You can do this with CloudWatch Alarms. I've set up a test Lambda function for this which runs every minute and logs to CloudWatch. Create an alarm on the IncomingLogEvents metric for the relevant log group (the log group to which your application is logging); in my case it's /aws/lambda/test-log-silence.
With reference to brads3290's answer, if you are using AWS CDK:
import * as cdk from '@aws-cdk/core';
import * as cloudwatch from '@aws-cdk/aws-cloudwatch';
// ...

// Average IncomingLogEvents over 5-minute periods for the application's log group
const metric = new cloudwatch.Metric({
  namespace: 'AWS/Logs',
  metricName: 'IncomingLogEvents',
  dimensions: { LogGroupName: '/aws/lambda/test-log-silence' },
  statistic: 'Average',
  period: cdk.Duration.minutes(5),
});

// Alarm when no log events arrive in a period; missing data also counts as breaching
const alarm = new cloudwatch.Alarm(this, 'Alarm', {
  metric,
  threshold: 0,
  comparisonOperator: cloudwatch.ComparisonOperator.LESS_THAN_OR_EQUAL_TO_THRESHOLD,
  evaluationPeriods: 1,
  datapointsToAlarm: 1,
  treatMissingData: cloudwatch.TreatMissingData.BREACHING,
});
Setting treatMissingData to BREACHING also solves the problem of missing data being ignored: if no data points arrive at all, the alarm still fires.
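For completeness, the same alarm can also be created from Java with the AWS SDK for Java v2. This is a minimal sketch, assuming the same log group as above; the commented-out SNS action is optional and its ARN is a placeholder:

import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.ComparisonOperator;
import software.amazon.awssdk.services.cloudwatch.model.Dimension;
import software.amazon.awssdk.services.cloudwatch.model.PutMetricAlarmRequest;
import software.amazon.awssdk.services.cloudwatch.model.Statistic;

public class LogSilenceAlarm {
    public static void main(String[] args) {
        try (CloudWatchClient cw = CloudWatchClient.create()) {
            cw.putMetricAlarm(PutMetricAlarmRequest.builder()
                    .alarmName("test-log-silence")
                    .namespace("AWS/Logs")
                    .metricName("IncomingLogEvents")
                    .dimensions(Dimension.builder()
                            .name("LogGroupName")
                            .value("/aws/lambda/test-log-silence")
                            .build())
                    .statistic(Statistic.AVERAGE)
                    .period(300)                        // 5 minutes, matching the CDK example
                    .evaluationPeriods(1)
                    .datapointsToAlarm(1)
                    .threshold(0.0)
                    .comparisonOperator(ComparisonOperator.LESS_THAN_OR_EQUAL_TO_THRESHOLD)
                    .treatMissingData("breaching")      // no data at all also triggers the alarm
                    // .alarmActions("arn:aws:sns:...") // optional: notify an SNS topic (placeholder ARN)
                    .build());
        }
    }
}

Once the alarm is in place, its ALARM state (or the notification it sends) can be used as the trigger for restarting the ECS service.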