I was trying to work with AWS Lambda using the awscli on an Ubuntu EC2 instance, and I do not have access to the AWS Console. Note that I am not using Serverless or Zappa; I directly zip my main.py file along with the dependency files, as mentioned here.
I invoke the function like this:
aws lambda invoke --function-name python-test --invocation-type RequestResponse outfile.txt
The errors written to the outfile are very vague and don't help with debugging; rather, they confuse me more. Using the admin's system, I am able to recognize the errors when I run a test in the console, but how can I check those logs using the awscli?
So I tried running
aws cloudwatch list-metrics > cloudwatch_logs.log
and, searching for the function name 'python-test' in the cloudwatch_logs.log file, I am able to find the Namespace, MetricName, and Dimensions for this function, but how do you access the logs?
Any help, with links to similar examples, is greatly appreciated!
You can view logs in the Lambda console, in the CloudWatch Logs console, or from the command line. The steps below retrieve them with the AWS CLI; for more information, see the AWS Lambda Developer Guide.
First, get the log group name:
aws logs describe-log-groups --query 'logGroups[*].logGroupName'
[
"/aws/lambda/MyFunction"
]
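If you have many log groups, you can narrow the list with the --log-group-name-prefix option (the prefix here assumes the default /aws/lambda/ naming that Lambda uses for its log groups):
aws logs describe-log-groups --log-group-name-prefix '/aws/lambda/' --query 'logGroups[*].logGroupName'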
Then, list the log streams for that log group:
aws logs describe-log-streams --log-group-name '/aws/lambda/MyFunction' --query 'logStreams[*].logStreamName'
[
"2018/02/07/[$LATEST]140c61ffd59442b7b8405dc91d708fdc"
]
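If you only care about the most recent invocation, you can let the CLI sort the streams for you instead of picking one by eye (this is just a variation on the command above):
aws logs describe-log-streams --log-group-name '/aws/lambda/MyFunction' --order-by LastEventTime --descending --max-items 1 --query 'logStreams[*].logStreamName'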
Finally, get the log events for that stream:
aws logs get-log-events --log-group-name '/aws/lambda/MyFunction' --log-stream-name '2018/02/07/[$LATEST]140c61ffd59442b7b8405dc91d708fdc'
{
"nextForwardToken": "f/33851760153448034063427449515194237355552440866456338433",
"events": [
{
"ingestionTime": 1517965421523,
"timestamp": 1517965421526,
"message": "START RequestId: bca9c478-0ba2-11e8-81db-4bccfc644168 Version: $LATEST\n"
},
{
"ingestionTime": 1517965424581,
"timestamp": 1517965424567,
"message": "END RequestId: bca9c478-0ba2-11e8-81db-4bccfc644168\n"
},
{
"ingestionTime": 1517965424581,
"timestamp": 1517965424567,
"message": "REPORT RequestId: bca9c478-0ba2-11e8-81db-4bccfc644168\tDuration: 3055.39 ms\tBilled Duration: 3100 ms \tMemory Size: 128 MB\tMax Memory Used: 35 MB\t\n"
}
],
"nextBackwardToken": "b/33851760085631457914695824538087252860391482425578356736"
}
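If you just want to see the output of a single invocation while debugging, you can also ask the invoke call itself for the tail of the log: with --log-type Tail the response includes a base64-encoded LogResult field containing the last 4 KB of the execution log, which you can decode directly (function name as in the question):
aws lambda invoke --function-name python-test --invocation-type RequestResponse --log-type Tail --query 'LogResult' --output text outfile.txt | base64 --decode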
jq flavor:
List AWS Lambda log group names (if the list is too big, you might want to filter it with grep, as shown below):
aws logs describe-log-groups | jq -r ".logGroups[].logGroupName"
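For example, to keep only the groups for one function (the grep pattern is just an illustration):
aws logs describe-log-groups | jq -r ".logGroups[].logGroupName" | grep python-test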
Then read the message property from the latest stream with:
LOG_GROUP_NAME="/aws/lambda/awesomeFunction"
LOG_STREAM_NAME=$(aws logs describe-log-streams --log-group-name "${LOG_GROUP_NAME}" | jq -r '.logStreams | sort_by(.creationTime) | .[-1].logStreamName')
aws logs get-log-events --log-group-name "${LOG_GROUP_NAME}" --log-stream-name "${LOG_STREAM_NAME}" | jq -r '.events[] | select(has("message")) | .message'
You might want to put this in a logs.sh file; a sketch follows below. If you want more or other streams, tweak the sort_by(.creationTime) | .[-1] part.
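A minimal logs.sh sketch along those lines, taking the function name as an argument (the default name is just a placeholder):
#!/usr/bin/env bash
# Usage: ./logs.sh [function-name]
set -euo pipefail

FUNCTION_NAME="${1:-awesomeFunction}"
LOG_GROUP_NAME="/aws/lambda/${FUNCTION_NAME}"

# Pick the most recently created stream for the function
LOG_STREAM_NAME=$(aws logs describe-log-streams --log-group-name "${LOG_GROUP_NAME}" \
  | jq -r '.logStreams | sort_by(.creationTime) | .[-1].logStreamName')

# Print only the log messages from that stream
aws logs get-log-events --log-group-name "${LOG_GROUP_NAME}" --log-stream-name "${LOG_STREAM_NAME}" \
  | jq -r '.events[] | select(has("message")) | .message'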
Using the AWS CLI can be a bit irritating because the stream name changes as you modify your function. I've found that using awslogs (https://github.com/jorgebastida/awslogs) is a nicer workflow.
List the groups:
awslogs groups
Filter the results:
awslogs groups|grep myfunction
Then get the logs from the group.
awslogs get /aws/lambda/ShortenStack-mutationShortenLambdaBC1758AD-6KW0KAD3TYVE
It defaults to the last 5 minutes, but you can add the -s parameter to choose a time range, e.g. -s 10m for the last 10 minutes.
The output is colourised if you're at the terminal, or plain if you're piping it through other commands, e.g. grep to find something.
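A couple of variations I've found handy (the function name is just an example, and the flags assume a reasonably recent awslogs version):
awslogs get /aws/lambda/python-test --start='1h'      # everything from the last hour
awslogs get /aws/lambda/python-test --watch           # keep tailing new events, like tail -f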