
create_export_task returns success but does not export data to s3 from cloudwatch

I have logs in CloudWatch which I want to store on S3 every day. I am using AWS Lambda to achieve this.

I created a function on AWS Lambda and use a CloudWatch event as the trigger, which created an event rule in CloudWatch. Now when I execute this Lambda function, it executes successfully and a file named 'aws-log-write-test' gets created in the S3 bucket, but there is no other data or file in the bucket. The file contains the text 'Permission Check Successful'.

This is my Lambda function:

import boto3
import collections
from datetime import datetime, date, time, timedelta

region = 'us-west-2'

def lambda_handler(event, context):
    yesterday = datetime.combine(date.today()-timedelta(1),time())
    today = datetime.combine(date.today(),time())
    unix_start = datetime(1970,1,1)
    client = boto3.client('logs')
    response = client.create_export_task(
        taskName='export_cw_to_s3',
        logGroupName='ABC',
        logStreamNamePrefix='ABCStream',
        fromTime=int((yesterday-unix_start).total_seconds()),
        to=int((today-unix_start).total_seconds()),
        destination='abc-logs',
        destinationPrefix='abc-logs-{}'.format(yesterday.strftime("%Y-%m-%d"))
    )
    return 'Response from export task at {} :\n{}'.format(datetime.now().isoformat(),response)

This is the response when I execute the Lambda function:

Response from export task at 2018-01-05T10:57:42.441844 :\n{'ResponseMetadata': {'RetryAttempts': 0, 'HTTPStatusCode': 200, 'RequestId': 'xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx', 'HTTPHeaders': {'x-amzn-requestid': 'xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx', 'date': 'Fri, 05 Jan 2018 10:57:41 GMT', 'content-length': '49', 'content-type': 'application/x-amz-json-1.1'}}, u'taskId': u'xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx'}

START RequestId: xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx Version: $LATEST
END RequestId: xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx
REPORT RequestId: xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx   Duration: 1418.13 ms    Billed Duration: 1500 ms    Memory Size: 128 MB Max Memory Used: 36 MB

1 Answer

According to the documentation for create_export_task, the fromTime and to timestamps must be expressed in milliseconds, so multiply the resulting number of seconds by 1000:

fromTime=int((yesterday-unix_start).total_seconds() * 1000),
to=int((today-unix_start).total_seconds() * 1000),
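
Applied to the handler from the question, the fix looks like this (a sketch that keeps the placeholder names ABC, ABCStream and abc-logs from the question):

import boto3
from datetime import datetime, date, time, timedelta

def lambda_handler(event, context):
    # Midnight boundaries for yesterday and today
    yesterday = datetime.combine(date.today() - timedelta(1), time())
    today = datetime.combine(date.today(), time())
    unix_start = datetime(1970, 1, 1)
    client = boto3.client('logs')
    response = client.create_export_task(
        taskName='export_cw_to_s3',
        logGroupName='ABC',
        logStreamNamePrefix='ABCStream',
        # create_export_task expects epoch timestamps in milliseconds
        fromTime=int((yesterday - unix_start).total_seconds() * 1000),
        to=int((today - unix_start).total_seconds() * 1000),
        destination='abc-logs',
        destinationPrefix='abc-logs-{}'.format(yesterday.strftime("%Y-%m-%d"))
    )
    return 'Export task started at {}: {}'.format(datetime.now().isoformat(), response['taskId'])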

Also, make sure you have attached an appropriate bucket policy that allows the CloudWatch Logs service to check the bucket ACL and put objects into your bucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "logs.us-west-2.amazonaws.com"
            },
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::abc-logs"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "logs.us-west-2.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::abc-logs/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        }
    ]
}
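
For reference, the policy can also be attached programmatically; here is a minimal sketch using boto3's put_bucket_policy (the file name bucket-policy.json is hypothetical, and abc-logs is the bucket from the question):

import boto3

# Load the policy document shown above (saved locally as a hypothetical
# bucket-policy.json) and attach it to the export destination bucket
with open('bucket-policy.json') as f:
    policy = f.read()

s3 = boto3.client('s3')
s3.put_bucket_policy(Bucket='abc-logs', Policy=policy)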

You are trying to create a different folder in your bucket for each day, to keep the daily exports separate from each other, which is a good idea:

destinationPrefix='abc-logs-{}'.format(yesterday.strftime("%Y-%m-%d"))

But it is not possible to use a timestamp in the policy JSON, so change the Resource ARN to the following, which allows s3:PutObject under all newly created destination folders:

"Resource":"arn:aws:s3:::abc-logs/*"