I've been doing some AWS tinkering for a project that pulls in a decent amount of data. Most of the services have been super cheap; however, CloudWatch log storage is dominating the bill at $13 of the total $18. I'm already deleting logs as I go.
How do I get rid of the logs from storage (removing the log groups from the console doesn't seem to do it), or lower the cost of the logs (this post indicated it should be $0.03/GB, but mine works out to more than that), or something else?
What strategies are people using?
To reduce costs, delete unnecessary dashboards. If you're using the AWS Free Tier, keep your total number of dashboards to three or fewer, and keep the total number of metrics across all dashboards under 50.
Monthly CloudWatch charges in AWS's pricing example come to about $21 per month. Once you exceed 10,000 total metrics, volume pricing tiers apply - see the metrics pricing table for details.
Can you tell us how many logs/hour you are pushing?
One thing I've learned over the years is that while multi-level logging is nice (Debug, Info, Warn, Error, Fatal), it has two serious drawbacks: even when you only keep Warn, Error, and Fatal, the Debug and Info calls are all still evaluated at runtime, and the volume you end up shipping and storing adds up fast.
For the record, I've paid over $1000/mo for logging on previous projects. PCI compliance for security audits requires 2 years of logs, and we were sending 1000s of log entries per second.
I even gave talks about how you should be logging everything in context:
http://go-talks.appspot.com/github.com/eduncan911/go-slides/gologit.slide#1
I have since backed away from this stance after benchmarking my applications and funcs, and weighing the overall cost of labor and log storage in production.
I now only log the minimum (errors), and use packages that skip the evaluation at runtime if the log level is not set, such as Google's glog.
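For example, here's a minimal sketch (my own illustration, not from the linked slides) of gating an expensive debug message behind glog's V() check, so the message is never even built unless you run with -v=2 or higher:

```go
package main

import (
	"flag"
	"fmt"

	"github.com/golang/glog"
)

// expensiveDump stands in for a costly debug-only computation.
func expensiveDump() string {
	return fmt.Sprintf("%v", make([]int, 1024))
}

func main() {
	flag.Parse() // glog registers its -v and -logtostderr flags

	// Guarded call: expensiveDump() only runs when started with -v=2 or higher.
	if glog.V(2) {
		glog.Infof("debug dump: %s", expensiveDump())
	}

	glog.Error("errors are always written, regardless of -v")
	glog.Flush()
}
```

Run it with `-logtostderr -v=2` to see the debug line; without the flag you only pay for the error.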
Also, since moving to Go development, I have adopted a strategy of very small amounts of code (e.g. microservices and packages) and dedicated CLI utils, which negates the need for lots of Debug and Info statements in monolithic stacks - I can just log the RPC to/from each service instead (see the sketch below). Better yet - just monitor the event bus.
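As a rough illustration (hypothetical handler names, plain net/http rather than any particular RPC framework), the logging can live in a single middleware at the service boundary:

```go
package main

import (
	"log"
	"net/http"
	"time"
)

// logRPC wraps a handler and emits one line per request: method, path, duration.
// Nothing inside the handlers needs its own Debug/Info logging.
func logRPC(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		log.Printf("%s %s took %s", r.Method, r.URL.Path, time.Since(start))
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/work", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8080", logRPC(mux)))
}
```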
Finally, with unit tests of these small services you can be confident in how your code behaves - you don't need those Info and Debug statements, because your tests exercise the good and bad input conditions. Those Info and Debug statements can go inside your unit tests (see the test sketch below), leaving your code free of cross-cutting concerns.
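Something like this hypothetical table-driven test (ParsePort is a made-up helper for illustration):

```go
package parse

import (
	"strconv"
	"testing"
)

// ParsePort is a stand-in helper; imagine it living in the package under test.
func ParsePort(s string) (int, error) {
	return strconv.Atoi(s)
}

// TestParsePort records the good and bad inputs directly, so the
// production code needs no Info/Debug statements to demonstrate them.
func TestParsePort(t *testing.T) {
	cases := []struct {
		name    string
		in      string
		want    int
		wantErr bool
	}{
		{"valid port", "8080", 8080, false},
		{"empty input", "", 0, true},
		{"not a number", "abc", 0, true},
	}
	for _, c := range cases {
		got, err := ParsePort(c.in)
		if (err != nil) != c.wantErr {
			t.Errorf("%s: unexpected error result: %v", c.name, err)
		}
		if got != c.want {
			t.Errorf("%s: got %d, want %d", c.name, got, c.want)
		}
	}
}
```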
All of this basically reduces your logging needs in the end.
How are you shipping your logs?
If you are not able to exclude all of the Debug, Info, and other lines, another idea is to filter your logs before you ship them, using sed, awk, or similar to pipe them to another file.
When you need to debug something, that's when you change the sed/awk filter and ship the extra log info. When you're done debugging, go back to filtering and only ship the minimum, like exceptions and errors.
There are 2 components to the price you pay:
1) ingestion costs: you pay when you send/upload the logs
2) storage costs: you pay to keep the logs around.
The storage cost is very low ($0.03/GB), so I'm guessing that's not the issue - i.e. the increased usage is a red herring, since storage accounts for only a few cents of the total CloudWatch bill. You are paying for ingestion when it happens. The only real way to reduce that is to reduce the amount of logging you are doing and/or stop using CloudWatch.
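To put rough numbers on it (assuming the commonly cited $0.50/GB ingestion price for CloudWatch Logs in us-east-1; your region may differ): $13 of log charges at $0.50/GB works out to roughly 26 GB ingested that month, while storing those 26 GB costs only about 26 GB x $0.03/GB, or roughly $0.78/month. That's why deleting log groups barely moves the bill - the money was already spent at ingestion time.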
https://aws.amazon.com/cloudwatch/pricing/