The documentation on Pub/Sub pricing is very minimal. Can someone explain the costs for the scenario below?
There is a single publisher app and two Dataflow pipeline subscriptions.
My very rough estimate, and the specific questions I have, are below.
For comparison, Pub/Sub Lite storage pricing works differently: storing 3375 GiB costs $135 per zone in North America (3375 GiB × $0.04 per GiB-month per zone). For a regional Lite topic, the data is stored in two zones, so the storage cost doubles to $270.
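Here is that arithmetic as a small Python sketch. The 3375 GiB figure and the $0.04 rate come from the example above; the two-zone factor is the regional Lite topic default it describes. Check the current pricing page before relying on these numbers.

```python
# Pub/Sub Lite storage cost sketch, using the example's figures.
STORED_GIB = 3375
RATE_PER_GIB_MONTH_ZONE = 0.04  # North America list price in the example
ZONES = 2                       # a regional Lite topic stores data in two zones

per_zone = STORED_GIB * RATE_PER_GIB_MONTH_ZONE
total = per_zone * ZONES
print(f"per zone: ${per_zone:,.2f}, regional total: ${total:,.2f}")
# per zone: $135.00, regional total: $270.00
```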
Pub/Sub allows services to communicate asynchronously, with latencies on the order of 100 milliseconds.
Google Cloud Pub/Sub provides reliable, many-to-many, asynchronous messaging between applications. Publisher applications send messages to a "topic", and other applications subscribe to that topic to receive them.
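As a minimal illustration of that model, publishing a message with the official Python client looks roughly like this; "my-project" and "my-topic" are placeholder IDs:

```python
from google.cloud import pubsub_v1

# Placeholder project and topic IDs for illustration.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "my-topic")

# publish() is asynchronous and returns a future; .result() blocks
# until the server acknowledges and returns the message ID.
future = publisher.publish(topic_path, data=b"example payload")
print(f"Published message ID: {future.result()}")
```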
As for Dataflow pricing: Dataflow jobs are billed per second, based on the actual use of Dataflow batch or streaming workers. Additional resources, such as Cloud Storage or Pub/Sub, are each billed at that service's own rates.
You do not pay for acknowledgements in Google Cloud Pub/Sub, only for publishes, pulls, and pushes. With messages of 0.5 KB, the amount you are charged depends on batching, because each request is billed at a minimum size of 1 KB. If every request carries at least 1 KB, the total cost for publishing and delivering messages to two subscribers would be:
1 TB/day * 30 days * 3 (one publish + two subscriber pulls) = 92,160 GB/month
First 10 GB free + 92,150 GB * $0.04/GB = $3,686/month
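The same estimate as a runnable sketch, with the ×3 delivery factor and the 10 GB free tier made explicit; the $0.04/GB rate is the one used above:

```python
# Monthly Pub/Sub cost estimate: 1 TB/day published, delivered to two subscribers.
TB_PER_DAY = 1
DAYS = 30
DELIVERIES = 3          # 1 publish + 2 Dataflow subscription pulls
FREE_GB = 10            # free tier per month
RATE_PER_GB = 0.04

total_gb = TB_PER_DAY * 1024 * DAYS * DELIVERIES   # 92,160 GB/month
billable_gb = max(total_gb - FREE_GB, 0)           # 92,150 GB
cost = billable_gb * RATE_PER_GB
print(f"{total_gb:,} GB/month -> ${cost:,.2f}")    # 92,160 GB/month -> $3,686.00
```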
If some messages are not batched, the price can go up because of the 1 KB minimum: an unbatched 0.5 KB message is billed as a full 1 KB. The Google Cloud Pub/Sub client library batches published messages by default, so unless your messages are published very sporadically (too infrequently for batching to kick in), each request should exceed 1 KB and the estimate above holds. At this data volume, you will most likely get batching on the subscribe side as well.
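To illustrate, the Python client lets you tune when a batch is flushed. The thresholds below are illustrative values, not recommendations, and the project/topic IDs are placeholders:

```python
from google.cloud import pubsub_v1

# Tune client-side batching so each request comfortably exceeds the
# 1 KB billing minimum. A batch is flushed when ANY threshold is hit.
batch_settings = pubsub_v1.types.BatchSettings(
    max_messages=100,     # flush after 100 messages...
    max_bytes=10 * 1024,  # ...or 10 KB of payload...
    max_latency=0.05,     # ...or 50 ms, whichever comes first
)
publisher = pubsub_v1.PublisherClient(batch_settings)
topic_path = publisher.topic_path("my-project", "my-topic")

for _ in range(1000):
    # 0.5 KB messages get grouped into requests well above 1 KB.
    publisher.publish(topic_path, data=b"x" * 512)
```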