This question is more about best practices when developing a web service, so it may be a bit vague.
Let's say my service uses the Spring container, which creates a standard controller object for all requests. In my controller I inject an instance of the DynamoDB mapper, created once in the Spring container: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBMapper.OptionalConfig.html
Question:
Shouldn't we create a pool of DynamoDB client objects and mappers so that parallel requests to the service are served from the pool? Or should we inject the same (or a new) instance of the DynamoDB mapper object for all requests? Why don't we use something like C3P0 for DynamoDB connections?
For example, although the default limit is 50 connections, a Linux client should be able to sustain 5,000 open connections. Will Amazon allow this?
There is a pretty significant difference between how a relational database works and how DynamoDB works.
With a typical relational database engine such as MySQL, PostgreSQL, or MSSQL, each client application instance is expected to establish a small number of connections to the engine and keep them open while the application is in use. When parts of the application need to interact with the database, they borrow a connection from the pool, use it to make a query, and release it back to the pool. This makes efficient use of the connections, removes the overhead of setting up and tearing down connections, and reduces the thrashing that results from repeatedly creating and releasing connection objects.
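To make that borrow/use/release cycle concrete, here is a minimal, generic pool sketch using only the JDK. It is an illustration of the pattern, not a real connection pool; the element type stands in for whatever driver connection a real pool (C3P0, HikariCP, etc.) would manage:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy fixed-size pool illustrating borrow/release semantics.
class SimplePool<T> {
    private final BlockingQueue<T> available;

    SimplePool(Iterable<T> resources, int capacity) {
        available = new ArrayBlockingQueue<>(capacity);
        for (T r : resources) available.add(r);
    }

    // Blocks until a resource is free, like a real pool under load.
    T borrow() {
        try {
            return available.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("Interrupted while waiting for a pooled resource", e);
        }
    }

    // Returns the resource so other callers can reuse it.
    void release(T resource) {
        available.offer(resource);
    }
}
```

The point is that the expensive resource (the open TCP connection) outlives any single query, which is exactly the assumption that does not hold for DynamoDB's request/response model.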
Now, switching over to DynamoDB, things look a bit different. You no longer have persistent connections from the client to a database server. When you execute a DynamoDB operation (query, scan, etc.), it's an HTTP request/response, which means the connection is established ad hoc and lasts only for the duration of the request. DynamoDB is a web service, and it takes care of load balancing and routing to give you consistent performance regardless of scale. In this case it is generally better for applications to use a single DynamoDB client object per instance and let the client and the associated service-side infrastructure take care of the load balancing and routing.
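In Spring terms, "a single client object per instance" just means defining the client and mapper as beans, which are singletons by default. A sketch using the AWS SDK for Java v1 (the SDK that provides `DynamoDBMapper`, per the docs linked in the question):

```java
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DynamoConfig {

    // One thread-safe client shared by all controllers; Spring beans
    // are singleton-scoped by default, so this is created once.
    @Bean
    public AmazonDynamoDB amazonDynamoDB() {
        return AmazonDynamoDBClientBuilder.standard().build();
    }

    // DynamoDBMapper is also thread-safe, so a single shared instance
    // injected into every controller is fine.
    @Bean
    public DynamoDBMapper dynamoDBMapper(AmazonDynamoDB client) {
        return new DynamoDBMapper(client);
    }
}
```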
Now, the DynamoDB client for your stack (i.e. the Java client, .NET client, JavaScript/Node.js client, etc.) will typically make use of an underlying HTTP client that is pooled, mostly to minimize the cost of creating and tearing down those connections. You can tweak some of those settings, and in some cases provide your own HTTP client pool implementation, but usually that is not needed.
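For completeness, this is roughly what tweaking the underlying HTTP pool looks like in the Java v1 SDK, where `ClientConfiguration` exposes the connection limit (the default is 50, which is the number mentioned in the question). The value 100 here is just an illustrative choice:

```java
import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

// Raise the max size of the SDK's internal HTTP connection pool.
ClientConfiguration config = new ClientConfiguration()
        .withMaxConnections(100); // SDK default is 50

AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
        .withClientConfiguration(config)
        .build();
```

So the pooling you would normally get from something like C3P0 is already handled inside the SDK's HTTP layer; you configure it rather than build it.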