
Is it safe to keep database connections open for a long time?

I have a .NET client application that is connected to a remote database. Is it safe to keep a single connection open for the lifetime of the client (hours)?

Does the answer hold if I have multiple (10 or 100) clients running?

asked Nov 23 '08 by DanJ


2 Answers

Absolutely, it is safe to do this. This is how client-server applications work. If you are using a three-tier application, the application server will keep a pool of connections open anyway.

Scalability is an issue, or at least it used to be in the days when machines had less memory than modern kit. With a two-tier (client-server) application, each client opened a connection and held it open. This had several effects:

  • Memory was used per-connection, so large numbers of (relatively) idle connections would use up machine memory. However, a modern 64-bit server can have tens or hundreds of GB of memory, so it could support a very large number of such connections.

  • If a transaction was left uncommitted on the client machine, its locks would be held for as long as the transaction stayed open. This led to a class of problems where someone could start a transaction, go off to lunch and forget they had left something open, locking any records referenced by the transaction for hours at a time (see the sketch after this list).

  • Transactions could, however, easily cover multiple accesses to the database, which is harder to do with a connection pool.
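To make the locking bullet concrete, here is a small anti-pattern sketch in C#; the server, database, and table names are invented, and the blocking behaviour described assumes SQL Server's default READ COMMITTED isolation:

```csharp
using System.Data.SqlClient;

// Anti-pattern sketch (names are illustrative): a client-side transaction
// opened and then abandoned.
var conn = new SqlConnection("Server=dbhost;Database=Sales;Integrated Security=true");
conn.Open();
var tx = conn.BeginTransaction();

var cmd = new SqlCommand("UPDATE Orders SET Status = 'Held' WHERE Id = 42", conn, tx);
cmd.ExecuteNonQuery();

// ... the user goes to lunch. tx.Commit() is never called, so the exclusive
// lock on that Orders row is held, and any other session touching the row
// blocks, until the transaction ends or the connection is finally closed.
```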

A pooled architecture, which is common in three-tier systems, keeps a finite number of connections open between the application server and the database. Queries simply use the next available connection, and updates are committed immediately. This uses fewer resources, since only a finite number of connections are open, and (in conjunction with an optimistic concurrency strategy) it eliminates a large class of potential application concurrency issues.
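In ADO.NET this pattern comes almost for free, because closing or disposing a SqlConnection returns the underlying physical connection to the pool rather than tearing it down. A minimal sketch, with an invented connection string, table, and method name:

```csharp
using System.Data.SqlClient;

// Hypothetical query method: open late, close early, and let the pool do
// the real work of keeping physical connections alive.
static string GetCustomerName(int customerId)
{
    const string connStr = "Server=dbhost;Database=Sales;Integrated Security=true";
    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand(
        "SELECT Name FROM Customers WHERE Id = @id", conn))
    {
        cmd.Parameters.AddWithValue("@id", customerId);
        conn.Open();                        // taken from the pool if one is free
        return (string)cmd.ExecuteScalar(); // null if no matching row
    }                                       // Dispose() returns the connection to the pool
}
```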

In order to use long transactions (i.e. transactions that cover more than one call to the database), one has to decouple the transaction from the connection. This is the basic architecture of a TP monitor, and there are standard protocols such as XA or OLE Transactions to support it. If this type of externally managed transaction is unavailable, the application has to construct a compensating transaction that undoes the changes made by its own transaction. This type of architecture is often used by workflow management systems.
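In the .NET world, System.Transactions provides this kind of externally managed transaction. The sketch below is illustrative (server, database, and schema names are invented): two connections enlist in one ambient transaction, and because they point at different servers, .NET Framework escalates the transaction to a distributed one coordinated by MSDTC over OLE Transactions.

```csharp
using System.Data.SqlClient;
using System.Transactions;

// Sketch of a transaction decoupled from any single connection.
static void TransferFunds(int fromId, int toId, decimal amount)
{
    using (var scope = new TransactionScope())
    {
        using (var conn = new SqlConnection("Server=dbA;Database=Ledger;Integrated Security=true"))
        using (var cmd = new SqlCommand(
            "UPDATE Accounts SET Balance = Balance - @amt WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@amt", amount);
            cmd.Parameters.AddWithValue("@id", fromId);
            conn.Open();            // enlists in the ambient transaction automatically
            cmd.ExecuteNonQuery();
        }

        using (var conn = new SqlConnection("Server=dbB;Database=Ledger;Integrated Security=true"))
        using (var cmd = new SqlCommand(
            "UPDATE Accounts SET Balance = Balance + @amt WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@amt", amount);
            cmd.Parameters.AddWithValue("@id", toId);
            conn.Open();            // second connection joins the same transaction
            cmd.ExecuteNonQuery();
        }

        scope.Complete();           // both updates commit together; if this line
    }                               // is never reached, both are rolled back
}
```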

answered Sep 22 '22 by ConcernedOfTunbridgeWells


Open and close your connection per business operation

If you are talking about a client/server application, I would recommend closing each connection as soon as you are done using it. While each individual application instance might take a small performance hit opening the connection, your application as a whole will scale better. This is somewhat dependent on the database server you are using: SQL Server will handle different numbers of concurrent connections depending on the hardware it is installed on. If you want to scale a client/server app up to thousands of desktops, a small DB server might not handle all of those desktops holding connections open, but it could very well handle thousands of desktops with only some of the connections open at any one time.
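With ADO.NET, the "only some of the connections open" behaviour can also be capped explicitly in the connection string. The pooling keywords below are standard SqlClient settings, though the host name and values are just for illustration:

```csharp
// Illustrative SqlClient connection string (values are arbitrary). Each
// client process keeps at most 10 physical connections, and idle pooled
// connections are eventually closed by the pooler, so thousands of mostly
// idle desktops do not pin thousands of server-side connections.
const string connStr =
    "Server=dbhost;Database=Sales;Integrated Security=true;" +
    "Pooling=true;Min Pool Size=0;Max Pool Size=10";
```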

I saw this first hand a few years ago. An application that had been deployed to a few departments with no trouble was then deployed across the entire organization, and it soon became very, very slow. The organization was considering buying a very expensive chunk of hardware for its DB server to regain some performance. I recommended they open and close the DB connection around each business operation. Luckily they had architected the application so that this was not a difficult change; they made it and rolled it out during one of their weekly network updates. Overnight, application performance improved significantly, and they saved thousands of dollars.

answered Sep 24 '22 by Jason Jackson