I'm building a multi-tenant REST server application with Spring Boot 2.x, Hibernate 5.x, Spring Data REST and MySQL 5.7. Spring Boot 2.x uses HikariCP for connection pooling.
I'm going to use a database-per-tenant approach, so every tenant will have its own database.
I created my MultiTenantConnectionProvider in this way:
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.validation.constraints.NotNull;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.hibernate.engine.jdbc.connections.spi.MultiTenantConnectionProvider;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Component;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// TenantRestClient, PasswordEncrypt, Database, DatabaseInstance and
// TenantIdResolver are application classes (imports omitted).
@Component
@Profile("prod")
public class MultiTenantConnectionProviderImpl implements MultiTenantConnectionProvider {

    private static final long serialVersionUID = 3193007611085791247L;

    private Logger log = LogManager.getLogger();

    // One Hikari pool per tenant, created lazily and cached
    private Map<String, HikariDataSource> dataSourceMap = new ConcurrentHashMap<>();

    @Autowired
    private TenantRestClient tenantRestClient;

    @Autowired
    private PasswordEncrypt passwordEncrypt;

    @Override
    public void releaseAnyConnection(Connection connection) throws SQLException {
        connection.close();
    }

    @Override
    public Connection getAnyConnection() throws SQLException {
        return getDataSource(TenantIdResolver.TENANT_DEFAULT).getConnection();
    }

    @Override
    public Connection getConnection(String tenantId) throws SQLException {
        return getDataSource(tenantId).getConnection();
    }

    @Override
    public void releaseConnection(String tenantId, Connection connection) throws SQLException {
        log.info("releaseConnection {}", tenantId);
        connection.close();
    }

    @Override
    public boolean supportsAggressiveRelease() {
        return false;
    }

    @Override
    public boolean isUnwrappableAs(Class unwrapType) {
        return false;
    }

    @Override
    public <T> T unwrap(Class<T> unwrapType) {
        return null;
    }

    public HikariDataSource getDataSource(@NotNull String tenantId) throws SQLException {
        // computeIfAbsent is atomic, unlike a containsKey/put sequence, so two
        // concurrent requests cannot race and create two pools for one tenant
        try {
            return dataSourceMap.computeIfAbsent(tenantId, id -> {
                try {
                    return createDataSource(id);
                } catch (SQLException e) {
                    throw new IllegalStateException(e);
                }
            });
        } catch (IllegalStateException e) {
            throw new SQLException(e.getCause());
        }
    }

    public HikariDataSource createDataSource(String tenantId) throws SQLException {
        log.info("Create datasource for tenant {}", tenantId);
        try {
            Database database = tenantRestClient.getDatabase(tenantId);
            DatabaseInstance databaseInstance = tenantRestClient.getDatabaseInstance(tenantId);
            if (database == null || databaseInstance == null) {
                throw new SQLException(String.format("DB not found for tenant %s!", tenantId));
            }
            HikariConfig hikari = new HikariConfig();
            String driver;
            String options;
            switch (databaseInstance.getType()) {
                case MYSQL:
                default: // fall back to MySQL for unknown types
                    driver = "jdbc:mysql://";
                    options = "?useLegacyDatetimeCode=false&serverTimezone=UTC&useUnicode=yes&characterEncoding=UTF-8&useSSL=false";
                    break;
            }
            hikari.setJdbcUrl(driver + databaseInstance.getHost() + ":" + databaseInstance.getPort() + "/" + database.getName() + options);
            hikari.setUsername(database.getUsername());
            hikari.setPassword(passwordEncrypt.decryptPassword(database.getPassword()));
            // MySQL optimizations, see
            // https://github.com/brettwooldridge/HikariCP/wiki/MySQL-Configuration
            hikari.addDataSourceProperty("cachePrepStmts", true);
            hikari.addDataSourceProperty("prepStmtCacheSize", "250");
            hikari.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
            hikari.addDataSourceProperty("useServerPrepStmts", "true");
            hikari.addDataSourceProperty("useLocalSessionState", "true");
            hikari.addDataSourceProperty("useLocalTransactionState", "true");
            hikari.addDataSourceProperty("rewriteBatchedStatements", "true");
            hikari.addDataSourceProperty("cacheResultSetMetadata", "true");
            hikari.addDataSourceProperty("cacheServerConfiguration", "true");
            hikari.addDataSourceProperty("elideSetAutoCommits", "true");
            hikari.addDataSourceProperty("maintainTimeStats", "false");
            hikari.setMinimumIdle(3);
            hikari.setMaximumPoolSize(5);
            hikari.setIdleTimeout(30000);
            hikari.setPoolName("JPAHikari_" + tenantId);
            // keep connections shorter-lived than MySQL wait_timeout (600 s)
            hikari.setMaxLifetime(580000);
            hikari.setLeakDetectionThreshold(60 * 1000);
            return new HikariDataSource(hikari);
        } catch (SQLException e) {
            throw e;
        } catch (Exception e) {
            // preserve the original exception as the cause
            throw new SQLException(e);
        }
    }
}
In my implementation I read the tenantId and fetch information about the database instance from a central management system. I create a new pool for each tenant and cache it, in order to avoid recreating the pool on every request.
I read this interesting question, but my question is quite different. I'm planning to use AWS (both for the server instance and the RDS DB instance).
Let's hypothesize a concrete scenario in which I have 100 tenants. The application is management/point-of-sale software and will be used only by agents. Let's say each tenant has an average of 3 agents working concurrently at any given moment.
With those numbers in mind, and according to this article, the first thing I realize is that it seems hard to have a pool for each tenant.
For 100 tenants I would expect a db.r4.large (2 vCPUs, 15.25 GB RAM, fast disk access) with Aurora to be enough (about €150/month).
According to the formula to size a connection pool:
connections = ((core_count * 2) + effective_spindle_count)
I should have (2 cores * 2) + 1 = 5 connections in the pool.
From what I understand, this should be the maximum pool size needed to maximize performance on that DB instance.
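To make that concrete, here is a minimal sketch of a pool sized by that formula for a db.r4.large (2 vCPUs, assuming one effective spindle); the JDBC URL and credentials are placeholders:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Sizing sketch: applies connections = (core_count * 2) + effective_spindle_count
// for a db.r4.large (2 vCPUs, 1 effective spindle assumed).
class SizedPool {
    static HikariDataSource create(String jdbcUrl, String user, String password) {
        int coreCount = 2;
        int effectiveSpindleCount = 1;
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl(jdbcUrl); // placeholder, e.g. "jdbc:mysql://host:3306/db"
        config.setUsername(user);
        config.setPassword(password);
        config.setMaximumPoolSize((coreCount * 2) + effectiveSpindleCount); // = 5
        return new HikariDataSource(config);
    }
}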
1st solution
So my first question is pretty simple: how can I create a separate connection pool for each tenant, given that I should use only 5 connections in total?
It doesn't seem possible. Even if I assigned just 2 connections to each tenant, I would have 200 connections to the DBMS!
According to this question, on a db.r4.large instance I could have at most 1,300 connections, so it seems the instance should handle the load quite well.
But according to the article I mentioned before, using hundreds of connections to the database seems to be bad practice:
If you have 10,000 front-end users, having a connection pool of 10,000 would be sheer insanity. 1000 still horrible. Even 100 connections, overkill. You want a small pool of a few dozen connections at most, and you want the rest of the application threads blocked on the pool awaiting connections.
2nd solution
The second solution I have in mind is to share a connection pool among the tenants on the same DBMS. This means that all 100 tenants would use the same Hikari pool of 5 connections (which, honestly, seems quite low to me).
Would this be the right way to maximize performance and reduce the response time of the application?
Do you have a better idea of how to manage this scenario with Spring, Hibernate and MySQL (hosted on AWS RDS Aurora)?
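For reference, a minimal sketch of what that shared-pool variant could look like, assuming all tenant databases live on the same MySQL instance and the tenant id matches the database name (Connector/J implements Connection.setCatalog with a USE statement); the class name and constructor here are made up for illustration:

import java.sql.Connection;
import java.sql.SQLException;

import org.hibernate.engine.jdbc.connections.spi.MultiTenantConnectionProvider;

import com.zaxxer.hikari.HikariDataSource;

// One Hikari pool shared by every tenant on the same MySQL instance;
// the tenant database is selected per checkout via setCatalog.
public class SharedPoolConnectionProvider implements MultiTenantConnectionProvider {

    private static final long serialVersionUID = 1L;

    private final HikariDataSource dataSource; // single shared pool (e.g. 5-10 connections)
    private final String defaultDatabase;

    public SharedPoolConnectionProvider(HikariDataSource dataSource, String defaultDatabase) {
        this.dataSource = dataSource;
        this.defaultDatabase = defaultDatabase;
    }

    @Override
    public Connection getAnyConnection() throws SQLException {
        return dataSource.getConnection();
    }

    @Override
    public void releaseAnyConnection(Connection connection) throws SQLException {
        connection.close();
    }

    @Override
    public Connection getConnection(String tenantId) throws SQLException {
        Connection connection = dataSource.getConnection();
        connection.setCatalog(tenantId); // assumes database name == tenant id
        return connection;
    }

    @Override
    public void releaseConnection(String tenantId, Connection connection) throws SQLException {
        connection.setCatalog(defaultDatabase); // reset before returning to the pool
        connection.close();
    }

    @Override
    public boolean supportsAggressiveRelease() {
        return false;
    }

    @Override
    public boolean isUnwrappableAs(Class unwrapType) {
        return false;
    }

    @Override
    public <T> T unwrap(Class<T> unwrapType) {
        return null;
    }
}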
For optimal performance, use a pool with eight to 16 connections per node. For example, if you have four nodes configured, then the steady-pool size must be set to 32 and the maximum pool size must be 64.
You reuse a prior database connection in a new context to avoid the cost of setting up a new database connection for each request. The primary reason to avoid pooling database connections is that your application's approach to solving problems isn't structured to accommodate a database connection pool.
Database connection pooling is still achieved through application module pooling. The only exception is when multiple application module pools (and therefore a large number of application module instances) share the same database, making the total available database connections the highest priority.
When an application requests a connection from the connection pool, the pool assigns an available connection. If an unused connection exists, the pool returns it. Otherwise, if no connection is available, the pool creates a new connection and assigns it to the application as an active connection.
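As a toy illustration of that assignment logic only (not how HikariCP is implemented; real pools add size limits, validation, timeouts and housekeeping):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.ConcurrentLinkedQueue;

// Toy pool showing the assign-available-else-create behavior described above.
class ToyConnectionPool {
    private final ConcurrentLinkedQueue<Connection> idle = new ConcurrentLinkedQueue<>();
    private final String jdbcUrl; // placeholder, e.g. "jdbc:mysql://host:3306/db"

    ToyConnectionPool(String jdbcUrl) {
        this.jdbcUrl = jdbcUrl;
    }

    Connection acquire() throws SQLException {
        Connection connection = idle.poll();      // reuse an unused connection if one exists
        if (connection == null) {
            connection = DriverManager.getConnection(jdbcUrl); // otherwise create a new one
        }
        return connection;                        // handed out as an "active" connection
    }

    void release(Connection connection) {
        idle.offer(connection);                   // back to the idle set for the next caller
    }
}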
Opening a connection pool per tenant is most definitely a very bad idea. All you need is a single pool of connections shared across all users.
1. So the first step would be to determine the load, or anticipate what it will be based on some projections.
2. Decide how much latency is acceptable, what the burst/peak-time traffic looks like, etc.
3. Finally, work out the number of connections you will need and decide on the number of instances required. For instance, if your peak usage is 10,000 queries per second and each query takes 10 ms, you will need about 10,000 * 0.01 = 100 open connections to sustain that throughput (see the sketch after this list).
4. Implement it without any binding to the user, i.e. the same pool shared across all tenants, unless you have a case for grouping users (say, one pool for premium users and one for basic).
5. Finally, since you are doing this on AWS: if you need more than one instance based on point 3, see whether you can autoscale up/down with load to save costs.
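A back-of-the-envelope sketch of points 1-3; the class name and the numbers are illustrative assumptions, not measurements:

// Hypothetical sizing helper: peakQps and queryTimeMs must come from your
// own load projections (point 1) and latency budget (point 2).
final class PoolSizing {

    // Little's law: concurrency = arrival rate * time per request
    static int requiredConnections(int peakQps, double queryTimeMs) {
        return (int) Math.ceil(peakQps * (queryTimeMs / 1000.0));
    }

    static int requiredInstances(int connections, int connectionsPerInstance) {
        return (int) Math.ceil((double) connections / connectionsPerInstance);
    }

    public static void main(String[] args) {
        int connections = requiredConnections(10_000, 10); // 10k qps at 10 ms -> 100
        System.out.println(connections + " connections, "
                + requiredInstances(connections, 50) + " instances at 50 connections each");
    }
}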
Check these out for some comparison metrics.
This one is probably the most interesting in terms of spike demand:
https://github.com/brettwooldridge/HikariCP/blob/dev/documents/Welcome-To-The-Jungle.md
Some more...
https://github.com/brettwooldridge/HikariCP
https://www.wix.engineering/blog/how-does-hikaricp-compare-to-other-connection-pools