Connection Pool Gotchas

Recently I was helping a client who was suffering from performance problems on their site, sometimes leading to node instability.

After doing a deep dive, I found that most of the problems stemmed from improper connection pool settings.

Not that it was the developers' fault, per se; it had more to do with not seeing the whole picture.

So I thought I'd share some common gotchas with connection pools that you can hopefully use to avoid similar issues in your own environments.

DB Connection Limits

The first thing you need to know is that databases have connection limits. This is a hard limit that you, as a developer, cannot do anything about.

For example, MySQL's default limit on client connections is 151 (150 for regular clients, plus one held in reserve for an administrative login). Postgres has a default limit of 100 connections. All enterprise databases impose a connection limit.

These limits are configurable on the database side, but when setting up your connection pool you really need to know what they are, because you can't work around them from the application.
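If you're not sure what your limit is, you can just ask the database. Here's a minimal sketch that queries MySQL for its configured limit over plain JDBC; the URL and credentials are placeholders, you'll need the MySQL JDBC driver on the classpath, and on Postgres the equivalent query is SHOW max_connections.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ConnectionLimitCheck {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://db.example.com:3306/lportal", "dbuser", "changeme");
                 Statement stmt = conn.createStatement();
                 // Variable_name comes back in column 1, Value in column 2
                 ResultSet rs = stmt.executeQuery("SHOW VARIABLES LIKE 'max_connections'")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " = " + rs.getString(2));
                }
            }
        }
    }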

DB User Connection Limits

Databases can also limit connections on a per-user basis. MySQL, for example, has the global max_connections value but also max_user_connections, which caps how many connections a single user can hold simultaneously. Most enterprise databases allow defining a per-user connection limit.

You should know whether your database is configured with a per-user connection limit, as well as what that limit is.

If you have an artificially low per-user limit, you can work around it by using a separate user for each of your cluster nodes.
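On MySQL you can also check whether individual accounts have their own caps: the mysql.user system table carries a max_user_connections column per account (reading it requires the appropriate privileges). A quick sketch, again with placeholder connection details:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PerUserLimitCheck {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://db.example.com:3306/lportal", "dbadmin", "changeme");
                 Statement stmt = conn.createStatement();
                 // 0 means no per-account cap; the global max_user_connections variable may still apply
                 ResultSet rs = stmt.executeQuery(
                     "SELECT User, Host, max_user_connections FROM mysql.user")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "@" + rs.getString(2)
                        + " -> " + rs.getInt(3));
                }
            }
        }
    }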

Cluster Sizing vs DB Limits

When DB limits are in place, each node's connection budget is defined by the DB connection limit divided by the size of your cluster.

For example, if your DB connection limit is 100 connections and you run a four-node cluster, the max pool size on each node should be at most 25 connections. Any higher and the cluster-wide connection needs would exceed the count allowed by the database.
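In code form the rule is just integer division; the names here are illustrative.

    public class PoolSizing {
        // Per-node pool max is the DB's connection limit divided across the cluster.
        public static int perNodeMaxPoolSize(int dbConnectionLimit, int clusterNodes) {
            // Integer division rounds down, which keeps the cluster at or under the DB limit.
            return dbConnectionLimit / clusterNodes;
        }

        public static void main(String[] args) {
            // A 100-connection DB limit across a four-node cluster -> 25 connections per node.
            System.out.println(perNodeMaxPoolSize(100, 4));
        }
    }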

Application-Managed Pools Consume Connections Separately

This one is kind of sneaky. If you have two Spring portlet wars and each war has its own connection pool sized to a max of 20 connections, that is not 20 connections total for the node; it is 40. Each pool is managed separately, because a war-based application will not share pools with other applications.

A best practice is to define a container-managed pool and use JNDI lookups in the applications. When the container manages the pool, it is shared amongst all apps that look up the JNDI datasource, which is a far better way of sharing resources.
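For illustration, a JNDI lookup of a container-managed datasource looks something like the sketch below. The name jdbc/SharedPool is an assumption; it has to match whatever Resource your application server actually declares (e.g. in Tomcat's context.xml).

    import java.sql.Connection;

    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class SharedPoolLookup {
        public static Connection borrowConnection() throws Exception {
            InitialContext ctx = new InitialContext();
            // "java:comp/env" is the standard naming context for container-managed resources.
            DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/SharedPool");
            return ds.getConnection();
        }
    }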

Liferay Uses Two Connection Pools

It is not very well known, but Liferay actually has two different connection pools.

The first pool is the one we typically know about: the default connection pool. When you use the jdbc.default.jndi.name property or any of the jdbc.default.* properties, you're defining the default Liferay connection pool.

Liferay also has a second, counter pool, which is managed using the jdbc.counter.* properties. This is a separate connection pool that Liferay uses for incrementing the various counters it manages. A separate pool means Liferay can always get a counter value, even when the default pool is exhausted.

When there is no specific configuration for the counter pool sizing, it will inherit the size from the jdbc.default.* settings. So if you use the Liferay default value of jdbc.default.maxPoolSize=100, your counter pool will also have a max pool size of 100.

So when you are configuring your Liferay default connection pool, remember to size your counter connection pool manually and keep it within expected limits.
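As a rough illustration, a portal-ext.properties fragment along these lines would size the two pools independently. The jdbc.counter.maxPoolSize key is my assumption based on the jdbc.counter.* pattern described above, so verify the exact property names against your Liferay version's portal.properties.

    # Default pool sized against the DB limit divided by cluster size (see above)
    jdbc.default.maxPoolSize=25

    # Size the counter pool explicitly so it doesn't inherit the default sizing
    jdbc.counter.maxPoolSize=10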

Connection Pool Best Practices

So here's my short list of best practices when it comes to connection pools:

  • Use different logins on each node in your cluster. This may seem kind of odd, but your DBAs will appreciate being able to see how many connections originate from each node of your cluster, whether or not user-based connection limits are defined.
  • Configure the Liferay counter connection pool. I tend to keep this pool small, e.g. a max of 10 connections against the default pool's 100.
  • Know your DB connection limit. You need this to figure out how to size your pools in each node. Divide the limit by the number of nodes you have; this will be the max on a per-node basis.
  • Leave some connections on the table. If your database allows 100 connections, don't size your four node pools at 25 each. You'll chew up all of the connections and make it hard or impossible to connect when things have gone sideways. Leave a few connections in reserve so you can use an admin tool to connect when you need to.
  • Always use container-managed pools. This shares one connection pool across all applications in the container and allows active apps to use available connections without reserving resources for idle apps that don't need them. Also, depending upon your application server, you may be able to tweak the connection pool configuration without requiring a container restart, unlike what is necessary for a property- or application-based configuration.
  • If it's an option, use HikariCP for your connection pool. As I presented here: https://liferay.dev/blogs/-/blogs/tomcat-hikaricp, Hikari has a better performance footprint than either DBCP or C3P0 (see the sketch after this list).
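To close out the Hikari recommendation, here's a minimal programmatic sketch of HikariCP's main configuration knobs. In a real deployment you'd more likely declare this as a container Resource or via Liferay properties, and the URL, login, and sizes here are placeholders.

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class HikariPoolExample {
        public static HikariDataSource buildPool() {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:mysql://db.example.com:3306/lportal");
            config.setUsername("node1_user");  // separate login per node, per the advice above
            config.setPassword("changeme");
            config.setMaximumPoolSize(20);     // keep (nodes * pool size) under the DB limit
            return new HikariDataSource(config);
        }
    }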

Hope this helps!
