RE: Issue with connections availability in Liferay
We are experiencing an issue with connection availability in Liferay that started occurring in our PROD environment after upgrading Liferay from 7.3 GA1 to version 7.4.3.85. The issue appears in the Liferay logs with the error:

2024-02-19 16:55:33.775 ERROR [http-nio-8080-exec-357][SqlExceptionHelper:142] HikariPool-1 - Connection is not available, request timed out after 30000ms.

By testing and inspecting the application and Liferay logs, we have ruled out two possible root causes: depletion of the connection pool (we increased the pool size) and long-running queries (we did not find anything taking more than 30 seconds).
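For reference, one way to double-check on the database side that the pool really is not being exhausted is to count Postgres sessions for the Liferay database while the error is happening. This is only a sketch; 'lportal' is an assumed database name and should be replaced with the actual one:

-- Count Postgres sessions for the Liferay database, grouped by state
-- ('lportal' is an assumed database name; adjust to your environment).
SELECT state, count(*) AS sessions
FROM pg_stat_activity
WHERE datname = 'lportal'
GROUP BY state
ORDER BY sessions DESC;

A steady number of sessions in the 'active' state well below the pool maximum would support the conclusion that the pool itself is not the bottleneck.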
We have enabled the access log on Liferay as well as Postgres logging, and by checking them at the time of the error occurrences (the Postgres log timestamps are -2h, in GMT) we identified a specific query against Liferay's Lock_ table that is executed almost 100 times within one second for a specific lockId parameter. The AM team also observed this sudden increase in active connections in their dashboard during previous occurrences of the error.
2024-02-19 14:55:02.476 GMT [36248] LOG: execute <unnamed>:
select lockimpl0_.lockId as lockid1_0_0_, lockimpl0_.mvccVersion as mvccvers2_0_0_, lockimpl0_.uuid_ as uuid_3_0_0_, lockimpl0_.companyId as companyi4_0_0_, lockimpl0_.userId as userid5_0_0_, lockimpl0_.userName as username6_0_0_, lockimpl0_.createDate as createda7_0_0_, lockimpl0_.className as classnam8_0_0_, lockimpl0_.key_ as key_9_0_0_, lockimpl0_.owner as owner10_0_0_, lockimpl0_.inheritable as inherit11_0_0_, lockimpl0_.expirationDate as expirat12_0_0_
from Lock_ lockimpl0_
where lockimpl0_.lockId=$1
2024-02-19 14:55:02.476 GMT [36248] DETAIL: parameters: $1 = '1288145'
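If it helps, the frequency of this statement can also be confirmed directly in Postgres, provided the pg_stat_statements extension is enabled (an assumption; it is not loaded by default). A sketch:

-- Requires the pg_stat_statements extension (an assumption; it must be added to
-- shared_preload_libraries and created with CREATE EXTENSION pg_stat_statements).
-- The column is total_exec_time on PostgreSQL 13+; older versions call it total_time.
SELECT calls, round(total_exec_time) AS total_ms, query
FROM pg_stat_statements
WHERE query ILIKE '%from Lock_%'
ORDER BY calls DESC
LIMIT 10;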
Unfortunately, the records for these specific lockIds are no longer present in the Lock_ table, so we cannot identify what created them. At this point we believe they are not related to our application, and we would like to prove that they come from Liferay internals, e.g. the SchedulerEngine.
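For instance, a snapshot along these lines, taken while the lockIds are still present, should show which component owns the locks; the column names are taken from the logged query above, so this is only a sketch of the idea:

-- Snapshot of the Lock_ rows; className and owner indicate which Liferay
-- component created each lock (e.g. a scheduler-related class name).
SELECT lockId, className, key_, owner, createDate, expirationDate
FROM Lock_
ORDER BY createDate DESC;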
To get to the root cause, I'd recommend looking at the log files: the access logs (in case of an attack during the night) and, with adapted log settings for the scheduler, the scheduler's own output. You might also want to check whether you have (legitimately, but accidentally) scheduled a lot of tasks at the same time, and spread out their execution a bit.
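As a rough illustration of the last point: if your scheduler storage type persists jobs to the database so that the QUARTZ_ tables are populated (an assumption; with purely in-memory storage they stay empty), a query along these lines, using the standard Quartz schema column names, would reveal many triggers set to fire at the same moment:

-- Triggers grouped by their next fire time (epoch milliseconds); a large count
-- for a single timestamp suggests many jobs scheduled to run simultaneously.
SELECT NEXT_FIRE_TIME, count(*) AS triggers
FROM QUARTZ_TRIGGERS
GROUP BY NEXT_FIRE_TIME
ORDER BY triggers DESC
LIMIT 20;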
Hi Olaf,
Thank you for your answer. Just to clarify: the issue started, without any change in site traffic, right after we upgraded from release 7.3.5 GA6 to release 7.4.3 GA85.
Hi,
I think you should consider reviewing Liferay's internal processes, particularly those related to the SchedulerEngine, to see whether they are inadvertently causing the surge in queries. Additionally, make sure the relevant configuration (scheduler and connection pool settings) is tuned for your environment to prevent such issues in the future.
Thanks