Ask Questions and Find Answers
Important:
Ask is now read-only. You can review existing questions and answers, but you can no longer add anything new.
But don't panic! While Ask is no more, it has been replaced by Discuss, the new Liferay Discussion Forum:
discuss.liferay.com
RE: Performance query
In our Liferay 6.2 portal we have an inquiry portlet consisting of 64 questions divided into 6 categories. In our load test of this portal there is an increase in page load time that I have a hard time explaining. We are using SmartBear's LoadComplete to run the load test. The attached image shows a normal test with 150 virtual users answering these questions with 5-10 s think time. The dark blue line represents page load time, and here comes the puzzle: after all 150 VUs (light blue line) are logged on and answering questions, there is a slow but noticeable increase in page load time, from ~2 s to ~4 s. During this time the Liferay server's CPU load decreases slightly (orange line) and the back-end CPU stays steady (green line). The questions in the inquiry are answered one at a time, and each answer goes all the way to the back end. The questions are all roughly the same size. If the load is the same and the output is the same, why does the load time double?
This is impossible to answer; there is really zero context here.
You're basically saying that when lots of folks start the same process at the same time, their load time is shorter at the start of the process than at the end of the process.
Okay, but there's no context of what the process is doing. Consider an application for submitting data to file US tax returns. Same kind of deal as what you're describing, it's basically a series of questions where each response goes back to the server. But when we look at what happens server side, US tax firms will build in logic and analytics and active decision making to use each question to trigger other questions, make suggestions, evaluate audit risk, etc.
The farther you are down in this process, the more back-end work goes into evaluating the data collected up to that point, so the more work each data point involves. The exact same kind of load curve could be seen in that kind of process.
That's just one example, but I hope it demonstrates that you shouldn't really be asking why load time increases as the process goes on; instead you should be asking why the process struggles as more data is added. Perhaps it is not caching previous results (or perhaps it can't), perhaps it is doing work that could wait until a final submission, etc.
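The effect described above can be made concrete with a small sketch. This is purely illustrative (the class and method names are invented, not from the portlet in question): if each submitted answer triggers re-validation of every answer collected so far, per-request back-end work grows linearly with progress through the questionnaire, even though every individual question looks the same from the client side.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: per-request work that grows with the amount of
// data already collected. Names here are illustrative only.
public class GrowingWorkSketch {
    private final List<String> answers = new ArrayList<>();

    // Called once per submitted answer.
    public int submit(String answer) {
        answers.add(answer);
        int checks = 0;
        // Re-validating the whole history on every submission is O(n)
        // for the n-th question, so question 64 costs ~64x question 1.
        for (String a : answers) {
            if (a != null && !a.isEmpty()) {
                checks++;
            }
        }
        return checks;
    }

    public static void main(String[] args) {
        GrowingWorkSketch s = new GrowingWorkSketch();
        int last = 0;
        for (int q = 1; q <= 64; q++) {
            last = s.submit("answer " + q);
        }
        // Validations performed for the final question alone.
        System.out.println(last);
    }
}
```

The fix in such a case is usually to validate only the new data point, or to defer whole-history checks to the final submission.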
Exactly the answer I was looking for. I know the question is almost impossible to answer, but since I was stuck in my thinking I needed another perspective on this. And maybe I was hoping for a miracle answer saying "change this param and you'll be fine".
The little context I supplied can be made a bit more detailed. The scenario is:
1) The frontend fetches a question from the backend and presents it to the user in a web form.
2) The user answers the question and the frontend sends the data to the backend for storage.
3) The backend fetches the next question and returns it to the frontend, and the process starts all over again.
One calculation that can vary over time is "Are all questions answered?". That method has been optimized and shouldn't get slower over time. But your answer gave me the simple idea of putting a log printout at each step of the scenario above and examining how the timings change over time. With this simple test I can pinpoint, or rule out, whether the problem is in the code or in the environment. Should have thought of that earlier. Thanks.
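The "log printout at each step" idea can be sketched like this. The helper and the step names (`fetchNextQuestion`, `storeAnswer`) are placeholders for the real frontend-to-backend calls, and in a real portlet the output would go to the portal log rather than stdout:

```java
import java.util.function.Supplier;

// Minimal timing sketch: wrap each step of the scenario and log how
// long it takes, so growth over time can be attributed to a step.
public class StepTimer {
    static <T> T timed(String step, Supplier<T> call) {
        long t0 = System.nanoTime();
        T result = call.get();
        long ms = (System.nanoTime() - t0) / 1_000_000;
        System.out.println(step + " took " + ms + " ms");
        return result;
    }

    public static void main(String[] args) {
        // Placeholder bodies; the real calls would hit the backend.
        String q = timed("fetchNextQuestion", () -> "Q1");
        timed("storeAnswer", () -> null);
        System.out.println(q);
    }
}
```

Comparing these per-step timings early versus late in a test run shows whether the slowdown lives in the fetch, the store, or somewhere outside the code entirely.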

Sometimes a fault can lead to discovering things. In this case it turned out that the test tool we are using was losing a vital setting, which in turn meant the scenario never reached the back end. The scenario instead became:
- Log in to the site
- Start the inquiry. For every question, the portlet raised an error at startup, and the error was displayed on an error page using an error portlet.
- The test tool then submitted the next question. New error, error page, next question; repeated for all questions.
So the test never reached the back end. The major part of the test produced only two pages:
- question page
- error page
I still can't explain why page load time grows from 2 s to 4 s under similar load. Any suggestions on where to look?
Tomcat has a concurrent connection limit of 150. Maybe you hit that? You should see a warning about it in catalina.out.
Or just try to increase it in server.xml and test again.
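For reference, the limit lives on the HTTP `<Connector>` element in Tomcat's `conf/server.xml`. A typical fragment looks roughly like the one below; the raised values are illustrative only, as an experiment to rule out thread starvation, not a recommended production setting:

```xml
<!-- Typical Tomcat HTTP Connector. maxThreads caps concurrent
     request-processing threads (default 200); acceptCount is the
     queue for connections that arrive when all threads are busy. -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxThreads="400"
           acceptCount="100"
           redirectPort="8443" />
```

If raising `maxThreads` makes the slowdown disappear, requests were queueing on the connector rather than getting slower in the application.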
Thanks Christoph! I'll look into this. Over the weekend I did some more load tests, and due to other problems I had to decrease the number of virtual users to 100. Is Tomcat's concurrent connection limit based on one connection per user, or can one user have more than one connection? Since I am only using 100 virtual users I shouldn't reach this limit, but the page load time graph grows the same way as with 150 VUs.
I think it is counted per connection. How many concurrent connections are opened depends on the load test tool; it might open multiple connections per user or just one. Modern browsers open multiple connections to the same site (per "virtual user" in this case), and if the test tool mimics that behavior, it may issue multiple requests at once.
Btw., I was wrong: the default connection limit is 200. So it is quite likely that my wild guess is totally off here. But from your problem description it feels like you are hitting some limit.
You could try to take a thread dump (on Linux, kill -3 <PID>; there is also a button in Liferay's server administration), check connections with netstat, and on Linux also look at tools like iostat. In the thread dump you should see how many threads are active and what they are doing. If a lot of them are waiting, e.g. for a DB query, you have probably hit a DB connection pool limit and would need to increase the pool size.
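A thread dump can also be taken programmatically, which is handy for quickly counting thread states without parsing catalina.out. The sketch below uses the standard `Thread.getAllStackTraces()` API and prints roughly what `kill -3 <PID>` would; the "waiting" tally is a crude stand-in for spotting threads parked on a DB connection pool:

```java
import java.util.Map;

// Sketch of a programmatic thread dump. Counting WAITING /
// TIMED_WAITING threads hints at contention on some shared resource.
public class ThreadDumpSketch {
    public static void main(String[] args) {
        int waiting = 0;
        for (Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            Thread t = e.getKey();
            System.out.println(t.getName() + " [" + t.getState() + "]");
            for (StackTraceElement frame : e.getValue()) {
                System.out.println("    at " + frame);
            }
            if (t.getState() == Thread.State.WAITING
                    || t.getState() == Thread.State.TIMED_WAITING) {
                waiting++;
            }
        }
        System.out.println("waiting threads: " + waiting);
    }
}
```

If the waiting count stays high while throughput drops, the next place to look is the stack frames of those waiting threads, which usually name the pool or lock being contended.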