Ask Questions and Find Answers
Important:
Ask is now read-only. You can review any existing questions and answers, but not add anything new.
But don't panic! While Ask is no more, we've replaced it with Discuss, the new Liferay Discussion Forum! Read more here, or just visit the site:
discuss.liferay.com
RE: RMI Tcp connection
Hi everyone,
Having a really strange issue here. We configured Liferay clustering on an AWS Linux machine using a TCP unicast connection, but we got this message:
WARNING [RMI TCP Accept-0] sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop RMI TCP Accept-0: accept loop for ServerSocket[addr=0.0.0.0/0.0.0.0,localport=41775] throws
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:717)
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:420)
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:377)
at java.lang.Thread.run(Thread.java:748)
The node went down.
Note: We use RDS for the database. The maxThreads configured on connector 8009 in the server.xml file is 2000.
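This error usually means the OS refused to give the JVM another native thread, not that the heap is exhausted. A quick way to see how close the JVM is to the per-user limit (a sketch; the pgrep pattern is an assumption, adjust it to match your catalina process):

```shell
# Compare the JVM's live thread count against the per-user limit
# ("ulimit -u" counts processes *and* threads on Linux).
# PID discovery via pgrep is an assumption -- adjust the pattern.
PID=$(pgrep -f org.apache.catalina.startup.Bootstrap | head -n1)
if [ -n "$PID" ]; then
  echo "JVM threads: $(ls /proc/$PID/task | wc -l)"
fi
echo "Per-user process/thread limit: $(ulimit -u)"
```

If the thread count is anywhere near the limit when the node hangs, the OS limit is the culprit rather than heap sizing.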
Um, it says "OutOfMemoryError". That should be more than enough to tell you what you should be changing...
Hi,
The memory on the server is OK; we have 16 GB.
This is the JVM configuration in the setenv.sh file:
CATALINA_OPTS="$CATALINA_OPTS -Dfile.encoding=UTF8 -Dorg.apache.catalina.loader.WebappClassLoader.ENABLE_CLEAR_REFERENCES=false -Duser.timezone=GMT -XX:NewSize=2048m -XX:MaxNewSize=2048m -Xms8144m -Xmx8144m -XX:PermSize=200m -XX:MaxPermSize=512m -XX:SurvivorRatio=20 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=15 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:ParallelGCThreads=8 -XX:ReservedCodeCacheSize=512m -XX:+CMSParallelRemarkEnabled -XX:+CMSCompactWhenClearAllSoftRefs -XX:CMSInitiatingOccupancyFraction=85 -XX:+CMSScavengeBeforeRemark -XX:+CMSConcurrentMTEnabled -XX:ParallelCMSThreads=2 -XX:+UseCompressedOops -XX:+DisableExplicitGC -XX:-UseBiasedLocking -XX:+BindGCTaskThreadsToCPUs -XX:+UseFastAccessorMethods -Djava.net.preferIPv4Stack=true"
CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote"
CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote.port=8100"
CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote.ssl=false"
CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote.local.only=false"
Note: when the server reaches around 500 threads, the node hangs.
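For context: thread stacks are allocated outside the Java heap, so an 8 GB heap on a 16 GB box leaves correspondingly less native memory for threads. One illustrative knob is the per-thread stack size; a sketch of a setenv.sh fragment (-Xss512k is an example value only, not a recommendation):

```shell
# Illustrative setenv.sh fragment: a smaller per-thread stack lets more
# threads fit into the same native memory (512k is an example value).
CATALINA_OPTS="$CATALINA_OPTS -Xss512k"
echo "$CATALINA_OPTS"
```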
sabri ben salem:
Have you seen what's available on that error message? It's pretty straightforward.
The memory on the server is OK; we have 16 GB.
No pun intended, but a good opportunity to use lmgtfy again ;)
Olaf Kock:
It's pretty straightforward.
We have been modifying the JVM values as seen above, but we always get the same outcome. The node goes down once we start doing anything, i.e. adding/editing pages or viewing any Control Panel tab. "java.lang.OutOfMemoryError: unable to create new native thread" is displayed in the logs.
sabri ben salem:
We have been modifying the JVM values as seen above, but we always get the same outcome.
Indeed, it's pretty straightforward. Try the "ulimit -u" suggestions. You have some limitation in the underlying operating system (the ulimit, or another) that the JVM runs in, and thus run into this issue.
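To check that limit for the account that actually runs the JVM (a sketch; "liferay" is the service user assumed from this thread):

```shell
# ulimit values are per-user and per-session: check them as the user that
# starts Tomcat, not as root. 'liferay' is the assumed service account.
ulimit -Su   # soft limit on processes/threads for the current user
ulimit -Hu   # hard limit
# As the service account (requires root):
# su - liferay -c 'ulimit -u'
```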
The attached images contain the result of the command ulimit -u.
The OS we are using is SUSE Linux Enterprise Server 12 SP3.
We only have two services: Liferay and Apache.
Apache 2.4 connects to Liferay using mod_jk.
sabri ben salem:
The memory on the server is OK; we have 16 GB.
This is the JVM configuration in the setenv.sh file :
CATALINA_OPTS="$CATALINA_OPTS ... -Xmx8144m ...
I'm not really sure you know what you have, but settings for memory do not guarantee that you have memory available at runtime...
One of the first google hits on this error message is https://dzone.com/articles/troubleshoot-outofmemoryerror-unable-to-create-new. It says in the paragraph after the "ulimit" suggestion:
If you don’t see a high number of threads created and “ulimit –u” value is well ahead then it’s indicative that your application has grown organically and needs more memory to create threads. In such circumstance, allocate more memory to the machine. It should solve the problem.
That is followed by more suggestions; let us know the answers to those.
Another option is to check for the creators of the threads: Are your custom components creating these threads to process something in the background? Or are those "legitimate" (e.g. caused by load) threads for http processing? Take thread dumps to figure out who creates the threads.
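A minimal way to summarize a thread dump by creator (a sketch; the pgrep pattern and file path are assumptions, and the fallback sample lines only keep the pipeline runnable without a live JVM):

```shell
# Group jstack thread names by prefix to see who is creating threads:
# strip the trailing thread number and count occurrences of each prefix.
PID=$(pgrep -f org.apache.catalina.startup.Bootstrap | head -n1)
if [ -n "$PID" ] && command -v jstack >/dev/null; then
  jstack "$PID" > /tmp/threads.txt
else
  # Sample lines standing in for a real dump, so the summary runs anywhere.
  printf '"http-nio-8080-exec-1" ...\n"http-nio-8080-exec-2" ...\n"liferay/scheduler-1" ...\n' > /tmp/threads.txt
fi
grep '^"' /tmp/threads.txt | sed 's/[-/][0-9]*".*//' | sort | uniq -c | sort -rn
```

A few hundred threads under one prefix (say, a scheduler or a custom executor) points at the creator immediately.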
From the attachments, I noticed that you are running Liferay as the liferay user. Have you set the ulimit for that user, or are the results for root? As Olaf suggested, this might be due to ulimit. We have previously encountered a similar error, and setting the ulimit values solved the issue. I'm not sure how to set it on SUSE; here is the link for RHEL: https://access.redhat.com/solutions/61334
Sample below:
liferay soft nofile 4096
liferay hard nofile 16384
liferay soft nproc 4096
liferay hard nproc 16384
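On RHEL-style systems those lines go into /etc/security/limits.conf (and pam_limits must be enabled for them to apply at login). A sketch that stages the fragment in a temp file first, with the privileged append left commented out:

```shell
# Stage the limits fragment, then append it to /etc/security/limits.conf
# as root (commented out here). Values are the ones suggested in the thread.
LIMITS=$(mktemp)
cat > "$LIMITS" <<'EOF'
liferay soft nofile 4096
liferay hard nofile 16384
liferay soft nproc 4096
liferay hard nproc 16384
EOF
# sudo sh -c "cat '$LIMITS' >> /etc/security/limits.conf"
wc -l < "$LIMITS"
```

The limits take effect on the next login session of the liferay user, so restart the service from a fresh session after applying them.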
Hi,
Yes, we are running Liferay as the liferay user; the values in the 2019-07-12_1330_001.png image are for the liferay user.
You can see in the attached screenshot the result of the command for the liferay user.
I also attached the thread dump analysis from https://fastthread.io/
Screenshots show only a small fraction of the available data.
Further up, I've pointed to a dzone article. Did you follow it - particularly the part that I've quoted (what to do when "ulimit -u" is way ahead but you still run into issues)? You might not be able to stick more memory into the server, but you can lower the JVM's assigned memory and through this make more memory available to the OS. Continue through that list and let us know what you've tried, and what the results were.
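Concretely, "lower the JVM's assigned memory" would mean replacing the -Xms8144m -Xmx8144m pair in setenv.sh with something smaller; 6g below is an example value, not a recommendation:

```shell
# Illustrative setenv.sh fragment: a smaller heap leaves more of the 16 GB
# of RAM to the OS for native thread stacks (6g is an example value).
CATALINA_OPTS="$CATALINA_OPTS -Xms6g -Xmx6g"
echo "$CATALINA_OPTS"
```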