Introduction
Kubernetes, commonly referred to as K8s, is an open-source system designed for automating the deployment, scaling, and management of containerized applications. It is a robust and flexible solution for deploying Liferay, facilitating scalability and monitoring. While creating a Liferay cluster on Kubernetes is not overly complex, there are certain nuanced aspects that require attention.
What do we need to set up the cluster?
To ensure proper functionality, you need to copy the tcp.xml file from the Liferay source code and modify it to work with DNS_PING.
DNS_PING utilizes DNS A or SRV entries for discovery. To enable DNS discovery for applications deployed on Kubernetes, it is necessary to create a headless Service with selectors that cover the desired pods. This Service ensures that DNS entries are populated as soon as the pods reach the ready state.
Example of a headless service
apiVersion: v1
kind: Service
metadata:
  name: liferay-cluster # Service name
  labels:
    name: liferay-cluster
spec:
  clusterIP: None
  selector:
    app: liferay
  ports:
    - port: 7800
      name: jgroupsliferay-control # Service port name
      protocol: TCP
      targetPort: 7800
    - port: 7900
      name: jgroupsliferay-transport # Service port name
      protocol: TCP
      targetPort: 7900
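For the DNS entries to appear, the pods must carry the app: liferay label and expose ports 7800 and 7900. Here is a minimal sketch of the matching pod side, assuming a StatefulSet named liferay and the official liferay/portal image (both names are assumptions; adapt them to your own deployment):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: liferay
spec:
  serviceName: liferay-cluster   # the headless Service above
  replicas: 2
  selector:
    matchLabels:
      app: liferay               # must match the Service selector
  template:
    metadata:
      labels:
        app: liferay
    spec:
      containers:
        - name: liferay
          image: liferay/portal
          ports:
            - containerPort: 7800 # JGroups control channel
            - containerPort: 7900 # JGroups transport channel

Depending on your readiness probes, you may also need publishNotReadyAddresses: true in the headless Service spec so that pods that are still starting can discover each other before they report ready.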
To enable the use of DNS_PING, you must create a unicast.xml file that stores a dns_query. This file can be stored in a ConfigMap and mounted into the container. If you are using the official Liferay container, you have the option to store the file in /mnt/liferay/files/tomcat/webapps/ROOT/WEB-INF/classes/unicast.xml.
While it is possible to create a single file for both transport and control, I recommend using two different files to make it easier to define different ports for each channel. For instance, you could name the files jgroupsliferay-control.xml and jgroupsliferay-transport.xml, as in the examples below.
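Shipping both files in a single ConfigMap could look like this (a sketch; the ConfigMap name liferay-jgroups and the volume wiring are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: liferay-jgroups
data:
  jgroupsliferay-control.xml: |
    <!-- paste the full control configuration here -->
  jgroupsliferay-transport.xml: |
    <!-- paste the full transport configuration here -->

And in the pod template of your Deployment/StatefulSet, so the official container copies the files into Tomcat on startup:

      containers:
        - name: liferay
          volumeMounts:
            - name: jgroups-config
              mountPath: /mnt/liferay/files/tomcat/webapps/ROOT/WEB-INF/classes
      volumes:
        - name: jgroups-config
          configMap:
            name: liferay-jgroups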
Example of the jgroupsliferay-control.xml
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns="urn:org:jgroups"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">
    <TCP bind_port="7800"
         recv_buf_size="${tcp.recv_buf_size:130k}"
         send_buf_size="${tcp.send_buf_size:130k}"
         max_bundle_size="64K"
         sock_conn_timeout="300"
         thread_pool.min_threads="0"
         thread_pool.max_threads="20"
         thread_pool.keep_alive_time="30000"/>
    <dns.DNS_PING dns_query="<service-port-name>._tcp.<service-name>.<kubernetes-namespace>.svc.cluster.local"
                  dns_record_type="SRV"/>
    <MERGE3 min_interval="10000"
            max_interval="30000"/>
    <FD_SOCK/>
    <FD_ALL timeout="9000" interval="3000"/>
    <VERIFY_SUSPECT timeout="1500"/>
    <BARRIER/>
    <pbcast.NAKACK2 use_mcast_xmit="false"
                    discard_delivered_msgs="true"/>
    <UNICAST3/>
    <pbcast.STABLE desired_avg_gossip="50000"
                   max_bytes="4M"/>
    <pbcast.GMS print_local_addr="true" join_timeout="2000"/>
    <UFC max_credits="2M"
         min_threshold="0.4"/>
    <MFC max_credits="2M"
         min_threshold="0.4"/>
    <FRAG2 frag_size="60K"/>
    <!--RSVP resend_interval="2000" timeout="10000"/-->
    <pbcast.STATE_TRANSFER/>
</config>
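The transport file mirrors the control file; as a sketch, only the TCP bind port and the DNS query change. The namespace liferay below is an assumption; substitute your own:

<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns="urn:org:jgroups"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">
    <!-- Bind to the transport port instead of the control port. -->
    <TCP bind_port="7900"
         recv_buf_size="${tcp.recv_buf_size:130k}"
         send_buf_size="${tcp.send_buf_size:130k}"
         max_bundle_size="64K"
         sock_conn_timeout="300"
         thread_pool.min_threads="0"
         thread_pool.max_threads="20"
         thread_pool.keep_alive_time="30000"/>
    <!-- Query the transport port name of the headless Service. -->
    <dns.DNS_PING dns_query="jgroupsliferay-transport._tcp.liferay-cluster.liferay.svc.cluster.local"
                  dns_record_type="SRV"/>
    <!-- The remaining protocols (MERGE3 through pbcast.STATE_TRANSFER) are
         identical to jgroupsliferay-control.xml. -->
</config>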
What to add in portal-ext.properties
You will need to add three properties to your portal-ext.properties:
cluster.link.enabled=true
cluster.link.channel.properties.control=/jgroupsliferay-control.xml
cluster.link.channel.properties.transport.0=/jgroupsliferay-transport.xml
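On Kubernetes, these properties can be delivered the same way as the JGroups files. A sketch, assuming a ConfigMap named liferay-portal-ext (an assumed name) and the official Liferay container, which copies /mnt/liferay/files into Liferay Home on startup:

apiVersion: v1
kind: ConfigMap
metadata:
  name: liferay-portal-ext
data:
  portal-ext.properties: |
    cluster.link.enabled=true
    cluster.link.channel.properties.control=/jgroupsliferay-control.xml
    cluster.link.channel.properties.transport.0=/jgroupsliferay-transport.xml

Mounted in the pod template with a subPath so that only this one file lands in /mnt/liferay/files:

          volumeMounts:
            - name: portal-ext
              mountPath: /mnt/liferay/files/portal-ext.properties
              subPath: portal-ext.properties
      volumes:
        - name: portal-ext
          configMap:
            name: liferay-portal-ext

Note the leading slash in the two channel properties: the files are resolved from the classpath root, which is why placing them in WEB-INF/classes works.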
Conclusion
While there are several methods to implement Liferay clustering, the approach described above is arguably the easiest given Kubernetes network restrictions (multicast is typically unavailable in pod networks). Another option is JDBC_PING, but if you opt for that route, be careful about zombie entries: rows for nodes that no longer exist but remain in the database as potential members because they were never properly cleaned up.
Leave your comments and feedback.
If you need more information, please PM me or find me on the Liferay Community Slack.
Daniel Dias
Head of Liferay Business Unit at PDMFC
daniel.dias@pdmfc.com