Clustering in Liferay CE 7.2
Hello, in this blog article I will share with you how to set up clustering with Liferay Community Edition 7.2, present the different discovery modes (Multicast and Unicast), and show how to configure them, especially for Unicast (TCPPING, JDBC_PING, S3_PING).
You will find below the list of modules that make Liferay clustering work:
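Typically (the exact names and versions depend on your Liferay release), these are the following OSGi bundles:
- com.liferay.portal.cluster.multiple
- com.liferay.portal.cache.multiple
- com.liferay.portal.scheduler.multiple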
In order to make clustering work with Liferay, we should make sure that the following points are checked:
1- Declare the JNDI resource (in Tomcat's ROOT.xml) on each node of the cluster
<Resource
    name="jdbc/LiferayPool"
    auth="Container"
    type="javax.sql.DataSource"
    factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
    driverClassName="com.mysql.jdbc.Driver"
    url="jdbc:mysql://localhost:3308/liferaydxp?useUnicode=true&amp;characterEncoding=UTF-8&amp;useFastDateParsing=false"
    username="root"
    password="toor"
    maxActive="20"
    maxIdle="5"
    maxWait="10000" />
2- Configure Liferay to use this resource using portal-ext.properties
jdbc.default.jndi.name=jdbc/LiferayPool
In order to share the same index data between the different nodes of the cluster, we should configure each node to use the same search engine. This can be done with a simple OSGi configuration, as shown below:
1- Create a configuration file named com.liferay.portal.search.elasticsearch6.configuration.ElasticsearchConfiguration.config inside the osgi/configs folder
2- Configure the Liferay node to use remote mode and fill in the address of the standalone Elasticsearch instance:
operationMode="REMOTE"
transportAddresses="ip.of.elasticsearch.node:9300"
For the data, we have to make each Liferay node stateless; in other words, we shouldn't store any data except temporary data on the instance itself. Liferay offers several ways to do that (DB, NFS, S3 and custom connectors); here are two examples (NFS, S3):
1- NFS Share (using portal-ext.properties) :
We should already have a shared folder /data_shared/ mounted with NFS; then we modify portal-ext.properties to use it as below:
dl.store.impl=com.liferay.portal.store.file.system.FileSystemStore
dl.store.file.system.root.dir=/data_shared/liferay/document_library
2- S3 storage (using both portal-ext.properties and OSGi config):
In portal-ext.properties we tell the Liferay node to use the S3Store storage class; in this case, Liferay will store its data in the specified bucket:
dl.store.impl=com.liferay.portal.store.s3.S3Store
Then, we use an OSGi configuration file (com.liferay.portal.store.s3.configuration.S3StoreConfiguration.cfg under the osgi/configs folder) to provide the information the Liferay node needs to store data in the Amazon S3 bucket:
accessKey=AKIAJACCVUGUY3MQALFA
secretKey=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
s3Region=eu-west-3
bucketName=informizr-paris
1- When Liferay starts in a clustered environment, the Quartz scheduler will promote a master node in order to process jobs; this can be seen in the logs like this:
2019-04-14 20:24:38.729 INFO [main][ClusterSchedulerEngine:615] Load 19 memory clustered jobs from master
2- When we implement a new custom Quartz job, the default storage type is StorageType.MEMORY_CLUSTERED. This mode ensures that a job is processed by only one node (the master); StorageType.PERSISTED guarantees the same thing, except that the job queue is persisted to the database, whereas StorageType.MEMORY offers no such guarantee.
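To make this concrete, here is a minimal sketch of such a job using the usual SchedulerEntryImpl + StorageTypeAware pattern of the Liferay 7.x scheduler API (the class name SampleClusteredJob and the 5-minute trigger are illustrative assumptions, not part of Liferay):

import com.liferay.portal.kernel.messaging.BaseMessageListener;
import com.liferay.portal.kernel.messaging.DestinationNames;
import com.liferay.portal.kernel.messaging.Message;
import com.liferay.portal.kernel.scheduler.SchedulerEngineHelper;
import com.liferay.portal.kernel.scheduler.SchedulerEntryImpl;
import com.liferay.portal.kernel.scheduler.StorageType;
import com.liferay.portal.kernel.scheduler.StorageTypeAware;
import com.liferay.portal.kernel.scheduler.TimeUnit;
import com.liferay.portal.kernel.scheduler.Trigger;
import com.liferay.portal.kernel.scheduler.TriggerFactory;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Hypothetical example job, used only to show where the storage type is declared.
@Component(immediate = true, service = SampleClusteredJob.class)
public class SampleClusteredJob extends BaseMessageListener {

    @Override
    protected void doReceive(Message message) throws Exception {
        // Business logic of the job goes here; with MEMORY_CLUSTERED it runs
        // on the master node only.
    }

    @Activate
    protected void activate() {
        // Trigger the job every 5 minutes.
        Trigger trigger = _triggerFactory.createTrigger(
            getClass().getName(), getClass().getName(), null, null, 5,
            TimeUnit.MINUTE);

        SchedulerEntryImpl schedulerEntry =
            new StorageTypeAwareSchedulerEntryImpl();

        schedulerEntry.setEventListenerClass(getClass().getName());
        schedulerEntry.setTrigger(trigger);

        _schedulerEngineHelper.register(
            this, schedulerEntry, DestinationNames.SCHEDULER_DISPATCH);
    }

    // Wrapping SchedulerEntryImpl with StorageTypeAware is how the storage
    // type is exposed to the scheduler engine.
    private static class StorageTypeAwareSchedulerEntryImpl
        extends SchedulerEntryImpl implements StorageTypeAware {

        @Override
        public StorageType getStorageType() {
            // MEMORY_CLUSTERED is already the default; switch to
            // StorageType.PERSISTED to persist the job queue.
            return StorageType.MEMORY_CLUSTERED;
        }

    }

    @Reference
    private SchedulerEngineHelper _schedulerEngineHelper;

    @Reference
    private TriggerFactory _triggerFactory;

}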
As we know, Liferay uses by default the following multicast groups and ports for its cluster channels:
multicast.group.address["cluster-link-control"]=239.255.0.1
multicast.group.port["cluster-link-control"]=23301
multicast.group.address["cluster-link-udp"]=239.255.0.2
multicast.group.port["cluster-link-udp"]=23302
multicast.group.address["cluster-link-mping"]=239.255.0.3
multicast.group.port["cluster-link-mping"]=23303
multicast.group.address["multi-vm"]=239.255.0.5
multicast.group.port["multi-vm"]=23305
To enable the multicast mode, we just have to put these three lines into portal-ext.properties, and Liferay does the rest:
cluster.link.enabled=true
ehcache.cluster.link.replication.enabled=true
cluster.link.autodetect.address=localhost:3306
In order to set up a clustered Liferay environment with Unicast, we should follow a few steps; the steps below are common to the different discovery modes:
1- Extract the tcp.xml file from the jgroups.jar embedded in com.liferay.portal.cluster.multiple-[version].jar, rename it (to unicast.xml for example, or any other name except tcp.xml, to avoid collisions), and put it on Liferay's classpath
2- Add the following JVM parameter on each node
-Djgroups.bind_addr=[node_address]
3- Configure Liferay to use the new config file unicast.xml in portal-ext.properties
cluster.link.channel.properties.control=unicast.xml
cluster.link.channel.properties.transport.0=unicast.xml
4- Modify unicast.xml to add the attribute singleton_name="liferay_cluster" to the <TCP> transport element at the root of the protocol stack, and change the default port 7800 if necessary (for example, when several nodes run on the same physical server, each one needs its own port)
For TCPPING, modify the unicast.xml file as below to provide the current node with information about the other nodes:
<TCPPING async_discovery="true" initial_hosts="${jgroups.tcpping.initial_hosts:localhost[7800],localhost[7801]}" port_range="2"/>
To use JDBC_PING, we should configure it in unicast.xml to tell JGroups to use a database to store node information; we can do this as below, by replacing the TCPPING tag:
<JDBC_PING
    connection_url="jdbc:mysql://localhost:3306/liferay?useUnicode=true&amp;characterEncoding=UTF-8&amp;useFastDateParsing=false"
    connection_username="root"
    connection_password="toor"
    connection_driver="com.mysql.jdbc.Driver"/>
This mode will automatically create a table in the target database (named JGROUPSPING by default) where the node information is stored.
With S3_PING, we configure JGroups to use an Amazon S3 bucket to dynamically store the information of all the nodes of the same cluster; to do that, we modify unicast.xml as below, again replacing the TCPPING tag:
<S3_PING secret_access_key="j4rwzuCS+n6h87c75niZDEu1hszZvyEVCmW+efO3" access_key="AKIAJB7ISBIOUfFGZ6Qz" location="liferayunicasts3ping"/>