RE: Issue in executing scheduler job on Liferay 6.0 CE cluster

Huy Tran, modified 5 years ago. New Member, Posts: 10, Join Date: 8/13/15
Hi all,
I'm configuring two Liferay 6.0.6 CE nodes as a cluster, but I have a problem with scheduler jobs in this clustered environment: when I deploy a scheduler job to both nodes, the job fires on both nodes.

I found a similar issue here: https://issues.liferay.com/browse/LPS-25793; it is a bug in Liferay 6.1.0 CE GA1 and earlier versions.
I've checked the source code of Liferay 6.1.1 CE (the fixed version) and found these differences related to executing scheduler jobs on a cluster:
- Some new classes were added:
 > "com.liferay.portal.kernel.scheduler.StorageType": defines the storage type.
 > "com.liferay.portal.scheduler.ClusterSchedulerEngine": contains the methods that check and detect the master/slave node.
 > "com.liferay.portal.scheduler.ClusterSchedulerEngine.MemorySchedulerClusterEventListener": fires when any node joins or departs; it helps detect and change the role (master/slave) of the current node.
 > "com.liferay.portal.scheduler.ClusterSchedulerEngine.MemorySchedulerClusterResponseCallback", etc.
- The address of the master node is stored in the DB (column "owner" of table "Lock_") and compared against when checking a node's role; a rough sketch of how I understand that check is below.
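Just my own sketch, not the real 6.1.1 code: the table and column names (Lock_, className, owner) come from the 6.1.1 schema, but the className value used as the lock key is my assumption.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import com.liferay.portal.kernel.dao.jdbc.DataAccess;

public class MasterAddressChecker {

    // Reads the master address that 6.1.1 keeps in the Lock_ table.
    public static String getMasterOwner() throws Exception {
        Connection con = null;
        PreparedStatement ps = null;
        ResultSet rs = null;

        try {
            con = DataAccess.getConnection();

            ps = con.prepareStatement(
                "select owner from Lock_ where className = ?");

            // Assumed lock key: the ClusterSchedulerEngine class name.
            ps.setString(
                1, "com.liferay.portal.scheduler.ClusterSchedulerEngine");

            rs = ps.executeQuery();

            if (rs.next()) {
                // The real code compares this value with the local cluster
                // node's address to decide whether this node is the master.
                return rs.getString("owner");
            }

            return null;
        }
        finally {
            DataAccess.cleanUp(con, ps, rs);
        }
    }

}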

Can I port these changes from Liferay 6.1.1 CE to version 6.0.6 CE to fix the bug? Are there any obstacles? (I'm worried about differences in the source code structure.)
P/S: I know I shouldn't run a cluster on Liferay 6.0, but for various reasons my customer doesn't want to upgrade.
Thanks.
David H Nebinger, modified 5 years ago. Liferay Legend, Posts: 14933, Join Date: 9/2/06
6.0 and 6.1? Sure, anything is possible, but this backport is all on you.

Also, this is an indicator that it may be time for you to start planning an upgrade to a version of the software that was released within the last 5 years...
Christoph Rabel, modified 5 years ago. Liferay Legend, Posts: 1555, Join Date: 9/24/09
Wouldn't it be easier to just disable the job on one server? E.g., by adding a portlet-ext.properties setting and setting it to true on one server and false on the other server.
Of course, the job won't run if the enabled node isn't up, but maybe you can live with that ...
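Just as a rough sketch (the property name and the job class are made up; I'm reading the flag with PropsUtil here, which looks at portal(-ext).properties, while in a plugin you would read your own portlet(-ext).properties instead):

import com.liferay.portal.kernel.messaging.Message;
import com.liferay.portal.kernel.messaging.MessageListener;
import com.liferay.portal.kernel.messaging.MessageListenerException;
import com.liferay.portal.kernel.util.GetterUtil;
import com.liferay.portal.kernel.util.PropsUtil;

public class MyScheduledJob implements MessageListener {

    public void receive(Message message) throws MessageListenerException {
        // "my.scheduled.job.enabled" is a made-up property: set it to true
        // on one node and false on the other, so only one node does the work.
        boolean enabled = GetterUtil.getBoolean(
            PropsUtil.get("my.scheduled.job.enabled"));

        if (!enabled) {
            return;
        }

        // ... the actual job logic goes here ...
    }

}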
Huy Tran, modified 5 years ago. New Member, Posts: 10, Join Date: 8/13/15
Christoph Rabel:

Wouldn't it be easier to just disable the job on one server? E.g., by adding a portlet-ext.properties setting and setting it to true on one server and false on the other server.
Of course, the job won't run if the enabled node isn't up, but maybe you can live with that ...
Thanks for your reply.
I don't know which node is the master and don't know how to detect a node's role on Liferay 6.0 CE. (In Liferay 6.1 CE, the address of the master node is stored in the DB in "Lock_.owner", so it is easy to detect the master node by checking the DB.)
In addition, I also want the default scheduler jobs (e.g., checking the expiration of web content, users, etc.) to fire on the master node only.
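As far as I know, the only per-node switch in 6.0 is the global scheduler toggle in portal-ext.properties, e.g.:

# portal-ext.properties on the node that should stay passive
scheduler.enabled=false

But if I understand that property correctly, it turns off every scheduled job on that node instead of making it a standby, so it doesn't really solve the master/slave problem.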