RE: Issue in executing scheduler job on Liferay 6.0 CE cluster
Hi all,
I'm setting up two Liferay 6.0.6 CE nodes in a cluster, but I have a problem with scheduler jobs in my cluster environment: when I deploy a scheduler job to both nodes, the job fires on both nodes.
I found a similar issue here: https://issues.liferay.com/browse/LPS-25793. It is a bug in Liferay 6.1.0 CE GA1 and earlier versions.
I've checked the source code of Liferay 6.1.1 CE (the fixed version) and found these differences related to executing scheduler jobs on a cluster:
- Several new classes were added:
> "com.liferay.portal.kernel.scheduler.StorageType": defines the storage type.
> "com.liferay.portal.scheduler.ClusterSchedulerEngine": contains the methods for checking and detecting the master/slave node.
> "com.liferay.portal.scheduler.ClusterSchedulerEngine.MemorySchedulerClusterEventListener": fires when any node joins or departs; it detects and changes the role (master/slave) of the current node.
> "com.liferay.portal.scheduler.ClusterSchedulerEngine.MemorySchedulerClusterResponseCallback", etc.
- The address of the master node is stored in the database (the "owner" column of the "Lock_" table) and compared when checking a node's role, as in the sketch below.
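For illustration, here is a rough sketch of what such a DB-based check might look like. This is not the actual 6.1.1 code: the lock key, the "Lock_"/"owner" layout, and how a node learns its own cluster address string are all assumptions on my side.

import com.liferay.portal.kernel.dao.jdbc.DataAccess;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class MasterNodeChecker {

    // Assumed lock key; the key the real 6.1.x code uses may differ
    private static final String _LOCK_CLASS_NAME =
        "com.liferay.portal.scheduler.ClusterSchedulerEngine";

    public boolean isMaster(String localAddress) throws Exception {
        Connection con = null;
        PreparedStatement ps = null;
        ResultSet rs = null;

        try {
            con = DataAccess.getConnection();

            ps = con.prepareStatement(
                "select owner from Lock_ where className = ?");

            ps.setString(1, _LOCK_CLASS_NAME);

            rs = ps.executeQuery();

            if (!rs.next()) {

                // No lock row yet, so no master has been recorded

                return false;
            }

            return localAddress.equals(rs.getString("owner"));
        }
        finally {
            DataAccess.cleanUp(con, ps, rs);
        }
    }

}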
Can I port these changes from Liferay 6.1.1 CE to version 6.0.6 CE to fix the bug? Are there any obstacles? (I'm worried about differences in the source code structure.)
P.S. I know I shouldn't run a cluster on Liferay 6.0, but for some reason my customer doesn't want to upgrade.
Thanks.
6.0 and 6.1? Sure, anything is possible, but this backport is all on you.
Also, this is an indicator that it may be time for you to start planning an upgrade to a version of the software that was released in the last 5 years...
Wouldn't it be easier to just disable the job on one server? E.g., by adding a portlet-ext.properties setting and setting it to true on one server and false on the other server.
Of course, the job won't run if the enabled node isn't up, but maybe you can live with that ...
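Just to sketch that idea (assuming your job is implemented as a scheduler MessageListener, and using "my.scheduler.job.enabled" as a made-up key for portlet-ext.properties), the guard could look something like this:

import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;
import com.liferay.portal.kernel.messaging.Message;
import com.liferay.portal.kernel.messaging.MessageListener;
import com.liferay.portal.kernel.messaging.MessageListenerException;
import com.liferay.portal.kernel.util.GetterUtil;
import com.liferay.util.portlet.PortletProps;

public class MyScheduledJob implements MessageListener {

    // Made-up key; set it to true in portlet-ext.properties on one node only
    private static final String _ENABLED_KEY = "my.scheduler.job.enabled";

    public void receive(Message message) throws MessageListenerException {
        boolean enabled = GetterUtil.getBoolean(PortletProps.get(_ENABLED_KEY));

        if (!enabled) {
            if (_log.isDebugEnabled()) {
                _log.debug("Scheduler job is disabled on this node");
            }

            return;
        }

        // The actual job logic goes here
    }

    private static final Log _log = LogFactoryUtil.getLog(MyScheduledJob.class);

}

You deploy the same plugin to both nodes; only the node whose portlet-ext.properties sets the key to true actually does the work.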
Christoph Rabel:
Thanks for your reply.
Wouldn't it be easier to just disable the job on one server? E.g., by adding a portlet-ext.properties setting and setting it to true on one server and false on the other server.
Of course, the job won't run if the enabled node isn't up, but maybe you can live with that ...
I don't know which node is the master, and I don't know how to detect a node's role on Liferay 6.0 CE. (In Liferay 6.1 CE, the address of the master node is stored in the database in "Lock_.owner", so it is easy to detect the master node by checking the DB.)
In addition, I also want the default scheduled jobs (e.g. checking the expiration of web content, users, etc.) to fire on the master node only.
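If I do manage to backport the Lock_-based check, I imagine the same guard pattern Christoph described could be driven by it instead of a static property. Again only a sketch, reusing the hypothetical MasterNodeChecker from my first post; how to obtain this node's own address on 6.0 is still the open question:

// Sketch only: MasterNodeChecker is the hypothetical helper from above
protected void runIfMaster(String localAddress) throws Exception {
    MasterNodeChecker checker = new MasterNodeChecker();

    if (checker.isMaster(localAddress)) {

        // Run the job body here (a custom job, or a wrapper around a default one)

    }
}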