RE: Manually triggering a scheduled job on slave instance in a clustered setup
When I go to the Job Scheduler in the admin panel and manually run a scheduled job, the outcome in a clustered setup depends on which instance I'm connected to. If it's the master node, the job triggers as expected; if it's a slave node, the page simply reloads and the job never actually runs.
I traced the problem to ClusterSchedulerEngine. All of the related job methods (getScheduledJob, delete, pause, resume, schedule) are made cluster-aware through a @Clusterable annotation, applied with some variation of parameters. The one exception is public void run(...), the method responsible for actually executing the job: it has no clustering-related logic at all, even though every other job-related method does. This looks like a simple omission, and adding the annotation to run should fix the issue.
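To illustrate the pattern being described, here is a minimal, self-contained sketch of how an annotation-driven dispatch like this behaves. The annotation and class below are hypothetical stand-ins, not Liferay's actual ClusterSchedulerEngine: the point is only that a cluster-aware wrapper which checks for a @Clusterable-style annotation via reflection would forward pause/resume/schedule to the master but silently skip run, because run carries no annotation.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Hypothetical stand-in for a cluster-aware marker annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Clusterable {}

public class SchedulerEngineSketch {

    @Clusterable
    void pause(String jobName) { /* pause the job */ }

    @Clusterable
    void resume(String jobName) { /* resume the job */ }

    @Clusterable
    void schedule(String jobName) { /* schedule the job */ }

    // Note: no @Clusterable here, mirroring the omission described above.
    void run(String jobName) { /* execute the job */ }

    // A cluster-aware proxy could decide whether to forward a call to the
    // master node by checking for the annotation on the invoked method.
    static boolean isForwardedToMaster(String methodName) {
        try {
            Method m = SchedulerEngineSketch.class
                    .getDeclaredMethod(methodName, String.class);
            return m.isAnnotationPresent(Clusterable.class);
        } catch (NoSuchMethodException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("pause forwarded: " + isForwardedToMaster("pause"));
        System.out.println("run forwarded: " + isForwardedToMaster("run"));
        // run is never forwarded, so on a slave node nothing happens.
    }
}
```

Under this (simplified) model, annotating run the same way as the other methods is exactly what would make the manual trigger reach the master node.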
Hello! Is there any update on the progress of this issue?
Hi, my apologies, this one fell off my radar. I have now created a bug report in Jira: https://liferay.atlassian.net/browse/LPD-18251. Thanks for following up.
Hi, do you have any timeline for this issue? I'm wondering whether it could be fixed in Q1 this year, or whether you have any other estimate.