Krzysztof Gołębiowski — 1 year ago (edited):
Thank you for this blog post, I hadn't seen that feature yet. We've already migrated our scheduler tasks to 7.4 in the old-fashioned way, but this looks much better! Whenever I find some spare time, I'll try to migrate. Is it possible (I'm sure it is!) to automatically create an entry in the Job Scheduler panel? If there were a way to do it when the module is deployed (@Activate?), it would really make the feature complete and the module fully self-contained. By the way, I noticed that the first link in your post (pointing to the old blog post) links to the Chinese version of liferay.dev.

David H Nebinger (to Krzysztof Gołębiowski) — 1 year ago (edited):
Actually, the only thing we've defined here is the DispatchTaskExecutor; it isn't even a scheduled job at the point it gets created. There are services surrounding the new framework, including DispatchTriggerLocalService. If I were going to schedule jobs automatically, I would use an UpgradeProcess implementation that leverages DispatchTriggerLocalService's addDispatchTrigger() to create the new job and then updateDispatchTrigger() to set the scheduling details (start/stop dates, cron expression, etc.).

Krzysztof Gołębiowski (to David H Nebinger) — 1 year ago (edited):
OK, thanks for the hint, I'll try DispatchTriggerLocalService then. As for the first job creation, I actually prefer to trigger this code on every deployment. It then checks whether the job already exists (using an external reference code, if one exists, or another kind of key) and creates it if not. This way, if anyone breaks the job from the Control Panel or makes any other modification, I can just ask them to remove it and redeploy the module. The UpgradeProcess, though (from what I remember), runs only once, and rerunning it requires deleting an entry from the database.
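For anyone landing here later, the service-based approach David describes looks roughly like the sketch below. This is only an illustration: the exact addDispatchTrigger() parameter list has changed between 7.4 updates, "my-task-type" and the other names are placeholders, and you should verify everything against DispatchTriggerLocalService in your own release before copying it.

```java
// Illustrative sketch only -- addDispatchTrigger() parameters vary
// across 7.4 updates; verify against the DispatchTriggerLocalService
// Javadoc in your release. "my-task-type" must match the
// "dispatch.task.executor.type" of your DispatchTaskExecutor component.
import com.liferay.dispatch.model.DispatchTrigger;
import com.liferay.dispatch.service.DispatchTriggerLocalService;
import com.liferay.portal.kernel.util.UnicodeProperties;

public class DispatchTriggerSketch {

	public DispatchTrigger createTrigger(
			DispatchTriggerLocalService dispatchTriggerLocalService,
			long userId)
		throws Exception {

		// Create the trigger record for an existing task executor type.
		DispatchTrigger dispatchTrigger =
			dispatchTriggerLocalService.addDispatchTrigger(
				"my-external-reference-code", userId, "my-task-type",
				new UnicodeProperties(true), "My Nightly Job");

		// Next, set the schedule (cron expression, start/end dates,
		// cluster mode, active flag) via updateDispatchTrigger(); its
		// long parameter list differs per release, so it is not
		// reproduced here -- consult the Javadoc rather than guessing.

		return dispatchTrigger;
	}

}
```

The nice side effect of going through the service is that the job shows up in the Job Scheduler panel exactly like a UI-created entry.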
David H Nebinger (to Krzysztof Gołębiowski) — 1 year ago (edited):
You could use an @Activate method to handle the check for the existing job. The challenge, though: what if the admin really wants that initial job registration not to be there? You'd keep recreating it on them. IMHO it's better that they just re-create the job if they deleted or changed it by accident. Re: the upgrade process, yes, it only runs once, but you never want to tamper with the database. Instead, you can add another upgrade step, so 1.0 -> 1.1 -> 1.2, that uses the same upgrade process to recreate the job if it doesn't exist. That solves your problem and avoids database manipulation.

Christoph Rabel (to Krzysztof Gołębiowski) — 1 year ago (edited):
Be careful: the old way stops working at some point, somewhere in the u70s, I think (I found the commit once, but forgot again). We only noticed after upgrading, when suddenly the jobs weren't executed anymore. It took us quite a while to figure out that the code for calling the old listeners had been removed. As David said, addDispatchTrigger() works just fine to automatically create the triggers. JFYI, there is another way to do it: you could implement SchedulerJobConfiguration. https://github.com/liferay/liferay-portal/blob/master/modules/apps/trash/trash-web/src/main/java/com/liferay/trash/web/internal/scheduler/CheckEntrySchedulerJobConfiguration.java This has several disadvantages, though; the dispatcher is far better.

Krzysztof Gołębiowski (to Christoph Rabel) — 1 year ago (edited):
In my case, all the old jobs still work, and I'm currently on the 2023.Q3.2 release. During the upgrade, I looked into Liferay's code, checked how they currently do it, and rewrote my code the same way. The job class still extends BaseMessageListener, and the destination name is a @Component annotation property called "destination.name".
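For reference, the SchedulerJobConfiguration alternative Christoph mentions looks roughly like the linked trash-web example. A hedged sketch, modeled on that class; the one-day interval and component name are just illustrations:

```java
import com.liferay.petra.function.UnsafeRunnable;
import com.liferay.portal.kernel.scheduler.SchedulerJobConfiguration;
import com.liferay.portal.kernel.scheduler.TimeUnit;
import com.liferay.portal.kernel.scheduler.TriggerConfiguration;

import org.osgi.service.component.annotations.Component;

// Sketch modeled on CheckEntrySchedulerJobConfiguration (linked above).
@Component(service = SchedulerJobConfiguration.class)
public class MySchedulerJobConfiguration
	implements SchedulerJobConfiguration {

	@Override
	public UnsafeRunnable<Exception> getJobExecutor() {
		return () -> {
			// The actual work of the job goes here.
		};
	}

	@Override
	public TriggerConfiguration getTriggerConfiguration() {
		// Run once a day; a cron-expression variant also exists.
		return TriggerConfiguration.createTriggerConfiguration(
			1, TimeUnit.DAY);
	}

}
```

Note that, as David points out further down in this thread, jobs scheduled this way use StorageType.MEMORY_CLUSTERED and don't appear in the Job Scheduler panel the way dispatch triggers do.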
David H Nebinger (to Krzysztof Gołębiowski) — 1 year ago (edited):
You do save time/money/resources by not refactoring code while you don't need to. I do feel, however, that the benefits of the new way should not be overlooked, as they provide clear value, especially for administrators.

Christoph Rabel (to Krzysztof Gołębiowski) — 1 year ago (edited):
OK, maybe it didn't work "in between" for some versions, or we did something "unexpected". It doesn't matter, as long as it works for you.
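David's 1.0 -> 1.1 -> 1.2 suggestion is usually wired up with an UpgradeStepRegistrator: each registered step runs exactly once per schema version, so bumping the target version re-runs the job-creation logic without ever touching the Release_ table by hand. Sketch only — CreateJobUpgradeProcess is a hypothetical class, and the module also needs a Liferay-Require-SchemaVersion header in its bnd.bnd:

```java
import com.liferay.portal.kernel.upgrade.UpgradeProcess;
import com.liferay.portal.upgrade.registry.UpgradeStepRegistrator;

import org.osgi.service.component.annotations.Component;

@Component(service = UpgradeStepRegistrator.class)
public class MyModuleUpgradeStepRegistrator
	implements UpgradeStepRegistrator {

	@Override
	public void register(Registry registry) {
		// Runs once, when the module first reaches schema version 1.1.0.
		registry.register("1.0.0", "1.1.0", new CreateJobUpgradeProcess());

		// To re-run the same idempotent logic later, add another step:
		// registry.register("1.1.0", "1.2.0", new CreateJobUpgradeProcess());
	}

	// Hypothetical: looks up the trigger (e.g. by external reference
	// code) and recreates it only if it is missing.
	private static class CreateJobUpgradeProcess extends UpgradeProcess {

		@Override
		protected void doUpgrade() throws Exception {
			// fetch-or-create the DispatchTrigger here
		}

	}

}
```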
Chris Dawson — 11 months ago (edited):
Very useful blog post, thank you! Liking this new framework very much.
yinqiu luo — 6 months ago (edited):
How do I configure scheduled tasks to run in cluster mode, for example "StorageType.MEMORY_CLUSTERED"? I did see DispatchTaskClusterMode.java in the latest Liferay source code, but I don't know how to use it. Could you help and advise? Thanks!

David H Nebinger (to yinqiu luo) — 6 months ago (edited):
If you check my code, you'll see that StorageType is gone. Well, it's not truly gone; it's just gone from the old implementation. This new way is used to define a job, but not really to schedule it (or define the storage type). Scheduling the job (and determining the storage type) is handled separately, in one of a number of different ways:

* Via the UI, as seen in the post above. You can't set the storage type, but you can choose the cluster mode (if the DispatchTaskExecutor allows it) and choose Single to run only on one node.
* By creating a component that implements SchedulerJobConfiguration. When started, these will define and schedule the task based on the SJC information, but it uses StorageType.MEMORY_CLUSTERED for the tasks it schedules.
* Manually, by using the SchedulerEngineHelper component, injected into the component of your choosing, to schedule a job with a custom Trigger, StorageType, and Message.
yinqiu luo — 6 months ago (edited):
Since the latest DXP 7.4 does not implement some scheduler classes, I need to adjust my code to run the task with "SINGLE_NODE_MEMORY_CLUSTERED("single-node-memory-clustered", 3, StorageType.MEMORY_CLUSTERED)". I checked Liferay's simple sample:

    public class S7A3DispatchTaskExecutor extends BaseDispatchTaskExecutor {

        @Override
        public void doExecute(
                DispatchTrigger dispatchTrigger,
                DispatchTaskExecutorOutput dispatchTaskExecutorOutput)
            throws IOException, PortalException {

            _log.info(
                "getDispatchTaskClusterMode after: " +
                    dispatchTrigger.getDispatchTaskClusterMode());

            if (_log.isInfoEnabled()) {
                _log.info(
                    "Invoking #doExecute(DispatchTrigger, " +
                        "DispatchTaskExecutorOutput)");
            }
        }

    }

Can I add setDispatchTaskClusterMode inside doExecute(), like dispatchTrigger.setDispatchTaskClusterMode(3), or what should I do to make a task run with SINGLE_NODE_MEMORY_CLUSTERED? Please help and advise. Thanks, Yinqiu Luo

David H Nebinger (to yinqiu luo) — 6 months ago (edited):
Right, but my reply to your previous comment indicates why it isn't there. You can't control the storage type from your DispatchTaskExecutor; it only defines the job, it doesn't schedule the job or assign a trigger. You may just want to define an @Component with an @Activate method handler and an @Reference to inject an instance of SchedulerEngineHelper. Inside the activate method, you can define a trigger (e.g., one that runs in, say, 30 seconds and then every 24 hours, or whatever the requirement might be), and then use the SchedulerEngineHelper to validate the trigger and desired storage type and schedule the job. Remember, the key to the new implementation is separating the job definition (the DispatchTaskExecutor implementation) from the trigger and schedule details (to decouple them).
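A rough sketch of the component David describes. All names here are placeholders, and the createTrigger()/schedule() signatures shown are approximate — they have shifted between 7.x releases — so check TriggerFactory and SchedulerEngineHelper in your DXP version before relying on this:

```java
import java.util.Date;

import com.liferay.portal.kernel.messaging.Message;
import com.liferay.portal.kernel.scheduler.SchedulerEngineHelper;
import com.liferay.portal.kernel.scheduler.StorageType;
import com.liferay.portal.kernel.scheduler.Trigger;
import com.liferay.portal.kernel.scheduler.TriggerFactory;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(service = MyJobScheduler.class)
public class MyJobScheduler {

	@Activate
	protected void activate() throws Exception {
		// Approximate factory signature: job name, group name, start
		// date, end date, cron expression -- verify in your release.
		Trigger trigger = _triggerFactory.createTrigger(
			"my-job", "my-group", new Date(), null, "0 0 1 * * ?");

		// Schedule with an explicit storage type; the exact schedule()
		// overloads differ between 7.x versions.
		_schedulerEngineHelper.schedule(
			trigger, StorageType.MEMORY_CLUSTERED, "My nightly job",
			"my/destination", new Message());
	}

	@Reference
	private SchedulerEngineHelper _schedulerEngineHelper;

	@Reference
	private TriggerFactory _triggerFactory;

}
```

A listener registered on "my/destination" then receives the message each time the trigger fires; unscheduling in an @Deactivate method is a good idea so redeploys don't leave stale jobs behind.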