Liferay 7.4 Scheduled Tasks

Liferay 7.4 has a new Job Scheduler framework that is easier to use than the older way for creating scheduled tasks. Let's review the Job Scheduler framework and see the new way in action...

Introduction

So I kind of have a "famous" blog post on Liferay Scheduled Tasks: https://liferay.dev/blogs/-/blogs/liferay-7-ce-liferay-dxp-scheduled-tasks.

I say "famous" because it has been used by many folks to create scheduled tasks for Liferay 7, but it has also been the source of some bugs (e.g., undeploying from a cluster could cancel a job outright even when that wasn't intended, an API change forced some rework, etc.). Even Liferay Support fielded tickets about issues stemming from implementations based upon the blog post.

Generally, though, Liferay recognized that there were issues with scheduling jobs this way, but feedback also made it clear that scheduled jobs were important to implementors and that handling them entirely in code limited runtime configuration options.

So in 7.4 they created a new Job Scheduler framework and, I have to say, working with it is so much easier than the old way that I thought I'd write a new blog highlighting how to use the new framework and how it differs from the old implementation...

Job Scheduler Framework

First, it's important to share the official documentation for the Job Scheduler framework.

The entry point is going to be https://learn.liferay.com/w/dxp/building-applications/core-frameworks/job-scheduler-framework, but for development of tasks, we're going to focus on https://learn.liferay.com/w/dxp/building-applications/core-frameworks/job-scheduler-framework/understanding-the-job-scheduler-framework.

Now we can dive in. Let's start by creating the same type of job that I built in the first blog, basically just logging the fact that the job has executed. Check https://liferay.dev/blogs/-/blogs/liferay-7-ce-liferay-dxp-scheduled-tasks#update-05182017 for the implementation of MyTaskMessageListener, and note that the class as shown is 130 lines long.

Now, compare to the same class implemented the new way:

@Component(
  immediate = true,
  property = {
    /* can use a resource bundle key for the name */
    "dispatch.task.executor.name=My Scheduled Job",
    "dispatch.task.executor.type=dnebing.job-01"
  },
  service = DispatchTaskExecutor.class
)
public class MyScheduledJob extends BaseDispatchTaskExecutor {
  /**
   * doExecute: Invoked to complete the work of the scheduled task.
   * @param dispatchTrigger Trigger for the scheduled job.
   * @param dispatchTaskExecutorOutput Used to send details for an admin to review 
   *   for job status.
   * @throws Exception in case of failure.
   */
  @Override
  public void doExecute(DispatchTrigger dispatchTrigger, 
      DispatchTaskExecutorOutput dispatchTaskExecutorOutput) throws Exception {
        
    _log.info("Scheduled task executed...");
        
    dispatchTaskExecutorOutput.setOutput("Scheduled task executed successfully.");
  }

  /**
   * getName: Returns the name for the scheduled job.
   * @return String The name for the job, can be a message key for a resource 
   *    bundle.
   */
  @Override
  public String getName() {
    return "My Scheduled Job";
  }

  private static final Logger _log = LoggerFactory.getLogger(MyScheduledJob.class);
}

This implementation has been reduced to just 37 lines, and it also eliminates all of the manual scheduling code for the job.

Although you could implement DispatchTaskExecutor directly, I recommend extending BaseDispatchTaskExecutor for your implementations. Of course, you should review its code to see what it is doing for you (and confirm you accept what it is doing), but it should be a modest time and code saver.
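To see why the base class helps, here's a minimal sketch of the template-method pattern a class like BaseDispatchTaskExecutor applies. Note these are simplified stand-in types of my own, not the real Liferay interfaces: the framework-facing method wraps your doExecute() with shared failure handling so each concrete task only supplies business logic.

```java
// Simplified stand-ins for the Liferay interfaces, for illustration only;
// these are NOT the real DispatchTaskExecutor types.
interface TaskOutput {

	void setError(String error);

	void setOutput(String output);
}

// Template-method base: execute() wraps the subclass's doExecute()
// with shared failure handling.
abstract class BaseTaskExecutor {

	public final void execute(TaskOutput taskOutput) {
		try {
			doExecute(taskOutput);
		}
		catch (Exception exception) {

			// Failure handling lives in one place instead of in every task

			taskOutput.setError("Task failed: " + exception.getMessage());
		}
	}

	protected abstract void doExecute(TaskOutput taskOutput) throws Exception;
}

// A concrete task only supplies the business logic
class LoggingTask extends BaseTaskExecutor {

	@Override
	protected void doExecute(TaskOutput taskOutput) {
		taskOutput.setOutput("Scheduled task executed successfully.");
	}
}

// Simple in-memory recorder used to observe the flow
class RecordingOutput implements TaskOutput {

	String error;
	String output;

	@Override
	public void setError(String error) {
		this.error = error;
	}

	@Override
	public void setOutput(String output) {
		this.output = output;
	}
}
```

The real base class does more than this, of course, but the shape is the same: your subclass never worries about the surrounding bookkeeping.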
The value for dispatch.task.executor.type must be unique across all implementations, including Liferay's own, so be certain to pick values that will not conflict with Liferay's or with your other implementations.

Using The DispatchTaskExecutor

So after you build and deploy, you're free to use the new executor to define jobs.

Navigate to the Waffle menu -> Control Panel -> Job Scheduler (under Configuration section), and you should be able to add a new job:

When you choose your new job, you get to define the name and optional configuration for the job:

When you hit Save, your new job is basically set up:

Note here the Task Executor Type - this must be unique amongst all defined DispatchTaskExecutors, so be sure to pick values in your component which will make it unique.

When you click the Run Now button and then refresh the page, you'll see that the job completed successfully:

When you click into the job, the middle tab will get you to the logs for the run:

And when you click into the log, you can see the detail that our task generated:

Finally, on the last tab of the job, the Job Scheduler Trigger tab, you can define the cron expression for when this job should execute:

We're definitely doing a lot more in the UI here than we could have done previously, but this will help us in numerous ways...

Dispatch Trigger

The first argument for the doExecute() method of your DispatchTaskExecutor is the DispatchTrigger for the execution call. You can access a lot of details about the trigger that is bound to the current job run, including the task status, the next run time for the job, the cron expression, and many other potentially useful details.

But the best thing that you get with the DispatchTrigger has to be the job properties...

Job Properties

Each scheduled job can have properties that can be used to control the job code. I changed my doExecute() method to:

@Override
public void doExecute(DispatchTrigger dispatchTrigger,
    DispatchTaskExecutorOutput dispatchTaskExecutorOutput) throws Exception {
  _log.info("Scheduled task executed...");

  UnicodeProperties props = dispatchTrigger
    .getDispatchTaskSettingsUnicodeProperties();

  dispatchTaskExecutorOutput.setOutput("Scheduled task executed successfully.");

  if (props != null) {
    if (GetterUtil.getBoolean(props.getProperty("log.alternate.message"), false)) {
      dispatchTaskExecutorOutput.setOutput("This is the alternate message.");
    }
  }
}

It's a contrived example, of course, but if the property is set to true, then I want to output an alternate message.

I'm using GetterUtil here to convert the String property value into an expected type. Using this utility class you can eliminate all kinds of parsing errors that can occur if someone enters an invalid property value. I recommend using GetterUtil for conversions like this as much as possible.
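The lenient semantics are easy to picture: a missing or malformed value falls back to the supplied default rather than throwing. Here's a small stand-alone sketch mirroring that behavior (my own stand-in class, not the real GetterUtil, whose accepted token set may differ):

```java
// Stand-in mimicking GetterUtil-style lenient conversion: a missing or
// malformed value yields the default instead of a parsing exception.
final class SafeGetter {

	public static boolean getBoolean(String value, boolean defaultValue) {
		if (value == null) {
			return defaultValue;
		}

		String trimmed = value.trim();

		if (trimmed.equalsIgnoreCase("true")) {
			return true;
		}

		if (trimmed.equalsIgnoreCase("false")) {
			return false;
		}

		return defaultValue;
	}

	public static int getInteger(String value, int defaultValue) {
		if (value == null) {
			return defaultValue;
		}

		try {
			return Integer.parseInt(value.trim());
		}
		catch (NumberFormatException numberFormatException) {
			return defaultValue;
		}
	}

	private SafeGetter() {
	}
}
```

Because an administrator types these property values by hand in the UI, defensive conversion like this keeps a typo from killing the whole job run.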

After building and deploying the code, I then edited the job with the following:

With this done, when I re-run the job, my new log message is:

The idea behind this, though, is that I can now make my job generic in certain ways and leverage the properties tied to the job definition to control what it does at execution time.

So imagine that you wanted to have a scheduled job that would post a list of new blogs with a specific category to a given Slack channel, but the job itself would have different pairings, so blogs with the Finance category would be posted to the #d-finance Slack channel, and you're actually going to have a number of these.

Now you could build one job implementation that maintained the map of categories to channels and, at execution time, would iterate over each category, find new blogs, then post messages to the corresponding channel.

But imagine if that mapping had to change? A new category is added, or a channel name is changed... You'd be talking a code change to make this happen.

Instead, create a simpler DispatchTaskExecutor that uses properties, blog.category and slack.channel. The task implementation uses these properties to find the blogs with the right category and sends the message to the designated slack channel. All you need to do is define the separate jobs for each of your category/channel mappings.

Then, if a category is added, you're just adding a new job instance. If a channel name changes, you're just editing the properties for the job and it will start posting there instead.

Thinking like this will allow you to simplify your implementation (it's only concerned with one category and one channel) and makes the job controllable in the UI instead of in code (no development necessary when a category or channel changes).
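The idea can be sketched with plain java.util.Properties standing in for the job's settings. The blog query and the Slack call are stubbed out here, and the property names blog.category and slack.channel are just the ones proposed above:

```java
import java.util.Properties;

// One generic task implementation; each Job Scheduler entry supplies its
// own category/channel pairing through its job properties.
class BlogToSlackTask {

	// Stubbed: a real task would query the blogs and post to Slack here
	public String run(Properties jobProperties) {
		String category = jobProperties.getProperty(
			"blog.category", "Uncategorized");
		String channel = jobProperties.getProperty(
			"slack.channel", "#general");

		return "Posting new '" + category + "' blogs to " + channel;
	}
}
```

One Finance job would then carry blog.category=Finance and slack.channel=#d-finance in its properties, while a Marketing job carries its own pair; the code never changes.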

What Can I Output?

So I wondered if there were any limits on the output, and I found that it is really darn flexible:

Here I've changed the output to include HTML fragments with paragraphs, a table (only one row and two cells, but you can do what you need), a bulleted list...

The DispatchTaskExecutorOutput is the second argument for the DispatchTaskExecutor's execute() method (or BaseDispatchTaskExecutor's doExecute() method), and it offers the following methods:

public String getError();
public String getOutput();

public void setError(byte[] error);
public void setError(String error);

public void setOutput(byte[] output);
public void setOutput(String output);

So you can set output and/or error, you can pass a String or a byte[] (which will be converted into a UTF-8 String), and both fields are CLOBs, so they are essentially unlimited in size.
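For the byte[] variants, the conversion described is a standard UTF-8 decode. A quick illustration of that behavior in plain Java (my own demo class, not a framework API):

```java
import java.nio.charset.StandardCharsets;

// Demonstrates the byte[]-to-String conversion described above:
// the bytes are decoded as UTF-8, so non-ASCII text survives intact.
class OutputBytesDemo {

	public static String decode(byte[] output) {
		return new String(output, StandardCharsets.UTF_8);
	}
}
```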

That said, you might find it quite challenging to review the log of your task if you're generating tons of string data into either or both of these fields; plus, the data is stored in the database, so it can inflate the size of your database.

Conclusion

So the new Job Scheduler framework greatly simplifies our work as developers when defining new scheduled tasks.

Not only that, but we get a ton of benefits from implementing our tasks this way:

  • They can be configurable in the UI.
  • They can generate elaborate output and error data that we can read in the UI.
  • They can be executed in the UI and the previous run times and history can be reviewed.
  • They can be easily scheduled directly in the UI.
  • All of these changes can be made without involving a developer.

So yeah, some of the things we could control as developers under the old approach are now lost to us, but the benefits of the new way should more than make up for that loss...

Comments

Thank you for this blog post, I haven't seen that feature yet. We've already migrated our scheduler tasks to 7.4 in the old-fashioned way, but this looks much better! Whenever I find some spare time, I'll try to migrate.

Is it possible (I'm sure it is!) to automatically create an entry in the Job Scheduler panel? If there was a way to do it when the module is deployed (@Activate ?), it would really make the feature complete and the module fully self-contained.

By the way, I noticed that the first link in your post (directing to the old blog post) links to the Chinese version of liferay.dev.

So actually, the only thing we've defined here is the DispatchTaskExecutor; it's not even a scheduled job at the point this gets created.

There are services surrounding the new framework, including the DispatchTriggerLocalService. If I were going to automatically schedule jobs, I would use an UpgradeProcess implementation that leveraged the DispatchTriggerLocalService's addDispatchTrigger() to create the new job and then updateDispatchTrigger() to set the scheduling details (start/stop dates, cron expression, etc.).

Ok, thanks for the hint, I'll try the DispatchTriggerLocalService then.

In terms of the first job creation, I actually prefer to trigger this code on every deployment. It checks whether the job already exists (using an external reference code if one exists, or another kind of key) and creates it if not. That way, if anyone breaks it from the Control Panel or makes some other modification, I can just ask them to remove the job and redeploy the module. The UpgradeProcess, though (from what I remember), runs only once, and rerunning it requires deleting an entry from the database.

You could use an @Activate to handle the check for the existing job, the challenge here though is what if the admin really wants that initial job registration not to be there? You'd keep recreating it on them. IMHO it's better that they just re-create the job if they deleted or changed it by accident.

RE: The upgrade process, yes it only runs once, but you never want to tamper with the database. Instead you can add another upgrade step, so 1.0 -> 1.1 -> 1.2 that uses the same upgrade process to recreate the job if it doesn't exist. Solves your problem and avoids database manipulation.

Be careful, the old way stops working at some point. Somewhere in the u70s, I think. (I found the commit, but forgot again)

We only noticed after upgrading, suddenly the jobs weren't executed anymore. It took us quite a while to figure out that the code for calling the old listeners had been removed.

As David said, addDispatchTrigger works just fine to automatically create the triggers.

JFYI: There is another way to do that, you could implement SchedulerJobConfiguration.

https://github.com/liferay/liferay-portal/blob/master/modules/apps/trash/trash-web/src/main/java/com/liferay/trash/web/internal/scheduler/CheckEntrySchedulerJobConfiguration.java

This has several disadvantages though, the dispatcher is far better.

In my case, all the old jobs still work, and I'm currently on 2023.Q3.2 release. During the upgrade, I looked into Liferay code, checked how they currently do it, and rewrote my code in the same way.

The job class still extends the BaseMessageListener and the destination name is a @Component annotation parameter called "destination.name".

You do save time/money/resources by not refactoring code when you don't need to.

I do however feel like the benefits of the new way should not be overlooked as they provide clear value, especially for administrators.