Liferay 7 CE/Liferay DXP Scheduled Tasks

How to create scheduled tasks for Liferay CE or DXP from 7.0 - 7.3.

In Liferay 6.x, scheduled tasks were kind of easy to implement.

I mean, you'd implement a class that implements the Liferay Message Bus's MessageListener interface, add the details in the <scheduler-entry /> sections of your liferay-portlet.xml file, and you'd be off to the races.

Well, things are not so simple with Liferay 7 CE / Liferay DXP. In fact, I couldn't find a reference anywhere on dev.liferay.com, so I thought I'd whip up a quick blog on them.

Of course I'm going to pursue this as an OSGi-only solution.

StorageType Information

Before we schedule a job, we should first discuss the supported StorageTypes. Liferay has three:

  • StorageType.MEMORY_CLUSTERED - This is the default storage type and the one you'll typically want to shoot for. This storage type combines two aspects, MEMORY and CLUSTERED. For MEMORY, that means the job information (next run, etc.) is only held in memory and is not persisted anywhere. For CLUSTERED, that means the job is cluster-aware and will only run on one node in the cluster.
  • StorageType.MEMORY - For this storage type, no job information is persisted. The important part here is that you may miss some job runs in cases of outages. For example, if you have a job to run on the 1st of every month but you have a big outage and the server/cluster is down on the 1st, the job will not run. And unlike in PERSISTED, when the server comes up the job will not run even though it was missed. Note that this storage type is not cluster-aware, so your job will run on every node in the cluster which could cause duplicate runs.
  • StorageType.PERSISTED - This is the opposite of MEMORY as job details will be persisted in the database. For the missed job above, when the server comes up on the 2nd it will realize the job was missed and will immediately process the job. Note that this storage type relies on cluster-support facilities in the storage engine (Quartz's implementation discussed here: http://www.quartz-scheduler.org/documentation/quartz-2.x/configuration/ConfigJDBCJobStoreClustering.html).
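
Under the hood these are just enum constants on com.liferay.portal.kernel.scheduler.StorageType; as a quick summary in code form (the wrapper class that actually carries the choice comes later in the post):

import com.liferay.portal.kernel.scheduler.StorageType;

// MEMORY_CLUSTERED: one node in the cluster runs the job, nothing is persisted (the default).
// MEMORY:           every node runs the job, nothing is persisted.
// PERSISTED:        one node runs the job, and missed fire times are recovered from the database.
StorageType storageType = StorageType.MEMORY_CLUSTERED;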

So if you're in a cluster, you'll want to stick with either MEMORY_CLUSTERED or PERSISTED to ensure your job doesn't run on every node (e.g. if you're running a report to generate a PDF and email it, you wouldn't want your 4-node cluster generating the report 4 times and emailing 4 copies). You may want to stick with the MEMORY type when you have, say, an administrative task that needs to run regularly on all nodes in your cluster.

Choosing between MEMORY[_CLUSTERED] and PERSISTED comes down to how resilient you need to be in the case of missed job fire times. For example, if that monthly report is mission critical, you might want to elect for PERSISTED to ensure the report goes out as soon as the cluster is back up and ready to pick up the missed job. If it is not mission critical, it is easier to stick with one of the MEMORY options.

Finally, even if you're not currently in a cluster, I would encourage you to make choices as if you were running in a cluster right from the beginning. The last thing you want to be doing when you start scaling up your environment is figuring out why tasks that ran reliably on a single server are no longer running the way they used to.

Adding StorageType To SchedulerEntry

We'll be handling our scheduling shortly, but for now we'll worry about the SchedulerEntry. The SchedulerEntry object contains most of the details about the scheduled task to be defined, but it does not have details about the StorageType. Remember that MEMORY_CLUSTERED is the default, so if you're going to be using that type, you can skip this section. But to be consistent, you can still apply the changes in this section even for the MEMORY_CLUSTERED type.

To add StorageType details to our SchedulerEntry, we need to make our SchedulerEntry implementation class implement the com.liferay.portal.kernel.scheduler.StorageTypeAware interface. When Liferay's scheduler implementation classes identify the StorageType to use, they start with MEMORY_CLUSTERED and will only use another StorageType if the SchedulerEntry implements this interface.

So let's start by defining a SchedulerEntry wrapper class that implements the SchedulerEntry interface as well as the StorageTypeAware interface:

import com.liferay.portal.kernel.scheduler.SchedulerEntry;
import com.liferay.portal.kernel.scheduler.SchedulerEntryImpl;
import com.liferay.portal.kernel.scheduler.StorageType;
import com.liferay.portal.kernel.scheduler.StorageTypeAware;
import com.liferay.portal.kernel.scheduler.Trigger;

public class StorageTypeAwareSchedulerEntryImpl extends SchedulerEntryImpl implements SchedulerEntry, StorageTypeAware {

  /**
   * StorageTypeAwareSchedulerEntryImpl: Constructor for the class.
   * @param schedulerEntry
   */
  public StorageTypeAwareSchedulerEntryImpl(final SchedulerEntryImpl schedulerEntry) {
    super();

    _schedulerEntry = schedulerEntry;

    // use the same default that Liferay uses.
    _storageType = StorageType.MEMORY_CLUSTERED;
  }

  /**
   * StorageTypeAwareSchedulerEntryImpl: Constructor for the class.
   * @param schedulerEntry
   * @param storageType
   */
  public StorageTypeAwareSchedulerEntryImpl(final SchedulerEntryImpl schedulerEntry, final StorageType storageType) {
    super();

    _schedulerEntry = schedulerEntry;
    _storageType = storageType;
  }

  @Override
  public String getDescription() {
    return _schedulerEntry.getDescription();
  }

  @Override
  public String getEventListenerClass() {
    return _schedulerEntry.getEventListenerClass();
  }

  @Override
  public StorageType getStorageType() {
    return _storageType;
  }

  @Override
  public Trigger getTrigger() {
    return _schedulerEntry.getTrigger();
  }

  public void setDescription(final String description) {
    _schedulerEntry.setDescription(description);
  }
  public void setTrigger(final Trigger trigger) {
    _schedulerEntry.setTrigger(trigger);
  }
  public void setEventListenerClass(final String eventListenerClass) {
    _schedulerEntry.setEventListenerClass(eventListenerClass);
  }
  
  private SchedulerEntryImpl _schedulerEntry;
  private StorageType _storageType;
}

Now you can use this class to wrap an existing SchedulerEntryImpl while adding the StorageTypeAware implementation.
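
For example, a minimal usage sketch (assuming an injected TriggerFactory, and using the MyTaskMessageListener class we'll define in the next section):

// build a trigger and a standard entry, then wrap the entry so the
// scheduler engine sees a non-default StorageType.
String listenerClass = MyTaskMessageListener.class.getName();
Trigger jobTrigger = triggerFactory.createTrigger(listenerClass, listenerClass, new Date(), null, "0 0 0 * * ?");

SchedulerEntryImpl schedulerEntry = new SchedulerEntryImpl();
schedulerEntry.setEventListenerClass(listenerClass);
schedulerEntry.setTrigger(jobTrigger);

SchedulerEntry persistedEntry = new StorageTypeAwareSchedulerEntryImpl(schedulerEntry, StorageType.PERSISTED);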

Defining The Scheduled Task

NOTE: If you're using DXP FixPack 14 or later or Liferay 7 CE GA4 or later, jump down to the end of the blog post for a necessary change due to the deprecation of the BaseSchedulerEntryMessageListener class.

We have all of the pieces now to build out the code for a scheduled task in Liferay 7 CE / Liferay DXP:

import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;
import com.liferay.portal.kernel.messaging.BaseSchedulerEntryMessageListener;
import com.liferay.portal.kernel.messaging.DestinationNames;
import com.liferay.portal.kernel.messaging.Message;
import com.liferay.portal.kernel.module.framework.ModuleServiceLifecycle;
import com.liferay.portal.kernel.scheduler.SchedulerEngineHelper;
import com.liferay.portal.kernel.scheduler.SchedulerException;
import com.liferay.portal.kernel.scheduler.StorageType;
import com.liferay.portal.kernel.scheduler.StorageTypeAware;
import com.liferay.portal.kernel.scheduler.Trigger;
import com.liferay.portal.kernel.scheduler.TriggerFactory;
import com.liferay.portal.kernel.util.GetterUtil;

import java.util.Date;
import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Modified;
import org.osgi.service.component.annotations.Reference;

@Component(
  immediate = true, property = {"cron.expression=0 0 0 * * ?"},
  service = MyTaskMessageListener.class
)
public class MyTaskMessageListener extends BaseSchedulerEntryMessageListener {

  /**
   * doReceive: This is where the magic happens, this is where you want to do the work for
   * the scheduled job.
   * @param message This is the message object tied to the job.  If you stored data with the
   *                job, the message will contain that data.   
   * @throws Exception In case there is some sort of error processing the task.
   */
  @Override
  protected void doReceive(Message message) throws Exception {

    _log.info("Scheduled task executed...");
  }

  /**
   * activate: Called whenever the properties for the component change (ala Config Admin)
   * or OSGi is activating the component.
   * @param properties The properties map from Config Admin.
   * @throws SchedulerException in case of error.
   */
  @Activate
  @Modified
  protected void activate(Map<String,Object> properties) throws SchedulerException {

    // extract the cron expression from the properties
    String cronExpression = GetterUtil.getString(properties.get("cron.expression"), _DEFAULT_CRON_EXPRESSION);

    // create a new trigger definition for the job.
    String listenerClass = getEventListenerClass();
    Trigger jobTrigger = _triggerFactory.createTrigger(listenerClass, listenerClass, new Date(), null, cronExpression);

    // wrap the current scheduler entry in our new wrapper.
    // use the persisted storage type and set the wrapper back to the class field.
    schedulerEntryImpl = new StorageTypeAwareSchedulerEntryImpl(schedulerEntryImpl, StorageType.PERSISTED);

    // update the trigger for the scheduled job.
    schedulerEntryImpl.setTrigger(jobTrigger);

    // if we were initialized (i.e. if this is called due to CA modification)
    if (_initialized) {
      // first deactivate the current job before we schedule.
      deactivate();
    }

    // register the scheduled task
    _schedulerEngineHelper.register(this, schedulerEntryImpl, DestinationNames.SCHEDULER_DISPATCH);

    // set the initialized flag.
    _initialized = true;
  }

  /**
   * deactivate: Called when OSGi is deactivating the component.
   */
  @Deactivate
  protected void deactivate() {
    // if we previously were initialized
    if (_initialized) {
      // unschedule the job so it is cleaned up
      try {
        _schedulerEngineHelper.unschedule(schedulerEntryImpl, getStorageType());
      } catch (SchedulerException se) {
        if (_log.isWarnEnabled()) {
          _log.warn("Unable to unschedule trigger", se);
        }
      }

      // unregister this listener
      _schedulerEngineHelper.unregister(this);
    }
    
    // clear the initialized flag
    _initialized = false;
  }

  /**
   * getStorageType: Utility method to get the storage type from the scheduler entry wrapper.
   * @return StorageType The storage type to use.
   */
  protected StorageType getStorageType() {
    if (schedulerEntryImpl instanceof StorageTypeAware) {
      return ((StorageTypeAware) schedulerEntryImpl).getStorageType();
    }
    
    return StorageType.MEMORY_CLUSTERED;
  }
  
  /**
   * setModuleServiceLifecycle: So this requires some explanation...
   * 
   * OSGi will start a component once all of its dependencies are satisfied.  However, there
   * are times where you want to hold off until the portal is completely ready to go.
   * 
   * This reference declaration is waiting for the ModuleServiceLifecycle's PORTAL_INITIALIZED
   * component which will not be available until, surprise surprise, the portal has finished
   * initializing.
   * 
   * With this reference, this component activation waits until portal initialization has completed.
   * @param moduleServiceLifecycle
   */
  @Reference(target = ModuleServiceLifecycle.PORTAL_INITIALIZED, unbind = "-")
  protected void setModuleServiceLifecycle(ModuleServiceLifecycle moduleServiceLifecycle) {
  }

  @Reference(unbind = "-")
  protected void setTriggerFactory(TriggerFactory triggerFactory) {
    _triggerFactory = triggerFactory;
  }

  @Reference(unbind = "-")
  protected void setSchedulerEngineHelper(SchedulerEngineHelper schedulerEngineHelper) {
    _schedulerEngineHelper = schedulerEngineHelper;
  }

  // the default cron expression is to run daily at midnight
  private static final String _DEFAULT_CRON_EXPRESSION = "0 0 0 * * ?";

  private static final Log _log = LogFactoryUtil.getLog(MyTaskMessageListener.class);

  private volatile boolean _initialized;
  private TriggerFactory _triggerFactory;
  private SchedulerEngineHelper _schedulerEngineHelper;
}

So the code here is kinda thick, but I've documented it as fully as I can.

The base class, BaseSchedulerEntryMessageListener, is a common base class for all schedule-based message listeners. It is pretty short, so you are encouraged to open it up in the source and peruse it to see what few services it provides.
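
If you don't have the source handy, the gist is roughly this (a paraphrased sketch from memory, not the verbatim class, so treat it as an approximation):

// Paraphrased sketch only; consult the actual Liferay source for the real class.
public abstract class BaseSchedulerEntryMessageListener extends BaseMessageListener {

  // subclasses configure this entry; the event listener class defaults to the
  // concrete subclass name, which is why activate() above can just reuse it.
  protected SchedulerEntryImpl schedulerEntryImpl = new SchedulerEntryImpl();

  public BaseSchedulerEntryMessageListener() {
    schedulerEntryImpl.setEventListenerClass(getClass().getName());
  }
}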

The bulk of the code you can use as-is. You'll probably want to come up with your own default cron expression constant and property so you're not running at midnight (and that's midnight in your app server's default timezone; cron expressions are evaluated against the timezone your app server is configured to run in).
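
For reference, here are a few alternative Quartz cron expressions you might swap in for your own default (values purely illustrative):

// Quartz cron format: second minute hour day-of-month month day-of-week
private static final String _EVERY_15_MINUTES = "0 */15 * * * ?";
private static final String _WEEKDAYS_AT_6_AM = "0 0 6 ? * MON-FRI";
private static final String _FIRST_OF_MONTH_AT_4_30_AM = "0 30 4 1 * ?";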

And you'll certainly want to fill out the doReceive() method to actually build your scheduled task logic.
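
As a sketch of what that might look like when the job carries data (the "reportId" key is purely hypothetical):

@Override
protected void doReceive(Message message) throws Exception {
  // values stored with the job come back on the message; "reportId" is
  // a hypothetical key used only for illustration.
  long reportId = message.getLong("reportId");

  if (_log.isInfoEnabled()) {
    _log.info("Generating report " + reportId);
  }

  // ... your actual business logic goes here ...
}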

One More Thing...

One thing to keep in mind, especially with the MEMORY and MEMORY_CLUSTERED storage types: Liferay does not do anything to prevent running the same jobs multiple times.

For example, say you have a job that takes 10 minutes to run, but you schedule it to run every 5 minutes. There's no way the job can complete in 5 minutes, so multiple jobs start piling up. Sure, there's a pool backing the implementation to ensure the system doesn't run away and die on you, but even that might lead to disastrous results.

So take care in your scheduling. Know what the worst case scenario is for timing your jobs and use that information to define a schedule that will work even in this situation.

You may even want to consider some sort of locking or semaphore mechanism to prevent the same job running in parallel at all.
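
Within a single JVM, even a simple AtomicBoolean guard will keep overlapping fire times from stacking up (a cluster-wide guard needs something stronger, such as a database lock); a minimal sketch:

import java.util.concurrent.atomic.AtomicBoolean;

// inside the message listener class...
private final AtomicBoolean _running = new AtomicBoolean(false);

@Override
protected void doReceive(Message message) throws Exception {
  // skip this fire time entirely if the previous run is still in progress.
  if (!_running.compareAndSet(false, true)) {
    if (_log.isWarnEnabled()) {
      _log.warn("Previous run still in progress, skipping this fire time.");
    }

    return;
  }

  try {
    // ... the actual job logic ...
  }
  finally {
    _running.set(false);
  }
}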

Just something to keep in mind...

Conclusion

So this is how all of those scheduled tasks from liferay-portlet.xml get migrated into the OSGi environment. Using this technique, you now have a migration path for this aspect of your legacy portlet code.

Update 05/18/2017

So I was contacted today about the use of the BaseSchedulerEntryMessageListener class as the base class for the message listener. Apparently this class has become deprecated as of DXP FP 13 as well as the upcoming GA4 release.

The only guidance I was given for updating the code happens to be the same guidance that I give to most folks wanting to know how to do something in Liferay - find an example in the Liferay source.

After reviewing various Liferay examples, we will need to change the parent class for our implementation and modify the activation code.

So now our message listener class is:

import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;
import com.liferay.portal.kernel.messaging.BaseMessageListener;
import com.liferay.portal.kernel.messaging.DestinationNames;
import com.liferay.portal.kernel.messaging.Message;
import com.liferay.portal.kernel.module.framework.ModuleServiceLifecycle;
import com.liferay.portal.kernel.scheduler.SchedulerEngineHelper;
import com.liferay.portal.kernel.scheduler.SchedulerEntryImpl;
import com.liferay.portal.kernel.scheduler.SchedulerException;
import com.liferay.portal.kernel.scheduler.StorageType;
import com.liferay.portal.kernel.scheduler.StorageTypeAware;
import com.liferay.portal.kernel.scheduler.Trigger;
import com.liferay.portal.kernel.scheduler.TriggerFactory;
import com.liferay.portal.kernel.util.GetterUtil;

import java.util.Date;
import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Modified;
import org.osgi.service.component.annotations.Reference;

@Component(
  immediate = true, property = {"cron.expression=0 0 0 * * ?"},
  service = MyTaskMessageListener.class
)
public class MyTaskMessageListener extends BaseMessageListener {

  /**
   * doReceive: This is where the magic happens, this is where you want to do the work for
   * the scheduled job.
   * @param message This is the message object tied to the job.  If you stored data with the
   *                job, the message will contain that data.   
   * @throws Exception In case there is some sort of error processing the task.
   */
  @Override
  protected void doReceive(Message message) throws Exception {

    _log.info("Scheduled task executed...");
  }

  /**
   * activate: Called whenever the properties for the component change (ala Config Admin)
   * or OSGi is activating the component.
   * @param properties The properties map from Config Admin.
   * @throws SchedulerException in case of error.
   */
  @Activate
  @Modified
  protected void activate(Map<String,Object> properties) throws SchedulerException {

    // extract the cron expression from the properties
    String cronExpression = GetterUtil.getString(properties.get("cron.expression"), _DEFAULT_CRON_EXPRESSION);

    // create a new trigger definition for the job.
    String listenerClass = getClass().getName();
    Trigger jobTrigger = _triggerFactory.createTrigger(listenerClass, listenerClass, new Date(), null, cronExpression);

    // create a scheduler entry and wrap it in our new wrapper.
    // use the persisted storage type and set the wrapper back to the class field.
    _schedulerEntryImpl = new SchedulerEntryImpl(getClass().getName(), jobTrigger);
    _schedulerEntryImpl = new StorageTypeAwareSchedulerEntryImpl(_schedulerEntryImpl, StorageType.PERSISTED);

    // update the trigger for the scheduled job.
    _schedulerEntryImpl.setTrigger(jobTrigger);

    // if we were initialized (i.e. if this is called due to CA modification)
    if (_initialized) {
      // first deactivate the current job before we schedule.
      deactivate();
    }

    // register the scheduled task
    _schedulerEngineHelper.register(this, _schedulerEntryImpl, DestinationNames.SCHEDULER_DISPATCH);

    // set the initialized flag.
    _initialized = true;
  }

  /**
   * deactivate: Called when OSGi is deactivating the component.
   */
  @Deactivate
  protected void deactivate() {
    // if we previously were initialized
    if (_initialized) {
      // unschedule the job so it is cleaned up
      try {
        _schedulerEngineHelper.unschedule(_schedulerEntryImpl, getStorageType());
      } catch (SchedulerException se) {
        if (_log.isWarnEnabled()) {
          _log.warn("Unable to unschedule trigger", se);
        }
      }

      // unregister this listener
      _schedulerEngineHelper.unregister(this);
    }
    
    // clear the initialized flag
    _initialized = false;
  }

  /**
   * getStorageType: Utility method to get the storage type from the scheduler entry wrapper.
   * @return StorageType The storage type to use.
   */
  protected StorageType getStorageType() {
    if (_schedulerEntryImpl instanceof StorageTypeAware) {
      return ((StorageTypeAware) _schedulerEntryImpl).getStorageType();
    }
    
    return StorageType.MEMORY_CLUSTERED;
  }
  
  /**
   * setModuleServiceLifecycle: So this requires some explanation...
   * 
   * OSGi will start a component once all of its dependencies are satisfied.  However, there
   * are times where you want to hold off until the portal is completely ready to go.
   * 
   * This reference declaration is waiting for the ModuleServiceLifecycle's PORTAL_INITIALIZED
   * component which will not be available until, surprise surprise, the portal has finished
   * initializing.
   * 
   * With this reference, this component activation waits until portal initialization has completed.
   * @param moduleServiceLifecycle
   */
  @Reference(target = ModuleServiceLifecycle.PORTAL_INITIALIZED, unbind = "-")
  protected void setModuleServiceLifecycle(ModuleServiceLifecycle moduleServiceLifecycle) {
  }

  @Reference(unbind = "-")
  protected void setTriggerFactory(TriggerFactory triggerFactory) {
    _triggerFactory = triggerFactory;
  }

  @Reference(unbind = "-")
  protected void setSchedulerEngineHelper(SchedulerEngineHelper schedulerEngineHelper) {
    _schedulerEngineHelper = schedulerEngineHelper;
  }

  // the default cron expression is to run daily at midnight
  private static final String _DEFAULT_CRON_EXPRESSION = "0 0 0 * * ?";

  private static final Log _log = LogFactoryUtil.getLog(MyTaskMessageListener.class);

  private volatile boolean _initialized;
  private TriggerFactory _triggerFactory;
  private SchedulerEngineHelper _schedulerEngineHelper;
  private SchedulerEntryImpl _schedulerEntryImpl = null;
}
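
Condensed down, the delta from the earlier version is small (member bodies elided):

// before (deprecated): the base class owns the scheduler entry for you.
public class MyTaskMessageListener extends BaseSchedulerEntryMessageListener {
  // works against the inherited, protected schedulerEntryImpl field
}

// after: extend BaseMessageListener and construct/own the entry yourself.
public class MyTaskMessageListener extends BaseMessageListener {
  private SchedulerEntryImpl _schedulerEntryImpl = null; // created in activate()
}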

That's all there is to it, but it's best to avoid the deprecated class since you never know when deprecated will become disappeared...

Comments
I have implemented a very similar approach but in my case, I want to call a service exported from another service-builder bundle (api and service).

In my scheduler bundle, I import the called service (which is exported by the api bundle) in the bnd.bnd file.

In the scheduler code, I try to "inject" the dependency as follows:

@Reference(unbind = "-")
protected void setMyLocalService(MyLocalService myLocalService) {
  _myLocalService = myLocalService;
}

private MyLocalService _myLocalService;

But when I try to use the service in the doReceive method, it is always null.

On the OSGi console, I find the component active but with an unsatisfied reference.

I have tried several ways to access the service (calling MyLocalServiceUtil, using a ServiceTracker, ...) but I always get the same result. Only if I remove the reference in the code does the component start to receive messages, but I need to perform actions using my service.

To check whether I had a problem with injection in general, I tried injecting UserLocalService in the same way and it was injected OK.

Is there any additional security level or action to be performed in the service builder bundles or in the client to access the service from this component?

All the examples I have checked refer to a portlet, but in my case I am defining a simple Component (service=my.class), so I do not know whether the portlet archetype or the scope of portlets enables something that is not defined in my scheduler component.
Silly question, but is your service implementation deployed and started?
Yes, of course, I deploy two bundles (api and service) and both are active and running. I use Liferay IDE to generate a new service builder module plugin for mapping an existing table, but I do not change or extend anything on that module.

After compiling with gradlew, I build the jars and deploy them on Liferay. In fact, using Gogo commands I am able to see that the services are exposed and active, as you can see below:

529|Active | 10|check.cluster.nodes-api (1.0.0)
530|Active | 10|check.cluster.nodes-service (1.0.0)

----------------------------

g! b 529
check.cluster.nodes-api_1.0.0 [529]
Id=529, Status=ACTIVE Data Root=/opt/liferay/osgi/state/org.eclipse.osgi/529/data
"No registered services."
No services in use.
Exported packages
com.test.modules.cluster.exception; version="1.0.0"[exported]
com.test.modules.cluster.model; version="1.0.0"[exported]
com.test.modules.cluster.service; version="1.0.0"[exported]
com.test.modules.cluster.service.persistence; version="1.0.0"[exported]
Imported packages
com.liferay.expando.kernel.model; version="1.0.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.osgi.util; version="2.0.9" <com.liferay.osgi.util_3.0.7 [2]>
com.liferay.portal.kernel.annotation; version="6.3.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.bean; version="7.1.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.dao.orm; version="7.1.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.exception; version="7.0.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.jsonwebservice; version="7.0.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.model; version="1.0.1" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.search; version="7.3.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.security.access.control; version="1.0.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.service; version="1.8.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.service.persistence; version="1.4.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.spring.osgi; version="1.0.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.transaction; version="7.1.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.util; version="7.14.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
org.osgi.util.tracker; version="1.5.1" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
No fragment bundles
No required bundles

----------------------

g! b 530
check.cluster.nodes-service_1.0.0 [530]
Id=530, Status=ACTIVE Data Root=/opt/liferay/osgi/state/org.eclipse.osgi/530/data
"No registered services."
No services in use.
No exported packages
Imported packages
com.test.modules.cluster.exception; version="1.0.0" <check.cluster.nodes-api_1.0.0 [529]>
com.test.modules.cluster.model; version="1.0.0" <check.cluster.nodes-api_1.0.0 [529]>
com.test.modules.cluster.service; version="1.0.0" <check.cluster.nodes-api_1.0.0 [529]>
com.test.modules.cluster.service.persistence; version="1.0.0" <check.cluster.nodes-api_1.0.0 [529]>
com.liferay.counter.kernel.service; version="1.0.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.bean; version="7.1.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.configuration; version="6.2.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.dao.db; version="7.2.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.dao.jdbc; version="7.1.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.dao.orm; version="7.1.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.exception; version="7.0.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.json; version="7.0.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.log; version="7.0.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.model; version="1.0.1" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.model.impl; version="1.0.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.module.framework.service; version="1.0.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.search; version="7.3.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.service; version="1.8.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.service.persistence; version="1.4.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.service.persistence.impl; version="1.2.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.util; version="7.14.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.spring.extender.service; version="1.0.5" <com.liferay.portal.spring.extender_2.0.8 [336]>
javax.sql; version="0.0.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
No fragment bundles
No required bundles

-------------------------------

However, later in the IDE I try to define a new module plugin (I tried both the "service" type and the "mvcportlet" type, changing the @Component definition and the class it extends).

My module only has one class, defined this way:

@Component(
  immediate = true,
  service = TestTableScheduler.class
)
public class TestTableScheduler extends BaseSchedulerEntryMessageListener {
  ...
}

and in the doReceive method I want to call the service com.test.modules.cluster.service.myLocalService, exported in the "api" bundle and implemented in the "service" bundle of my previous service builder module.

I try to inject the service using the @Reference annotation as I told in my previous message.

When I deploy the module in Liferay, the component is in ACTIVE state but with an unsatisfied reference as it is shown below:

531|Active | 10|check.cluster.nodes.scheduler (1.0.0)

-----------------------

g! b 531
check.cluster.nodes.scheduler_1.0.0 [531]
Id=531, Status=ACTIVE Data Root=/opt/liferay/osgi/state/org.eclipse.osgi/531/data
"No registered services."
Services in use:
{org.osgi.service.log.LogService, org.eclipse.equinox.log.ExtendedLogService}={service.id=2, service.bundleid=0, service.scope=bundle}
No exported packages
Imported packages
com.test.modules.cluster.model; version="1.0.0" <check.cluster.nodes-api_1.0.0 [529]>
com.test.modules.cluster.service; version="1.0.0" <check.cluster.nodes-api_1.0.0 [529]>
com.liferay.portal.kernel.cluster; version="7.0.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.log; version="7.0.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.messaging; version="7.0.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.module.framework; version="1.1.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
com.liferay.portal.kernel.scheduler; version="7.1.0" <org.eclipse.osgi_3.10.200.v20150831-0856 [0]>
No fragment bundles
No required bundles

------------------------

g! scr:list 531
BundleId Component Name Default State
Component Id State PIDs (Factory PID)
[ 531] com.test.modules.cluster.TestTableScheduler enabled
[2448] [unsatisfied reference]

-----------------------------

g! scr:info com.test.modules.cluster.TestTableScheduler
*** Bundle: check.cluster.nodes.scheduler (531)
Component Description:
Name: com.test.modules.cluster.TestTableScheduler
Implementation Class: com.test.modules.cluster.TestTableScheduler
Default State: enabled
Activation: immediate
Configuration Policy: optional
Activate Method: activate
Deactivate Method: deactivate
Modified Method: activate
Configuration Pid: [com.test.modules.cluster.TestTableScheduler]
Services:
com.test.modules.cluster.TestTableScheduler
Service Scope: singleton
Reference: myServiceLocalService
Interface Name: com.test.modules.cluster.service.myServiceLocalService
Cardinality: 1..1
Policy: static
Policy option: reluctant
Reference Scope: bundle
Reference: ModuleServiceLifecycle
Interface Name: com.liferay.portal.kernel.module.framework.ModuleServiceLifecycle
Target Filter: (module.service.lifecycle=portal.initialized)
Cardinality: 1..1
Policy: static
Policy option: reluctant
Reference Scope: bundle
Reference: SchedulerEngineHelper
Interface Name: com.liferay.portal.kernel.scheduler.SchedulerEngineHelper
Cardinality: 1..1
Policy: static
Policy option: reluctant
Reference Scope: bundle
Reference: TriggerFactory
Interface Name: com.liferay.portal.kernel.scheduler.TriggerFactory
Cardinality: 1..1
Policy: static
Policy option: reluctant
Reference Scope: bundle
Component Description Properties:
ModuleServiceLifecycle.target = (module.service.lifecycle=portal.initialized)
Component Configuration:
ComponentId: 2448
State: unsatisfied reference
SatisfiedReference: ModuleServiceLifecycle
Target: (module.service.lifecycle=portal.initialized)
(unbound)
SatisfiedReference: SchedulerEngineHelper
Target: null
(unbound)
SatisfiedReference: TriggerFactory
Target: null
(unbound)
UnsatisfiedReference: myServiceLocalService
Target: null
(no target services)
Component Configuration Properties:
ModuleServiceLifecycle.target = (module.service.lifecycle=portal.initialized)
component.id = 2448
component.name = com.test.modules.cluster.TestTableScheduler
Hello David,

Any clue or idea? Have you changed something on the services to make them available to other components in other bundles, as Liferay does with services like UserLocalService?
If your server is up and you redeploy the bundle w/ the scheduler component, is it non-null then?

Also are you using the right @Component and @Reference annotations? I know that Java 8 has its own @Component annotation and I think Spring might have an @Reference, and if you're using those that will cause problems.
Regarding the annotations, I am using the right ones:

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

On the other hand, if I try to redeploy the bundle, the result is the same (unsatisfied reference).

The only doubt I have is that I receive an exception saying the table already exists, but I guess this shouldn't be a problem at all, right?

Regarding the service exposed from a service builder plugin, is it accessible by default to any other bundle deployed in the OSGi container? Is there any security constraint on the services layer?
Hi, Sergio,
Can your custom service class be loaded by the same class loader used by your cron job class?
[...] Alex Man: David H Nebinger: These kinds of scheduled jobs will never be persisted; they are memory resident and only apply during runtime. I did'nt get this, can you tell me why it is not... [...] Read More
Hi David.
Nice article. Unfortunately, the Liferay documentation does not cover this topic. Just yesterday I published an article on my blog explaining how to take advantage of the Gogo Shell for job administration. https://www.dontesta.it/en/2017/07/16/liferay-7-scheduler-manager-gogo-shell-command/
I get a 404 trying to access your blog, Antonio.
Sorry David. It should be corrected, I did a copy & paste. https://www.dontesta.it/en/2017/07/16/liferay-7-scheduler-manager-gogo-shell-command/
Yeah, it still happens, but I don't think it's the URL. I use Opera for the great built-in adblocking support, but I think Opera is not handling the nav to the URL correctly. Not your fault, probably mine.
Hi, this code works fine but when you're in a clustered environment the job fails to register and therefore doesn't run.
Do you have any idea as to why this may happen?

Thanks!
Which code? Are there any errors or relevant information in the logs?
Hi David, thanks for your quick reply - I've tried posting on the Liferay forum (https://web.liferay.com/community/forums/-/message_boards/message/92274105) but my post has been blocked as spam!

We've implemented a scheduler that looks like your 'public class MyTaskMessageListener extends BaseMessageListener' (so using the latest version instead of the deprecated class).

When only one node is up, the job registers fine, but when both nodes are up, the job doesn't get triggered, and if we check the details of the active jobs (SchedulerEngineHelperUtil.getScheduledJobs()), our job looks like it's missing the trigger details.
There are no errors in the logs, so we're not sure why it's only working in a non-clustered environment.

---These are our job details
message:{destinationName=null, response=null, responseDestinationName=null, responseId=null, payload=null, values={GROUP_NAME=com.test.messaging.DailyJobMessageListener, EXCEPTIONS_MAX_SIZE=0, JOB_STATE=com.liferay.portal.kernel.scheduler.JobState@4cc7956d, JOB_NAME=com.test.messaging.DailyJobMessageListener}}
storageType:MEMORY_CLUSTERED

--These are the details for any other liferay job

message:{destinationName=null, response=null, responseDestinationName=null, responseId=null, payload=null, values={GROUP_NAME=com.liferay.asset.publisher.web.internal.messaging.CheckAssetEntryMessageListener, START_TIME=Wed Jul 26 20:14:13 GMT 2017, NEXT_FIRE_TIME=Wed Jul 26 20:14:13 GMT 2017, EXCEPTIONS_MAX_SIZE=0, JOB_STATE=com.liferay.portal.kernel.scheduler.JobState@78b90a1a, JOB_NAME=com.liferay.asset.publisher.web.internal.messaging.CheckAssetEntryMessageListener}}
storageType:MEMORY_CLUSTERED
Trigger StartDate:Wed Jul 26 20:14:13 GMT 2017
Trigger EndDate:null
From personal experience, I can confirm that the scheduler works in a cluster environment, even with StorageType PERSISTED. In your case it seems that your jobs have the default StorageType.
I recommend you look at one of the Liferay message listeners, for example CheckAssetEntryMessageListener.java:
https://github.com/liferay/liferay-portal/blob/f643e90452469d0c2367e68fcbf6bd52035cec50/modules/apps/web-experience/asset/asset-publisher-web/src/main/java/com/liferay/asset/publisher/web/internal/messaging/CheckAssetEntryMessageListener.java
Yes thanks, unless I'm missing something it looks like the class I've posted on here https://web.liferay.com/community/forums/-/message_boards/message/92274105 except for the fact that we use a cron expression.

Sorry David for polluting your blog, feel free to delete my comments since the liferay post has now been published
Hi David,
I have used the same code as you mentioned above. Everything is working fine, but I want to store the scheduler in the database, so I used the PERSISTED storage type. However, there are no entries in the database for the scheduler, and because of that it is not started automatically when the server is restarted.
These are the changes I made in your code:

/**
 * getStorageType: Utility method to get the storage type from the scheduler entry wrapper.
 * @return StorageType The storage type to use.
 */
protected StorageType getStorageType() {
  if (_schedulerEntryImpl instanceof StorageTypeAware) {
    return ((StorageTypeAware) _schedulerEntryImpl).getStorageType();
  }

  return StorageType.PERSISTED;
}

Please let me know If I am missing something.
[...] I have developed a simple scheduler following this blog: https://web.liferay.com/web/user.26526/blog/-/blogs/liferay-7-ce-liferay-dxp-scheduled-tasks The scheduler works fine on a single node. But... [...] Read More
Hi David,

nice how-to, it was really easy to get this working on Liferay 7.0.4 CE. There is only one small error: in the deactivate method you need to not only unschedule the job but also delete it. Otherwise it will stay in the scheduler and won't be rescheduled on bundle redeployment. This can be fixed easily by adding

schedulerEngineHelper.delete(schedulerEntry, getStorageType());

into the method body in a similar way to unschedule.

Thanks again, great job.
So if you're in a cluster, you'll want to stick with either MEMORY_CLUSTERED or PERSISTED to ensure your job doesn't run on every node (i.e. you're running a report to generate a PDF and email, you wouldn't want your 4 node cluster doing the report 4 times and emailing 4 copies). You may want to stick with the MEMORY type when you have, say, an administrative task that needs to run regularly on all nodes in your cluster

Question: In a clustered environment having 4 nodes, say node-A, node-B, node-C, and node-D, I need to run a scheduler always on node-A only. What configuration do I need to do?
You don't set it up like that. Liferay runs the scheduler on all of the nodes, but there is an election process to see who the master will be. The master runs the cluster jobs.

In case the master goes down (crashes, taken out of the cluster, stops responding to requests), the remaining nodes will elect a new master and the new master will process the jobs that are still waiting.

If you somehow got the scheduler to only run on one node, then if that node failed you wouldn't be running jobs at all anymore.

The deprecated method works fine. When trying the new method I am getting an error at:

_schedulerEntryImpl = new SchedulerEntryImpl(getClass().getName(), jobTrigger);

The constructor SchedulerEntryImpl(String, Trigger) is undefined.

Thank you very much for the helpful blog post, David!

I have looked for information related to schedulers in Liferay 7.x in the documentation and found nothing yet.

But, by chance, I found this Jira ticket (https://issues.liferay.com/browse/LPS-89033), which is perhaps useful for updating the "One More Thing..." section of the blog post. It seems that the mentioned scenario is now prevented.

By the way, as no official documentation was found (although this blog post fills the gap, in my opinion), we took a look at the scheduled jobs in the Liferay source code. For instance, the "com.liferay.journal.web.internal.messaging.CheckArticleMessageListener" class.

I wonder why it is so different from the one described in this blog post. I mean that Liferay's code looks straightforward. Are they not cluster-aware? (It sounds a bit strange to me, but I could open a support ticket if so.) Or is there a leaner way to write scheduled jobs nowadays without requiring the "_initialized" check?

Kind regards and thank you very much.

Liferay does not really build their jobs to support ongoing development, redeployment, those kinds of things...

The Liferay jobs are core system jobs that will start when the portal starts and stay running throughout, and not get stopped at some point for a redeployment of a newer or updated version.

This example tries not to make such an assumption, so the _initialized thing allows for a cleaner process to deal with possible redeployments.

Oh, sorry, I read your reply after sending the link to the other blog post.

Thank you very much for the additional information.

In order to make a decision based upon data, I think we will assess both approaches.

Hi David, I have implemented the scheduler the way you suggested and it is working fine, but I am facing an issue: whenever I restart the application server, the job is no longer in the Quartz tables. My storage type is PERSISTED. The job details are there in the database tables after triggering a job, but when I stop the server, the job details are gone and the Quartz tables are empty. I also removed the deactivate method, but still, after the server stops, the job is not in the database. Can you please suggest anything on how I can keep the job in the database after a server restart?