<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
  <link rel="self" href="https://liferay.dev/c/message_boards/find_thread?p_l_id=119785294&amp;threadId=113072656" />
  <subtitle>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</subtitle>
  <id>https://liferay.dev/c/message_boards/find_thread?p_l_id=119785294&amp;threadId=113072656</id>
  <updated>2026-04-04T03:13:26Z</updated>
  <dc:date>2026-04-04T03:13:26Z</dc:date>
  <entry>
    <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
    <link rel="alternate" href="https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113569494" />
    <author>
      <name>Alex Camaroti</name>
    </author>
    <id>https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113569494</id>
    <updated>2019-05-08T13:20:57Z</updated>
    <published>2019-05-08T13:20:57Z</published>
    <summary type="html">Hello guys,&lt;br /&gt;&lt;br /&gt;  I noticed that something good happened.&lt;br /&gt;  After a few restarts of each production server (I don&amp;#39;t know which one was restarted first), it came back to the original state, starting the jobs via the main URL.&lt;br /&gt;  If I try to deploy the job scheduler again, deleting the existing one, the problem comes back, but for now I don&amp;#39;t need to worry about it anymore.&lt;br /&gt;  Thanks a million for all your help.&lt;br /&gt;  God bless Liferay&amp;#39;s heroes hehehe&amp;#39;</summary>
    <dc:creator>Alex Camaroti</dc:creator>
    <dc:date>2019-05-08T13:20:57Z</dc:date>
  </entry>
  <entry>
    <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
    <link rel="alternate" href="https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113170430" />
    <author>
      <name>Andrew Jardine</name>
    </author>
    <id>https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113170430</id>
    <updated>2019-04-11T20:03:10Z</updated>
    <published>2019-04-11T20:03:10Z</published>
    <summary type="html">I don&amp;#39;t see any issue with your cluster config, assuming the tcp.xml is configured correctly. The easiest way (at least the one that was taught to me, and that I still use) to make sure your cluster (replication) is working is to bring up the same page on both servers in two different browsers. Add a new portlet to the page in the first browser and then simply refresh the page in the second. If your cluster is working, the change (to the page) will be replicated and you will see the same result on both servers. &lt;br /&gt;&lt;br /&gt;NOTE, though, that since your scheduled tasks don&amp;#39;t actually do anything with clustering, clustering is not really required for this. All you need is more than one Liferay server, all pointing to the same database. </summary>
    <dc:creator>Andrew Jardine</dc:creator>
    <dc:date>2019-04-11T20:03:10Z</dc:date>
  </entry>
  <entry>
    <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
    <link rel="alternate" href="https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113169763" />
    <author>
      <name>Alex Camaroti</name>
    </author>
    <id>https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113169763</id>
    <updated>2019-04-11T19:46:20Z</updated>
    <published>2019-04-11T19:46:20Z</published>
    <summary type="html">&lt;blockquote&gt;Andrew JardineHi Alex, &lt;br /&gt;&lt;br /&gt;I don&amp;#39;t think your test is valid. Let me try to explain once more --&lt;br /&gt;&lt;br /&gt;1. Your servers are stopped.&lt;br /&gt;2. Your code is NOT deployed on either Prod1 or Prod2&lt;br /&gt;3. You start Prod1, and then you start Prod2&lt;br /&gt;4. The first (Liferay) scheduled task fires on Prod1 -- Prod1 gets the Lock for scheduled tasks, and runs the task.&lt;br /&gt;5. A second (Liferay) scheduled task fires on Prod1 -- Prod1 has the Lock, so the task runs. &lt;br /&gt;6. The first (Liferay) scheduled task fires on Prod2 -- Prod2 tries but FAILS to get the Lock, so the task can&amp;#39;t run.&lt;br /&gt;&lt;br /&gt;.. this continues this way where all scheduled tasks (Liferay) will run only on Prod1.&lt;br /&gt;&lt;br /&gt;7. You deploy your code on both nodes.&lt;br /&gt;8. Your scheduled task runs on Prod1 -- the task executes because Prod1 has the Lock.&lt;br /&gt;9. You scheduled task runs on Prod2 -- the task FAILS (as in it simply won&amp;#39;t start in the first place) because Prod2 doesn&amp;#39;t have the Lock&lt;br /&gt;&lt;br /&gt;... so even with your services deployed, they will still only run on the node with the lock. This is nothing to do with load balancing, there is no request routing here as these threads spin up and run independent of a proxied request -- which is kind of the point of a scheduled task &lt;img alt="emoticon" src="@theme_images_path@/emoticons/happy.gif" &gt;&lt;br /&gt;&lt;br /&gt;10. You UNDEPLOY your scheduled task from Prod1.&lt;br /&gt;11. Eventually, your scheduled task tried to run on Prod2 -- but again, Prod2 still doesn&amp;#39;t have the lock do you can&amp;#39;t run the task. So it just won&amp;#39;t run on any node now&lt;br /&gt;&lt;br /&gt;.. remember that your task is probably not the only scheduled task in the system so your task probably doesn&amp;#39;t control WHEN and WHICH server get the lock. 
If you set your task to run at 7pm, but start your server earlier, then almost surely one of the Liferay tasks will fire first (like the JournalCheckInterval) and the server it runs on will obtain the lock. &lt;br /&gt;&lt;br /&gt;12. You put your code back on BOTH servers - so that your cluster deployments are properly sync&amp;#39;ed now.&lt;br /&gt;&lt;br /&gt;.. now HERE is the proper test to see if there is a problem.&lt;br /&gt;&lt;br /&gt;13. Assuming that Prod1 has the Lock still, SHUT THE SERVER DOWN.&lt;br /&gt;&lt;br /&gt;.. this will cause the server node to be removed from the cluster leaving just Prod2 in the pool. Now when the next scheduled task fires, Prod2 will be able to obtain the Lock and will now be the node running the tasks. &lt;br /&gt;&lt;br /&gt;14. Start Prod1 back up&lt;br /&gt;&lt;br /&gt;... and at this point you will be in the inverse scenario. Prod1, when it tries to run tasks, will no longer have or be able to obtain the Lock so the scheduled tasks will never run on prod1 and instead run on prod2.&lt;br /&gt;&lt;br /&gt;Now, you might say &amp;#34;well that&amp;#39;s no good! I want to blance the task execution amongst all my nodes!&amp;#34; -- and that&amp;#39;s a fair point. But it&amp;#39;s the limitation. The advantage though is the automatic failover to the other node when one goes down. The failover at least maintains business continuity. It does mean however a couple of things --&lt;br /&gt;&lt;br /&gt;1. Only ONE node in your cluster will ever run the scheduled tasks (all of them) until that node goes down&lt;br /&gt;2. You cannot (without a lot of effort) designate the node that will run scheduled tasks&lt;br /&gt;3. You should make sure that your scheduled task has what it needs to run on ANY node in your cluster because of #2 and to support the failover scenario.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;​​​​​​​Make sense?&lt;/blockquote&gt;&lt;br /&gt;Totally. I&amp;#39;m gonna do this tomorrow. 
&lt;br /&gt;About the cluster configuration that I posted yesterday, in your point of view is that ok or do I need something more?&lt;br /&gt;Thanks a million.</summary>
    <dc:creator>Alex Camaroti</dc:creator>
    <dc:date>2019-04-11T19:46:20Z</dc:date>
  </entry>
  <entry>
    <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
    <link rel="alternate" href="https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113146370" />
    <author>
      <name>Andrew Jardine</name>
    </author>
    <id>https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113146370</id>
    <updated>2019-04-11T13:36:34Z</updated>
    <published>2019-04-11T13:36:34Z</published>
    <summary type="html">Hi Alex, &lt;br /&gt;&lt;br /&gt;I don&amp;#39;t think your test is valid. Let me try to explain once more --&lt;br /&gt;&lt;br /&gt;1. Your servers are stopped.&lt;br /&gt;2. Your code is NOT deployed on either Prod1 or Prod2.&lt;br /&gt;3. You start Prod1, and then you start Prod2.&lt;br /&gt;4. The first (Liferay) scheduled task fires on Prod1 -- Prod1 gets the Lock for scheduled tasks, and runs the task.&lt;br /&gt;5. A second (Liferay) scheduled task fires on Prod1 -- Prod1 has the Lock, so the task runs. &lt;br /&gt;6. The first (Liferay) scheduled task fires on Prod2 -- Prod2 tries but FAILS to get the Lock, so the task can&amp;#39;t run.&lt;br /&gt;&lt;br /&gt;.. this continues this way, where all (Liferay) scheduled tasks will run only on Prod1.&lt;br /&gt;&lt;br /&gt;7. You deploy your code on both nodes.&lt;br /&gt;8. Your scheduled task runs on Prod1 -- the task executes because Prod1 has the Lock.&lt;br /&gt;9. Your scheduled task fires on Prod2 -- the task FAILS (as in it simply won&amp;#39;t start in the first place) because Prod2 doesn&amp;#39;t have the Lock.&lt;br /&gt;&lt;br /&gt;... so even with your services deployed, they will still only run on the node with the Lock. This has nothing to do with load balancing; there is no request routing here, as these threads spin up and run independent of a proxied request -- which is kind of the point of a scheduled task &lt;img alt="emoticon" src="@theme_images_path@/emoticons/happy.gif" &gt;&lt;br /&gt;&lt;br /&gt;10. You UNDEPLOY your scheduled task from Prod1.&lt;br /&gt;11. Eventually, your scheduled task tries to run on Prod2 -- but again, Prod2 still doesn&amp;#39;t have the Lock, so it can&amp;#39;t run the task. So it just won&amp;#39;t run on any node now.&lt;br /&gt;&lt;br /&gt;.. remember that your task is probably not the only scheduled task in the system, so your task probably doesn&amp;#39;t control WHEN and WHICH server gets the Lock. 
If you set your task to run at 7pm, but start your server earlier, then almost surely one of the Liferay tasks will fire first (like the JournalCheckInterval) and the server it runs on will obtain the Lock. &lt;br /&gt;&lt;br /&gt;12. You put your code back on BOTH servers -- so that your cluster deployments are properly synced now.&lt;br /&gt;&lt;br /&gt;.. now HERE is the proper test to see if there is a problem.&lt;br /&gt;&lt;br /&gt;13. Assuming that Prod1 still has the Lock, SHUT THAT SERVER DOWN.&lt;br /&gt;&lt;br /&gt;.. this will cause the server node to be removed from the cluster, leaving just Prod2 in the pool. Now when the next scheduled task fires, Prod2 will be able to obtain the Lock and will from then on be the node running the tasks. &lt;br /&gt;&lt;br /&gt;14. Start Prod1 back up.&lt;br /&gt;&lt;br /&gt;... and at this point you will be in the inverse scenario. Prod1, when it tries to run tasks, will no longer have or be able to obtain the Lock, so the scheduled tasks will never run on Prod1 and will instead run on Prod2.&lt;br /&gt;&lt;br /&gt;Now, you might say &amp;#34;well that&amp;#39;s no good! I want to balance the task execution amongst all my nodes!&amp;#34; -- and that&amp;#39;s a fair point. But it&amp;#39;s the limitation. The advantage, though, is the automatic failover to the other node when one goes down. The failover at least maintains business continuity. It does mean, however, a couple of things --&lt;br /&gt;&lt;br /&gt;1. Only ONE node in your cluster will ever run the scheduled tasks (all of them) until that node goes down.&lt;br /&gt;2. You cannot (without a lot of effort) designate the node that will run scheduled tasks.&lt;br /&gt;3. You should make sure that your scheduled task has what it needs to run on ANY node in your cluster, because of #2 and to support the failover scenario.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;Make sense?</summary>
    <dc:creator>Andrew Jardine</dc:creator>
    <dc:date>2019-04-11T13:36:34Z</dc:date>
  </entry>
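Andrew's 14-step walkthrough above boils down to a compare-and-set on a shared lock record: the first node to fire any task claims it, and it only changes hands when the holder leaves the cluster. The following is a plain-Java sketch of that behavior; the `AtomicReference` merely stands in for the shared lock row in Liferay's database, and none of this is Liferay API.

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative only: models the scheduled-task Lock described in the post.
public class SchedulerLockSketch {

    // Stand-in for the shared lock record for scheduled tasks.
    static final AtomicReference<String> LOCK = new AtomicReference<>(null);

    // A node tries to acquire (or confirm it already holds) the lock
    // before running a task.
    static boolean tryRunTask(String nodeId) {
        // First node to fire any task wins the race (steps 4-6).
        return LOCK.compareAndSet(null, nodeId) || nodeId.equals(LOCK.get());
    }

    // Shutting the holder down frees the lock, enabling failover (step 13).
    static void shutdown(String nodeId) {
        LOCK.compareAndSet(nodeId, null);
    }

    public static void main(String[] args) {
        System.out.println(tryRunTask("Prod1")); // true  -- Prod1 fires first and wins
        System.out.println(tryRunTask("Prod2")); // false -- Prod2 fails to get the lock
        System.out.println(tryRunTask("Prod1")); // true  -- deployed on both, still only Prod1 runs
        shutdown("Prod1");                       // step 13: Prod1 shuts down
        System.out.println(tryRunTask("Prod2")); // true  -- Prod2 takes over
        System.out.println(tryRunTask("Prod1")); // false -- step 14: restarted Prod1 can't reclaim it
    }
}
```

The inverse scenario after step 14 falls out of the same two operations: nothing about the lock prefers a "primary" node.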
  <entry>
    <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
    <link rel="alternate" href="https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113126469" />
    <author>
      <name>Alex Camaroti</name>
    </author>
    <id>https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113126469</id>
    <updated>2019-04-10T19:34:48Z</updated>
    <published>2019-04-10T19:34:48Z</published>
    <summary type="html">&lt;html&gt;&lt;head&gt;&lt;/head&gt;&lt;body&gt;I'm gonna test everything in the end of the day.&lt;br&gt;Just fixing what I've said before: The job works fine on the main node (&lt;strong&gt;prd1&lt;/strong&gt;). The second node (&lt;strong&gt;prd2&lt;/strong&gt;) is the real problem.&lt;br&gt;The last test, I deleted the jobs of my osgi/modules from &lt;strong&gt;prd1&amp;nbsp;&lt;/strong&gt;and I kept them on &lt;strong&gt;prd2&lt;/strong&gt;.&lt;br&gt;Result: Doens't work.&lt;br&gt;So, I did the opposite: delete from &lt;strong&gt;prd2&amp;nbsp;&lt;/strong&gt;and kept them on &lt;strong&gt;prd1&lt;/strong&gt;.&lt;br&gt;Result: Worked fine using a specific ip server url.&lt;br&gt;&lt;br&gt;But talking about cluster configuration, I have this on my portal-ext.properties:&lt;br&gt;This config is on both servers.&lt;br&gt;&lt;pre&gt;&lt;code&gt;##
## Cluster Link
##

&amp;amp;nbsp; &amp;amp;nbsp; #
&amp;amp;nbsp; &amp;amp;nbsp; # Set the cluster node bootup response timeout in milliseconds.
&amp;amp;nbsp; &amp;amp;nbsp; #
&amp;amp;nbsp; &amp;amp;nbsp; cluster.link.node.bootup.response.timeout=10000

&amp;amp;nbsp; &amp;amp;nbsp; #
&amp;amp;nbsp; &amp;amp;nbsp; # Set this to true to enable the cluster link. This is required if you want
&amp;amp;nbsp; &amp;amp;nbsp; # to cluster indexing and other features that depend on the cluster link.
&amp;amp;nbsp; &amp;amp;nbsp; #
&amp;amp;nbsp; &amp;amp;nbsp; cluster.link.enabled=true
&amp;amp;nbsp;&amp;amp;nbsp; &amp;amp;nbsp;
&amp;amp;nbsp; &amp;amp;nbsp; #
&amp;amp;nbsp; &amp;amp;nbsp; # Set the JGroups properties for each channel, we support up to 10 transport
&amp;amp;nbsp; &amp;amp;nbsp; # channels and 1 single required control channel. Use as few transport
&amp;amp;nbsp; &amp;amp;nbsp; # channels as possible for best performance. By default, only one UDP
&amp;amp;nbsp; &amp;amp;nbsp; # control channel and one UDP transport channel are enabled. Channels can be
&amp;amp;nbsp; &amp;amp;nbsp; # configured by XML files that are located in the class path or by inline
&amp;amp;nbsp; &amp;amp;nbsp; # properties.
&amp;amp;nbsp; &amp;amp;nbsp; #
&amp;amp;nbsp; &amp;amp;nbsp; cluster.link.channel.properties.control=tcp.xml
&amp;amp;nbsp; &amp;amp;nbsp; cluster.link.channel.properties.transport.0=tcp.xml
&amp;amp;nbsp; &amp;amp;nbsp;&amp;amp;nbsp;
&amp;amp;nbsp; &amp;amp;nbsp; # Set this property to autodetect the default outgoing IP address so that
&amp;amp;nbsp; &amp;amp;nbsp; # JGroups can bind to it. The property must point to an address that is
&amp;amp;nbsp; &amp;amp;nbsp; # accessible to the portal server, www.google.com, or your local gateway.
&amp;amp;nbsp; &amp;amp;nbsp; #cluster.link.autodetect.address=www.google.com:80

&lt;/code&gt;&lt;/pre&gt;&lt;br&gt;&lt;br&gt;Is this config ok or do I need to implement something more?&lt;/body&gt;&lt;/html&gt;</summary>
    <dc:creator>Alex Camaroti</dc:creator>
    <dc:date>2019-04-10T19:34:48Z</dc:date>
  </entry>
  <entry>
    <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
    <link rel="alternate" href="https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113110949" />
    <author>
      <name>Christoph Rabel</name>
    </author>
    <id>https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113110949</id>
    <updated>2019-04-10T07:37:38Z</updated>
    <published>2019-04-10T07:37:38Z</published>
    <summary type="html">&lt;html&gt;&lt;head&gt;&lt;/head&gt;&lt;body&gt;&lt;blockquote&gt;Alex Camaroti&lt;br&gt;I referenced the primary node just because the job was deployed on both nodes, and it doenst work in one of them. before restart it was working fine.&lt;/blockquote&gt;&lt;br&gt;&lt;br&gt;Here lies your actual problem! Except for some really special scenarios, you should not care, on which node the job is executed. It simply should not matter. If one machine is down, the other one should run the job.&lt;br&gt;&lt;br&gt;&lt;blockquote&gt;Alex Camaroti&lt;br&gt;&lt;pre&gt;&lt;code&gt;​​​​​​​The closest you could get to making sure Node 1 and not Node 2 runs the task would be to restart your cluster, 
but only bring up one node for a time until you are sure that at least one task has been run and that Node 1 (your primary node) ran it. 
​​​​​​​Then you could start Node 2.&lt;/code&gt;&lt;/pre&gt;I think that I'm going to try this one first.&lt;/blockquote&gt;Please note that this is highly unreliable. One restart in the wrong order -&amp;gt; Problem.&lt;br&gt;&lt;br&gt;May I suggest a few other options:&lt;br&gt;-) Create an extra service just for the job and deploy it only on one server&lt;br&gt;-) Add a portal-ext property "enable_my_service". Set it true on server one, false on server two and just don't run the job on server two.&lt;/body&gt;&lt;/html&gt;</summary>
    <dc:creator>Christoph Rabel</dc:creator>
    <dc:date>2019-04-10T07:37:38Z</dc:date>
  </entry>
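Christoph's second option (gating the job behind an `enable_my_service` flag) can be sketched in plain Java. Here a `Properties` object stands in for the portal-ext.properties lookup so the sketch is self-contained; inside Liferay you would read the flag through the portal's own properties utilities instead.

```java
import java.util.Properties;

// Illustrative sketch: run the job body only where the flag is set.
public class ToggledJob {

    private final Properties props;

    ToggledJob(Properties props) {
        this.props = props;
    }

    // Returns true if the job body actually ran on this node.
    boolean run() {
        // enable_my_service=true on server one, false (or absent) on server two.
        if (!Boolean.parseBoolean(props.getProperty("enable_my_service", "false"))) {
            return false; // the scheduler may still fire here, but the body is skipped
        }
        // ... real job logic would go here ...
        return true;
    }

    public static void main(String[] args) {
        Properties serverOne = new Properties();
        serverOne.setProperty("enable_my_service", "true");
        Properties serverTwo = new Properties(); // flag absent -> treated as false

        System.out.println(new ToggledJob(serverOne).run()); // true
        System.out.println(new ToggledJob(serverTwo).run()); // false
    }
}
```

The trade-off, as noted elsewhere in the thread, is that this pins the job to one server and gives up the automatic failover that the lock mechanism provides.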
  <entry>
    <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
    <link rel="alternate" href="https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113110632" />
    <author>
      <name>Christoph Rabel</name>
    </author>
    <id>https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113110632</id>
    <updated>2019-04-10T07:29:55Z</updated>
    <published>2019-04-10T07:29:55Z</published>
    <summary type="html">&lt;blockquote&gt;Alex Camaroti &lt;br /&gt;I also made it. It helps a lot but nobody wants to wake up at midnight everyday to start the job manually hehe&amp;#39;&lt;br /&gt;Also, something strange happened. I made it for two jobs, one of them works fine creating the content requested, but the another one just write the log information but doens&amp;#39;t create new contents. I decided just to focus on the main problem, If I solve the job scheduler not running on the main node, the rest service triggering the job is not necessary anymore.&lt;br /&gt;&lt;/blockquote&gt;&lt;br /&gt;I am not sure if your main problem is actually the job scheduler. You have the service deployed on both nodes, but it behaves correctly only on one of them? I&amp;#39;d say, there is something fishy going on.</summary>
    <dc:creator>Christoph Rabel</dc:creator>
    <dc:date>2019-04-10T07:29:55Z</dc:date>
  </entry>
  <entry>
    <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
    <link rel="alternate" href="https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113105560" />
    <author>
      <name>Alex Camaroti</name>
    </author>
    <id>https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113105560</id>
    <updated>2019-04-09T20:06:42Z</updated>
    <published>2019-04-09T20:06:42Z</published>
    <summary type="html">&lt;html&gt;&lt;head&gt;&lt;/head&gt;&lt;body&gt;&lt;blockquote&gt;Andrew JardineHey Alex,&lt;br&gt;&lt;br&gt;You keep referring to the job running on the main node -- I just want to make sure that you understand that you can't control what node the job will run on. It's basically a race. Whichever node in your cluster runs a task first will get the lock. And to be clear, it doens't have to be YOUR task that runs first. Liferay has several scheduled tasks that run as well -- for example the JournalCheckInterval that runs (by default) every 15 minutes to see if there is content that should be published/unpublished.&amp;nbsp;&lt;br&gt;&lt;br&gt;The closest you could get to making sure Node 1 and not Node 2 runs the task would be to restart your cluster, but only bring up one node for a time until you are sure that at least one task has been run and that Node 1 (your primary node) ran it. Then you could start Node 2.&lt;br&gt;&lt;br&gt;But I would say that the design of this solution (from Liferay) is such that it expects the job to be able to run on any node -- hence the fail over we talked about yesterday.&amp;nbsp;&lt;br&gt;&lt;br&gt;You keep referencing the Primary Node. Does it HAVE to run on the primary node? and if yes, why?&lt;/blockquote&gt;&lt;br&gt;Hmm, I learned the idea of load-balance some days ago and I think that this is what you are talking about.&lt;br&gt;I referenced the primary node just because the job was deployed on both nodes, and it doenst work in one of them. before restart it was working fine.&lt;br&gt;&lt;pre&gt;&lt;code&gt;​​​​​​​The closest you could get to making sure Node 1 and not Node 2 runs the task would be to restart your cluster, 
but only bring up one node for a time until you are sure that at least one task has been run and that Node 1 (your primary node) ran it. 
​​​​​​​Then you could start Node 2.&lt;/code&gt;&lt;/pre&gt;I think that I'm going to try this one first.&lt;/body&gt;&lt;/html&gt;</summary>
    <dc:creator>Alex Camaroti</dc:creator>
    <dc:date>2019-04-09T20:06:42Z</dc:date>
  </entry>
  <entry>
    <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
    <link rel="alternate" href="https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113105147" />
    <author>
      <name>Andrew Jardine</name>
    </author>
    <id>https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113105147</id>
    <updated>2019-04-09T19:11:43Z</updated>
    <published>2019-04-09T19:11:43Z</published>
    <summary type="html">Hey Alex,&lt;br /&gt;&lt;br /&gt;You keep referring to the job running on the main node -- I just want to make sure that you understand that you can&amp;#39;t control which node the job will run on. It&amp;#39;s basically a race. Whichever node in your cluster runs a task first will get the lock. And to be clear, it doesn&amp;#39;t have to be YOUR task that runs first. Liferay has several scheduled tasks that run as well -- for example the JournalCheckInterval, which runs (by default) every 15 minutes to see if there is content that should be published/unpublished. &lt;br /&gt;&lt;br /&gt;The closest you could get to making sure Node 1 and not Node 2 runs the task would be to restart your cluster, but only bring up one node for a time, until you are sure that at least one task has been run and that Node 1 (your primary node) ran it. Then you could start Node 2.&lt;br /&gt;&lt;br /&gt;But I would say that the design of this solution (from Liferay) is such that it expects the job to be able to run on any node -- hence the failover we talked about yesterday. &lt;br /&gt;&lt;br /&gt;You keep referencing the Primary Node. Does it HAVE to run on the primary node? And if yes, why?</summary>
    <dc:creator>Andrew Jardine</dc:creator>
    <dc:date>2019-04-09T19:11:43Z</dc:date>
  </entry>
  <entry>
    <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
    <link rel="alternate" href="https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113104364" />
    <author>
      <name>Alex Camaroti</name>
    </author>
    <id>https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113104364</id>
    <updated>2019-04-09T19:01:22Z</updated>
    <published>2019-04-09T19:01:22Z</published>
    <summary type="html">&lt;blockquote&gt;Christoph RabelNot sure, if it helps in any way, but maybe you can use the idea somehow:&lt;br /&gt;I had a related problem, that I needed to control from the outside, when a job was run. (Usually scheduled, but sometimes it needed to be executed immediately).&lt;br /&gt;&lt;br /&gt;I created a rest service, that triggers the execution of the job. A cronjob starts the job every night but it can also be manually triggered and triggered by external services.&lt;/blockquote&gt;I also made it. It helps a lot but nobody wants to wake up at midnight everyday to start the job manually hehe&amp;#39;&lt;br /&gt;Also, something strange happened. I made it for two jobs, one of them works fine creating the content requested, but the another one just write the log information but doens&amp;#39;t create new contents. I decided just to focus on the main problem, If I solve the job scheduler not running on the main node, the rest service triggering the job is not necessary anymore.</summary>
    <dc:creator>Alex Camaroti</dc:creator>
    <dc:date>2019-04-09T19:01:22Z</dc:date>
  </entry>
  <entry>
    <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
    <link rel="alternate" href="https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113103408" />
    <author>
      <name>Andrew Jardine</name>
    </author>
    <id>https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113103408</id>
    <updated>2019-04-09T18:50:57Z</updated>
    <published>2019-04-09T18:50:57Z</published>
    <summary type="html">Hahaha -- hey, sometimes the best solution to the problem is to appease the Gods, right? &lt;img alt="emoticon" src="@theme_images_path@/emoticons/happy.gif" &gt;</summary>
    <dc:creator>Andrew Jardine</dc:creator>
    <dc:date>2019-04-09T18:50:57Z</dc:date>
  </entry>
  <entry>
    <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
    <link rel="alternate" href="https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113101999" />
    <author>
      <name>Christoph Rabel</name>
    </author>
    <id>https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113101999</id>
    <updated>2019-04-09T17:31:01Z</updated>
    <published>2019-04-09T17:31:01Z</published>
    <summary type="html">Not sure if it helps in any way, but maybe you can use the idea somehow:&lt;br /&gt;I had a related problem where I needed to control from the outside when a job was run. (Usually scheduled, but sometimes it needed to be executed immediately.)&lt;br /&gt;&lt;br /&gt;I created a REST service that triggers the execution of the job. A cronjob starts the job every night, but it can also be triggered manually or by external services.</summary>
    <dc:creator>Christoph Rabel</dc:creator>
    <dc:date>2019-04-09T17:31:01Z</dc:date>
  </entry>
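Christoph's REST-trigger idea can be sketched standalone with the JDK's built-in `HttpServer`. In a real Liferay module this would be a proper REST endpoint deployed in the portal instead; the port and path below are arbitrary choices for the sketch.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: expose the job behind an HTTP endpoint so cron,
// an operator, or another system can trigger it on demand.
public class JobTriggerEndpoint {

    static final AtomicInteger runs = new AtomicInteger();

    // The job body; in the real module this would do the actual work.
    static void runJob() {
        runs.incrementAndGet();
    }

    static HttpServer startServer(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/trigger", exchange -> {
            runJob();
            byte[] body = ("job runs so far: " + runs.get()).getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        startServer(8090);
        // Nightly crontab entry (hostname/port are placeholders):
        //   0 0 * * *  curl -s http://prd1:8090/trigger
    }
}
```

This keeps the nightly run automated via cron while still allowing an immediate manual trigger, which is exactly the flexibility Christoph describes.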
  <entry>
    <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
    <link rel="alternate" href="https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113101344" />
    <author>
      <name>Alex Camaroti</name>
    </author>
    <id>https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113101344</id>
    <updated>2019-04-09T17:19:53Z</updated>
    <published>2019-04-09T17:19:53Z</published>
    <summary type="html">&lt;blockquote&gt;Christoph RabelI didn&amp;#39;t really follow the thread, but I had a superweird problem with the scheduler once too. After a few hours, it simply stopped working till I restarted the server. I really fiddled a while with the problem. Then I deleted the osgi/state folder, restarted and the problem was gone. I know, it&amp;#39;s a wild guess and kinda a &amp;#34;sacrifice a lamb and dance around the fire&amp;#34; solution, but deleting the state folder resolved weird issues for me a couple of times now.&lt;/blockquote&gt;&lt;br /&gt;It&amp;#39;s also a great tip Christoph.&lt;br /&gt;I&amp;#39;ll keep this option as a resource if I take too long to find a solution. I want to run the scheduled job from my primary node (prd1), instead of every time I access the second node (prd2) by ip.</summary>
    <dc:creator>Alex Camaroti</dc:creator>
    <dc:date>2019-04-09T17:19:53Z</dc:date>
  </entry>
  <entry>
    <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
    <link rel="alternate" href="https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113100713" />
    <author>
      <name>Alex Camaroti</name>
    </author>
    <id>https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113100713</id>
    <updated>2019-04-09T17:16:18Z</updated>
    <published>2019-04-09T17:16:18Z</published>
    <summary type="html">&lt;blockquote&gt;Andrew JardineWell, it&amp;#39;s not really a hole. The Quartz Scheduler is built so that it only runs on one node. So if you have a cluster of 15 nodes, even if you deploy the module to all 15 nodes (which you should) it will only ever run on one node. My point is that if you have that setup, 15 nodes all with your module, and you stop/start the module on a node that isn&amp;#39;t the one with the quartz lock, then perhaps that is why you see that message -- but if the message is reported on a node that is NOT controlling the lock, then maybe you can ignore it. So that is why I was strying to figure out if the error shows up on ALL your nodes? or on your Number of Nodes - 1 ... if it is the Number of Nodes - 1 then I would suspect that you are fine. &lt;br /&gt;&lt;br /&gt;... does the job still run in the end? on Any of the nodes?&lt;/blockquote&gt;You are right, it happens just on one node that is the main node (prd1). The second node (prd2) that I need to access via ip the job runs.</summary>
    <dc:creator>Alex Camaroti</dc:creator>
    <dc:date>2019-04-09T17:16:18Z</dc:date>
  </entry>
  <entry>
    <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
    <link rel="alternate" href="https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113100379" />
    <author>
      <name>Christoph Rabel</name>
    </author>
    <id>https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113100379</id>
    <updated>2019-04-09T17:00:45Z</updated>
    <published>2019-04-09T17:00:45Z</published>
    <summary type="html">I didn&amp;#39;t really follow the thread, but I had a super weird problem with the scheduler once too. After a few hours, it simply stopped working until I restarted the server. I really fiddled with the problem for a while. Then I deleted the osgi/state folder, restarted, and the problem was gone. I know it&amp;#39;s a wild guess and kind of a &amp;#34;sacrifice a lamb and dance around the fire&amp;#34; solution, but deleting the state folder has resolved weird issues for me a couple of times now.</summary>
    <dc:creator>Christoph Rabel</dc:creator>
    <dc:date>2019-04-09T17:00:45Z</dc:date>
  </entry>
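Christoph's cleanup step (stop the server, delete osgi/state, restart) can be scripted. A plain-Java sketch, assuming a standard Liferay home layout where the state folder sits at `<liferay-home>/osgi/state`:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

// Illustrative sketch: clear the OSGi state folder. Only run this with the
// server stopped; the framework rebuilds the folder on the next startup.
public class ClearOsgiState {

    // Deletes a directory tree, children before parents.
    static void deleteRecursively(Path dir) throws IOException {
        if (!Files.exists(dir)) {
            return;
        }
        try (Stream<Path> paths = Files.walk(dir)) {
            paths.sorted(Comparator.reverseOrder()).forEach(p -> {
                try {
                    Files.delete(p);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }

    public static void main(String[] args) throws IOException {
        if (args.length > 0) {
            // args[0] is the Liferay home directory.
            deleteRecursively(Path.of(args[0], "osgi", "state"));
        }
    }
}
```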
  <entry>
    <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
    <link rel="alternate" href="https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113099773" />
    <author>
      <name>Andrew Jardine</name>
    </author>
    <id>https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113099773</id>
    <updated>2019-04-09T16:52:23Z</updated>
    <published>2019-04-09T16:52:23Z</published>
    <summary type="html">Well, it&amp;#39;s not really a hole. The Quartz Scheduler is built so that it only runs on one node. So if you have a cluster of 15 nodes, even if you deploy the module to all 15 nodes (which you should), it will only ever run on one node. My point is that if you have that setup, 15 nodes all with your module, and you stop/start the module on a node that isn&amp;#39;t the one with the Quartz lock, then perhaps that is why you see that message -- but if the message is reported on a node that is NOT controlling the lock, then maybe you can ignore it. So that is why I was trying to figure out whether the error shows up on ALL your nodes, or on your Number of Nodes - 1 ... if it is the Number of Nodes - 1, then I would suspect that you are fine. &lt;br /&gt;&lt;br /&gt;... does the job still run in the end? On any of the nodes?</summary>
    <dc:creator>Andrew Jardine</dc:creator>
    <dc:date>2019-04-09T16:52:23Z</dc:date>
  </entry>
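The "number of nodes - 1" reasoning in the reply above can be reduced to a one-line helper. A framework-free sketch (all names hypothetical, nothing here is Liferay API), assuming only what the post states: exactly one node holds the scheduler lock, and the remaining slaves log the "not yet deployed on master" message.

```java
// Sketch of the expected spread of the INFO message across a cluster:
// one node is the master holding the Quartz lock; every other node is a
// slave and is the one expected to log the message.
public class LockSketch {

    static int nodesExpectedToLogMessage(int totalNodes) {
        // exactly one master, so totalNodes - 1 slaves log the message
        return totalNodes - 1;
    }

    public static void main(String[] args) {
        System.out.println(nodesExpectedToLogMessage(15)); // prints 14
        System.out.println(nodesExpectedToLogMessage(2));  // prints 1
    }
}
```

So with Alex's two production nodes, seeing the message on exactly one node would be consistent with normal operation.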
  <entry>
    <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
    <link rel="alternate" href="https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113099123" />
    <author>
      <name>Alex Camaroti</name>
    </author>
    <id>https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113099123</id>
    <updated>2019-04-09T16:46:30Z</updated>
    <published>2019-04-09T16:46:30Z</published>
    <summary type="html">Yep, when I tried to run it alone on a specific IP of prd2, it worked, but only when accessing prd2. I saw that my job is deployed on both servers, but I don&amp;#39;t remember the order right now. &lt;br /&gt;&lt;br /&gt;When I ran the job on prd2, it also gave me some information on prd2: &lt;strong&gt;2019-04-04 07:55:05.025 INFO [default-6194][ClusterSchedulerEngine:745] Receive notification from master, add memory clustered job {groupName=com.admin.cronjob.job.CronJob, jobName=com.admin.cronjob.job.CronJob, storageType=MEMORY_CLUSTERED}.&lt;br /&gt;&lt;br /&gt;&lt;/strong&gt;&lt;blockquote&gt;&lt;strong&gt;&lt;/strong&gt;Ok I am wondering now if maybe you are trying to initialize the service on the node that isn&amp;#39;t the one managing the jobs. For example, let&amp;#39;s say your Prod2 is the one running the tasks, but you are in the Control Panel on Prod1.&lt;br /&gt;&lt;/blockquote&gt;We deployed the same application on both servers, but both nodes should be synchronized, communicating with each other, right? As a beginner entering in the middle of the project, I don&amp;#39;t know exactly how this communication between servers works. &lt;br /&gt;Hmm, I got your point. I just can&amp;#39;t see a way out of this hole hehehe&amp;#39;</summary>
    <dc:creator>Alex Camaroti</dc:creator>
    <dc:date>2019-04-09T16:46:30Z</dc:date>
  </entry>
  <entry>
    <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
    <link rel="alternate" href="https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113094725" />
    <author>
      <name>Andrew Jardine</name>
    </author>
    <id>https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113094725</id>
    <updated>2019-04-09T15:20:44Z</updated>
    <published>2019-04-09T15:20:44Z</published>
    <summary type="html">Ok I am wondering now if maybe you are trying to initialize the service on the node that isn&amp;#39;t the one managing the jobs. For example, let&amp;#39;s say your Prod2 is the one running the tasks, but you are in the Control Panel on Prod1.&lt;br /&gt;&lt;br /&gt;Is there a way for you to open two browsers where one is on Prod1 and the other (incognito, or a different browser) is on Prod2? And then try the same steps on each to see if the error shows up in one log but not the other?</summary>
    <dc:creator>Andrew Jardine</dc:creator>
    <dc:date>2019-04-09T15:20:44Z</dc:date>
  </entry>
  <entry>
    <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
    <link rel="alternate" href="https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113093067" />
    <author>
      <name>Alex Camaroti</name>
    </author>
    <id>https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113093067</id>
    <updated>2019-04-09T14:48:36Z</updated>
    <published>2019-04-09T14:48:36Z</published>
    <summary type="html">&lt;html&gt;&lt;head&gt;&lt;/head&gt;&lt;body&gt;Sure.&lt;br&gt;&lt;br&gt;To reply the case, I access:&lt;br&gt;&lt;br&gt;1. Control Panel &amp;gt; Applications &amp;gt; Applications Management&lt;br&gt;2. I choose a specific job, and I stop/start with the time changed on my ddl.&lt;br&gt;Note: Remembering that the same code woks fine on development server and test server (because it has just one node). Just production server that has 2 nodes (prd1 and prd2) that is not working by some clustering configuration that maybe was lost.&lt;br&gt;&lt;br&gt;&lt;br&gt;On the main server it doens't give to me too much information:&lt;br&gt;&lt;pre&gt;&lt;code&gt;2019-04-04 07:45:50.206 INFO [http-nio-8080-exec-257][CronJob:93] Deactivate : Thu Apr 04 07:45:50 BRT 
20192019-04-04 07:45:50.280 INFO [http-nio-8080-exec-257][BundleStartStopLogger:38] STOPPED com.admin.cronjob_1.0.0 [1168]
2019-04-04 07:45:58.061 INFO [http-nio-8080-exec-290][BundleStartStopLogger:35] STARTED com.admin.cronjob_1.0.0 [1168]
2019-04-04 07:45:58.072 INFO [http-nio-8080-exec-290][CronJob:63] Activate : Thu Apr 04 07:45:58 BRT 
20192019-04-04 07:45:58.074 INFO [http-nio-8080-exec-290][CronJob:178] Minutes : 1440
2019-04-04 07:45:58.074 INFO [http-nio-8080-exec-290][CronJob:79] The job is going to be executed&amp;amp;nbsp;on: Thu Apr 04 07:50:00 BRT 
20192019-04-04 07:45:58.090 INFO [http-nio-8080-exec-290][ClusterSchedulerEngine:358] Memory clustered job is not yet deployed on master
​​​​​​​2019-04-04 07:45:58.095 INFO [http-nio-8080-exec-290][CronJob:68] CronJobINFO :Job Registered.
&lt;/code&gt;&lt;/pre&gt;&lt;/body&gt;&lt;/html&gt;</summary>
    <dc:creator>Alex Camaroti</dc:creator>
    <dc:date>2019-04-09T14:48:36Z</dc:date>
  </entry>
  <entry>
    <title>RE: Job doenst trigger. Msg: Liferay job node not deployed on master</title>
    <link rel="alternate" href="https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113079110" />
    <author>
      <name>Andrew Jardine</name>
    </author>
    <id>https://liferay.dev/c/message_boards/find_message?p_l_id=119785294&amp;messageId=113079110</id>
    <updated>2019-04-08T22:15:51Z</updated>
    <published>2019-04-08T22:15:51Z</published>
    <summary type="html">&lt;html&gt;&lt;head&gt;&lt;/head&gt;&lt;body&gt;Hi Alex,&lt;br&gt;&lt;br&gt;I understand -- certainly not the first time I have tried to provide advice that assumes an ideal scenario of full control over the environment and the time it takes to figure it out and build it right &lt;img alt="emoticon" src="@theme_images_path@/emoticons/happy.gif"&gt;&lt;br&gt;&lt;br&gt;Coming back to your actual problem, I did some more digging and I found the class ClusterSchedulerEngine seems to be the class that reports the error you are experiencing --&lt;pre&gt;&lt;code&gt;...

try {
   SchedulerResponse schedulerResponse = future.get(
      _callMasterTimeout, TimeUnit.SECONDS);

   if ((schedulerResponse == null) ||
      (schedulerResponse.getTrigger() == null)) {

      if (_log.isInfoEnabled()) {
         _log.info(
            StringBundler.concat(
               "Memory clustered job ",
               getFullName(jobName, groupName),
               " is not yet deployed on master"));
      }
   }
   else {
      addMemoryClusteredJob(schedulerResponse);
   }
}
...
&lt;/code&gt;&lt;/pre&gt;&lt;br&gt;... can you share a full stack trace so I can follow the rabbit down the hole?&amp;nbsp;&lt;br&gt;&lt;br&gt;Also, at the very least, can you remove the call that sets up the scheduler in the doReceive?&amp;nbsp; Just to make sure that it's not part of the issue. I say that because a task registered with a simple trigger is normally run immediately on deployment, so I am wondering if, when you deploy, you register the task, but since the listener runs right away, it tries to register the task again (a second time). Even if that is not the case, you shouldn't need that code all the same.&lt;/body&gt;&lt;/html&gt;</summary>
    <dc:creator>Andrew Jardine</dc:creator>
    <dc:date>2019-04-08T22:15:51Z</dc:date>
  </entry>
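The ClusterSchedulerEngine excerpt quoted in the last reply boils down to a single null check: the slave asks the master for the job's SchedulerResponse, and a missing response (or a response without a trigger) produces the INFO line Alex sees. A framework-free sketch of that decision; class and field names are hypothetical stand-ins, not Liferay API.

```java
// Sketch of the slave-side decision behind the log line
// "Memory clustered job is not yet deployed on master": a null response,
// or a response without a trigger, means the master does not know the job.
public class MasterCheckSketch {

    // Stand-in for the SchedulerResponse a master node returns.
    static final class Response {
        final String trigger;

        Response(String trigger) {
            this.trigger = trigger;
        }
    }

    static String describe(Response response) {
        if ((response == null) || (response.trigger == null)) {
            return "Memory clustered job is not yet deployed on master";
        }

        // In the real engine this branch adds the memory clustered job.
        return "add memory clustered job";
    }

    public static void main(String[] args) {
        System.out.println(describe(null));
        System.out.println(describe(new Response("0 50 7 * * ?")));
    }
}
```

When the master does return a trigger, the slave adds the memory clustered job instead, which matches the "Receive notification from master, add memory clustered job" line Alex saw on prd2.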
</feed>
