RE: Check cluster configuration Liferay DXP 7.0

Check cluster configuration Liferay DXP 7.0
Joaquin Cabal, modified 6 Years ago. Regular Member Posts: 106 Join Date: 9/7/09 Recent Posts
Hi, we are working with Liferay DXP 7.0 and have some doubts about the clustering config.

This is current config:

cluster.link.enabled=true
lucene.replicate.write=true
org.quartz.jobStore.isClustered=true
ehcache.cluster.link.replication.enabled=true

We have the same repository on both nodes, using the Advanced File System store.
We also share the same database.
We have our own OSGi module running on each node with some logic that doesn't use Ehcache and keeps a custom cache of journal articles.

We have been researching and we still see differences between the nodes.

Specifically with journal articles: if we modify a journal article, should the change be replicated to both nodes in our own OSGi module?

Thanks in advance!


 
Olaf Kock, modified 6 Years ago. Liferay Legend Posts: 6441 Join Date: 9/23/08 Recent Posts
Joaquin Cabal: Hi, we are working with Liferay DXP 7.0 and have some doubts about the clustering config.

This is current config:

cluster.link.enabled=true
lucene.replicate.write=true
org.quartz.jobStore.isClustered=true
ehcache.cluster.link.replication.enabled=true

We have the same repository on both nodes, using the Advanced File System store.
We also share the same database.
We have our own OSGi module running on each node with some logic that doesn't use Ehcache and keeps a custom cache of journal articles.
Starting from the end: You have a custom cache for Journal articles, and are wondering why it's not updated through Liferay's cluster communication? Start there. Or eliminate the cache - at least for a quick test.

You should see evidence of proper clustering (and peer discovery) in the logs. The default configuration uses multicast. If that's disabled, or if the machines can't see each other, the cluster won't be formed properly. Sooner or later the nodes will see each other's articles, but only once the cache has been flushed.
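
If multicast is blocked on the network, ClusterLink can be pointed at a unicast (TCP) JGroups configuration instead. A minimal sketch for portal-ext.properties, assuming a custom JGroups XML file deployed on each node (the file path is illustrative, not a Liferay default):

cluster.link.channel.properties.control=/custom-jgroups/tcp.xml
cluster.link.channel.properties.transport.0=/custom-jgroups/tcp.xml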

You also didn't mention that you've configured all nodes to contact the same Elasticsearch instance.
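
For reference, in DXP 7.0 the nodes are usually pointed at one remote Elasticsearch through an OSGi configuration file such as com.liferay.portal.search.elasticsearch.configuration.ElasticsearchConfiguration.config, deployed identically on every node. A sketch with illustrative values:

operationMode="REMOTE"
transportAddresses="search.example.com:9300"
clusterName="LiferayElasticsearchCluster"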
Joaquin Cabal, modified 6 Years ago. Regular Member Posts: 106 Join Date: 9/7/09 Recent Posts
Thanks Olaf!

We will start with journal articles, without the custom cache.
We will also check whether both nodes use the same Elasticsearch instance.
Joaquin Cabal, modified 6 Years ago. Regular Member Posts: 106 Join Date: 9/7/09 Recent Posts
Hi Olaf,

We were trying to solve the error and haven't found a solution yet.

Maybe you can help me with this.

At the OSGi module level, with a simple singleton class: when its variables change on one node, should they also change on the other node? Or does the sync between nodes not happen at that level?

And with this in mind, if we use the Liferay cache implementation in the module, should it sync between the nodes?

Thanks in advance!
Olaf Kock, modified 6 Years ago. Liferay Legend Posts: 6441 Join Date: 9/23/08 Recent Posts
Joaquin Cabal

At the OSGi module level, with a simple singleton class: when its variables change on one node, should they also change on the other node? Or does the sync between nodes not happen at that level?
No, there's no magic of reaching into another process's memory. When an object is modified, the cluster communication layer will communicate the id of the changed object to the other cluster machines. In turn, the other nodes will remove the object in question from the cache.

This means that any future access to those objects won't be served from cache, so that they'll be accessed from database, thus be current again.

If you keep any objects in your own variables, there's nothing that the cluster communication can do. In fact, keeping objects in singletons for a long time is a recipe for disaster: you might even keep references to code that has long been undeployed. Don't do that. This is the "custom cache" that you should eliminate; it's debatable whether "cache" is even a good name for it. Keep the id and access the objects through the API when you need them. If they're in cache, they'll be served quickly. If they're not, they'll be loaded fresh with the current data.
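
A minimal sketch of that pattern, assuming an OSGi component with a reference to JournalArticleLocalService (the class and field names are illustrative, not part of any Liferay API):

import com.liferay.journal.model.JournalArticle;
import com.liferay.journal.service.JournalArticleLocalService;
import com.liferay.portal.kernel.exception.PortalException;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Keep only the primary key; look the article up through the service API on
// every access, so Liferay's entity cache (and its cluster-wide invalidation)
// does the work.
@Component(service = ArticleHolder.class)
public class ArticleHolder {

	public void setArticleId(long articleId) {
		_articleId = articleId;
	}

	public JournalArticle getArticle() throws PortalException {
		// Served from the entity cache if present, otherwise loaded from the
		// database, and therefore current again after a cluster invalidation.
		return _journalArticleLocalService.getJournalArticle(_articleId);
	}

	private volatile long _articleId;

	@Reference
	private JournalArticleLocalService _journalArticleLocalService;
}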
Joaquin Cabal, modified 6 Years ago. Regular Member Posts: 106 Join Date: 9/7/09 Recent Posts
OK, with that we know that simple objects are not synced.

Apart from that, we have an afterUpdate listener for JournalArticles. When we publish journal articles from staging to remote, to the IP of one of the nodes behind the load balancer, that node executes all the afterUpdate listeners, but the other node doesn't. Do you know if the listener should be called on the second node as well?
Olaf Kock, modified 6 Years ago. Liferay Legend Posts: 6441 Join Date: 9/23/08 Recent Posts
Joaquin Cabal

OK, with that we know that simple objects are not synced.

Apart from that, we have an afterUpdate listener for JournalArticles. When we publish journal articles from staging to remote, to the IP of one of the nodes behind the load balancer, that node executes all the afterUpdate listeners, but the other node doesn't. Do you know if the listener should be called on the second node as well?
Yes, I know. And no: They shouldn't be called. They're only called on the node where the update happens. All other nodes' cache will be invalidated and the objects technically never change in memory there.

One of the use cases for clusters is to distribute the load in order to serve more requests. It would be quite detrimental to execute such code on every single node. The nodes share the database, and it should be enough to execute every listener once and only once.

There's no magic happening in clustering. Only expiring objects from the cache when other nodes signal that they're changed.
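
For reference, a model listener like the one described fires only on the node that performs the update; the other nodes just see their cached copy of the article invalidated. A minimal sketch against the DXP 7.0 journal API (the class name and log message are illustrative):

import com.liferay.journal.model.JournalArticle;
import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;
import com.liferay.portal.kernel.model.BaseModelListener;
import com.liferay.portal.kernel.model.ModelListener;

import org.osgi.service.component.annotations.Component;

// Runs only on the node that executes the update.
@Component(immediate = true, service = ModelListener.class)
public class JournalArticleAfterUpdateListener
	extends BaseModelListener<JournalArticle> {

	@Override
	public void onAfterUpdate(JournalArticle article) {
		_log.info(
			"Article " + article.getArticleId() + " updated on this node");
	}

	private static final Log _log = LogFactoryUtil.getLog(
		JournalArticleAfterUpdateListener.class);
}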
Joaquin Cabal, modified 6 Years ago. Regular Member Posts: 106 Join Date: 9/7/09 Recent Posts
OK, I see, Olaf.
The DB should be enough when the listener is called. And since the other node has its cache invalidated for that object, it will be forced to go to the DB for the new data.
Thanks in advance.