Cache replication in Liferay 7.2
Hello!
How can I enable cache replication for custom keys in Liferay 7.2? I executed the following groovy script on my first cluster node
import com.liferay.portal.kernel.cache.MultiVMPoolUtil;
import com.liferay.portal.kernel.cache.PortalCache;

import java.io.Serializable;

// Put a custom key into the clustered MultiVM cache on node 1.
PortalCache<Serializable, Serializable> portalCache = MultiVMPoolUtil.getPortalCache("com.sample.test");

portalCache.put(1, 1);

out.println(portalCache.getKeys());
The output is [1]. If I execute this code on the second node
import com.liferay.portal.kernel.cache.MultiVMPoolUtil;
import com.liferay.portal.kernel.cache.PortalCache;

import java.io.Serializable;

// Check whether the key put on node 1 is visible on node 2.
PortalCache<Serializable, Serializable> portalCache = MultiVMPoolUtil.getPortalCache("com.sample.test");

out.println(portalCache.getKeys());
I get an empty output. So the cache is not replicated. How can I achieve this?
Matthew K.:
Why are you going after cache replication? The default implementation only does distributed cache invalidation: just because you change one cached object on one node of your cluster doesn't mean that all nodes should hold it in their caches from then on - that would severely limit capacity.
If it's an object that must be cached by all nodes in the cluster: they'll read it from the database once it's been flushed out of the cache, and then it'll be current - but only when it's actually accessed.
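In script form, that read-on-access pattern looks roughly like this - a sketch only, where loadFromDatabase() is a hypothetical stand-in for however the value is actually produced:

import com.liferay.portal.kernel.cache.MultiVMPoolUtil;
import com.liferay.portal.kernel.cache.PortalCache;

import java.io.Serializable;

PortalCache<Serializable, Serializable> portalCache = MultiVMPoolUtil.getPortalCache("com.sample.test");

// Try the local cache first; every node fills its own copy on demand.
Serializable value = portalCache.get(1);

if (value == null) {
    // Cache miss (e.g. after a cluster-wide invalidation): rebuild the
    // value from the source of truth and cache it on this node only.
    value = loadFromDatabase(1); // hypothetical helper

    portalCache.put(1, value);
}

out.println(value);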
Christoph Rabel:
I had a similar problem a while ago.
I have a "result" here that takes 4 minutes to compile. I also tried to put that into the cache (in a clustered env) and found that it was the wrong place.
Whenever the cache was invalidated, somebody had to wait, often these people also got impatient and clicked several times. Invalidating the cache was also the wrong behavior. Instead of invalidating the old result, it should have recalculated the new result and made that new result available afterwards.
In general I found that the cache was simply the wrong place for these data. I didn't want to cache it, I wanted "precalculated" results that sometimes need to be updated.
So, I wrote my own helper storage class for that. I pondered writing the result data to a database or some external service like redis but in the end it was sufficient for my usecase to just store it in some variable and add a trigger that checks every 15 minutes if it needs to be recalculated. If it is old, the trigger calculates a new result and stores it afterwards.
Of course, this fits my usecase and maybe only my usecase. My point is, maybe the cache is simply the wrong place for the data and you try to solve a problem that you shouldn't actually have. I fiddled a while with the cache solution and simply couldn't solve all the issues involved. I never got it to behave really nice.
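Roughly, the helper looked like this - a sketch rather than my actual code, with the 15-minute interval and the computeResult() body purely illustrative:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class PrecalculatedResultHolder {

    // The current result; readers never wait while a new one is computed.
    private final AtomicReference<Map<String, Object>> _result = new AtomicReference<>();

    private final ScheduledExecutorService _scheduler = Executors.newSingleThreadScheduledExecutor();

    public void start() {
        // Recalculate in the background and swap the reference afterwards,
        // instead of invalidating the old result and making callers wait.
        _scheduler.scheduleAtFixedRate(() -> _result.set(computeResult()), 0, 15, TimeUnit.MINUTES);
    }

    public Map<String, Object> getResult() {
        return _result.get();
    }

    private Map<String, Object> computeResult() {
        // Placeholder for the expensive calculation (about 4 minutes in my case).
        Map<String, Object> result = new HashMap<>();

        result.put("computedAt", System.currentTimeMillis());

        return result;
    }
}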
I have a "result" here that takes 4 minutes to compile. I also tried to put that into the cache (in a clustered env) and found that it was the wrong place.
Whenever the cache was invalidated, somebody had to wait, often these people also got impatient and clicked several times. Invalidating the cache was also the wrong behavior. Instead of invalidating the old result, it should have recalculated the new result and made that new result available afterwards.
In general I found that the cache was simply the wrong place for these data. I didn't want to cache it, I wanted "precalculated" results that sometimes need to be updated.
So, I wrote my own helper storage class for that. I pondered writing the result data to a database or some external service like redis but in the end it was sufficient for my usecase to just store it in some variable and add a trigger that checks every 15 minutes if it needs to be recalculated. If it is old, the trigger calculates a new result and stores it afterwards.
Of course, this fits my usecase and maybe only my usecase. My point is, maybe the cache is simply the wrong place for the data and you try to solve a problem that you shouldn't actually have. I fiddled a while with the cache solution and simply couldn't solve all the issues involved. I never got it to behave really nice.
My use case is actually really similar. I also have a scheduled job that runs every 30 minutes and performs a rather complex operation. The result is just a simple map with 5 values that needs to be stored somewhere so that other modules can access it. The thing is that a scheduled job (by default) only runs on a single node of the cluster. So storing the value in a variable won't work, because that variable is bound to a single JVM, and therefore only one cluster node can access it.
@Christoph Rabel How did you solve that issue? If I understand correctly, you also use a single variable to store the value. How can you make that value available to the other nodes if your scheduled job also runs on one node only?
Matthew K.:
I'd consider that a business problem that shouldn't be solved with the cache. Suppose you recalculate every 30 minutes but restart a server 2 minutes after the calculation: the cache doesn't get replicated just because another server was restarted with an empty cache, so that node has to obtain the result from somewhere. My recommendation is to write the result to a temporary storage location once the calculation is done, e.g. the database or an external business system (but not limited to these options). Once it's read from whatever backend you decide on, it can be cached, flushed from the cache, and re-read at any time.
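As a sketch of that flow, with a hypothetical ResultStore interface standing in for whatever backend you decide on:

import com.liferay.portal.kernel.cache.MultiVMPoolUtil;
import com.liferay.portal.kernel.cache.PortalCache;

import java.io.Serializable;
import java.util.HashMap;

public class CalculationResultService {

    public CalculationResultService(ResultStore resultStore) {
        _resultStore = resultStore;
    }

    // Runs on the scheduler node only: persist first, so any node
    // (including one restarted a minute later) can recover the result.
    public void recalculate() {
        HashMap<String, Serializable> result = _expensiveCalculation();

        _resultStore.save("latest", result);

        // Remove the stale entry; with ClusterLink invalidation enabled,
        // the other nodes re-read from the store on their next access.
        _portalCache.remove("latest");
    }

    // Runs on any node: read-through from the backend.
    public Serializable getLatestResult() {
        Serializable result = _portalCache.get("latest");

        if (result == null) {
            result = _resultStore.load("latest");

            if (result != null) {
                _portalCache.put("latest", result);
            }
        }

        return result;
    }

    // Hypothetical backend abstraction: a database table, an external
    // business system, Redis, etc.
    public interface ResultStore {

        public Serializable load(String key);

        public void save(String key, Serializable value);

    }

    private HashMap<String, Serializable> _expensiveCalculation() {
        return new HashMap<>(); // placeholder for the 30-minute job's work
    }

    private final PortalCache<Serializable, Serializable> _portalCache =
        (PortalCache<Serializable, Serializable>)MultiVMPoolUtil.getPortalCache("com.sample.result");

    private final ResultStore _resultStore;

}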
Christoph Rabel:
Well, in my case we solved it simply by calculating the result on both servers independently. That was sufficient, since the backend data changed only once a day.
If that isn't sufficient for you, you probably need some external storage service: the database, Redis, ... or maybe even the Elasticsearch servers.
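If you went the Redis route, a minimal sketch with the Jedis client could look like this (the host, key name, and TTL are my assumptions, and your 5-value map would still need to be serialized, e.g. to JSON):

import redis.clients.jedis.Jedis;

public class RedisResultStore {

    // Assumed Redis location; in a real deployment this comes from configuration.
    private static final String _REDIS_HOST = "redis.example.com";

    public String load() {
        try (Jedis jedis = new Jedis(_REDIS_HOST, 6379)) {
            return jedis.get("com.sample.result.latest");
        }
    }

    public void save(String json) {
        try (Jedis jedis = new Jedis(_REDIS_HOST, 6379)) {
            // Keep the entry slightly longer than the 30-minute
            // recalculation interval so readers never see a gap.
            jedis.setex("com.sample.result.latest", 35 * 60, json);
        }
    }

}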