RE: How to implement Lock mechanism in Multiple node Environment

Kailas Chougule, modified 6 Years ago. New Member Posts: 8 Join Date: 1/30/14 Recent Posts
Hi,
A little background:
I am using Liferay 5.2.3 and MariaDB. We have an internal framework called DelegateHandler, with a separate named operation for each CRUD action.
For example, OperationName = addCustomer is an operation that contains multiple processors.
OperationName = postAmount also contains multiple processors.
The same goes for OperationName = debitAmount.
Issue:
When the "postAmount" and "debitAmount" operations are performed on the same customer from 2 different nodes, I get a wrong "CustomerAmountBalance" value.

If the "postAmount" and "debitAmount" operations are performed on the same customer from 1 node, then I get the correct "CustomerAmountBalance" value.
To resolve this issue I am using a "ReentrantLock" mechanism in our DelegateHandler framework.
David H Nebinger, modified 6 Years ago. Liferay Legend Posts: 14933 Join Date: 9/2/06 Recent Posts
5.2.3, wow, that's got some age on it...


There's not going to be a cross-node lock mechanism, but you might try building out on the Liferay Message Bus. I believe, even at that time, the LMB had cluster support and would synchronize message processing.

This way, the Post and Debit messages would go into the LMB and queue up, and the message listener would process them in order, ensuring the correct outcome. If the changes apply to Liferay entities generated by Service Builder, the updates will broadcast cache-change messages to all nodes so they ditch the stale values and show the updated values persisted by the message listener.
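The serializing effect described above can be illustrated with a plain single-consumer queue standing in for an LMB destination with one listener. This is only a sketch of the principle, not Liferay API; all class and method names here are invented for the example:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch: producers enqueue balance changes, and a single
// consumer applies them strictly in arrival order, so no two updates
// ever race on the balance. Names are invented, not Liferay APIs.
class BalanceProcessor {
    private final BlockingQueue<Integer> deltas = new LinkedBlockingQueue<>();
    private int balance;

    BalanceProcessor(int openingBalance) {
        this.balance = openingBalance;
    }

    // The "postAmount"/"debitAmount" operations only enqueue a delta.
    void submit(int delta) {
        deltas.add(delta);
    }

    // The lone consumer drains the queue and applies each change in order.
    void drain() {
        Integer delta;
        while ((delta = deltas.poll()) != null) {
            balance += delta;
        }
    }

    int getBalance() {
        return balance;
    }
}
```

The key design point is that the writers never touch the balance directly; only the single listener does, which is exactly how one LMB listener would remove the cross-node race.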

If you are not using any Liferay facilities, I think you'll find your task is hard in general. If you have two nodes competing for a lock, they can be in a race to see who gets it first. You need an outside entity to manage the lock; otherwise the node that manages it can starve out the node that doesn't. You'll face issues such as what happens if a node crashes while holding the lock, what happens if the lock manager crashes, cluster-wide notification so other nodes are aware of data value changes (not a problem if you never cache the retrieved value for the account), etc.

None of these issues are specific to Liferay, of course. Liferay has some built-in mechanisms to help in these kinds of situations because they are things Liferay itself has faced, but if you're not building off of Liferay tools and APIs, those facilities may not be available to you.
Minhchau Dang, modified 6 Years ago. Liferay Master Posts: 598 Join Date: 10/22/07 Recent Posts
How to implement Lock mechanism in Multiple node Environment

If you're using Hibernate, you could use its multi-version concurrency control mechanism, where it uses a column that contains the version of the record, and essentially adds "set versionColumn = currentVersionValue + 1 where versionColumn = currentVersionValue" to the update clause. This causes the update to fail if something has already updated the row (since the versionColumn will be different), and it's something that comes built into Service Builder in later Liferay releases.
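The versioned-update pattern described above can be sketched in memory. In real code the equivalent would be SQL issued through Hibernate or JDBC, along the lines of `UPDATE Customer SET balance = ?, version = version + 1 WHERE customerId = ? AND version = ?`, with the caller checking the updated-row count. All names in this sketch are invented for the example:

```java
// Illustrative in-memory sketch of multi-version concurrency control:
// an update only succeeds if the version it read is still current.
class CustomerRow {
    private int balance;
    private long version;

    CustomerRow(int balance) {
        this.balance = balance;
        this.version = 0;
    }

    long getVersion() { return version; }
    int getBalance() { return balance; }

    // Returns false (the equivalent of "zero rows updated") if another
    // writer bumped the version since we read it; the caller must
    // re-read the row and retry.
    synchronized boolean updateBalance(int newBalance, long expectedVersion) {
        if (version != expectedVersion) {
            return false;
        }
        balance = newBalance;
        version++;
        return true;
    }
}
```

The write that loses the race fails cleanly instead of silently overwriting the winner's change, which is exactly the wrong-balance symptom in the original question.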

If you're not using Hibernate, another strategy is to force all of the updates to happen on a single node, which lets you keep the approach you already have for a single-node environment. You'll basically be writing all of the node-coordination logic yourself, though, because none of the machinery Liferay now provides to help with this exists in 5.2.x.
Christoph Rabel, modified 6 Years ago. Liferay Legend Posts: 1555 Join Date: 9/24/09 Recent Posts
Minhchau Dang:

How to implement Lock mechanism in Multiple node Environment

If you're using Hibernate, you could use its multi-version concurrency control mechanism, where it uses a column that contains the version of the record, and essentially adds "set versionColumn = currentVersionValue + 1 where versionColumn = currentVersionValue" to the update clause. This causes the update to fail if something has already updated the row (since the versionColumn will be different), and it's something that comes built into Service Builder in later Liferay releases.
Really? Since which version? This was always one of the things I really hated about service builder.
David H Nebinger, modified 6 Years ago. Liferay Legend Posts: 14933 Join Date: 9/2/06 Recent Posts
Hibernate has had version-column support practically forever. I remember decorating all of my Hibernate XML mapping files with version columns for each of the entities...
Christoph Rabel, modified 6 Years ago. Liferay Legend Posts: 1555 Join Date: 9/24/09 Recent Posts
I knew. But Liferay SB didn't support it. Take a look at the second link in my other post.
Please excuse the typos; my smartphone auto-corrected some words.
Minhchau Dang, modified 6 Years ago. Liferay Master Posts: 598 Join Date: 10/22/07 Recent Posts
Christoph Rabel:

Really? Since which version? This was always one of the things I really hated about service builder.

It was added in LPS-43264, so it's in the 7.0 DTD.
Christoph Rabel, modified 6 Years ago. Liferay Legend Posts: 1555 Join Date: 9/24/09 Recent Posts
Hmm. The DTD doesn't contain anything that seems relevant to optimistic locking.
I checked the commits related to LPS-43264, and a commit that adds a "versioned" flag to the DTD isn't there. Is it possible this was only added later, in 7.1, and didn't make it into 7.0?
Minhchau Dang, modified 6 Years ago. Liferay Master Posts: 598 Join Date: 10/22/07 Recent Posts
Christoph Rabel:

Hmm. The DTD doesn't contain anything that seems relevant to optimistic locking.

Ah, I should have linked to the actual key that was added. It's the "mvcc-enabled" flag, added in commit 0f11969a79445f996eb478b318a76150c3b7f683, which made it into Service Builder during the 7.0 milestone releases.
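For reference, the flag sits on the root element of service.xml. A minimal sketch, assuming 7.0+; the package path, namespace, and entity here are invented for the example:

```xml
<?xml version="1.0"?>
<!DOCTYPE service-builder PUBLIC "-//Liferay//DTD Service Builder 7.0.0//EN"
    "http://www.liferay.com/dtd/liferay-service-builder_7_0_0.dtd">

<!-- Sketch only: mvcc-enabled="true" turns on the optimistic-locking
     version column for the generated entities. -->
<service-builder package-path="com.example.sample" mvcc-enabled="true">
    <namespace>Sample</namespace>
    <entity name="Customer" local-service="true">
        <column name="customerId" type="long" primary="true" />
        <column name="balance" type="long" />
    </entity>
</service-builder>
```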

The pull request referenced on the ticket was the first pull request, which wasn't accepted. There are times when our Engineering team doesn't update the pull request field on LPS tickets, so we often end up having to search GitHub to identify the pull request that actually made it in. In this case, there were four more after that one, and only the last two were merged.
Christoph Rabel, modified 6 Years ago. Liferay Legend Posts: 1555 Join Date: 9/24/09 Recent Posts
You need to implement a locking strategy such as optimistic locking or pessimistic locking. Which one is better for you depends on your exact use case.
https://enterprisecraftsmanship.com/2017/09/18/optimistic-locking-automatic-retry/
Implementing optimistic locking in Liferay 6 was possible, but annoying. I think this post describes how to do it (I tried it once):
https://zenidas.wordpress.com/recipes/optimistic-locking-in-liferay/
I have no idea whether that works in 5.2, but it should point you in the right direction.
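The "optimistic locking with automatic retry" idea from the first link can be sketched as: read the current state, compute the new value, and retry if a concurrent writer got there first. This is a self-contained illustration with invented names; a real implementation would re-read the row from the database on each attempt rather than use an in-memory reference:

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch of optimistic locking with automatic retry.
class OptimisticAccount {
    // An immutable (balance, version) snapshot, swapped atomically.
    private static final class Snapshot {
        final int balance;
        final long version;
        Snapshot(int balance, long version) {
            this.balance = balance;
            this.version = version;
        }
    }

    private final AtomicReference<Snapshot> state;

    OptimisticAccount(int openingBalance) {
        state = new AtomicReference<>(new Snapshot(openingBalance, 0));
    }

    // Apply a delta, retrying until the versioned swap succeeds.
    int applyDelta(int delta) {
        while (true) {
            Snapshot current = state.get();
            Snapshot next = new Snapshot(current.balance + delta, current.version + 1);
            if (state.compareAndSet(current, next)) {
                return next.balance;
            }
            // Another writer won the race; loop and retry with fresh state.
        }
    }

    int getBalance() {
        return state.get().balance;
    }
}
```

With the database-backed variant, the compare-and-set becomes the versioned UPDATE, and the retry loop re-reads the row whenever zero rows were updated.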