
Cluster CCR
This article provides the necessary steps to configure Cross-Cluster Replication (CCR) for Elasticsearch with Liferay on a local machine. By following this guide, you’ll simulate the behavior of CCR with Liferay for testing purposes.
CCR is only available in Liferay DXP, and a Liferay Enterprise Search (LES) license is necessary to enable this feature.
In a typical Liferay DXP/search engine setup, a single Liferay DXP cluster communicates with a single Elasticsearch cluster, handling all read and write requests through a unified connection to the search engine. However, to address data locality and disaster recovery needs, Elasticsearch released the Cross-Cluster Replication (CCR) feature, which is compatible with Liferay DXP for Elasticsearch versions 7 and above.
Steps to Test CCR on Your Local Machine
In this demo, we keep the setup deliberately simple to make CCR behavior easy to follow.
Step 1: Clone the CCR Repository
Clone the following repository to your local machine:
git clone https://github.com/dmcisneros/elasticsearch-ccr.git
After cloning, execute the following script:
./start.sh
- This will simulate two separate Elasticsearch clusters located in different regions:
- CPD1 (leader cluster): accessible at http://localhost:39201
- CPD2 (follower cluster): accessible at http://localhost:39202
- Additionally, a Liferay instance (version 2024.q3.7) using PostgreSQL as its database will be started.
- The script also adds a developer license to Elasticsearch.
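Before continuing, you can sanity-check that both clusters are up. These calls use the ports exposed by start.sh:
curl -X GET "http://localhost:39201/_cluster/health?pretty" # leader
curl -X GET "http://localhost:39202/_cluster/health?pretty" # follower
Both should report a green or yellow status before you move on.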
Step 2: Configure Liferay for CCR
Once the Elasticsearch clusters and Liferay are running, configure Liferay to enable CCR. We will perform these steps through the user interface, though you could also apply the same settings via .config files (a sketch follows at the end of this step).
- Start Liferay
- Open http://localhost:8080 and log in with the credentials test/test.
- Configure Elasticsearch Connections
- Navigate to: Control Panel -> Configuration -> System Settings -> Search -> Elasticsearch Connections
- Add connections for both Elasticsearch nodes:
- CPD1-es01: http://CPD1-es01:9200
- CPD2-es01: http://CPD2-es01:9200
- Enable Production Mode for Elasticsearch and link to the leader ES cluster node
- Go to Control Panel -> Configuration -> System Settings -> Search -> Elasticsearch7
- Select Operation Mode as Remote
- Enable Production Mode
- Set the Remote Connection ID to CPD1-es01.
- Initialize Search Indexes: Navigate to Control Panel -> Search -> Index Actions and execute Reindex All Search Indexes. This action will create all necessary indexes on CPD1-es01 (the leader node).
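As noted above, everything in this step can also be provisioned through OSGi .config files placed in Liferay's osgi/configs folder instead of the UI. A minimal sketch, assuming the factory and property names used by the Elasticsearch 7 connector (verify them against your DXP version before relying on this):
File osgi/configs/com.liferay.portal.search.elasticsearch7.configuration.ElasticsearchConnectionConfiguration-CPD1.config:
connectionId="CPD1-es01"
networkHostAddresses=["http://CPD1-es01:9200"]
File osgi/configs/com.liferay.portal.search.elasticsearch7.configuration.ElasticsearchConfiguration.config:
operationMode="REMOTE"
productionModeEnabled=B"true"
remoteClusterConnectionId="CPD1-es01"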
Checkpoint
At this point, Liferay is configured to read from and write to CPD1-es01.
The follower cluster is empty; no indexes have been created on it yet.
Step 3: Configure Cross-Cluster Replication (CCR)
The following steps set up CCR so that Liferay writes to CPD1-es01 (leader) and reads from CPD2-es01 (follower), ensuring that all indexes created in the leader are automatically replicated to the follower.
To view the current indexes in each cluster, use the following commands:
curl -X GET "http://localhost:39201/_cat/indices" # leader
curl -X GET "http://localhost:39202/_cat/indices" # follower
- Go to Cross-Cluster Replication Settings
- Navigate to Control Panel -> Configuration -> System Settings -> Search -> Cross-Cluster Replication
- Set CCR Options
- Enable Read from Local Cluster
- Enable Automatic Replication
- Add Local Cluster Configurations: localhost:8080,CPD2-es01 (check this article if your indexes are not read from the follower). The following Groovy script, run from the Script console (Control Panel -> Server Administration -> Script), prints the node name and port to enter in this field:
def localClusterNode = com.liferay.portal.kernel.cluster.ClusterExecutorUtil.getLocalClusterNode();
out.println(localClusterNode.getPortalInetAddress().getHostName() + ":" + localClusterNode.getPortalPort());
- Set the Remote Cluster Alias to CPD1-es01
- Set Remote Cluster Seed Node Transport to: CPD1-es01:9300
With this configuration, every index written to the leader is replicated to the follower. Once all documents are replicated, Liferay will read data from the follower cluster.
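For reference, what Liferay automates here corresponds roughly to two Elasticsearch API calls on the follower cluster: registering the leader as a remote cluster and creating a follower index. A sketch using one example Liferay index name (Liferay's Automatic Replication performs the equivalent work for every index, so you should not need to run this yourself):
curl -X PUT "http://localhost:39202/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster": {
      "remote": {
        "CPD1-es01": { "seeds": ["CPD1-es01:9300"] }
      }
    }
  }
}'
curl -X PUT "http://localhost:39202/liferay-14350031001261/_ccr/follow" -H 'Content-Type: application/json' -d'
{
  "remote_cluster": "CPD1-es01",
  "leader_index": "liferay-14350031001261"
}'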
Step 4: Testing Disaster Recovery with CCR
In a disaster scenario where the leader cluster (CPD1) is down, you can redirect Liferay to continue operating by reading and writing to the follower (CPD2). This setup ensures continuity without losing the index layer.
To switch to the follower:
- Update the Remote Connection ID in System Settings -> Search -> Elasticsearch7 to point to CPD2-es01.
- With this change, Liferay will use CPD2-es01 as both the reader and writer node, allowing continued operations despite the leader node being offline.
The follower's indexes will be closed; check their status and open all Liferay indexes:
- To check:
curl -X GET "http://localhost:39202/_cat/indices"
- To open:
curl -X POST "http://localhost:39202/liferay-14350031001261/_open"
(one request per index; see the loop sketch below)
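To open every Liferay index in one go, you can loop over the follower's index list (a convenience sketch; adjust the pattern if your index names differ):
for index in $(curl -s "http://localhost:39202/_cat/indices/liferay-*?h=index"); do
  curl -X POST "http://localhost:39202/${index}/_open"
done
If the outage is long-lived and you want the follower indexes to become regular writable indexes instead of reopened followers, Elasticsearch's documented promotion sequence is pause_follow, close, unfollow, open, again one index at a time:
curl -X POST "http://localhost:39202/liferay-14350031001261/_ccr/pause_follow"
curl -X POST "http://localhost:39202/liferay-14350031001261/_close"
curl -X POST "http://localhost:39202/liferay-14350031001261/_ccr/unfollow"
curl -X POST "http://localhost:39202/liferay-14350031001261/_open"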
Step 5: Re-Synchronizing the Leader Node After Recovery
Once the leader node (CPD1-es01) is back online, you may need to update the CCR settings to allow CPD1-es01 to catch up with any changes made on CPD2-es01 while it was offline. Repeat the CCR configuration (Step 3) but swap roles, setting CPD1-es01 as the follower to realign the indexes.
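At the Elasticsearch level, this role swap amounts to registering CPD2-es01 as a remote cluster on the recovered node and following the indexes in the opposite direction. A sketch for one index (note that a follower index cannot be created over an existing index, so the stale copy on CPD1-es01 must be removed first; back it up if needed. In Liferay, repeating Step 3 with the aliases swapped achieves the same result):
curl -X PUT "http://localhost:39201/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster": {
      "remote": {
        "CPD2-es01": { "seeds": ["CPD2-es01:9300"] }
      }
    }
  }
}'
curl -X DELETE "http://localhost:39201/liferay-14350031001261"
curl -X PUT "http://localhost:39201/liferay-14350031001261/_ccr/follow" -H 'Content-Type: application/json' -d'
{
  "remote_cluster": "CPD2-es01",
  "leader_index": "liferay-14350031001261"
}'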
Final Summary
In this guide, we walked through the configuration of Cross-Cluster Replication (CCR) for Elasticsearch with Liferay, setting up a local testing environment to simulate CCR behavior. We covered the essential steps to:
- Clone and set up a Docker-based environment that simulates two Elasticsearch clusters (leader and follower) and a Liferay instance.
- Configure Elasticsearch connections in Liferay and initialize indexes on the leader cluster.
- Enable and configure CCR settings in Liferay to replicate data from the leader to the follower, allowing Liferay to read from the follower if needed.
- Test disaster recovery by switching to the follower in case of a leader failure, ensuring system continuity without data loss.
- Re-sync the leader node with updated indexes from the follower after recovery.
These steps illustrate how CCR can enhance data redundancy and provide resilience in case of cluster issues, allowing Liferay to continue functioning smoothly even during unexpected disruptions.
Additional Resources
For further reading on CCR and troubleshooting, refer to these resources:
- Liferay CCR Documentation
- Configuring CCR in a Remote Leader Data Center
- Elasticsearch Rally
- Troubleshooting CCR Issues