Liferay Portal/DXP in OpenShift
It is increasingly common to deploy applications and microservices on container-based infrastructure such as Kubernetes or OpenShift. Liferay Portal / DXP is no exception in this regard: because the platform is completely agnostic of the underlying infrastructure, it can be deployed on this type of infrastructure as well.
Liferay offers DXP Cloud, a tailored, multi-cloud PaaS solution for Liferay DXP / Commerce, with which customers can focus all their effort, from the very beginning, on what really matters to them: their own business, without having to worry about the infrastructure that keeps it stable or about its maintenance.
In most cases DXP Cloud is the ideal solution, yet in a few situations you may wonder whether a custom infrastructure directly on Kubernetes or OpenShift might be a better fit for your specific requirements.
The main objective of this blog is to share a Helm chart for deploying Liferay Portal 7.4 on OpenShift, more specifically on CodeReady Containers.
The prerequisites to carry out an OpenShift deployment on our workstation are the following:
Install Docker on your workstation.
Install CodeReady Containers (OpenShift) and assign it sufficient resources in terms of CPU, memory, and disk. In the following example we will deploy MySQL and Liferay Portal; however, if you also want to deploy Elasticsearch or a web server in front of Liferay Portal, you will need to allocate more resources. NOTE: this Helm chart has been developed with the following versions of CodeReady Containers OpenShift:
Client Version: 4.4.3
Server Version: 4.9.5
Kubernetes Version: v1.22.0-rc.0+a44d0f0
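As an illustration, resources can be assigned to CodeReady Containers with the crc CLI before starting the cluster (the values below are examples, not a recommendation; adjust them to your machine):

```shell
# Illustrative resource settings for the CRC virtual machine
# (example values; increase them if you also deploy Elasticsearch or a web server)
crc config set cpus 6
crc config set memory 16384   # in MiB
crc config set disk-size 60   # in GiB
crc start                     # start (or restart) the cluster so the limits take effect
```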
Build the workspace:
./gradlew clean deploy
Generate the Docker folder and the Dockerfile inside build/docker of the workspace:
./gradlew createDockerFile
Opening the generated Dockerfile, we can see that the lines we added to the Dockerfile.ext file in our workspace have been appended to it.
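As a hypothetical example, a Dockerfile.ext in the workspace root could look like the following; its instructions are appended verbatim to the generated Dockerfile (the package and variable below are illustrative assumptions, and a Debian/Ubuntu-based base image is assumed):

```dockerfile
# Dockerfile.ext (illustrative example placed in the workspace root)
# These lines are appended to the Dockerfile generated by createDockerFile.
ENV TZ=Europe/Madrid
USER root
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
USER liferay
```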
Build the Docker image and tag it as version 1.0.0:
docker build ${LiferayWorkspace}/build/docker -t default-route-openshift-image-registry.apps-crc.testing/liferay/liferay-portal-7.4-ga4:1.0.0
Log in to OpenShift as developer to create the project:
oc login -u developer -p pwd https://api.crc.testing:6443
Create the "liferay" project:
oc new-project liferay
Log in to OpenShift with oc as kubeadmin:
oc login -u kubeadmin -p xxx https://api.crc.testing:6443
If you don't currently have a database, you can create a MySQL instance within the same project using this YAML manifest:
oc apply -f ./database.yaml
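For reference, a minimal database.yaml could be sketched as follows (the image tag, credentials, and database name are illustrative assumptions; the actual manifest may differ, and credentials should come from a Secret in real deployments):

```yaml
# database.yaml (illustrative sketch; do not use plain-text credentials in production)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeit        # assumption: replace with a Secret reference
            - name: MYSQL_DATABASE
              value: lportal
          ports:
            - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
      targetPort: 3306
```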
Log in to Docker to access the image registry, then push the image:
docker login -u `oc whoami` -p `oc whoami --show-token` default-route-openshift-image-registry.apps-crc.testing
docker push default-route-openshift-image-registry.apps-crc.testing/liferay/liferay-portal-7.4-ga4:1.0.0
Install the chart with Helm using the default values:
helm install liferay-chart . -f values.yaml
Accessing the OpenShift Web Console, we can see that our chart has been applied, displaying the resources within the "liferay" namespace:
As we have seen, with Helm we can manage and install the objects needed within our OpenShift project to deploy Liferay Portal / DXP. The resources we are managing in the chart are the following:
Service: to expose the Liferay Portal / DXP pods as a service.
Ingress: to route and balance requests to the Liferay Portal / DXP service. A round-robin balancing algorithm and session affinity to the Liferay service pods are configured by means of a cookie named "lfrstickysessioncookie". This guarantees that each client keeps reaching the pod where its session began, avoiding session loss for end users in a clustered environment.
Persistent Volume Claim: to have the necessary storage for Liferay Portal / DXP document management.
Service Monitor: to allow exposure of metrics through a port and endpoint within the Liferay service. To do this, you will need to expose metrics to the configured endpoint. If you want to implement it, follow the steps of this blog.
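The sticky-session behaviour described for the Ingress above can be sketched with annotations such as the following (shown here for the NGINX ingress controller as an assumption; the annotations used by the actual chart may differ):

```yaml
# Illustrative Ingress sketch with cookie-based session affinity
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: liferay
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "lfrstickysessioncookie"
    nginx.ingress.kubernetes.io/load-balance: "round_robin"
spec:
  rules:
    - host: dev.liferay.openshift.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: liferay   # assumption: the Service name defined by the chart
                port:
                  number: 8080
```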
Here are the values files for each environment where we can deploy:
Local or default/common values: values.yaml
Dev: values-dev.yaml
Uat: values-uat.yaml
Prod: values-prod.yaml
Therefore, if we want to deploy to the DEV environment with Helm, we use values.yaml together with the values-dev.yaml file to override the default configuration with the dev environment values:
helm install liferay-chart . -f values.yaml --values env/values-dev.yaml
Now, if we modify our hosts file to point 127.0.0.1 to dev.liferay.openshift.com, we can access our Liferay Portal and see it in action, also visible in the OpenShift Web Console.
In the dev environment values, autoscaling is enabled with a minimum of 1 replica and a maximum of 2. We also have the OSGi configuration needed to connect to a remote Elasticsearch, provided through a configMap (we would additionally need to deploy Elasticsearch in OpenShift, and for this we can also use the Elasticsearch chart).
If we apply the UAT or PROD values, we can see that the resources for the Liferay Portal / DXP containers and the number of replicas are greater:
helm install liferay-chart . -f values.yaml --values env/values-prod.yaml
Since we inject portal-ext.properties at deployment time through the configMap file, and it is the same for all environments, a portal-ext.properties included from the workspace will not apply. Instead, we can override individual portal-ext.properties values per environment using environment variables within each environment's values file:
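For illustration, a dev overlay could enable the autoscaling described above and override portal properties through environment variables. The key names below are assumptions about this chart's values schema; the variable naming, however, follows Liferay's documented convention of mapping LIFERAY_* variables to portal properties, with PERIOD encoding the '.' character:

```yaml
# env/values-dev.yaml (illustrative sketch; key names are assumptions)
autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 2

# Environment variables override portal-ext.properties per environment.
env:
  - name: LIFERAY_SETUP_PERIOD_WIZARD_PERIOD_ENABLED   # setup.wizard.enabled
    value: "false"
  - name: LIFERAY_WEB_PERIOD_SERVER_PERIOD_PROTOCOL    # web.server.protocol
    value: "http"
```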
With Helm we can have a set of objects, called a chart, that defines the resources needed to build an infrastructure, in this case for Liferay Portal / DXP on Kubernetes / OpenShift. It allows us to create, version, and maintain releases, with everything needed for their deployment in the cluster. Keep in mind that Helm is not the only utility; there are others, such as Terraform, Ansible, Spinnaker, etc., for deploying resources in OpenShift. In a project based on container infrastructure, we will also need tools that provide a degree of automation for test execution, artifact building (JARs, Docker images, the charts themselves, etc.), publication of built artifact releases, quality-gate evaluation, deployments, and so on.