Liferay Portal/DXP in Openshift



It is increasingly common to deploy applications or microservices on container-based infrastructure such as Kubernetes or Openshift. Liferay Portal / DXP is no exception in this regard: since the platform is completely agnostic of the underlying infrastructure, it can be deployed on this kind of infrastructure as well.

Liferay offers DXPCloud, a multi-cloud PaaS solution tailored for Liferay DXP / Commerce, with which customers can focus all their effort, from the very beginning, on what really matters to them: their own business, without having to worry about the infrastructure that keeps it stable or about its maintenance.

In most cases DXPCloud is the ideal solution, yet in some situations you may wonder whether a custom infrastructure built directly on Kubernetes or Openshift might be a better fit for your specific requirements.

The main objective of this blog is to share a HELM Chart for deploying Liferay Portal 7.4 on Openshift, more specifically on CodeReady Containers.

The prerequisites to carry out an Openshift deployment on our workstations are the following:

  1. Install Docker on your workstation.

  2. Install CodeReady Containers Openshift and assign it sufficient resources (CPU, memory and disk). In the following example we will deploy MySQL and Liferay Portal; however, if you also want to deploy Elasticsearch or a web server in front of Liferay Portal, you will need to allocate more resources.
    NOTE: This Helm Chart has been developed using the following versions in CodeReady Containers Openshift:
    Client Version: 4.4.3
    Server Version: 4.9.5
    Kubernetes Version: v1.22.0-rc.0+a44d0f0

 

Let’s go

  1. The focus will be on using a HELM Chart to deploy Liferay Portal, so we assume that a database is already deployed in our Openshift so that we can use a clustered Liferay Portal configuration.
    We can use the following yaml manifest to deploy in our Openshift a MySQL 5.7 database with the basic configuration for Liferay Portal. If needed, we can use another HELM Chart to provision Postgres or MySQL (adding the necessary configuration to the values of Liferay Portal and of the database).
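    A minimal sketch of such a manifest, assuming example credentials and database name (adapt them to your environment; on Openshift the stock mysql image may also need the anyuid SCC or an Openshift-friendly MySQL image):

    # database.yaml - hypothetical minimal MySQL 5.7 for Liferay (not production-ready)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mysql
      labels:
        app: mysql
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: mysql
      template:
        metadata:
          labels:
            app: mysql
        spec:
          containers:
            - name: mysql
              image: mysql:5.7
              # Character set settings commonly used for Liferay Portal
              args: ["--character-set-server=utf8", "--collation-server=utf8_general_ci"]
              env:
                - name: MYSQL_DATABASE
                  value: lportal
                - name: MYSQL_USER
                  value: liferay
                - name: MYSQL_PASSWORD
                  value: liferay
                - name: MYSQL_ROOT_PASSWORD
                  value: liferay
              ports:
                - containerPort: 3306
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
    spec:
      selector:
        app: mysql
      ports:
        - port: 3306
          targetPort: 3306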
  2. We will use the following Liferay Gradle Workspace, with which we inject per-environment configuration at Docker image build time, as well as a portlet module as a test.


    The configuration of the environments is the following:

    OSGi: configuration to allow redirects only to known domains. Note that this configuration is new as of 7.4; in previous versions it was done through portal-ext.properties.

    JGroups: JGroups configuration using DNS Ping and the DNS SRV records created in Openshift for node discovery.

    Dockerfile.ext: in it we include instructions that grant users of the root group access to the /mnt/liferay and /opt/liferay directories in the Docker image, in order to avoid permission errors at startup (see support for arbitrary uids in Openshift).
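    A sketch of what those instructions can look like, following the usual pattern for Openshift's arbitrary user ids (the liferay user and the directory list are assumptions based on the official Liferay image; adjust to your image):

    # Dockerfile.ext - instructions included in the generated Dockerfile by the Workspace
    # Give the root group (gid 0) the same permissions as the owner so that the
    # arbitrary, non-root uid assigned by Openshift can write to these directories.
    USER root
    RUN chgrp -R 0 /mnt/liferay /opt/liferay && \
        chmod -R g=u /mnt/liferay /opt/liferay
    USER liferay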



    In the workspace, execute:
     
  3.  ./gradlew clean deploy
    To build the workspace

  4.  ./gradlew createDockerFile
    To generate the Docker folder and the Dockerfile inside the Workspace's build/docker directory. Opening the generated Dockerfile, we can see that the lines we added in our Workspace's Dockerfile.ext file have been appended to it:

     

  5. Build the Docker image and tag version 1.0.0:
    docker build ${LiferayWorkspace}/build/docker -t default-route-openshift-image-registry.apps-crc.testing/liferay/liferay-portal-7.4-ga4:1.0.0

  6. Log in to Openshift as developer to create the project:
    oc login -u developer -p pwd https://api.crc.testing:6443

  7. Create the project "liferay":
    oc new-project liferay

  8. Log in to Openshift with oc as kubeadmin:
    oc login -u kubeadmin -p xxx https://api.crc.testing:6443

  9. If you currently don't have a database, you can create a MySQL database using the yaml manifest above, within the same project:
    oc apply -f ./database.yaml

  10. Log in to Docker to access the image registry:
    docker login -u `oc whoami` -p `oc whoami --show-token` default-route-openshift-image-registry.apps-crc.testing

  11. Push the Docker image created in step 5:
    docker push default-route-openshift-image-registry.apps-crc.testing/liferay/liferay-portal-7.4-ga4:1.0.0


    In this last step we obtain the digest of the image in the registry. We can copy it to reference it in our Chart values, or retrieve it from Openshift by consulting the ImageStream object created for the Liferay Portal image.
     
  12. Modify the values.yaml file of the Chart to include the digest of the image pushed to our Openshift registry:
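    A hypothetical excerpt of those values (the key names depend on how the Chart templates reference the image; the digest is a placeholder to be replaced by the one returned by docker push):

    image:
      # Internal service address of the Openshift image registry
      repository: image-registry.openshift-image-registry.svc:5000/liferay/liferay-portal-7.4-ga4
      # Pin the image by digest instead of tag (placeholder value)
      digest: sha256:0000000000000000000000000000000000000000000000000000000000000000
      pullPolicy: IfNotPresent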

     
  13. Once logged into Openshift, use HELM to apply the Chart:
    helm install liferay-chart . -f values.yaml

     
  14. Accessing the Openshift Web Console, we can see how our Chart has been applied, displaying the resources within the “liferay” namespace:



     

Analyzing the Chart that we just installed

As we have seen, with HELM we can manage and install the necessary objects within our project in Openshift to deploy Liferay Portal / DXP. The resources that we are managing in the Chart are the following:

  1. Deployment: To create the Liferay Portal / DXP pods and required configuration in Kubernetes / Openshift
  2. ConfigMap: to create the configMap that injects the portal-ext.properties file and the Elasticsearch configuration at deployment time, which allows them to be hot-swapped in the environment without having to generate a new docker image or redeploy a release. The properties of the portal-ext.properties file can also be introduced as environment variables in the Deployment, overriding those of the file.
    It might be necessary to extend the Portal configuration added as configMaps, depending on the needs of the project.
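    A sketch of such a configMap, assuming the Chart mounts it over the Liferay configuration directory (the configMap name and the JDBC values are examples; in the Chart they come from the HELM values):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: liferay-portal-ext
    data:
      portal-ext.properties: |
        jdbc.default.driverClassName=com.mysql.cj.jdbc.Driver
        jdbc.default.url=jdbc:mysql://mysql:3306/lportal?useUnicode=true&characterEncoding=UTF-8
        jdbc.default.username=liferay
        jdbc.default.password=liferay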
    The rest of the configuration is added at build time, such as the JGroups cluster configuration or the OSGi configuration, which is found in the Liferay Gradle Workspace within the common folder:



    Notice that at build time you can indicate the environment configuration to be built with the following parameter:
    ./gradlew createDockerFile -Pliferay.workspace.environment=dev

    This parameter will cause the docker image to be built with the dev value in the LIFERAY_WORKSPACE_ENVIRONMENT environment variable, so that at boot time Liferay Portal / DXP starts with the configuration needed for that environment:



    However, the Deployment that we are using in the Chart overwrites this environment variable with the value supplied through the HELM values, so we can build the image with one value, but at the end of the deployment process HELM is the one that determines the values and the final configuration to use:
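    As an illustration, in the Deployment template the variable could be fed from the values like this (the environment key name is an assumption about the Chart's values structure):

    # templates/deployment.yaml (excerpt): overrides the value baked into the image
    env:
      - name: LIFERAY_WORKSPACE_ENVIRONMENT
        value: {{ .Values.environment | quote }}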



     
  3. HorizontalPodAutoscaler: to enable horizontal autoscaling of the Liferay Portal / DXP pods based on the CPU and memory consumption of the containers running Liferay Portal / DXP. If you want to configure horizontal scaling on a custom metric, see this other blog.
  4. Service: to expose the Liferay Portal / DXP pods as a service.

  5. Ingress: to route and balance requests to the Liferay Portal / DXP service. Round-robin balancing and affinity to the Liferay service pods are configured by means of a cookie called "lfrstickysessioncookie". This guarantees that each client stays on the pod where its session began, avoiding session-loss problems for end users in a clustered environment (see the sketch after this list).

  6. Persistent Volume Claim: to have the necessary storage for Liferay Portal / DXP document management.

  7. Service Monitor: to expose metrics through a port and endpoint within the Liferay service. For this you will need to expose metrics on the configured endpoint; if you want to implement it, follow the steps of this blog.
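
As referenced in the Ingress item above, here is a sketch of the session-affinity setup, assuming an NGINX ingress controller (annotation names vary by controller; on Openshift's default router the equivalent settings live on the Route object). The service name and port are assumptions:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: liferay
  annotations:
    nginx.ingress.kubernetes.io/load-balance: "round_robin"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "lfrstickysessioncookie"
spec:
  rules:
    - host: dev.liferay.openshift.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: liferay
                port:
                  number: 8080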
     

Here are the values files for each environment where we can deploy:

  1. Local or default/common values: values.yaml

  2. Dev: values-dev.yaml

  3. Uat: values-uat.yaml

  4. Prod: values-prod.yaml

Therefore, if we want to deploy to the DEV environment with HELM, we would use values.yaml together with the values-dev.yaml file to overwrite the default configuration with that of the dev environment:

helm install liferay-chart . -f values.yaml --values env/values-dev.yaml
 




Now, if we modify our hosts file so that dev.liferay.openshift.com points to our localhost 127.0.0.1, we can access our Liferay Portal and see it in action.
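
For example, by adding this line to the hosts file (/etc/hosts on Linux/macOS, C:\Windows\System32\drivers\etc\hosts on Windows):

127.0.0.1   dev.liferay.openshift.com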



In Openshift Web Console:



In the values of the dev environment, autoscaling is activated with a minimum of 1 replica and a maximum of 2:
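
A hypothetical excerpt from values-dev.yaml (the key names depend on the Chart's values structure):

autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 2
  targetCPUUtilizationPercentage: 80
  targetMemoryUtilizationPercentage: 80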



We will also have the OSGi configuration necessary to connect to a remote Elasticsearch, provided through a configMap (we would additionally need to deploy Elasticsearch in Openshift; for this, we can also use the Elasticsearch Chart).
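
A sketch of that configMap entry, assuming an Elasticsearch service named elasticsearch listening on port 9200 (the configuration PID and property names should be verified against your Liferay 7.4 version):

data:
  # OSGi configuration file for the remote Elasticsearch connection (assumed PID)
  com.liferay.portal.search.elasticsearch7.configuration.ElasticsearchConfiguration.config: |
    networkHostAddresses=["http://elasticsearch:9200"]
    productionModeEnabled=B"true"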





If we apply the values of UAT or PROD, we can check that the resources for the Liferay Portal / DXP containers and the number of replicas are greater:



helm install liferay-chart . -f values.yaml --values env/values-prod.yaml



Since we are injecting the portal-ext.properties at deployment time through the configMap, and it is the same for all environments, a portal-ext.properties included from the Workspace will not apply.

We can override properties of the portal-ext.properties per environment using the environment variables defined in each environment's values file.
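
For example, a hypothetical excerpt from an environment values file, relying on the Liferay Docker image convention of mapping LIFERAY_* environment variables onto portal properties (dots become _PERIOD_); the env key name is an assumption about the Chart:

env:
  - name: LIFERAY_JDBC_PERIOD_DEFAULT_PERIOD_URL
    value: "jdbc:mysql://mysql:3306/lportal?useUnicode=true&characterEncoding=UTF-8"
  - name: LIFERAY_JDBC_PERIOD_DEFAULT_PERIOD_PASSWORD
    value: "liferay"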


 

Conclusion

With HELM we can have a set of objects, called a Chart, that defines the resources needed to build an infrastructure, in this case for Liferay Portal / DXP in Kubernetes / Openshift. It allows us to create, version, and maintain releases with everything necessary for their deployment in the cluster. Keep in mind that HELM is not the only utility; there are others, such as Terraform, Ansible or Spinnaker, for deploying resources in Openshift. In a project based on a container infrastructure, we will also need tools that provide a certain degree of automation for running tests, building artifacts (jars, Docker images, the Charts themselves, etc.), publishing releases of the built artifacts, evaluating quality gates, deployments, and so on.