Deploying Liferay 7.3 CE in Kubernetes

This blog post is also available in Spanish.

Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management.
 

In this blog post, we are going to work with the Kubernetes API from the command line, using kubectl to manage objects: namespaces, services, ingresses, volumes, pods and deployments.
If you aren't familiar with k8s, you can read about the basic concepts on its official website and in this document about the cluster architecture of Google Kubernetes Engine (GKE), which explains what a cluster, a cluster master and a node are.

I'm going to describe how to deploy Liferay 7.3 CE on Kubernetes together with a stack of services: Nginx as ingress, MySQL 5.7 as database and Elasticsearch as search engine, as well as how to manage k8s from a GUI dashboard.

Requirements

This blog post was written using macOS Catalina 10.15.3; the installation required:

  1. Docker Desktop (v2.2.0.3, with Engine v19.03.5)

  2. Kubernetes (v1.15.5), enabled from the Docker installation. From the Docker graphical interface, the fastest way to get a basic Kubernetes configuration running is to enable the option shown below:

     

  3. We need to check from the Docker Dashboard that Docker and Kubernetes are up and running, and that enough CPU and memory resources are assigned to deploy the whole stack of services. Specifically, to write this blog post I needed to assign 12 CPUs and 16 GB of memory to Docker.

     

  4. Install kubectl

  5. Install minikube 

(Although installing Minikube is optional, it’s recommended because it speeds up the k8s configuration. If you're not going to use Minikube, you must set up the nginx-ingress-controller yourself.)

 

Let's start!

We're going to start by configuring Minikube with the resources we previously assigned to Docker: 12 CPUs and 16 GB of memory.
Open a terminal and execute:

minikube config set cpus 12

minikube config set memory 16384



Next, execute the delete command followed by start, and follow the instructions in the Minikube output:

minikube delete

minikube start

You can see the VM starting with the desired resources.

 

Once Minikube is started, let’s create the namespace where we are going to place all the cluster resources used throughout this blog post. By default, if no namespace is indicated, all resources are allocated to the “default” namespace.

Open a terminal and execute:


kubectl create namespace liferay-prod

Where liferay-prod is the namespace name that we want to create and use throughout the blog post.

Creating different namespaces inside a cluster lets us manage pod deployments in the desired namespace, avoiding collisions among deployments and letting them live inside a controlled environment.
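For reference, the same namespace can also be created declaratively. A minimal manifest, equivalent to the kubectl create command above, would be:

apiVersion: v1
kind: Namespace
metadata:
  name: liferay-prod

Saving it as, for example, namespace.yaml (a hypothetical file name) and running kubectl apply -f namespace.yaml has the same effect.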

Now we can start to deploy resources inside our namespace in the Kubernetes cluster. In order to deploy the Liferay 7.3 CE image in the fastest and easiest way, we first must deploy the dependencies it needs:

 

 

  1. Deploying MySQL: To do this, use this YAML manifest file. The file will first create the service (through which the database can be reached), then the persistent volume claim that the database needs to persist its data, and finally the deployment (the template that each pod follows to run).
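    The linked manifest is the one to use; purely as orientation, a minimal sketch of what such a file could contain is shown below. The resource names (database, database-data, label app=database) match the commands used later in this step, while the MySQL image tag, root password and schema name (lportal) are assumptions you should adapt:

apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  selector:
    app: database
  ports:
    - port: 3306
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
        - name: database
          image: mysql:5.7
          env:
            # Assumption: for anything beyond a local test, store these in a Secret
            - name: MYSQL_ROOT_PASSWORD
              value: "liferay"
            - name: MYSQL_DATABASE
              value: "lportal"
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: database-data
              mountPath: /var/lib/mysql
      volumes:
        - name: database-data
          persistentVolumeClaim:
            claimName: database-data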

    Save the manifest file database-deployment.yaml and execute in a terminal:

    kubectl apply -f database-deployment.yaml -n=liferay-prod

    This command will apply the whole manifest to the namespace liferay-prod.

    We can check that our resources were applied correctly by executing the following commands:
    kubectl get deployments database -n=liferay-prod


    kubectl get services database -n=liferay-prod

    kubectl get pvc database-data -n=liferay-prod

     

    To see detailed information, use the describe subcommand instead of get, followed by the resource type you need more information about:

    kubectl describe deployment database -n=liferay-prod

    After applying the manifest, we'll have our first pod running on Kubernetes, because applying a resource of kind Deployment starts, by default, the number of replicas indicated in the template. We can check the pod status with the following command, which filters the list of pods by the label app=database inside the namespace liferay-prod:

    kubectl get pods -l app=database -n=liferay-prod

    To list all the pods running in our namespace, use the following command:

    kubectl get pods -n=liferay-prod
     

     

  2. Deploying Elasticsearch: For this, use this YAML manifest. The file will first create the service (through which Elasticsearch can be reached), then the persistent volume claim that the indexer needs to persist its data, and finally the deployment (the template that each pod follows to run).

    I’m using the public Elasticsearch image from Liferay DXP Cloud, since it is already configured with the modules required to run with Liferay 7.3 CE. You can start from the default Elasticsearch image as well, pulling it from the public registry on Docker Hub, but that requires adding the needed plugins: analysis-icu, analysis-kuromoji, analysis-smartcn and analysis-stempel.
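    Again, the linked manifest is the one to apply; the following is only a minimal sketch under assumptions. The service name (search) and transport port 9300 match the Liferay configuration used later, but the image (here the stock 6.8 image, which still needs the plugins listed above added), the PVC name search-data and the cluster name LiferayElasticsearchCluster are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: search
spec:
  selector:
    app: search
  ports:
    - name: transport
      port: 9300
    - name: http
      port: 9200
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: search-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: search
spec:
  replicas: 1
  selector:
    matchLabels:
      app: search
  template:
    metadata:
      labels:
        app: search
    spec:
      containers:
        - name: search
          # Assumption: replace with an Elasticsearch 6.x image that already contains
          # analysis-icu, analysis-kuromoji, analysis-smartcn and analysis-stempel
          image: docker.elastic.co/elasticsearch/elasticsearch:6.8.6
          env:
            - name: cluster.name
              value: "LiferayElasticsearchCluster"
            - name: discovery.type
              value: "single-node"
            - name: ES_JAVA_OPTS
              value: "-Xms1g -Xmx1g"
          ports:
            - containerPort: 9300
            - containerPort: 9200
          volumeMounts:
            - name: search-data
              mountPath: /usr/share/elasticsearch/data
      volumes:
        - name: search-data
          persistentVolumeClaim:
            claimName: search-data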

    Save the manifest file search-deployment.yaml and execute in a terminal:

    kubectl apply -f search-deployment.yaml -n=liferay-prod

    This command will apply the whole manifest inside the namespace liferay-prod.


    As we did with MySQL, we can check that the service, deployment and volume were applied correctly with the get and describe commands.

     

  3. Deploying Liferay 7.3 CE: For this, use this YAML manifest. The file will first create the service (through which Liferay can be reached), then the persistent volume claim that Liferay requires (where it persists documents and media), another persistent volume that we are going to use to deploy some configuration, and finally the deployment (the template that each pod follows to run).
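    As before, the linked manifest is the one to apply; this is only a minimal sketch. The name liferay, the label app=liferay, the volume liferay-data and the /mnt/liferay mount match what is used in the rest of this step, while the image tag and the name of the second volume (liferay-config) are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: liferay
spec:
  selector:
    app: liferay
  ports:
    - port: 8080
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: liferay-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: liferay-config
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: liferay
spec:
  replicas: 1
  selector:
    matchLabels:
      app: liferay
  template:
    metadata:
      labels:
        app: liferay
    spec:
      containers:
        - name: liferay
          # Assumption: pick the Liferay 7.3 CE tag you want to run
          image: liferay/portal:7.3.0-ga1
          ports:
            - containerPort: 8080
          volumeMounts:
            # Documents and media
            - name: liferay-data
              mountPath: /opt/liferay/data
            # Configuration copied in step b) below, read via LIFERAY_MOUNT_DIR
            - name: liferay-config
              mountPath: /mnt/liferay
      volumes:
        - name: liferay-data
          persistentVolumeClaim:
            claimName: liferay-data
        - name: liferay-config
          persistentVolumeClaim:
            claimName: liferay-config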

    Save the manifest file liferay-deployment.yaml and execute in a terminal:

    kubectl apply -f liferay-deployment.yaml -n=liferay-prod

    This command will apply the whole manifest inside the namespace liferay-prod.
     


    As we did with MySQL, we can check that the service, deployment and volumes were applied correctly with the get and describe commands.



    After applying the manifest, we have one pod of our Liferay deployment running. As we're using the default public image of Liferay 7.3 CE, without any configuration, it will be running against the Hypersonic database and using the embedded search engine.

    Now we can configure Liferay to connect to our database and search services:

    a) Save the following files into a folder named "files" in the local environment, in order to copy them to the persistent volume mounted at /mnt/liferay:
    ~/files/portal-ext.properties: Sets up the connection to the database deployed previously and configures Liferay to work in a cluster. We'll use the multicast cluster configuration, but if you want to set up the cluster with unicast communication, you need to add a unicast configuration file (using DNS_PING or JDBC_PING) to liferay_home and modify the following properties: cluster.link.channel.properties.control and cluster.link.channel.properties.transport.0.
    ~/files/com.liferay.portal.search.elasticsearch6.configuration.ElasticsearchConfiguration.config:
    Here transportAddresses points to the name of the service previously defined for our search deployment and the port where Elasticsearch is listening, which we have left open in its manifest file.
    search.liferay-prod.svc.cluster.local can be replaced with just search, because both services (liferay and search) are in the same namespace.
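    Minimal sketches of both files follow; the JDBC URL, credentials and cluster name are assumptions and must match the values used in your database and search manifests.

    ~/files/portal-ext.properties (sketch):

# JDBC connection to the MySQL service deployed earlier
# (the hostname is the service name, fully qualified with the namespace)
jdbc.default.driverClassName=com.mysql.cj.jdbc.Driver
jdbc.default.url=jdbc:mysql://database.liferay-prod.svc.cluster.local:3306/lportal?characterEncoding=UTF-8&useUnicode=true&useSSL=false
jdbc.default.username=root
jdbc.default.password=liferay

# Enable ClusterLink so the Liferay nodes replicate caches (multicast by default)
cluster.link.enabled=true

    ~/files/com.liferay.portal.search.elasticsearch6.configuration.ElasticsearchConfiguration.config (sketch):

operationMode="REMOTE"
transportAddresses=["search.liferay-prod.svc.cluster.local:9300"]
clusterName="LiferayElasticsearchCluster"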

    b) We are going to take advantage of the already running pod to copy the configuration files into the persistent volume:

    kubectl get pods -l app=liferay -n=liferay-prod


    kubectl cp ./files liferay-6f9554fdf9-dglnd:/mnt/liferay -n=liferay-prod
    (this command will not return any output)
    You can check that the files are inside the directory by opening a shell in the pod with the following command:
    kubectl exec -it liferay-6f9554fdf9-dglnd /bin/bash

    By default, the Liferay Docker image looks in the files folder inside the path defined by the environment variable LIFERAY_MOUNT_DIR (with value /mnt/liferay) and copies its content to liferay_home. You can take a look at the Docker Hub documentation for more information about the environment variables of the image.

    Now we can delete the existing pod in order to start our first configured Liferay pod:

    kubectl delete pods liferay-6f9554fdf9-dglnd -n=liferay-prod

    This action will start a new pod, which will be ready once the old pod has been completely terminated.

    Get the most recently started pod:

    kubectl get pods -l app=liferay -n=liferay-prod



    Access the Liferay logs to check that the startup process is correct** and that it connects to the database and search engine dependencies:
    **(this first start could take several minutes, since the whole Liferay database schema and the required objects will be created)

    kubectl logs liferay-6f9554fdf9-nvmsj -n=liferay-prod --tail 1000


    At this moment, we'll have one node of Liferay 7.3 CE up and running!
    Now we can scale Liferay 7.3 CE. To do so, either modify the manifest file liferay-deployment.yaml, changing the replicas number (set to 1 by default), or execute the kubectl scale command on the deployment to be scaled:

    kubectl scale deployment.v1.apps/liferay -n=liferay-prod --replicas=2



    This will scale up one more pod of Liferay 7.3 CE.

    We can check that the pods communicate correctly (the health of the cluster) by accessing the logs of the oldest pod:

    kubectl logs liferay-6f9554fdf9-67bbt -n=liferay-prod --tail 1000



    We can scale down the same way we scaled up, either modifying the YAML manifest file and executing kubectl apply -f, or executing kubectl scale:

    kubectl scale deployment.v1.apps/liferay -n=liferay-prod --replicas=1


    As we can see in the image above, the pod liferay-6f9554fdf9-ltpbl is terminating. Now we can check whether the rest of the live pods have noticed the shutdown of this node by accessing the logs again:




  4. Deploying Nginx Ingress: Before deploying the Nginx ingress, we will enable the ingress addon in Minikube, so that it assigns us an IP within the node and we can resolve the DNS name against the IP of the ingress:

    minikube addons enable ingress

    Once enabled, we will use the following YAML manifest, which will create an Ingress resource that uses a cookie as a sticky session between the client and the Liferay pod, in order to maintain the session. In this way, we delegate the load balancing of the Liferay container cluster to k8s.

    As the host pointing to our Liferay cluster, we will use a domain that we will have to add to our hosts file.
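    The linked manifest is the one to apply; a minimal sketch of such an ingress, using the ingress-nginx cookie-affinity annotations (the resource name, cookie name and max-age are assumptions), could look like this:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: liferay-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Sticky sessions: the controller sets a cookie and keeps each client on the same pod
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "LIFERAYSTICKY"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
spec:
  rules:
    - host: liferay.kubernetes.com
      http:
        paths:
          - path: /
            backend:
              serviceName: liferay
              servicePort: 8080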

    Save the manifest in the nginx-ingress.yaml file and execute it in a terminal:

    kubectl apply -f nginx-ingress.yaml -n=liferay-prod

    This command will apply the entire manifest in the liferay-prod namespace:



    Now we will check the IP assigned to our ingress:

    kubectl get ingress -n=liferay-prod


    ** The IP allocation time can range from 1 to 2 minutes.

     

    We will need to modify the hosts file of our machine so that the domain points to the IP assigned to the ingress. To do this, execute in a terminal:
    sudo nano /etc/hosts

    Add the entry to the file:

    192.168.64.7 liferay.kubernetes.com



    Once the hosts file is saved, open your preferred browser and access http://liferay.kubernetes.com

 

 



Conclusion

We have managed to deploy Liferay, a database, an Nginx ingress and a search engine in k8s, configuring Liferay to be easily scalable in a matter of a couple of minutes, thanks to k8s as container orchestrator and a series of Docker images that speed up configuration.

We have started working with k8s commands through kubectl and have learned to write YAML manifests to generate deployments in k8s.

 



 

Using Minikube dashboard

In k8s we can use a dashboard to perform all these actions graphically. Since this how-to relies on Minikube for the configuration of our node, we will use the dashboard provided by Minikube. To do this, open a terminal and execute:

minikube dashboard


 

A window will open in our default browser.

Once inside, we can view our cluster, node and namespaces (and the resources within them) from the browser and perform whatever actions or modifications we want:




 

 

Comments

Hi Marcial, excellent contribution,

I have a problem at the point where I have to copy the content of the /files folder into the pod; no matter what I do, it tells me I don't have permission to create folders, and that has me stuck.

Would you know how to solve it?

Thank you very much, I look forward to your reply.

Thanks Marcial, very interesting. However, it seems that only Liferay is scaled to 2 replicas, and the 2 instances of LR will be pointing to the same DB. How can we also scale the number of DB replicas? Thanks.

When I copy the local filesystem files (portal-ext.properties) to the running container's /mnt/liferay, I get the error below: tar: can't open 'portal-ext.properties': Permission denied, command terminated with exit code 1.

Hello,

I have a requirement to deploy Liferay on K8s, and I was wondering about the files that will not be saved in the database but rather in the data folder: how do we manage that so that two replicas are both identical?