Building Liferay 7.3 CE Docker images

To run Liferay with Docker in the fastest and most agile way, it is useful to know how to configure Liferay's images and how to create our own custom images.

In this blog post, we explain how Liferay's Docker image is structured so that we know how to configure it before running Liferay. We’ll also see two ways to build our own custom Liferay images:

  • From the public Liferay Portal 7.3 CE image on Docker Hub, adding your custom developments and configuration from your Liferay Workspace (useful for development or testing environments).

  • From a tailored local bundle (useful to build fully adapted images for production or UAT environments, for the fastest possible deployment).

Requirements

This post was written on macOS Catalina 10.15.3, using:

  1. Docker Desktop (v2.2.0.3, with Engine v19.03.5)

  2. Liferay Developer Studio with Liferay Workspace

Liferay Developer Studio could be replaced with another IDE if preferred.

 

Understanding how the Liferay Docker image is structured

The Liferay Docker image includes some scripts that are executed before Liferay starts in order to configure our installation.

These scripts are in the /usr/local/bin folder, and the main script is named liferay_entrypoint.sh:

Inside this script we can see how it orchestrates calls to the other scripts to make the image configurable:
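
If we want to take a quick look at these scripts without starting Liferay, we can override the image entrypoint. This is a minimal sketch, assuming the liferay/portal:7.3.1-ga2 image used later in this post:

docker run --rm --entrypoint ls liferay/portal:7.3.1-ga2 /usr/local/bin
docker run --rm --entrypoint cat liferay/portal:7.3.1-ga2 /usr/local/bin/liferay_entrypoint.sh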

As we can see, there are specific configuration points: before configuration, during configuration, before running Liferay, and after shutdown.

Inside the configure_liferay.sh script, we can find the logic that copies the configuration persisted in the mounted volume (see the mount example after the list):

  1. files folder: files located here will be copied to liferay_home.

  2. scripts folder: scripts located here will be executed.

  3. deploy folder: files located here will be moved to liferay_home/deploy.

  4. patching folder: the patching-tool will be executed to apply the patches placed here (only available on Liferay DXP).
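
As an example, the sketch below mounts a local folder containing these subfolders into the container. It assumes the default mount point /mnt/liferay documented on Docker Hub; adjust the path if your image version uses a different one:

mkdir -p mount/files mount/scripts mount/deploy
# files/ is copied to liferay_home, scripts/ are executed, deploy/ is moved to liferay_home/deploy
docker run -it -p 8080:8080 -v "$(pwd)/mount:/mnt/liferay" liferay/portal:7.3.1-ga2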

In addition to the configuration files, we can configure Liferay using environment variables that are already defined in the image. These environment variables are listed in the image documentation on Docker Hub. Take into account that we can override the default JVM options using the LIFERAY_JVM_OPTS environment variable, and that each property defined in portal-ext.properties can also be overridden with its corresponding variable (check the portal.properties file to locate each one). Keep in mind that if a property is defined both in the portal-ext.properties file and as an environment variable, the environment variable prevails.

In the picture below, you can see a k8s deployment applying some environment variables:
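
If you are not on k8s, a hedged docker run equivalent looks like the sketch below; in a k8s deployment the same variables simply go under the env section of the Liferay container. The property-based variable name shown is illustrative and should be double-checked against the Env hints in portal.properties:

docker run -it -p 8080:8080 \
  -e LIFERAY_JVM_OPTS="-Xms2g -Xmx2g" \
  -e LIFERAY_SCHEMA_PERIOD_MODULE_PERIOD_BUILD_PERIOD_AUTO_PERIOD_UPGRADE=true \
  liferay/portal:7.3.1-ga2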

 



Building a Liferay Docker image from our Liferay Workspace

In this part, we’ll see how to add our custom developments and configuration to a public Liferay image from Docker Hub, using our Liferay Workspace.

Let’s start by adding the following property to the gradle.properties file located in our Liferay Workspace. By default, this property points to the Docker image matching the version our Workspace was configured for; we can change it to the image we want to pull:

liferay.workspace.docker.image.liferay = liferay/portal:7.3.1-ga2

Images are available on Docker Hub for both Liferay DXP and Liferay Portal.

Open a terminal and, inside the workspace root folder, execute:

./gradlew createDockerfile

This action will create, inside the ${liferay_workspace}/build folder, a folder named docker that contains the configs, deploy, scripts and files folders, plus a Dockerfile referencing the image we want to use and the folders that will be added to the corresponding directories of the image:
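
We can quickly inspect what was generated from the workspace root:

ls build/docker
cat build/docker/Dockerfile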

Now we can go to Liferay Developer Studio and add, to our Workspace project, the configuration and developments that we want to push into the image. For this blog post, we created an MVCPortlet, added a portal-ext.properties file and an ElasticSearch configuration file, and built the image for the production environment.

The configuration is placed in the configs folder of the Liferay Workspace. This gives us the following folder tree:
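
As a rough reference, the layout looks like the sketch below (folder and file names are illustrative; common applies to every environment, while prod is only picked up when building for the production environment):

configs/
  common/
  prod/
    portal-ext.properties
    osgi/
      configs/        (ElasticSearch .config file goes here)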

Open a terminal and execute, inside the workspace:

./gradlew clean buildDockerImage -Pliferay.workspace.environment=prod

The command output shows us that it has generated the image with the tag “my-workspace-liferay:7.2.0-ga1”, so our Docker image is ready.

To check the Docker image, execute:
docker images

Now we modify our docker-compose.yaml (or our deployment, if we are working on k8s) so that it pulls our new Liferay Docker image.

In this case, we’ll modify the k8s deployment:
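
We can either edit the deployment YAML and re-apply it, or use kubectl set image. The sketch below reuses the liferay deployment and liferay-prod namespace that appear in the describe command further down; the container name (liferay) is an assumption, so check it first with kubectl describe:

kubectl set image deployment/liferay liferay=my-workspace-liferay:7.2.0-ga1 -n liferay-prod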

As we can see in the command output, only the Liferay deployment is modified.

If you’re working with Minikube on k8s, you’ll need to create the local Docker image with Minikube’s Docker daemon; a quick sketch follows.
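
A minimal sketch of that approach is to point the current shell at Minikube’s Docker daemon and rebuild the image there:

eval $(minikube docker-env)
./gradlew buildDockerImage -Pliferay.workspace.environment=prod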

By launching a describe command, we can check that the deployment is configured with the new image:
kubectl describe deployment liferay -n=liferay-prod

Once the image is running, we can check in Liferay’s logs that our configuration is being applied for the environment we built it for:
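
For example, we can follow the logs of the Liferay pod with:

kubectl logs -f deployment/liferay -n liferay-prod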

When the image is running, we can check that Liferay 7.3 CE is working with the desired developments and configuration for the environment:

 


Conclusion

We built a Liferay 7.3 CE image including our custom developments, with the configuration applied for the environment where we want to run it, in just a few steps.

In this way, we can build Docker images in a very agile way, keeping our project environments updated with our custom developments and with the configuration correctly applied.


Building a Docker image from a bundle

Now we are going to see how to build an image from a bundle to deploy it on k8s.

Being able to build an image from a bundle is useful to produce an image with an exact patch level, or with changes already made on the application server, such as deleting unused folders or installing additional features. Done this way, the startup of your image will be faster than applying those modifications with scripts at start time.

To start, we can check out the product branch from which we want to build the bundle, or download the bundle directly, and make the necessary customizations.

 

Requirements:

This part was written using a Liferay 7.3.1 GA2 bundle, on which we performed these actions (see the command sketch after the list):

  1. Put a portlet into the deploy folder.

  2. Put our ElasticSearch configuration into the osgi folder.

  3. Put our portal-ext.properties file in liferay_home.

  4. Delete the ${liferay_home}/data/hypersonic and ${liferay_home}/data/elasticsearch6 folders, since they won’t be used.
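
A minimal command sketch of those customizations, assuming the bundle was extracted locally and using illustrative source paths for our artifacts:

cd liferay-ce-portal-7.3.1-ga2
cp ../artifacts/sample-portlet.jar deploy/
# the .config file name depends on your ElasticSearch connector version
cp ../artifacts/com.liferay.portal.search.elasticsearch6.configuration.ElasticsearchConfiguration.config osgi/configs/
cp ../artifacts/portal-ext.properties .
rm -rf data/hypersonic data/elasticsearch6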

Once the Liferay Docker tooling is downloaded, we use the build_local_image.sh script.
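
If you don’t have it yet, the build scripts live in the liferay-docker repository on GitHub:

git clone https://github.com/liferay/liferay-docker.git
cd liferay-docker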

This script has the following arguments:

  1. Bundle path

  2. Image name

  3. Version / Tag

  4. Optional “push” command to push the image into Docker registry.

Open a terminal and execute, inside the Liferay Docker folder (to push the image, add the push param at the end):

./build_local_image.sh ./../liferay-ce-portal-7.3.1-ga2 my-custom-image 7.3.1ga2-pre

The script will build our image with the needed entrypoint scripts, together with the scripts, files and deploy folders that we can use to configure it, as we did in the first example.

In addition, the script tests the built image by starting it in a Docker container, so all its dependencies (in this case, MySQL and Elasticsearch) must be resolvable during the test phase, and a container must be started for each dependency. If testing the built image is not necessary, modify the script by commenting out the test_docker_image instruction, or add an argument to implement a condition.
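
To locate that call before commenting it out, a quick check inside the script:

grep -n "test_docker_image" build_local_image.sh

Then prefix the line that invokes it with # if you want to skip the test phase.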

To check our Docker images and confirm our image is ready to use, execute:
docker images

Now we modify our docker-compose.yaml (or our deployment, if we are working on k8s) to pull our new Liferay image.

In this case, we’ll modify the k8s deployment:
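
For example, assuming the same liferay deployment and liferay-prod namespace as in the first part (the container name liferay is again an assumption):

kubectl set image deployment/liferay liferay=my-custom-image:7.3.1ga2-pre -n liferay-prod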

As we can see in the command output, only the Liferay deployment is modified.

If you are working with Minikube on k8s, you will need to create the local Docker image with Minikube’s Docker daemon. You can generate it using the temp directory created by the build_local_image.sh execution, for example as sketched below.
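
A minimal sketch, pointing the shell at Minikube’s Docker daemon and re-running the build script there (alternatively, run docker build against the temp directory mentioned above):

eval $(minikube docker-env)
./build_local_image.sh ./../liferay-ce-portal-7.3.1-ga2 my-custom-image 7.3.1ga2-pre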


Once the image is started, we can open the log file to check that it is not applying any configuration files and that it starts already configured.

We can also see that it has deployed our sample-portlet from the deploy folder:

When we have the image up and running, we can access our Liferay 7.3 CE and check that all the configuration and custom developments are running as expected:

 

Conclusion

We learned the procedure to build a Liferay Docker image from a local bundle. It is useful for baking an exact patch level (on Liferay DXP), custom developments, and configuration/customization of the application server into the image, making Liferay’s startup process more agile.