Building Customized Liferay Docker Images

Hey, did you know that the Gradle version of the Liferay Workspace now supports building your very own customized Liferay Docker images?

Neither did I... I stumbled upon this support in the recent Blade releases. At this point it might officially be considered "incubating", but I've given it a spin and feel it is in good enough shape to start spreading the word...

Prerequisites

Okay, so the first prerequisite is that you have to be using the Gradle version of the Liferay Workspace; support is not there for the Maven version. Before you ask, no, I really don't know if there are plans to add it to the Maven version or not; my guess is that it is planned, but that's just a guess, and I certainly don't have any details about timelines.

The next prerequisite is having Docker installed. During image creation and container building, the Docker CLI is invoked to do the heavy lifting, so you need it available locally.
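If you're not sure whether the Docker CLI is available from your shell, a quick sanity check will tell you:

docker version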

Creating the Liferay Dockerfile

The first step when doing Docker is to create your Dockerfile. Run the command ./gradlew createDockerfile and you'll find a new build/docker directory.

Note that this step is not typically needed. I'm just using this command so I can see what will be used for the Docker image without actually creating the image yet.

Inside this directory are a Dockerfile and two subdirectories, deploy and files.
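For reference, the generated layout looks roughly like this (what lands in deploy and files depends on your modules and configs):

build/docker/
  Dockerfile
  deploy/    <- your built module jars and wars
  files/     <- overlay files from your workspace configs (e.g. portal-ext.properties)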

Checking the Dockerfile contents, you'll find it is really, really simple:

FROM liferay/portal:7.1.1-ga2
COPY --chown=liferay:liferay deploy /etc/liferay/mount/deploy
COPY --chown=liferay:liferay files /etc/liferay/mount/files

This declares that you are building a new Docker image from a Liferay base Docker image and that the deploy and files directories are copied into the image.

The first part to note is that you are starting from a preconstructed Liferay Docker image that Liferay has published to DockerHub.

Depending upon the version used when creating the Liferay workspace, the FROM instruction will likely point to a CE image for 7.0, 7.1 or 7.2.

You can actually change the base image you want to use... Edit your gradle.properties file and change the value of the liferay.workspace.docker.image.liferay property. The value takes the form of a standard Docker image reference: optionally a registry host name, then the slash-separated image name, plus a colon-separated image tag.

liferay/portal:7.1.1-ga2 is the value from my gradle.properties; that's what feeds the FROM instruction in the generated Dockerfile.

I could easily swing out to https://hub.docker.com/r/liferay/portal/tags and find that there is an updated tag, 7.1.3-ga4. If I change my liferay.workspace.docker.image.liferay property to liferay/portal:7.1.3-ga4, the next time I create my Dockerfile it will use the updated Liferay image.

Or maybe I'm a DXP customer, so instead I want to use a DXP image from https://hub.docker.com/r/liferay/dxp/tags; I change my liferay.workspace.docker.image.liferay property to liferay/dxp:7.1.10.2 to get a DXP bundle w/ Service Pack 2.
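In other words, the relevant line in my gradle.properties for that DXP case would look like this:

liferay.workspace.docker.image.liferay=liferay/dxp:7.1.10.2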

We can also use any image name we want here, so we could just as easily point to our own registry and our own Docker base image (as long as it conforms to the Liferay Docker image conventions, which I'll discuss further on).

Configuring the Docker Image

If you're familiar w/ my blog post https://liferay.dev/blogs/-/blogs/liferay-workspace-distribution-bundles, you'll know that there are environment folders in the configs folder used to build an environment-specific distribution bundle.

There was an additional folder in configs that I didn't really cover before, the docker folder. Well now I get to cover that folder.

The configs/docker folder is where your configuration and overlay files for your Docker image go. You'll put files here such as your portal-ext.properties file, your osgi/configs files, overrides for the tomcat-9.0.x files (where x is the version of Tomcat included in the bundle), etc.
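For example, a configs/docker folder might look something like this (just an illustration; use whatever overlays your project actually needs):

configs/docker/
  portal-ext.properties
  osgi/
    configs/
      (your OSGi .config files)
  tomcat-9.0.x/
    conf/
      (Tomcat overrides, e.g. server.xml)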

Note that, unlike with the other environments, common is not currently used; all of your configuration and overlays must be in the configs/docker directory.

Starting with version 2.0.7+ of the Liferay Gradle Workspace plugin, the -Pliferay.workspace.environment argument can be used to specify the environment used to build the Docker image.

So you could use the following command:

./gradlew clean buildDockerImage -Pliferay.workspace.environment=prod

This will use the prod environment configuration when creating the Docker image.

Additionally with 2.0.7+, the common directory will also be used for Docker images.

Non-docker distBundle builds will use configs/common and overlays from configs/<environment>. Docker image builds will use configs/common, then overlays from configs/docker and finally overlays from configs/<environment>.
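As a concrete example, if the same portal-ext.properties exists in more than one place, a prod Docker image build applies the copies in this order (a sketch of the rules above, with later overlays winning):

configs/common/portal-ext.properties   <- applied first
configs/docker/portal-ext.properties   <- overlaid next
configs/prod/portal-ext.properties     <- overlaid last (with -Pliferay.workspace.environment=prod)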

Understanding the Dockerfile

When we create the Dockerfile, we will have something like:

FROM liferay/dxp:7.1.10.2
COPY --chown=liferay:liferay deploy /etc/liferay/mount/deploy
COPY --chown=liferay:liferay files /etc/liferay/mount/files

The FROM instruction is pretty straightforward now that we know how the image is identified, and if you know Docker, the two COPY instructions are simple too, but what do they do?

Liferay-based Docker images use a custom ENTRYPOINT script that relies on a few special directories:

  • /etc/liferay/mount/files - Contains files that will be copied into the base image's LIFERAY_HOME directory. So our configs/docker/portal-ext.properties file gets copied to build/docker/files, which in turn is copied into the /etc/liferay/mount/files directory when the image is built; when the container is run, it is copied to the LIFERAY_HOME directory before Tomcat/Liferay is started.
  • /etc/liferay/mount/scripts - An optional folder that contains bash scripts; these scripts will be executed when the container is starting. Normally you won't need one of these.
  • /etc/liferay/mount/deploy - This folder will contain all of your built artifacts, the jars and wars, and when the container starts they are copied to the Liferay deploy folder.

When your container starts, these directories are processed first, and the final step is the launch of the Tomcat process.
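Conceptually, the startup sequence looks something like the sketch below. To be clear, this is just an illustration of the behavior described above, not the actual Liferay entrypoint script:

#!/bin/bash
# Simplified sketch of what the entrypoint does when a container starts.
cp -R /etc/liferay/mount/files/. "${LIFERAY_HOME}/"       # 1. overlay config files onto LIFERAY_HOME
for script in /etc/liferay/mount/scripts/*.sh; do         # 2. run any optional startup scripts
  [ -f "${script}" ] && bash "${script}"
done
cp /etc/liferay/mount/deploy/* "${LIFERAY_HOME}/deploy/"  # 3. hand your jars and wars to the deploy folder
exec "${LIFERAY_HOME}"/tomcat-*/bin/catalina.sh run       # 4. finally, launch Tomcat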

Building the Liferay Docker Image

Building your new Liferay Docker image is a simple command:

./gradlew buildDockerImage

This is going to do the normal Docker image build using the Liferay base image you've specified, adding in your configuration and overlays, plus your development artifacts, to produce your fresh Docker image.

If the Gradle command is successful, you can then check your image(s):

sample2 $ docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
sample2             prod                2e5ab0ab06f7        5 seconds ago       1.64GB
sample2             local               2f469101968f        3 minutes ago       1.64GB

The Repository will be the workspace name. In my case, my workspace is named "sample2", so the repository is also named "sample2".

The assigned tag is the environment used for the build. Since I have the Liferay Gradle Workspace plugin version 2.0.7, when I built my first Docker image, it used the default "local" environment. The second time I included the -Pliferay.workspace.environment=prod argument and got a second Docker image tagged "prod".

Note: If you have a version of the Liferay Gradle Workspace plugin older than 2.0.7, the repository and tag columns will have "<null>" for the values. You'll need to use the Docker CLI to make the necessary changes to your images before you can push them.

As of this writing, there is only one tag assigned, the environment, and no labels were assigned. Although I can easily add more tags and labels using the Docker CLI, I'd really like the build to take care of this stuff for me.
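For the record, adding an extra tag and pushing by hand is just standard Docker CLI work (the registry name below is a placeholder for your own):

docker tag sample2:prod registry.example.com/sample2:prod
docker push registry.example.com/sample2:prod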

Fortunately the development tools team has heard my pleas and they're looking into making some changes here (yay!).

Running the Liferay Docker Image

There are additional Docker tasks available to support creating and running your container.

The full list of tasks is:

Docker tasks
------------
buildDockerImage - Builds the Docker image with all modules/configs deployed.
createDockerContainer - Creates a Docker container from your liferay image and mounts build/docker to /etc/liferay.
createDockerfile - Creates a dockerfile to build the Docker image.
logsDockerContainer - Logs the Docker container.
pullDockerImage - Pull the Docker image.
removeDockerContainer - Removes the Docker container.
startDockerContainer - Starts the Docker container.
stopDockerContainer - Stops the Docker container.

The commands are fairly obvious as to what they are for, so I'm not going to delve further into it.
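Still, for reference, a typical local test cycle using these tasks might look like this (task names are taken straight from the list above):

./gradlew createDockerContainer
./gradlew startDockerContainer
./gradlew logsDockerContainer
./gradlew stopDockerContainer
./gradlew removeDockerContainer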

Besides, as a developer I find running a local Docker container to be a pain because a) performance overhead is a factor and b) profiling/debugging against a container can be challenging to get working.

Building Your Own Base Liferay Docker Image

Okay, so at the start of this blog I covered how the workspace will create a custom Docker image using one of Liferay's packaged bundle images from DockerHub.

This may not work out so well for you if you are on DXP using a specific FixPack and/or Hot Fix or if you are on CE or DXP and want a later version of Tomcat.

Fortunately Liferay gives us most of the tools we'll need to deal with this issue.

First thing you need is a custom bundle tarball. Fortunately I've just provided a script to help you build your own custom tarball: https://liferay.dev/blogs/-/blogs/creating-a-custom-liferay-tarball

Additionally, you'll need to clone Liferay's Docker scripts from https://github.com/liferay/liferay-docker

Liferay's Docker scripts make some assumptions about what is being built, such as that the bundle is in either .zip or .7z form, that the images will be downloaded from Liferay's repository URL, etc.

Basically, though, you're going to need a modified version of the build_image.sh script. You need to cut lines 82-118 since you won't need to download an image, plus if you built a tarball instead of a zip file you'll need to untar instead of unzip.
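For example, the extraction change boils down to something like the following sketch (the variable names here are placeholders, not necessarily what build_image.sh actually uses, and it assumes a gzipped tarball):

# zip bundle (what the script expects):
# unzip -q ${BUNDLE_FILE} -d ${TEMP_DIR}

# tarball bundle (what you'd swap in):
tar -xzf ${BUNDLE_FILE} -C ${TEMP_DIR}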

You're also going to want to cut lines 137-173 since you'll either be using LCS with your own account to get a license (so you'll have an LCS config file somewhere under the workspace's configs/<environment>/osgi/configs directory) or you'll have an actual license file (in the configs/<environment>/osgi/modules directory).

You'll probably want to cut lines 175-203 so that you can come up with your own image name rather than liferay/portal or liferay/dxp; I'm sure you'd rather name your image something specific to your organization, and you'll certainly want to use your own image name prefix.

Lines 205-241 programmatically derive various labels to assign to the image, especially the "liferay/" leading image name prefix. You might want to keep this logic (edited for your own prefix) or you can simplify to use labels of your own design. Make sure that whatever tags you assign, you have the permissions to push those tags to the Docker repository.

After you have all of this done, you then just invoke the command ./build_image.sh my-custom-bundle.zip and all of the heavy lifting should be done for you.

Your custom image will be built, pushed to the Docker repository, and then available for you to use for the last step of building customized Liferay Docker container(s) for your organization.
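Once your custom base image has been pushed, pointing the workspace at it is the same property change we saw earlier; the registry, image name, and tag below are placeholders for whatever you chose:

liferay.workspace.docker.image.liferay=registry.example.com/mycompany/liferay-base:7.1.10-fp2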

The changes mentioned so far still rely on the Dockerfile Liferay defines for its Docker images. I've heard folks say that the image is, itself, kind of opinionated since it starts from the Azul Zulu 8 JDK as the base image and makes further assumptions from there.

So at this point I recommend that you take a little time and check out the Dockerfile and make sure it will work for you. If it doesn't, you're free to make changes as necessary for your own purposes. Just be sure to keep things like the ENTRYPOINT script, the ENV variables like LIFERAY_HOME, etc. Add in the additional tools you think you need, do whatever. As long as the ENTRYPOINT and the /etc/liferay and /opt/liferay stuff stay intact, you'll be fine.

Conclusion

So there you have it, a whirlwind dive into the new Docker support that is part of the Liferay Gradle Workspace.

In preparation for this blog, I introduced a script to help you build a custom Liferay Bundle tarball.

Next, I introduced you to the Liferay Workspace environments so you could build custom distribution bundles tailored to a specific environment.

Finally, I've shown you all of the Docker support that has been added recently to the Liferay Gradle Workspace and how you too can leverage this tooling to create your own Liferay Docker images and containers.

I think I have one more blog to work on for this series: building a completely custom Liferay Docker image using the warmed-up custom distribution bundle from the last blog. While this pattern certainly works, I don't like the fact that the ENTRYPOINT script is going to be redeploying files and modules every time my container starts up. I think that, if I have a warmed bundle w/ all deployments completed, it will represent a more consistent image that should start a container faster because it eliminates the deployment processing.

Plus there are other things I want to consider, such as Ghostscript and ImageMagick, Xuggler, adding Apache or Nginx w/ an AJP configuration, etc. etc. etc. See you soon!

Update 01/04/2024

Someone recently reached out and indicated that my links to build_image.sh were no longer valid.

After some digging through the commit logs, I found that build_image.sh was renamed to build_bundle_image.sh.

Now, I've updated the old links to point to the old version of the file, but I have to say that when you compare the old file to the newer build_bundle_image.sh, there are so many changes that I'm no longer certain that using the old script would build the right custom image for you.

Plus, if you check the repo, there's a number of other build_*_image.sh scripts that provide different variants, such as JDK11 only or optional JDK11/JDK8 support, amongst other things.

It has been over four years since this blog was published, so it shouldn't come as a surprise that the building of the custom docker image has changed so much...

The build_bundle_image.sh script can still be a good starting point for creating your custom script; you'll just need to dig into it, understand how it uses functions from _common.sh and _liferay_common.sh, and redo things to reflect your own needs.

Or hey, you could try the old script instructions with updated links above and maybe that will give you what you need.

Comments

This is great but it seems to be intimately tied to the Liferay IDE. Is that true?

 

Could any of these be used in a separate build pipeline running out of something like Azure GIT?

I avoid focusing on an IDE; I use IntelliJ, but I know others use the Liferay IDE.

This blog doesn't use the IDE, just the Liferay Gradle workspace, so it is all command line.

The IDEs can make it easier to invoke, but you could do this from any command line or even Jenkins.

Hi David, you wrote a nice and useful article.

I was inspired by this work of yours to write this on my blog: How to build a Docker Liferay 7.2 image with the Oracle Database support https://www.dontesta.it/en/2019/08/21/how-to-build-a-docker-liferay-7-2-image-with-the-oracle-database-support/

Do you know why the Liferay docker images are still using jdk8? We have a requirement to use openjdk-11, so I started from Liferay's 7.2.0-ga1 image, added the zulu jdk11 to it and removed jdk8. I'm not sure though if this is a good approach, or if it would be better to completely build the image from scratch.  

This is my Dockerfile:

 

FROM liferay/portal:7.2.0-ga1

USER root

ENV JAVA_HOME=/usr/lib/jvm/zulu-11

RUN ZULU_ARCH=zulu11.33.15-ca-jdk11.0.4-linux_musl_x64.tar.gz && \
    INSTALL_DIR=$( dirname $JAVA_HOME ) && \
    BIN_DIR=/usr/bin && \
    MAN_DIR=/usr/share/man/man1 && \
    ZULU_DIR=$( basename ${ZULU_ARCH} .tar.gz ) && \
    cd ~ && wget -q https://cdn.azul.com/zulu/bin/${ZULU_ARCH} && \
    mkdir -p ${INSTALL_DIR} && \
    tar -xf ./${ZULU_ARCH} -C ${INSTALL_DIR} && rm -f ${ZULU_ARCH} && \
    mv ${INSTALL_DIR}/${ZULU_DIR} ${JAVA_HOME} && \
    cd ${BIN_DIR} && find ${JAVA_HOME}/bin -type f -perm -a=x -exec ln -sfn {} . \; && \
    rm /usr/lib/jvm/default-jvm && rm -rf /usr/lib/jvm/java-1.8-openjdk && rm -rf /usr/lib/jvm/zulu-8 && \
    mkdir -p ${MAN_DIR} && \
    cd ${MAN_DIR} && find ${JAVA_HOME}/man/man1 -type f -name "*.1" -exec ln -s {} . \;

USER liferay:liferay

COPY --chown=liferay:liferay ./files/ /etc/liferay/mount/files/

 

Although Liferay is certified for Java 11, it doesn't need Java 11 to run.

 

I think you can go either way, especially if the current Dockerfile is working for you. The Liferay image is itself as light as it can be, but I don't think it would be difficult to use their Dockerfile to create your own baseline using Zulu 11. The downside is that you have to do the work yourself to swap out OpenJDK 8 for Zulu 11, but your current Dockerfile just works towards layering Zulu 11 on top.

 

I'd say it's your call; if you're happy with what you have and it isn't broke, I can't see a pressing need to fix it. A docker purist might argue that your image is going to contain unused OpenJDK 8 cruft in it, but it might be worth it to carry the baggage rather than go your own path.

Hi David, thanks for the great article!

I had one question regarding custom bundle docker images. You mentioned the following:

"While this pattern certainly works, I don't like the fact that the ENTRYPOINT script is going to be redeploying files and modules every time my container starts up. I think that, if I have a warmed bundle w/ all deployments completed, it will represent a more consistent image that should start a container faster because it would eliminate the deployment processing."

My question is - How do you warm up a bundle such that it is ready to go upon startup?

My initial thought is starting the bundle in a controlled environment, letting it deploy the modules from deploy/ into osgi/ and have that as the final bundle.  Am I missing something else?

Thanks!