Liferay Docker Image Features

The base Liferay Docker images have some cool features to help you build a solid container...


So I was recently asked to help build a custom Liferay Docker image for a client, and there were some specific requirements:

  • Should not contain hard-coded credentials, those will be available as environment variables.
  • Should be a single image that can be used in every environment, DEV, UAT and PROD.

Now these kinds of things can be challenging if you were to, say, pull down a Jenkins image and want the same kind of flexibility...

But the Liferay Docker Base Images actually have a lot of functionality to them which I thought I'd share with you all...

JVM Choice

Yes, that's right, you do have a JVM choice for your image. You can use JDK 8 or 11; both are supported and included in the image.

You'll default to Zulu JDK 8, but if you use the environment variable JAVA_VERSION=zulu11 your environment will use Zulu JDK 11 instead.
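For example (the image tag here is just for illustration; use whatever base image you're targeting):

```shell
# Run with Zulu JDK 11 instead of the default Zulu JDK 8
docker run -e JAVA_VERSION=zulu11 liferay/dxp:7.3.10-dxp-1
```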

Portal Properties From Environment Variables

If you've been using Liferay for a long time like I have, you know the portal-ext.properties file and how important it is for configuring Liferay.

But did you know that there is an alternative to creating a portal-ext.properties file? There is, and it is based on environment variables.

Liferay supports setting any property that you would define in portal-ext.properties via an environment variable.

The format is kind of predictable: the environment variable must start with the LIFERAY_ prefix, and then you take the property name, capitalize it, and replace any periods with _PERIOD_. So, for example, locales.enabled becomes LIFERAY_LOCALES_PERIOD_ENABLED and jdbc.default.username becomes LIFERAY_JDBC_PERIOD_DEFAULT_PERIOD_USERNAME.

In cases where a property name has mixed case, things differ a little. Each uppercase character is replaced with _UPPERCASEX_, where X is the character. So the jdbc.default.driverClassName property, with its uppercase C and N, becomes the LIFERAY_JDBC_PERIOD_DEFAULT_PERIOD_DRIVER_UPPERCASEC_LASS_UPPERCASEN_AME environment variable.
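The mapping is mechanical, so it's easy to script; here's a small helper (my own sketch, not something shipped with Liferay) that converts a property name into its environment variable form:

```shell
prop_to_env() {
  local name="$1"
  # Mixed case first: each uppercase letter becomes _UPPERCASEX_
  name=$(printf '%s' "${name}" | sed 's/\([A-Z]\)/_UPPERCASE\1_/g')
  # Periods become _PERIOD_
  name=$(printf '%s' "${name}" | sed 's/\./_PERIOD_/g')
  # Uppercase everything and add the LIFERAY_ prefix
  printf 'LIFERAY_%s\n' "$(printf '%s' "${name}" | tr '[:lower:]' '[:upper:]')"
}

prop_to_env "jdbc.default.driverClassName"
# LIFERAY_JDBC_PERIOD_DEFAULT_PERIOD_DRIVER_UPPERCASEC_LASS_UPPERCASEN_AME
```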

If you have your own properties that you've been adding to portal-ext.properties and referencing via PrefsUtil, well, you can use this same technique to reference an environment variable as a replacement for setting the values in portal-ext.properties.

Me, I prefer to mix the techniques. I provide a portal-ext.properties file that has common settings for all of my environments, then leave the environment variables for environment-specific values.

So I will normally have jdbc.default.driverClassName in my portal-ext.properties because every environment is going to use Postgres, for example, and I may even set jdbc.default.username too if that is also going to be the same. But I'll leave jdbc.default.url and jdbc.default.password for environment variables.

This way my environment variables control what a specific environment has, but I'm slimming the list to just what is necessary. And it also allows me to satisfy a requirement of having one image that can be used in all environments.
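Concretely, an environment-specific launch might look like this (the host, database, and password values are made up, and the image tag is just for illustration):

```shell
# Environment-specific values supplied at startup; everything common
# lives in the properties file baked into the image
docker run \
  -e LIFERAY_JDBC_PERIOD_DEFAULT_PERIOD_URL="jdbc:postgresql://db:5432/lportal" \
  -e LIFERAY_JDBC_PERIOD_DEFAULT_PERIOD_PASSWORD="changeme" \
  liferay/dxp:7.3.10-dxp-1
```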

Volume Mapping

In the coming sections, I'm going to be referring to special directories that are in the image under the /mnt/liferay directory.

If you are building a custom image, you could easily populate this directory in your Dockerfile, copying external resources into place, and Liferay will use them correctly.

Alternatively, you could mount the /mnt/liferay directory from a host volume using the -v option.

So if I put my stuff in the /opt/liferay/image/testing directory, I could use the command docker run ... -v /opt/liferay/image/testing:/mnt/liferay ... so the image will use my local filesystem when looking for the special files.

Note that if you do use the -v option this way, the host volume completely replaces the /mnt/liferay folder in the image; it does not "merge" them. If the image has an /mnt/liferay/data folder but there is no /opt/liferay/image/testing/data folder, then as far as the container is concerned there is no /mnt/liferay/data folder, and any attempt to access it will fail.

Overriding Files

The image of course is going to contain a number of files for Liferay, Tomcat, and other things. Sometimes you may want to overwrite a file from the image with your own copy. For example, you might want to replace one of the default Tomcat bin/ scripts, such as setenv.sh, with one of your own.

The Liferay image supports this using the /mnt/liferay/files directory. Any files/folders here will be overlaid onto the image at /opt/liferay before Liferay starts.

So for the setenv.sh override, I would just need to make it available as /mnt/liferay/files/tomcat/bin/setenv.sh, and at runtime it will be copied to /opt/liferay/tomcat/bin, replacing the current file there and being used at startup.

You could also do this with your portal-ext.properties file. Create it as /mnt/liferay/files/portal-ext.properties and it will be copied to /opt/liferay before Liferay starts. This technique can be used along with the volume mapping in the previous section to move the file out of the image altogether, pulling it from the host OS when the image is starting.

Same deal for your activation key XML file (if you have one). Using /mnt/liferay/files/osgi/modules/activation-key-...xml, it would be copied into /opt/liferay/osgi/modules before Liferay starts, effectively dropping your key where it needs to go. Again, this moves an environment-specific key (i.e. prod vs non-prod) outside of the image, so the image can be used as-is in any environment; you just need to control what -v source you use for the mounting.

Shell Scripts

The scripts are really the fun part, but I haven't seen many recommendations on how to handle them and the kinds of things you might do with them, so I wanted to touch on them here.

Basically, any script in the /mnt/liferay/scripts directory is executed before Liferay starts.

A note about script naming... I like to have some control over the order of script execution. I accomplish this by prefixing all of the scripts I write with a number, using prefixes such as 00_, 10_, and 20_. When the scripts are processed, they'll be executed in numerical order...
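The numeric prefixes work because the scripts are picked up in lexical order; plain glob expansion demonstrates this (list_scripts is just an illustrative helper, not part of the image):

```shell
list_scripts() {
  # Glob expansion sorts lexicographically, so 00_ sorts before 10_,
  # which sorts before 20_, and so on
  for script in "$1"/*.sh; do
    basename "${script}"
  done
}
```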

I like to use scripts to combine all of the previous techniques into a powerful and flexible mechanism to prepare the runtime container...

For example, I prefer to use JNDI definitions for the database connections, as this ensures that the connection details and credentials are not exposed to Liferay and cannot be revealed through Liferay.

To do this, I will need to overwrite the /opt/liferay/tomcat/conf/Catalina/localhost/ROOT.xml (because I also like to keep the context bound to the app and not make it global).

Parts of this will be the same in every environment, such as the database driver (if not already available). I'll drop the DB driver and the Hikari jars into /mnt/liferay/files/tomcat/lib/ext so they will be copied to /opt/liferay/tomcat/lib/ext and be available to Tomcat.

As for my ROOT.xml file, well, I guess I could put it in /mnt/liferay/files/tomcat/conf/Catalina/localhost and let the image copy it in, but that would mean I'd have to have the passwords in the file, and they couldn't be changed via environment variables.

What I really want to have is environment variables in the ROOT.xml so I can define creds and URL in the container startup, but Tomcat doesn't really support live replacements in its configuration files.

Initially I used a /mnt/liferay/templates directory where I put a ROOT.xml with placeholder strings instead of actual values, plus a script responsible for replacing the placeholders and moving the result to its final location. So my JNDI fragment would be something like:

<Resource name="jdbc/liferay" auth="Container"
      type="javax.sql.DataSource"
      dataSource.user="%%JDBC_USER%%"
      dataSource.password="%%JDBC_PSWD%%" />

With a file like this, you can easily handle the replacements with a simple sed command such as:

sed "s|%%JDBC_USER%%|${LIFERAY_DB_USERNAME}|g" \
  /mnt/liferay/templates/ROOT.xml > \
  /opt/liferay/tomcat/conf/Catalina/localhost/ROOT.xml
This sed command is going to replace the %%JDBC_USER%% marker in the source file with the value of the LIFERAY_DB_USERNAME environment variable, and the output will be redirected to the ROOT.xml file where Tomcat expects to find it.

You're going to want to test this out. I found out the hard way that you can't put an unescaped URL into an XML file like this because odd failures will occur.
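Since the markers get replaced verbatim, it's worth escaping the value first; here's a minimal helper (my own sketch) for the ampersands that show up in JDBC URL parameters:

```shell
escape_xml() {
  # "&" is the usual offender in JDBC URLs (parameter separators); in an
  # XML file it must be written as "&amp;"
  printf '%s\n' "$1" | sed 's/&/\&amp;/g'
}

escape_xml "jdbc:postgresql://db:5432/lportal?ssl=true&sslmode=require"
# jdbc:postgresql://db:5432/lportal?ssl=true&amp;sslmode=require
```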

Since I have multiple replacements to make, I could use a chain of sed commands to apply each replacement.
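For example, sed's -e option lets you apply all of the replacements in one pass (render_template is my own helper name; the LIFERAY_DB_* variable names follow the earlier example, with LIFERAY_DB_PASSWORD assumed by analogy):

```shell
render_template() {
  # $1 = template file, $2 = destination file; assumes LIFERAY_DB_USERNAME
  # and LIFERAY_DB_PASSWORD are set in the container environment
  sed -e "s|%%JDBC_USER%%|${LIFERAY_DB_USERNAME}|g" \
      -e "s|%%JDBC_PSWD%%|${LIFERAY_DB_PASSWORD}|g" \
      "$1" > "$2"
}

# In the container, this would be called as:
# render_template /mnt/liferay/templates/ROOT.xml \
#     /opt/liferay/tomcat/conf/Catalina/localhost/ROOT.xml
```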

Another alternative, one that I use now, is to make the changes in place. We put the ROOT.xml file with the placeholders in /mnt/liferay/files/tomcat/conf/Catalina/localhost, then we run the sed command with -i so it changes the file directly:

sed -i "s|%%JDBC_USER%%|${LIFERAY_DB_USERNAME}|g" \
  /opt/liferay/tomcat/conf/Catalina/localhost/ROOT.xml
For the scripting aspect, we can define a script in /mnt/liferay/scripts (named per the numeric convention, say 50_update_jndi.sh) with the following:


#!/bin/bash

# Declare an associative array for our replacements
# (the env var names here follow the earlier examples)
declare -A jndi
jndi["%%JDBC_USER%%"]="${LIFERAY_DB_USERNAME}"
jndi["%%JDBC_PSWD%%"]="${LIFERAY_DB_PASSWORD}"

# Use a function to facilitate the replacements
updateJndi() {
  # Loop through the array keys
  for key in "${!jndi[@]}"; do
    # Extract the value for this key
    value="${jndi[${key}]}"

    # Perform the in-place replacement
    sed -i "s|${key}|${value}|g" /opt/liferay/tomcat/conf/Catalina/localhost/ROOT.xml
  done
}

# Run the function
updateJndi
Perhaps this is more complicated than it needs to be, but hopefully it gives you some ideas.

Note that the sed -i command actually leaves a file behind. In the /opt/liferay/tomcat/conf/Catalina/localhost directory you'll still have the ROOT.xml file, but you'll also have a ._ROOT.xml hidden file. And boy, does this file cause Tomcat a heap of trouble. You'll get context startup failures and you'll think it's pointing at the ROOT.xml file, but it's not; it's referring to the hidden file. Now, in my scripts, if I am doing sed -i on a file, I add a step to remove the hidden files. I don't need them and don't want them causing any problems...

So now we have a template file and that file is updated before Liferay starts, replacing the placeholders with values from environment variables.

As another example, consider the simple case of having your activation key files in /mnt/liferay/keys. You need one copied to /opt/liferay/osgi/modules, but you want to control which one using an environment variable. You could leverage a script like:


#!/bin/bash

# Use the LIFERAY_ENV environment variable to copy in the right activation key...

case $LIFERAY_ENV in
  dev)
    cp /mnt/liferay/keys/activation-...dev.xml /opt/liferay/osgi/modules
    ;;
  uat)
    cp /mnt/liferay/keys/activation-...uat.xml /opt/liferay/osgi/modules
    ;;
  prod)
    cp /mnt/liferay/keys/activation-...prod.xml /opt/liferay/osgi/modules
    ;;
  *)
    echo "ERROR! Invalid environment $LIFERAY_ENV value."
    ;;
esac
In this way we'd get the right activation key based on the environment variable even though we don't really know the contents of the key. We're also grabbing it from /mnt/liferay/keys, so if we're using the volume mount trick our volume can have all of the keys and it will be separate from the image.

It should be clear now that the scripts directory can contain a shell script that does whatever you need it to do. You have access to a basic Linux shell and a basic command set, so you could leverage curl commands to download files into the image, with environment variables for added flexibility. The world is practically your oyster in regards to setting up the runtime container image.
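For instance, here's a sketch of a startup script that pulls a file down with curl; CONFIG_BASE_URL is a made-up variable for illustration, not anything Liferay provides:

```shell
#!/bin/bash

# Hypothetical example: fetch an environment-specific properties file at
# container startup. CONFIG_BASE_URL is an assumed env var, not a Liferay one.
curl -fsSL "${CONFIG_BASE_URL}/portal-ext.properties" \
  -o /opt/liferay/portal-ext.properties
```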


Deploy Folder

Another directory you can leverage is the /mnt/liferay/deploy folder. Any files that you drop in here are going to be copied to the /opt/liferay/deploy folder and processed as normal deployments when Liferay starts.

This works out well if you just don't want to build your own Docker image in the Liferay workspace, opting instead to use an official Docker image along with this "just in time" deployment.

Note that you will get errors if you do not have an /mnt/liferay/deploy folder, even if you have nothing to deploy. I think this is a bug; the Liferay image could wrap an if [ -e /mnt/liferay/deploy ] test around the processing of the deployments and skip it if it's not there, but until that changes you must create this directory.
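If you're using the host volume approach from earlier, that just means one extra command before starting the container (using the example host directory from above):

```shell
# Make sure the deploy folder exists on the host volume, even when there
# is nothing to deploy, to avoid the startup errors
mkdir -p /opt/liferay/image/testing/deploy
```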

Docker Environment Variables File

So I don't know about you, but I can't see myself typing out docker run -e this -e that -e the_other every time I want to fire up my Docker container. After all, just in this blog I've mentioned at least 10 different environment variables to set, and that doesn't even cover the many portal properties I'd probably also want to override.

A great solution is to use an environment list file. The file is close to a properties file format, although there are a couple of differences: values are never quoted, and a line with just a variable name (and no value) passes that variable through from your shell. For example:

# This is a comment
JAVA_VERSION=zulu11
LIFERAY_ENV=dev
USER

The oddball here is the USER value. This passes the USER variable, and its value, from your command shell into your Docker environment. It would be the same as adding -e USER=${USER} to your docker run command.

Once you have this file, you can use the command docker run --env-file myfile.list ... and the file will be used to set the environment variables passed into the Docker container.

Plus, you can reuse the file every time you need to, so forget about typing in all of those env vars going forward...

Revisiting Custom Docker Images

In an earlier blog, I presented what (at the time) was the workspace-supported way to create a custom Docker image.

If you follow the same path as outlined there, the Dockerfile used for the latest version of the image is:

FROM liferay/dxp:7.3.10-dxp-1
COPY --chown=liferay:liferay deploy /mnt/liferay/deploy
COPY --chown=liferay:liferay patching /mnt/liferay/patching
COPY --chown=liferay:liferay scripts /mnt/liferay/scripts
COPY --chown=liferay:liferay configs /home/liferay/configs
# Any instructions here will be appended to the end of the Dockerfile
# created by `createDockerfile`.

It's still effectively the same Dockerfile as presented in the older blog, just with additional capability for supporting patches plus the scripts and configs that I've covered here.

The comment at the end? That comes from the project root Dockerfile.ext file and is used in the workspace to add custom stuff to the end of your Docker image.

In the image that you end up with, all of your modules and wars will be in the deploy directory.

So, when this image starts, Liferay/Tomcat will start up and end up deploying all of your modules, etc. This works, of course; it is used by many a project.

Alternatively, you could create your own complete image that has your artifacts fully deployed. On a recent project, I did just that...

You can use either the distBundleZip or distBundleTar tasks to get the Liferay bundle prepped (the Liferay version configured in your workspace, the custom modules and wars moved into the right directories, all good to go).

You'll find your expanded bundle in the build/dist directory. From here we need to change the folder name from build/dist/tomcat-9.0.xx to build/dist/tomcat (you might want to create a soft link from the old tomcat-9.0.xx name to tomcat, just in case).
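A quick sketch of the rename (the exact tomcat-9.0.xx version in your bundle will vary, hence the glob):

```shell
cd build/dist

# Find the versioned Tomcat folder (exact version varies per bundle)
TOMCAT_DIR=$(ls -d tomcat-9.0.*)

# Rename it to plain "tomcat" and leave a soft link behind, just in case
mv "${TOMCAT_DIR}" tomcat
ln -s tomcat "${TOMCAT_DIR}"
```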

With this minor change, we can do a docker build using the following Dockerfile:

# Use the Liferay base image, has java 8 and java 11, necessary liferay tools, etc.
FROM liferay/base:latest

# Define the args we support (used by the LABEL instructions below)
ARG LABEL_VERSION
ARG LABEL_VCS_REF

RUN apk --no-cache add busybox-extras

# Copy the dist folder in as the new /opt/liferay root
COPY --chown=liferay:liferay build/dist /opt/liferay

# Soft-link files back to the home mount
RUN ln -fs /opt/liferay/* /home/liferay

# Set up the /mnt/liferay folder in case someone forgets the -v
RUN install -d -m 0755 -o liferay -g liferay /mnt/liferay/deploy /mnt/liferay/patching /mnt/liferay/scripts
COPY --chown=liferay:liferay configs /home/liferay/configs
COPY --chown=liferay:liferay /usr/local/liferay/scripts/pre-configure/

# Define the entry point as a script from the base
ENTRYPOINT /usr/local/bin/

# Liferay/Tomcat basics

ENV LIFERAY_HOME=/opt/liferay

# Set up some defaults in case overrides are skipped

# These are the publicly exposed ports
EXPOSE 8000 8009 8080 11311

# This health check is the same used with DXPC
HEALTHCHECK \
	--interval=1m \
	--start-period=1m \
	--timeout=1m \
	CMD curl -fsS "http://localhost:8080/c/portal/layout" || exit 1

# Define some labels on the image
LABEL org.label-schema.schema-version="1.0"
LABEL org.label-schema.vendor="Vantage DC"
LABEL org.label-schema.version="${LABEL_VERSION}"
LABEL org.label-schema.vcs-ref="${LABEL_VCS_REF}"

# Switch to the liferay user for the run time
USER liferay:liferay

# Set the working dir
WORKDIR /opt/liferay

# Docker will now launch the entry point script...

Lest you think I created something new here, I really didn't. If you check the official Liferay Dockerfile used to create the normal Liferay images, you'll see that the bulk of it came from there. I did add the content from the workspace's Dockerfile; after all, I want to ensure that the Liferay entrypoint script works for this custom bundle just as it does for the regular Liferay bundle.

Now, I'm not getting into how to invoke Docker to build your image with this. Me, I was using Jenkins, so I leveraged facilities there to build my image and push it to the local repo. You could also build it by hand on the command line or leverage one of the Gradle plugins to build your image.
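If you do go the command-line route, a sketch might look like this (the image name and version values are just examples; the build args feed the LABEL instructions in the Dockerfile above):

```shell
# Build the image, passing values for the ARG-backed labels
docker build \
  --build-arg LABEL_VERSION=1.0.0 \
  --build-arg LABEL_VCS_REF="$(git rev-parse --short HEAD)" \
  -t my-liferay-dxp:1.0.0 .
```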

I kind of like this concept because my image already has my artifacts deployed, so deployment is not a container startup task anymore.


Conclusion

Some points in conclusion that perhaps weren't made clear...

First, precedence... For the /mnt/liferay directory, the file copies happen before the script executions, not after. So avoid scripts that change files in /mnt/liferay/files because they won't have the effect you expect. And both of these actions occur before the processing of the /mnt/liferay/deploy folder.

Second, persistence. The files copied from /mnt/liferay/files and the changes imposed by scripts in /mnt/liferay/scripts are not persistent. They will only be applied within the running container. If the container is shut down, the changes are lost. When the container is started again, /mnt/liferay/files are re-copied and scripts in /mnt/liferay/scripts are re-executed. This is important to understand, especially if you are using a mount for /mnt/liferay as any changes in the host filesystem would be reflected in the next container launch.

The persistence aspect also applies to the /mnt/liferay/deploy folder; basically every time the docker container starts, it will be redeploying the artifacts.

We can still build our own images, either the Liferay Workspace way or, alternatively, with our own Dockerfile, so we can get the image we want or need.