Recently I asked some of my teammates for ideas about what to blog about next. Most of the time I take my inspiration from different clients I work with or questions that come up on the Liferay Community channels, but lately the well has seemed kind of dry. So I went fishing for suggestions.
My friend Andrew Betts, who happens to be in the Liferay DXP Cloud (DXPC) team suggested that I write a blog about, wait for it, DXPC. At first it didn't really grab me; I prefer technical issues where I can get my hands dirty, and at first glance I couldn't tell if I was really going to get to be hands-on or not. Eventually I realized that setting up a DXPC environment could really be technical because of everything that's involved, so the seed Andrew planted finally started to take root and grew into the blog post you're reading now.
Liferay DXP Cloud is a hosted and managed version of Liferay DXP. As the client, you keep a lot of control over the environment; it just runs in Liferay's cloud, with a support team monitoring it for basic problems. Effectively, though, it is just like your on-prem DXP solution: there is still an application server, a database and Elasticsearch, there's just more polish around the devops aspects of building and deploying. There are other cool features too, such as automatic DR environments, site-to-site VPN support to allow connections into your environment, Dynatrace for APM, ...
I'm not here to sell you on DXPC, but I'm going to pick up just after you've decided to move to DXPC.
First thing that Liferay will be doing is provisioning your environments. You'll get a form to fill out that will look something like the one below:
There's a bunch of key fields here that need to be defined in order to provision your DXPC environments:
After Liferay receives this information, it is passed on to the Liferay DXPC Provisioning Team. This team will be creating a number of new assets for you: the DXPC environments themselves (typically DEV, UAT, PROD and an infrastructure environment), a GitHub repository containing your DXPC workspace, a Jenkins instance for CI, and your project in the DXPC Admin Console.
The provisioning process can take 2-5 days to complete before you will have access to your new DXPC assets.
At the end of the provisioning process, you'll receive 3 emails about your new assets, but the most important one will have the subject line of "Your Name, your DXP Cloud project (project name) is ready". Here's a version in its tiny glory:
The email basically contains 4 important sections (and 3 informational sections) for accessing your new environments. I'm going to go from the bottom up because, as a developer, this will likely be the order in which you access the environments.
The absolute bottom two sections have links to support and Liferay University videos to introduce the DXPC environment to you.
The next one is your "infrastructure" environment link, which contains basically your Jenkins instance. This Jenkins instance is pre-configured with jobs to automagically rebuild your DXPC Docker images from the GitHub repository for the Main and Develop branches. As you commit to or merge PRs to either of these key branches in your repository, Jenkins will take care of completing the build and creating all of the images needed for your DXPC environments. Note that the job only builds the images; it is not going to automagically deploy them to DXPC for you.
The next one up from the bottom is the link to your new GitHub repository. This repository is automatically created for you in the "dxpcloud" organization as a way to transfer the DXPC files to you. You must move the repo to your own GitLab or Bitbucket repo (or stay on GitHub but use your own organization), and you have to complete this within the first 10-14 days after your environment has been provisioned. The DXPC support team can help you update your environment for the new external repo.
There's a note in this section saying that if you don't get the invitation email to join your GitHub repo, you should send an email to firstname.lastname@example.org, and this is actually the only section that counsels you to do this. My only guess here is that if any part of the provisioning has failed, it is likely going to be on this particular step. If it happens to you, don't fret about it. Just send an email to the provisioning team and they'll fix you right up.
The next section up contains links to the non-production environments, typically DEV and UAT. You can't really click on them at this point because nothing has actually been deployed yet. One thing to note is that the non-prod environments are all protected by Basic Auth at the Nginx level; the credentials you find in this section of the email are only for getting into the non-prod environments. Now you might be asking, "Why hide Liferay behind Basic Auth?" Well, Liferay is only going to give you a default environment (when you deploy), so the standard email@example.com credentials will be your Liferay admin account. The Basic Auth prevents someone with knowledge of the default Liferay credentials from discovering your non-prod environments and logging in as an administrator to wreak a little havoc. So me, I'd leave them in place, but you can disable them if you want to. I'll share how to do that a little farther down the page...
The next section up in the email is the link to access your DXPC Admin Console. Note that this is completely different from a Liferay admin console, so don't confuse the two. The DXPC Admin Console is where you go to view logs, (re)deploy environments/updates, check the status of system components, ... Basically every activity you do to manage your DXPC environment is going to start from the DXPC Admin Console.
The final top section is the "Accept Our Email Invitations" block. Liferay will be sending you separate emails for each environment that was created for you (DEV, UAT, PROD, maybe a DR, Infrastructure, etc). If you don't get these emails, check your spam folder (and if you find them there, take a moment to whitelist the sender so future DXPC emails get delivered correctly to your inbox).
So Liferay has just dropped all of these new toys off for you to play with, but where do you start?
Me, I always want to allocate time to verify that everything is working correctly and even get my initial environments created.
The first thing you need to do is get access to the GitHub repo. With this in place, you want to figure out how you're going to be updating the repo. Here at Liferay Global Services, we typically fork this repo to give us a private place to work and, as we complete tasks, we send PRs back to the main repo for merging. This lets each developer work in their own environment, free of merge conflicts during development, with merge responsibilities deferred until prepping the PR for submission. It works really well for us, and we encourage clients to follow the same pattern. It is just a git repo, though, so you are free to manage it any way you want and any way you are used to.
Ultimately though you need to clone this repo (or your fork if you are using our suggestion) to your local system as this is where all of your environment configuration stems from.
This new repository you have just cloned contains what we refer to as the DXPC Workspace. The workspace has configuration for each one of the components of a typical Liferay DXP environment - database, search, Liferay and web servers, plus goodies for backups and CI handling.
Here's the basic folder structure that you'll get in your new repository:
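A typical DXPC workspace looks roughly like this (an illustrative sketch with the backup component expanded; your generated workspace may differ slightly):

```
dxpc-workspace/
├── backup/
│   ├── LCP.json
│   └── configs/
│       ├── common/
│       ├── dev/
│       ├── uat/
│       └── prod/
├── ci/
├── database/
├── liferay/        (a standard Liferay Workspace)
├── search/
└── webserver/
```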
Each of the root components has a special configuration file, the LCP.json file. This is a JSON file which contains configuration details specific to each component. When Jenkins is building out the environments, the details from the LCP.json files will be used as the primary definition for each service component. Some of the content you'll see in the LCP.json file will be repeated across each service, and some is unique to the specific service.
Here, for example, is a snippet from the LCP.json file from the database folder:
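Since the exact contents vary by project, here's an illustrative sketch of the kind of thing you'll find there (the field values are examples, not your real provisioned values):

```json
{
  "kind": "Deployment",
  "id": "database",
  "image": "liferaycloud/database:4.2.1",
  "memory": 8192,
  "cpu": 4,
  "scale": 1,
  "environments": {
    "infra": {
      "deploy": false
    }
  }
}
```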
Some of this may not make any sense, and most of the time you won't need to tamper with the file contents at all because it will contain values previously agreed upon in your contract. Some of it you may need to change at some point (e.g., if Liferay provides a new image version for the database, you might need to change the image version here), but from the initial provisioning standpoint you should have reasonable starting values.
Each of the components has a configs directory with environment-based subdirectories; I've only expanded this portion for the backup component in the image above, but you will find this same structure on all of the components. As developers, we might be familiar with building an artifact specifically for a target environment such as DEV or PROD, or with building a single artifact and keeping the environment-specific configuration external to it, so the same artifact can be used for both DEV and PROD.
The DXP Cloud workspace handles things a little differently. The configuration for all environments is part of the build, but environment variables at run time tell the Docker environment which configuration set to use. So the database component, for example, builds into your database image, and that one image is used to populate your DEV, UAT and PROD DXPC environments; each environment just starts the database with a different environment setting, so when you start DEV, the database service comes up using the configs/dev configuration. The configs/common directory is special in that it is where you provide configuration that applies to all environments.
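As a concrete illustration for the liferay component (as I understand the overlay, the environment-specific copy of a file takes precedence over the common copy of the same name; the property values here are just examples):

```properties
# liferay/configs/common/portal-ext.properties (applies to every environment)
terms.of.use.required=false

# liferay/configs/dev/portal-ext.properties (used when the node starts as DEV,
# taking precedence over the common file of the same name)
javascript.fast.load=false
```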
This can be a little hard to get used to, but pretty soon it will make sense and the good news is that Liferay does this consistently across each of the service components, so you don't have to learn a new way of configuration each time.
The one folder I didn't expand in the listing above is the liferay component. I didn't do this because if you're already a Liferay developer, you already know what this folder contains - a typical Liferay Workspace (so all of my other blogs about the workspace and how you can use it and the features it has, etc., all of those still apply to the Liferay Workspace that is part of the DXPC Workspace).
The one important addition here that I want to highlight is the LCP.json file. This file is probably the one you're going to be changing the most often because this one controls the cluster and individual node sizing and other important OS-level settings. As this is such an important file, I'm going to include a sketch of the starting one that you'll be given (abridged; probe details, paths and environment ids are illustrative, and your real file comes pre-populated from your contract):

```json
{
  "kind": "Deployment",
  "id": "liferay",
  "memory": 8192,
  "cpu": 8,
  "scale": 1,
  "ports": [
    { "port": 8080, "external": false }
  ],
  "readinessProbe": {
    "httpGet": { "path": "/c/portal/layout", "port": 8080 }
  },
  "livenessProbe": {
    "tcpSocket": { "port": 8080 }
  },
  "env": {
    "LIFERAY_JVM_OPTS": "-Xms2048m -Xmx6144m"
  },
  "dependencies": ["database", "search"],
  "volumes": { "data": "/opt/liferay/data" },
  "environments": {
    "infra": { "deploy": false },
    "uat": {
      "cpu": 12,
      "memory": 16384,
      "scale": 2,
      "env": { "LIFERAY_JVM_OPTS": "-Xms4096m -Xmx12288m" }
    },
    "prod": {
      "cpu": 12,
      "memory": 16384,
      "scale": 2,
      "env": { "LIFERAY_JVM_OPTS": "-Xms4096m -Xmx12288m" }
    },
    "dr": {
      "cpu": 12,
      "memory": 16384,
      "scale": 2,
      "env": { "LIFERAY_JVM_OPTS": "-Xms4096m -Xmx12288m" }
    }
  }
}
```
There's a lot going on here, right? So let's pick out some of the important parts:
We start with the default system configuration. Here we can see that the default is an 8g system w/ 8 CPU but only a single server, and we also declare that the only port on the service will be port 8080 and that it is not publicly available (since external is false).
Next is the definition of the "readiness probe" and "liveness probe". This is what the DXPC monitoring will use to verify that the environment is up and able to serve traffic.
The env section is going to be very important; this is where we define environment variables that will be set within the OS. We can see the LIFERAY_JVM_OPTS environment variable being set; it is passed into the runtime container and, when Liferay/Tomcat starts, is used as the JVM options for the instance. So for our 8g system, we're going to let Liferay use 6g of that space. We can use the env section to define additional environment variables that we want to pass into the image. Liferay also allows an alternative to a portal-ext.properties file for property overrides: environment variables that follow a specific naming format (you can find the right environment variable name for each portal property by checking the portal.properties file in the Liferay source), so anything you could set in portal-ext.properties you could instead set as an environment variable.
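For example, the real terms.of.use.required portal property maps to an environment variable by upper-casing the name and replacing each period with _PERIOD_ (double-check the exact name against the Env: comments in portal.properties), so it could be set right in the env block:

```json
"env": {
  "LIFERAY_JVM_OPTS": "-Xms2048m -Xmx6144m",
  "LIFERAY_TERMS_PERIOD_OF_PERIOD_USE_PERIOD_REQUIRED": "false"
}
```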
The dependencies section lists the services that the Liferay container depends upon, namely the database and search (Elasticsearch). The volumes section defines the shared external volumes; in this case it is the Liferay data volume.
The environments section has environment-specific override values. The infra environment is your Jenkins service, so deploy is set to false so you don't get a Liferay/Tomcat server in your infrastructure environment.
PROD, UAT and DR will be pre-populated with values from your contract, so in this example we opted for a 16g 12 CPU cluster of 2 nodes in each of these environments, so we override the defaults with the right values.
So after we've verified that all of our files are there, checked our LCP.json files and found them to be in line with our contracts and expectations, we're ready to make our initial changes in preparation for our environment creation.
These are the things that I'm going to [possibly] do in the DXPC Workspace:
Remember how I said you could disable the Basic Auth settings for the non-prod environments? If you want to do this, go to webserver/configs/env/conf.d/liferay.conf and comment out or remove the lines with the auth_basic prefix. If you want to keep the Basic Auth configuration but change the password, you can point to a different file that has the value(s) you want to use. Do this for each environment you want to change, replacing env in the path with dev, uat, ...
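As a sketch (assuming a fairly stock Nginx proxy config; your generated liferay.conf will differ in its details), the change looks something like this:

```nginx
# webserver/configs/dev/conf.d/liferay.conf (illustrative)
location / {
    # Comment out (or remove) these two lines to disable Basic Auth for DEV:
    # auth_basic           "Restricted";
    # auth_basic_user_file /etc/nginx/conf.d/.htpasswd;

    proxy_pass http://liferay:8080;
}
```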
In the Liferay Workspace, I'm going to go to the configs/common folder and create my portal-ext.properties, and I'm going to use https://liferay.dev/blogs/-/blogs/professional-liferay-deployment#properties as my starting point. I want to define the most correct and complete portal-ext.properties before the first launch. I will typically only do the common portal-ext.properties file and then handle environment-specific overrides in the LCP.json env area. I especially want to set the default admin password to not be "test" so anyone who stumbles upon my Liferay environment will not be able to log in as an admin (I'll change it again in the UI later on so the password in the file is only temporary, but it is a security aspect I feel is important).
I actually spend a lot more time on my portal-ext.properties file before first launch than most others do; I personally feel that getting these values right the first time means I won't have old data or invalid configuration in my initial instance and is a better foundation for building my Liferay solution than starting with a basically empty properties file and tweaking later.
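As a tiny illustrative fragment (the property values here are examples, not recommendations), the common file might start out something like:

```properties
# liferay/configs/common/portal-ext.properties
# Don't ship the default admin password; this value is temporary and
# gets changed again in the UI after first login.
default.admin.password=ChangeMeBeforeGoLive!
# Skip the setup wizard since the configuration comes from this file
setup.wizard.enabled=false
```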
While I'm in the LCP.json file of the liferay component, I'm going to set the scale to 1 on all of the environments. If you try to launch with 2 or more Liferay nodes, each will get to the new, empty database at the same time and try to create all of the initial Liferay tables; if you check the logs for each node, they'll report "Duplicate table Xxx..." sorts of failures. For the very first time the DB is created, I want to restrict the startup to just 1 node so the cluster isn't trying to create the database concurrently. After my environments are created, I'll change the LCP.json file back to the right cluster size, but for first launch you really can't beat just setting the scale to 1.
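In the environments block of liferay/LCP.json, that temporary first-launch override looks something like this (the environment ids are illustrative):

```json
"environments": {
  "infra": { "deploy": false },
  "dev":  { "scale": 1 },
  "uat":  { "scale": 1 },
  "prod": { "scale": 1 }
}
```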
I'm also going to check out the deployment checklist for the version of Liferay I'm using so I can apply the recommendations it has, except for the JVM parameters. The DXPC image actually already incorporates the CATALINA_OPTS for you based off of the deployment checklist recommendations. You can, of course, override or replace them all if you need to, but it is a good starting set of JVM parameters (after I'm all done with the build-out and prior to go-live, I'll do some load testing, profiling and tuning of the JVM parameters, but following the deployment checklist gives me a pretty decent starting point).
When I get all of these changes done, I'm going to commit and push them to my repository. If I'm using forks like Liferay recommends, I'll send my PR to the main fork and get it merged into the main branch.
I'll then check out my Jenkins and verify it is able to build my whole DXPC workspace; I want to see some success here before I try to deploy the environments. If I do face issues here, I'm going to resolve them before getting to the next step...
When the build is done, our next step is to move over to the DXPC Admin Console. When you first log in, you'll see a view like:
When you first land on the console, every environment will appear like the 2nd item here does: they'll all say "no services" next to them because, even though the DXPC team has provisioned our environments, nothing has been deployed into them yet.
Starting from the DEV environment, we're going to click into an environment which will show us the detail page:
From here we can click on the Builds link on the upper toolbar towards the right side:
Your list of course will be different and, if we're following the process I've been laying out, we would only have one build available to us. We'll click on the pea-pod menu on the right of the build that we want to deploy:
We'll then pick "Deploy build to..." to move to the actual deployment:
Here we need to select the environment we want to deploy to. We'll start with DEV, but eventually we'll hit them all. After selecting DEV, we can click the "Deploy Build" button to start deploying out the environment.
At this point, all of our system components are going to be created per our DXPC Workspace: the LCP.json configurations, the environment configurations and the Docker images that Jenkins created for us. So it will create the backup, database, Liferay/Tomcat, search and webserver (Nginx) component services. We'll see on the status page how all of the services are listed and, as the startup completes, change from the gray dancing dots over to a pretty, green Ready label.
All of the status indicators are reliable except for the Liferay service. It will always show the green Ready label before it is actually done starting the portal.
If we click on the liferay service, we can actually see the log messages from Liferay:
We can also use this to get to the Linux command line (Shell), some basic metrics, see (and change) environment variables and also check the custom domains.
When the environment is up and ready, we should also review the Network page (available from the hamburger menu):
The key parts here are the Ingress endpoint and the Address list.
The ingress load balancer IP is the address that you use for forwarding your DNS... So if you own www.example.com, you're hosting it on DXPC and you're given an ingress IP address of, say, 203.0.113.10, you will configure www.example.com to resolve to 203.0.113.10. This is obviously a simple case, as your own network will likely want to direct different routes to different hosts and you'll have to work out a route-based redirect, but hopefully this gives you the info to use your ingress address correctly. As this is my dev server, I would likely want to use dev.example.com for my domain, so I'd have to handle routing that name over to the same ingress address.
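In zone file terms (with 203.0.113.10 standing in for whatever ingress IP you're actually given), that's just a pair of A records:

```
; illustrative DNS entries
www.example.com.   300   IN   A   203.0.113.10
dev.example.com.   300   IN   A   203.0.113.10
```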
The addresses show those ports which are open externally and the name to access the service. Primarily you're going to look at the webserver name because that will get to your open Nginx service and route traffic internally (bypassing the ingress load balancer). The non-prod links that I pointed out in the provisioning email near the beginning of this blog post point to the ingress load balancers for the non-prod environments, so those are typically the one(s) that you'd use to access the non-prod environs.
Back on topic to our First Time Setup and using the DXPC Admin console...
So at this point we have finished creating the DEV servers, we've deployed the bundle we built in Jenkins, everything has started, and we've checked the network endpoints to review the details.
The final two things we want to do are:
When we complete these two tasks, we can say that the DEV environment is good to go, we've verified everything is working.
Why did we do all of this pretty much out of the gate? Well, the DXPC team has only recently finished provisioning our environments; should something have gone wrong with the setup, it will still be fresh in their minds, making it easier for them to help as necessary. Plus, they're going to want to know that we've been able to get started (like a waiter coming by to see how you're enjoying the meal after you've taken a couple of bites), so we'll be able to answer affirmatively.
But, now that DEV is done, we want to repeat this step for all of our other allocated environments. Do each of them, one at a time, do the deploy and the startup and the network and the testing, make sure the environment is good to go, then move on to the next one.
And yes, even do this to PROD. Sure it will only be a vanilla Liferay DXP, but our goal at this point is not really to test the customizations that we're going to be working on and eventually deploying to PROD, our goal here is to test all of the processes, to verify that we can deploy to every environment, including PROD, and that all of the pieces start cleanly and serve up even the vanilla traffic.
After we've finished PROD, I'm going to go back to my DXPC Workspace, into the liferay/LCP.json file, and restore the scale for my clustered environments, push that to the repo, verify Jenkins did the build, then deploy the new cluster image to my multi-server environments. The database in each will have been properly created by the single node we were using before, so we don't have to worry about all of the cluster nodes trying to create the database at the same time. Once the cluster is up, we can verify it is working properly by checking the logs for JGroups messages showing the cluster is well-formed.
You can test out your entire DXPC environment locally if you have Docker installed. Just download the docker-compose configuration from https://github.com/LiferayCloud/stack-upgrade/blob/develop/docker-compose.yml and put it in the root of your Liferay DXPC Workspace folder. Then you can use commands like docker-compose up and docker-compose down to launch everything. It will basically use your images and bring everything up with the local configurations (so liferay/configs/local, for example).
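A minimal local session might look like this (assuming Docker and docker-compose are installed; the raw-file URL is derived from the GitHub link above):

```shell
cd my-dxpc-workspace     # the root of your cloned DXPC workspace
curl -LO https://raw.githubusercontent.com/LiferayCloud/stack-upgrade/develop/docker-compose.yml
docker-compose up -d     # build and start all of the services locally
docker-compose logs -f   # tail the logs to watch the startup
docker-compose down      # stop and remove the containers when done
```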
So this has been a long blog, certainly, but I started out wanting to show how to get started with Liferay DXP Cloud, and I feel like I've done just that - only covered "getting started". There really is a heck of a lot more for you to pick up and learn in your new environment such as how to complete backups and restores, how to apply fixpacks/hotfixes or, better yet, how to update your Liferay (and others) images to later versions, Disaster Recovery (if you've opted for that), ...
I feel like I've barely scratched the surface!
Once you get through the initial verification and setup and get into a good rhythm, you'll eventually see that development/deployment in DXPC is basically a repeat of the following steps: commit (or merge a PR) to the main or develop branch of your repository, let Jenkins build the new Docker images, then deploy the build to the target environment from the DXPC Admin Console and verify it comes up clean.
Really, that's it in a nutshell. When you can embrace it fully, it is just so elegant to get from a commit to fully deployed in basically a few mouse clicks...
Anyway, I hope you find this blog useful. I'll probably have some updates in a couple of days when my friends on the DXPC team read what I've written and start pointing out all of the things I got wrong. If you want some more info on DXPC or you would like a hook up with someone that can give you a demo or even a sales pitch, leave a comment below or even better hit me up on the Community Slack channels. I'll be able to either answer what you want to know or put you in touch with someone who can.