Getting Started with Liferay DXP Cloud

You've made the decision to launch on Liferay DXP Cloud, but how do you really do this?

Introduction

Recently I asked some of my teammates for ideas about what to blog about next. Most of the time I take my inspiration from different clients I work with or questions that come up on the Liferay Community channels, but lately the well has seemed kind of dry. So I went fishing for suggestions.

My friend Andrew Betts, who happens to be in the Liferay DXP Cloud (DXPC) team suggested that I write a blog about, wait for it, DXPC. At first it didn't really grab me; I prefer technical issues where I can get my hands dirty, and at first glance I couldn't tell if I was really going to get to be hands-on or not. Eventually I realized that setting up a DXPC environment could really be technical because of everything that's involved, so the seed Andrew planted finally started to take root and grew into the blog post you're reading now.

What is DXPC?

Liferay DXP Cloud is a hosted and managed version of Liferay DXP. As the client you still get a lot of control over the environment; it just runs in Liferay's cloud, and there is a support team monitoring the cloud to watch for basic problems. Effectively, though, it is just like your on-prem DXP solution: there is still the application server, the database and Elasticsearch, there's just more polish around the devops aspects of building and deploying. There are other cool features such as automatic DR environments, site-to-site VPN support to allow connections into your environment, Dynatrace for APM, ...

I'm not here to sell you on DXPC, but I'm going to pick up just after you've decided to move to DXPC.

I actually have a project that is just starting DXPC onboarding, so as it was getting organized I thought I'd write this blog post and capture all of the details so I'd have all of the steps correct. However, I was only introduced to the project at the point where the client had indicated they wanted to launch on DXPC.

But from the point the decision was made, over 4 weeks went by with nothing happening. Well, nothing from my perspective... Actually there are contractual things that both Liferay and the business need to work out, agree to, and eventually sign off on. Also, being late spring/early summer coming out of Covid-19, key people were out on vacation, and this more than anything else impeded forward progress.

This blog leaves out all of these business and contractual details, mostly because I'm not involved in that aspect. It can be completed in less than 4 weeks' time, but I honestly cannot give a reasonable estimate of how long it may take for your business. I would suggest having the key people available to complete the required activities, plus trying to keep the ball in Liferay's court, to help get through this process in a timely manner.

What happens first?

The first thing Liferay will do is provision your environments. You'll get a form to fill out that will look something like the one below:

There's a bunch of key fields here that need to be defined in order to provision your DXPC environments:

  • Organization Name: This will be your company name.
  • Project ID: This is the name for your DXPC environment that you'll be seeing in the DXPC Console, so pick a value here that will help distinguish between environments when you have multiple environments.
  • Primary & DR Regions: These are the regions where your primary environment and DR environment live (if you have opted for the DXPC DR option). Select a region that is close to your primary audience and, of course, don't pick the same region for both the Primary and DR environments.
  • Admin Account(s): Here you get to enter 1 or 2 accounts which will be your DXPC administration accounts. Note that this is not for administration of your Liferay environments; this just identifies the users who have administrative access to the DXPC console, i.e. the user(s) who can start/stop environments, perform deployments, etc. Best practice is to have 2 admin users so there is always a backup in case the primary administrator has a problem. Note that both users need to provide their GitHub usernames, as these users will be able to access the DXPC workspace repository that will be created in GitHub.
If you are working with Liferay Global Services, I'd recommend setting one of the consultants as an administrator for the duration of your engagement. This will allow your GS team to build and deploy to your DXPC environments as necessary.

Provisioning

After Liferay receives this information, it is passed on to the Liferay DXPC Provisioning Team. This team will be creating a number of new assets for you:

  • A Liferay DXPC Workspace - I've mentioned the Liferay Workspace many times before, and the DXPC Workspace contains that and more. It includes component configurations for the Nginx (web server), Liferay/Tomcat (application server), MySQL (database), and Elasticsearch (search) Docker images. The DXPC Workspace also includes a Liferay Workspace for managing all of your Liferay customizations.
  • A Jenkins account for CI/CD and managing your builds.
  • A Liferay DXP Cloud console account for managing your builds and deployments, viewing your logs, and managing your services and backups.
  • [Optional] A Dynatrace environment for monitoring your DXPC systems.

The provisioning process can take 2-5 days to complete before you will have access to your new DXPC assets.

At the end of the provisioning process, you'll receive 3 emails about your new assets, but the most important one will have the subject line of "Your Name, your DXP Cloud project (project name) is ready". Here's a version in its tiny glory:

 The email basically contains 4 important sections (and 3 informational sections) for accessing your new environments. I'm going to go from the bottom up because, as a developer, this will likely be the order in which you access the environments.

 
The absolute bottom two sections have links to support and Liferay University videos to introduce the DXPC environment to you.


The next one is your "infrastructure" environment link, which contains basically your Jenkins instance. This Jenkins instance is pre-configured with jobs to automagically rebuild your DXPC Docker images from the GitHub repository for the Main and Develop branches. As you commit to or merge PRs into either of these key branches in your repository, Jenkins will take care of completing the build and creating all of the images needed for your DXPC environments. Note that the job only builds the images; it is not going to automagically deploy them to DXPC for you.


The next one up from the bottom is the link to your new GitHub repository. This repository is automatically created for you in the "dxpcloud" organization as a way to transfer the DXPC files to you. You must either move the repo to your own GitLab or Bitbucket repo, or stay on GitHub but use your own organization, and you have to complete this within the first 10-14 days after your environment has been provisioned. The DXPC support team can help you update your environment for the new external repo.

There's a note in this section saying that if you don't get the invitation email to join your GitHub repo, you should send an email to provisioning@liferay.cloud, and this is actually the only section that counsels you to do this. My only guess here is that if any part of the provisioning has failed, it is likely going to be on this particular step. If it happens to you, don't fret about it. Just send an email to the provisioning team and they'll fix you right up.


The next section up has links to the non-production environments, typically DEV and UAT. You can't really click on them at this point because nothing has actually been deployed yet. One thing to note is that the non-prod environments are all protected by Basic Auth at the Nginx level; the credentials you find in this section of the email are only for getting into the non-prod environments. Now you might be asking "Why hide Liferay behind Basic Auth?" Well, Liferay is only going to give you a default environment (when you deploy), so the standard test@liferay.com credentials will be your Liferay admin account. The Basic Auth prevents someone with knowledge of the default Liferay credentials from discovering your non-prod environments and logging in as an administrator to wreak a little havoc. So me, I'd leave them in place, but you can disable it if you want to. I'll share how to do that a little farther down the page...


The next section up in the email is the link to access your DXPC Admin Console. Note that this is completely different from a Liferay admin console, so don't confuse the two. The DXPC Admin Console is where you go to view logs, (re)deploy environments/updates, check the status of system components, ... Basically every activity you do to manage your DXPC environment is going to start from the DXPC Admin Console.

An important aspect to note here: your DXPC administrators can be completely different from your Liferay administrators. Your DXPC admins are going to be your operations team, the ones that monitor systems, perform deployments, reboot servers, etc. That's completely different from your Liferay administrators, who are managing sites and maybe even users and content. Can these two types of administrators be the same people? Sure, but they don't have to be if you don't want them to be.


The final section, at the very top, is the "Accept Our Email Invitations" block. Liferay will be sending you separate emails for each environment that was created for you (DEV, UAT, PROD, maybe a DR, Infrastructure, etc.). If you don't get these emails, check your spam folder (and if you find them there, take a moment to whitelist the sender so future DXPC emails get delivered correctly to your inbox).

Verification and Setup

So Liferay has just dropped all of these new toys off for you to play with, but where do you start?

Remember that we must copy our GitHub repo out of the GitHub "dxpcloud" organization within the first 10-14 days, and we should do this as soon as possible. Either GitLab, Bitbucket or your own GitHub organization repo is fine, and the DXPC support team will be happy to help us fix things up once we've finished the move. The rest of this blog assumes this step has been completed.

Me, I always want to allocate time to verify that everything is working correctly and even get my initial environments created.

Create the environments? Hasn't Liferay already done that? No, not really. The DXP Cloud team has provisioned the environments for you, but as of yet there are no databases, no application servers, no Elasticsearch servers, ... So yeah, part of our verification process is going to be to get these initial services created for the first time.

The first thing you need to do is get access to the GitHub repo. With this in place, you want to figure out how you're going to be updating the repo. Here at Liferay Global Services, we typically fork this repo to give us a private place to work and, as we complete tasks, we send PRs back to this repo for merging. This lets each developer work in their own environment, free of merge conflicts during development activities; merge responsibilities only come into play when prepping the PR for submission. It works really well for us, and we encourage clients to follow the same pattern. It is just a git repo, though, so you are free to manage it any way you want and any way you are used to.

Ultimately, though, you need to clone this repo (or your fork, if you are following our suggestion) to your local system, as this is where all of your environment configuration stems from.
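If you do go the fork route, the day-to-day git setup is just the usual fork workflow. Here's a minimal sketch; the organization and repository names below are placeholders, not your actual provisioned repo:

# clone your fork locally (URLs are placeholders)
git clone git@github.com:your-org/your-dxpc-workspace.git
cd your-dxpc-workspace

# track the shared repo as "upstream" so you can pull in changes merged by other developers
git remote add upstream git@github.com:client-org/your-dxpc-workspace.git
git fetch upstream

From there it's regular branching, committing and sending PRs against the upstream repo.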

DXP Cloud Workspace

This new repository you have just cloned contains what we refer to as the DXPC Workspace. The workspace has configuration for each one of the components of a typical Liferay DXP environment - database, search, Liferay and web servers, plus goodies for backups and CI handling.

Here's the basic folder structure that you'll get in your new repository:


Each of the root components has a special configuration file, the LCP.json file. This is a JSON file which contains configuration details specific to each component. When Jenkins is building out the environments, the details from the LCP.json files will be used as the primary definition for each service component. Some of the content you'll see in the LCP.json file will be repeated across each service, and some is unique to the specific service.

Here, for example, is a snippet from the LCP.json file from the database folder:

{
  "kind": "Deployment",
  "id": "database",
  "image": "liferaycloud/database:4.2.2",
  "memory": 1024,
  "cpu": 2,
  "scale": 1,
  "ports": [
    {
      "port": 3306,
      "external": false
    },
  ...

Some of this may not make any sense yet, and most of the time you won't need to tamper with the file contents at all because it will contain values previously agreed upon in the contracts. Some of it you may need to change at some point (e.g. if Liferay provides a new image version for the database, you might need to change the image version here), but from the initial provisioning standpoint you should have reasonable starting values.

Each of the components has a configs directory with environment-based subdirectories; I've only expanded this portion for the backup component in the image above, but you will find this same structure on all of the components. As developers we might be familiar with building an artifact specifically for a target environment such as DEV or PROD, or with building a single artifact and keeping the environment configuration external to it so the same artifact can be used in both DEV and PROD.

The DXP Cloud workspace handles things a little differently. The configuration for all environments is part of the build, but environment variables at run time tell the Docker container which configuration set to use. The database component, for example, builds into your database image, and that same image is used to populate your DEV, UAT and PROD DXPC environments; each environment starts the service with a different environment setting, so when you start DEV it will start the database service using the configs/dev configuration. The configs/common directory is special in that it is where you provide configuration that applies to all environments.
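To make that a little more concrete, here's a rough sketch of how a single component's folder is laid out. The exact environment folders you see will depend on what was provisioned for you, so treat this as illustrative only:

database/
  LCP.json
  configs/
    common/   (applies to every environment)
    dev/
    local/    (used when running locally via docker-compose)
    uat/
    prd/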

This can be a little hard to get used to, but pretty soon it will make sense and the good news is that Liferay does this consistently across each of the service components, so you don't have to learn a new way of configuration each time.

The Liferay Workspace

The one folder I didn't expand in the listing above is the liferay component. I didn't do this because if you're already a Liferay developer, you already know what this folder contains - a typical Liferay Workspace (so all of my other blogs about the workspace and how you can use it and the features it has, etc., all of those still apply to the Liferay Workspace that is part of the DXPC Workspace).

The one important addition here that I want to highlight is the LCP.json file. This file is probably the one you're going to be changing the most often because this one controls the cluster and individual node sizing and other important OS-level settings. As this is such an important file, I'm going to include the starting one that you'll be given:

{
  "kind": "Deployment",
  "id": "liferay",
  "image": "liferaycloud/liferay-dxp:7.3-4.2.1",
  "memory": 8192,
  "cpu": 8,
  "scale": 1,
  "ports": [
    {
      "port": 8080,
      "external": false
    }
  ],
  "readinessProbe": {
    "httpGet": {
      "path": "/c/portal/layout",
      "port": 8080
    },
    "initialDelaySeconds": 120,
    "periodSeconds": 15,
    "timeoutSeconds": 5,
    "failureThreshold": 3,
    "successThreshold": 1
  },
  "livenessProbe": {
    "httpGet": {
      "path": "/c/portal/layout",
      "port": 8080
    },
    "initialDelaySeconds": 480,
    "periodSeconds": 60,
    "timeoutSeconds": 5,
    "failureThreshold": 3,
    "successThreshold": 1
  },
  "publishNotReadyAddressesForCluster": false,
  "env": {
    "LCP_PROJECT_LIFERAY_CLUSTER_ENABLED": "true",
    "LIFERAY_JVM_OPTS": "-Xms2048m -Xmx6144m"
  },
  "dependencies": [
    "database",
    "search"
  ],
  "volumes": {
    "data": "/opt/liferay/data"
  },
  "environments": {
    "infra": {
      "deploy": false
    },
    "prd": {
      "cpu": 12,
      "memory": 16384,
      "scale": 2,
      "env": {
        "LIFERAY_JVM_OPTS": "-Xms4096m -Xmx12288m"
      }
    },
    "uat": {
      "cpu": 12,
      "memory": 16384,
      "scale": 2,
      "env": {
        "LIFERAY_JVM_OPTS": "-Xms4096m -Xmx12288m"
      }
    },
    "dr": {
      "cpu": 12,
      "memory": 16384,
      "scale": 2,
      "env": {
        "LIFERAY_JVM_OPTS": "-Xms4096m -Xmx12288m"
      }
    }
  }
}

There's a lot going on here, right? So let's pick out some of the important parts:

We start with the default system configuration. Here we can see that the default is an 8g system w/ 8 CPU but only a single server, and we also declare that the only port on the service will be port 8080 and that it is not publicly available (since external is false).

Next is the definition of the "readiness probe" and "liveness probe". These are what the DXPC monitoring will use to verify that the environment is ready and able to serve traffic.

The env section is going to be very important; this is where we define environment variables that will be set within the OS. We can see the LIFERAY_JVM_OPTS environment variable being set; it is passed into the runtime container and, when Liferay/Tomcat is started, is used as the JVM options for the instance. So for our 8g system, we're going to let Liferay use 6g of that space. We can use the env section to define additional environment variables that we want to pass into the image. Liferay also allows environment variables as an alternative to a portal-ext.properties file for property overrides: environment variables following a specific naming format can set portal properties (you can find the right environment variable name for each portal property by checking the portal.properties file in the Liferay source), so anything you could set in portal-ext.properties you could instead set via environment variables.
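As a hedged sketch of what that can look like, here's the same env block with one extra property override added. The LIFERAY_WEB_PERIOD_SERVER_PERIOD_DISPLAY_PERIOD_NODE name (for the web.server.display.node property) is only meant to illustrate the naming convention, where periods in the property name become _PERIOD_; confirm the exact variable name against portal.properties before relying on it:

  "env": {
    "LCP_PROJECT_LIFERAY_CLUSTER_ENABLED": "true",
    "LIFERAY_JVM_OPTS": "-Xms2048m -Xmx6144m",
    "LIFERAY_WEB_PERIOD_SERVER_PERIOD_DISPLAY_PERIOD_NODE": "true"
  },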

The dependencies section lists the services that the Liferay container depends upon, namely the database and search (Elasticsearch). The volumes section defines the shared external volumes; in this case it is the Liferay data volume.

The environments section has environment-specific override values. The infra environment is your Jenkins service, so deploy is set to false so you don't get a Liferay/Tomcat server in your infrastructure environment.

PROD, UAT and DR will be pre-populated with values from your contract; in this example we opted for a 16g, 12 CPU cluster of 2 nodes in each of these environments, so the defaults are overridden with the right values.

One verification step I like to do here is to ensure that the memory value and the JVM memory settings are aligned. Here, for example, the system is 16g but only 12g is allocated to Liferay. This is going to leave 4g for the OS and any other services I need to run there; I might feel that 4g is really wasteful, so perhaps the JVM should be bumped up to 14g instead of only 12g. Your call, but stay below the memory value and be sure to leave room for the OS runtime.
If there is a change I make to this file, it will most often be to the "scale" property. There are times when you will want to force launching a single instance only, not the full cluster. When you need to do this, set the scale to 1, as in the sketch below (you'll need to commit to the main repo, wait for Jenkins to complete the build, then go into your DXPC Console and deploy the new build, but it will be limited to a single node). Remember that if you do change this to 1, restore it back to the original value when you're done or you'll continue to have only 1 node running...
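As a quick sketch (the sizing values here are just copied from the example above, and your contracted values will differ), forcing UAT down to a single node would look something like this inside the environments block of liferay/LCP.json:

    "uat": {
      "cpu": 12,
      "memory": 16384,
      "scale": 1,
      "env": {
        "LIFERAY_JVM_OPTS": "-Xms4096m -Xmx12288m"
      }
    }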

First Time Setup

So after we've verified that all of our files are there, checked our LCP.json files and found them to be in line with our contracts and expectations, we're ready to make our initial changes in preparation for creating our environments.

These are the things that I'm going to [possibly] do in the DXPC Workspace:

Remember how I said you could disable the Basic Auth settings for the non-prod environments? If you want to do this, go to webserver/configs/env/conf.d/liferay.conf and comment out or remove the lines with the auth_basic prefix. If you want to keep the basic auth configuration but simplify the password, you can point to a different file that has the value(s) you want to use. Follow these steps for each environment you want to change, replacing env in the path with the environment name (dev, uat, ...); a sketch of what this looks like follows below.
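For illustration only (your actual liferay.conf will have its own directives, and the .htpasswd path here is a placeholder), disabling Basic Auth amounts to commenting out lines like these:

# webserver/configs/dev/conf.d/liferay.conf (illustrative)
# auth_basic "Restricted";
# auth_basic_user_file /etc/nginx/.htpasswd;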

In the Liferay Workspace, I'm going to go to the configs/common folder and create my portal-ext.properties, and I'm going to use https://liferay.dev/blogs/-/blogs/professional-liferay-deployment#properties as my starting point. I want to define the most correct and complete portal-ext.properties before the first launch. I will typically only do the common portal-ext.properties file and then handle environment-specific overrides in the LCP.json env area. I especially want to set the default admin password to something other than "test" so anyone who stumbles upon my Liferay environment will not be able to log in as an admin (I'll change it again in the UI later on, so the password in the file is only temporary, but it is a security aspect I feel is important).
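As a minimal sketch of that last point (double-check the property names against the portal.properties for your DXP version, and obviously pick your own values), the admin-related overrides could look like:

# liferay/configs/common/portal-ext.properties (illustrative)
default.admin.password=Som3thingB3tterThanTest
admin.email.from.name=Portal Administrator
admin.email.from.address=admin@example.com

Keep in mind that the default.admin.* properties only come into play when the default admin account is first created, which is exactly the first-launch scenario we're preparing for here.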

I actually spend a lot more time on my portal-ext.properties file before first launch than most others do; I personally feel that getting these values right the first time means I won't have old data or invalid configuration in my initial instance, and that it is a better foundation for building my Liferay solution than starting with a basically empty properties file and tweaking later.

While I'm in the LCP.json file of the liferay component, I'm going to set the scale to 1 on all of the environments (the same kind of tweak shown in the earlier sketch). If you try to launch with 2 or more Liferay nodes, each will get to the new, empty database at the same time and will try to create all of the initial Liferay tables, and if you check the logs for each node you'll see "Duplicate table Xxx..." sorts of failures. For the very first time the DB is created, I want to restrict the startup to just 1 node so the cluster isn't trying to create the database at the same time. After my environments are created, I'll change the LCP.json file back to the right cluster size, but for first launch you really can't beat just setting the scale to 1.

I'm also going to check out the deployment checklist for the version of Liferay I'm using so I can apply the recommendations it has, except for the JVM parameters. The DXPC image actually already incorporates the CATALINA_OPTS for you based on the deployment checklist recommendations. You can, of course, override or replace them all if you need to, but it is a good starting set of JVM parameters (after I'm all done with the build-out and prior to go-live, I'll do some load testing, profiling and tuning of the JVM parameters, but by following the deployment checklist I'll have a pretty decent starting point).

What's Missing? If you have done a standard Liferay DXP installation before, you might have noticed that there is no Activation Key / License to worry about. That's normal for DXPC; everything is provisioned from the contract agreement, so the build process will automagically inject a license for you, and you don't need to get one separately from Liferay Support.

When I get all of these changes done, I'm going to commit and push them to my repository. If I'm using forks like Liferay recommends, I'll send my PR to the main fork and get it merged into the main branch.

I'll then check out my Jenkins and verify it is able to build my whole DXPC workspace; I want to see some success here before I try to deploy the environments. If I do face issues here, I'm going to resolve them before getting to the next step...

DXPC Admin Console

When the build is done, our next step is to move over to the DXPC Admin Console. When you first log in, you'll see a view like:


When you first land on the console, every environment will appear like the 2nd item here does: they'll all say "no services" next to them because, even though the DXPC team has provisioned our environments, nothing has been populated into them.

Starting from the DEV environment, we're going to click into an environment which will show us the detail page:


From here we can click on the Builds link on the upper toolbar towards the right side:


Your list of course will be different and, if we're following the process I've been laying out, we would only have one build available to us. We'll click on the pea-pod menu on the right of the build that we want to deploy:


We'll then pick "Deploy build to..." to move to the actual deployment:


Here we need to select the environment we want to deploy to. We'll start with DEV, but eventually we'll hit them all. After selecting DEV, we can click the "Deploy Build" button to start deploying out the environment.

At this point, all of our system components are going to be created per our DXPC Workspace: the LCP.json configurations, the environment configurations, and the Docker images that Jenkins created for us. It will create the backup, database, Liferay/Tomcat, search and webserver (Nginx) component services. On the status page we'll see all of the services listed and, as startup completes, they will change from the gray dancing dots over to a pretty, green Ready label.

All of the status indicators are reliable except for the Liferay service. It will always show the green Ready label before it is actually done starting the portal.

If we click on the liferay service, we can actually see the log messages from Liferay:


We can also use this page to get to the Linux command line (Shell), view some basic metrics, see (and change) environment variables, and check the custom domains.

When the environment is up and ready, we should also review the Network page (available from the hamburger menu):


The key parts here are the Ingress endpoint and the Address list.

The ingress load balancer IP is the address that you use for forwarding your DNS... So if you own www.example.com and you're hosting it on DXPC and you're given the IP address of 34.1.2.3, you will configure www.example.com to resolve to the 34.1.2.3 IP address. This is obviously a simple case, as your own network will likely want to direct different routes to different hosts and you'll have to work out a route-based redirect, but hopefully this gives you the info to use your ingress address correctly. As this is my dev server, I would likely want to use dev.example.com for my domain, so I'd have to handle routing that name over to 34.1.2.3, as in the record below.
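In plain DNS terms (using the example values from above; your DNS provider's interface will look different, and the TTL is arbitrary), that mapping is just an A record:

dev.example.com.    300    IN    A    34.1.2.3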

The addresses show the ports which are open externally and the names used to access the services. Primarily you're going to look at the webserver name, because that will get you to your open Nginx service and route traffic internally (bypassing the ingress load balancer). The non-prod links that I pointed out in the provisioning email near the beginning of this blog post point to the ingress load balancers for the non-prod environments, so those are typically the one(s) that you'd use to access the non-prod environments.

Back on topic to our First Time Setup and using the DXPC Admin console...

So at this point we have finished creating the DEV servers, we've deployed the bundle we built in Jenkins, everything has started, and we've checked the network endpoints to review the details.

The final two things we want to do are:

  1. Check the logs (especially the Liferay logs) and verify there were no obvious failures. You might not have any obvious failures, but if you do, go to the provisioning email; at the bottom you'll find the link to open a Liferay Support ticket. In the environments I was setting up while capturing these screen prints, one of my database services in one environment failed to be created. It was nothing I had done; my DXPC Workspace was clean and the build was good, there was just a problem in DXPC creating my database. I opened a support ticket on it, it got assigned to a DXPC support person, and they helped resolve the problem quickly. I had to delete my database and then redeploy to get it created, but they helped me through these steps, so it ended up not being a big deal at all.
  2. Actually log into the environment. If using Basic Auth, use the credentials provided in the provisioning email to get to Liferay, then use the Liferay admin credentials (that hopefully you defined in your portal-ext.properties so it is not test@liferay.com/test) and verify that it looks like a functioning yet vanilla Liferay server.

When we complete these two tasks, we can say that the DEV environment is good to go, we've verified everything is working.

Why did we do all of this pretty much out of the gate? Well the DXPC team has recently finished provisioning our environments. Should something have gone wrong with the setup, it will still be fresh in their minds and it should make it easier for them to help as necessary. Plus they're going to want to know that we've been able to get started (like a waiter coming to see how you're enjoying the meal after you've taken a couple of bites) so we'll be able to answer affirmatively.

But, now that DEV is done, we want to repeat this process for all of our other allocated environments. Do each of them, one at a time: do the deploy, the startup, the network review and the testing, make sure the environment is good to go, then move on to the next one.

And yes, even do this to PROD. Sure it will only be a vanilla Liferay DXP, but our goal at this point is not really to test the customizations that we're going to be working on and eventually deploying to PROD, our goal here is to test all of the processes, to verify that we can deploy to every environment, including PROD, and that all of the pieces start cleanly and serve up even the vanilla traffic.

After we've finished PROD, I'm going to go back to my DXPC Workspace, into the liferay/LCP.json file, and restore the scale for my clustered environments, push that to the repo, verify Jenkins did the build, and then deploy the new cluster image to my multi-server environments. The database in each will have been properly created by the single node we were using before, so we don't have to worry about all of the cluster nodes trying to create the database at the same time. Once the cluster is up, we can verify it is working properly by checking the logs for JGroups messages showing the cluster is well-formed.

Local Testing

You can test out your entire DXPC environment locally if you have Docker installed. Just download the docker-compose configuration from https://github.com/LiferayCloud/stack-upgrade/blob/develop/docker-compose.yml and put it in the root of your Liferay DXPC Workspace folder. Then you can use commands like docker-compose up and docker-compose down to launch and tear down everything. It will basically leverage your images and bring everything up using the local configurations (so liferay/configs/local, for example).
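A quick sketch of that workflow, run from the root of the DXPC Workspace where the docker-compose.yml now lives (the "liferay" service name in the logs command is an assumption; check the actual service names defined in the compose file):

docker-compose up -d              # build and start all of the services in the background
docker-compose logs -f liferay    # tail the logs for one service while it starts up
docker-compose down               # stop and remove everything when you're done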

Conclusion

So this has been a long blog, certainly, but I started out wanting to show how to get started with Liferay DXP Cloud, and I feel like I've done just that - only covered "getting started". There really is a heck of a lot more for you to pick up and learn in your new environment, such as how to complete backups and restores, how to apply fixpacks/hotfixes or, better yet, how to update your Liferay (and other) images to later versions, Disaster Recovery (if you've opted for that), ...

I feel like I've barely scratched the surface!

Once you get through the initial verification and setup and get into a good rhythm, you'll eventually see that development/deployment in DXPC is basically a repeat of the following steps:

  1. Write code, push to repo.

  2. Log into DXPC Console, deploy build to environment(s).

Really, that's it in a nutshell. When you can embrace it fully, it is just so elegant to get from a commit to fully deployed in basically a few mouse clicks...

Anyway, I hope you find this blog useful. I'll probably have some updates in a couple of days when my friends on the DXPC team read what I've written and start pointing out all of the things I got wrong. If you want some more info on DXPC, or you would like to be hooked up with someone who can give you a demo or even a sales pitch, leave a comment below or, even better, hit me up on the Community Slack channels. I'll be able to either answer what you want to know or put you in touch with someone who can.

Before I go, a note to clients who are currently using Liferay DXP on-premises and would prefer to move to this DXP Cloud goodness: not only can I help connect you with someone to get you the DXPC details, but I can also share info on a Migration package that Liferay Global Services provides to help you migrate your on-prem Liferay DXP environment straight to DXP Cloud, so we can get you there and you'd barely need to lift a finger. Well, you'll probably have to do more than lift a finger, but you wouldn't have to do the migration on your own.