Generating Clients for Liferay Headless APIs and Objects

Get a head start consuming Liferay Headless APIs and Objects by generating the client code...

If you're looking for the simplified process, you can skip ahead to the My Process Review section near the end.

Introduction

Lately my focus has been on development of React applications consuming Liferay Headless APIs.

When you only need to invoke a couple of endpoints, i.e. you're accessing a single Object endpoint using the GET, POST, PATCH, PUT, and DELETE methods, you end up writing some variation of the same kind of calls using fetch() in Javascript or one of the HTTP client libraries in Java. It's a bit of grunt work, but for a small set of endpoints it is over quickly.

Once your Object graph grows, or once you start invoking many of the other Liferay Headless APIs such as Headless Delivery, User Admin, List Type Admin, etc., hand-writing the client code necessary to invoke the APIs can be quite painful.

Wouldn't it be nice if there was a tool available that could generate the client code for us so we could jump into using those APIs instead?

Turns out there is a tool for this... Let's learn about the tool and check how we can use it.

OpenAPI Generator

The OpenAPI ecosystem has a command line tool, openapi-generator, which generates client code based on the OpenAPI YAML file that defines the paths and the components.

The GitHub repo for OpenAPI Generator provides the instructions for using the tool via Docker, but if you're on a Mac you can also use Homebrew to install the CLI.

To use the generator, you're going to be using some variation of the command openapi-generator generate -i openapi-yaml -g type -o output-directory. The openapi-yaml argument is either the path to a local OpenAPI YAML file or a URL to retrieve the YAML file from. The type argument is the type of client to generate; the ones I typically use are java, javascript, or typescript-fetch. The output-directory argument is where the client project is going to be generated.

There's actually a long list of supported generator types, so if you're building a client to use outside of Liferay based on Go, Python, or one of many other languages, there may be a generator type already available for you. My work is primarily in Java and Javascript, so those are the generators I'm featuring in this post.

In this blog I'm going to use the Docker version of the tool to be most compatible with all platforms, but in my everyday life I have installed the CLI per the Homebrew instructions, so I know that path works too even though I'm not going to be using that here.

Getting the OpenAPI YAML Files

As mentioned above, you can invoke the generator and provide as the input a URL or a path to a local YAML file.

I prefer using the URL myself, but the advantage of downloading the YAML file locally and then using it to generate the client is that the YAML file can be placed under revision control.

If you want to use the URL, though, you need to have the Basic Auth header value so the generator can access and retrieve the file. A convenient tool for this is https://www.debugbear.com/basic-auth-header-generator; it can give you the value just by plugging in your credentials.

If you're using the test@liferay.com/test credentials, I can tell you that your Basic Auth header value is going to be:

Authorization: Basic dGVzdEBsaWZlcmF5LmNvbTp0ZXN0
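That value is just the base64 encoding of user:password, so if you'd rather not paste credentials into a third-party site, you can compute it yourself. A quick Node sketch:

// Basic Auth is base64('user:password')
const value = Buffer.from('test@liferay.com:test').toString('base64');

console.log(`Authorization: Basic ${value}`);
// -> Authorization: Basic dGVzdEBsaWZlcmF5LmNvbTp0ZXN0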

The next thing that you need is the URL to get the YAML file. For this, I recommend using the Liferay API Explorer to find the URL until you understand how to manufacture the URL yourself.

When running a local bundle, point your browser at http://localhost:8080, log in as an admin, then set the address to http://localhost:8080/o/api. Since you're logged in as an admin, the API Explorer should render without any issue.

In the API Explorer, pick an application that you want to generate the client for, then from the available paths, find the default one and expand it. Enter yaml for the type parameter and click the Execute button.

In the output window you'll see the actual YAML for this application, but of interest to us is the Request URL input because it shows the URL necessary to retrieve the data.

As can be seen in the image below for my Vacation object, I found the default path, entered yaml for the type, and hit the Execute button, which gives me the Request URL http://localhost:8080/o/c/vacations/openapi.yaml; that's what I really need.

When you have the Basic Auth header and the Request URL, you're ready to actually start generating the client code...

Generating the Client Code

We'll start with the Java code...

And we'll do that using the command:

$ docker run --rm -v "${PWD}:/local" openapitools/openapi-generator-cli generate \
  -i http://host.docker.internal:8080/o/c/vacations/openapi.yaml \
  -a "authorization:Basic dGVzdEBsaWZlcmF5LmNvbTp0ZXN0" \
  -g java \
  -o /local/java-vacation-client

Using the current directory as the mount, the generator creates a new project named java-vacation-client (in my case), complete with build scripts, metadata, documentation, and of course the code necessary to build the client library.

Well, almost. See, I'm using Colima, and I had to modify my config to add a writable mount so the Docker container could actually create the directory. In ~/.colima/default/colima.yaml I had to add this declaration to the mounts section:
 
mounts:
  - location: ${PWD}
    writable: true

To generate a Javascript client, change the argument to -g javascript, and to generate a TypeScript client, change the argument to -g typescript-fetch.

Take some time to review and get familiar with the generated client code, because if you're like me, you're going to be making a whole lot of changes...

Changing the Client Code

Yes, we need to change the generated code for two reasons...

The first reason is that the code is blissfully unaware of the security aspects necessary for calling headless. The Java client, for example, uses OkHttp for the lower-level HTTP calls, but it (obviously) does not know how to get the current user's authentication token and attach it via the x-csrf-token header when invoking Liferay (nor does it have anything for OAuth2 in case you're building an external app that isn't running within Liferay). Same with the Javascript and TypeScript clients; they don't attach the tokens needed for access.

The second reason is that, well, who uses only one application?

My current React app needs to hit six different Objects (so 6 different applications), Picklists (so the List Type application), and I'm also using Categories (so 2 more applications for vocabularies and categories) and User relationships (the Admin User application)... That's at least 10 different applications, 10 different YAML files, 10 different runs of openapi-generator, and therefore 10 different client folders.

So yes, I recommend modifying the generated code to build in support for passing Liferay security (in whatever manner is appropriate for your app), merging all of the different clients' code together, and moving the result into your app.

Handling Authorization

The authorization stuff is pretty easy if you're running within Liferay. In this case you want to leverage the current user's authentication token. For Java code, Liferay has a convenient method, com.liferay.portal.kernel.security.auth.AuthUtil.getToken(HttpServletRequest), that can give you the value to use for the x-csrf-token header. For the Javascript and TS solutions, you can use the value of the Liferay.authToken JS variable.

The code changes are pretty easy for this too. The clients have a map of default headers in the ApiClient source file, so all you need to do is update the headers to include the security auth stuff. In one of my Javascript clients, I changed the defaultHeaders value to:

this.defaultHeaders = {
  'x-csrf-token': Liferay.authToken,
  'accept': 'application/json',
  'Content-Type': 'application/json',
};

The Java and TypeScript ApiClient files can be updated in a similar way.

Adding these as default headers ensures the security token is passed with every request and you don't have to worry about manually doing this in each individual use of the client.
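With that change in place, every call made through the generated client picks up the token automatically. For example (the VacationApi class, its promise style, and the getVacationsPage method are assumptions based on my Vacation Object's client):

import { VacationApi } from './client';

// The api uses the shared ApiClient and its default headers, so the
// x-csrf-token rides along without any per-call header wrangling.
const api = new VacationApi();

api.getVacationsPage().then((page) => console.log(page.items));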

Now, if you are building an app that will live outside of Liferay, you'll need to complete the OAuth2 authorization process to get an auth token to include with your requests. This involves calling the OAuth2 endpoints with client tokens and secrets, receiving a token, using it in the ApiClient headers, and refreshing the token as necessary to keep it valid. Basic Auth is also an option, but it is a really, really insecure option (only for PaaS or self-hosted, not allowed in SaaS) that you should avoid like the plague.
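As a rough sketch of what the token fetch can look like (this assumes Liferay's standard /o/oauth2/token endpoint with the client credentials grant; the client id and secret are placeholders from a hypothetical OAuth2 application defined in the control panel):

async function fetchToken() {
  // Exchange the client credentials for an access token.
  const response = await fetch('http://localhost:8080/o/oauth2/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'client_credentials',
      client_id: 'my-client-id', // placeholder
      client_secret: 'my-client-secret', // placeholder
    }),
  });

  const { access_token, expires_in } = await response.json();

  // Drop the token into the ApiClient default headers and schedule a
  // refresh before expires_in (seconds) elapses.
  return { access_token, expires_in };
}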

Merging Clients

While not necessarily required, I recommend this effort.

A clear reason not to merge clients, though, is it makes regenerating the client code harder because you have to regenerate it and then repeat your merge activities. Definitely a downside for my recommendation, especially when you're in the heat of development and your Objects are in flux...

But there are reasons (good ones I hope) to merge the clients...

Primarily it reduces duplication. Each generated client is going to have the same set of core files and dependencies, such as the ApiClient source file, the OkHttp dependency for the java client, etc. Merging the clients can eliminate this duplication.

It simplifies the app too. Having an app that uses 10+ clients only adds to the confusion, especially considering the duplication in the clients.

In addition, I don't necessarily want to build these as libraries/dependencies that I need to publish (either publicly or privately). If I'm going to use the Java client within an OSGi module, I would really rather refactor it into api and service modules so I can encapsulate the dependencies (such as OkHttp) and present one large set of services.

And finally, the generated methods for the individual calls get their names from the OpenAPI declaration and, while descriptive, are a bit muddled. Rather than renaming those methods (boy, I'd like to), I recommend encapsulating them: expose friendly names, and let the implementation call the internal muddled names. As an example of the muddled names, when you have Objects with complex relations you can find methods with names like deleteCertificationExamExamQuestionsCertificationExamQuestion, and I defy you to look at that and understand what exactly you'd be deleting when you invoke the method.
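Here's a sketch of that kind of encapsulation in Javascript (the import path and the method signature are assumptions based on my own generated client):

// Hypothetical import from the generated client code...
import { CertificationExamApi } from './client';

const api = new CertificationExamApi();

// A friendly name the rest of the app can understand...
export function removeQuestionFromExam(examId, questionId) {
  // ...delegating to the muddled generated method.
  return api.deleteCertificationExamExamQuestionsCertificationExamQuestion(
    examId, questionId);
}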

In the Java client, there are a bunch of methods that complete the same activity but are named slightly differently depending upon how much control you need. For example, there's patchVacation, patchVacationWithHttpInfo, patchVacationValidateBeforeCall, patchVacationCall, and patchVacationAsync. While that's fine for a low-level library, I'm likely not using (or even exposing) all of these directly in my app and wouldn't want them all to be public.

Deciding to merge clients is actually the easy step. The harder step is actually doing the merge...

First you have to compare all of the duplicate files (such as the ApiClient file), check the differences, and resolve them. In the ApiClient example, you find that the URL is different, and to merge you need to change it to the least common denominator. But making that change forces you to go to the non-duplicate files and change the URL paths each uses. In the Java client's VacationApi class, I would have to update all of the localVarPath variables to include the portion stripped from the ApiClient URL.

And then there are changes to the non-duplicate files, such as changing method names. A more complicated situation occurs when you have non-duplicate files (meaning they are based upon the YAML file rather than being project template duplicates) that are kind of duplicates...

I found this while merging client code from Headless Admin Taxonomy... The Creator model object there is slightly different than the Creator used by other endpoints, so it can take some comparison to know what to keep from one, the other, or both.

This also happens when dealing with relationships. Say, for example, Student has a 1:many relationship to Classes: since they are separate Objects, they have their own applications and therefore generate different clients, but each YAML file knows and exposes a little bit about the other's components (the Student YAML exposes that it has a Class component, but without all of the field details of the Class component from the Class YAML file, which in turn has its own Student component). When merging these into a single module, you again have to decide what to keep from one, the other, or both.

Alternatives to Manual Merging

Now maybe you're just reading through the blog post or maybe you're actually doing this in a real project...

Nonetheless, by the time you get here, you're going to be like "Boy, this is a lot of nasty work..." and you'd be absolutely right.

Personally I kind of think it's important to understand this effort, but that doesn't mean I want to keep doing it manually myself.

And as it turns out, a lot of others (completely outside of Liferay/Headless) have faced a similar problem: they needed to use multiple applications (YAMLs) but wanted them joined into a single file, for various reasons.

Because of that, there are actually tools out there that know how to combine OpenAPI YAML files into a single file, and that is going to help us merge the YAML files so that, when we generate a client, we won't face the manual merge nightmare.

I'm going to introduce one here that I use, but you might want to investigate others for features or capabilities you need that are missing from my recommendation.

To use my tool, we have to change our process just a bit, but not too much...

Downloading the YAML Files

First, we have to download all of the separate YAML files manually. Fortunately the API Explorer gives us the curl commands for each one (it's right above the Request URL field in the default section we were in above), so we don't have to come up with this on our own.

One thing to watch out for is 403 errors. I got them when I tried to leverage the x-csrf-token as provided in the curl command text area, but when I switched to using the basic auth header I was good.

Here are the commands I used to grab a number of YAML files:

curl -X 'GET' 'http://localhost:8080/o/c/vacations/openapi.yaml' \
  -H 'accept: application/json' -u test@liferay.com:test  -o vacations.yaml
curl -X 'GET' 'http://localhost:8080/o/headless-admin-user/v1.0/openapi.yaml' \
  -H 'accept: application/json' -u test@liferay.com:test -o admin-users.yaml
curl -X 'GET' 'http://localhost:8080/o/headless-admin-list-type/v1.0/openapi.yaml' \
  -H 'accept: application/json' -u test@liferay.com:test  -o admin-list-type.yaml
curl -X 'GET' 'http://localhost:8080/o/headless-admin-taxonomy/v1.0/openapi.yaml' \
  -H 'accept: application/json' -u test@liferay.com:test -o admin-taxonomy.yaml

Identifying Conflicts

Now that I had the files, I thought I was ready to join them and move on:

$ npx @redocly/cli join -o service.yaml vacations.yaml admin-list-type.yaml \
  admin-taxonomy.yaml admin-users.yaml
Conflict on paths => operationIds : getOpenAPI in files: 
  vacations.yaml,admin-list-type.yaml,admin-taxonomy.yaml,admin-users.yaml
Conflict on paths => /v1.0/openapi.{type} : get in files: 
  admin-list-type.yaml,admin-taxonomy.yaml,admin-users.yaml
Conflict on components => schemas : Creator in files: 
  vacations.yaml,admin-taxonomy.yaml,admin-users.yaml
Conflict on components => schemas : UserGroupBrief in files: 
  vacations.yaml,admin-users.yaml
Conflict on components => schemas : Vacation in files: 
  vacations.yaml,admin-users.yaml
Conflict on components => schemas : UserAccount in files: 
  vacations.yaml,admin-users.yaml
Conflict on components => schemas : WebUrl in files: 
  vacations.yaml,admin-users.yaml
Conflict on components => schemas : CertExamUserAssignment in files: 
  vacations.yaml,admin-users.yaml
Please fix conflicts before running join.

Like I said, I thought I was ready...

The tool I'm recommending (and just used above) is Redocly CLI. It is an extensive tool with lots of capabilities around YAML files in general and OpenAPI YAMLs specifically, but the command that we're interested in is the join command.

The join command (marked as a beta feature by the Redocly team, but it has worked quite well for me) is used to merge separate OpenAPI YAML files into a single file. There are many other reasons to want to do this, but my own reason is simply that I want to generate one client that I can use for all of the applications I need instead of having many individual clients to worry about.

From the output of the CLI above, there are two types of conflicts between some or all of the YAML files. Path conflicts are conflicts in the endpoints; these can come from duplicate path names or operation ids. Component conflicts arise when the component declarations found in more than one of the YAML files do not exactly match each other.

In order to proceed, both types of conflicts need to be resolved.

Resolving Path Conflicts

Although only these two path conflicts were reported by the join command, there are a lot of unreported conflicts that have to be cleaned up.

You see, each YAML file indicates the server it is for, but the server URL includes path information. If you were to hand-delete the /openapi conflict above and fix the component conflicts, you'd end up with one file, but it would start off with these details:

openapi: 3.0.1
info:
  title: Object
  version: v1.0
servers:
  - url: http://localhost:8080/o/c/vacations/
  - url: http://localhost:8080/o/headless-admin-list-type/
  - url: http://localhost:8080/o/headless-admin-taxonomy/
  - url: http://localhost:8080/o/headless-admin-user/
paths:
  /batch:
    put:

This is truncated, but it allows us to see the issue up close... The merged file lists 4 servers, one for each of the YAML files that we processed, but the paths are still the basic ones from the original files...

The /openapi conflict I got was the only one recorded because I only have a single custom Object. If I had multiple custom Objects, the /batch endpoint would also be reported as a conflict: each custom Object YAML has its own /batch path, so the path alone is not unique. Combined with the appropriate server it would be a unique path, but the join command is unable to make those path changes to avoid conflicts.

To handle the real path conflicts, we need to change the server attribute in the source YAMLs and adjust all of the paths...

So my vacations.yaml file would need to change to:

openapi: 3.0.1
info:
  title: Object
  version: v1.0
  svc: Vaca
servers:
- url: http://localhost:8080/
paths:
  /o/c/vacations/batch:
    put:

This way, when it is merged, the correct paths will be valid for all of the individual endpoints and no path information is lost.

And for the /openapi paths, when we prefix them with the stuff from the server URL, they too will be unique across the files, and that conflict goes away. We may have to alter the operationIds manually to make them unique, but that way we could keep the methods. Personally, I just delete them from the YAML files because I would never invoke them from my application, so I see no reason to keep them around.
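Since this is purely mechanical editing, it can be scripted. Here's a minimal Node sketch (assuming the js-yaml package is installed) that moves the server URL's path onto every defined path and drops the /openapi paths along the way:

// fix-paths.js - run as: node fix-paths.js vacations.yaml
const fs = require('fs');
const yaml = require('js-yaml');

const file = process.argv[2];
const doc = yaml.load(fs.readFileSync(file, 'utf8'));

// Pull the path portion off of the server URL, e.g. /o/c/vacations
const serverUrl = new URL(doc.servers[0].url);
const prefix = serverUrl.pathname.replace(/\/$/, '');

const paths = {};
for (const [path, ops] of Object.entries(doc.paths)) {
  if (path.includes('/openapi.')) continue; // drop the /openapi.{type} path
  paths[prefix + path] = ops; // re-key the path with the server prefix
}

doc.paths = paths;
doc.servers = [{ url: serverUrl.origin + '/' }];

fs.writeFileSync(file, yaml.dump(doc, { noRefs: true }));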

Resolving Component Conflicts

Let's look at the Creator component. There are three different versions of it defined in the vacations, admin-taxonomy and admin-users YAML files. They're not really three different components, per se, but they are slightly different. The admin-taxonomy version has a description that the other Creator definitions do not have, and the Objects-based versions all have UserGroup details that the other definitions do not have.

And that's actually the source of the conflicts - it's not that the components are the same name in multiple YAML files, it's that they are not exact copies of each other in the YAML files. Resolving the conflicts means syncing the definitions in the YAML files.

Most of the time you're going to find that the differences are extremely minor, sometimes not really relevant to you anyway, but you need to decide how to resolve them.

One way is to leverage a prefix. The prefix is prepended to each conflicting component, and the value to use is a property from the info section at the top of the YAML file. We don't have a suitable property already, but we could add one. For example, we could add a svc value to each file (as in the vacations.yaml sample above), then use the command line argument --prefix-components-with-info-prop svc to automagically change the names and avoid conflicts...

This will work, but then you end up with all of your components prefixed with the value, so you might get Vacation_Creator and Taxonomy_Creator and UserAdmin_Creator to avoid the name conflicts, but you also get Vacation_Vacation, UserAdmin_User, Taxonomy_Category and other naming craziness.

You avoid the conflicts, but then you end up with lots of effective duplicates.

I think it is better to simply eliminate the duplicates rather than deal with the prefix problem, but that will require more editing of the YAML files, plus it will take some research to know which version of each component you want to keep, maybe adding to it so the merged component will work for all of the named YAML files.

The conflict list from the join command will help you identify the components that you need to work on, plus it will tell you the files you need to touch to resolve the errors.

For each component that is in conflict, find the "best" one (meaning the one requiring the fewest edits), and add anything from the other YAMLs so that one component works in all files. Once that is determined, copy the component to the other YAMLs, overwriting the version they have. Note that if you bring over a $ref that doesn't exist in the target YAML file, you'll have to pull over those components also.
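The copying, at least, is scriptable too. A rough Node sketch (again assuming js-yaml, and skipping the $ref chasing for brevity):

// sync-component.js - run as:
//   node sync-component.js Creator admin-taxonomy.yaml vacations.yaml admin-users.yaml
const fs = require('fs');
const yaml = require('js-yaml');

const [name, source, ...targets] = process.argv.slice(2);

// The "best" version of the component lives in the source file...
const best = yaml.load(fs.readFileSync(source, 'utf8')).components.schemas[name];

// ...and it overwrites the version in each of the target files.
for (const file of targets) {
  const doc = yaml.load(fs.readFileSync(file, 'utf8'));
  doc.components.schemas[name] = best;
  fs.writeFileSync(file, yaml.dump(doc, { noRefs: true }));
}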

Completing the Merge

With the separate files all tweaked and edited and ready for the big join:

$ npx @redocly/cli join -o service.yaml vacations.yaml admin-list-type.yaml \
  admin-taxonomy.yaml admin-users.yaml

This time, you shouldn't get any errors when the join occurs. If you do get an error, you'll have to clean those up and try again.

Generating the Client

Once you have your merged file, you're ready to generate your unified client code. To review, that's done using the command:

$ docker run --rm -v "${PWD}:/local" openapitools/openapi-generator-cli generate \
  -i /local/service.yaml -g javascript -o /local/client
Exception in thread "main" org.openapitools.codegen.SpecValidationException: 
  There were issues with the specification. The option can be disabled via 
  validateSpec (Maven/Gradle) or --skip-validate-spec (CLI).
 | Error count: 17, Warning count: 1
Errors:
  -attribute paths.
    '/o/headless-admin-user/v1.0/user-accounts/{userAccountId}/assignedExam'
    (get).parameters.[userAccountId].content is missing
  -attribute paths.
    '/o/headless-admin-user/v1.0/user-accounts/{userAccountId}/submittedBy/
    {vacationId}'(put).parameters.[vacationId].content is missing
  -attribute paths.
    '/o/headless-admin-user/v1.0/user-accounts/{userAccountId}/assignedExam/
    {certExamUserAssignmentId}'(put).responses.default.description is missing
  -attribute paths.
    '/o/headless-admin-user/v1.0/user-accounts/{userAccountId}/submittedBy/
    {vacationId}'(delete).parameters.[userAccountId].content is missing
  -attribute paths.
    '/o/headless-admin-user/v1.0/user-accounts/{userAccountId}/assignedExam/
    {certExamUserAssignmentId}'(delete).parameters.[certExamUserAssignmentId]
    .content is missing
  -attribute paths.'/o/headless-admin-user/v1.0/user-accounts/{userAccountId}/
    submittedBy/{vacationId}'(put).responses.default.description is missing

As mentioned earlier, I'm using the docker command to invoke the generator. Since I have the merged YAML file locally that I want to process, I changed the input to reference this file.

I was hoping that the tool would run and all would be good and I'd be ready to review and use the client.

Turns out I was actually not quite ready as I had hoped...

This is actually due to a failure validating the OpenAPI spec YAML file that Liferay is building on the fly for system objects that have relationships to custom objects. I've reported this as a bug, hopefully it's one that can be fixed some time soon: https://liferay.atlassian.net/browse/LPD-36166
Technically, any error reported by the openapi-generator here is a failure to validate the OpenAPI YAML for a service, and that is not expected from the product. The Liferay headless APIs are supposed to be OpenAPI-compliant, and that means they should pass validation.

If you encounter one of these validation failures, I would report them to Liferay so they can be fixed.

The types of errors reported point to problems with the paths in the YAML file, errors that we should clean up to get a clean client when we're done. Although the issues were reported as a bug for Liferay to resolve, in the meantime it is up to us to fix the validation failures so we can continue.

The two kinds of errors that I've typically seen, both shown above, are:

The "content is missing" error. This is complaining about the lack of a type assigned to the parameters which doesn't make a heck of a lot of sense to me since the parameters in the URL are strings, right?

To make those go away, I just added a type to the values, so basically the last two lines of the parameter definition below:

      parameters:
        - name: userAccountId
          in: path
          required: true
          schema:
            type: string

I had to add those lines to every parameter called out in the "content is missing" errors. They're pretty easy to find: use the URL fragment to locate the path in the file, match the method and the parameter name, and you're at the right place.

The "description is missing" errors are similar, but for some reason the responses/default attribute must have a description on it. Again, using the URL fragment I'd find the path, find the responses section and under default I'd add a description item.

When your YAML file is clean, instead of spitting out errors, the generator will give you a client project, just like we've seen before. The difference now is that we have one client project instead of many individual ones.

Using the Client

So the openapi-generator generates a client, and it also generates a ton of documentation for using it.

The README.md file covers the basic usage of the client, and the docs/*.md files document the model classes and service classes along with how to invoke the methods.

For the Java client, the intention is that you'll build the client jar and then use that in your application. For a Liferay OSGi module, you'd use compileInclude on the client jar to embed it as-is in another module.

Personally, I'd be tearing it up: refactoring out interfaces for the POJOs, the services, and the client implementation classes, then backing those with @Components to expose the service implementations for other parts of the code to inject and use.

For the Javascript/TypeScript guys, I typically follow the first of two approaches.

The first approach is to just copy the code and place it into my React application. I'll do this because I typically only need it in one application and therefore don't want to deal with the hassle of turning it into a loadable library. Pulling the code in seems to be the easiest path for me generally. I typically encapsulate it within a Provider so my React code has access and I can expose friendly names, but otherwise I leave the generated client completely intact.
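A sketch of that Provider wrapping (the VacationApi class and its method names are assumptions based on my Vacation Object's generated client):

// VacationProvider.jsx
import React, { createContext, useContext, useMemo } from 'react';
import { VacationApi } from './client'; // hypothetical generated export

const VacationContext = createContext(null);

export function VacationProvider({ children }) {
  const service = useMemo(() => {
    // The api uses the shared ApiClient, which already carries the
    // x-csrf-token default header from our earlier change.
    const api = new VacationApi();

    return {
      // Friendly names for the React code to consume...
      listVacations: (opts) => api.getVacationsPage(opts),
      deleteVacation: (id) => api.deleteVacation(id),
    };
  }, []);

  return (
    <VacationContext.Provider value={service}>
      {children}
    </VacationContext.Provider>
  );
}

// Components call useVacations() to get at the friendly service methods.
export const useVacations = () => useContext(VacationContext);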

The other route, reserved for when I have a lot of custom elements leveraging the same suite of APIs, requires building the client as a module and exposing it via an import map. I'm not going to get into that here (heck, this blog is already long enough), but a future blog post on import maps will take this on.

Where's the Beef?

Or maybe better, where's the ROI in generating the client?

How do we as developers benefit from the generated clients, especially if we take on this extra effort to merge separate clients into one?

Here's the list that I came up with:

1. You generate a full and complete client for the target application (YAML) at one time rather than having to add methods in a piecemeal fashion. When building by hand, I kept finding myself saying "Oh, yeah, I need to add a call to invoke this other endpoint...". The generated client covers the whole API, and the only time you might need to regen is if you change the actual application itself (i.e. modifying the associated Object).

2. You enforce consistency in how the APIs are used, regardless of which ones you're using. The generated code looks the same for a custom Object as it does for the Headless Delivery API; once you learn how one works, you kind of know how all of the others will work.

3. You use the same dependencies across all clients, even if they end up being managed by different developers. The Java clients all use OkHttp; you don't have some using Spring, some using raw JDK HTTP connections, and some using Apache HttpClient/HttpComponents or some other dependency. Same deal with the Javascript options; they're all going to use the same HTTP client (even if it is simply leveraging fetch()), and that makes them consistent.

4. If you change the Objects in some way, regenerating the clients will bring those changes into your app. Depending upon how many changes you made between the original generated code and how you use it in your application, you'll have to repeat that effort, but it may be less work than trying to manually change the previously generated client to add the new endpoints...

One way I use to mitigate this problem is to leverage a tool like BeyondCompare to compare the newly generated client directory against my modified version. I can then synchronize changes across, keeping the changes I've made but pulling in the new additions from the regenerated client. Depending upon the scope of your changes, you might find this a useful path also.
Hmm, I wonder if they'd pay me for this blatant BeyondCompare plug?

5. For the Java devs out there, the OJM (Object JSON Mapping) is handled for you. So you take the Java client, you use it in a typical Java way, and the fact that you're making web service calls behind the scenes is completely transparent to you, in the same way that ServiceBuilder and ORMs hide the mapping necessary there.

Even if you're using the client within an OSGi module deployed into Liferay, it can allow your Java code to transparently use Objects without getting into the technical implementation...

My Process Review

Just wanted to review my process but leave all of the fluff out. This can be a process for you to use or adjust to your needs.

  1. Determine the application(s) the new app is going to consume, using a variation of the following curl command to download the YAML files:
     
    $ curl -X 'GET' 'http://localhost:8080/o/c/vacations/openapi.yaml' \
      -H 'accept: application/json' -u test@liferay.com:test  -o vacations.yaml
    
  2. Edit each of the files to deal with the path conflicts, removing the suffix from the server attribute URL and pasting it as a prefix in all defined paths. I also delete the /openapi path.
  3. Run the redocly CLI to get the list of component conflicts that need to be resolved:
     
    $ npx @redocly/cli join -o service.yaml vacations.yaml admin-list-type.yaml \
      admin-taxonomy.yaml admin-users.yaml
    
  4. Edit the necessary YAML files to ensure all of the conflicting components are exact copies of each other.
  5. Run the redocly CLI to join all of the YAML files into a single file:
     
    $ npx @redocly/cli join -o service.yaml vacations.yaml admin-list-type.yaml \
      admin-taxonomy.yaml admin-users.yaml
    
  6. Run the openapi-generator-cli to generate the client project:
     
    $ docker run --rm -v "${PWD}:/local" openapitools/openapi-generator-cli generate \
      -i /local/service.yaml -g javascript -o /local/client
    
  7. If there are validation failures, I fix those in the source file(s), then I go back to step #5.
  8. Incorporate the generated client into my project and start coding against it.

The reason step #7 alters the source files rather than the merged file has to do with supporting future updates.

I do have a script available here that can successfully get you to the joined yaml file ready for the openapi generator to run.

If I change a custom Object, such as my Vacation Object, that will change the headless endpoints, so I'll need a new client with the updates. I can go through the steps above, but I only have to worry about the one Vacation YAML file; my other YAMLs wouldn't necessarily have changed. Since the updates for the validation failures happen in the source files, the new merged file pulls the fixes in, and the client generation should only report validation failures from the Vacation YAML alone.

So sure, updating the source file means I need to do the join again, but that will always be easier than having to fix validation errors after every join.

I have an even better script available here which can handle the YAML cleanups, the YAML merge, and then generate the client project.

Conclusion

Generating the client code is likely unnecessary for simple applications where you're just using a handful of calls.

But when you are accessing multiple custom Objects and Liferay Headless APIs, manually writing code to access those endpoints is going to be a chore.

I used the generated Javascript client for one of my React applications recently, and it really worked out quite well. I could focus on my actual code instead of worrying about writing all of the client methods...

Anyway I hope you find this useful, even though it is one of my longer posts...
