Centralized Liferay Auditing

An Alternative to Audit Message Database Persistence

Introduction

Out of the box, Liferay DXP has fairly limited audit message output targets. In fact, only two are provided: writing the messages to a log file or persisting them to the database.

Each of these comes with its own set of issues.

Logging is tough because a flat log file is not searchable, which makes the audit trail hardly usable.

Database persistence doesn't have these problems, but the administration and maintenance of the table is not handled by the product at all. It's really up to you to clean up the database, and for good reason: Liferay doesn't know what your retention requirements are for audit information, and each implementation comes with different requirements.
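
If you do stick with database persistence, that cleanup typically means a scheduled delete against the audit table. Here's a minimal sketch, assuming MySQL and the Audit_AuditEvent table with its createDate column from the audit storage module; verify the names against your actual schema and pick a retention window that matches your own requirements:

-- Minimal cleanup sketch; assumes MySQL and the default audit table.
-- The 90 day retention window is only an example.
DELETE FROM Audit_AuditEvent
WHERE createDate < DATE_SUB(NOW(), INTERVAL 90 DAY);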

What I want to present here is another choice - sending the audit details to ELK (Elasticsearch, Logstash and Kibana).

Using ELK for Auditing

ELK is actually a great choice for audit messages. It is centralized, so all nodes can ship their messages in; the data is persisted in a structured format that is easy to build visualizations against; and even though you are still responsible for cleaning up old data, with daily indexes that is a simple matter of deleting the index for a given date to get rid of all of the records under it.
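
For example, once we have the daily liferay-audit-* indexes that we'll configure in Logstash below, dropping a day's worth of audit records is a single call to Elasticsearch; this sketch assumes the default endpoint on localhost:9200:

# Delete one day's audit index; the index name follows the daily
# naming pattern configured in the Logstash section below.
curl -X DELETE "http://localhost:9200/liferay-audit-2018.01.01"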

It is actually super easy to process all of the audit logs through ELK. Building off of the Centralized Liferay Logging blog post, we'll go through the configuration to do this for audit messages.

Liferay Logging Configuration

Okay, so we have to go back to our portal-log4j-ext.xml file and add a special logger setup for the audit log:

<!-- Need a simple appender to put the message out to the file and none of the other cruft -->
<appender class="org.apache.log4j.rolling.RollingFileAppender" name="AUDIT_FILE">
  <rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">
    <param name="FileNamePattern" value="@liferay.home@/logs/liferay@spi.id@.%d{yyyy-MM-dd}.audit.log" />
  </rollingPolicy>

  <layout class="org.apache.log4j.EnhancedPatternLayout">
    <param name="ConversionPattern" value="%m%n" />
  </layout>
</appender>

<!-- Define the logger for the LoggingAuditMessageProcessor class -->
<logger name="com.liferay.portal.security.audit.router.internal.LoggingAuditMessageProcessor" additivity="false">
  <level value="INFO" />
  <appender-ref ref="AUDIT_FILE" />
</logger>

This defines the AUDIT_FILE appender with a conversion pattern that includes only the message and a newline. We don't want the timestamp or anything else on the line, just the generated message itself.

The logger is bound to the class that outputs the audit message, so its output goes to our audit log, and additivity is disabled so the messages won't also end up in any other Liferay logging configuration we are otherwise using.

Liferay Configuration

There is yet more configuration for Liferay, but this part is accomplished within Liferay itself.

Log into Liferay as an administrator and go to the System Settings control panel. You can either dig under the Foundation tab or just search for audit to find appropriate sections.

First you want to enable auditing under the Audit page.

Under the Persistent Message Audit Message Processor, you want to disable the persisted audit messages since we don't want them going to the database.

Under the Logging Message Audit Message Processor, we want to enable it and choose JSON for the logging format.
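
If you'd rather manage these settings as files than click through System Settings, each entry can also be delivered as an OSGi .config file under osgi/configs. The sketch below shows the idea for the logging processor, in a file named something like com.liferay.portal.security.audit.router.configuration.LoggingAuditMessageProcessorConfiguration.config; the PID in the file name and the property names are my assumptions based on the audit router module, so export the actual .config from System Settings for your DXP version rather than trusting these names:

enabled=B"true"
logMessageFormat="JSON"
outputToConsole=B"false"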

Since our log4j configuration uses the %m%n conversion pattern, each logged line will be just the JSON for the audit message and nothing else.
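
For illustration, a login event rendered as JSON looks roughly like the following. This is a hand-built example: the exact set of fields and their formats vary by event type and DXP version, so treat the field names as approximate rather than a schema:

{
  "companyId": "20116",
  "classPK": "20156",
  "className": "com.liferay.portal.kernel.model.User",
  "clientHost": "localhost",
  "clientIP": "127.0.0.1",
  "eventType": "LOGIN",
  "serverName": "localhost",
  "serverPort": "8080",
  "sessionID": "...",
  "timestamp": "...",
  "userId": "20156",
  "userName": "Test Test"
}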

If you check the Output to Console option, your audit messages will also go to the app server log, and likely to the Liferay log as well.

Filebeat Preparation

We can add another Filebeat configuration to ship our audit messages. The following configuration is based upon routing both log messages and audit messages to the same Logstash service; if you are only shipping audit messages, the configuration can be simplified.

filebeat.prospectors:
# Note: newer Filebeat versions (6.3+) use filebeat.inputs instead of filebeat.prospectors.

- type: log
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /Users/dnebinger/liferay/70ee/bundle/logs/liferay*.json.log

  json.keys_under_root: true
  json.add_error_key: true
  json.message_key: message
  fields:
    audit: false
  fields_under_root: true
  
- type: log
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /Users/dnebinger/liferay/70ee/bundle/logs/liferay*.audit.log

  json.keys_under_root: true
  json.add_error_key: true
  json.message_key: message
  fields:
    audit: true
  fields_under_root: true

output.logstash:
  hosts: ["logstash.example.com:5044"]

Using this configuration, we're now adding a field named audit to the records; it will be set to true for the audit records and false for regular log messages. This will come up again in the Logstash configuration.

Logstash Preparation

Like our centralized logging Logstash configuration, we'll have a configuration to route the audit messages into a separate Elasticsearch index.

input {
  beats {
    port => 5044
  }
}

output {
  if [audit] {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "liferay-audit-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "liferay-log-%{+YYYY.MM.dd}"
    }
  }
}

This configuration uses the new audit field injected by Filebeat to route the incoming messages either to the existing log indexes or, for the audit messages, the new audit indexes.
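
Once some traffic has flowed through, you can verify the routing by asking Elasticsearch to list the matching indexes; this assumes the default endpoint on localhost:9200:

# List the daily indexes; you should see both liferay-log-* and
# liferay-audit-* entries once Filebeat has shipped some records.
curl "http://localhost:9200/_cat/indices/liferay-*?v"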

Kibana Preparation

In Kibana, go to the Index Patterns page under Management and create the liferay-audit-* index pattern.
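
If you'd rather script that step, recent Kibana versions expose a saved objects API for creating index patterns. This is a sketch assuming Kibana 6.x running on localhost:5601; check the API documentation for your version:

# Create the liferay-audit-* index pattern via the saved objects API
# (the kbn-xsrf header is required by Kibana).
curl -X POST "http://localhost:5601/api/saved_objects/index-pattern" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"attributes":{"title":"liferay-audit-*","timeFieldName":"@timestamp"}}'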

With the data in Elasticsearch and Kibana set up to access the indexed data, you can now start building whatever visualizations you need.

Conclusion

So now we have another choice for the audit messages, one that we can use for monitoring, reporting and easy cleanup of old data.

And, if you have some Kibana visualizations you'd like to share after loading your audit indexes, please do!
