RE: Liferay cluster configuration

Admin CAUCE, modified 6 Years ago. Junior Member Posts: 48 Join Date: 9/25/18 Recent Posts
Hi,

We are at the beginning of a project in which Liferay installed on Linux will be used, specifically Liferay Portal CE 7.1 GA3 bundled with Tomcat. One of the requirements is a high-availability environment, that is, a Liferay cluster installation with two nodes.
I have been looking at the documentation, and initially I want to be clear about the steps for such configuration.

We already have a Postgres database in replication, which is another project requirement. We start from two machines, each with Liferay Portal CE 7.1 GA3 bundled with Tomcat installed, and in the initial configuration both nodes (both Liferay installations) must point to the same database, in this case the master node of the Postgres cluster.
By doing this, do we already have the two Liferay instances in a cluster?

Are the following points mandatory to configure?

Documents and Media repositories must have the same configuration and be accessible to all nodes of the cluster.

Search should be on a separate search server that is optionally clustered.

Cluster Link must be enabled so the cache replicates across all nodes of the cluster.

Applications must be auto-deployed to each node individually.

I would like to hear from anyone who has configured a two-node Liferay cluster, to have all the information available.

Thank you very much in advance.
Olaf Kock, modified 6 Years ago. Liferay Legend Posts: 6441 Join Date: 9/23/08 Recent Posts
Admin CAUCE

We are at the beginning of a project in which Liferay installed on Linux will be used, specifically Liferay Portal CE 7.1 GA3 bundled with Tomcat. One of the requirements is a high-availability environment, that is, a Liferay cluster installation with two nodes.



All required steps are well laid out in the documentation.


We already have a Postgres database in replication, which is another project requirement. We start from two machines, each with Liferay Portal CE 7.1 GA3 bundled with Tomcat installed, and in the initial configuration both nodes (both Liferay installations) must point to the same database, in this case the master node of the Postgres cluster.
By doing this, do we already have the two Liferay instances in a cluster?

No

Are the following points mandatory to configure?

Yes


Documents and Media repositories must have the same configuration and be accessible to all nodes of the cluster.

Search should be on a separate search server that is optionally clustered.

Cluster Link must be enabled so the cache replicates across all nodes of the cluster.

Applications must be auto-deployed to each node individually.

Documents - by default - are not stored in the database. Documents uploaded on one server must be accessible to the other server.
Content indexed on one server must be searchable on the other server.
When cached objects are modified on one server, the other server must be notified.
Applications can only be used on the server they're deployed on - thus: if you don't deploy them on all clustered machines, they won't be available on all machines.
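As a rough illustration, the two Liferay-side requirements above boil down to a couple of portal-ext.properties entries (a minimal sketch only - the store implementation and the idea of a shared mount are examples, not the only option):

    # Enable Cluster Link so cache notifications reach all nodes
    cluster.link.enabled=true

    # Example: keep Documents and Media on a shared filesystem (e.g. an NFS mount)
    dl.store.impl=com.liferay.portal.store.file.system.AdvancedFileSystemStore

Search and deployment are handled outside this file: Elasticsearch runs as its own server, and artifacts go into each node's deploy folder.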


I would like to hear from anyone who has configured a two-node Liferay cluster, to have all the information available.
Just follow the documentation. It's actually quite good. And test - upload, download, change content, search, etc. - on each individual cluster node. Validate that changes on one cluster machine are reflected on the other.
Admin CAUCE, modified 6 Years ago. Junior Member Posts: 48 Join Date: 9/25/18 Recent Posts
Good afternoon Olaf,

Thank you very much for the response you have given me.

I am following the steps to configure a Liferay cluster, and so far everything is correct.

* I have a dedicated machine for the ElasticSearch service
* I have a dedicated machine to contain Documents and Media, and it is accessible via NFS from the two Liferay nodes.
* I have a load balancer in front of the two Liferay nodes configured in HAProxy.
* I have the two Liferay nodes configured identically:

The file "portal-setup-wizard.properties" has the following content in both nodes. I do not put the configuration to the database. I only put the content to enable the cluster:

cluster.link.enabled=true

dl.store.impl=com.liferay.portal.store.file.system.AdvancedFileSystemStore

setup.wizard.enabled=false

web.server.display.node=true

Having done this, I start the service on each of the nodes (sh catalina.sh start) and on both I get the following error in the output of catalina.out:
08-Apr-2019 14:25:49.363 INFO [main] org.apache.catalina.core.ApplicationContext.log Initializing Spring root WebApplicationContext
2019-04-08 14:30:10.693 ERROR [SCR Component Actor] [com_liferay_portal_cache_ehcache_impl:97] [com.liferay.portal.cache.ehcache.internal.MultiVMEhcachePortalCacheManager (283)] The activate method has thrown an exception
net.sf.ehcache.CacheException: Problem starting listener for RMICachePeer //10.203.23.6:37563/com.liferay.portal.kernel.dao.orm.FinderCache.com.liferay.portal.model.impl.ResourceActionImpl. Initial cause was Connection refused to host: 10.203.23.6; nested exception is: java.net.ConnectException: Connection timed out [Sanitized]

If I comment out the line cluster.link.enabled=true, the Liferay service starts correctly. Something is escaping me and I do not know what. Any suggestions?

Thank you very much again in advance.
Olaf Kock, modified 6 Years ago. Liferay Legend Posts: 6441 Join Date: 9/23/08 Recent Posts
Initial cause was Connection refused to host: 10.203.23.6

Is 10.203.23.6 one of your cluster machines? It might be that they find each other through Multicast, but the connection after discovering the other machine is denied, e.g. by firewall rules.
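One quick way to check connectivity (a sketch - 37563 is the ephemeral RMI peer port from the error above and will differ between starts): from the node that logs the error, test whether the other node's port is reachable, e.g.

    nc -vz 10.203.23.6 37563

If that times out, look at firewall rules or routing between the nodes before touching Liferay's configuration.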
Admin CAUCE, modified 6 Years ago. Junior Member Posts: 48 Join Date: 9/25/18 Recent Posts
Hi Olaf,

Thanks for the reply.

Here is how I have set up my cluster environment, with IPs:

10.200.23.9 - ElasticSearch server (TCP 9300, 9200)
10.200.23.8 - Content server "Documents and Media" (DaM)
10.200.23.7 - Node 1 LR
10.200.23.6 - Node 2 LR
10.200.23.5 - HAProxy balancer

If I comment out the line that causes the error when starting the Liferay service, both nodes come up correctly. My question is: is it mandatory to enable Cluster Link, or could it work as a cluster without Cluster Link activated?

Another question I have: do all the changes I make at the server level and at the Liferay level have to be replicated on all the nodes, for example applying some configuration in Tomcat, or adding some portlet to the initial page of Liferay?

I would need some more information on how Liferay is configured correctly in cluster mode.

Thanks in advance.
Olaf Kock, modified 6 Years ago. Liferay Legend Posts: 6441 Join Date: 9/23/08 Recent Posts
Admin CAUCE

Here is how I have set up my cluster environment, with IPs:

10.200.23.9 - ElasticSearch server (TCP 9300, 9200)
10.200.23.8 - Content server "Documents and Media" (DaM)
10.200.23.7 - Node 1 LR
10.[b]200[/b].23.6 - Node 2 LR
10.200.23.5 - HAProxy balancer
Did you see that the error message is for 10.203.x.x?
Admin CAUCE
If I comment the line that gives me the error when starting the Liferay service, I get the two nodes correctly. My question is, is it mandatory to enable the cluster link, or could you work in a cluster without having the cluster link activated?
You can try what happens without Cluster Link: set up two machines that access the same database. Visit both servers' homepages. Add an article on one of the machines, then reload the page on the other one: it won't be updated. Now clear the caches on the stale machine - voila, the content is there. That's what you need Cluster Link for. You can rely on Multicast or configure Unicast. You might have configured Unicast and provided a wrong IP address. Or you might have servers with multiple NICs - and they can't communicate on one of them. You can fix the NIC that's used. See portal.properties, Ctrl-F "cluster.link.autodetect.address".
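For illustration, the relevant portal-ext.properties entries could look like this (a sketch with example values - www.google.com:80 is Liferay's default autodetect target, and pinning the NIC instead is optional):

    # Liferay opens a connection to this address to autodetect the outgoing NIC
    cluster.link.autodetect.address=www.google.com:80

    # Alternative (example): skip autodetection and bind JGroups explicitly
    # cluster.link.autodetect.address=
    # plus a JVM argument such as: -Djgroups.bind_addr=10.200.23.7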
Admin CAUCE
I would need some more information to enter how the Liferay is configured correctly in Cluster mode.
If the documentation isn't enough: do you know of the DevOps course? It's available in person (well, I don't know where you are), online, or online in the training flat rate.
Admin CAUCE, modified 6 Years ago. Junior Member Posts: 48 Join Date: 9/25/18 Recent Posts
Hi,

I finally found the problem: I had the wrong IPs in the /etc/hosts file on both machines where the Liferay nodes are set up. The architecture runs on VMware virtual machines.

Now both Liferay nodes deploy correctly. Next I have to run the tests you suggested to verify that Cluster Link works. Just to confirm: Cluster Link means that whatever I add on one node (roles, permissions, content, portlets, ...) is automatically replicated to the other node through Cluster Link?

So with my architecture set up this way, could we say that we have a Liferay cluster environment? Any more recommendations that could help refine this architecture a bit further?

Best regards.
Andrew Jardine, modified 6 Years ago. Liferay Legend Posts: 2416 Join Date: 12/22/10 Recent Posts
Just remember that Liferay clustering is limited to Cache and Search (unless you have configured an external search server -- which I believe you said above you have). 

So in your case when you make a change on Node A, it will alert the other nodes in the cluster about the change so that the caches stay in sync. Database changes (like adding new records) are not affected by the cluster replication. 

One simple test is this: bring up two browsers, each on the same page. Add a portlet in browser A and then refresh browser B. If the replication is working correctly, you should see the portlet in browser B as well as in browser A.

Admin CAUCE, modified 6 Years ago. Junior Member Posts: 48 Join Date: 9/25/18 Recent Posts
Thank you very much, Andrew, for the response.

As I said in previous posts, I followed the official documentation to create a Liferay cluster environment.

I have two Linux virtual machines with Liferay 7.1 with bundled Tomcat. I have performed the following steps:

1 - Point the two Liferay nodes to the same Postgres database.
2 - I have installed the Elasticsearch service on another virtual machine to keep it separate from the cluster.
3 - I have also created a virtual machine that hosts Documents and Media, and on the Liferay machines I have created NFS mount points to that machine so it can be accessed from both nodes (see the mount sketch after this list).
4 - I have enabled Cluster Link.
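For reference, a minimal sketch of such an NFS mount, assuming the content server exports /export/data_liferay and each node mounts it at /mnt/data_liferay (names are illustrative):

    # /etc/fstab entry on each Liferay node (example values)
    10.200.23.8:/export/data_liferay  /mnt/data_liferay  nfs  rw,hard  0  0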

The files that I have configured are the following:

File portal-setup.propierties

admin.email.from.address=admin@liferay.com
admin.email.from.name=Administrator LR
company.default.locale=en_ES
company.default.name=Liferay Cluster Portal
company.default.web.id=liferay.com
default.admin.email.address.prefix=admin
default.admin.first.name=Administrator
default.admin.last.name=LR
jdbc.default.driverClassName=org.postgresql.Driver
jdbc.default.password=*******
jdbc.default.url=jdbc:postgresql://10.203.227.3:5432/jrd_liferay_cluster
jdbc.default.username=user
liferay.home=/opt/liferay-ce-portal-7.1.0-ga1

setup.wizard.add.sample.data=on

##############################################################################
# Additional Configuration - CLUSTER LIFERAY
##############################################################################
layout.user.private.layouts.enabled=false
layout.user.private.layouts.modifiable=false
layout.user.private.layouts.auto.create=false

layout.user.public.layouts.enabled=false
layout.user.public.layouts.modifiable=false
layout.user.public.layouts.auto.create=false

cluster.link.enabled=true

dl.store.impl=com.liferay.portal.store.file.system.AdvancedFileSystemStore

setup.wizard.enabled=false

web.server.display.node=true

File "... / osgi / configs / com.liferay.portal.store.file.system.configuration.AdvancedFileSystemStoreConfiguration.cfg"

rootDir = / mnt / data_liferay

File "... / osgi / configs / com.liferay.portal.search.elasticsearch.configuration.ElasticsearchConfiguration.cfg"
operationMode = "REMOTE"
​​​​​​​transportAddresses = "10.200.23.9:9300"

When I start the Liferay service on both nodes, they boot, but in the middle of the catalina.out log I get the following errors:

2019-04-10 07:24:10.571 INFO [main] [AutoDeployDir:193] Auto deploy scanner started for /opt/liferay-ce-portal-7.1.0-ga1/deploy
2019-04-10 07:24:11.959 ERROR [main] [SearchIndexPortalInstanceLifecycleListener:40] Unable to initialize search engine for company 20099
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{FGaQw2ELQXuyCINNY5-xSw}{localhost}{127.0.0.1:9300}]]
        at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:347)
        at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:245)
        at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:60)
        at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:360)
        at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:405)
        at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:394)
        at org.elasticsearch.client.support.AbstractClient$ClusterAdmin.execute(AbstractClient.java:706)
        at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:46)
        at com.liferay.portal.search.elasticsearch6.internal.connection.BaseElasticsearchConnection.getClusterHealthResponse(BaseElasticsearchConnection.java:90)
        at com.liferay.portal.search.elasticsearch6.internal.connection.ElasticsearchConnectionManager.getClusterHealthResponse(ElasticsearchConnectionManager.java:81)
        at com.liferay.portal.search.elasticsearch6.internal.ElasticsearchSearchEngine.waitForYellowStatus(ElasticsearchSearchEngine.java:341)
        at com.liferay.portal.search.elasticsearch6.internal.ElasticsearchSearchEngine.initialize(ElasticsearchSearchEngine.java:112)



2019-04-10 07:31:09.581 WARN [liferay/search_writer/SYSTEM_ENGINE-5] [ProxyMessageListener:88] com.liferay.portal.kernel.search.SearchException: Unable to commit indices
com.liferay.portal.kernel.search.SearchException: Unable to commit indices


The Liferay service starts correctly, since I can access <cluster_node_ip>:8080 on each node of the cluster.

I have performed the test you suggested. I added a widget of type "News" on the initial page of node A and saved it. I then open a browser on node B, and I do not see that change reflected on node B.

What can I be doing wrong? What can I check in the configuration to find where the fault is?

Thanks in advance!!!

Regards!!
Olaf Kock, modified 6 Years ago. Liferay Legend Posts: 6441 Join Date: 9/23/08 Recent Posts
Admin CAUCE
File portal-setup.propierties
...
liferay.home=/opt/liferay-ce-portal-7.1.0-ga1
...
I guess the filename is a typo, as it gets generated by Liferay as portal-setup-wizard.properties.
With regards to the directory name though: I didn't find it in the documentation (it might be missing there), but in the announcement: if you're really running GA1, please upgrade to the latest GA. Clustering is available out of the box from GA3 on. For prior versions, you'd need to deploy extra code to enable clustering (e.g. build the plugin yourself). I'd say it's not worth building it and ignoring all of the fixes that were introduced since GA1.
Admin CAUCE, modified 6 Years ago. Junior Member Posts: 48 Join Date: 9/25/18 Recent Posts
Thank you very much for the information Olaf,

I am now configuring the newer version of Liferay CE 7.1: Liferay Community Edition Portal 7.1.2 CE GA3.

Apparently, enabling the cluster seems simple. Now my question is: how is the cache replication between the cluster nodes configured?

Regards!!!
Andrew Jardine, modified 6 Years ago. Liferay Legend Posts: 2416 Join Date: 12/22/10 Recent Posts
You probably don't need to mess with that part, but if you are dying to get under the hood -- in your portal.properties there is a configuration:
##
## Ehcache
##

    #
    # Set the classpath to the location of the Ehcache config file for internal
    # caches. Edit the file specified in the property
    # "ehcache.multi-vm.config.location" to enable clustered cache.
    #
    # Env: LIFERAY_EHCACHE_PERIOD_MULTI_PERIOD_VM_PERIOD_CONFIG_PERIOD_LOCATION
    # Env: LIFERAY_EHCACHE_PERIOD_SINGLE_PERIOD_VM_PERIOD_CONFIG_PERIOD_LOCATION
    #
    ehcache.single.vm.config.location=/ehcache/liferay-single-vm.xml
    ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml
Here you can see the two files used for Ehcache: one for single nodes, and the multi one for clusters. Both of these files are in portal-impl, but you can grab them out of the source to see their content. If you want to change them, or add your own containers, you can make the changes and then change the location based on the properties above. The few times I have had to do this, I have placed the files in /webapps/ROOT/WEB-INF/classes/META-INF/ehcache and then updated the properties to include a /META-INF at the front.
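To make that concrete, the override is a one-line change (a sketch, assuming the file was copied to the location described above):

    # portal-ext.properties - point Liferay at the customized clustered cache config
    ehcache.multi.vm.config.location=/META-INF/ehcache/liferay-multi-vm-clustered.xml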

Most of the time though, I think the defaults that Liferay provides suffice.
Olaf Kock, modified 6 Years ago. Liferay Legend Posts: 6441 Join Date: 9/23/08 Recent Posts
To answer the question about cache replication directly: I wouldn't do that. Cache invalidation (as provided by Cluster Link out of the box) is enough: any time one node changes an object, the other node will expire that exact object from its cache (if it was cached before). In the (maybe rare) case that the object is required afterwards, it will be loaded from the database, and is guaranteed to be fresh until the next cache invalidation (or timeout, or cache size overflow).

Unless you measure that this is a bottleneck: don't change it.

Both of your machines might serve vastly different content - then it's of no use to cache objects on both nodes if they're only required on one of them.

Also, get clarity about your reason for clustering: is it to withstand high load, or to be highly available in case one machine goes down? How often does that happen, and what price are you willing to pay: must the user not notice at all that they're on a different server?

Often, people want to enable session replication, which is the replication of the application server's session object across the application-server cluster. This is independent of Liferay, and typically also quite a memory and CPU hog. If you cluster for increased load, this is the last thing you want to do. If you cluster for high availability, it is a valuable (read: extremely expensive) hack that looks like you nailed it, but will bite you later.
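A common middle ground is sticky sessions at the load balancer instead of session replication. A minimal HAProxy sketch (IPs taken from the setup described earlier in this thread; the cookie approach is just one of several options):

    backend liferay_nodes
        balance roundrobin
        # Pin each client to one node via an inserted cookie; if that node dies,
        # the user lands on the other node and simply has to log in again
        cookie LBNODE insert indirect nocache
        server node1 10.200.23.7:8080 check cookie node1
        server node2 10.200.23.6:8080 check cookie node2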
Admin CAUCE, modified 6 Years ago. Junior Member Posts: 48 Join Date: 9/25/18 Recent Posts
Hi Andrew,

I was finally able to launch the Liferay cluster, but I configured it with LR version 7.1.2 GA3. The version I was on before was LR 7.1 GA1, which, as it turns out, does not support clustering by default.

Thank you very much for your answer.
Andrew Jardine, modified 6 Years ago. Liferay Legend Posts: 2416 Join Date: 12/22/10 Recent Posts
Hah! Glad you got it sorted out. For future reference, it's always best to start a thread with "I am using Liferay X.X GAX" -- it will almost always get you to an answer quicker.
Admin CAUCE, modified 6 Years ago. Junior Member Posts: 48 Join Date: 9/25/18 Recent Posts
Hehehe, you are very right; my fault, and I hope it does not happen again.

Now I'm trying to configure ElasticSearch to put it on an external server.

I already have the ElasticSearch service installed on another machine, and on the ElasticSearch server side I have changed the following configuration in the file "elasticsearch.yml". For now I will not set up an ElasticSearch cluster, just a single server:

cluster.name: LiferayElasticsearchCluster
network.host: 10.200.23.9 (ip of the ElasticSearch server)

Other than that, I have not touched anything else in this configuration. I start the ElasticSearch service.

On the Liferay nodes' side I have done the following:

I have created a configuration file under .../osgi/configs/com.liferay.portal.search.elasticsearch.configuration.ElasticsearchConfiguration.config which contains the following:

operationMode="REMOTE"
TransportAddresses="10.200.23.9:9300"

This is the only configuration on the LR nodes. Now my question is: do we have to make any more changes in the LR configuration? Every time I restart the Liferay service, this file gets rewritten to the following:

#\ Highly\ recommended\ for\ all\ non-prodcution\ usage\ (e.g.,\ practice,\ tests,\ diagnostics):\n#logExceptionsOnly="false"
#\ If\ running\ Elasticsearch\ from\ a\ different\ computer:\ntransportAddresses="10.200.23.9:9300"
operationMode="REMOTE"

Thank you again!
Andrew Jardine, modified 6 Years ago. Liferay Legend Posts: 2416 Join Date: 12/22/10 Recent Posts
Is it working? The one thing I am not sure about is the name of your file. Looking at GitHub, I can see that the id for this configuration is a little different from what you are using. On GitHub you have:

https://github.com/liferay/liferay-portal/blob/master/modules/apps/portal-search-elasticsearch6/portal-search-elasticsearch6-api/src/main/java/com/liferay/portal/search/elasticsearch6/configuration/ElasticsearchConfiguration.java#L26

... your filename is com.liferay.portal.search.elasticsearch.configuration.ElasticsearchConfiguration.config -- notice the trailing "6" on the elasticsearch package on GitHub.
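In other words (a sketch derived from the class on GitHub, using the address from this thread), the file would be .../osgi/configs/com.liferay.portal.search.elasticsearch6.configuration.ElasticsearchConfiguration.config containing:

    operationMode="REMOTE"
    transportAddresses="10.200.23.9:9300"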
Admin CAUCE, modified 6 Years ago. Junior Member Posts: 48 Join Date: 9/25/18 Recent Posts
Hello again, Andrew,

I have already tried that option, but it still does not pick up the configuration to connect to my ElasticSearch server (ES).

I understand that the ES server side is well configured. I installed the same ES version as the one embedded in Liferay (LR) 7.1.2 GA3, which is version 6.5.0.

The configuration file "elasticsearch.yml" I have only set the following values:

cluster.name: LiferayElasticsearchCluster

node.name: ${HOSTNAME}

network.host: 10.200.23.9 (this IP is the IP of the ES server)

http.port: 9200
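As a quick sanity check from one of the Liferay nodes, the HTTP port and cluster name can be verified like this (a sketch using the IPs from this thread; port 9300 speaks only the binary transport protocol, so test it for plain reachability):

    # Should report cluster_name LiferayElasticsearchCluster and version 6.5.0
    curl http://10.200.23.9:9200/
    curl http://10.200.23.9:9200/_cluster/health?pretty

    # TCP reachability of the transport port Liferay connects to
    nc -vz 10.200.23.9 9300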

I understand that the ElasticSearch server side is correct. Checking the service status:
[root@JRDELS11 elasticsearch]$ systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-04-16 15:34:14 WEST; 41min ago
     Docs: http://www.elastic.co
 Main PID: 8582 (java)
    Tasks: 36 (limit: 4915)
   CGroup: /system.slice/elasticsearch.service
           ├─8582 /usr/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.
           └─8632 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller

Apr 16 15:34:14 JRDELS11 systemd[1]: Started Elasticsearch.

I check open ports.

[root@JRDELS11 elasticsearch]$ netstat -ant
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:10000           0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:5666            0.0.0.0:*               LISTEN
tcp        0      0 10.200.23.9:22          10.60.152.33:5922       ESTABLISHED
tcp6       0      0 :::111                  :::*                    LISTEN
tcp6       0      0 10.200.23.9:9200        :::*                    LISTEN
tcp6       0      0 10.200.23.9:9300        :::*                    LISTEN
tcp6       0      0 :::22                   :::*                    LISTEN

I understand that the ES server part is correctly configured and working.

On the LR cluster nodes' side, according to the documentation, you can configure the connection to a remote ES server either via the OSGi configuration files or from the control panel, in the Elasticsearch 6 configuration.

According to the documentation, if I do it from the LR control panel, I would only have to change the option from "Embedded" to "Remote" and make sure the Elasticsearch cluster name value matches the one defined on the ES server. Is this configuration correct, or would I need something else to connect LR to my remote ES?

Thanks in advance.

Regards!!
Andrew Jardine, modified 6 Years ago. Liferay Legend Posts: 2416 Join Date: 12/22/10 Recent Posts
In the control panel, double-check the port settings. Also, don't forget that after you make all of these changes you also need to REINDEX by going to Control Panel > Configuration > Search and then hitting the Reindex (ALL) option.

That last step is pretty important. If you don't do it, you have a connection to an empty index. An easy test once you have done all of that: go to Control Panel > Users and Organizations. The list of users should be displayed there, and if you can't find any users, then either the connection is not in place correctly or the reindexing failed.
Admin CAUCE, modified 6 Years ago. Junior Member Posts: 48 Join Date: 9/25/18 Recent Posts
Good afternoon again,

I have correctly configured the name of the file [...]osgi/configs/com.liferay.portal.search.elasticsearch6.configuration.ElasticsearchConfiguration.config, adding the configuration:
OperationMode="REMOTE"
transportAddresses="10.200.23.9:9300" [IP of the remote ES server - it is not on the same machine as LR]

Having made these changes, the LR service throws errors on startup, in the end never comes up on port 8080, and logically I cannot access the LR administration console. Below are some of the errors from starting the service, taken from catalina.out:

Loading file:/opt/liferay-ce-portal-7.1.2-ga3-test/portal-setup-wizard.properties
2019-04-22 15:14:52.576 INFO  [main][PortalContextLoaderListener:139] JVM arguments: -Djava.util.logging.config.file=/opt/liferay-ce-portal-7.1.2-ga3-test/tomcat-9.0.10/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -Dorg.apache.catalina.security.SecurityListener.UMASK=0027 -Dfile.encoding=UTF8 -Djava.net.preferIPv4Stack=true -Dorg.apache.catalina.loader.WebappClassLoader.ENABLE_CLEAR_REFERENCES=false -Duser.timezone=GMT -Xms2560m -Xmx2560m -XX:MaxNewSize=1536m -XX:MaxMetaspaceSize=384m -XX:MetaspaceSize=384m -XX:NewSize=1536m -XX:SurvivorRatio=7 -Dignore.endorsed.dirs= -Dcatalina.base=/opt/liferay-ce-portal-7.1.2-ga3-test/tomcat-9.0.10 -Dcatalina.home=/opt/liferay-ce-portal-7.1.2-ga3-test/tomcat-9.0.10 -Djava.io.tmpdir=/opt/liferay-ce-portal-7.1.2-ga3-test/tomcat-9.0.10/temp
2019-04-22 15:14:55.426 INFO  [main][DialectDetector:158] Using dialect org.hibernate.dialect.PostgreSQLDialect for PostgreSQL 10.5
2019-04-22 15:14:57.190 INFO  [main][ModuleFrameworkImpl:1326] Starting initial bundles
2019-04-22 15:14:59.522 INFO  [main][ModuleFrameworkImpl:1601] Started initial bundles
2019-04-22 15:14:59.523 INFO  [main][ModuleFrameworkImpl:1636] Starting dynamic bundles
2019-04-22 15:15:14.686 INFO  [main][ModuleFrameworkImpl:1725] Started dynamic bundles
2019-04-22 15:15:14.687 INFO  [main][ModuleFrameworkImpl:413] Navigate to Control Panel > Configuration > Gogo Shell and enter "lb" to see all bundles
2019-04-22 15:15:20.677 ERROR [Framework Event Dispatcher: Equinox Container: 887023c8-5deb-4b2c-9b45-ad16f32b264a][com_liferay_portal_search:97] FrameworkEvent ERROR
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{GOxE7E5DR4CeDBst6Re0yA}{localhost}{127.0.0.1:9300}]]
        at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:349)
        at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:247)
        at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:60)
        at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:382)
        at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:395)
        at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:384)
        at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:46)
        at org.elasticsearch.action.ActionRequestBuilder.get(ActionRequestBuilder.java:53)
...
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:350)
        at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:492)

    __    ____________________  _____  __
   / /   /  _/ ____/ ____/ __ \/   \ \/ /
  / /    / // /_  / __/ / /_/ / /| |\  /
 / /____/ // __/ / /___/ _, _/ ___ |/ /
/_____/___/_/   /_____/_/ |_/_/  |_/_/

Starting Liferay Community Edition Portal 7.1.2 CE GA3 (Judson / Build 7102 / January 7, 2019)

2019-04-22 15:15:22.244 INFO  [main][StartupHelper:72] There are no patches installed
2019-04-22 15:15:23.692 INFO  [main][AutoDeployDir:193] Auto deploy scanner started for /opt/liferay-ce-portal-7.1.2-ga3-test/deploy
2019-04-22 15:15:24.113 ERROR [main][PortalInstances:261] NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{GOxE7E5DR4CeDBst6Re0yA}{localhost}{127.0.0.1:9300}]]
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{GOxE7E5DR4CeDBst6Re0yA}{localhost}{127.0.0.1:9300}]]
        at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:349)
....

2019-04-22 15:15:30.991 WARN  [liferay/search_writer/SYSTEM_ENGINE-2][ProxyMessageListener:88] NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{GOxE7E5DR4CeDBst6Re0yA}{localhost}{127.0.0.1:9300}]]
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{GOxE7E5DR4CeDBst6Re0yA}{localhost}{127.0.0.1:9300}]]
        at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:349)
        at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:247)
...
2019-04-22 15:15:47.385 INFO  [main][ThemeHotDeployListener:108] 1 theme for classic-theme is available for use
2019-04-22 15:15:47.490 ERROR [main][PortletLocalServiceImpl:347] Unable to register remote portlet for company 20099 because it does not exist
2019-04-22 15:15:47.520 ERROR [main][PortletLocalServiceImpl:347] Unable to register remote portlet for company 20099 because it does not exist
2019-04-22 15:15:47.533 ERROR [main][PortletLocalServiceImpl:347] Unable to register remote portlet for company 20099 because it does not exist
2019-04-22 15:15:47.542 ERROR [main][PortletLocalServiceImpl:347] Unable to register remote portlet for company 20099 because it does not exist
2019-04-22 15:15:47.550 ERROR [main][PortletLocalServiceImpl:347] Unable to register remote portlet for company 20099 because it does not exist
2019-04-22 15:15:47.558 ERROR [main][PortletLocalServiceImpl:347] Unable to register remote portlet for company 20099 because it does not exist
2019-04-22 15:15:47.607 ERROR [main][PortletLocalServiceImpl:347] Unable to register remote portlet for company 20099 because it does not exist
...
22-Apr-2019 15:15:51.779 SEVERE [http-nio-8080-exec-1] org.apache.catalina.core.ApplicationDispatcher.invoke Servlet.service() for servlet [jsp] threw an exception
 java.lang.NullPointerException
 


And from there, it no longer lets me bring the service up. And all I changed was the LR configuration to connect to ES. I do not know what else could be wrong to leave it in this state.

I hope to find light at the end of the tunnel; surely it is something silly, but I do not know where to look!

Best regards and many thanks in advance.
Jorge Díaz, modified 6 Years ago. Liferay Master Posts: 753 Join Date: 1/9/14 Recent Posts
Hi Admin CAUCE,


Your configuration on the Liferay side was not set up correctly, as Liferay is trying to connect to the 127.0.0.1 machine:
2019-04-22 15:15:20.677 ERROR [Framework Event Dispatcher: Equinox Container: 887023c8-5deb-4b2c-9b45-ad16f32b264a][com_liferay_portal_search:97] FrameworkEvent ERROR
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{GOxE7E5DR4CeDBst6Re0yA}{localhost}{127.0.0.1:9300}]]
        at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:349)
Try the following:
  1. Start Liferay
  2. After starting Liferay, go to Control Panel => Configuration => System Settings => Platform => Search => Elasticsearch and set up here:
    • Operation Mode => REMOTE
    • Transport Addresses => your Elasticsearch hostname and port
  3. Go to Control Panel => Configuration => Search
  4. Execute a full reindex (click on "Reindex all search indexes")
  5. Check the log file
  6. If everything goes fine, try restarting

After doing all that configuration, if you want a *.config file, you can go to Control Panel => Configuration => System Settings => Platform => Search => Elasticsearch, click on the "kebab menu" (three-dots menu) and select "Export".

That will export the current configuration to a file that can be deployed to osgi/configs.
Admin CAUCE, modified 6 Years ago. Junior Member Posts: 48 Join Date: 9/25/18 Recent Posts
Hallelujah!!!

Thanks for this last piece of information. I have followed these steps and now, I think, it connects correctly. I believe the problem was that when I configured my ES server in the LR Control Panel, I was missing the step of reindexing all indexes and then restarting the LR service.

Now I am looking at the logs on the ES server while a full reindex is done:

[2019-04-23T10:11:00,793][INFO][o.e.c.m.MetaDataDeleteIndexService] [JRDELS11] [liferay-20099/xi4TUHX7QQuzoJ1fm1toTA] deleting index
[2019-04-23T10:11:00,862][INFO][o.e.c.m.MetaDataCreateIndexService] [JRDELS11] [liferay-20099] creating index, cause [api], templates [], shards [1]/[0], mappings [LiferayDocumentType]
[2019-04-23T10:11:01,054][INFO][o.e.c.m.MetaDataMappingService] [JRDELS11] [liferay-20099/hSZFoRZSSPuHieksfEdZ1g] update_mapping [LiferayDocumentType]
[2019-04-23T10:11:01,305][INFO][o.e.c.m.MetaDataMappingService] [JRDELS11] [liferay-20099/hSZFoRZSSPuHieksfEdZ1g] update_mapping [LiferayDocumentType]
[2019-04-23T10:11:08,528][INFO][o.e.c.m.MetaDataMappingService] [JRDELS11] [liferay-20099/hSZFoRZSSPuHieksfEdZ1g] update_mapping [LiferayDocumentType]
[2019-04-23T10:11:08,618][INFO][o.e.c.m.MetaDataMappingService] [JRDELS11] [liferay-20099/hSZFoRZSSPuHieksfEdZ1g] update_mapping [LiferayDocumentType]
[2019-04-23T10:11:08,688][INFO][o.e.c.m.MetaDataMappingService] [JRDELS11] [liferay-20099/hSZFoRZSSPuHieksfEdZ1g] update_mapping [LiferayDocumentType]

On the other hand, the Liferay service now comes up correctly, without the problems I had before; in catalina.out I do not see any ERROR messages. Now I just have to add the second node to the cluster. Is there a procedure for adding a second LR node to the cluster? I mean, not the configuration in the portal.properties file, but the way to add it to the cluster.

Best regards and many thanks for the information.
Jorge Díaz, modified 6 Years ago. Liferay Master Posts: 753 Join Date: 1/9/14 Recent Posts
Hi Admin CAUCE,

Admin CAUCE
On the other hand, the Liferay service now comes up correctly, without the problems I had before; in catalina.out I do not see any ERROR messages. Now I just have to add the second node to the cluster. Is there a procedure for adding a second LR node to the cluster? I mean, not the configuration in the portal.properties file, but the way to add it to the cluster.

All configuration related to System Settings is stored in the database (inside the configuration_ table), so no additional tasks are necessary when adding a new node to the cluster.

Note: in case you have already set up more than one Liferay node and you make any change to the Elasticsearch configuration in Liferay, after making the change on the first node you will have to restart the rest of the nodes.
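If you're curious, you can peek at those stored values directly (a sketch; it assumes the default schema, where System Settings entries live in the configuration_ table, and uses the database details from earlier in this thread):

    psql -h 10.203.227.3 -U user jrd_liferay_cluster \
      -c "SELECT configurationId FROM configuration_ WHERE configurationId LIKE '%Elasticsearch%';"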
Admin CAUCE, modified 6 Years ago. Junior Member Posts: 48 Join Date: 9/25/18 Recent Posts
Thanks for the solution.

Now I have this other problem on one of the nodes when I try to start:
2019-04-24 09:10:51.025 ERROR [Start Level: Equinox Container: 7100e0e6-fb71-4e0f-9a56-d3d491cc3a6e][Cache:224] Unable to set localhost. This prevents creation of a GUID. Cause was: JRDLRC10: JRDLRC10: Name or service not known
java.net.UnknownHostException: JRDLRC10: JRDLRC10: Name or service not known
        at java.net.InetAddress.getLocalHost(InetAddress.java:1505)
        at net.sf.ehcache.Cache.<clinit>(Cache.java:222)
        at net.sf.ehcache.config.ConfigurationHelper.createCache(ConfigurationHelper.java:305)
        at net.sf.ehcache.config.ConfigurationHelper.createDefaultCache(ConfigurationHelper.java:223)
        at net.sf.ehcache.CacheManager.configure(CacheManager.java:759)
        at net.sf.ehcache.CacheManager.doInit(CacheManager.java:464)
        at net.sf.ehcache.CacheManager.init(CacheManager.java:388)
        at net.sf.ehcache.CacheManager.<init>(CacheManager.java:264)

...
Caused by: java.net.UnknownHostException: JRDLRC10: Name or service not known
        at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
        at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
        at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
        at java.net.InetAddress.getLocalHost(InetAddress.java:1500)
Olaf Kock, modified 6 Years ago. Liferay Legend Posts: 6441 Join Date: 9/23/08 Recent Posts
Admin CAUCE
Now I have this other problem on one of the nodes when I try to start:
2019-04-24 09:10:51.025 ERROR [Start Level: Equinox Container: 7100e0e6-fb71-4e0f-9a56-d3d491cc3a6e][Cache:224] Unable to set localhost. This prevents creation of a GUID. Cause was: JRDLRC10: JRDLRC10: Name or service not known
java.net.UnknownHostException: JRDLRC10: JRDLRC10: Name or service not known
        at java.net.InetAddress.getLocalHost(InetAddress.java:1505)
        at net.sf.ehcache.Cache.<clinit>(Cache.java:222)
Did you google the error message (no pun intended ;))? It is actually pretty good, and the first hit on Stack Overflow even has to do with Ehcache.

Maybe a hostname typo somewhere?
Admin CAUCE, modified 6 Years ago. Junior Member Posts: 48 Join Date: 9/25/18 Recent Posts
Finally I found the error.

The DNS configuration was missing in the machine's resolv.conf file. Problem solved.

Now I have three questions to ask:

1 - Currently I launch the Liferay service as the "root" user, since the Liferay path is owned by user and group "root". Could it be changed so that the LR service starts as a user created manually for this purpose, for example "tomcat"? If so, how should I do it?

2 - Could a service be set up to stop and/or start the LR service using systemctl? To stop the service I currently do a kill -9 <process id>, and to start it, sh [path_lr]/tomcat/bin/catalina.sh start. A service would be more comfortable for the administrator.

3 - I'm thinking about creating a process to deploy WAR files to all the nodes of the cluster simultaneously: for example, create a folder where the files are deposited, and launch a script that copies those files into the deploy path of each of the nodes, so that all of them have the same version of the file deployed. Since Liferay has hot deploy, this would be transparent and easier for the developer. Is this a good idea?

regards
Olaf Kock, modified 6 Years ago. Liferay Legend Posts: 6441 Join Date: 9/23/08 Recent Posts
Admin CAUCE

1 - Currently I launch the Liferay service as the "root" user, since the Liferay path is owned by user and group "root". Could it be changed so that the LR service starts as a user created manually for this purpose, for example "tomcat"? If so, how should I do it?
No root. Period. Check this blog article series (there are linked articles for the next chapters).

2 - Could a service be set up to stop and/or start the LR service using systemctl? To stop the service I currently do a kill -9 <process id>, and to start it, sh [path_lr]/tomcat/bin/catalina.sh start. A service would be more comfortable for the administrator.
Don't do kill -9 unless you've absolutely exhausted all of your other options. Check the blog series linked above, or just go with systemctl.
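For illustration, a minimal systemd unit along those lines (a sketch only - paths, user and stop timeout are assumptions based on this thread; adjust them to your installation):

    # /etc/systemd/system/liferay.service (example)
    [Unit]
    Description=Liferay Portal CE 7.1 (Tomcat bundle)
    After=network.target

    [Service]
    Type=forking
    # Run as the dedicated non-root user that owns the Liferay directory
    User=tomcat
    Group=tomcat
    Environment=CATALINA_PID=/opt/liferay-ce-portal-7.1.2-ga3/tomcat-9.0.10/temp/catalina.pid
    PIDFile=/opt/liferay-ce-portal-7.1.2-ga3/tomcat-9.0.10/temp/catalina.pid
    ExecStart=/opt/liferay-ce-portal-7.1.2-ga3/tomcat-9.0.10/bin/startup.sh
    ExecStop=/opt/liferay-ce-portal-7.1.2-ga3/tomcat-9.0.10/bin/shutdown.sh 60 -force
    TimeoutStopSec=120

    [Install]
    WantedBy=multi-user.target

After placing the file, systemctl daemon-reload && systemctl enable --now liferay would start it now and at boot.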

3 - I'm thinking about creating a process to deploy WAR files to all the nodes of the cluster simultaneously: for example, create a folder where the files are deposited, and launch a script that copies those files into the deploy path of each of the nodes, so that all of them have the same version of the file deployed. Since Liferay has hot deploy, this would be transparent and easier for the developer. Is this a good idea?
Whether this is a good idea depends on your other deployment strategies. My main concern when deploying new software is always: can I rebuild the same server if the current one fails hard? If you're just operating a single server without taking care of the occasional server outage and recovery, you're fine either way.
Otherwise, you should always automate the deployment. How exactly you do that largely depends on your other infrastructure and the tools that you're using for deployment.

Notice that maintenance of a running server sometimes also means the undeployment of existing components, not only the deployment of additional ones.

I know people who'll never hot-deploy. Others do it a limited number of times before restarting the server. And yet others will just hot-deploy as they like. Whatever you do: automate the (un)deployment as well as you can. Optimize for recovery, not for the individual update of the system.
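As a trivial sketch of the copy-to-all-nodes idea from question 3 (node IPs and paths are examples from this thread; real automation usually belongs in your CI or configuration-management tooling):

    #!/bin/sh
    # Copy every WAR from a staging folder into each node's Liferay deploy folder
    STAGING=/opt/staging/deploy
    NODES="10.200.23.7 10.200.23.6"

    for node in $NODES; do
        for war in "$STAGING"/*.war; do
            scp "$war" "tomcat@$node:/opt/liferay-ce-portal-7.1.2-ga3/deploy/"
        done
    done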
Admin CAUCE, modified 6 Years ago. Junior Member Posts: 48 Join Date: 9/25/18 Recent Posts
Hi again Olaf,

Thank you very much for your tips. I have implemented some improvements, such as starting the service with a non-root user and creating a systemd unit so it can be started and stopped more comfortably via systemctl.

Practically everything is in operation now, with the developments correctly deployed, and I am doing load tests with JMeter to see how it behaves under a large number of threads, which is the load it will have in the real production environment with the users that will access it.

I was looking at documentation on the Liferay version update process. I understand that when a version higher than mine (7.1.2) is released and is stable, it is advisable to update. My question is: what process must be followed to perform a correct version update in Liferay?

Thank you very much again for all the information provided.

regards
Olaf Kock, modified 6 Years ago. Liferay Legend Posts: 6441 Join Date: 9/23/08 Recent Posts
Admin CAUCE

I was looking at documentation on the Liferay version update process. I understand that when a version higher than mine (7.1.2) is released and is stable, it is advisable to update. My question is: what process must be followed to perform a correct version update in Liferay?
As I'm typically on DXP rather than CE, I can't tell you from experience whether you need to run the upgrade tool for minor upgrades as well, or just for major ones. Sorry, you'd need to find that out for yourself, or through somebody else chiming in on this thread.

For major upgrades, you definitely need the tool.
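For reference, if it turns out to be needed: the CE bundle ships a database upgrade client under the tools folder. Invoking it looks roughly like this (a sketch assuming the 7.1 bundle layout; the tool reads its portal-upgrade-*.properties configuration, so check the upgrade documentation first):

    cd /opt/liferay-ce-portal-7.1.2-ga3/tools/portal-tools-db-upgrade-client
    ./db_upgrade.sh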
Achmed Tyrannus Albab, modified 5 Years ago. Regular Member Posts: 158 Join Date: 3/5/10 Recent Posts
Hi Admin CAUCE and everyone else,

Good job on configuring the cluster. Now I may need some assistance.
Considering this is my very first time setting up a cluster environment, I may have missed some pivotal configuration(s), even after reading the documentation for the 100th time at https://portal.liferay.dev/docs/7-1/deploy/-/knowledge_base/d/liferay-clustering .

Before going forward, let me introduce you to my setup.
I am using Liferay CE liferay-portal-7.1.2-ga3 (Tomcat) on 2 Linux machines, each fronted by its own nginx.
Both point to a single database server. And on top there is also a load balancer.

In my previous setup (being the smartass I was), both Liferays connected to the same database and used dbstore for the data files.
And I did an rsync for whichever directories I thought mattered. Of course, this setup failed almost miserably.
The issue was (as Olaf had mentioned) that a change on server A doesn't show up on server B, at least not immediately.
It only does after almost an hour or so, or as soon as I clear the database cache on server B. I did some more tweaking, and of course it didn't work.

So now I'm back at the documentation that I had failed to understand the first time around. Still, I am not getting it right.
Sorry for being long-winded, but here is the start of my question:
  1. All nodes should point to the same database or database cluster. - DONE
  2. Documents and Media repositories must have the same configuration and be accessible to all nodes of the cluster. - DONE DBSTORE
  3. Search should be on a separate search server that is optionally clustered. - DON'T HAVE EXTRA SERVER FOR THIS
  4. Cluster Link must be enabled so the cache replicates across all nodes of the cluster. - PROBABLY WHERE MY ISSUE IS
  5. Applications must be auto-deployed to each node individually. - HAVEN'T REACH HERE YET
After setting
cluster.link.enabled=true
in portal-ext.properties, I started Liferay on server A.
This is my error log:
....
INFO &nbsp;[main][ModuleFrameworkImpl:1636] Starting dynamic bundles
INFO &nbsp;[Start Level: Equinox Container: e9a28482-d79f-413e-904c-6xxxxxxxxxxx][JGroupsClusterChannelFactory:158] Autodetecting JGroups outgoing IP address and interface for www.google.com:80
INFO &nbsp;[Start Level: Equinox Container: e9a28482-d79f-413e-904c-6xxxxxxxxxxx][JGroupsClusterChannelFactory:197] Setting JGroups outgoing IP address to 11.11.1.10 and interface to ens192
INFO &nbsp;[Start Level: Equinox Container: e9a28482-d79f-413e-904c-6xxxxxxxxxxx][JGroupsReceiver:91] Accepted view [DEV1-15243|0] (1) [DEV1-15243]
INFO &nbsp;[Start Level: Equinox Container: e9a28482-d79f-413e-904c-6xxxxxxxxxxx][JGroupsClusterChannel:105] Create a new JGroups channel {channelName: liferay-channel-control, localAddress: DEV1-15243, properties: UDP(discard_incompatible_packets=true;internal_thread_pool_min_threads=2;internal_thread_pool_keep_alive_time=30000;time_service_interval=500;thread_pool_max_threads=10;internal_thread_pool_queue_enabled=true;mcast_group_addr=239.255.0.1;ergonomics=true;enable_unicast_bundling=true;port_range=50;loopback_copy=false;thread_naming_pattern=cl;suppress_time_out_of_buffer_space=60000;internal_thread_pool_rejection_policy=discard;internal_thread_pool_enabled=true;stats=true;oob_thread_pool_enabled=true;oob_thread_pool_rejection_policy=discard;suppress_time_different_version_warnings=60000;mcast_send_buf_size=100000;id=21;thread_pool_rejection_policy=Discard;logical_addr_cache_max_size=2000;suppress_time_different_cluster_warnings=60000;loopback=true;timer_rejection_policy=abort;oob_thread_pool_min_threads=2;max_bundle_timeout=20;enable_diagnostics=true;mcast_recv_buf_size=500000;disable_loopback=false;internal_thread_pool_max_threads=4;external_port=0;oob_thread_pool_max_threads=10;log_discard_msgs=true;name=UDP;oob_thread_pool_keep_alive_time=30000;bind_addr=10.88.2.20;wheel_size=200;bundler_capacity=20000;log_discard_msgs_version=true;enable_batching=true;tick_time=50;timer_max_threads=4;ucast_send_buf_size=100000;thread_pool_queue_enabled=true;enable_bundling=true;ucast_recv_buf_size=64000;oob_thread_pool_queue_enabled=false;thread_pool_keep_alive_time=30000;bind_port=0;thread_pool_min_threads=2;ignore_dont_bundle=true;ip_ttl=8;bind_interface_str=;diagnostics_ttl=8;tos=8;loopback_separate_thread=true;logical_addr_cache_expiration=120000;oob_thread_pool_queue_max_size=500;diagnostics_addr=224.0.75.75;receive_on_all_interfaces=false;mcast_port=23301;internal_thread_pool_queue_max_size=500;timer_queue_max_size=500;thread_pool_queue_max_size=10000;max_bundle_size=64000;physical_addr_max_fetch_attempts=1;ip_mcast=true;timer_min_threads=2;thread_pool_enabled=true;bundler_type=transfer-queue;timer_keep_alive_time=5000;logical_addr_cache_reaper_interval=60000;timer_type=new3;diagnostics_port=7500;who_has_cache_timeout=2000):PING(async_discovery_use_separate_thread_per_request=false;ergonomics=true;stagger_timeout=0;force_sending_discovery_rsps=true;async_discovery=false;timeout=3000;always_send_physical_addr_with_discovery_request=true;max_members_in_discovery_request=500;send_cache_on_join=false;num_initial_srv_members=0;break_on_coord_rsp=true;stats=true;use_disk_cache=false;num_initial_members=10;name=PING;discovery_rsp_expiry_time=60000;id=6;return_entire_cache=false):MERGE3(check_interval=48000;stats=true;min_interval=10000;ergonomics=true;name=MERGE3;id=54;max_participants_in_merge=100;max_interval=30000):FD_SOCK(get_cache_timeout=1000;sock_conn_timeout=1000;client_bind_port=0;ergonomics=true;start_port=0;port_range=50;suspect_msg_interval=5000;num_tries=3;bind_interface_str=;stats=true;external_port=0;name=FD_SOCK;bind_addr=127.0.0.1;keep_alive=true;id=3):FD_ALL(use_time_service=true;stats=true;timeout_check_interval=2000;ergonomics=true;name=FD_ALL;interval=8000;id=29;timeout=40000;msg_counts_as_heartbeat=false):VERIFY_SUSPECT(num_msgs=1;use_mcast_rsps=false;bind_interface_str=;stats=true;ergonomics=true;name=VERIFY_SUSPECT;bind_addr=127.0.0.1;id=13;timeout=1500;use_icmp=false):NAKACK2(resend_last_seqno_max_times=3;use_mcast_xmit=false;ergonomics=true;xmit_table_msgs_per_row=2000;xm
it_table_max_compaction_time=30000;become_server_queue_size=50;xmit_interval=500;print_stability_history_on_failed_xmit=false;resend_last_seqno=true;max_xmit_req_size=511600;discard_delivered_msgs=true;suppress_time_non_member_warnings=60000;max_msg_batch_size=500;xmit_table_num_rows=100;stats=true;xmit_from_random_member=false;log_discard_msgs=true;log_not_found_msgs=true;xmit_table_resize_factor=1.2;name=NAKACK2;id=57;max_rebroadcast_timeout=2000;use_mcast_xmit_req=false):UNICAST3(ergonomics=true;xmit_table_msgs_per_row=2000;xmit_table_max_compaction_time=60000;ack_threshold=5;sync_min_interval=2000;max_retransmit_time=60000;xmit_interval=500;max_xmit_req_size=511600;conn_close_timeout=10000;max_msg_batch_size=500;conn_expiry_timeout=0;ack_batches_immediately=true;xmit_table_num_rows=100;stats=true;xmit_table_resize_factor=1.2;log_not_found_msgs=true;name=UNICAST3;id=64):STABLE(cap=0.1;stability_delay=0;stats=true;ergonomics=true;name=STABLE;desired_avg_gossip=50000;max_bytes=4000000;id=16;send_stable_msgs_to_coord_only=true):GMS(max_join_attempts=10;print_local_addr=true;handle_concurrent_startup=true;view_bundling=true;leave_timeout=1000;log_view_warnings=true;install_view_locally_first=false;ergonomics=true;use_delta_views=true;resume_task_timeout=20000;use_flush_if_present=true;use_merger2=true;print_physical_addrs=true;join_timeout=2000;view_ack_collection_timeout=2000;stats=true;num_prev_views=10;merge_timeout=5000;max_bundling_time=50;name=GMS;num_prev_mbrs=50;id=14;log_collect_msgs=false;membership_change_policy=org.jgroups.protocols.pbcast.GMS$DefaultMembershipPolicy@2ec4eccd):UFC(min_threshold=0.4;stats=true;ergonomics=true;max_credits=2000000;name=UFC;min_credits=800000;id=45;max_block_time=5000;ignore_synchronous_response=false):MFC(min_threshold=0.4;stats=true;ergonomics=true;max_credits=2000000;name=MFC;min_credits=800000;id=44;max_block_time=5000;ignore_synchronous_response=false):FRAG2(frag_size=60000;stats=true;ergonomics=true;name=FRAG2;id=5):RSVP(ack_on_delivery=true;stats=true;ergonomics=true;name=RSVP;resend_interval=2000;id=55;throw_exception_on_timeout=true;timeout=10000)}
INFO &nbsp;[Start Level: Equinox Container: e9a28482-d79f-413e-904c-6xxxxxxxxxxx][JGroupsReceiver:91] Accepted view [DEV1-34758|0] (1) [DEV1-34758]
INFO &nbsp;[Start Level: Equinox Container: e9a28482-d79f-413e-904c-6xxxxxxxxxxx][JGroupsClusterChannel:105] Create a new JGroups channel {channelName: liferay-channel-transport-0, localAddress: DEV1-34758, properties: UDP(discard_incompatible_packets=true;internal_thread_pool_min_threads=2;internal_thread_pool_keep_alive_time=30000;time_service_interval=500;thread_pool_max_threads=10;internal_thread_pool_queue_enabled=true;mcast_group_addr=239.255.0.2;ergonomics=true;enable_unicast_bundling=true;port_range=50;loopback_copy=false;thread_naming_pattern=cl;suppress_time_out_of_buffer_space=60000;internal_thread_pool_rejection_policy=discard;internal_thread_pool_enabled=true;stats=true;oob_thread_pool_enabled=true;oob_thread_pool_rejection_policy=discard;suppress_time_different_version_warnings=60000;mcast_send_buf_size=100000;id=21;thread_pool_rejection_policy=Discard;logical_addr_cache_max_size=2000;suppress_time_different_cluster_warnings=60000;loopback=true;timer_rejection_policy=abort;oob_thread_pool_min_threads=2;max_bundle_timeout=20;enable_diagnostics=true;mcast_recv_buf_size=500000;disable_loopback=false;internal_thread_pool_max_threads=4;external_port=0;oob_thread_pool_max_threads=10;log_discard_msgs=true;name=UDP;oob_thread_pool_keep_alive_time=30000;bind_addr=10.88.2.20;wheel_size=200;bundler_capacity=20000;log_discard_msgs_version=true;enable_batching=true;tick_time=50;timer_max_threads=4;ucast_send_buf_size=100000;thread_pool_queue_enabled=true;enable_bundling=true;ucast_recv_buf_size=64000;oob_thread_pool_queue_enabled=false;thread_pool_keep_alive_time=30000;bind_port=0;thread_pool_min_threads=2;ignore_dont_bundle=true;ip_ttl=8;bind_interface_str=;diagnostics_ttl=8;tos=8;loopback_separate_thread=true;logical_addr_cache_expiration=120000;oob_thread_pool_queue_max_size=500;diagnostics_addr=224.0.75.75;receive_on_all_interfaces=false;mcast_port=23302;internal_thread_pool_queue_max_size=500;timer_queue_max_size=500;thread_pool_queue_max_size=10000;max_bundle_size=64000;physical_addr_max_fetch_attempts=1;ip_mcast=true;timer_min_threads=2;thread_pool_enabled=true;bundler_type=transfer-queue;timer_keep_alive_time=5000;logical_addr_cache_reaper_interval=60000;timer_type=new3;diagnostics_port=7500;who_has_cache_timeout=2000):PING(async_discovery_use_separate_thread_per_request=false;ergonomics=true;stagger_timeout=0;force_sending_discovery_rsps=true;async_discovery=false;timeout=3000;always_send_physical_addr_with_discovery_request=true;max_members_in_discovery_request=500;send_cache_on_join=false;num_initial_srv_members=0;break_on_coord_rsp=true;stats=true;use_disk_cache=false;num_initial_members=10;name=PING;discovery_rsp_expiry_time=60000;id=6;return_entire_cache=false):MERGE3(check_interval=48000;stats=true;min_interval=10000;ergonomics=true;name=MERGE3;id=54;max_participants_in_merge=100;max_interval=30000):FD_SOCK(get_cache_timeout=1000;sock_conn_timeout=1000;client_bind_port=0;ergonomics=true;start_port=0;port_range=50;suspect_msg_interval=5000;num_tries=3;bind_interface_str=;stats=true;external_port=0;name=FD_SOCK;bind_addr=127.0.0.1;keep_alive=true;id=3):FD_ALL(use_time_service=true;stats=true;timeout_check_interval=2000;ergonomics=true;name=FD_ALL;interval=8000;id=29;timeout=40000;msg_counts_as_heartbeat=false):VERIFY_SUSPECT(num_msgs=1;use_mcast_rsps=false;bind_interface_str=;stats=true;ergonomics=true;name=VERIFY_SUSPECT;bind_addr=127.0.0.1;id=13;timeout=1500;use_icmp=false):NAKACK2(resend_last_seqno_max_times=3;use_mcast_xmit=false;ergonomics=true;xmit_table_msgs_per_row=200
0;xmit_table_max_compaction_time=30000;become_server_queue_size=50;xmit_interval=500;print_stability_history_on_failed_xmit=false;resend_last_seqno=true;max_xmit_req_size=511600;discard_delivered_msgs=true;suppress_time_non_member_warnings=60000;max_msg_batch_size=500;xmit_table_num_rows=100;stats=true;xmit_from_random_member=false;log_discard_msgs=true;log_not_found_msgs=true;xmit_table_resize_factor=1.2;name=NAKACK2;id=57;max_rebroadcast_timeout=2000;use_mcast_xmit_req=false):UNICAST3(ergonomics=true;xmit_table_msgs_per_row=2000;xmit_table_max_compaction_time=60000;ack_threshold=5;sync_min_interval=2000;max_retransmit_time=60000;xmit_interval=500;max_xmit_req_size=511600;conn_close_timeout=10000;max_msg_batch_size=500;conn_expiry_timeout=0;ack_batches_immediately=true;xmit_table_num_rows=100;stats=true;xmit_table_resize_factor=1.2;log_not_found_msgs=true;name=UNICAST3;id=64):STABLE(cap=0.1;stability_delay=0;stats=true;ergonomics=true;name=STABLE;desired_avg_gossip=50000;max_bytes=4000000;id=16;send_stable_msgs_to_coord_only=true):GMS(max_join_attempts=10;print_local_addr=true;handle_concurrent_startup=true;view_bundling=true;leave_timeout=1000;log_view_warnings=true;install_view_locally_first=false;ergonomics=true;use_delta_views=true;resume_task_timeout=20000;use_flush_if_present=true;use_merger2=true;print_physical_addrs=true;join_timeout=2000;view_ack_collection_timeout=2000;stats=true;num_prev_views=10;merge_timeout=5000;max_bundling_time=50;name=GMS;num_prev_mbrs=50;id=14;log_collect_msgs=false;membership_change_policy=org.jgroups.protocols.pbcast.GMS$DefaultMembershipPolicy@1197b2de):UFC(min_threshold=0.4;stats=true;ergonomics=true;max_credits=2000000;name=UFC;min_credits=800000;id=45;max_block_time=5000;ignore_synchronous_response=false):MFC(min_threshold=0.4;stats=true;ergonomics=true;max_credits=2000000;name=MFC;min_credits=800000;id=44;max_block_time=5000;ignore_synchronous_response=false):FRAG2(frag_size=60000;stats=true;ergonomics=true;name=FRAG2;id=5):RSVP(ack_on_delivery=true;stats=true;ergonomics=true;name=RSVP;resend_interval=2000;id=55;throw_exception_on_timeout=true;timeout=10000)}
INFO &nbsp;[main][ModuleFrameworkImpl:1725] Started dynamic bundles
INFO &nbsp;[main][ModuleFrameworkImpl:413] Navigate to Control Panel &gt; Configuration &gt; Gogo Shell and enter "lb" to see all bundles
WARN &nbsp;[Elasticsearch initialization thread][EmbeddedElasticsearchConnection:288] Liferay is configured to use embedded Elasticsearch as its search engine. Do NOT use embedded Elasticsearch in production. Embedded Elasticsearch is useful for development and demonstration purposes. Refer to the documentation for details on the limitations of embedded Elasticsearch. Remote Elasticsearch connections can be configured in the Control Panel.
ERROR [Framework Event Dispatcher: Equinox Container: e9a28482-d79f-413e-904c-6xxxxxxxxxxx][com_liferay_portal_search:97] FrameworkEvent ERROR
java.lang.IllegalStateException: Unable to initialize Elasticsearch cluster: {........
First question before I go to the next: am I supposed to configure something elsewhere, like in nginx or in the Tomcat within Liferay, to allow the cluster to work?
I even tried purging the elasticsearch directory, but the error persists. Am I supposed to start server A and server B together to make this work?

Thanks in advance everyone.

Jorge Díaz, modified 5 Years ago. Liferay Master Posts: 753 Join Date: 1/9/14 Recent Posts
Hi Achmed,
It seems you are having some trouble with Elasticsearch:
WARN  [Elasticsearch initialization thread][EmbeddedElasticsearchConnection:288] Liferay is configured to use embedded Elasticsearch as its search engine. Do NOT use embedded Elasticsearch in production. Embedded Elasticsearch is useful for development and demonstration purposes. Refer to the documentation for details on the limitations of embedded Elasticsearch. Remote Elasticsearch connections can be configured in the Control Panel.
ERROR [Framework Event Dispatcher: Equinox Container: e9a28482-d79f-413e-904c-6xxxxxxxxxxx][com_liferay_portal_search:97] FrameworkEvent ERROR
java.lang.IllegalStateException: Unable to initialize Elasticsearch cluster: {........
Before configuring your cluster, you should configure the Remote Elasticsearch: