RE: Liferay cluster configurationhttps://liferay.dev/en/c/message_boards/find_thread?p_l_id=119785333&threadId=1129368422024-03-29T06:23:29ZRE: Liferay cluster configurationJorge Diazhttps://liferay.dev/en/c/message_boards/find_message?p_l_id=119785333&messageId=1180964472019-12-17T07:52:11ZHi Achmed,<br />It seems you are having some trouble with Elasticsearch:<br /><blockquote><table><tr><td><em>WARN [Elasticsearch initialization thread][EmbeddedElasticsearchConnection:288] Liferay is configured to use embedded Elasticsearch as its search engine. Do NOT use embedded Elasticsearch in production. Embedded Elasticsearch is useful for development and demonstration purposes. Refer to the documentation for details on the limitations of embedded Elasticsearch. Remote Elasticsearch connections can be configured in the Control Panel.<br /></em></td></tr></table><table><tr><td><em>ERROR [Framework Event Dispatcher: Equinox Container: e9a28482-d79f-413e-904c-6xxxxxxxxxxx][com_liferay_portal_search:97] FrameworkEvent ERROR</em></td></tr></table><em>java.lang.IllegalStateException: Unable to initialize Elasticsearch cluster: {........<br /></em></blockquote>Before configuring your cluster, you should set up a remote Elasticsearch connection:<br /><ul style="list-style: disc outside;"><li> <a href="https://portal.liferay.dev/docs/7-0/deploy/-/knowledge_base/d/configuring-elasticsearch-for-liferay-0#embedded-vs-remote-operation-mode">https://portal.liferay.dev/docs/7-0/deploy/-/knowledge_base/d/configuring-elasticsearch-for-liferay-0#embedded-vs-remote-operation-mode</a></li></ul>Jorge Diaz2019-12-17T07:52:11ZRE: Liferay cluster configurationAchmed Tyrannus 
Albabhttps://liferay.dev/en/c/message_boards/find_message?p_l_id=119785333&messageId=1180960982019-12-17T03:28:21Z<html><head></head><body>Hi Admin CAUCE and everyone else,<br><br>Good job on configuring the cluster. Now I may need some assistance.<br>Considering this is my very first time setting up a cluster environment, I may have missed pivotal configuration(s) even after reading the documentation for the 100th time at <a href="https://portal.liferay.dev/docs/7-1/deploy/-/knowledge_base/d/liferay-clustering">https://portal.liferay.dev/docs/7-1/deploy/-/knowledge_base/d/liferay-clustering</a>.<br><br>Before going forward, let me describe my setup.<br>I am using Liferay CE liferay-portal-7.1.2-ga3 (Tomcat) on 2 Linux machines, each fronted by its own nginx.<br>Both point to a single database server, and on top of that there is a load balancer.<br><br>In my previous setup (being the smartass I was), both Liferay instances connected to the same database and used DBStore for data files,<br>and I rsynced whichever directories I thought mattered. Of course this setup failed almost miserably.<br>The issue was (as Olaf had mentioned) that a change on server A didn't show up on server B, at least not immediately.<br>It only did after almost an hour or so, or as soon as I cleared the database cache on server B. I did some more tweaking and of course it didn't work.<br><br>So now I'm back at the documentation that I had failed to understand the first time around. 
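For reference, cache replication between the nodes is the job of Cluster Link rather than of file syncing; a minimal portal-ext.properties sketch for a two-node setup (the autodetect address shown is Liferay's default and is only an assumption about this environment):

```properties
# Enable Cluster Link so cache invalidation messages replicate
# between nodes over JGroups (UDP multicast by default)
cluster.link.enabled=true

# Host used to autodetect the outgoing JGroups interface.
# Default value, shown as an assumption -- any host reachable
# from both nodes works.
cluster.link.autodetect.address=www.google.com:80
```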
Still I am not getting it right.<br>Sorry for being long-winded, but here is the start of my question:<br><ol style="list-style: decimal outside;" start="1"><li><a href="https://portal.liferay.dev/docs/7-1/deploy/-/knowledge_base/d/point-all-nodes-to-the-same-database">All nodes should point to the same database or database cluster.</a> - DONE</li><li><a href="https://portal.liferay.dev/docs/7-1/deploy/-/knowledge_base/d/configure-documents-and-media-the-same-for-all-nodes">Documents and Media repositories must have the same configuration and be accessible to all nodes of the cluster.</a> - DONE, DBSTORE</li><li><a href="https://portal.liferay.dev/docs/7-1/deploy/-/knowledge_base/d/clustering-search">Search should be on a separate search server that is optionally clustered.</a> - DON'T HAVE AN EXTRA SERVER FOR THIS</li><li><a href="https://portal.liferay.dev/docs/7-1/deploy/-/knowledge_base/d/enabling-cluster-link">Cluster Link must be enabled so the cache replicates across all nodes of the cluster.</a> - PROBABLY WHERE MY ISSUE IS</li><li><a href="https://portal.liferay.dev/docs/7-1/deploy/-/knowledge_base/d/auto-deploy-to-all-nodes">Applications must be auto-deployed to each node individually.</a> - HAVEN'T REACHED HERE YET</li></ol>After setting <pre><code>cluster.link.enabled=true</code></pre>in portal-ext.properties, I started Liferay on server A.<br>This is my error log:<br><pre><code>....
INFO &nbsp;[main][ModuleFrameworkImpl:1636] Starting dynamic bundles
INFO &nbsp;[Start Level: Equinox Container: e9a28482-d79f-413e-904c-6xxxxxxxxxxx][JGroupsClusterChannelFactory:158] Autodetecting JGroups outgoing IP address and interface for www.google.com:80
INFO &nbsp;[Start Level: Equinox Container: e9a28482-d79f-413e-904c-6xxxxxxxxxxx][JGroupsClusterChannelFactory:197] Setting JGroups outgoing IP address to 11.11.1.10 and interface to ens192
INFO &nbsp;[Start Level: Equinox Container: e9a28482-d79f-413e-904c-6xxxxxxxxxxx][JGroupsReceiver:91] Accepted view [DEV1-15243|0] (1) [DEV1-15243]
INFO &nbsp;[Start Level: Equinox Container: e9a28482-d79f-413e-904c-6xxxxxxxxxxx][JGroupsClusterChannel:105] Create a new JGroups channel {channelName: liferay-channel-control, localAddress: DEV1-15243, properties: UDP(discard_incompatible_packets=true;internal_thread_pool_min_threads=2;internal_thread_pool_keep_alive_time=30000;time_service_interval=500;thread_pool_max_threads=10;internal_thread_pool_queue_enabled=true;mcast_group_addr=239.255.0.1;ergonomics=true;enable_unicast_bundling=true;port_range=50;loopback_copy=false;thread_naming_pattern=cl;suppress_time_out_of_buffer_space=60000;internal_thread_pool_rejection_policy=discard;internal_thread_pool_enabled=true;stats=true;oob_thread_pool_enabled=true;oob_thread_pool_rejection_policy=discard;suppress_time_different_version_warnings=60000;mcast_send_buf_size=100000;id=21;thread_pool_rejection_policy=Discard;logical_addr_cache_max_size=2000;suppress_time_different_cluster_warnings=60000;loopback=true;timer_rejection_policy=abort;oob_thread_pool_min_threads=2;max_bundle_timeout=20;enable_diagnostics=true;mcast_recv_buf_size=500000;disable_loopback=false;internal_thread_pool_max_threads=4;external_port=0;oob_thread_pool_max_threads=10;log_discard_msgs=true;name=UDP;oob_thread_pool_keep_alive_time=30000;bind_addr=10.88.2.20;wheel_size=200;bundler_capacity=20000;log_discard_msgs_version=true;enable_batching=true;tick_time=50;timer_max_threads=4;ucast_send_buf_size=100000;thread_pool_queue_enabled=true;enable_bundling=true;ucast_recv_buf_size=64000;oob_thread_pool_queue_enabled=false;thread_pool_keep_alive_time=30000;bind_port=0;thread_pool_min_threads=2;ignore_dont_bundle=true;ip_ttl=8;bind_interface_str=;diagnostics_ttl=8;tos=8;loopback_separate_thread=true;logical_addr_cache_expiration=120000;oob_thread_pool_queue_max_size=500;diagnostics_addr=224.0.75.75;receive_on_all_interfaces=false;mcast_port=23301;internal_thread_pool_queue_max_size=500;timer_queue_max_size=500;thread_pool_queue_max_size=10000;max_bundle_siz
e=64000;physical_addr_max_fetch_attempts=1;ip_mcast=true;timer_min_threads=2;thread_pool_enabled=true;bundler_type=transfer-queue;timer_keep_alive_time=5000;logical_addr_cache_reaper_interval=60000;timer_type=new3;diagnostics_port=7500;who_has_cache_timeout=2000):PING(async_discovery_use_separate_thread_per_request=false;ergonomics=true;stagger_timeout=0;force_sending_discovery_rsps=true;async_discovery=false;timeout=3000;always_send_physical_addr_with_discovery_request=true;max_members_in_discovery_request=500;send_cache_on_join=false;num_initial_srv_members=0;break_on_coord_rsp=true;stats=true;use_disk_cache=false;num_initial_members=10;name=PING;discovery_rsp_expiry_time=60000;id=6;return_entire_cache=false):MERGE3(check_interval=48000;stats=true;min_interval=10000;ergonomics=true;name=MERGE3;id=54;max_participants_in_merge=100;max_interval=30000):FD_SOCK(get_cache_timeout=1000;sock_conn_timeout=1000;client_bind_port=0;ergonomics=true;start_port=0;port_range=50;suspect_msg_interval=5000;num_tries=3;bind_interface_str=;stats=true;external_port=0;name=FD_SOCK;bind_addr=127.0.0.1;keep_alive=true;id=3):FD_ALL(use_time_service=true;stats=true;timeout_check_interval=2000;ergonomics=true;name=FD_ALL;interval=8000;id=29;timeout=40000;msg_counts_as_heartbeat=false):VERIFY_SUSPECT(num_msgs=1;use_mcast_rsps=false;bind_interface_str=;stats=true;ergonomics=true;name=VERIFY_SUSPECT;bind_addr=127.0.0.1;id=13;timeout=1500;use_icmp=false):NAKACK2(resend_last_seqno_max_times=3;use_mcast_xmit=false;ergonomics=true;xmit_table_msgs_per_row=2000;xmit_table_max_compaction_time=30000;become_server_queue_size=50;xmit_interval=500;print_stability_history_on_failed_xmit=false;resend_last_seqno=true;max_xmit_req_size=511600;discard_delivered_msgs=true;suppress_time_non_member_warnings=60000;max_msg_batch_size=500;xmit_table_num_rows=100;stats=true;xmit_from_random_member=false;log_discard_msgs=true;log_not_found_msgs=true;xmit_table_resize_factor=1.2;name=NAKACK2;id=57;max_rebroadcast_timeo
ut=2000;use_mcast_xmit_req=false):UNICAST3(ergonomics=true;xmit_table_msgs_per_row=2000;xmit_table_max_compaction_time=60000;ack_threshold=5;sync_min_interval=2000;max_retransmit_time=60000;xmit_interval=500;max_xmit_req_size=511600;conn_close_timeout=10000;max_msg_batch_size=500;conn_expiry_timeout=0;ack_batches_immediately=true;xmit_table_num_rows=100;stats=true;xmit_table_resize_factor=1.2;log_not_found_msgs=true;name=UNICAST3;id=64):STABLE(cap=0.1;stability_delay=0;stats=true;ergonomics=true;name=STABLE;desired_avg_gossip=50000;max_bytes=4000000;id=16;send_stable_msgs_to_coord_only=true):GMS(max_join_attempts=10;print_local_addr=true;handle_concurrent_startup=true;view_bundling=true;leave_timeout=1000;log_view_warnings=true;install_view_locally_first=false;ergonomics=true;use_delta_views=true;resume_task_timeout=20000;use_flush_if_present=true;use_merger2=true;print_physical_addrs=true;join_timeout=2000;view_ack_collection_timeout=2000;stats=true;num_prev_views=10;merge_timeout=5000;max_bundling_time=50;name=GMS;num_prev_mbrs=50;id=14;log_collect_msgs=false;membership_change_policy=org.jgroups.protocols.pbcast.GMS$DefaultMembershipPolicy@2ec4eccd):UFC(min_threshold=0.4;stats=true;ergonomics=true;max_credits=2000000;name=UFC;min_credits=800000;id=45;max_block_time=5000;ignore_synchronous_response=false):MFC(min_threshold=0.4;stats=true;ergonomics=true;max_credits=2000000;name=MFC;min_credits=800000;id=44;max_block_time=5000;ignore_synchronous_response=false):FRAG2(frag_size=60000;stats=true;ergonomics=true;name=FRAG2;id=5):RSVP(ack_on_delivery=true;stats=true;ergonomics=true;name=RSVP;resend_interval=2000;id=55;throw_exception_on_timeout=true;timeout=10000)}
INFO &nbsp;[Start Level: Equinox Container: e9a28482-d79f-413e-904c-6xxxxxxxxxxx][JGroupsReceiver:91] Accepted view [DEV1-34758|0] (1) [DEV1-34758]
INFO &nbsp;[Start Level: Equinox Container: e9a28482-d79f-413e-904c-6xxxxxxxxxxx][JGroupsClusterChannel:105] Create a new JGroups channel {channelName: liferay-channel-transport-0, localAddress: DEV1-34758, properties: UDP(discard_incompatible_packets=true;internal_thread_pool_min_threads=2;internal_thread_pool_keep_alive_time=30000;time_service_interval=500;thread_pool_max_threads=10;internal_thread_pool_queue_enabled=true;mcast_group_addr=239.255.0.2;ergonomics=true;enable_unicast_bundling=true;port_range=50;loopback_copy=false;thread_naming_pattern=cl;suppress_time_out_of_buffer_space=60000;internal_thread_pool_rejection_policy=discard;internal_thread_pool_enabled=true;stats=true;oob_thread_pool_enabled=true;oob_thread_pool_rejection_policy=discard;suppress_time_different_version_warnings=60000;mcast_send_buf_size=100000;id=21;thread_pool_rejection_policy=Discard;logical_addr_cache_max_size=2000;suppress_time_different_cluster_warnings=60000;loopback=true;timer_rejection_policy=abort;oob_thread_pool_min_threads=2;max_bundle_timeout=20;enable_diagnostics=true;mcast_recv_buf_size=500000;disable_loopback=false;internal_thread_pool_max_threads=4;external_port=0;oob_thread_pool_max_threads=10;log_discard_msgs=true;name=UDP;oob_thread_pool_keep_alive_time=30000;bind_addr=10.88.2.20;wheel_size=200;bundler_capacity=20000;log_discard_msgs_version=true;enable_batching=true;tick_time=50;timer_max_threads=4;ucast_send_buf_size=100000;thread_pool_queue_enabled=true;enable_bundling=true;ucast_recv_buf_size=64000;oob_thread_pool_queue_enabled=false;thread_pool_keep_alive_time=30000;bind_port=0;thread_pool_min_threads=2;ignore_dont_bundle=true;ip_ttl=8;bind_interface_str=;diagnostics_ttl=8;tos=8;loopback_separate_thread=true;logical_addr_cache_expiration=120000;oob_thread_pool_queue_max_size=500;diagnostics_addr=224.0.75.75;receive_on_all_interfaces=false;mcast_port=23302;internal_thread_pool_queue_max_size=500;timer_queue_max_size=500;thread_pool_queue_max_size=10000;max_bundle
_size=64000;physical_addr_max_fetch_attempts=1;ip_mcast=true;timer_min_threads=2;thread_pool_enabled=true;bundler_type=transfer-queue;timer_keep_alive_time=5000;logical_addr_cache_reaper_interval=60000;timer_type=new3;diagnostics_port=7500;who_has_cache_timeout=2000):PING(async_discovery_use_separate_thread_per_request=false;ergonomics=true;stagger_timeout=0;force_sending_discovery_rsps=true;async_discovery=false;timeout=3000;always_send_physical_addr_with_discovery_request=true;max_members_in_discovery_request=500;send_cache_on_join=false;num_initial_srv_members=0;break_on_coord_rsp=true;stats=true;use_disk_cache=false;num_initial_members=10;name=PING;discovery_rsp_expiry_time=60000;id=6;return_entire_cache=false):MERGE3(check_interval=48000;stats=true;min_interval=10000;ergonomics=true;name=MERGE3;id=54;max_participants_in_merge=100;max_interval=30000):FD_SOCK(get_cache_timeout=1000;sock_conn_timeout=1000;client_bind_port=0;ergonomics=true;start_port=0;port_range=50;suspect_msg_interval=5000;num_tries=3;bind_interface_str=;stats=true;external_port=0;name=FD_SOCK;bind_addr=127.0.0.1;keep_alive=true;id=3):FD_ALL(use_time_service=true;stats=true;timeout_check_interval=2000;ergonomics=true;name=FD_ALL;interval=8000;id=29;timeout=40000;msg_counts_as_heartbeat=false):VERIFY_SUSPECT(num_msgs=1;use_mcast_rsps=false;bind_interface_str=;stats=true;ergonomics=true;name=VERIFY_SUSPECT;bind_addr=127.0.0.1;id=13;timeout=1500;use_icmp=false):NAKACK2(resend_last_seqno_max_times=3;use_mcast_xmit=false;ergonomics=true;xmit_table_msgs_per_row=2000;xmit_table_max_compaction_time=30000;become_server_queue_size=50;xmit_interval=500;print_stability_history_on_failed_xmit=false;resend_last_seqno=true;max_xmit_req_size=511600;discard_delivered_msgs=true;suppress_time_non_member_warnings=60000;max_msg_batch_size=500;xmit_table_num_rows=100;stats=true;xmit_from_random_member=false;log_discard_msgs=true;log_not_found_msgs=true;xmit_table_resize_factor=1.2;name=NAKACK2;id=57;max_rebroadcast_t
imeout=2000;use_mcast_xmit_req=false):UNICAST3(ergonomics=true;xmit_table_msgs_per_row=2000;xmit_table_max_compaction_time=60000;ack_threshold=5;sync_min_interval=2000;max_retransmit_time=60000;xmit_interval=500;max_xmit_req_size=511600;conn_close_timeout=10000;max_msg_batch_size=500;conn_expiry_timeout=0;ack_batches_immediately=true;xmit_table_num_rows=100;stats=true;xmit_table_resize_factor=1.2;log_not_found_msgs=true;name=UNICAST3;id=64):STABLE(cap=0.1;stability_delay=0;stats=true;ergonomics=true;name=STABLE;desired_avg_gossip=50000;max_bytes=4000000;id=16;send_stable_msgs_to_coord_only=true):GMS(max_join_attempts=10;print_local_addr=true;handle_concurrent_startup=true;view_bundling=true;leave_timeout=1000;log_view_warnings=true;install_view_locally_first=false;ergonomics=true;use_delta_views=true;resume_task_timeout=20000;use_flush_if_present=true;use_merger2=true;print_physical_addrs=true;join_timeout=2000;view_ack_collection_timeout=2000;stats=true;num_prev_views=10;merge_timeout=5000;max_bundling_time=50;name=GMS;num_prev_mbrs=50;id=14;log_collect_msgs=false;membership_change_policy=org.jgroups.protocols.pbcast.GMS$DefaultMembershipPolicy@1197b2de):UFC(min_threshold=0.4;stats=true;ergonomics=true;max_credits=2000000;name=UFC;min_credits=800000;id=45;max_block_time=5000;ignore_synchronous_response=false):MFC(min_threshold=0.4;stats=true;ergonomics=true;max_credits=2000000;name=MFC;min_credits=800000;id=44;max_block_time=5000;ignore_synchronous_response=false):FRAG2(frag_size=60000;stats=true;ergonomics=true;name=FRAG2;id=5):RSVP(ack_on_delivery=true;stats=true;ergonomics=true;name=RSVP;resend_interval=2000;id=55;throw_exception_on_timeout=true;timeout=10000)}
INFO &nbsp;[main][ModuleFrameworkImpl:1725] Started dynamic bundles
INFO &nbsp;[main][ModuleFrameworkImpl:413] Navigate to Control Panel &gt; Configuration &gt; Gogo Shell and enter "lb" to see all bundles
WARN &nbsp;[Elasticsearch initialization thread][EmbeddedElasticsearchConnection:288] Liferay is configured to use embedded Elasticsearch as its search engine. Do NOT use embedded Elasticsearch in production. Embedded Elasticsearch is useful for development and demonstration purposes. Refer to the documentation for details on the limitations of embedded Elasticsearch. Remote Elasticsearch connections can be configured in the Control Panel.
ERROR [Framework Event Dispatcher: Equinox Container: e9a28482-d79f-413e-904c-6xxxxxxxxxxx][com_liferay_portal_search:97] FrameworkEvent ERROR
java.lang.IllegalStateException: Unable to initialize Elasticsearch cluster: {........
</code></pre>First question before I go on to the next: am I supposed to configure something elsewhere, like in nginx or in Tomcat within Liferay, to allow the cluster to work?<br>I even tried purging the Elasticsearch directory, but the error persists. Am I supposed to start server A and server B together to make this work?<br><br>Thanks in advance everyone.</body></html>Achmed Tyrannus Albab2019-12-17T03:28:21ZRE: Liferay cluster configurationOlaf Kockhttps://liferay.dev/en/c/message_boards/find_message?p_l_id=119785333&messageId=1134094672019-04-25T16:26:53Z<blockquote>Admin CAUCE<br /><br />I was looking at documentation on the Liferay version update process. I understand that when a version higher than the one I have (7.1.2) is released and is stable, it is advisable to update. My question is: what process must be followed to perform a correct version update in Liferay?<br /></blockquote>As I'm typically on DXP, rather than CE, I can't tell you from experience if you need to <a href="https://dev.liferay.com/en/discover/deployment/-/knowledge_base/7-1/running-the-upgrade-process">run the upgrade tool</a> for minor upgrades as well, or just for major ones. Sorry, you'd need to find that out for yourself, or through somebody else chiming in to this thread.<br /><br />For major upgrades, you definitely need the tool.Olaf Kock2019-04-25T16:26:53ZRE: Liferay cluster configurationAdmin CAUCEhttps://liferay.dev/en/c/message_boards/find_message?p_l_id=119785333&messageId=1134082592019-04-25T14:49:43ZHi again Olaf,<br /><br />Thank you very much for your tips. 
I have implemented some improvements, such as starting the service with a "non-root" user and creating a systemd unit to start and stop it more comfortably.<br /><br />Practically everything is now in operation, with the developments correctly deployed, and I am running load tests with JMeter to see how it behaves under a large number of threads, similar to the load it will have in the real production environment with the users that access it.<br /><br />I was looking at documentation on the Liferay version update process. I understand that when a version higher than the one I have (7.1.2) is released and is stable, it is advisable to update. My question is: what process must be followed to perform a correct version update in Liferay?<br /><br />Thank you very much again for all the information provided.<br /><br />regardsAdmin CAUCE2019-04-25T14:49:43ZRE: Liferay cluster configurationOlaf Kockhttps://liferay.dev/en/c/message_boards/find_message?p_l_id=119785333&messageId=1133771762019-04-24T13:49:24Z<blockquote>Admin CAUCE<br /><br />1 - Currently, when I launch the Liferay service, I start it as the "root" user, since the Liferay path is owned by user and group "root". Could it be changed so that LR starts the service as, for example, a "tomcat" user created manually for this service? If so, how should I do it?<br /></blockquote>No root. Period. 
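A minimal sketch of that setup: a dedicated user owning the installation, plus a systemd unit. All paths, the "liferay" user name, and the unit options here are assumptions to adapt to your own layout:

```ini
# /etc/systemd/system/liferay.service -- sketch only.
# Assumes the bundle was handed over to a dedicated user first, e.g.:
#   useradd -r -s /sbin/nologin liferay
#   chown -R liferay:liferay /opt/liferay-ce-portal-7.1.2-ga3

[Unit]
Description=Liferay Portal CE (Tomcat)
After=network.target

[Service]
Type=forking
User=liferay
Group=liferay
# Have catalina.sh write a PID file so systemd can track the JVM
Environment=CATALINA_PID=/opt/liferay-ce-portal-7.1.2-ga3/tomcat-9.0.10/temp/catalina.pid
PIDFile=/opt/liferay-ce-portal-7.1.2-ga3/tomcat-9.0.10/temp/catalina.pid
ExecStart=/opt/liferay-ce-portal-7.1.2-ga3/tomcat-9.0.10/bin/startup.sh
ExecStop=/opt/liferay-ce-portal-7.1.2-ga3/tomcat-9.0.10/bin/shutdown.sh

[Install]
WantedBy=multi-user.target
```

With something like this in place, systemctl start liferay / systemctl stop liferay replaces both the manual catalina.sh invocation and the kill -9.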
Check <a href="https://community.liferay.com/blogs/-/blogs/securing-liferay-chapter-1-introduction-basics-and-operating-system-level">this blog article series</a> (there are linked articles on the next chapters).<br /><br /><blockquote>2 - Could you set up a service to stop and/or start the LR service using systemctl? To stop the service I currently do kill -9 &lt;process id&gt;, and to start it, sh [path_lr]/tomcat/bin/catalina.sh start; a service would be more comfortable for the administrator.<br /></blockquote>Don't do <span style="font-family: Courier New, Courier, monospace">kill -9</span> unless you've absolutely exhausted all of your other options. Check the blog series linked above or just go with systemctl.<br /><br /><blockquote>3 - I'm thinking about setting up a process to deploy the WAR files to all the nodes of the cluster simultaneously: for example, create a folder where files are deposited, and launch a script that copies those files into the deploy path of each of the nodes, so that both of them have the same version of the file deployed; and since Liferay has hot deploy, this is transparent and easier for the developer. Is it a good idea?<br /></blockquote>Whether this is a good idea depends on your other deployment strategies. My main concern for deploying new software is always: can I rebuild the same server if the current one fails hard? If you're just operating a single server without taking care of the occasional server outage and recovery, you're fine either way.<br />Otherwise, you should always automate the deployment. How exactly you do that largely depends on your other infrastructure and the tools that you're using for deployment.<br /><br />Notice that maintenance of a running server sometimes also means the undeployment of existing components, not only the deployment of additional ones.<br /><br />I know of people who'll never hot-deploy. Others do this a limited number of times before restarting the server. 
And yet others will just hot-deploy as they like. Whatever you do: automate the (un)deployment as well as you can. Optimize for recovery, not for the individual update of the system.Olaf Kock2019-04-24T13:49:24ZRE: Liferay cluster configurationAdmin CAUCEhttps://liferay.dev/en/c/message_boards/find_message?p_l_id=119785333&messageId=1133740322019-04-24T11:26:23ZFinally I found the error.<br /><br />I was missing the DNS configuration in the resolv.conf file of the machine. Problem solved.<br /><br />Now I have 3 questions:<br /><br />1 - Currently, when I launch the Liferay service, I start it as the "root" user, since the Liferay path is owned by user and group "root". Could it be changed so that LR starts the service as, for example, a "tomcat" user created manually for this service? If so, how should I do it?<br /><br />2 - Could you set up a service to stop and/or start the LR service using systemctl? To stop the service I currently do kill -9 &lt;process id&gt;, and to start it, sh [path_lr]/tomcat/bin/catalina.sh start; a service would be more comfortable for the administrator.<br /><br />3 - I'm thinking about setting up a process to deploy the WAR files to all the nodes of the cluster simultaneously: for example, create a folder where files are deposited, and launch a script that copies those files into the deploy path of each of the nodes, so that both of them have the same version of the file deployed; and since Liferay has hot deploy, this is transparent and easier for the developer. 
Is it a good idea?<br /><br />regardsAdmin CAUCE2019-04-24T11:26:23ZRE: Liferay cluster configurationOlaf Kockhttps://liferay.dev/en/c/message_boards/find_message?p_l_id=119785333&messageId=1133711072019-04-24T09:11:16Z<html><head></head><body><blockquote>Admin CAUCE<br>Now I have this other problem in one of the nodes when I try to start:<br><pre><code>2019-04-24 09:10:51.025 ERROR [Start Level: Equinox Container: 7100e0e6-fb71-4e0f-9a56-d3d491cc3a6e][Cache:224] Unable to set localhost. This prevents creation of a GUID. Cause was: JRDLRC10: JRDLRC10: Nombre o servicio desconocido
java.net.UnknownHostException: JRDLRC10: JRDLRC10: Nombre o servicio desconocido
&nbsp; &nbsp; &nbsp; &nbsp; at java.net.InetAddress.getLocalHost(InetAddress.java:1505)
&nbsp; &nbsp; &nbsp; &nbsp; at net.sf.ehcache.Cache.&lt;clinit&gt;(Cache.java:222)
</code></pre></blockquote>Did you <a href="https://lmgtfy.com/?q=Unable+to+set+localhost.+This+prevents+creation+of+a+GUID">google the error message</a> (no pun intended ;) )? It is actually pretty good, and the <a href="https://serverfault.com/questions/779804/fix-error-unable-to-set-localhost-this-prevents-creation-of-a-guid">first hit</a> on Server Fault even has to do with Ehcache.<br><br>Maybe a hostname typo somewhere?</body></html>Olaf Kock2019-04-24T09:11:16ZRE: Liferay cluster configurationAdmin CAUCEhttps://liferay.dev/en/c/message_boards/find_message?p_l_id=119785333&messageId=1133703082019-04-24T08:12:20Z<html><head></head><body>Thanks for the solution.<br><br>Now I have this other problem in one of the nodes when I try to start:<br><pre><code>2019-04-24 09:10:51.025 ERROR [Start Level: Equinox Container: 7100e0e6-fb71-4e0f-9a56-d3d491cc3a6e][Cache:224] Unable to set localhost. This prevents creation of a GUID. Cause was: JRDLRC10: JRDLRC10: Nombre o servicio desconocido
java.net.UnknownHostException: JRDLRC10: JRDLRC10: Nombre o servicio desconocido
&nbsp; &nbsp; &nbsp; &nbsp; at java.net.InetAddress.getLocalHost(InetAddress.java:1505)
&nbsp; &nbsp; &nbsp; &nbsp; at net.sf.ehcache.Cache.&lt;clinit&gt;(Cache.java:222)
&nbsp; &nbsp; &nbsp; &nbsp; at net.sf.ehcache.config.ConfigurationHelper.createCache(ConfigurationHelper.java:305)
&nbsp; &nbsp; &nbsp; &nbsp; at net.sf.ehcache.config.ConfigurationHelper.createDefaultCache(ConfigurationHelper.java:223)
&nbsp; &nbsp; &nbsp; &nbsp; at net.sf.ehcache.CacheManager.configure(CacheManager.java:759)
&nbsp; &nbsp; &nbsp; &nbsp; at net.sf.ehcache.CacheManager.doInit(CacheManager.java:464)
&nbsp; &nbsp; &nbsp; &nbsp; at net.sf.ehcache.CacheManager.init(CacheManager.java:388)
&nbsp; &nbsp; &nbsp; &nbsp; at net.sf.ehcache.CacheManager.&lt;init&gt;(CacheManager.java:264)
...
Caused by: java.net.UnknownHostException: JRDLRC10: Nombre o servicio desconocido
&nbsp; &nbsp; &nbsp; &nbsp; at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
&nbsp; &nbsp; &nbsp; &nbsp; at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
&nbsp; &nbsp; &nbsp; &nbsp; at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
&nbsp; &nbsp; &nbsp; &nbsp; at java.net.InetAddress.getLocalHost(InetAddress.java:1500)
</code></pre></body></html>Admin CAUCE2019-04-24T08:12:20ZRE: Liferay cluster configurationJorge Diazhttps://liferay.dev/en/c/message_boards/find_message?p_l_id=119785333&messageId=1133509782019-04-23T09:51:12ZHi Admin CAUCE,<br /><br /><blockquote>Admin CAUCE<br />And on the other hand, I already got the Liferay service up correctly, without the problems I had before. In the catalina.out file I do not see any ERROR messages. Now I just have to add the second node to the cluster. Is there a procedure to add a second LR node to the cluster? I mean, not the configuration in the portal.properties file, but the way to add it to the cluster.<br /></blockquote><br />All configuration done in System Settings is stored in the database (inside the configuration_ table), so no additional tasks are necessary when adding a new node to the cluster.<br /><br /><strong>Note:</strong> in case you have already set up more than one Liferay node and you make any change to the Elasticsearch configuration in Liferay, after making the change on the first node you will have to restart the rest of the nodes.Jorge Diaz2019-04-23T09:51:12ZRE: Liferay cluster configurationAdmin CAUCEhttps://liferay.dev/en/c/message_boards/find_message?p_l_id=119785333&messageId=1133502792019-04-23T09:18:59Z<html><head></head><body>Hallelujah!!!<br><br>Thanks for this last information. I have done these steps and now, I think, it connects correctly. I think the problem was that when I configured my ES server in the Control Panel of LR, I was missing the step of "Reindexing all indexes" and then restarting the LR service.<br><br>Now I am looking at the logs on the ES server when a full reindex is done:<br><br><pre><code>[2019-04-23T10:11:00,793] [INFO] [o.e.c.m.MetaDataDeleteIndexService] [JRDELS11] [liferay-20099/xi4TUHX7QQuzoJ1fm1toTA] deleting index
[2019-04-23T10:11:00,862] [INFO] [o.e.c.m.MetaDataCreateIndexService] [JRDELS11] [liferay-20099] creating index, cause [api], templates [], shards [1]/[0], mappings [LiferayDocumentType]
[2019-04-23T10:11:01,054] [INFO] [o.e.c.m.MetaDataMappingService] [JRDELS11] [liferay-20099/hSZFoRZSSPuHieksfEdZ1g] update_mapping [LiferayDocumentType]
[2019-04-23T10:11:01,305] [INFO] [o.e.c.m.MetaDataMappingService] [JRDELS11] [liferay-20099/hSZFoRZSSPuHieksfEdZ1g] update_mapping [LiferayDocumentType]
[2019-04-23T10:11:08,528] [INFO] [o.e.c.m.MetaDataMappingService] [JRDELS11] [liferay-20099/hSZFoRZSSPuHieksfEdZ1g] update_mapping [LiferayDocumentType]
[2019-04-23T10:11:08,618] [INFO] [o.e.c.m.MetaDataMappingService] [JRDELS11] [liferay-20099/hSZFoRZSSPuHieksfEdZ1g] update_mapping [LiferayDocumentType]
[2019-04-23T10:11:08,688] [INFO] [o.e.c.m.MetaDataMappingService] [JRDELS11] [liferay-20099/hSZFoRZSSPuHieksfEdZ1g] update_mapping [LiferayDocumentType]</code></pre><br>And on the other hand, I already got the Liferay service up correctly, without the problems I had before. In the catalina.out file I do not see any ERROR messages. Now I just have to add the second node to the cluster. Is there a procedure to add a second LR node to the cluster? I mean, not the configuration in the portal.properties file, but the way to add it to the cluster.<br><br>Best regards and many thanks for the information.</body></html>Admin CAUCE2019-04-23T09:18:59ZRE: Liferay cluster configurationJorge Diazhttps://liferay.dev/en/c/message_boards/find_message?p_l_id=119785333&messageId=1133488342019-04-23T08:35:04ZHi Admin CAUCE,<br /><br /><br />Your configuration on the Liferay side was not set up correctly, as Liferay is trying to connect to the 127.0.0.1 machine:<br /><blockquote><table><tr><td>2019-04-22 15:15:20.677 ERROR [Framework Event Dispatcher: Equinox Container: 887023c8-5deb-4b2c-9b45-ad16f32b264a][com_liferay_portal_search:97] FrameworkEvent ERROR<br /></td></tr></table><table><tr><td>NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{GOxE7E5DR4CeDBst6Re0yA}{localhost}{127.0.0.1:9300}]]</td></tr></table> at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:349)</blockquote>Try the following:<ol style="list-style: decimal outside;" start="1"><li>Start Liferay</li><li>After starting Liferay, go to <strong>Control Panel => Configuration => System Settings => Platform => Search => Elasticsearch</strong> and set up here:<ul style="list-style: disc outside;"><li> Operation Mode => REMOTE</li><li> Transport Addresses => your Elasticsearch hostname and port</li></ul></li><li>Go to <strong>Control Panel => Configuration => 
Search</strong></li><li>Execute a full reindex (click on <em>Reindex all search indexes</em>)</li><li>Check the log file</li><li>If everything goes fine, try restarting</li></ol><br />After doing all that configuration, if you want a *.config file, you can go to <strong>Control Panel => Configuration => System Settings => Platform => Search => Elasticsearch</strong>, click on the "kebab menu" (three-dots menu), and select "Export".<br /><br />This will export the current configuration to a file that can be deployed to osgi/configs.Jorge Diaz2019-04-23T08:35:04ZRE: Liferay cluster configurationAdmin CAUCEhttps://liferay.dev/en/c/message_boards/find_message?p_l_id=119785333&messageId=1133481612019-04-23T08:12:31Z2019-04-23T08:12:31Z<html><head></head><body>Good afternoon again,<br><br>I have correctly configured the name of the file [...] osgi/configs/com.liferay.portal.search.elasticsearch6.configuration.ElasticsearchConfiguration.config, adding the configuration:<br><pre><code>operationMode = "REMOTE"
transportAddresses = "10.200.23.9:9300" [IP of the remote ES server - it is not on the same machine as LR]</code></pre><br>With these changes made, when starting the LR service it throws some errors that ultimately prevent the service from coming up on port 8080, so logically I cannot access the LR administration console. Below are some of the errors from starting the service, taken from the file catalina.out:<br><br><pre><code>Loading file:/opt/liferay-ce-portal-7.1.2-ga3-test/portal-setup-wizard.properties
2019-04-22 15:14:52.576 INFO [main][PortalContextLoaderListener:139] JVM arguments: -Djava.util.logging.config.file=/opt/liferay-ce-portal-7.1.2-ga3-test/tomcat-9.0.10/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -Dorg.apache.catalina.security.SecurityListener.UMASK=0027 -Dfile.encoding=UTF8 -Djava.net.preferIPv4Stack=true -Dorg.apache.catalina.loader.WebappClassLoader.ENABLE_CLEAR_REFERENCES=false -Duser.timezone=GMT -Xms2560m -Xmx2560m -XX:MaxNewSize=1536m -XX:MaxMetaspaceSize=384m -XX:MetaspaceSize=384m -XX:NewSize=1536m -XX:SurvivorRatio=7 -Dignore.endorsed.dirs= -Dcatalina.base=/opt/liferay-ce-portal-7.1.2-ga3-test/tomcat-9.0.10 -Dcatalina.home=/opt/liferay-ce-portal-7.1.2-ga3-test/tomcat-9.0.10 -Djava.io.tmpdir=/opt/liferay-ce-portal-7.1.2-ga3-test/tomcat-9.0.10/temp
2019-04-22 15:14:55.426 INFO [main][DialectDetector:158] Using dialect org.hibernate.dialect.PostgreSQLDialect for PostgreSQL 10.5
2019-04-22 15:14:57.190 INFO [main][ModuleFrameworkImpl:1326] Starting initial bundles
2019-04-22 15:14:59.522 INFO [main][ModuleFrameworkImpl:1601] Started initial bundles
2019-04-22 15:14:59.523 INFO [main][ModuleFrameworkImpl:1636] Starting dynamic bundles
2019-04-22 15:15:14.686 INFO [main][ModuleFrameworkImpl:1725] Started dynamic bundles
2019-04-22 15:15:14.687 INFO [main][ModuleFrameworkImpl:413] Navigate to Control Panel &gt; Configuration &gt; Gogo Shell and enter "lb" to see all bundles
2019-04-22 15:15:20.677 ERROR [Framework Event Dispatcher: Equinox Container: 887023c8-5deb-4b2c-9b45-ad16f32b264a][com_liferay_portal_search:97] FrameworkEvent ERROR
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{GOxE7E5DR4CeDBst6Re0yA}{localhost}{127.0.0.1:9300}]]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:349)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:247)
at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:60)
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:382)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:395)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:384)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:46)
at org.elasticsearch.action.ActionRequestBuilder.get(ActionRequestBuilder.java:53)
...
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:350)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:492)
[Liferay ASCII-art startup banner]
Starting Liferay Community Edition Portal 7.1.2 CE GA3 (Judson / Build 7102 / January 7, 2019)
2019-04-22 15:15:22.244 INFO [main][StartupHelper:72] There are no patches installed
2019-04-22 15:15:23.692 INFO [main][AutoDeployDir:193] Auto deploy scanner started for /opt/liferay-ce-portal-7.1.2-ga3-test/deploy
2019-04-22 15:15:24.113 ERROR [main][PortalInstances:261] NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{GOxE7E5DR4CeDBst6Re0yA}{localhost}{127.0.0.1:9300}]]
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{GOxE7E5DR4CeDBst6Re0yA}{localhost}{127.0.0.1:9300}]]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:349)
....
2019-04-22 15:15:30.991 WARN [liferay/search_writer/SYSTEM_ENGINE-2][ProxyMessageListener:88] NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{GOxE7E5DR4CeDBst6Re0yA}{localhost}{127.0.0.1:9300}]]
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{GOxE7E5DR4CeDBst6Re0yA}{localhost}{127.0.0.1:9300}]]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:349)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:247)
...
2019-04-22 15:15:47.385 INFO [main][ThemeHotDeployListener:108] 1 theme for classic-theme is available for use
2019-04-22 15:15:47.490 ERROR [main][PortletLocalServiceImpl:347] Unable to register remote portlet for company 20099 because it does not exist
2019-04-22 15:15:47.520 ERROR [main][PortletLocalServiceImpl:347] Unable to register remote portlet for company 20099 because it does not exist
2019-04-22 15:15:47.533 ERROR [main][PortletLocalServiceImpl:347] Unable to register remote portlet for company 20099 because it does not exist
2019-04-22 15:15:47.542 ERROR [main][PortletLocalServiceImpl:347] Unable to register remote portlet for company 20099 because it does not exist
2019-04-22 15:15:47.550 ERROR [main][PortletLocalServiceImpl:347] Unable to register remote portlet for company 20099 because it does not exist
2019-04-22 15:15:47.558 ERROR [main][PortletLocalServiceImpl:347] Unable to register remote portlet for company 20099 because it does not exist
2019-04-22 15:15:47.607 ERROR [main][PortletLocalServiceImpl:347] Unable to register remote portlet for company 20099 because it does not exist
...
22-Apr-2019 15:15:51.779 SEVERE [http-nio-8080-exec-1] org.apache.catalina.core.ApplicationDispatcher.invoke Servlet.service() for servlet [jsp] threw an exception
java.lang.NullPointerException
</code></pre><br><br>And from there, it no longer lets me start the service. And all I changed was the LR configuration to connect to the ES. I do not know what might be wrong for it to end up this way.<br><br>I hope to find light at the end of the tunnel; surely it is something silly, but I do not know where to look!<br><br>Best regards and many thanks in advance.</body></html>Admin CAUCE2019-04-23T08:12:31ZRE: Liferay cluster configurationAndrew Jardinehttps://liferay.dev/en/c/message_boards/find_message?p_l_id=119785333&messageId=1132607212019-04-16T16:02:01Z2019-04-16T16:02:01ZIn the control panel, double check the port settings. Also, don't forget that after you make all of these changes you also need to REINDEX by going to Control Panel > Configuration > Search and then hitting the Reindex (ALL) option. <br /><br />That last step is pretty important. If you don't do that then you have a connection to an empty index <img alt="emoticon" src="@theme_images_path@/emoticons/happy.gif" >. An easy test once you have done all of that is to go to Control Panel > Users and Organizations. The list of users should be displayed there, and if you can't find any users, then the connection is either not in place correctly or the reindexing has failed.Andrew Jardine2019-04-16T16:02:01ZRE: Liferay cluster configurationAdmin CAUCEhttps://liferay.dev/en/c/message_boards/find_message?p_l_id=119785333&messageId=1132591312019-04-16T15:21:20Z2019-04-16T15:21:20Z<html><head></head><body>Hello again Andrew,<br><br>I have already tried that option, but it still does not pick up the configuration to connect to my ElasticSearch server (ES).<br><br>I understand that the server side of ES is well configured. I installed the same version of ES as the one that comes embedded with Liferay (LR) 7.1.2 GA3, which is version 6.5.0.<br><br>In the configuration file "elasticsearch.yml" I have set only the following values:<br><br><pre><code>cluster.name: LiferayElasticsearchCluster
node.name: ${HOSTNAME}
network.host: 10.200.23.9   # this IP is the IP of the ES server
http.port: 9200</code></pre><br>I understand that the ElasticSearch server side is correct. I check the service status:<br><pre><code>[root@JRDELS11 elasticsearch]$ systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2019-04-16 15:34:14 WEST; 41min ago
Docs: http://www.elastic.co
Main PID: 8582 (java)
Tasks: 36 (limit: 4915)
CGroup: /system.slice/elasticsearch.service
├─8582 /usr/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.
└─8632 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
Apr 16 15:34:14 JRDELS11 systemd[1]: Started Elasticsearch.</code></pre><br>I check the open ports.<br><br><pre><code>[root@JRDELS11 elasticsearch]$ netstat -ant
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:5666 0.0.0.0:* LISTEN
tcp 0 0 10.200.23.9:22 10.60.152.33:5922 ESTABLISHED
tcp6 0 0 :::111 :::* LISTEN
tcp6 0 0 10.200.23.9:9200 :::* LISTEN
tcp6 0 0 10.200.23.9:9300 :::* LISTEN
tcp6 0 0 :::22 :::* LISTEN</code></pre><br>I understand that the ES server part is correctly configured and working.<br><br>On the side of the nodes of the LR cluster, according to the documentation, you can configure the connection to a remote ES server either via the osgi configuration files or from the control panel, in the Elasticsearch 6 configuration.<br><br>According to the documentation, if I do it from the LR control panel, I would only have to change the option from "Embedded" to "Remote" and make sure that the value of the Elasticsearch cluster name is equal to the one defined in the ES server. Is this configuration correct, or would I need something else to connect LR to my remote ES?<br><br>Thanks in advance.<br><br>Regards!!</body></html>Admin CAUCE2019-04-16T15:21:20ZRE: Liferay cluster configurationAndrew Jardinehttps://liferay.dev/en/c/message_boards/find_message?p_l_id=119785333&messageId=1132562062019-04-16T14:13:50Z2019-04-16T14:13:50ZIs it working? The one thing I am not sure about is the name of your file. Looking at github, I can see that the id for this configuration is a little different from what you are using. On github you have <br /><br /><a href="https://github.com/liferay/liferay-portal/blob/master/modules/apps/portal-search-elasticsearch6/portal-search-elasticsearch6-api/src/main/java/com/liferay/portal/search/elasticsearch6/configuration/ElasticsearchConfiguration.java#L26">https://github.com/liferay/liferay-portal/blob/master/modules/apps/portal-search-elasticsearch6/portal-search-elasticsearch6-api/src/main/java/com/liferay/portal/search/elasticsearch6/configuration/ElasticsearchConfiguration.java#L26<br /><br /></a>.. 
your filename is com.liferay.portal.search.elasticsearch.configuration.ElasticsearchConfiguration.config -- notice the "6" in the elasticsearch6 package on github, which is missing from your filename.Andrew Jardine2019-04-16T14:13:50ZRE: Liferay cluster configurationAdmin CAUCEhttps://liferay.dev/en/c/message_boards/find_message?p_l_id=119785333&messageId=1132477102019-04-16T08:15:05Z2019-04-16T08:15:05Z<html><head></head><body>Hehehehe, you are very right; my fault, I hope it does not happen again.<br><br>Now I'm trying to configure ElasticSearch to put it on an external server.<br><br>I already have the ElasticSearch service installed on another machine, and on the server side of ElasticSearch I have changed the following configuration in the file "elasticsearch.yml". For now I will not set up an ElasticSearch cluster but simply a single server:<br><br><pre><code>cluster.name: LiferayElasticsearchCluster
network.host: 10.200.23.9   # IP of the ElasticSearch server</code></pre><br>For the rest, I have not touched anything else in this configuration. I start the ElasticSearch service.<br><br>On the side of the Liferay nodes I have done the following:<br><br>I have created a configuration file under .../osgi/configs/com.liferay.portal.search.elasticsearch.configuration.ElasticsearchConfiguration.config which contains the following:<br><br><pre><code>operationMode = "REMOTE"
transportAddresses = "10.200.23.9:9300"</code></pre><br>This is the only configuration on the LR nodes. Now my question is, do we have to make any more changes within the LR configuration? Every time I restart the Liferay service, this file gets changed to the following:<br><br><pre><code>#\ Highly\ recommended\ for\ all\ non-production\ usage\ (e.g.,\ practice,\ tests,\ diagnostics):\n#logExceptionsOnly = "false"
#\ If\ running\ Elasticsearch\ from\ a\ different\ computer:\ntransportAddresses = "10.200.23.9:9300"
operationMode = "REMOTE"</code></pre><br>Thank you again!</body></html>Admin CAUCE2019-04-16T08:15:05ZRE: Liferay cluster configurationAndrew Jardinehttps://liferay.dev/en/c/message_boards/find_message?p_l_id=119785333&messageId=1132360882019-04-15T13:36:39Z2019-04-15T13:36:39ZHah! Glad you got it sorted out. For future reference, it is always best to start a thread with "I am using Liferay X.X GAX" -- it will almost always get you to an answer quicker <img alt="emoticon" src="@theme_images_path@/emoticons/happy.gif" >Andrew Jardine2019-04-15T13:36:39ZRE: Liferay cluster configurationAdmin CAUCEhttps://liferay.dev/en/c/message_boards/find_message?p_l_id=119785333&messageId=1132301252019-04-15T10:17:05Z2019-04-15T10:17:05ZHi Andrew,<br /><br />Finally I was able to launch the Liferay cluster, but I have configured it with LR version 7.1.2 GA3. The version I was on before was LR 7.1, which it turns out does not support clustering by default.<br /><br />Thank you very much for your answer.Admin CAUCE2019-04-15T10:17:05ZRE: Liferay cluster configurationOlaf Kockhttps://liferay.dev/en/c/message_boards/find_message?p_l_id=119785333&messageId=1131639482019-04-11T15:43:56Z2019-04-11T15:43:56ZTo answer directly to the question about <em>cache replication</em>: I wouldn't do that. Cache invalidation (as provided by Cluster Link out of the box) is enough: any time one node changes an object, the other node will expire that exact object from its cache (if it was cached before). In the (maybe rare) case that the object is required afterwards, it will be loaded from the database, and be guaranteed to be fresh until the next cache invalidation (or timeout, or cache size overflow).<br /><br />Unless you measure that this is a bottleneck: don't change it. <br /><br />Both of your machines might serve vastly different content - then it's of no use to cache objects on both nodes if they're only required on one of them. 
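[Editor's note] To make the Cluster Link cache-invalidation setup mentioned above concrete, here is a minimal sketch of the relevant portal-ext.properties fragment. This is an illustrative sketch only, assuming Liferay 7.x property names and a multicast-capable network between the nodes; the autodetect address shown is an example value, so verify both keys against your version's portal.properties reference before using them:

```properties
# portal-ext.properties on every node of the cluster.
#
# Enables Cluster Link, which (among other things) broadcasts cache
# invalidation events between nodes; no cache *replication* is configured.
cluster.link.enabled=true

# Optional: helps a node pick the right outgoing network interface by
# probing a well-known host (the hostname:port here is an example value).
cluster.link.autodetect.address=www.google.com:80
```

With only these properties set, each node keeps its own cache and merely expires entries that another node has changed, which is the behavior described in the reply above.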
<br /><br />Also, get clarity about your reason for clustering: is it to withstand high load, or to be highly available in case one machine goes down? How often does that happen, and what price are you willing to pay: must the user not notice at all that they're on a different server? <br /><br />Often, people want to enable session replication, which is the replication of the application server's session object across the application-server cluster. This is independent of Liferay, and typically also quite a memory and CPU hog. If you cluster for increased load, this is the last thing you want to do. If you cluster for high availability, it is a valuable (read: extremely expensive) hack that looks like you nailed it, but will bite you later.Olaf Kock2019-04-11T15:43:56ZRE: Liferay cluster configurationAndrew Jardinehttps://liferay.dev/en/c/message_boards/find_message?p_l_id=119785333&messageId=1131543982019-04-11T14:38:38Z2019-04-11T14:38:38Z<html><head></head><body>You probably don't need to mess with that part but if you are dying to get under the hood -- In your portal.properties there is a