Details
- Type: Bug
- Status: Closed
- Resolution: Fixed
- Affects Version/s: 6.2.3 CE GA4, 6.2.10 EE GA1, 6.2.X EE
- Fix Version/s: 6.2.x
- Committed
- 4
- Regression Bug
Description
STEPS TO REPRODUCE
1.- Set up a Liferay cluster with two nodes, for example on ports 8080 and 9080 for the HTTP connector.
The following properties can be used, for node 1:
cluster.link.enabled=true
lucene.replicate.write=true
portal.instance.protocol=http
portal.instance.http.port=8080
browser.launcher.url=
cluster.executor.debug.enabled=true
index.dump.process.documents.enabled=false
For node 2:
cluster.link.enabled=true
lucene.replicate.write=true
portal.instance.protocol=http
portal.instance.http.port=9080
browser.launcher.url=
cluster.executor.debug.enabled=true
index.dump.process.documents.enabled=false
2.- Include the following lines in WEB-INF/classes/log4j.properties:
log4j.logger.com.liferay.portal.kernel.messaging.proxy.ProxyMessageListener=DEBUG
log4j.logger.com.liferay.portal.search.lucene.LuceneHelperImpl=DEBUG
log4j.logger.com.liferay.portal.search.lucene.IndexAccessorImpl=DEBUG
3.- Start only node 2 and add an Asset Publisher instance to the front page.
4.- Create lots of documents and web content until you get an index of a few GB (I used a 3 GB index).
5.- Start node 1 and access the front page. If node 1 was started before, delete its lucene directories before starting it.
ACTUAL
The index dump takes several minutes and hangs the rest of the requests to the server (none of them gets a response, exhausting the HTTP thread pool).
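The lockout can be illustrated with a minimal, hypothetical sketch (plain Python, not Liferay code): while a long-running index load holds a shared lock, every worker in a small HTTP-style thread pool blocks on it, so no request gets a response until the load finishes.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical illustration of the observed behavior: the index dump
# holds a lock that every request handler needs.
index_lock = threading.Lock()

def handle_request(i):
    with index_lock:              # each handler needs the index
        return f"response {i}"

index_lock.acquire()              # index dump in progress: lock held
with ThreadPoolExecutor(max_workers=4) as pool:
    start = time.monotonic()
    futures = [pool.submit(handle_request, i) for i in range(4)]
    time.sleep(0.5)               # the (greatly shortened) index load
    index_lock.release()          # dump done, handlers can proceed
    results = [f.result() for f in futures]
    waited = time.monotonic() - start

print(results)
print(f"all requests waited ~{waited:.1f}s for the index load")
```

Every submitted request completes only after the simulated load releases the lock, which mirrors how the server answers nothing while the dump runs.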
The following traces appear, showing that the load takes more than 30 minutes:
14:28:07,727 INFO [http-bio-8080-exec-3][LuceneHelperImpl:963] Start loading lucene index files from cluster node mbp-de-sergio-4-42600
14:28:55,723 DEBUG [http-bio-8080-exec-3][IndexAccessorImpl:363] Lucene store type file
14:59:14,985 INFO [http-bio-8080-exec-3][LuceneHelperImpl:977] Lucene index files loaded successfully
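As a quick sanity check on the duration, the elapsed time between the first and last trace can be computed (a minimal Python sketch; the timestamps are copied from the log above and assumed to fall on the same day):

```python
from datetime import datetime

# Timestamps taken from the log traces above (log4j "HH:MM:SS,millis" format)
start = datetime.strptime("14:28:07,727", "%H:%M:%S,%f")
end = datetime.strptime("14:59:14,985", "%H:%M:%S,%f")

elapsed = (end - start).total_seconds()
print(f"Index load took {elapsed:.0f} seconds (~{elapsed / 60:.0f} minutes)")
# → Index load took 1867 seconds (~31 minutes)
```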
EXPECTED
The index dump should be fast enough to avoid blocking all other requests to the server.
IMPORTANT NOTE
A new property has been included to manage how the index is processed when it is obtained from other nodes:
index.dump.process.documents.enabled=true