- Type: Bug
- Status: Closed
- Resolution: Fixed
- Affects Version/s: 6.2.X EE, 7.0.X EE, Master
- Fix Version/s: 6.2.X EE, 7.0.0 DXP FP30, 7.0.4 CE GA5, 7.1.X, Master
- Component/s: Fault Tolerance > Clustering Framework
- Branch Version/s: 6.2.x
- Backported to Branch: Committed
- Story Points: 1.75
- Fix Priority: 3
- Git Pull Request:
Steps to Reproduce:
- Start a cluster with two nodes.
- Simulate a connection loss between the nodes, so that each one becomes master.
- Restore the connection, but make the master node take more than clusterable.advice.call.master.timeout seconds to answer (e.g. by adding a breakpoint in com.liferay.portal.scheduler.multiple.internal.ClusterSchedulerEngine.getScheduledJobs(StorageType)).
- Stop the master node.
- Execute the following script on the only remaining node:
import com.liferay.portal.kernel.scheduler.*;

def schedulerJobs = SchedulerEngineHelperUtil.getScheduledJobs(StorageType.MEMORY_CLUSTERED);
out.println(schedulerJobs.size());
Actual Results:
The list of memory-clustered scheduled jobs is empty.
Expected Results:
Even if an error occurs, the jobs should not be lost. They may end up not fully synchronized, but they should not disappear.
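The failure mode described above can be sketched as a plain Java illustration (not Liferay's actual implementation; class and method names here are hypothetical): if the local job map is cleared before the master's reply arrives, a timeout leaves it empty and the jobs are lost, whereas replacing the local state only after a successful reply keeps the (possibly stale) jobs.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeoutException;

public class JobSyncSketch {

    // Hypothetical stand-in for the RPC to the master node.
    interface MasterCall {
        Map<String, String> getScheduledJobs() throws TimeoutException;
    }

    static Map<String, String> jobs = new HashMap<>();

    // Lossy pattern: local state is dropped before the master answers,
    // so a timeout leaves the map empty and the jobs are gone.
    static void lossySync(MasterCall call) {
        jobs.clear();
        try {
            jobs.putAll(call.getScheduledJobs());
        } catch (TimeoutException e) {
            // nothing to restore: the local jobs were already cleared
        }
    }

    // Safer pattern: replace local state only after a successful reply,
    // so a timeout keeps the possibly stale jobs instead of losing them.
    static void safeSync(MasterCall call) {
        try {
            Map<String, String> fromMaster = call.getScheduledJobs();
            jobs.clear();
            jobs.putAll(fromMaster);
        } catch (TimeoutException e) {
            // keep the local jobs; they may be out of sync, but not lost
        }
    }

    public static void main(String[] args) {
        // Simulates the master answering later than the configured timeout.
        MasterCall timingOut = () -> {
            throw new TimeoutException("master reply exceeded timeout");
        };

        jobs.put("job1", "MEMORY_CLUSTERED");
        lossySync(timingOut);
        System.out.println("lossy: " + jobs.size()); // 0 -> jobs lost

        jobs.put("job1", "MEMORY_CLUSTERED");
        safeSync(timingOut);
        System.out.println("safe: " + jobs.size()); // 1 -> jobs kept
    }
}
```

The point of the comparison is only the ordering of the clear-and-replace step relative to the remote call; the real fix in the clustering framework may differ.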