PUBLIC - Liferay Portal Community Edition
LPS-39276

Lock service should always use the primary database node if read/write splitting is enabled

    Details

    • Type: Bug
    • Status: Closed
    • Resolution: Fixed
    • Affects Version/s: 6.1.20 EE GA2, 6.2.0 CE B2
    • Fix Version/s: 6.2.0 CE B3, 6.2.0 CE RC5
    • Labels:
    • Environment:
      Tomcat 7 + PostgreSQL 9.1. Portal 6.1.x EE GIT ID: 016e467e88a4b5923df85212967a3a3047f2b2f1.
      Tomcat 7 + PostgreSQL 9.1. Portal 6.2.x CE GIT ID: cf60e9ef9b44c04d52ec45714f463bf1f76dae7a.
    • Fix Priority:
      4

      Description

      Background

      Read/write database splitting is a means of utilizing a hot standby database to gain performance at the cost of some accuracy. Some services, e.g. LockLocalService and CounterLocalService, are so critical to the portal that they should always read from the primary DB instance.
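
      Conceptually, the splitting layer routes each service call to either the read or the write data source; the requested behavior is that certain services are pinned to the write node even for reads. A minimal illustrative sketch of such routing (hypothetical names; not Liferay's actual dynamic data source implementation):

```java
import java.util.Set;

// Illustrative routing sketch: picks a data source per service call.
// The service names are real Liferay services; everything else is hypothetical.
public class DataSourceRouter {

    // Services that must always hit the primary (write) node, even for reads.
    private static final Set<String> PINNED_TO_WRITE =
        Set.of("LockLocalService", "CounterLocalService");

    public static String route(String serviceName, boolean readOnlyCall) {
        if (PINNED_TO_WRITE.contains(serviceName)) {
            return "jdbc.write"; // never the hot standby
        }

        return readOnlyCall ? "jdbc.read" : "jdbc.write";
    }
}
```

      With routing like this, a lock lookup would always read from the primary, avoiding the stale record seen in the reproduction steps below.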

      Initial set-up

      1) Set up PostgreSQL with hot standby enabled.

      1/a) Debian/Ubuntu-based distros make it easy to manage multiple instances by offering the pg_*cluster commands. I have two instances: one acting as primary (main, the write node) and another as a standby (standby, the read node).

      $ pg_lsclusters 
      Version Cluster   Port Status Owner    Data directory                     Log file
      9.1     main      5432 online postgres /var/lib/postgresql/9.1/main       /var/log/postgresql/postgresql-9.1-main.log
      9.1     standby   6432 online,recovery postgres /var/lib/postgresql/9.1/standby    /var/log/postgresql/postgresql-9.1-standby.log
      

      1/b) Enable archive mode on instance main.

      /etc/postgresql/9.1/main/postgresql.conf
      wal_level = hot_standby
      archive_mode = on
      archive_command = 'test ! -f /var/lib/postgresql/9.1/_archive/%f && cp %p /var/lib/postgresql/9.1/_archive/%f'
      max_wal_senders = 3
      

      Please note that the directory /var/lib/postgresql/9.1/_archive doesn't exist by default; you'll have to create it.

      1/c) Configure instance standby for being a hot standby.

      /etc/postgresql/9.1/standby/postgresql.conf
      hot_standby = on
      
      /var/lib/postgresql/9.1/standby/recovery.conf
      standby_mode = 'on'
      restore_command = 'cp /var/lib/postgresql/9.1/_archive/%f %p'
      

      2) Configure Liferay to use R/W DB splitting plus clustering.

      portal-ext.properties
      #### Enable R/W database splitting
      spring.configs=\
        META-INF/base-spring.xml,\
        \
        META-INF/hibernate-spring.xml,\
        META-INF/infrastructure-spring.xml,\
        META-INF/management-spring.xml,\
        \
        META-INF/util-spring.xml,\
        \
        META-INF/jpa-spring.xml,\
        \
        META-INF/executor-spring.xml,\
        \
        META-INF/audit-spring.xml,\
        META-INF/cluster-spring.xml,\
        META-INF/editor-spring.xml,\
        META-INF/jcr-spring.xml,\
        META-INF/ldap-spring.xml,\
        META-INF/messaging-core-spring.xml,\
        META-INF/messaging-misc-spring.xml,\
        META-INF/mobile-device-spring.xml,\
        META-INF/notifications-spring.xml,\
        META-INF/poller-spring.xml,\
        META-INF/rules-spring.xml,\
        META-INF/scheduler-spring.xml,\
        META-INF/scripting-spring.xml,\
        META-INF/search-spring.xml,\
        META-INF/workflow-spring.xml,\
        \
        META-INF/counter-spring.xml,\
        META-INF/mail-spring.xml,\
        META-INF/portal-spring.xml,\
        META-INF/portlet-container-spring.xml,\
        META-INF/staging-spring.xml,\
        META-INF/virtual-layouts-spring.xml,\
        \
        # *** Enable dynamic data-sources for R/W splitting ***
        META-INF/dynamic-data-source-spring.xml,\
        #META-INF/shard-data-source-spring.xml,\
        #META-INF/memcached-spring.xml,\
        #META-INF/monitoring-spring.xml,\
        \
        META-INF/ext-spring.xml
      
      ### Use the address of instance "main"
      jdbc.default.driverClassName=org.postgresql.Driver
      jdbc.default.url=jdbc:postgresql://192.168.47.1:5432/lportal_6120
      jdbc.default.username=lportal_6120
      jdbc.default.password=password
      
      jdbc.write.driverClassName=org.postgresql.Driver
      jdbc.write.url=jdbc:postgresql://192.168.47.1:5432/lportal_6120
      jdbc.write.username=lportal_6120
      jdbc.write.password=password
      
      ### Use the address of instance "standby"
      jdbc.read.driverClassName=org.postgresql.Driver
      jdbc.read.url=jdbc:postgresql://192.168.47.1:6432/lportal_6120
      jdbc.read.username=lportal_6120
      jdbc.read.password=password
      
      ### Enable ClusterLink
      cluster.link.enabled=true
      

      Steps to reproduce

      1) Start up Liferay.
      2) Synchronize the standby manually.

      postgres=# select pg_switch_xlog();
       pg_switch_xlog 
      ----------------
       0/1F000000
      

      3) Restart Liferay.

      Checkpoint: Log into both databases and check the Lock_ table. Each table should contain one record, but the records should differ.

      4) Execute the following Beanshell script (under Control Panel/Server admin/Scripting).

      import com.liferay.portal.model.*;
      import com.liferay.portal.service.*;
      
      lock = LockLocalServiceUtil.getLock(
          "com.liferay.portal.kernel.scheduler.SchedulerEngine",
          "com.liferay.portal.kernel.scheduler.SchedulerEngine");
      
      System.out.println("UUID: " + lock.getUuid() + " --- " + lock.getCreateDate());
      

      Actual: You see the record that is in the standby instance.
      Expected: You should see the record that is in the main instance.
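
      The mismatch can be modeled in miniature: each node's Lock_ table holds its own record for the scheduler lock, so routing the read to the standby returns a different UUID than the primary would. A self-contained model (the maps and UUIDs below are made up for illustration):

```java
import java.util.Map;

public class StaleLockModel {

    // Stand-ins for the Lock_ row on each node; the UUIDs are invented.
    static final Map<String, String> PRIMARY_LOCKS =
        Map.of("SchedulerEngine", "uuid-main-0001");
    static final Map<String, String> STANDBY_LOCKS =
        Map.of("SchedulerEngine", "uuid-standby-0002");

    // getLock() is a read, so naive R/W splitting routes it to the standby.
    static String getLockUuid(boolean forcePrimary) {
        Map<String, String> node = forcePrimary ? PRIMARY_LOCKS : STANDBY_LOCKS;

        return node.get("SchedulerEngine");
    }
}
```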

      Notes for QA

      • To reproduce the issue, PostgreSQL 9.x is required, because 8.x doesn't support hot standby.
      • If you don't have a Debian/Ubuntu machine at hand, you can use the one which we have here.
      • You'll have to create the second instance (standby) with pg_createcluster.
      • You don't need to set up a fully functional Liferay cluster; enabling cluster.link.enabled=true is sufficient.

        Activity

        There are no comments yet on this issue.
