Details

    • Branch Version/s:
      6.1.x
    • Backported to Branch:
      Committed

      Activity

      Shinn Lok added a comment -

The portlet fails on first load (or if "Resources Importer Test Portlet" doesn't exist) because of a thread-local cache issue. I am working around the problem by storing the group for the next iteration instead of depending on the finder cache.
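
A minimal sketch of that workaround, with hypothetical class and method names (the real change is in the attached patch): fetch the Group once and carry the reference across iterations, instead of re-querying by name through the possibly stale thread-local finder cache.

    import com.liferay.portal.kernel.exception.PortalException;
    import com.liferay.portal.kernel.exception.SystemException;
    import com.liferay.portal.model.Group;
    import com.liferay.portal.service.GroupLocalServiceUtil;

    public class ResourcesImporterTestRunner {

        public void runIterations(long companyId, int iterations)
            throws PortalException, SystemException {

            // Fetch the group once; getGroup() consults the finder cache,
            // which can be stale after a hot-deploy round trip.
            Group group = GroupLocalServiceUtil.getGroup(
                companyId, "Resources Importer Test Portlet");

            for (int i = 0; i < iterations; i++) {

                // Reuse the stored reference instead of re-querying by name.
                verifyImportedResources(group);
            }
        }

        private void verifyImportedResources(Group group) {

            // Placeholder for the portlet's per-iteration checks.
        }
    }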

      Shuyang describes it below:

There are two ways to make a synchronous call over the message bus:
1) DefaultSynchronousMessageSender
Works with BaseAsyncDestination, which means the listener runs on a thread pool thread. It supports timeouts but is not efficient.
2) DirectSynchronousMessageSender
Works with SynchronousDestination, which means the caller does the work (the listener is invoked in the caller's thread). It does not support timeouts and is very efficient.

Normally, you should use 2) whenever possible, because it is faster, unless you need the timeout feature.
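
For context, here is a minimal sketch of a blocking send against the standard hot-deploy destination, assuming the 6.x kernel API. Which of the two senders services the call is decided by the portal's messaging configuration, not by the call site.

    import com.liferay.portal.kernel.messaging.DestinationNames;
    import com.liferay.portal.kernel.messaging.Message;
    import com.liferay.portal.kernel.messaging.MessageBusException;
    import com.liferay.portal.kernel.messaging.MessageBusUtil;

    public class SynchronousSendExample {

        public Object send(String payload) throws MessageBusException {
            Message message = new Message();

            message.setPayload(payload);

            // Blocks until a listener responds. With
            // DefaultSynchronousMessageSender the listener runs on a thread
            // pool thread; with DirectSynchronousMessageSender it runs right
            // here, in the caller's thread.
            return MessageBusUtil.sendSynchronousMessage(
                DestinationNames.HOT_DEPLOY, message);
        }
    }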

1) has a serious limitation around thread-local caching, which is what causes this problem.
Before calling 1), the current thread may already have populated entries in its thread-local cache via the entity cache or finder cache. When the work is handed off to a thread pool thread, and that async job modifies the previously cached data, the entity cache and finder cache are updated properly, but only in the thread pool thread's thread-local cache. So when execution joins back to the caller thread, the caller sees stale results from its own thread-local cache.
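
The failure sequence, sketched at a hypothetical call site (the listener's modification of the group is assumed for illustration):

    import com.liferay.portal.kernel.exception.PortalException;
    import com.liferay.portal.kernel.exception.SystemException;
    import com.liferay.portal.kernel.messaging.DestinationNames;
    import com.liferay.portal.kernel.messaging.Message;
    import com.liferay.portal.kernel.messaging.MessageBusException;
    import com.liferay.portal.kernel.messaging.MessageBusUtil;
    import com.liferay.portal.model.Group;
    import com.liferay.portal.service.GroupLocalServiceUtil;

    public class StaleThreadLocalCacheScenario {

        public void reproduce(long companyId)
            throws MessageBusException, PortalException, SystemException {

            // 1. Populates the caller thread's thread-local finder/entity
            //    cache.
            Group before = GroupLocalServiceUtil.getGroup(
                companyId, "Resources Importer Test Portlet");

            // 2. DefaultSynchronousMessageSender runs the listener on a
            //    thread pool thread. If the listener modifies the group, the
            //    shared caches are updated, but only the pool thread's
            //    thread-local cache reflects the change.
            MessageBusUtil.sendSynchronousMessage(
                DestinationNames.HOT_DEPLOY, new Message());

            // 3. Answered from the caller's own thread-local cache, which
            //    was never invalidated, so this can return the stale state
            //    seen in step 1.
            Group after = GroupLocalServiceUtil.getGroup(
                companyId, "Resources Importer Test Portlet");
        }
    }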

So there are two possible solutions for this problem:
1) Always prefer SynchronousDestination; in this case, configure destination.hot_deploy as a com.liferay.portal.kernel.messaging.SynchronousDestination.
See solution 1's patches in the attachment.
This solution is very efficient, but it is not developer friendly. For people who don't know the MessageBus well, it is very difficult.
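
The attached patch makes this change in the Spring messaging configuration; purely as an illustration, a programmatic equivalent might look like the sketch below. It assumes replaceDestination() is available on MessageBusUtil in this kernel version; if not, removing and re-adding the destination achieves the same effect.

    import com.liferay.portal.kernel.messaging.DestinationNames;
    import com.liferay.portal.kernel.messaging.MessageBusUtil;
    import com.liferay.portal.kernel.messaging.SynchronousDestination;

    public class HotDeployDestinationConfigurator {

        public void configure() {

            // Listeners on a SynchronousDestination run in the sender's
            // thread, so the thread-local caches stay coherent.
            SynchronousDestination destination = new SynchronousDestination();

            destination.setName(DestinationNames.HOT_DEPLOY);

            // Assumption: replaceDestination() exists in this kernel version.
            MessageBusUtil.replaceDestination(destination);
        }
    }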

2) Force SynchronousMessageListener to clear all thread-local caches on returning.
This is very inefficient: the clear is global, and since we have no idea which entries should be removed, we can only remove everything.
It is also architecturally wrong. MessageBus and caching are two completely unrelated components, and forcing them to have knowledge of each other is not elegant; I personally find it disgusting.
However, it is very developer friendly. People don't need to worry about the threading details; it will just work, albeit very slowly.
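
A sketch of solution 2's effect, shown at the call site rather than inside SynchronousMessageListener itself. It assumes the 6.x kernel's clearLocalCache() methods, which clear only the current thread's local cache; the clear is indiscriminate, which is exactly the performance cost described above.

    import com.liferay.portal.kernel.dao.orm.EntityCacheUtil;
    import com.liferay.portal.kernel.dao.orm.FinderCacheUtil;
    import com.liferay.portal.kernel.messaging.Message;
    import com.liferay.portal.kernel.messaging.MessageBusException;
    import com.liferay.portal.kernel.messaging.MessageBusUtil;

    public class CacheClearingSynchronousSender {

        public Object sendAndClear(String destinationName, Message message)
            throws MessageBusException {

            try {
                return MessageBusUtil.sendSynchronousMessage(
                    destinationName, message);
            }
            finally {

                // Drop this thread's local caches so the next lookup hits
                // the shared cache or the database instead of stale entries.
                FinderCacheUtil.clearLocalCache();
                EntityCacheUtil.clearLocalCache();
            }
        }
    }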

Using SynchronousDestination for destination.hot_deploy will have an impact on other parts of the system: the destination used to be truly async and was only forced to be sync for this test portlet. Switching to SynchronousDestination will force everyone to be sync.

      So if possible, we should rewrite the test portlet to avoid this problem.

But again, the choice between 1) and 2) is a very general question that also appears in other parts of the system. If we choose 1), we do nothing and leave the actual problem to developers. If we choose 2), we apply the patch from the attachment and won't see this problem again, but we suffer bad performance.

I may schedule some time early next year to refactor the message bus code a little; some of it, created by Mike in his first iteration, is quite old. I may find a better solution for this problem by then. But for now, we have to deal with it with either 1) or 2).

      Edward Gonzales added a comment -

Hello everyone! We are in the process of removing the "Theme" component from LPS. Please make the necessary adjustments to your affected filters. Thanks!

      Lawrence Lee added a comment -

      Committed on:
      Portal 6.1.x GIT ID: 1c72f5604c1f74270cbcb7d886c1179e11cc40e8.
      Portal 6.2.x GIT ID: 78ccb9137ff3cc7d6d35bc0d1a0e4daf8d19b04e.

