PUBLIC - Liferay Faces
FACES-1464

BridgeRequestScopeCacheImpl does not respect javax.portlet.faces.MAX_MANAGED_REQUEST_SCOPES

      Description

      This issue was first mentioned in the following forum post:
      http://www.liferay.com/community/forums/-/message_boards/view_message/18723724

      In order to conserve memory, developers need more flexibility and control over the cache size.

        Activity

        Neil Griffin added a comment -

        Hi Jens,

        I have been reading the JavaDoc specs for ConcurrentHashMap and also a blog titled ConcurrentHashMap Revealed in order to gain a better understanding of the problem.

        Since the default value of "com.liferay.faces.bridge.bridgeRequestScopePreserved" is false, and now that the FACES-1445 and FACES-1463 memory leaks are fixed, the BridgeRequestScopeCache should not be causing java.lang.OutOfMemoryError anymore.

        For example, consider the typical use-case of 10,000 users concurrently visiting a Liferay Portal page with a JSF portlet. That will cause 10,000 instances of BridgeRequestScopeImpl to be stored in the BridgeRequestScopeCache, but these instances will be removed at the end of the RENDER_PHASE of the portlet lifecycle.

        In the forum post, you wrote:

        The cache implementation extends the ConcurrentHashMap with the intention to limit the capacity [...] The setting MAX_MANAGED_REQUEST_SCOPES has no effect, because you can only set the initial and not the maximal capacity of a ConcurrentHashMap.

        You are correct here – javax.portlet.faces.MAX_MANAGED_REQUEST_SCOPES doesn't work, because ConcurrentHashMap doesn't have the ability to limit the number of map entries. Instead, the value is only used as the initial capacity of the ConcurrentHashMap.
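
        For illustration only (this is not code from the bridge), a small standalone snippet shows that the constructor argument is just an initial sizing hint and does not cap the number of entries:

            import java.util.concurrent.ConcurrentHashMap;

            public class CapacityDemo {

                public static void main(String[] args) {

                    // The constructor argument is only the *initial* capacity,
                    // not an upper bound on the number of entries.
                    ConcurrentHashMap<Integer, String> map = new ConcurrentHashMap<Integer, String>(10);

                    for (int i = 0; i < 1000; i++) {
                        map.put(i, "value" + i);
                    }

                    // Prints 1000, even though the "capacity" passed to the constructor was 10.
                    System.out.println(map.size());
                }
            }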

        You also wrote:

        the size of a ConcurrentHashMap is fixed to 2^30.

        But I am not sure that this is correct. According to the blog, as segments grow, they reach an ultimate capacity of 2^30.

        Because of the performance benefits of ConcurrentHashMap in a multithreaded environment, we would like to keep ConcurrentHashMap as the base class. However, in order to implement the standard javax.portlet.faces.MAX_MANAGED_REQUEST_SCOPES feature, we could override the ConcurrentHashMap#put(key, value) method in order to achieve an LRU-type eviction mechanism. We also think that the default value should be -1 (or something like that), so that there is no limit by default.
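
        Roughly speaking, the idea might look something like the following sketch (hypothetical class and field names, simple insertion-order/FIFO eviction rather than true LRU, and without the additional synchronization a production cache would need):

            import java.util.Queue;
            import java.util.concurrent.ConcurrentHashMap;
            import java.util.concurrent.ConcurrentLinkedQueue;

            // Sketch only: a map that evicts its oldest entry once maxSize is exceeded.
            // A maxSize of -1 would mean "no limit".
            public class MaxSizeConcurrentHashMap<K, V> extends ConcurrentHashMap<K, V> {

                private static final long serialVersionUID = 1L;

                private final int maxSize;
                private final Queue<K> insertionOrder = new ConcurrentLinkedQueue<K>();

                public MaxSizeConcurrentHashMap(int maxSize) {
                    this.maxSize = maxSize;
                }

                @Override
                public V put(K key, V value) {

                    if (maxSize > 0) {

                        // Evict the oldest key(s) until there is room for the new entry.
                        while (size() >= maxSize) {
                            K oldestKey = insertionOrder.poll();

                            if (oldestKey == null) {
                                break;
                            }

                            remove(oldestKey);
                        }
                    }

                    insertionOrder.add(key);

                    return super.put(key, value);
                }
            }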

        What do you think?

        Thanks,

        Neil

        Jens Meinecke added a comment -

        I agree with you except on one point: the limit of the map. I think it should be limited for the sake of robustness. What, in your opinion, is the reason for an unlimited cache?

        Limiting the cache means that, in the worst case, elements will be overwritten all the time. So effectively it feels like having no cache, but your portal will keep running.

        Leaving the cache unlimited can end in another memory leak, and in the worst case your portal will crash. Perhaps there is no other leak, but you have no control over what customers do in their portals. That is why I would prefer a limited cache.

        Neil Griffin added a comment -

        Hi Jens,

        Thanks for getting back to us. We will override ConcurrentHashMap#put(key, value) and provide a way for you to set the maximum in your environment with javax.portlet.faces.MAX_MANAGED_REQUEST_SCOPES. That will also satisfy the JSR 329 spec requirement. We'll try to do it this week.
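
        For reference, JSR 329 defines javax.portlet.faces.MAX_MANAGED_REQUEST_SCOPES as a context initialization parameter, so once this is in place the limit should be settable in WEB-INF/web.xml along these lines (1000 is just an example value):

            <context-param>
                <param-name>javax.portlet.faces.MAX_MANAGED_REQUEST_SCOPES</param-name>
                <param-value>1000</param-value>
            </context-param>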

        Best Regards,

        Neil

        Neil Griffin added a comment -

        The reason for the unlimited cache:

        When "com.liferay.faces.bridge.bridgeRequestScopePreserved" is false, BridgeRequestScope is basically the same as RequestScope. So if 10,000 simultaneous requests come in, and the app server is able to handle them, then the BridgeRequestScopeCache should be able to handle them as well. Setting a default maximum would effectively limit the number of simultaneous requests.

        Jens Meinecke added a comment -

        OK, limiting the maximum number of simultaneous requests is not an option. I thought that if you limit a cache, you limit the number of elements that can be accessed very quickly (or stored), but not the maximum number of elements that can be processed through the cache.

        I am not very familiar with the details of the JSF specification, so it is up to you now. I am sure you will make the right decision.

        Neil Griffin added a comment -

        Thanks Jens, this issue is fixed now.

