I have been reading the Javadoc for ConcurrentHashMap and also a blog post titled "ConcurrentHashMap Revealed" in order to gain a better understanding of the problem.
Since the default value of "com.liferay.faces.bridge.bridgeRequestScopePreserved" is false, and the
FACES-1445 and FACES-1463 memory leaks are now fixed, the BridgeRequestScopeCache should no longer cause java.lang.OutOfMemoryError.
For example, consider the typical use-case of 10,000 users concurrently visiting a Liferay Portal page with a JSF portlet. That will cause 10,000 instances of BridgeRequestScopeImpl to be stored in the BridgeRequestScopeCache, but these instances will be removed at the end of the RENDER_PHASE of the portlet lifecycle.
In the forum post, you wrote:
The cache implementation extends the ConcurrentHashMap with the intention to limit the capacity [...] The setting MAX_MANAGED_REQUEST_SCOPES has no effect, because you can only set the initial and not the maximal capacity of a ConcurrentHashMap.
You are correct here: javax.portlet.faces.MAX_MANAGED_REQUEST_SCOPES doesn't work, because ConcurrentHashMap has no ability to limit the number of map entries. Instead, the value is only used as the initial capacity of the ConcurrentHashMap.
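Just to make that concrete, here is a minimal sketch (not the actual bridge code; the variable name maxManagedRequestScopes is hypothetical) showing that the int constructor argument is only a sizing hint for the initial table and does not cap the number of entries:

import java.util.concurrent.ConcurrentHashMap;

public class InitialCapacityExample {

    public static void main(String[] args) {

        // Hypothetical value of javax.portlet.faces.MAX_MANAGED_REQUEST_SCOPES
        int maxManagedRequestScopes = 100;

        // The constructor argument is only a sizing hint for the initial
        // table; it does NOT limit how many entries the map will hold.
        ConcurrentHashMap<String, Object> cache =
            new ConcurrentHashMap<String, Object>(maxManagedRequestScopes);

        // The map grows past the supposed "maximum" without complaint:
        for (int i = 0; i < (maxManagedRequestScopes * 2); i++) {
            cache.put("bridgeRequestScope" + i, new Object());
        }

        System.out.println(cache.size()); // prints 200, not 100
    }
}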
You also wrote:
the size of a ConcurrentHashMap is fixed to 2^30.
But I am not sure that this is correct. According to the blog, 2^30 is not a fixed size, but rather the ultimate capacity that the segments can reach as they grow.
Because of the performance benefits of ConcurrentHashMap in a multithreaded environment, we would like to keep ConcurrentHashMap as the base class. However, in order to implement the standard javax.portlet.faces.MAX_MANAGED_REQUEST_SCOPES feature, we could override the ConcurrentHashMap#put(key, value) method in order to achieve an LRU type of eviction mechanism. We think the default value should be something like -1, so that there is no limit (unlimited) by default.
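Here is a rough sketch of the idea, just to make it concrete. The class name, constructor, and the eviction policy are all assumptions on my part, not an actual implementation; in particular, the eviction is insertion-ordered (FIFO) rather than true LRU, since ConcurrentHashMap does not track access order:

import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class BoundedConcurrentCache<K, V> extends ConcurrentHashMap<K, V> {

    private static final long serialVersionUID = 1L;

    // A negative value (e.g. the proposed default of -1) means "no limit".
    private final int maxSize;

    // Tracks insertion order so that the oldest entry can be evicted first.
    private final Queue<K> insertionOrder = new ConcurrentLinkedQueue<K>();

    public BoundedConcurrentCache(int maxSize) {
        this.maxSize = maxSize;
    }

    @Override
    public V put(K key, V value) {

        V previousValue = super.put(key, value);

        if (previousValue == null) {
            insertionOrder.add(key);
        }

        // Evict the oldest entries once the limit is exceeded
        // (this loop is a no-op when maxSize is negative).
        while ((maxSize >= 0) && (size() > maxSize)) {
            K oldestKey = insertionOrder.poll();

            if (oldestKey == null) {
                break;
            }

            remove(oldestKey);
        }

        return previousValue;
    }
}

A true LRU would also have to reorder keys on get(key), which is harder to do without additional locking, so that trade-off would need to be discussed as well.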
What do you think?