
NPEs with RamCache/QueryCache involvement cause server crash

Environment

Debian 9.9, SMP Debian 4.9.168-1+deb9u3 (2019-06-16) x86_64 GNU/Linux, Oracle Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode), Tomcat 8.5.14-1+deb9u3

Description

We have upgraded our webservers from Lucee 5.2.7.63 to 5.3.2.77 (release) on Monday. The former has been rock solid for us, but we need a new feature in 5.3, so we decided it was time to take the plunge.

Apart from some NPEs caused by onSessionEnd, which we don't see as critical, we didn't see any issues for the first couple of hours. Then one of the servers suddenly became unresponsive. As this was after office hours, I didn't get a chance to take a closer look at the cause.

A second server OOMed yesterday at midday. The hprof was some 11GB in size and I couldn't get it to parse in MAT. A third server then tanked yesterday evening; it didn't produce an OOM heap dump, it just went to an extremely high load and didn't seem to service any requests any more. This time I got the chance to take a closer look at what happened.

Whenever a server stalled, there were hardly any warning signs beforehand - no slowly increasing heap usage, no slowly increasing load. It seems to happen purely at random, out of the blue.

catalina.out shows lots and lots of these for the time of the event, though they don't show the cause, just the result:

25-Jun-2019 18:40:46.552 SCHWERWIEGEND [ajp-nio-8009-exec-35] org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun
 java.lang.ThreadDeath
	at java.lang.Thread.stop(Thread.java:853)
	at lucee.commons.io.StopThread.run(SystemUtil.java:1538)
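
For anyone reading along: as far as I can tell, these ThreadDeath entries are just the visible result of Lucee's request-timeout handling (StopThread calling Thread.stop() on the request thread), not the cause of the stall. A minimal standalone sketch of how such an entry comes about - plain Java 8, nothing Lucee-specific, class and variable names are mine:

// Hypothetical sketch, not Lucee code: Thread.stop() makes the target thread
// throw ThreadDeath wherever it currently is, which is what Tomcat then logs.
public class ThreadDeathSketch {
    public static void main(String[] args) throws InterruptedException {
        Thread request = new Thread(() -> {
            try {
                while (true) {
                    Thread.sleep(100); // stands in for a request that never finishes
                }
            } catch (ThreadDeath td) {
                System.out.println("request thread got " + td); // the logged "result"
                throw td; // ThreadDeath should always be rethrown
            } catch (InterruptedException ignored) {
            }
        });
        request.start();
        Thread.sleep(500);
        request.stop(); // deprecated, but this is effectively what a timeout watchdog does
        request.join();
    }
}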

lucee/server/global.log just shows the onSessionEnd NPEs and from time to time the following, which I guess may actually be related to the problem:


2019-06-25 12:19:22.288 application java.util.ConcurrentModificationException
	at org.apache.commons.collections4.map.AbstractReferenceMap$ReferenceBaseIterator.checkMod(AbstractReferenceMap.java:809)
	at org.apache.commons.collections4.map.AbstractReferenceMap$ReferenceBaseIterator.hasNext(AbstractReferenceMap.java:783)
	at org.apache.commons.collections4.map.AbstractReferenceMap$ReferenceValues.toArray(AbstractReferenceMap.java:576)
	at lucee.runtime.cache.ram.RamCache$Controler._run(RamCache.java:239)
	at lucee.runtime.cache.ram.RamCache$Controler.run(RamCache.java:230)
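
For what it's worth, this looks like the textbook failure mode of iterating a non-thread-safe map while other threads modify it. A minimal standalone sketch, assuming the cache sweep iterates a plain commons-collections4 ReferenceMap that request threads keep putting into at the same time (class and variable names are mine, not Lucee's):

// Hypothetical sketch, not Lucee code: one "controller" thread iterates the
// values of an unsynchronized ReferenceMap while a writer thread keeps adding
// entries; the iterator's modCount check then throws
// java.util.ConcurrentModificationException, usually within seconds.
import org.apache.commons.collections4.map.ReferenceMap;
import java.util.Map;

public class ReferenceMapCmeSketch {
    public static void main(String[] args) {
        Map<String, Object> cache = new ReferenceMap<>();

        // writer: stands in for request threads populating the cache
        Thread writer = new Thread(() -> {
            for (long i = 0; ; i++) {
                cache.put("key" + i, new Object());
            }
        });
        writer.setDaemon(true);
        writer.start();

        // "controller": stands in for the periodic cache sweep over the values
        while (true) {
            try {
                for (Object value : cache.values()) {
                    // inspect the entry, as a cache controller would
                }
            } catch (java.util.ConcurrentModificationException cme) {
                System.out.println("got " + cme);
                return;
            }
        }
    }
}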

In lucee/web/exception.log I see the issue starting with a couple of timeouts in the very same piece of code, all of them timing out on a cached query. The database in question doesn't show any issues at all; my guess is that there's some sort of deadlock on the cache put operation for a cached query. The dump only shows the named locks, and that list of course grows over time as more and more threads stall, but the following is the first log entry and the rest all show the same pattern:

"ERROR","ajp-nio-8009-exec-35","06/25/2019","18:40:46","",";request /somepath/player.anycboxsite/public/player.cfm (/var/www/somepath/player.anycboxsite/public/player.cfm) has run into a timeout (20 seconds) and has been stopped. Open locks at this time (lockmemcachedfirstcall_pcgames_2.0_1_webpage_article_1281978_page_1authorized_false, lockformcnewwrite_pcgames_2.0_1_webpage_article_1281978_page_1authorized_false, lockmemcachedfirstcall_pcgames_2.0_1_webpage_article_1268594_page_1authorized_false, lockformcnewwrite_pcgames_2.0_1_webpage_article_1268594_page_1authorized_false).;lucee.runtime.exp.RequestTimeoutException: request /somepath/player.anycboxsite/public/player.cfm (/var/www/somepath/player.anycboxsite/public/player.cfm) has run into a timeout (20 seconds) and has been stopped. Open locks at this time (lockmemcachedfirstcall_pcgames_2.0_1_webpage_article_1281978_page_1authorized_false, lockformcnewwrite_pcgames_2.0_1_webpage_article_1281978_page_1authorized_false, lockmemcachedfirstcall_pcgames_2.0_1_webpage_article_1268594_page_1authorized_false, lockformcnewwrite_pcgames_2.0_1_webpage_article_1268594_page_1authorized_false).
	at org.apache.commons.collections4.map.AbstractHashedMap.getEntry(AbstractHashedMap.java:461)
	at org.apache.commons.collections4.map.AbstractReferenceMap.getEntry(AbstractReferenceMap.java:427)
	at org.apache.commons.collections4.map.AbstractReferenceMap.get(AbstractReferenceMap.java:244)
	at lucee.runtime.cache.ram.RamCache.put(RamCache.java:191)
	at lucee.runtime.cache.tag.timespan.TimespanCacheHandler.set(TimespanCacheHandler.java:88)
	at lucee.runtime.tag.Query._doEndTag(Query.java:688)
	at lucee.runtime.tag.Query.doEndTag(Query.java:537)
	at somepath.player_anycboxsite322._public.player_cfm$cf.udfCall1(/somepath/player.anycboxsite/public/player.cfm:1034)
	at somepath.player_anycboxsite322._public.player_cfm$cf.udfCall(/somepath/player.anycboxsite/public/player.cfm)

None of the named locks in that list have any relation to the script that seems to stall here. I checked that part of the code and we're not using any locking in that specific area, so I am fairly certain that the named locks listed at the beginning are just the ordinary couple of pages being regenerated for memcached and have nothing to do with the RamCache problem for the query.
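
To illustrate why I keep coming back to the cache rather than to our own locks: all these stack traces end up inside AbstractHashedMap.getEntry(), i.e. walking a hash bucket chain. If the underlying ReferenceMap is written to concurrently without synchronization, that chain can get corrupted, after which get()/put() can spin there forever - which would look exactly like a request that never finishes and eventually gets killed by the timeout, without any real database or lock problem. A standalone sketch of that scenario (purely illustrative, nondeterministic, and all names are mine):

// Hypothetical sketch, not Lucee code: hammer a non-thread-safe ReferenceMap
// from several threads at once. If a put/resize race corrupts a bucket chain,
// get() can loop forever inside AbstractHashedMap.getEntry() - no deadlock,
// no exception, just threads that never come back.
import org.apache.commons.collections4.map.ReferenceMap;
import java.util.Map;

public class ReferenceMapStuckGetSketch {
    public static void main(String[] args) {
        final Map<String, String> cache = new ReferenceMap<>();

        for (int t = 0; t < 16; t++) {
            final int id = t;
            new Thread(() -> {
                for (long i = 0; ; i++) {
                    String key = "key-" + id + "-" + (i % 100_000);
                    cache.put(key, "value"); // concurrent, unsynchronized puts
                    cache.get(key);          // may never return once the map is corrupted
                    if (i % 1_000_000 == 0) {
                        System.out.println("thread " + id + " alive, iteration " + i);
                    }
                }
            }).start();
        }
        // If it triggers, the "alive" output stops for some threads and a thread
        // dump shows them sitting in AbstractHashedMap.getEntry().
    }
}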

This is the query that the stack trace shows as hanging:


<cfquery name="local.qGetVideoData" datasource="myds" cachedwithin="#createTimespan(0, 1, 0, 0)#">
    Select  va.vid, va.article_id, a.headline, a.intro, va.path_hd, va.path_sd, va.embed_thumb
    from    transcode.video_asset va,
            public.article a
    where   va.vid in (<cfqueryparam cfsqltype="cf_sql_integer" value="#arguments.videoIDList#" list="true">)
    and     a.id = va.article_id
    and     p.article_id = va.article_id
    and     p.site_id = <cfqueryparam cfsqltype="cf_sql_integer" value="#variables.iSiteId#">
    and     p.start_date <= now()
    and     ( p.end_date > now() or p.end_date IS NULL)
</cfquery>

A little later on, other cached queries show the same issue, and again my guess is that there is some deadlock issue in the query caching under concurrency.

I have since updated the servers to 5.3.3.53, but judging from the first errors I see in the logs, the issue has not gone away. Other cached queries are affected, too; the aforementioned script just happens to be one of the most requested. We haven't had a server stall so far, but I guess it's only a matter of time. So 5.3.3.53 may well be affected too, though I cannot say for sure until one of the machines actually goes down again. Here's a fresh one from the log, a short while after the upgrade to 5.3.3.53:


request /somepath/index.cfm (somepath/webserver/index.cfm) has run into a timeout (400 seconds) and has been stopped. Open locks at this time (lockformcnewwrite_gamezone_2.0_webpage_menu_445_b340_artikel-archiv/artikel-vom-09-05-2018/seite-3/authorized_false, lockformcnewwrite_pcgames_2.0_1_webpage_article_1259454_page_1authorized_false).
StackTrace string
lucee.runtime.exp.RequestTimeoutException: request somepath/webserver/index.cfm (somepath/webserver/index.cfm) has run into a timeout (400 seconds) and has been stopped. Open locks at this time (lockformcnewwrite_gamezone_2.0_webpage_menu_445_b340_artikel-archiv/artikel-vom-09-05-2018/seite-3/authorized_false, lockformcnewwrite_pcgames_2.0_1_webpage_article_1259454_page_1authorized_false).
	at org.apache.commons.collections4.map.AbstractHashedMap.getEntry(AbstractHashedMap.java:461)
	at org.apache.commons.collections4.map.AbstractReferenceMap.getEntry(AbstractReferenceMap.java:427)
	at org.apache.commons.collections4.map.AbstractReferenceMap.get(AbstractReferenceMap.java:244)
	at lucee.runtime.cache.ram.RamCache._getQuiet(RamCache.java:142)
	at lucee.runtime.cache.ram.RamCache.getCacheEntry(RamCache.java:155)
	at lucee.runtime.cache.CacheSupport.getValue(CacheSupport.java:216)
	at lucee.runtime.cache.tag.timespan.TimespanCacheHandler.get(TimespanCacheHandler.java:64)
	at lucee.runtime.cache.tag.timespan.TimespanCacheHandler.get(TimespanCacheHandler.java:195)
	at lucee.runtime.tag.Query._doEndTag(Query.java:603)
	at lucee.runtime.tag.Query.doEndTag(Query.java:537)

The pattern in the stack traces is always the same, starting with org.apache.commons.collections4.map.AbstractHashedMap.getEntry(AbstractHashedMap.java:461) and running down to lucee.runtime.tag.Query.doEndTag(Query.java:537).

The datasources in question are PostgreSQL datasources defined in the server admin. My guess is that there has been some change to the query caching that makes it susceptible to lock contention under concurrency.
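
I obviously don't know what RamCache really looks like beyond these stack traces, so the following is only a sketch of the kind of guarding I would expect around a shared, non-thread-safe map - every reader, writer and the sweeping controller thread going through one lock (or the map being replaced with a concurrent structure altogether). All names are mine, not Lucee's:

// Hypothetical sketch, not Lucee's actual code: serialize all access to the
// shared ReferenceMap, and let the sweeper work on a snapshot instead of
// iterating the live map.
import org.apache.commons.collections4.map.ReferenceMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class GuardedRamCacheSketch {
    private final Map<String, Object> entries = new ReferenceMap<>();

    public synchronized void put(String key, Object value) {
        entries.put(key, value);
    }

    public synchronized Object get(String key) {
        return entries.get(key);
    }

    // The controller/sweeper thread copies the values under the lock and then
    // inspects the copy, so it never iterates the live map unsynchronized.
    public synchronized List<Object> snapshotValues() {
        return new ArrayList<>(entries.values());
    }
}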

I'll try with the latest snapshot next, but as the machines in question are running in production, I'll probably have to downgrade back to 5.2 for the time being, though I'd like to wait for the next server stall before making any rash decisions.

The issue is very likely hard to reproduce: one of our servers went for more than a day without stalling once - and that was unfortunately the one we're running FusionReactor on. We're also tracking Tomcat and general server metrics using munin, and as I said, there really are no warning signs before it all goes south. I guess that as long as the request queue doesn't grow too much, everything is shiny; when it does, however, things go really bad, as per usual.

Status

Assignee

Unassigned

Reporter

Markus Wollny

Labels

None

Affects versions

5.3.3.60
5.3.2.77

Priority

New