Lucee 4 and 5 consume too much heap memory with large uploads due to a bug in isToBig()


I'm pretty sure I just isolated and fixed a problem that Lucee has had for over 3 years. It may have even been there in Railo.

It may even be the solution for the complaint made in this old ticket:

It appears the code written for isToBig() is unreliable or inaccurate somehow. It attempts to guess whether there is enough heap memory left to process a huge POST request on the heap, instead of reverting to the alternative behavior, which is a very efficient disk-based method. The disk-based approach only consumes a trivial amount of heap (up to about 1 MB), whereas the isToBig path can consume ALL the heap and accidentally throw a java heap space error because of the inaccuracy of trying to guess how much memory is needed.
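To illustrate the disk-based strategy described above, here is a minimal, hypothetical sketch (not Lucee's actual code) of what a spill-to-disk buffer looks like: bytes stay on the heap only up to a fixed threshold, and anything beyond that is written to a temp file, so heap use stays bounded no matter how large the upload is.

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

/**
 * Illustrative sketch of the disk-backed approach: buffer on the heap
 * only up to `threshold` bytes, then spill everything to a temp file.
 * Class and method names here are hypothetical, for illustration only.
 */
public class SpillOutputStream extends OutputStream {
    private final int threshold;
    private final File tempDir;
    private ByteArrayOutputStream memory = new ByteArrayOutputStream();
    private OutputStream disk;   // non-null once we have spilled
    private File spillFile;
    private long written;

    public SpillOutputStream(int threshold, File tempDir) {
        this.threshold = threshold;
        this.tempDir = tempDir;
    }

    @Override
    public void write(int b) throws IOException {
        write(new byte[] { (byte) b }, 0, 1);
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        if (disk == null && written + len > threshold) {
            // Crossing the threshold: move the in-memory bytes to disk
            // and stop buffering on the heap from here on.
            spillFile = File.createTempFile("upload", ".tmp", tempDir);
            disk = new FileOutputStream(spillFile);
            memory.writeTo(disk);
            memory = null;   // free the heap buffer
        }
        if (disk != null) {
            disk.write(b, off, len);
        } else {
            memory.write(b, off, len);
        }
        written += len;
    }

    public boolean isInMemory() { return disk == null; }

    public File getSpillFile() { return spillFile; }

    @Override
    public void close() throws IOException {
        if (disk != null) disk.close();
    }
}
```

This is the same idea Commons FileUpload's DiskFileItemFactory implements: the heap cost is capped at the threshold regardless of the size of the POST body.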

This can make Lucee very unstable and able to crash. This is a serious flaw in Lucee, and it was easy to fix.

If we are going to persist in allowing further optimization beyond the 1 MB default, I suggest that value become a Lucee admin option on the Request page named "Limit Request Upload Size", with a comment describing the behavior. I'm talking about giving us access to modify this value in
DiskFileItemFactory(DiskFileItemFactory.DEFAULT_SIZE_THRESHOLD, (File) tempDir)

Obviously the user must also configure the Tomcat (or other server's) connector appropriately to allow a significantly larger POST than the default, which I have also done. I was able to upload a file larger than the heap, and I monitored the heap during processing and saw that it no longer used excessive memory. So basically you had the solution there, but you weren't letting it run.
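For reference, the connector change mentioned above would be made in Tomcat's server.xml. The attribute values below are examples, not the ones actually used in this ticket; note that maxPostSize governs the container's own form-parameter parsing, so depending on who parses the multipart body it may or may not be the limiting factor, while maxSwallowSize limits how much of an aborted upload Tomcat will swallow.

```xml
<!-- server.xml: raise the connector's POST limits (example values).
     maxPostSize="-1" would disable the limit entirely; an explicit
     byte count is safer in production. -->
<Connector port="8888" protocol="HTTP/1.1"
           maxPostSize="104857600"
           maxSwallowSize="104857600" />
```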

I'll submit my working commit in a moment.




Michael Forell
June 14, 2019, 12:58 PM

I’d like to confirm that there is an error.


Here is a stacktrace:


Michael Offner
June 19, 2020, 1:56 PM

I cannot simply apply your fix; it trades one issue for another. This code allows the input stream to be requested more than once, which is necessary, for example, when you call getHTTPRequestData(); that change would break that function.

What we could do is not store the content in memory once it reaches a certain size.

Sure, we already have a limitation of 50 MB, but I would say we should only keep objects in memory that are much smaller. The question also is why the method isToBig does not work as expected (it could be a multi-threading issue).

Zac Spitzer
June 19, 2020, 2:13 PM

Is this also complicated by threads each reprocessing the pageContext, i.e. LDEV-2903?

Michael Offner
December 8, 2020, 2:44 PM

Zac Spitzer
February 5, 2021, 11:06 AM





Bruce Kirkpatrick




Fix versions


5.3.8 Sprint 3