I initially brought up this issue on the dev forums: https://dev.lucee.org/t/massive-amounts-of-debug-logging-in-5-3-5-92/6921
To sum it up, we run our Lucee application in AWS's Elastic Beanstalk service. After upgrading from a 5.2 release to 5.3.5.92, huge log files are being created on the EC2 instance, outside of the Docker container. Logging levels are set to ERROR in the Lucee admin console, and none of the Tomcat or Lucee log files are bloating. Debugging is also turned off in the Lucee admin console.
All we do to upgrade Lucee versions is modify the FROM line in our Dockerfile. This was not an issue with "FROM lucee/lucee52:22.214.171.124". It only became an issue with "FROM lucee/lucee:5.3.5.92". My last test with a "FROM lucee/lucee:188.8.131.52-SNAPSHOT" build still showed the logging issue.
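For reference, the whole upgrade is a one-line change to the base image (a sketch, not our full Dockerfile; the 5.3.5.92 tag is taken from the thread title and may not match every build we tested):

```dockerfile
# Upgrading Lucee = changing this one line; everything else stays the same.
FROM lucee/lucee:5.3.5.92

# ... COPY application code into the Lucee/Tomcat webroot, etc. ...
```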
The contents of the log files appear to be debug-level logging related to the application making external HTTP and/or S3 requests. The files are so large because they contain the entire content of the files (images, PDFs, etc.) that users have uploaded to the server. I have included a sample of a log that has been redacted for sensitive info. If you need to see some of the full (very large) logs, let me know and I'll work on cleaning one up.
For now, we have turned the logging off entirely by adding `--log-driver=none` to the `docker run` command in /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh. This is just a temporary fix, because we have to re-add it manually every time Elastic Beanstalk spawns a new EC2 instance.
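For anyone else hitting this, the workaround is a one-flag change (a sketch; the surrounding `docker run` arguments in 00run.sh are generated by Elastic Beanstalk and vary by platform version):

```sh
# In /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh, add --log-driver=none
# so Docker discards the container's stdout/stderr instead of the host
# collecting it under /var/log/eb-docker/containers/eb-current-app/.
docker run --log-driver=none -d ... # remaining original arguments unchanged
```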
If I had to guess, there is some sort of console output that Elastic Beanstalk is picking up and logging. We've run into similar issues in a separate Node.js application where every `console.log()` left behind in the application code was being picked up in log files and eventually eating up all of the disk space of the EC2 instance.
Sample log file, named something like /var/log/eb-docker/containers/eb-current-app/eb-063d69495c9f-stdouterr.log:
OS: Docker running on 64bit Amazon Linux/2.14.3
Java Version: 11.0.6
Tomcat Version: 9.0.33
Lucee Version: 5.3.5.92
Thanks Michael. At first I was scratching my head because there’s no trace of a “commons-logging.properties” file in our system. Then I re-read your response more carefully…
It turns out that we have an old, unused custom jar file that gets deployed with our application. After removing this jar, I can no longer replicate the issue; when I restore the jar, the issue returns. I still need to get my hands on the jar's source, but it's safe to say this jar is the problem. Merely being loaded onto Tomcat's classpath must be turning on jets3t debug logging (the jar is never requested or executed), and any jets3t operations in newer Lucee versions are then getting logged.
Once I review the source of our custom jar, I’ll report back with what I find. I’m hoping it will also explain why the log files were only being generated outside of the docker container.
So, I’m no Java expert, but here’s what I’ve gathered from the source of our custom jar. Compiled into the jar is a log4j.properties file, and that file contains the line:
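For illustration, a log4j 1.x configuration of this shape (hypothetical; the property names are standard log4j 1.x, but the exact values in our jar may differ) routes everything at DEBUG level to the console:

```properties
# Hypothetical log4j.properties of the shape described: root logger at DEBUG,
# writing to a ConsoleAppender, so every library on the classpath
# (including jets3t) logs to stdout.
log4j.rootLogger=DEBUG, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d %-5p %c - %m%n
```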
The ConsoleAppender explains why the logs were only being created outside of Docker: by default, Elastic Beanstalk collects all console output from the container and writes it to log files on the host.
What I don’t understand is how simply having this jar in the project (deployed to /usr/local/tomcat/lucee) causes Lucee to inherit its log4j settings. Luckily, we can just remove the jar, since we no longer need it. But this crossing of wires is a little concerning should we ever need to implement logging in a custom jar in the future. Admittedly, that concern probably stems from my limited understanding of the Java ecosystem; I imagine there’s a way to encapsulate the logging configuration.
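Until then, a quick way to audit whether a deployed jar bundles its own logging config is to grep the jar's file listing (a jar is just a zip archive). A self-contained sketch, using Python's zipfile CLI to build a stand-in jar (`demo.jar` is hypothetical; against the real deployment you'd list the actual jars under /usr/local/tomcat/lucee):

```shell
# Build a stand-in "jar" containing a log4j.properties, then check for it.
cd "$(mktemp -d)"
printf 'log4j.rootLogger=DEBUG, console\n' > log4j.properties
python3 -m zipfile -c demo.jar log4j.properties   # stand-in for the custom jar
# The actual check: grep the archive listing for a bundled logging config.
if python3 -m zipfile -l demo.jar | grep -q 'log4j.properties'; then
  echo "demo.jar bundles a log4j.properties"
fi
```

Any jar that prints a match here is a candidate for reconfiguring logging for the whole JVM once it lands on the classpath.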
Could you please attach your jar here for testing?
We should make sure this can never happen in Lucee.
I’ve uploaded the jar and the source to the original post on this ticket.