Issue
There is a Java 11 (Spring Boot 2.5.1) application with a simple workflow:
- Upload archives (as multipart files, 50-100 MB each)
- Unpack them in memory
- Send each unpacked file as a message to a queue via JMS
When I run the app locally with java -jar app.jar
its memory usage (in VisualVM) looks like a sawtooth: high peaks (~400 MB) over a stable baseline (~100 MB).
When I run the same app in a Docker container, memory consumption grows to 700 MB and beyond until an OutOfMemoryError occurs. It appears that GC does not work at all. Even when memory options are present (java -Xms400m -Xmx400m -jar app.jar), the container seems to ignore them completely, still consuming much more memory.
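For reference, one way to double-check the heap limits the JVM actually applies inside the container (a quick sketch, assuming a shell inside the running image) is to ask the JVM itself:

# Reports the heap sizes the JVM settles on with the same options as the ENTRYPOINT
java -Xms400m -Xmx400m -XX:+PrintFlagsFinal -version | grep -iE 'InitialHeapSize|MaxHeapSize'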
So the behavior in the container and on the host OS is dramatically different.
I tried this Docker image in Docker Desktop on Windows 10 and in OpenShift 4.6, and got a similar memory-usage picture in both.
Dockerfile
FROM bellsoft/liberica-openjdk-alpine:11.0.9-12
RUN addgroup -S apprunner && adduser -S apprunner -G apprunner
COPY target/app.jar /home/apprunner/app.jar
USER apprunner:apprunner
WORKDIR /home/apprunner
EXPOSE 8080
ENTRYPOINT java -Xms400m -Xmx400m -jar app.jar
Java versions
# HOST
java -version
java 11.0.10 2021-01-19 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.10+8-LTS-162)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.10+8-LTS-162, mixed mode)
# CONTAINER
java -version
openjdk version "11.0.9" 2020-10-20 LTS
OpenJDK Runtime Environment (build 11.0.9+12-LTS)
OpenJDK 64-Bit Server VM (build 11.0.9+12-LTS, mixed mode)
Could it be that there are some "special settings" in liberica-openjdk 11.0.9-12
that turn off GC, or something like that?
Please help me figure out what is wrong in this case and how to make the app behave the same way inside and outside a container.
UPDATE: The JVM works as expected, never exceeding the memory limit. The problem is actually in the container's behaviour:
- (1) it adds roughly 200 MB of overhead;
- (2) it never releases memory back.
Issue (1) caused the app to get an OutOfMemoryError in the OpenShift cluster when -Xmx=500m and the container memory limit was 600 MB: in fact, only 400 MB (600 - 200) were available to the JVM.
Issue (2) gave the impression that GC does not work, although it did: periodic logging of the used heap size showed that after growing to the peak (~400 MB) it dropped back to ~100 MB, while the overall container memory stayed at its highest level (with only a minimal decrease of ~20 MB).
So the solution in this case was to account for the container overhead and set the allowed container memory limit in OpenShift to 800 MB.
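A rough way to reproduce this observation (a sketch; the Java PID and container id are placeholders) is to compare the JVM's own heap figures with what the container runtime reports:

# JVM view: used heap drops back to the baseline after each GC cycle
jcmd <java-pid> GC.heap_info
# Container view: reported memory stays near its peak even after the heap shrinks
docker stats <container-id> --no-stream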
Solution
In Java 11, you can find out the flags that have been passed to the JVM and the "ergonomic" ones that have been set by the JVM by adding -XX:+PrintCommandLineFlags to the JVM options.
That should tell you if the container you are using is overriding the flags you have given.
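For example (a sketch based on the ENTRYPOINT above):

# Prints the explicit and ergonomically selected flags at startup, along the lines of
# -XX:InitialHeapSize=419430400 -XX:MaxHeapSize=419430400 -XX:+UseG1GC ...
java -XX:+PrintCommandLineFlags -Xms400m -Xmx400m -jar app.jar

If the -Xmx value you passed shows up as MaxHeapSize here, nothing in the container is overriding it.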
Having said that, it is (IMO) unlikely that the container is overriding the parameters.
It is not unusual for a JVM to use more memory than the -Xmx option says. The explanation is that this option only controls the size of the Java heap. A JVM consumes a lot of memory that is not part of the Java heap; e.g. the executable and native libraries, the native heap, metaspace, off-heap memory allocations, stack frames, mapped files, and so on. Depending on your application, this could easily exceed 300 MB.
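If you want to see where that non-heap memory goes, one option (a sketch; Native Memory Tracking adds a small overhead) is to enable NMT and query it with jcmd:

# Start the JVM with native memory tracking enabled
java -XX:NativeMemoryTracking=summary -Xms400m -Xmx400m -jar app.jar
# From another shell, print the breakdown (heap, metaspace, threads, code cache, ...)
jcmd <java-pid> VM.native_memory summary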
Secondly, OOMEs are not necessarily caused by running out of heap space. Check what the "reason" string says.
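Typical examples of that reason string (the exact wording varies between JDK versions):

java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Metaspace
java.lang.OutOfMemoryError: Direct buffer memory
java.lang.OutOfMemoryError: unable to create native thread

Only the first of these indicates that the Java heap itself was exhausted.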
Finally, this could be a difference in your app's memory utilization in a containerized environment versus when you run it locally.
Answered By - Stephen C
Answer Checked By - Clifford M. (JavaFixing Volunteer)