Troubleshooting Memory Issues in Java Applications
Last updated January 18, 2024
Tuning the memory use of your application requires understanding both how Java uses memory and how you can gain visibility into your application’s memory use.
JVM memory usage
The JVM uses memory in a number of different ways. The primary, but not singular, use of memory is in the heap. Outside of the heap, memory is also consumed by Metaspace and the stack.
Java Heap - The heap is where your class instantiations (or objects) are stored, along with their instance variables. When discussing Java memory and optimization, we most often discuss the heap because we have the most control over it, and it is where garbage collection (and GC optimizations) take place. Heap size is controlled by the -Xms and -Xmx JVM flags. Read more about GC and The Heap.
Java Stack - Each thread has its own call stack. The stack stores primitive local variables and object references, along with the call stack (method invocations) itself. The stack is cleaned up as stack frames move out of context, so no GC is performed here. The -Xss JVM option controls how much memory gets allocated for each thread’s stack.
Metaspace - Metaspace stores the class definitions of your objects. Its size is unbounded by default and can be capped with the -XX:MaxMetaspaceSize flag (-XX:MetaspaceSize sets the threshold at which it is first resized).
Additional JVM overhead - In addition to the above, some memory is consumed by the JVM itself. This includes the JVM’s own native (C) libraries and the memory-allocation overhead needed to manage the memory pools above. Visibility tools that run on the JVM won’t show this overhead, so while they can give an idea of how an application uses memory, they can’t show the total memory use of the JVM process. This type of memory can be affected by Tuning glibc Memory Behavior.
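For example, a JVM launched with explicit sizes for each of these regions might look like the following (the jar path and the sizes are illustrative, not recommendations):
$ java -Xms512m -Xmx512m -Xss512k -XX:MaxMetaspaceSize=128m -jar target/myapp.jar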
Configuring Java to run in a container
The JVM attempts to set default allocation values for its various memory categories based on what the operating system reports is available. However, when running inside of a container (such as a Heroku dyno or a Docker container), the values reported by the OS may be incorrect. You can work around this by configuring the JVM to use cgroup memory limits instead.
On Java 8, the use of cgroup memory limits is an experimental feature that can be enabled by adding the following options to your JVM process (either in your Procfile or with a config variable):
-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
On Java 9 and newer, the option is no longer experimental:
-XX:+UseContainerSupport
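For example, on Java 9 and newer you might set this option with a config variable (JAVA_OPTS matches the convention used elsewhere in this article; adjust it to however your process picks up JVM options):
$ heroku config:set JAVA_OPTS="-XX:+UseContainerSupport"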
These options might reduce the overall memory footprint of your JVM process. If they don’t, you will need to investigate how the JVM is using memory before making any other adjustments.
Profiling memory use of a Java application
It is important to understand how an application will use memory in both a development and production environment. The majority of memory issues can be reproduced in any environment without significant effort. It is often easier to troubleshoot memory issues on your local machine, because you’ll have access to more tools and won’t have to be as concerned with side effects that monitoring tools may cause.
There are a number of tools available for gaining insight into Java application memory use. Some are packaged with the Java runtime itself and should already be on your development machine. Some are available from third parties. This is not meant to be an exhaustive list, but rather a starting point to your exploration of these tools.
Tools that come with the Java runtime include jmap for doing heap dumps and gathering memory statistics, jstack for inspecting the threads running at any given time, jstat for general JVM statistic gathering, and jhat for analyzing heap dumps. Read more about these tools in the Oracle Docs or at IBM developerWorks.
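For example, a quick local session against a running JVM might look like this (the PID 1234 and the sampling interval are illustrative):
$ jps                      # list local JVM processes and their PIDs
$ jstat -gcutil 1234 1000  # print GC utilization for PID 1234 every second
$ jmap -histo 1234         # print a histogram of objects on the heap
$ jstack 1234              # print a thread dump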
Heroku Application Metrics provides plots for heap and non-heap memory, along with GC activity. Details on how to enable this functionality are available in the Language Runtime Metrics docs.
VisualVM combines all of the tools above into a GUI-based package that is more friendly for some users.
YourKit is a good commercially available tool.
Heroku memory limits
The amount of physical memory available to your application depends on your dyno type. Your application is allowed to consume more memory than this, but the dyno will begin to page it to disk. This can seriously hurt performance and is best avoided. You’ll see R14 errors in your application logs when this paging starts to happen.
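Such an entry looks roughly like the following (the timestamp and dyno name are illustrative):
2024-01-18T04:27:59+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)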
The default support for most JVM-based languages sets -Xss512k and sets -Xmx dynamically based on dyno type. These defaults enable most applications to avoid R14 errors. See the Language Support Docs for your chosen language and framework for a full set of defaults.
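If you need different values, you can override these defaults with a config variable, for example (the values here are illustrative and assume your process picks up JAVA_OPTS, as in the examples later in this article):
$ heroku config:set JAVA_OPTS="-Xss512k -Xmx300m"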
Profiling a Java application on Heroku
The use of the profiling tools mentioned above is different on Heroku’s cloud because of the platform’s process isolation model. However, Heroku’s JVM language support provides tools that simplify how you connect these tools to Heroku.
The tools described in this section require the heroku-cli-java plugin for the Heroku CLI. Install it like so:
$ heroku plugins:install heroku-cli-java
Connecting profiling tools to a dyno
You can use Heroku Exec to attach many Java profiling and debugging tools to a running JVM process in a web dyno. Once the feature is enabled, you can capture thread dumps and heap dumps or attach common GUI-based tools like JConsole or VisualVM. For example, you can run:
$ heroku java:visualvm
This starts a VisualVM session connected to the web.1 dyno. Optionally, you can use the --dyno flag to specify which dyno you want to connect to.
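For example, to connect to a dyno other than web.1 (the dyno name is illustrative):
$ heroku java:visualvm --dyno web.2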
Generating Thread Dumps
With Heroku Exec enabled, you can generate a thread dump for an application process running in a dyno with the command:
$ heroku java:jstack
This prints a thread dump from the web.1 dyno to your console. Optionally, you can provide the --dyno flag with the name of the dyno you want to receive a dump from.
You cannot use the -F option with jstack on Heroku.
Generating Heap Dumps
With Heroku Exec enabled, you can generate a heap dump for an application process running in a dyno with the command:
$ heroku java:jmap
This prints a histogram of the heap from the web.1 dyno to your console. Optionally, you can provide the --dyno flag to the heroku java:jmap command with the name of the dyno you want to receive a dump from. If you want a binary heap dump in HPROF format, you can run:
$ heroku java:jmap --hprof
Binary files can then be analyzed with tools such as VisualVM, jhat, and Eclipse MAT.
If you need to use more advanced jmap options, you can run heroku ps:exec to start a shell session into the dyno and run jmap there. If you generate a binary heap dump, you can copy it out of the dyno by running heroku ps:copy.
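A sketch of that workflow, assuming the Java process inside the dyno has PID 4 (the dump path and dyno name are also illustrative); the jmap command runs inside the shell that ps:exec opens:
$ heroku ps:exec --dyno web.1
$ jmap -dump:format=b,file=/tmp/heap.hprof 4
$ exit
$ heroku ps:copy /tmp/heap.hprof --dyno web.1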
You cannot use the -F option with jmap on Heroku.
For JRuby, you must explicitly add jvm-common to your buildpacks by running heroku buildpacks:add -i 1 heroku/jvm.
Configuring NativeMemoryTracking
If your app is experiencing high levels of native memory usage (i.e., the difference between Total RSS and JVM Heap), then you might need to configure your application to print Native Memory Tracking information when it shuts down. To do so, set the following configuration variable:
$ heroku config:set JAVA_OPTS="-XX:NativeMemoryTracking=detail -XX:+UnlockDiagnosticVMOptions -XX:+PrintNMTStatistics"
With this configuration, you can also debug memory usage in an isolated environment by running the following command to start a one-off dyno:
$ heroku run bash
Then run your app process in the background by adding a & to the end of your Procfile command. For example:
$ java -jar myapp.jar &
If you’d prefer to attach to a live process, rather than an isolated process, you can use Heroku Exec to inspect a running web dyno by running heroku ps:exec.
In either case, you can capture the process ID (PID) for the Java process by running this command:
$ jps
4 Main
105 Jps
In this example the PID is 4. Now you can use tools like jstack, jmap, and jcmd against this process. For example:
$ jcmd 4 VM.native_memory summary
4:
Native Memory Tracking:
Total: reserved=1811283KB, committed=543735KB
- Java Heap (reserved=393216KB, committed=390656KB)
(mmap: reserved=393216KB, committed=390656KB)
- Class (reserved=1095741KB, committed=54165KB)
(classes #8590)
(malloc=10301KB #14097)
(mmap: reserved=1085440KB, committed=43864KB)
- Thread (reserved=22290KB, committed=22290KB)
(thread #30)
(stack: reserved=22132KB, committed=22132KB)
(malloc=92KB #155)
(arena=66KB #58)
...
For more information on using jcmd to debug native memory, see the Oracle documentation on Native Memory Tracking.
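Because jcmd can also diff native memory use over time, it can help isolate a slow native leak. A minimal sketch, again assuming the Java process has PID 4:
$ jcmd 4 VM.native_memory baseline       # record a baseline
$ jcmd 4 VM.native_memory summary.diff   # later, show the change since the baseline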
Generating Thread Dumps from your app
It is also possible to generate a thread dump from application code. This is useful when trying to generate a dump as the process exits. For example, you might add some code like this to a Java application:
Runtime.getRuntime().addShutdownHook(new Thread() {
  @Override
  public void run() {
    // Collect up to 100 stack frames for every live thread via the ThreadMXBean
    final java.lang.management.ThreadMXBean threadMXBean = java.lang.management.ManagementFactory.getThreadMXBean();
    final java.lang.management.ThreadInfo[] threadInfos = threadMXBean.getThreadInfo(threadMXBean.getAllThreadIds(), 100);
    for (java.lang.management.ThreadInfo threadInfo : threadInfos) {
      if (threadInfo == null) continue; // a thread may have exited since getAllThreadIds() was called
      System.out.println(threadInfo.getThreadName());
      final Thread.State state = threadInfo.getThreadState();
      System.out.println("   java.lang.Thread.State: " + state);
      final StackTraceElement[] stackTraceElements = threadInfo.getStackTrace();
      for (final StackTraceElement stackTraceElement : stackTraceElements) {
        System.out.println("        at " + stackTraceElement);
      }
      System.out.println("\n");
    }
  }
});
Or in a Scala Play application you can add an app/Global.scala file with these contents:
object Global extends WithFilters() {
  override def onStop(app: Application) {
    val threadMXBean = java.lang.management.ManagementFactory.getThreadMXBean()
    val threadInfos = threadMXBean.getThreadInfo(threadMXBean.getAllThreadIds, 100)
    threadInfos.foreach { threadInfo =>
      if (threadInfo != null) {
        println(s"""
          '${threadInfo.getThreadName}': ${threadInfo.getThreadState}
            at ${threadInfo.getStackTrace.mkString("\n at ")}
        """)
      }
    }
  }
}
Both of these examples will print thread information to stdout when the process shuts down. In this way, if a process becomes deadlocked, you can restart it with a command like:
$ heroku ps:restart web.1
And the stack information will appear in the logs.
Verbose GC flags
If the above information is not detailed enough, there are also some JVM options that you can use to get verbose output at GC time in your logs. On Java 8 and earlier, add the following flags to your Java opts: -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps (on Java 9 and newer, these flags were replaced by unified GC logging, for example -Xlog:gc*).
$ heroku config:set JAVA_OPTS='-Xss512k -XX:+UseCompressedOops -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps'
2012-07-07T04:27:59+00:00 app[web.2]: {Heap before GC invocations=43 (full 0):
2012-07-07T04:27:59+00:00 app[web.2]: PSYoungGen total 192768K, used 190896K [0x00000000f4000000, 0x0000000100000000, 0x0000000100000000)
2012-07-07T04:27:59+00:00 app[web.2]: eden space 188800K, 100% used [0x00000000f4000000,0x00000000ff860000,0x00000000ff860000)
2012-07-07T04:27:59+00:00 app[web.2]: from space 3968K, 52% used [0x00000000ffc20000,0x00000000ffe2c1e0,0x0000000100000000)
2012-07-07T04:27:59+00:00 app[web.2]: to space 3840K, 0% used [0x00000000ff860000,0x00000000ff860000,0x00000000ffc20000)
2012-07-07T04:27:59+00:00 app[web.2]: ParOldGen total 196608K, used 13900K [0x00000000e8000000, 0x00000000f4000000, 0x00000000f4000000)
2012-07-07T04:27:59+00:00 app[web.2]: object space 196608K, 7% used [0x00000000e8000000,0x00000000e8d93070,0x00000000f4000000)
2012-07-07T04:27:59+00:00 app[web.2]: PSPermGen total 50816K, used 50735K [0x00000000dda00000, 0x00000000e0ba0000, 0x00000000e8000000)
2012-07-07T04:27:59+00:00 app[web.2]: object space 50816K, 99% used [0x00000000dda00000,0x00000000e0b8bee0,0x00000000e0ba0000)
2012-07-07T04:27:59+00:00 app[web.2]: 2012-07-07T04:27:59.361+0000: [GC
2012-07-07T04:27:59+00:00 app[web.2]: Desired survivor size 3866624 bytes, new threshold 1 (max 15)
2012-07-07T04:27:59+00:00 app[web.2]: [PSYoungGen: 190896K->2336K(192640K)] 204796K->16417K(389248K), 0.0058230 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2012-07-07T04:27:59+00:00 app[web.2]: Heap after GC invocations=43 (full 0):
2012-07-07T04:27:59+00:00 app[web.2]: PSYoungGen total 192640K, used 2336K [0x00000000f4000000, 0x0000000100000000, 0x0000000100000000)
2012-07-07T04:27:59+00:00 app[web.2]: eden space 188800K, 0% used [0x00000000f4000000,0x00000000f4000000,0x00000000ff860000)
2012-07-07T04:27:59+00:00 app[web.2]: from space 3840K, 60% used [0x00000000ff860000,0x00000000ffaa82d0,0x00000000ffc20000)
2012-07-07T04:27:59+00:00 app[web.2]: to space 3776K, 0% used [0x00000000ffc50000,0x00000000ffc50000,0x0000000100000000)
2012-07-07T04:27:59+00:00 app[web.2]: ParOldGen total 196608K, used 14080K [0x00000000e8000000, 0x00000000f4000000, 0x00000000f4000000)
2012-07-07T04:27:59+00:00 app[web.2]: object space 196608K, 7% used [0x00000000e8000000,0x00000000e8dc0330,0x00000000f4000000)
2012-07-07T04:27:59+00:00 app[web.2]: PSPermGen total 50816K, used 50735K [0x00000000dda00000, 0x00000000e0ba0000, 0x00000000e8000000)
2012-07-07T04:27:59+00:00 app[web.2]: object space 50816K, 99% used [0x00000000dda00000,0x00000000e0b8bee0,0x00000000e0ba0000)
2012-07-07T04:27:59+00:00 app[web.2]: }
Heroku Labs: log-runtime-metrics
There is a Heroku Labs feature called log-runtime-metrics that prints diagnostic information, such as total memory use, to your application logs.
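To enable it for an app and restart so the change takes effect (the app name is illustrative), run:
$ heroku labs:enable log-runtime-metrics --app example-app
$ heroku restart --app example-app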
New Relic
For some JVM languages and Java frameworks you can use the New Relic Java agent.
Memory tips for running on Heroku
- Be mindful of thread use and stack size. The default option -Xss512k means that each thread will use 512 KB of memory. The JVM default without this option is 1 MB.
- Be mindful of heavyweight monitoring agents. Some Java agents can use a significant amount of memory on their own and make memory problems worse while you try to troubleshoot your issue. If you’re having memory issues, removing any agents is a good first step. The memory logging agent mentioned above has a very small memory footprint, so it won’t cause these issues.
If you’re still having memory issues, you can always contact Heroku Support.