
Create and Understand Java Heapdumps (Act 4)

3.8.2011 | 7 minutes of reading time

During the second act of this series we explained Java Memory Leaks. We elaborated how memory leaks can occur in Java and how they differ from traditional memory leaks. Today's post explains how to take a closer look at all the objects residing in the Java heap, including the references between them, which are required to identify memory leaks. The most accessible way to approach memory issues is the so-called Java heapdump, which can be created automatically in the event of an OutOfMemoryError by passing the JVM command line parameter -XX:+HeapDumpOnOutOfMemoryError. In that case the JVM writes the dump right before it exits. Analyzing this kind of dump is also called “post mortem” analysis, as the JVM is already dead. But of course one can also create dumps manually at any point during runtime. More on that later.
While we take on heapdumps in this post, we will see other alternatives for hunting lost memory during the next act.
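For readers who want to see the post-mortem dump in action, a deliberately leaking toy program is enough. The following is a minimal sketch (the class name and heap size are made up for illustration; -XX:HeapDumpPath is an optional companion flag that controls where the dump file is written):

    // Deliberately exhausts the heap so that -XX:+HeapDumpOnOutOfMemoryError kicks in.
    // Example invocation:
    //   java -Xmx64m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/oom.hprof OomDemo
    import java.util.ArrayList;
    import java.util.List;

    public class OomDemo {
        public static void main(String[] args) {
            List<byte[]> hoard = new ArrayList<>();
            while (true) {
                hoard.add(new byte[1024 * 1024]); // 1 MB blocks stay referenced until the heap is full
            }
        }
    }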

Java Heapdumps
A heapdump is a textual or binary representation of the Java heap, usually written to a file. Various tools can reconstruct the objects and their references from the information stored in the heap dump file. The only information missing is the heap area an object was residing in. That is quite a pity, as we just learned in act 3 how important and meaningful the heap areas are. However, this information is not required to locate Java Memory Leaks.

Wanted: Dead or alive
Heapdumps can contain two kinds of objects:

  • Live objects only (objects that can still be reached via a GC root reference)
  • All objects (including objects that are no longer referenced but not yet garbage collected)

Because live objects can be determined easily via various VM-internal mechanisms, a heapdump containing only live objects can be created acceptably fast. Creating a full heap dump including dead objects takes much longer and consumes more disk space.
However, the live objects are sufficient to search for Memory Leaks. Unreferenced objects are only of interest with respect to Object Cycling or GC Thrashing, where one wants to find unnecessary or excessive object allocation.
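For contrast, here is a purely illustrative example of the allocation behaviour meant by Object Cycling: the temporary objects below are discarded immediately, so they would not appear in a live-only dump at all, but a full dump taken at the right moment (or an allocation profiler) would reveal the churn.

    // Purely illustrative: each loop iteration allocates new String and StringBuilder
    // instances that become garbage right away. Nothing leaks, but the constant
    // allocation and collection can cause GC Thrashing.
    public class ObjectChurn {
        public static String join(int[] values) {
            String result = "";
            for (int value : values) {
                result = result + value + ","; // creates fresh temporary objects every time
            }
            return result;
        }
    }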

Creating Heapdumps
As mentioned initially, heap dumps can be created in two ways. Either way, creating a dump halts the JVM and every application running in it, because a consistent state of the memory can only be dumped while nothing is modifying it. Due to this halt, it is not recommended to use this functionality on a production system very often, if at all.

  1. When using the JVM parameter -XX:+HeapDumpOnOutOfMemoryError, the JVM creates a heap dump whenever it encounters an OutOfMemoryError, before it quits. There is no reason not to use this parameter: the JVM is dying anyway, so any negative impact on performance is irrelevant. The size of the dumps can, however, be an issue on certain servers. As the dump is very often the only source of information after a crash, it should be created and preserved by the server administrator to allow a “post-mortem” analysis. Unfortunately, such dumps are very often simply deleted.
  2. Using the JVM tool jmap, heapdumps can be created from the command line:
    jmap -dump:live,format=b,file=<filename> <PID>
    

    The option live is important, as it restricts the dump to live objects only. If you also want dead objects, simply omit it. Format “b” (for “binary”) is recommended. While there is an ASCII version as well, analyzing it manually is almost impossible due to the sheer amount of data.
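Besides these two ways, a running application can also dump itself programmatically via the HotSpot diagnostic MXBean, which exposes the same live/all distinction as jmap. The following is a minimal sketch assuming a HotSpot JVM; the file name is made up, and the second parameter of dumpHeap() decides whether only live objects are written:

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class DumpSelf {
        public static void main(String[] args) throws Exception {
            // Obtain the HotSpot-specific diagnostic MXBean of the running JVM.
            HotSpotDiagnosticMXBean diagnostic =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            // Write a binary .hprof file; "true" restricts the dump to live objects,
            // analogous to jmap's "live" option.
            diagnostic.dumpHeap("manual-dump.hprof", true);
        }
    }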

Heap – inside out
There are a few tools for analyzing heapdumps offline, but a detailed comparison is not worth the effort here, as there is a clear winner: the Eclipse Memory Analyzer Toolkit (Eclipse MAT). Eclipse MAT can read, parse and analyze dumps of various formats and sizes. It has an easy-to-use, if sometimes a bit overwhelming, user interface and is completely free to use.
While older tools had to load the whole dump into memory, often requiring free memory of 1.5 times the size of the dump, MAT initially indexes the dump, which allows fast analysis with very few system resources. This enables users to work even with very large heapdumps.

After opening a dump with Eclipse MAT, instances of the following classes are usually listed in large numbers:

  • char[]
  • java.lang.Object[]
  • java.lang.Class
  • java.lang.String
  • byte[]
  • java.util.HashMap
  • java.util.HashMap$Entry
  • short[]
  • int[]
  • java.util.ArrayList
  • java.lang.Integer

These objects show up in every dump in large numbers. But are they really the issue? How can one find the actual root cause of a memory leak?

Extensive Leaks
When looking post mortem at a heapdump induced by an OutOfMemoryError, one can usually assume that the Memory Leak fills the majority of the memory. But how big is the leak? While the dump contains all object instances, Eclipse MAT only shows class names, the number of instances and the sum of their shallow sizes in its Class Histogram. The Shallow Size is basically the sum of the sizes of all contained primitive data types plus the size of the references to other objects. More important for finding leaks, however, is the so-called Retained Heap: the total size of all objects kept alive by a given dominator. To get this number, MAT has to calculate the Retained Size by following all object references. Because sorting by Retained Heap is very helpful for finding leaks, MAT displays the largest retained heaps in a pie chart. A prominent example of an object with a lot of Retained Heap in web applications is usually the

org.apache.catalina.session.StandardManager

Its purpose is clear: keep user data for the duration of their session. But very often frameworks store a lot of data here as well. We already blogged about Ajax4JSF consuming a lot of memory in the session. Custom-written, and unfortunately often incorrect, caches also appear very often in the pie charts. This makes it easy to locate memory leaks and their cause quickly.
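To make the difference between Shallow Size and Retained Heap concrete, consider this small, purely illustrative class (not taken from any real framework): its shallow size is only a couple of references plus the object header, but its retained heap covers the large array and the list that are reachable through it alone.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative only: the holder's shallow size is tiny (two references),
    // yet it dominates roughly 10 MB of retained heap via the buffer,
    // plus whatever accumulates in the list over time.
    public class SessionLikeHolder {
        private final byte[] largeBuffer = new byte[10 * 1024 * 1024];
        private final List<String> cachedValues = new ArrayList<>();

        public void cache(String value) {
            cachedValues.add(value); // grows the retained heap, not the shallow size
        }
    }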


To determine who is creating these objects, or to find out what some structures are for, the actual instances with their incoming and outgoing references are required. To get them, choose List Objects with incoming References from the context menu. A tree structure is then displayed, showing all instances with all of their incoming references. These references keep the objects alive and prevent them from being garbage collected. Outgoing References are interesting as well, because they show the actual contents of the instances, which helps to find out their purpose. They are especially useful when the dominator itself, like the SessionManager, should not be removed, but the excessive objects referenced by it should.
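What such an incoming-reference chain typically looks like can be illustrated with a hypothetical, deliberately broken cache of the kind mentioned above: every cached value is kept alive by the static map, which in turn is referenced from the class itself, a GC root, so nothing in it is ever collected.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical leaking cache: entries are added but never removed.
    // In MAT, each cached value would show an incoming reference chain roughly like
    //   value <- HashMap$Entry <- HashMap <- LeakyCache.CACHE (static field, GC root)
    public class LeakyCache {
        private static final Map<String, byte[]> CACHE = new HashMap<>();

        public static void put(String key, byte[] data) {
            CACHE.put(key, data); // no eviction: the retained heap keeps growing
        }
    }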

Sneaking Memory Leaks
Finding Memory Leaks long before an OutOfMemoryError occurs is possible by creating multiple snapshots with the previously described method. A single snapshot alone seldom tells whether the number of objects is acceptable or not, but slowly growing object counts hint at a leak. Eclipse MAT allows comparing two heapdump snapshots and finding growing structures and instance counts. This requires two well-timed dumps: under heavy load they should be minutes apart, under light load hours or days. A larger time lag helps to separate normal jitter from real issues. The following example nicely depicts the obviously growing number of XML-related classes:


Something better
But comparing snapshots is cumbersome and in some cases even misleading. And we actually never want an OutOfMemoryError to happen in production. Even worse, heapdumps do not tell us about the lines of code actually creating objects and not removing them. There has to be a better tool than heapdumps!
And there are: profilers and some APM tools allow recording statistics and access paths during runtime. The information provided by those tools is often more significant and easier to act on than heapdumps. For those reasons, we are going to have a closer look at these tools in the next installment of our OutOfMemoryError tragedy.
