The Java heap, where every Java object is allocated, is the area of memory you're most intimately connected with when writing Java applications. The JVM was designed to insulate us from the host machine's peculiarities, so it's natural to think about the heap when you think about memory. You've no doubt encountered a Java heap OutOfMemoryError, caused by an object leak or by a heap that isn't big enough to store all your data, and have probably learned a few tricks for debugging those scenarios.

Java applications run in the virtualized environment of the Java runtime, but the runtime itself is a native program written in a language (such as C) that consumes native resources, including native memory. Native memory is the memory available to the runtime process, as distinguished from the Java heap memory that a Java application uses. Every virtualized resource — including the Java heap and Java threads — must be stored in native memory, along with the data used by the virtual machine as it runs. This means that the limitations on native memory imposed by the host machine's hardware and operating system (OS) affect what you can do with your Java application.

This article is one of two covering the same topic on different platforms: this one covers Windows and Linux, and its companion covers AIX and the IBM Developer Kit for Java. In both, you'll learn what native memory is, how the Java runtime uses it, what running out of it looks like, and how to debug a native OutOfMemoryError.

I'll start by explaining the limitations on native memory imposed by the OS and the underlying hardware. If you're familiar with managing dynamic memory in a language such as C, you may want to skip to the next section.

Many of the restrictions that a native process experiences are imposed by the hardware, not the OS. Every computer has a processor and some random-access memory (RAM), also known as physical memory. A processor interprets a stream of data as instructions to execute; it has one or more processing units that perform integer and floating-point arithmetic as well as more advanced computations. A processor has a number of registers — very fast memory elements that are used as working storage for the calculations that are performed; the register size determines the largest number that a single calculation can use.

The processor is connected to physical memory by the memory bus. The size of the physical address (the address used by the processor to index physical RAM) limits the amount of memory that can be addressed. For example, a 16-bit physical address can address from 0x0000 to 0xFFFF, which gives 2^16 = 65536 unique memory locations. If each address references a byte of storage, a 16-bit physical address allows a processor to address 64KB of memory.

Processors are described as being a certain number of bits. This normally refers to the size of the registers, although there are exceptions — such as 390 31-bit — where it refers to the physical address size. For desktop and server platforms, this number is 31, 32, or 64; for embedded devices and microprocessors, it can be as low as 4. The physical address size can be the same as the register width but could be larger or smaller. Most 64-bit processors can run 32-bit programs when running a suitable OS. Table 1 lists some popular Linux and Windows architectures with their register and physical address sizes:

Table 1. Register and physical address size of some popular processor architectures
Operating systems and virtual memory

If you were writing applications to run directly on the processor without an OS, you could use all the memory that the processor can address (assuming enough physical RAM is connected). But to enjoy features such as multitasking and hardware abstraction, nearly everybody uses an OS of some kind to run their programs.

In multitasking OSs such as Windows and Linux, more than one program uses system resources, including memory. Each program needs to be allocated regions of physical memory to work in. It's possible to design an OS such that every program works directly with physical memory and is trusted to use only the memory it has been given. Some embedded OSs work like this, but it's not practical in an environment consisting of many programs that are not tested together, because any program could corrupt the memory of other programs or the OS itself.

Virtual memory allows multiple processes to share physical memory without being able to corrupt one another's data. In an OS with virtual memory (such as Windows, Linux, and many others), each program has its own virtual address space — a logical region of addresses whose size is dictated by the address size on that system (so 31, 32, or 64 bits for desktop and server platforms). Regions in a process's virtual address space can be mapped to physical memory, to a file, or to any other addressable storage. The OS can move data held in physical memory to and from a swap area (the page file on Windows or a swap partition on Linux) when it isn't being used, to make the best use of physical memory. When a program tries to access memory using a virtual address, the OS in combination with on-chip hardware maps that virtual address to the physical location. That location could be physical RAM, a file, or the page file/swap partition. If a region of memory has been moved to swap space, it is loaded back into physical memory before being used. Figure 1 shows how virtual memory works by mapping regions of process address space to shared resources:

Figure 1. Virtual memory mapping process address spaces to physical resources

Each instance of a program runs as a process. A process on Linux and Windows is a collection of information about OS-controlled resources (such as file and socket information), typically one virtual address space (more than one on some architectures), and at least one thread of execution.

The virtual address space size can be smaller than the processor's physical address size. Intel x86 32-bit originally had a 32-bit physical address that allowed the processor to address 4GB of storage. Later, a feature called Physical Address Extension (PAE) was added that expanded the physical address size to 36 bits, allowing up to 64GB of RAM to be installed and addressed. PAE allowed OSs to map 32-bit 4GB virtual address spaces onto a large physical address range, but it did not allow each process to have a 64GB virtual address space. This means that if you put more than 4GB of memory into a 32-bit Intel server, you can't map all of it directly into a single process.

The Address Windowing Extensions feature allows a Windows process to map a portion of its 32-bit address space as a sliding window into a larger area of memory. Linux uses similar technologies based on mapping regions into the virtual address space. This means that although you can't directly reference more than 4GB of memory, you can work with larger regions of memory.
Although each process has its own address space, a program typically can't use all of it. The address space is divided into user space and kernel space. The kernel is the main OS program and contains the logic for interfacing to the computer hardware, scheduling programs, and providing services such as networking and virtual memory. As part of the computer boot sequence, the OS kernel runs and initialises the hardware. Once the kernel has configured the hardware and its own internal state, the first user-space process starts. If a user program needs a service from the OS, it can perform an operation — called a system call — that jumps into the kernel program, which then performs the request. System calls are generally required for operations such as reading and writing files, networking, and starting new processes.

The kernel requires access to its own memory and the memory of the calling process when executing a system call. Because the processor that is executing the current thread is configured to map virtual addresses using the address-space mapping for the current process, most OSs map a portion of each process address space to a common kernel memory region. The portion of the address space mapped for use by the kernel is called kernel space; the remainder, which can be used by the user application, is called user space.

The balance between kernel and user space varies by OS and even among instances of the same OS running on different hardware architectures. The balance is often configurable and can be adjusted to give more space to user applications or the kernel. Squeezing the kernel area can cause problems such as restricting the number of users that can be logged on simultaneously or the number of processes that can run; smaller user space means that the application programmer has less room to work in.

By default, 32-bit Windows has a 2GB user space and a 2GB kernel space. The balance can be shifted to a 3GB user space and a 1GB kernel space on some versions of Windows by adding the /3GB switch to the boot configuration. Figure 2 shows the address-space layout for 32-bit Windows:

Figure 2. Address-space layout for 32-bit Windows

Figure 3 shows the address-space arrangements for 32-bit Linux:

Figure 3. Address-space layout for 32-bit Linux

A separate kernel address space is also used on Linux 390 31-bit, where the smaller 2GB address space makes dividing a single address space undesirable; however, the 390 architecture can work with multiple address spaces simultaneously without compromising performance.

The process address space must contain everything a program requires, including the program itself and the shared libraries (DLLs on Windows, .so files on Linux) that it uses. Not only can shared libraries take up space that a program can't use to store data in, they can also fragment the address space and reduce the amount of memory that can be allocated as a continuous chunk. This is noticeable in programs running on Windows x86 with a 3GB user space. DLLs are built with a preferred loading address: when a DLL is loaded, it's mapped into the address space at a particular location unless that location is already occupied, in which case it's rebased and loaded elsewhere. With the 2GB user space available when Windows NT was originally designed, it made sense for the system libraries to be built to load near the 2GB boundary, thereby leaving most of the user region free for the application to use. When the user region is extended to 3GB, the system shared libraries still load near 2GB, now in the middle of the user space.
Although there's a total user space of 3GB, it's impossible to allocate a 3GB block of memory, because the shared libraries get in the way.

A native-memory leak or excessive native memory use will cause different problems depending on whether you exhaust the address space or run out of physical memory. Exhausting the address space normally happens only with 32-bit processes, because the maximum of 4GB is easy to allocate. A 64-bit process has a user space of hundreds or thousands of gigabytes, which is hard to fill up even if you try. If you do exhaust the address space of a Java process, the Java runtime can start to display the strange symptoms I'll describe later in the article. When running on a system with more process address space than physical memory, a memory leak or excessive use of native memory forces the OS to swap out the backing storage for some of the native process's virtual address space. Accessing a memory address that has been swapped is much slower than reading a resident (in physical memory) address, because the OS must pull the data from the hard drive.

It's possible to allocate enough memory to exhaust all physical memory and all swap memory (paging space); on Linux, this triggers the kernel out-of-memory (OOM) killer, which forcibly kills the most memory-hungry process. On Windows, allocations start failing in the same way as they would if the address space were full.

If you simultaneously try to use more virtual memory than there is physical memory, it will be obvious there's a problem long before the process is killed for being a memory hog. The system will thrash — that is, spend most of its time copying memory back and forth from swap space. When this happens, the performance of the computer and the individual applications will become so poor that the user can't fail to notice there's a problem. When a JVM's Java heap is swapped out, the garbage collector's performance becomes extremely poor, to the extent that the application can appear to hang. If multiple Java runtimes are in use on a single machine at the same time, the physical memory must be sufficient to fit all of the Java heaps.

How the Java runtime uses native memory

The Java runtime is an OS process that is subject to the hardware and OS constraints I outlined in the preceding section. Runtime environments provide capabilities that are driven by some unknown user code; that makes it impossible to predict which resources the runtime environment will require in every situation. Every action a Java application takes inside the managed Java environment can potentially affect the resource requirements of the runtime that provides that environment. This section describes how and why Java applications consume native memory.

The Java heap and garbage collection

The Java heap is the area of memory where objects are allocated. Most Java SE implementations have one logical heap, although some specialist Java runtimes, such as those implementing the Real Time Specification for Java (RTSJ), have multiple heaps. A single physical heap can be split up into logical sections depending on the garbage collection (GC) algorithm used to manage the heap memory. These sections are typically implemented as contiguous slabs of native memory that are under the control of the Java memory manager (which includes the garbage collector). The heap's size is controlled from the Java command line using the -Xmx and -Xms options (mx is the maximum heap size; ms is the initial size). Because most GC algorithms rely on the heap being a contiguous slab of memory, the full maximum heap size must typically be reserved in native memory when the JVM starts. Reserving native memory is not the same as allocating it.
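You can observe this distinction from inside a running application by comparing the heap ceiling with the memory currently committed to the heap. The following is a minimal sketch using the standard java.lang.Runtime API; the class name and the -Xms/-Xmx values in the comment are illustrative assumptions, not part of this article's sample pack.

```java
// Minimal sketch: reserved vs. committed heap, seen from inside the JVM.
// Run with explicit settings, for example: java -Xms64m -Xmx512m HeapSettings
public class HeapSettings {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();     // ceiling the heap can grow to (roughly -Xmx)
        long total = rt.totalMemory(); // memory currently committed for the heap
        long free = rt.freeMemory();   // committed heap not occupied by objects
        System.out.printf("max=%dMB committed=%dMB used=%dMB%n",
                max >> 20, total >> 20, (total - free) >> 20);
    }
}
```

On most runtimes the committed figure starts near the -Xms value and grows toward the maximum, even though the whole maximum range was reserved in the address space at start-up.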
When native memory is reserved, it is not backed with physical memory or other storage. Although reserving chunks of the address space will not exhaust physical resources, it does prevent that memory from being used for other purposes. A leak caused by reserving memory that is never used is just as serious as leaking allocated memory. Some garbage collectors minimise the use of physical memory by decommitting (releasing the backing storage for) parts of the heap as the used area of the heap shrinks.

More native memory is required to maintain the state of the memory-management system that maintains the Java heap. Data structures must be allocated to track free storage and record progress when collecting garbage. The exact size and nature of these data structures varies with implementation, but many are proportional to the size of the heap.

The Just-in-time (JIT) compiler

The JIT compiler compiles Java bytecode to optimised native executable code at run time. This vastly improves the run-time speed of Java runtimes and allows Java applications to run at speeds comparable to native code. Compiling bytecode uses native memory (in the same way that a static compiler such as gcc requires memory to run), but the compiled output must also be stored in native memory.

Java applications are composed of classes that define object structure and method logic. They also use classes from the Java runtime class libraries (such as java.lang.String), and they may use third-party libraries; these classes need to be stored in memory for as long as they are being used. How classes are stored varies by implementation. The Sun JDK uses the permanent generation (PermGen) heap area. The IBM implementation from Java 5 onward allocates slabs of native memory for each classloader and stores the class data in there. Modern Java runtimes have technologies such as class sharing that may require mapping areas of shared memory into the address space. To understand how these allocation mechanisms affect your Java runtime's native footprint, you need to read the technical documentation for that implementation. However, some universal truths affect all implementations.

At the most basic level, using more classes uses more memory. (This may mean your native memory usage just increases, or that you must explicitly resize an area — such as the PermGen or the shared-class cache — to allow all the classes to fit in.) Remember that it's not only your application that needs to fit; frameworks, application servers, third-party libraries, and Java runtimes contain classes that are loaded on demand and occupy space.

Java runtimes can unload classes to reclaim space, but only under strict conditions. It's impossible to unload a single class; classloaders are unloaded instead, taking all the classes they loaded with them. A classloader can be unloaded only if:

* The Java heap contains no references to the java.lang.ClassLoader object that represents that classloader.
* The Java heap contains no references to any of the java.lang.Class objects that represent classes loaded by that classloader.
* No objects of any class loaded by that classloader are still referenced on the Java heap.
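As an illustration of these conditions, the sketch below loads a class through a throwaway classloader and then drops every reference to it, making the loader eligible for collection. It is a hedged example: the jar path and class name are placeholders of my own, and whether (and when) unloading actually happens is at the garbage collector's discretion.

```java
import java.net.URL;
import java.net.URLClassLoader;

public class ClassLoaderUnloadDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical jar and class name -- substitute your own.
        URL[] path = { new URL("file:/tmp/plugin.jar") };
        URLClassLoader loader = new URLClassLoader(path);
        Class<?> clazz = Class.forName("com.example.Plugin", true, loader);
        Object instance = clazz.newInstance();
        System.out.println("Loaded " + clazz + " via " + loader);

        // Drop all three kinds of reference: the instance, the Class object,
        // and the ClassLoader itself. Only then can the loader (and every
        // class it defined) be unloaded by a future GC cycle.
        instance = null;
        clazz = null;
        loader = null;
        System.gc(); // a hint only; the runtime decides if and when to unload
    }
}
```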
It's worth noting that the three default classloaders that the Java runtime creates for all Java applications — bootstrap, extension, and application — can never meet these criteria; therefore, any system classes (such as java.lang.String) or any application classes loaded through the application classloader can never be released. Even when a classloader is eligible for collection, the runtime collects classloaders only as part of a GC cycle. Some implementations unload classloaders only on some GC cycles.

It's also possible for classes to be generated at run time, without you realising it. Many JEE applications use JavaServer Pages (JSP) technology to produce Web pages. Using JSP generates a class for each .jsp page executed, and those classes last the lifetime of the classloader that loaded them — typically the lifetime of the Web application.

Another common way to generate classes is by using Java reflection. The way reflection works varies among Java implementations, but Sun and IBM implementations both use the method I'll describe now. When using the java.lang.reflect API, the runtime must connect the methods of a reflecting object (such as java.lang.reflect.Method) to the class or method being reflected on. This can be done with a Java Native Interface (JNI) accessor, which requires very little setup but is slow to run, or by building a class dynamically at run time for each type you want to access, which takes longer to set up but runs faster. The Java runtime uses the JNI method the first few times a class is reflected on, but after being used a number of times, the accessor is inflated into a bytecode accessor, which involves building a class and loading it through a new classloader. Doing lots of reflection can cause many accessor classes and classloaders to be created. Holding references to the reflecting objects causes these classes to stay alive and continue occupying space. Because creating the bytecode accessors is quite slow, the Java runtime can cache these accessors for later use. Some applications and frameworks also cache reflection objects, thereby increasing their native footprint.

JNI allows native code (applications written in native compiled languages such as C and C++) to call Java methods and vice versa. The Java runtime itself relies heavily on JNI code to implement class-library functions such as file and network I/O. A JNI application can increase a Java runtime's native footprint in three ways:

* The native code for a JNI application is compiled into a shared library or executable that is loaded into the process address space. Large native applications can occupy a significant chunk of the address space simply by being loaded.
* The native code must share the address space with the Java runtime, so any memory that the native code allocates or maps is memory that is unavailable to the Java runtime.
* Certain JNI functions can use native memory as part of their normal operation. Functions that give native code access to Java strings and arrays may copy the data into native buffers, and those copies can be significant if large arrays or strings are being handled.
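The Java side of a JNI application is small, which can make its native cost easy to overlook. The sketch below shows the typical pattern; the library and method names are invented for illustration, and the memory consumed by the C implementation is invisible to the heap accounting shown earlier.

```java
public class NativeWorker {
    static {
        // Maps libworker.so (Linux) or worker.dll (Windows) into the
        // process address space -- the library's code, static data, and
        // anything it mallocs all come out of native memory.
        System.loadLibrary("worker"); // hypothetical library name
    }

    // Implemented in C; any buffers it allocates with malloc() are not
    // visible to the garbage collector or to Runtime.totalMemory().
    private native long processChunk(byte[] data);

    public static void main(String[] args) {
        byte[] chunk = new byte[1024]; // this array lives on the Java heap
        long result = new NativeWorker().processChunk(chunk); // native cost hidden
        System.out.println("native result: " + result);
    }
}
```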
The new I/O (NIO) classes added in Java 1.4 introduced a new way of performing I/O based on channels and buffers. As well as I/O buffers backed by memory on the Java heap, NIO added support for direct ByteBuffers, which are backed by native memory rather than by the Java heap, so they can be passed straight to native I/O functions without copying.

It's easy to become confused about where direct ByteBuffer data is being stored. The application still uses an object on the Java heap, but that object holds only a reference to the buffer; the data itself is held in native memory. A non-direct ByteBuffer holds its data in a byte array on the Java heap. Figure 4 shows the difference:

Figure 4. Memory topology for direct and non-direct java.nio.ByteBuffers

Direct ByteBuffer objects clean up their native buffers automatically, but can only do so as part of Java heap GC, so they do not automatically respond to pressure on the native heap. The pathological case would be that the native heap becomes full and one or more direct ByteBuffers are eligible for GC (and could be freed to make some space on the native heap), but the Java heap is mostly empty so GC doesn't occur.

Every thread in an application requires memory to store its stack (the area of memory used to hold local variables and maintain state when calling functions). Every Java thread requires stack space to run. Depending on implementation, a Java thread can have separate native and Java stacks. In addition to stack space, each thread requires some native memory for thread-local storage and internal data structures.

The stack size varies by Java implementation and by architecture. Some implementations allow you to specify the stack size for Java threads. Values between 256KB and 756KB are typical. Although the amount of memory used per thread is quite small, for an application with several hundred threads the total memory use for thread stacks can be large. Running an application with many more threads than available processors to run them is usually inefficient and can result in poor performance as well as increased memory usage.

How can I tell if I'm running out of native memory?

A Java runtime copes quite differently with running out of Java heap compared to running out of native heap, although both conditions can present with similar symptoms. A Java application finds it extremely difficult to function when the Java heap is exhausted, because it's difficult for a Java application to do anything without allocating objects. The poor GC performance and OutOfMemoryErrors that signify a full Java heap appear very soon after the heap fills.

In contrast, once a Java runtime has started up and the application is in steady state, it can continue to function with complete native-heap exhaustion. It doesn't necessarily show any odd behaviour, because actions that require a native-memory allocation are much rarer than actions that require Java-heap allocations. Although actions that require native memory vary by JVM implementation, some popular examples are: starting a thread, loading a class, and performing certain kinds of network and file I/O.

Native out-of-memory behaviour is also less consistent than Java heap out-of-memory behaviour, because there's no single point of control for native heap allocations. Whereas all Java heap allocations are under the control of the Java memory-management system, any native code — whether it is inside the JVM, the Java class libraries, or application code — can perform a native-memory allocation and have it fail. The code that attempts the allocation can then handle it however its designer wants: it could throw an OutOfMemoryError through the JNI interface, print a message on the screen, fail silently and try again later, or do something else.

The lack of predictable behaviour means there's no one simple way to identify native-memory exhaustion. Instead, you need to use data from the OS and from the Java runtime to confirm the diagnosis.

Examples of running out of native memory

To help you see how native-memory exhaustion affects the Java implementation you're using, this article's sample code (see Download) contains some Java programs that trigger native-heap exhaustion in different ways. The examples use a native library written in C to consume all of the native address space and then try to perform some action that uses native memory.
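The sample pack's C library is not reproduced here, but the shape of the demos can be approximated in pure Java: exhaust native memory, then attempt an action that needs it. The sketch below uses direct ByteBuffers as a stand-in for the C library's allocations; the class name, buffer size, and messages are my own, and depending on the runtime a direct-memory limit (such as the Sun -XX:MaxDirectMemorySize setting) may cap the buffers before the address space itself fills.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class NativeStarvationSketch {
    public static void main(String[] args) {
        List<ByteBuffer> hoard = new ArrayList<ByteBuffer>();
        try {
            while (true) {
                // Each allocation takes ~16MB from native memory, not the Java heap.
                hoard.add(ByteBuffer.allocateDirect(16 * 1024 * 1024));
            }
        } catch (OutOfMemoryError e) {
            System.out.println("Native memory exhausted after "
                    + hoard.size() + " buffers: " + e);
        }
        // Now try an action that itself needs native memory.
        try {
            new Thread(new Runnable() {
                public void run() { }
            }).start();
        } catch (OutOfMemoryError e) {
            System.out.println("Thread.start() failed: " + e);
        }
    }
}
```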
The examples are supplied already built, although instructions on compiling them are provided in the README.html file in the sample package's top-level directory.
The output for each demo has been captured for a Sun and an IBM Java runtime running on 32-bit Windows.
Trying to start a thread when out of native memory

The first demo exhausts the native address space using the C library and then tries to start a new thread by calling java.lang.Thread.start(). On the IBM Java runtime, the attempt fails with a java.lang.OutOfMemoryError.
Calling java.lang.Thread.start() allocates the native resources for a new thread; when the address space is exhausted, that allocation fails and a java.lang.OutOfMemoryError is thrown. Trying to handle the first OutOfMemoryError and carry on is unreliable, because anything the error handler does that itself needs native memory can fail in the same way. The examples supplied with this article therefore trigger clusters of OutOfMemoryErrors rather than one clean failure. When the same test case is executed on the Sun Java runtime, similar console output is produced.
Although the stack trace and the error message are slightly different, the behaviour is essentially the same: the native allocation fails and a java.lang.OutOfMemoryError is thrown.

Trying to allocate a direct ByteBuffer when out of native memory

The second demo exhausts the native address space and then tries to allocate a direct java.nio.ByteBuffer, which requires native memory to back it. On the IBM Java runtime, the allocation fails with an OutOfMemoryError.
In this scenario, an OutOfMemoryError is thrown even though the Java heap has plenty of free space, because the java.nio.ByteBuffer.allocateDirect() call needs native memory that isn't available. When run on the Sun Java runtime, this test case produces similar console output.
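A cut-down version of this scenario can be expressed in a few lines. This sketch is an approximation of the demo's behaviour rather than the sample-pack code itself; the buffer size is an arbitrary choice.

```java
import java.nio.ByteBuffer;

public class DirectBufferSketch {
    public static void main(String[] args) {
        try {
            // Needs ~64MB of native (not Java heap) memory to succeed.
            ByteBuffer buffer = ByteBuffer.allocateDirect(64 * 1024 * 1024);
            System.out.println("Allocated direct buffer: " + buffer.capacity() + " bytes");
        } catch (OutOfMemoryError e) {
            // Thrown when the native allocation fails, even if the
            // Java heap itself is nearly empty.
            System.out.println("Direct allocation failed: " + e);
        }
    }
}
```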
Debugging approaches and techniques

The first thing to do when faced with a java.lang.OutOfMemoryError or a warning about memory exhaustion is to determine which kind of memory has been exhausted: Java heap or native memory.

Checking your vendor's documentation

The guidelines in this article are general debugging principles applied to understanding native out-of-memory scenarios. Your runtime vendor may supply its own debugging instructions that it expects you to follow when working with its support teams. If you are raising a problem ticket with your runtime vendor (including IBM), always check its debugging and diagnostic documentation to see what steps you are expected to take when submitting a problem report.

The method for checking the heap utilisation varies among Java implementations. On IBM implementations of Java 5 and 6, the javacore file produced when the OutOfMemoryError is thrown records how full the Java heap was at the time; look for the MEMINFO section.
This section shows how much Java heap was free when the javacore was produced. Note that the values are in hexadecimal format. If the OutOfMemoryError was thrown because the Java heap was full, the free-heap figure will be at or near zero.
It is also possible for an OutOfMemoryError to be thrown while there is plenty of free Java heap; in that case, native memory is the likely culprit.
When the Sun implementation runs out of Java heap memory, it uses the exception message to show that it is Java heap that is exhausted: java.lang.OutOfMemoryError: Java heap space.
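If you want to see that message for yourself, a deliberately leaky loop will produce it quickly. A minimal sketch, best run with a small heap (for example -Xmx32m, my choice rather than the article's):

```java
import java.util.ArrayList;
import java.util.List;

public class HeapExhaustion {
    public static void main(String[] args) {
        List<byte[]> leak = new ArrayList<byte[]>();
        while (true) {
            leak.add(new byte[1024 * 1024]); // 1MB on the Java heap, never released
        }
        // On the Sun runtime this ends with:
        //   java.lang.OutOfMemoryError: Java heap space
    }
}
```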
IBM and Sun implementations both have a verbose GC option that produces trace data showing how full the heap is at every GC cycle. This information can be plotted with a tool such as the IBM Monitoring and Diagnostic Tools for Java - Garbage Collection and Memory Visualizer (GCMV) to show if the Java heap is growing (see Resources).

If you have determined that your out-of-memory condition was not caused by Java-heap exhaustion, the next stage is to profile your native-memory usage.

The PerfMon tool supplied with Windows allows you to monitor and record many OS and process metrics, including native-memory use (see Resources). It allows counters to be tracked in real time or stored in a log file to be reviewed offline. Use the Private Bytes counter to show the total address-space usage. If this approaches the limit of the user space (between 2 and 3GB, as previously discussed), you should expect to see native out-of-memory conditions.

Linux has no PerfMon equivalent, but you have several alternatives. Command-line tools such as ps, top, and pmap can show an application's native-memory footprint.
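You can also sample memory figures from inside the JVM and correlate them with what PerfMon or ps reports. The sketch below uses the standard java.lang.management API; note that its "non-heap" figure covers only the areas the JVM chooses to report (such as class storage and JIT code), not every native allocation, so the OS-level number will usually be larger.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryPoller {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        while (true) {
            MemoryUsage heap = bean.getHeapMemoryUsage();
            MemoryUsage nonHeap = bean.getNonHeapMemoryUsage();
            System.out.printf("heap used=%dKB committed=%dKB | non-heap used=%dKB%n",
                    heap.getUsed() >> 10, heap.getCommitted() >> 10,
                    nonHeap.getUsed() >> 10);
            Thread.sleep(5000); // sample every five seconds
        }
    }
}
```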
GCMV was originally written to plot verbose GC logs, allowing users to view changes in Java heap usage and GC performance when tuning the garbage collector. GCMV was later extended to allow it to plot other data sources, including Linux and AIX native-memory data. GCMV is shipped as a plug-in for the IBM Support Assistant (ISA).

To plot a Linux native-memory profile with GCMV, you must first collect native-memory data using a script. GCMV's Linux native-memory parser reads output from the Linux ps command. Figure 5 shows the location of the script in the ISA help file. If you do not have the GCMV Tool entry in your help file, it is most likely you do not have the GCMV plug-in installed.

Figure 5. Location of Linux native memory data capture script in ISA help dialog

The script provided in the GCMV help uses a form of the ps command that works only with recent versions of ps; on older distributions it will not produce the correct data. If you are running an older version of ps, replace the ps invocation in the script with a form that your version supports (check your ps man page for the equivalent options).
Copy the script from the help panel into a file (in this example called memscript.sh), find the process id (PID) of the Java process you want to monitor (in this example, 1234), and run the script, passing it the PID and redirecting the output to a file.
This will write the native-memory log into ps.out. To plot the memory usage, load the log file into GCMV and select its Linux native-memory parser.
Once you have the profile of the native-memory use over time, you need to decide whether you are seeing a native-memory leak or are just trying to do too much in the available space. The native-memory footprint for even a well-behaved Java application is not constant from start-up. Several of the Java runtime systems — particularly the JIT compiler and classloaders — initialise over time, which can consume native memory. The memory growth from initialisation will plateau, but if your scenario has an initial native-memory footprint close to the limit of the address space, then this warm-up phase can be enough to cause native out-of-memory. Figure 6 shows an example GCMV native-memory plot from a Java stress test with the warm-up phase highlighted.

Figure 6. Example Linux native-memory plot from GCMV showing warm-up phase

It's also possible to have a native footprint that varies with workload. If your application creates more threads to handle incoming workload or allocates native-backed storage such as direct ByteBuffers in proportion to the load being applied, it is possible to run out of native memory under high load.

Running out of native memory because of JVM warm-up-phase native-memory growth, and growth proportional to load, are examples of trying to do too much in the available space. In these scenarios your options are:

* Reduce your native-memory use. Reducing the Java heap size is a good place to start.
* Restrict your native-memory use. If you have native-memory growth proportional to load, find a way to cap the load or the resources allocated because of it — for example, by bounding the number of threads, as in the sketch after this list.
* Move to a 64-bit Java runtime.
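One common way to cap load-proportional native use is to replace a thread-per-task design with a fixed-size pool, so the thread-stack cost stays constant however much work arrives. A minimal sketch using the standard java.util.concurrent API; the pool size and the task are illustrative choices.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BoundedWorkers {
    public static void main(String[] args) {
        // At most 8 threads ever exist, so thread-stack native memory is
        // capped at roughly 8 x stack size, regardless of how many tasks arrive.
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 10000; i++) {
            final int taskId = i;
            pool.execute(new Runnable() {
                public void run() {
                    System.out.println("task " + taskId); // placeholder work item
                }
            });
        }
        pool.shutdown();
    }
}
```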
A genuine native-memory leak manifests as a continual growth in native heap that doesn't drop when load is removed or when the garbage collector runs. The rate of memory leak can vary with load, but the total leaked memory will not drop. Memory that is leaked is unlikely to be referenced, so it can be swapped out and stay swapped out.

When faced with a leak, your options are limited. You can increase the amount of user space (so there is more room to leak into), but that will only buy you time before you eventually run out of memory. If you have enough physical memory and address space, you can allow the leak to continue on the basis that you will restart your application before the process address space is exhausted.

What's using my native memory?

Once you have determined you are running out of native memory, the next logical question is: What's using that memory? Answering this question is hard because, by default, Windows and Linux do not store information about which code path allocated a particular chunk of memory.

Your first step when trying to understand where your native memory has gone is to work out roughly how much native memory will be used based on your Java settings. An accurate value is difficult to work out without in-depth knowledge of the JVM's workings, but you can make a rough estimate based on the following guidelines (a worked example follows the list):

* The Java heap occupies at least the -Xmx value of address space.
* Each Java thread needs stack space; depending on implementation, somewhere between 256KB and 756KB per thread is typical.
* Direct java.nio.ByteBuffers occupy at least the amount of data they were allocated with.
* JNI libraries, JIT-compiled code, and loaded classes all take additional space that varies by application and implementation.
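To make the arithmetic concrete, here is a back-of-the-envelope calculation in code form. Every input number is invented for illustration; substitute your own settings.

```java
public class FootprintEstimate {
    public static void main(String[] args) {
        long mb = 1024 * 1024;
        long javaHeap = 1024 * mb;         // -Xmx1024m (assumed)
        long threads = 200 * (512 * 1024); // 200 threads x 512KB stack (assumed)
        long directBuffers = 64 * mb;      // known allocateDirect() usage (assumed)
        long classesAndJit = 128 * mb;     // rough allowance for classes + JIT code

        long estimate = javaHeap + threads + directBuffers + classesAndJit;
        System.out.println("Estimated native footprint: " + (estimate / mb) + "MB");
        // ~1.3GB here -- uncomfortably close to a 2GB Windows user space,
        // especially once shared libraries fragment what remains.
    }
}
```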
If your total is much less than your maximum user space, you are not necessarily safe. Many other components in a Java runtime could allocate enough memory to cause problems; however, if your initial calculations suggest you are close to your maximum user space, it is likely that you will have native-memory issues. If you suspect you have a native-memory leak, or you want to understand exactly where your memory is going, several tools can help.

Microsoft provides the UMDH (user-mode dump heap) and LeakDiag tools for debugging native-memory growth on Windows (see Resources). They both work in a similar way: recording which code path allocated a particular area of memory and providing a way to locate sections of code that allocate memory that isn't freed later. I refer you to the article "Umdhtools.exe: How to use Umdh.exe to find memory leaks on Windows" for instructions on using UMDH (see Resources). In this article, I'll focus on what the output of UMDH looks like when run against a leaky JNI application.

The samples pack for this article contains a Java application called LeakyJNIApp; it runs in a loop calling a JNI method that leaks native memory. When the UMDH snapshots taken during the run are compared, the differences show which allocation call stacks grew between the snapshots.
The important line in the UMDH output is the leaking call stack with the largest growth in allocated bytes. This shows the leak coming from the leakyjniapp.dll module.

At the time of writing, Linux does not have a UMDH or LeakDiag equivalent. But there are still ways to debug native-memory leaks on Linux. The many memory debuggers available on Linux typically fall into one of the following categories:

* Preprocessor level. These require a debugging header to be compiled into the source under test, so they are practical only when you can rebuild the leaking code.
* Linker level. The code under test is relinked against a debugging library.
* Runtime-linker level. An environment variable (LD_PRELOAD) is used to load a library that wraps the standard memory-management routines; the code under test does not need to be recompiled or relinked.
* Emulator-based. These run the program on a synthetic processor and intercept every memory operation; they need no rebuilding but impose a large speed penalty.
For simple scenarios that can tolerate the performance overhead, the Valgrind memcheck tool is probably the most convenient free option. The Valgrind suite runs the program on a synthetic processor, intercepting memory operations so that leaked blocks can be traced back to the code that allocated them; the price is that the program runs many times slower than it would natively.

Some Java runtimes use thread stacks and processor registers in unusual ways; this can confuse some debugging tools, which expect native programs to abide by standard conventions of register use and stack structure. When using Valgrind to debug leaking JNI applications, you may find that many warnings are thrown up about the use of memory, and some thread stacks will look odd; these are caused by the way the Java runtime structures its data internally and are nothing to worry about.

To trace the LeakyJNIApp application with Valgrind, run the Java command under Valgrind with full leak checking enabled.
The --trace-children=yes option makes Valgrind follow any processes started by the Java launcher (some launchers start new processes during initialisation). The --leak-check=full option requests that full call stacks for leaking areas of code are printed at the end of the run, rather than just a summary of the state of memory.

Valgrind prints many warnings and errors while the command runs (most of which are uninteresting in this context), and finally prints a list of leaking call stacks in ascending order of amount of memory leaked. The end of the summary section of the Valgrind output for LeakyJNIApp shows the largest leak.
The second line of the stacks shows that the memory was leaked by the native method in the leakyjniapp module.

Several proprietary debugging applications that can debug native-memory leaks are also available. More tools (both open-source and proprietary) are being developed all the time, and it's worth researching the current state of the art.

Currently, debugging native-memory leaks on Linux with the freely available tools is more challenging than doing the same on Windows. Whereas UMDH allows native leaks on Windows to be debugged in situ, on Linux you will probably need to do some traditional debugging rather than rely on a tool to solve the problem for you. Here are some suggested debugging steps:

* Extract the smallest test case that reproduces the leak; this makes any tool's output far easier to interpret.
* Rule out your own native code first: stub out or disable your JNI libraries and see whether the growth disappears.
* If you can modify or rebuild the native code, use the interposition- or emulator-based tools described above on the reduced scenario.
* Shrink the Java heap (-Xmx) while you investigate, to give the native heap more room and delay exhaustion.
If you identify a leak or memory growth that you think is coming out of the Java runtime itself, you may want to engage your runtime vendor to debug further.

Removing the limit: Making the change to 64-bit

It is easy to hit native out-of-memory conditions with 32-bit Java runtimes because the address space is relatively small. The 2 to 4GB of user space that 32-bit OSs provide is often less than the amount of physical memory attached to the system, and modern data-intensive applications can easily scale to fill the available space.

If your application cannot be made to fit in a 32-bit address space, you can gain a lot more user space by moving to a 64-bit Java runtime. If you are running a 64-bit OS, then a 64-bit Java runtime will open the door to huge Java heaps and fewer address-space-related headaches. Table 2 lists the user spaces currently available with 64-bit OSs:

Table 2. User space sizes on 64-bit OSs
Moving to 64-bit is not a universal solution to all native-memory problems, however; you still need sufficient physical memory to hold all of your data. If your Java runtime won't fit in physical memory, performance will be intolerably poor because the OS is forced to thrash Java runtime data back and forth from swap space. For the same reason, moving to 64-bit is no permanent solution to a memory leak; you are just providing more space to leak into, which will only buy time between forced restarts.

It's not possible to use 32-bit native code with a 64-bit runtime; any native code (JNI libraries, JVM Tool Interface [JVMTI], JVM Profiling Interface [JVMPI], and JVM Debug Interface [JVMDI] agents) must be recompiled for 64-bit. A 64-bit runtime's performance can also be slower than the corresponding 32-bit runtime on the same hardware. A 64-bit runtime uses 64-bit pointers (native address references), so the same Java object on 64-bit takes up more space than an object containing the same data on 32-bit. Larger objects mean a bigger heap to hold the same amount of data while maintaining similar GC performance, which makes the OS and hardware caches less efficient. Surprisingly, a larger Java heap does not necessarily mean longer GC pause times, because the amount of live data on the heap might not have increased, and some GC algorithms are more effective with larger heaps.

Some modern Java runtimes contain technology to mitigate 64-bit "object bloat" and improve performance. These features work by using shorter references on 64-bit runtimes. This is called compressed references on IBM implementations and compressed oops on Sun implementations. A comparative study of Java runtime performance is beyond this article's scope, but if you are considering a move to 64-bit, it is worth testing your application early to understand how it performs. Because changing the address size affects the Java heap, you will need to retune your GC settings on the new architecture rather than just port your existing settings.

An understanding of native memory is essential when you design and run large Java applications, but it's often neglected because it is associated with the grubby hardware and OS details that the Java runtime was designed to save us from. The JRE is a native process that must work in the environment defined by these grubby details. To get the best performance from your Java application, you must understand how the application affects the Java runtime's native-memory use.

Running out of native memory can look similar to running out of Java heap, but it requires a different set of tools to debug and solve. The key to fixing native-memory issues is to understand the limits imposed by the hardware and OS that your Java application is running on, and to combine this with knowledge of the OS tools for monitoring native-memory use. By following this approach, you'll be equipped to solve some of the toughest problems your Java application can throw at you.
Andrew Hall joined IBM's Java Technology Centre in 2004, starting in the system test team where he stayed for two years. He spent 18 months in the Java service team, where he debugged dozens of native-memory problems on many platforms. He is currently a developer in the Java Reliability, Availability and Serviceability team. In his spare time he enjoys reading, photography, and juggling.