GC overhead limit exceeded

Source: https://plumbr.eu/outofmemoryerror/gc-overhead-limit-exceeded

The Java runtime environment contains a built-in Garbage Collection (GC) process. In many other programming languages, developers need to manually allocate and free memory regions so that the freed memory can be reused.

Java applications, on the other hand, only need to allocate memory. Whenever a particular region of memory is no longer used, a separate process called Garbage Collection clears it for them. How the GC detects that a particular part of memory is no longer used is explained in more detail in the Garbage Collection Handbook, but you can trust the GC to do its job well.
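
To make this concrete, here is a minimal sketch (not from the original article; the class name GcEligibility is made up for illustration) showing an allocation becoming eligible for collection once no live reference points to it:

class GcEligibility {
  public static void main(String[] args) {
    byte[] buffer = new byte[10 * 1024 * 1024]; // 10 MB, reachable through 'buffer'
    System.out.println("Allocated " + buffer.length + " bytes");
    buffer = null; // no live reference remains, so the array is now eligible for GC
    System.gc();   // only a hint; the JVM decides if and when to actually collect
  }
}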

The java.lang.OutOfMemoryError: GC overhead limit exceeded error means that the GC tried to free memory but was essentially unable to get anything done. By default it is thrown when the JVM spends more than 98% of the total time in GC and recovers less than 2% of the heap in the process.

The java.lang.OutOfMemoryError: GC overhead limit exceeded error will be displayed when your application has exhausted practically all the available memory and the GC has repeatedly failed to reclaim it.

The cause

The java.lang.OutOfMemoryError: GC overhead limit exceeded error is the JVM's way of signalling that your application spends too much time doing garbage collection with too little result. By default the JVM is configured to throw this error if it spends more than 98% of the total time doing GC and less than 2% of the heap is recovered after GC.
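
For reference, the HotSpot Parallel collector exposes these two thresholds as tunables; the sketch below shows the documented defaults (98% of time in GC, 2% of heap recovered), and changing them is rarely a good idea:

java -XX:+UseParallelGC -XX:GCTimeLimit=98 -XX:GCHeapFreeLimit=2 com.yourcompany.YourClass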

What would happen if this GC overhead limit did not exist? Note that the java.lang.OutOfMemoryError: GC overhead limit exceeded error is thrown only when less than 2% of the memory is freed after several GC cycles. This means that the little memory the GC was able to clean will quickly be filled again, forcing the GC to restart the cleaning process. This forms a vicious cycle in which the CPU is 100% busy with GC and no actual work can be done. End users of the application face extreme slowdowns: operations which normally complete in milliseconds take minutes to finish.

So the “java.lang.OutOfMemoryError: GC overhead limit exceeded” message is a pretty nice example of the fail-fast principle in action.


Examples

In the following example we create a “GC overhead limit exceeded” error by obtaining the system properties Map and adding key-value pairs to it in an infinite loop:


import java.util.Map;
import java.util.Random;

class Wrapper {
  public static void main(String[] args) throws Exception {
    // The system properties map is backed by a Hashtable, so it grows without bound here
    Map<Object, Object> map = System.getProperties();
    Random r = new Random();
    while (true) {
      // Autoboxed Integer keys accumulate forever, steadily exhausting the heap
      map.put(r.nextInt(), "value");
    }
  }
}

As you might guess, this cannot end well. And indeed, when we launch the above program with:

java -Xmx100m -XX:+UseParallelGC Wrapper

we soon face the java.lang.OutOfMemoryError: GC overhead limit exceeded message. But the above example is tricky. When launched with a different Java heap size or a different GC algorithm, my Mac OS X 10.9.2 with Hotspot 1.7.0_45 chooses to die differently. For example, when I run the program with a smaller Java heap size like this:

java -Xmx10m -XX:+UseParallelGC Wrapper

the application will die with the more common java.lang.OutOfMemoryError: Java heap space message, which is thrown on Map resize. And when I run it with garbage collection algorithms other than ParallelGC, such as -XX:+UseConcMarkSweepGC or -XX:+UseG1GC, the error is caught by the default exception handler and reported without a stack trace, as the heap is exhausted to the extent that the stack trace cannot even be filled in on Exception creation.
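
For completeness, the CMS and G1 runs just described use the same Wrapper class with only the collector flag changed:

java -Xmx100m -XX:+UseConcMarkSweepGC Wrapper
java -Xmx100m -XX:+UseG1GC Wrapper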

These variations are good examples demonstrating that, in resource-constrained situations, you cannot predict the way your application is going to die. So do not base your expectations on a specific sequence of actions being completed.


The solution

As a tongue-in-cheek solution, if you just wish to get rid of the “java.lang.OutOfMemoryError: GC overhead limit exceeded” message, adding the following to your startup scripts will achieve just that:

-XX:-UseGCOverheadLimit
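
For example, relaunching the earlier Wrapper example with the limit disabled would look like this; expect it to eventually die with a java.lang.OutOfMemoryError: Java heap space message instead:

java -Xmx100m -XX:+UseParallelGC -XX:-UseGCOverheadLimit Wrapper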

I would strongly suggest not using this option though: instead of fixing the problem, you merely postpone the inevitable, namely the application running out of memory and needing to be fixed. Specifying this option just masks the original java.lang.OutOfMemoryError: GC overhead limit exceeded error with the more familiar java.lang.OutOfMemoryError: Java heap space message.

Another way to give (temporary) relief to the GC is to give more memory to the JVM process. Again, this is as easy as adding (or increasing, if present) just one parameter in your startup scripts:

java -Xmx1024m com.yourcompany.YourClass

In the above example the Java process is given 1 GB of heap. Increasing this value will solve your GC overhead limit problem if your application suffered from insufficient memory in the first place.
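
If you are not sure whether the application genuinely needs more heap, GC logging can confirm the diagnosis; on a HotSpot 1.7-era JVM such as the one used above, the standard flags are:

java -Xmx1024m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps com.yourcompany.YourClass

A log showing back-to-back full GC runs that each reclaim almost nothing is the signature of this error.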

But if you wish to make sure you have solved the underlying cause instead of masking the symptoms of the “java.lang.OutOfMemoryError: GC overhead limit exceeded” error, you should not stop here. For this you have an arsenal of different tools at your fingertips, such as profilers and memory dump analyzers. But be prepared to invest a lot of time, and be aware that such tools impose a significant overhead on your Java runtime, so they are not suitable for production use.
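
As a concrete starting point for such an analysis, HotSpot can write a heap dump automatically at the moment the error is thrown, and the JDK's jmap tool can capture one from a running process; the dump paths below are just examples:

java -Xmx100m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/wrapper.hprof Wrapper
jmap -dump:live,format=b,file=/tmp/wrapper.hprof <pid>

The resulting .hprof file can then be opened in a memory dump analyzer to see which objects dominate the heap.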

Our recommendation: try Plumbr for free. When it detects the cause of the java.lang.OutOfMemoryError, it creates an incident alert that contains the exact location of the problem along with solution guidelines.
