Whether on a server or in a client, the following arguments, submitted by Hellwolf95, will help any large pack run for longer without a restart.

Treasure

-XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+UseNUMA -XX:+CMSParallelRemarkEnabled -XX:MaxTenuringThreshold=15 -XX:MaxGCPauseMillis=30 -XX:GCPauseIntervalMillis=150 -XX:+UseAdaptiveGCBoundary -XX:-UseGCOverheadLimit -XX:+UseBiasedLocking -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=15 -Dfml.ignorePatchDiscrepancies=true -Dfml.ignoreInvalidMinecraftCertificates=true -XX:+UseFastAccessorMethods -XX:+UseCompressedOops -XX:+OptimizeStringConcat -XX:+AggressiveOpts -XX:ReservedCodeCacheSize=2048m -XX:+UseCodeCacheFlushing -XX:SoftRefLRUPolicyMSPerMB=10000 -XX:ParallelGCThreads=2

What it means

-XX:+DisableExplicitGC

By default calls to System.gc() are enabled (-XX:-DisableExplicitGC). Use -XX:+DisableExplicitGC to disable calls to System.gc(). Note that the JVM still performs garbage collection when necessary.
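
A minimal sketch of what this changes (class name and allocation sizes are just for illustration): the program below asks for a collection explicitly; with -XX:+DisableExplicitGC on the command line that request is ignored, while the JVM still collects on its own when needed.

```java
public class ExplicitGcDemo {
    public static void main(String[] args) {
        // Create some short-lived garbage.
        for (int i = 0; i < 1_000_000; i++) {
            byte[] junk = new byte[128];
        }

        // With -XX:+DisableExplicitGC this call is silently ignored;
        // without the flag it triggers a full collection right here.
        System.gc();

        System.out.println("Run with -verbose:gc to see whether a full GC actually happened.");
    }
}
```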

-XX:+UseConcMarkSweepGC

Use concurrent mark-sweep collection for the old generation. (Introduced in 1.4.1)

-XX:+UseParNewGC

Enables the parallel copying collector for the young generation. Like the original copying collector, this is a stop-the-world collector; however, it parallelizes the copying collection over multiple threads, which is more efficient than the original single-threaded copying collector on multi-CPU machines (though not on single-CPU machines). This algorithm can potentially speed up young generation collection by a factor equal to the number of CPUs available, compared to the original single-threaded copying collector.

-XX:+UseNUMA

The Parallel Scavenger garbage collector has been extended to take advantage of machines with NUMA (Non Uniform Memory Access) architecture. Most modern computers are based on NUMA architecture, in which it takes a different amount of time to access different parts of memory. Typically, every processor in the system has a local memory that provides low access latency and high bandwidth, and remote memory that is considerably slower to access.

In the Java HotSpot Virtual Machine, the NUMA-aware allocator has been implemented to take advantage of such systems and provide automatic memory placement optimizations for Java applications. The allocator controls the eden space of the young generation of the heap, where most of the new objects are created. The allocator divides the space into regions each of which is placed in the memory of a specific node. The allocator relies on a hypothesis that a thread that allocates the object will be the most likely to use the object. To ensure the fastest access to the new object, the allocator places it in the region local to the allocating thread. The regions can be dynamically resized to reflect the allocation rate of the application threads running on different nodes. That makes it possible to increase performance even of single-threaded applications. In addition, "from" and "to" survivor spaces of the young generation, the old generation, and the permanent generation have page interleaving turned on for them. This ensures that all threads have equal access latencies to these spaces on average.

The NUMA-aware allocator is available on the Solaris™ operating system starting in Solaris 9 12/02 and on the Linux operating system starting in Linux kernel 2.6.19 and glibc 2.6.1.

The NUMA-aware allocator can be turned on with the -XX:+UseNUMA flag in conjunction with the selection of the Parallel Scavenger garbage collector. The Parallel Scavenger garbage collector is the default for a server-class machine. The Parallel Scavenger garbage collector can also be turned on explicitly by specifying the -XX:+UseParallelGC option.

The -XX:+UseNUMA flag was added in Java SE 6u2.

Note: There was a known bug in the Linux kernel that could cause the JVM to crash when run with -XX:+UseNUMA. The bug was fixed in 2012, so this should not affect recent versions of the Linux kernel. To see whether your kernel has this bug, you can run the native reproducer.

-XX:+CMSParallelRemarkEnabled

The option CMSParallelRemarkEnabled makes the CMS remark phase use multiple GC threads instead of one. The remark phase is still a stop-the-world pause, but parallelizing it shortens that pause considerably if your server has many cores (and most servers do).

-XX:MaxTenuringThreshold=15

Each object in the Java heap has a header which is used by the garbage collection (GC) algorithm. The young space collector (which is responsible for object promotion) uses a few bits from this header to track the number of collections an object has survived (a 32-bit JVM uses 4 bits for this, a 64-bit JVM probably a few more).

During a young space collection, every live object is copied. An object may be copied to one of the survivor spaces (the one that is empty before the young GC) or to the old space. For each object being copied, the GC algorithm increases its age (the number of collections survived), and if the age is above the current tenuring threshold the object is copied (promoted) to the old space. An object can also be copied directly to the old space if the survivor space fills up (overflow).

The journey of an object typically follows this pattern:

- allocated in eden
- copied from eden to a survivor space due to a young GC
- copied from one survivor space to the (other) survivor space due to a young GC (this can happen a few times)
- promoted from a survivor space (or possibly eden) to the old space due to a young GC (or a full GC)

The actual tenuring threshold is dynamically adjusted by the JVM, but MaxTenuringThreshold sets an upper limit on it.

If you set MaxTenuringThreshold=0, all objects will be promoted immediately.
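
A rough sketch of that object lifecycle (the class and sizes are made up for illustration; running it with -verbose:gc, and on Java 8 with -XX:+PrintTenuringDistribution, shows the ages and promotions):

```java
import java.util.ArrayList;
import java.util.List;

public class TenuringDemo {
    public static void main(String[] args) {
        // Long-lived objects: they survive young GC after young GC, their age
        // ticks up each time, and once the age passes the current tenuring
        // threshold (capped by MaxTenuringThreshold) they are promoted to old space.
        List<byte[]> cache = new ArrayList<>();

        for (int i = 0; i < 10_000; i++) {
            // Short-lived objects: most die in eden before the next young GC
            // and are never copied to a survivor space at all.
            byte[] temporary = new byte[64 * 1024];

            if (i % 100 == 0) {
                cache.add(new byte[64 * 1024]); // this one keeps surviving
            }
        }
        System.out.println("cached blocks: " + cache.size());
    }
}
```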

-XX:MaxGCPauseMillis=30

Sets a target for the maximum GC pause time. This is a soft goal, and the JVM will make its best effort to achieve it.

-XX:GCPauseIntervalMillis=150

In addition to the pause time goal, you can specify the length of the time period during which that pause may occur. Together with the pause time goal, this time span (GCPauseIntervalMillis) effectively specifies the minimum mutator usage, i.e. how much of each interval is left to the application. The default value for MaxGCPauseMillis is 200 milliseconds. The default value for GCPauseIntervalMillis (0) is the equivalent of no requirement on the time span.
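
Taken together, the two values in the argument line above (30 ms of pause per 150 ms interval) imply a rough GC duty cycle. A trivial sketch of that arithmetic, keeping in mind that both values are soft goals the JVM may still miss:

```java
public class PauseBudget {
    public static void main(String[] args) {
        double maxGcPauseMillis = 30;        // -XX:MaxGCPauseMillis=30
        double gcPauseIntervalMillis = 150;  // -XX:GCPauseIntervalMillis=150

        // Fraction of each interval the collector may spend paused, and the
        // minimum mutator (application) share that implies.
        double gcShare = maxGcPauseMillis / gcPauseIntervalMillis; // 0.20
        double mutatorShare = 1.0 - gcShare;                       // 0.80

        System.out.printf("GC may pause up to %.0f%% of the time; the game keeps at least %.0f%%%n",
                gcShare * 100, mutatorShare * 100);
    }
}
```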

-XX:+UseAdaptiveGCBoundary

The garbage collector is allowed to move the boundary between the tenured generation and the young generation as needed (within prescribed limits) to better achieve performance goals. This mechanism is off by default; to activate it, add the option -XX:+UseAdaptiveGCBoundary to the command line.

-XX:-UseGCOverheadLimit

UseGCOverheadLimit is a policy that limits the proportion of the VM's time that may be spent in GC before an OutOfMemoryError is thrown. Here it is disabled with the minus sign (-XX:-UseGCOverheadLimit), so the JVM keeps collecting instead of giving up early.
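
To make the difference concrete, here is a deliberately pathological sketch (do not run it inside a pack you care about): it hoards memory so the collector reclaims almost nothing. With the limit enabled (the default) such a program typically dies with "GC overhead limit exceeded"; with -XX:-UseGCOverheadLimit it keeps grinding until the heap is truly full and fails with "Java heap space" instead.

```java
import java.util.ArrayList;
import java.util.List;

public class OverheadLimitDemo {
    public static void main(String[] args) {
        // Keep every allocation reachable so GC time dominates but
        // almost nothing can be reclaimed.
        List<long[]> hoard = new ArrayList<>();
        try {
            while (true) {
                hoard.add(new long[8 * 1024]);
            }
        } catch (OutOfMemoryError e) {
            // Typically "GC overhead limit exceeded" with the default policy,
            // "Java heap space" when run with -XX:-UseGCOverheadLimit.
            System.err.println(e.getMessage());
        }
    }
}
```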

-XX:+UseBiasedLocking

Enables a technique for improving the performance of uncontended synchronization. An object is "biased" toward the thread which first acquires its monitor via a monitorenter bytecode or synchronized method invocation; subsequent monitor-related operations performed by that thread are relatively much faster on multiprocessor machines. Some applications with significant amounts of uncontended synchronization may attain significant speedups with this flag enabled; some applications with certain patterns of locking may see slowdowns, though attempts have been made to minimize the negative impact. source
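
"Uncontended" here means a lock that is always taken by the same thread with nobody else competing for it, which is exactly the pattern biased locking cheapens. A minimal example of that pattern (class and numbers are just for illustration):

```java
public class BiasedLockingDemo {
    private long counter;

    // The monitor on "this" is only ever entered by the main thread below,
    // so after the first acquisition the lock can stay biased toward that
    // thread and later entries skip the more expensive atomic operations.
    private synchronized void increment() {
        counter++;
    }

    public static void main(String[] args) {
        BiasedLockingDemo demo = new BiasedLockingDemo();
        for (int i = 0; i < 50_000_000; i++) {
            demo.increment(); // uncontended: a single thread, a single lock owner
        }
        System.out.println(demo.counter);
    }
}
```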

-XX:SurvivorRatio=8

Ratio of eden/survivor space size. The default value is 8.
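
The ratio is between eden and a single survivor space, so with a value of 8 the young generation splits into ten equal slices: eight for eden and one for each of the two survivor spaces. A quick sketch of that arithmetic, assuming a hypothetical 1024 MB young generation:

```java
public class SurvivorRatioMath {
    public static void main(String[] args) {
        int youngGenMb = 1024;   // hypothetical -Xmn1024m
        int survivorRatio = 8;   // -XX:SurvivorRatio=8

        // young gen = eden + 2 survivor spaces, with eden = ratio * survivor
        int survivorMb = youngGenMb / (survivorRatio + 2); // ~102 MB each
        int edenMb = survivorMb * survivorRatio;           // ~816 MB

        System.out.printf("eden ~%d MB, each survivor space ~%d MB%n", edenMb, survivorMb);
    }
}
```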

-XX:TargetSurvivorRatio=90

Desired percentage of survivor space used after scavenge.

-XX:MaxTenuringThreshold=15

Sets the maximum tenuring threshold for use in adaptive GC sizing. The current largest value is 15. The default value is 15 for the parallel collector and is 4 for CMS.

-Dfml.ignorePatchDiscrepancies=true

citation needed

-Dfml.ignoreInvalidMinecraftCertificates=true

Removes Forge's iron hand with regard to jar modding (FML will ignore invalid certificates on a modified Minecraft jar). It has no use if you use the vanilla client.

-XX:+UseFastAccessorMethods

Use optimized versions of GetField.

-XX:+UseCompressedOops

Enables the use of compressed pointers (object references represented as 32-bit offsets instead of 64-bit pointers) for optimized 64-bit performance with Java heap sizes of less than 32 GB.
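
A small sketch that merely checks whether the heap you configured is still in the range where compressed oops apply (the 32 GB figure is approximate, since the exact limit depends on alignment):

```java
public class CompressedOopsCheck {
    public static void main(String[] args) {
        // Max heap the JVM will use, in bytes (set via -Xmx or chosen by default).
        long maxHeapBytes = Runtime.getRuntime().maxMemory();
        double maxHeapGb = maxHeapBytes / (1024.0 * 1024.0 * 1024.0);

        // Compressed oops only work while the heap fits in roughly 32 GB,
        // because references are stored as scaled 32-bit offsets.
        System.out.printf("Max heap: %.1f GB -> compressed oops %s%n",
                maxHeapGb, maxHeapGb < 32 ? "usable" : "not usable");
    }
}
```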

-XX:+OptimizeStringConcat

Optimize String concatenation operations where possible. (Introduced in Java 6 Update 20)
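
The optimization targets ordinary + concatenation, which javac already compiles into a chain of StringBuilder appends; with this flag the JIT can treat that whole chain as one unit (for example, sizing the result buffer up front). A minimal example of the pattern it applies to:

```java
public class StringConcatDemo {
    public static void main(String[] args) {
        String world = "world";
        int count = 3;

        // javac turns this '+' chain into a StringBuilder append sequence;
        // -XX:+OptimizeStringConcat lets the JIT optimize that sequence as a
        // whole instead of as individual method calls.
        String message = "hello, " + world + " x" + count + "!";

        System.out.println(message);
    }
}
```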

-XX:+AggressiveOpts

Turn on point performance compiler optimizations that are expected to be default in upcoming releases. (Introduced in 5.0 update 6.)

-XX:ReservedCodeCacheSize=2048m

Reserved code cache size (in bytes) - maximum code cache size. [Solaris 64-bit, amd64, and -server x86: 2048m; in 1.5.0_06 and earlier, Solaris 64-bit and amd64: 1024m.]

-XX:+UseCodeCacheFlushing

If the code cache grows constantly, e.g., because of a memory leak caused by hot deployments, increasing the code cache size will only delay its inevitable overflow. To avoid overflow, we can try an interesting and relatively new option: let the JVM dispose of some of the compiled code when the code cache fills up. This is done by specifying the flag -XX:+UseCodeCacheFlushing. Using this flag, we can at least avoid the switch to interpreted-only mode when we face code cache problems. However, I would still recommend tackling the root cause as soon as possible once a code cache problem has manifested itself, i.e., identifying the memory leak and fixing it.

-XX:SoftRefLRUPolicyMSPerMB=10000

Keeps soft references alive for 10 seconds (10000 ms) per MB of free heap after their last use. If it lags your computer, feel free to reduce the number or remove the line entirely.
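
Soft references are what caches typically use: the collector keeps them while memory is plentiful and clears them under pressure, and this flag tunes how long "plentiful" lasts (milliseconds kept per MB of free heap since the reference was last used). A minimal sketch of a soft-referenced cache entry:

```java
import java.lang.ref.SoftReference;

public class SoftRefDemo {
    public static void main(String[] args) {
        // The byte[] is only softly reachable, so the GC may clear it when
        // memory gets tight. With -XX:SoftRefLRUPolicyMSPerMB=10000 it is kept
        // for roughly 10 s per MB of free heap since its last use.
        SoftReference<byte[]> cachedChunk = new SoftReference<>(new byte[4 * 1024 * 1024]);

        byte[] chunk = cachedChunk.get();
        System.out.println(chunk == null
                ? "cache entry was already cleared by the GC"
                : "cache entry still present: " + chunk.length + " bytes");
    }
}
```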

-XX:ParallelGCThreads=2

Sets the number of threads used during parallel phases of the garbage collectors. The default value varies with the platform on which the JVM is running.
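
Two is a reasonable pick when the machine is also busy rendering the game or running other services. If you would rather scale the value with the hardware, the processor count is the usual starting point; the helper below is a hypothetical rule of thumb, not an official recommendation:

```java
public class GcThreadHint {
    public static void main(String[] args) {
        int cpus = Runtime.getRuntime().availableProcessors();

        // Hypothetical rule of thumb: leave most cores to the game or server
        // itself, cap GC at about a quarter of them, and never go below the 2
        // used in the argument line above.
        int suggested = Math.max(2, cpus / 4);

        System.out.println("-XX:ParallelGCThreads=" + suggested);
    }
}
```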
