Performance issue: Memory leak: Docker container restarting periodically #1049
Comments
Quick thought: could it be that there are no memory limits and the JVM just assumes the whole node's resources? It would help if you could provide an independent reproducer; the one provided is missing methods and seems to rely on Spring.
The soft references should be removed on a GC, no?
Also I wouldn't say that the screenshots show anything extraordinary; there is one Ruby instance, which means that previous instances were cleaned up.
Yes, very good point.
@abelsromero We have already set the container resource limit to 2 GB using the resources section in deployment.yaml. The JVM options -Xmx and -Xms are also set to 2 GB.
Sure, but you also need to make sure that the JVM will not request more than 2 GB in total, otherwise the container will be OOM-killed.
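For context, the heap is only part of the JVM's footprint: metaspace, thread stacks, and native allocations (including JRuby's) come on top of `-Xmx`, so setting the heap equal to the container limit leaves no headroom. A hedged sketch of container-friendly options; the variable name and sizes below are illustrative assumptions, not from this issue:

```shell
# Illustrative values only: with a 2 GiB container limit, cap the heap
# below the limit so metaspace, thread stacks, and native memory still
# fit inside the cgroup.
JAVA_OPTS="-Xms1536m -Xmx1536m -XX:MaxMetaspaceSize=256m"
# Alternatively, let the JVM derive the heap from the container limit:
# JAVA_OPTS="-XX:MaxRAMPercentage=75.0"
```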
@robertpanzer I have already tried this. Created one single instance of asciidoctor in the constructor and reused it in each run of the pdf-generation-thread.
@robertpanzer This is the heap dump captured after the pdf-generator-thread finished its execution and no thread was actively running.
I have done that. Edited the above comment.
And why is the container killed? |
Please provide the output of |
Application: Spring Boot microservice
Deployment env: Docker container running on OpenStack Kubernetes
Heap size: 2 GB
We are using the asciidoctorj library to convert an asciidoc file to a PDF file using the logic below:
Scheduler Thread - runs every 5 minutes
1. Check if there are any new updates in the api-documentation swagger schema.
2. If yes, then
i. Create a new thread for pdf-generation and submit it to the executorService.
ii. The pdf-generation-thread performs three tasks:
a. Create a new adoc file from a JSON schema - using the Swagger2Markup library.
b. Convert this adoc file to PDF - using the asciidoctorj library.
c. Shut down asciidoctor.
iii. Get the byte[] of the generated PDF file and store it in an in-memory ConcurrentHashMap (with at most 2-3 keys).
iv. Clean up the adoc file and the generated PDF files.
3. Wait for the pdf-generation thread to finish its execution.
4. Shut down the executorService.
The heap memory usage keeps increasing after each run of the pdf-generation thread. Heap dumps of the application show many live JRuby-related objects.
I have attached one heap dump snapshot from JProfiler.
When the heap is full, the application fails and the Docker container is automatically restarted.
Sample code snippet:
@Scheduled(fixedDelay = 300000, initialDelay = 25000)
public void generatePDFSchedule() {
    // if there are merge events
    if (!CollectionUtils.isEmpty(mergeEventQueue)) {
        mergeEventQueue.clear();
        executorService = Executors.newSingleThreadExecutor();
        Future<?> future = executorService.submit(() -> generatePDForDomain(domain));
        try {
            future.get();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        } finally {
            // shut down even if the task failed
            executorService.shutdown();
        }
    }
}
private void generatePDForDomain(String domain) {
    if (!StringUtils.isEmpty(domain)) {
        try {
            generateAdocFiles(domain);
            generatePDFFile(domain);
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            cleanUpFiles(domain);
        }
    }
}
private void generatePDFFile(String domain) throws IOException {
    Map<String, Object> options =
        OptionsBuilder.options().inPlace(true).backend("pdf").safe(SafeMode.UNSAFE).asMap();
    Asciidoctor asciidoctor = null;
    try {
        File adocFile = new File("/tmp/adocFile.adoc");
        asciidoctor = Asciidoctor.Factory.create();
        asciidoctor.convertFile(adocFile, options);
    } finally {
        if (asciidoctor != null) {
            asciidoctor.shutdown();
        }
    }
}
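The comments above suggest creating the Asciidoctor instance once and reusing it across runs, since each instance boots its own JRuby runtime. A minimal, self-contained sketch of that reuse pattern; the `Engine` class here is a hypothetical stand-in for Asciidoctor (the real library is not on the classpath in this sketch), and the counter only exists to show that one instance serves every scheduled run:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ReuseSketch {
    // Hypothetical stand-in for a heavyweight engine such as an
    // Asciidoctor instance (each real instance boots a JRuby runtime).
    static final class Engine {
        static int instancesCreated = 0;
        Engine() { instancesCreated++; }
        String convert(String input) { return input + ".pdf"; }
    }

    // Created once and shared by every scheduled run.
    private static final Engine ENGINE = new Engine();

    static int runBatch(int runs) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            for (int i = 0; i < runs; i++) {
                // Submit the conversion task and wait for it, as the
                // scheduler in the issue does with future.get().
                Future<String> f = pool.submit(() -> ENGINE.convert("doc"));
                f.get();
            }
        } finally {
            pool.shutdown();
        }
        return Engine.instancesCreated;
    }

    public static void main(String[] args) throws Exception {
        // Five runs, but only one engine instance is ever created.
        System.out.println(runBatch(5)); // prints 1
    }
}
```

With the real library, `ENGINE` would be a single `Asciidoctor.Factory.create()` result shut down only when the application stops, rather than one instance created and shut down per run.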