Native memory leak - GZIPInputStream is not closed #949

Closed · csegedicsaba opened this issue Jun 16, 2024 · 14 comments

@csegedicsaba

Using a JWT with GZIP compression causes a native (not heap) memory leak because the GZIPInputStream is never closed.

After a large number of decompressions, roughly 500 MB of extra memory is used by the Java process. This extra usage is visible only in /proc/1/status; Java Native Memory Tracking does not show it.

Running the application in Kubernetes with a memory limit set leads to a Kubernetes-level OOM kill because of this extra memory usage.

Hacking GzipCompressionAlgorithm.doDecompress(InputStream is) like this, there is no memory leak:

protected InputStream doDecompress(InputStream is) throws IOException {
    GZIPInputStream gzipIs = new GZIPInputStream(is);
    byte[] tmpIs = gzipIs.readAllBytes(); // read the whole payload into memory...
    gzipIs.close();                       // ...so the stream (and its native zlib Inflater) can be closed here
    return Streams.of(tmpIs);
}

lhazlewood commented Jun 16, 2024

Thanks for the report! We'll look into this, but note: GZIP is not a compression algorithm defined by the JWT RFCs (only DEFLATE is defined), so only use GZIP if both the JWT issuer and consumer support it.
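
For context: with JJWT 0.12.x, compression is opted into explicitly on the builder, and the RFC-defined DEFLATE algorithm (Jwts.ZIP.DEF) can be used instead of GZIP when interoperability with other JWT libraries matters. A minimal sketch, assuming the 0.12.x API (key setup and claims are illustrative only):

import io.jsonwebtoken.Jwts;

import javax.crypto.SecretKey;

class CompressionExample {
    static String buildDeflateJws() {
        SecretKey key = Jwts.SIG.HS256.key().build();   // throwaway HMAC key, for illustration
        return Jwts.builder()
                .subject("example")
                .compressWith(Jwts.ZIP.DEF)  // DEFLATE ("DEF") is the zip algorithm the JWE spec defines
                .signWith(key)
                .compact();
    }
}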

@lhazlewood

Also, have you confirmed this is always solved by your fix?

I ask because the stream shouldn't be copied to memory in its entirety - that defeats the purpose of the stream. The consumer of the opened stream should close it, and those usages are usually wrapped in finally blocks to ensure it's always closed.

Do you have a test case that demonstrates the behavior without using the intermediate hack?

@csegedicsaba

This was a quick and dirty solution to make sure that closing the stream solves the issue.
I hope the final solution will be more professional :-)

Create a JWT with GZIP compression and decompress it 10,000,000 times.
With jcmd, run a GC and check the memory usage with jcmd and in /proc/1/status.
I tested in Docker.

@lhazlewood

Thanks for the suggestion - do you have a docker container to share for quick testing? It'd be much easier than starting from scratch for us.

lhazlewood added a commit that referenced this issue Jun 16, 2024

lhazlewood commented Jun 16, 2024

@csegedicsaba please see #950 for the fix. The only difference is that the changed code is now wrapped in a try/finally block that closes the InputStream. If you can test it, please do. I still have testing to do on my end, but I wanted to make it available for you just in case.
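
The general shape of that change is the usual close-in-finally pattern; the sketch below is illustrative only (see #950 for the actual code):

import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;

class CloseInFinallyExample {
    // Whoever consumes the decompressed stream closes it in a finally block, so the native
    // zlib Inflater held by the GZIPInputStream is released even if reading or parsing throws.
    static byte[] readDecompressed(InputStream compressedPayload) throws IOException {
        InputStream in = new GZIPInputStream(compressedPayload);
        try {
            return in.readAllBytes();
        } finally {
            in.close();
        }
    }
}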

lhazlewood added a commit that referenced this issue Jun 16, 2024

@csegedicsaba

Thx @lhazlewood!
There is no memory leak with this fix.

@csegedicsaba

@lhazlewood here is a project for testing: https://github.com/csegedicsaba/jjwt-memory-leak-poc
I see that there is only a memory leak when using multiple threads.
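
For reference, a reproduction along the lines described in this thread might look roughly like the sketch below (JJWT 0.12.x API assumed; the key setup, thread count, and iteration counts are illustrative and not taken from the linked repository):

import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.JwtParser;

import javax.crypto.SecretKey;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GzipLeakRepro {
    public static void main(String[] args) throws InterruptedException {
        SecretKey key = Jwts.SIG.HS256.key().build();
        String jws = Jwts.builder()
                .subject("leak-test")
                .compressWith(Jwts.ZIP.GZIP)   // non-RFC GZIP compression, as in the report
                .signWith(key)
                .compact();

        JwtParser parser = Jwts.parser().verifyWith(key).build();

        // Parse (and therefore decompress) the token many times from several threads, then
        // compare native memory via jcmd <pid> VM.native_memory, /proc/1/status, or docker stats
        // before and after.
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int t = 0; t < 8; t++) {
            pool.submit(() -> {
                for (int i = 0; i < 1_000_000; i++) {
                    parser.parseSignedClaims(jws);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}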

@lhazlewood

@csegedicsaba thank you so much for this, it's so helpful! Also, I don't know what happened to my previous (now-deleted) comment, but this test project will definitely help. I'll be able to merge the fix shortly. Cheers! 🥂

@lhazlewood

@csegedicsaba I was able to confirm the memory usage differences when running jcmd 1 VM.native_memory inside the container.

However, I didn't experience OutOfMemoryExceptions when running docker run -it --name poc poc. Any reason why?

lhazlewood added this to the 0.12.6 milestone Jun 17, 2024
lhazlewood added a commit that referenced this issue Jun 17, 2024
lhazlewood added a commit that referenced this issue Jun 17, 2024

@csegedicsaba

There is no OutOfMemoryException because the leak is not in memory managed by the JVM.
For this reason, the issue is very hard to identify.

I think the best way to test is to run with and without the fix and compare the total memory usage with 'docker stats poc'.

@lhazlewood

Yes, that's what I did and noticed the memory differences, but I was curious how an application developer or infrastructure engineer might see this surface in a production application.

@csegedicsaba

When will the next version (0.12.6) be released?

@lhazlewood

@csegedicsaba I can probably get it out ASAP; there's likely no need to delay it for additional features/fixes. Those could always go into a 0.12.7 release if necessary.

@lhazlewood

@csegedicsaba 0.12.6 has been released 🥂
