Configurable memory limit #420
I've been running into an issue where parsing large documents (a few hundred MiB) causes massive memory usage that can slow down or even crash services in production, sometimes via OOM kills. The ability to limit kuchiki's memory usage would be invaluable. A memory limit could also allow the usage of preallocated buffers, which would do wonders for performance as well.
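kuchiki offers no such knob today, but as a rough illustration of what an application-level cap can look like, here is a minimal sketch of a wrapping global allocator that denies allocations past a configurable limit. Everything here (the `CappedAlloc` type, the 4 GiB figure) is hypothetical, and hitting the cap aborts the process via Rust's allocation error handler rather than failing only the offending parse:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

/// Hypothetical wrapper around the system allocator that tracks live
/// allocations and denies anything past a fixed limit.
struct CappedAlloc {
    limit: usize,
    used: AtomicUsize,
}

unsafe impl GlobalAlloc for CappedAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let size = layout.size();
        // Reserve the bytes first; roll back if that would exceed the cap.
        if self.used.fetch_add(size, Ordering::SeqCst) + size > self.limit {
            self.used.fetch_sub(size, Ordering::SeqCst);
            return std::ptr::null_mut(); // null signals allocation failure
        }
        let ptr = System.alloc(layout);
        if ptr.is_null() {
            self.used.fetch_sub(size, Ordering::SeqCst);
        }
        ptr
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        self.used.fetch_sub(layout.size(), Ordering::SeqCst);
        System.dealloc(ptr, layout);
    }
}

#[global_allocator]
static ALLOC: CappedAlloc = CappedAlloc {
    limit: 4 * 1024 * 1024 * 1024, // 4 GiB, an illustrative number (64-bit target)
    used: AtomicUsize::new(0),
};
```

A true per-parse limit, as this issue requests, would need cooperation from the parser itself rather than a process-wide cap like this.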
Comments

Are there any examples you can share? It seems bizarre that documents of a few hundred MiB can cause an OOM. While limiting buffers is one solution, this is probably a pathological case that the parser should be able to handle.
This (300 MiB) is our most problematic example; it and similarly sized ones have OOM'd a 15 GB server, and smaller ones have similarly painful consequences. The code we use to handle HTML is here; it doesn't seem like it should have any specific issues.

Edit: here's the memory and CPU usage when that file was accessed; the cut-off is due to the OOM killer.
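The linked handling code isn't reproduced in this thread, but typical kuchiki usage for a file like this looks roughly like the following sketch (the path and the traversal are illustrative assumptions, not the reporter's actual code):

```rust
use std::path::Path;

use kuchiki::traits::TendrilSink;

fn main() -> std::io::Result<()> {
    // Stream the file through the parser instead of reading it into a
    // String first, so only one copy of the raw input is held at a time.
    let document = kuchiki::parse_html()
        .from_utf8()
        .from_file(Path::new("jni-android-sys.html"))?; // hypothetical path

    // Walk the tree once to count elements; the entire DOM is already
    // resident in memory by this point.
    let elements = document
        .inclusive_descendants()
        .filter(|node| node.as_element().is_some())
        .count();
    println!("{} elements", elements);
    Ok(())
}
```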
Other than downloading it from Firefox Send, it's possible to regenerate that file by running:
The file will be located at:
The file is a rendered code highlighting of a 946k-line source file.
As a point of interest, I just ran the jni-android-sys HTML through the kuchiki find_matches example program. htop didn't report the program taking more than 4 GB of memory over the course of its execution.
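For context, kuchiki's find_matches example does roughly the following (a paraphrase, not its exact source): parse a document and iterate the nodes matching a CSS selector.

```rust
use kuchiki::traits::TendrilSink;

fn main() {
    // Parse an in-memory document; kuchiki builds the full DOM up front.
    let html = r#"<div class="foo">match</div><div>no match</div>"#;
    let document = kuchiki::parse_html().one(html);

    // select() returns an iterator over elements matching the selector.
    for css_match in document.select(".foo").unwrap() {
        let node = css_match.as_node();
        println!("{}", node.text_contents());
    }
}
```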
We took a closer look at the metrics for the VM and there were four bursts of downloads from S3, so it's likely that four requests happened at the same time. That would explain the OOM if a single parse takes around 4 GB of RAM. Still, memory usage at 10x the size of the file seems a little large to me; is it possible to improve that? We're hoping to start parsing only a single file at a time, which should help, but ideally we'd use less memory in the first place.
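Parsing a single file at a time can be done without restructuring the service by gating the parse behind a lock; a minimal sketch, with all names hypothetical:

```rust
use std::sync::Mutex;

use kuchiki::traits::TendrilSink;
use kuchiki::NodeRef;

// One process-wide gate so at most one parse runs at a time. Note that
// this bounds concurrent *parsing*, not the lifetime of the resulting
// DOMs, which live as long as their callers keep them.
static PARSE_GATE: Mutex<()> = Mutex::new(());

fn parse_serialized(html: &str) -> NodeRef {
    let _guard = PARSE_GATE.lock().unwrap();
    kuchiki::parse_html().one(html)
}
```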
I've filed kuchiki-rs/kuchiki#73 for one quick win on memory usage for your use case.