Kafka::MessageSizeTooLarge #323
We are seeing the same thing with version …
Same here, version …
The problem is the misleading parameter name. This parameter is for removing larger records from the record batch, not limiting the size of the record batch. For example, if you set …
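For illustration, here is a minimal sketch of that distinction, assuming the parameter being described is fluent-plugin-kafka's max_send_limit_bytes and an out_kafka2 output; the match pattern, broker address, and sizes are placeholders, not values from this issue:

```
# Sketch only: assumed out_kafka2 output with placeholder values.
<match app.**>
  @type kafka2
  brokers kafka-broker:9092

  # Drops any single record larger than this from the batch;
  # it does NOT cap the size of the produced record batch itself.
  max_send_limit_bytes 1000000

  <buffer topic>
    # Per this thread, the chunk size is what drives the size of the
    # message produced to Kafka, so keep it comfortably below the
    # broker's message.max.bytes.
    chunk_limit_size 600k
  </buffer>
</match>
```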
Thanks, setting chunk_limit_size does work; this should probably be added to the README documentation. But with:
we get lots of warnings of this type:
That size is larger than the Kafka max message size, but there are no errors.
From looking at the code, there is some balancing to do across these numbers. As @repeatedly suggested, with a Kafka config allowing a max message of 5MB you really need a chunk limit of quite a bit less than that (in his example, 2MB). We went through this ourselves recently and are finding stability with a Kafka max of 1MB and a max chunk size of 600K. I totally agree that this whole area of sizing across the chunk, record, and Kafka message really needs a specific section in the kafka plugin documentation.
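As a concrete illustration of that balancing, the two pairings mentioned in this comment restated side by side (these are not tested configs, and message.max.bytes is the broker setting assumed to correspond to the "Kafka max" above):

```
# Pairing 1 (@repeatedly's example): broker allows ~5MB messages.
#   Kafka broker:  message.max.bytes=5000000
#   Fluentd:       chunk_limit_size 2m
#
# Pairing 2 (what worked for us): broker allows ~1MB messages.
#   Kafka broker:  message.max.bytes=1000000
#   Fluentd:       chunk_limit_size 600k
<buffer topic>
  chunk_limit_size 600k
</buffer>
```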
@juliantaylor this may provide some context: fluent/fluentd#2084
I took another look into the buffer code here: if I'm right, the number at the end of the "chunk bytes limit exceeds for an emitted event stream..." warning is equal to the number of bytes added during the formatting activity. @repeatedly, in this case where people are seeing many of these due to the formatting expansion, would it be advisable to use chunk_full_threshold as a way to always give the chunk headroom for that expansion in size? That might help reduce the chunk.rollback occurrences?
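A sketch of that headroom idea using the standard fluentd buffer parameters (the 0.8 value is only illustrative; the default threshold is 0.95):

```
<buffer topic>
  chunk_limit_size 600k
  # A staged chunk is treated as full once it reaches
  # chunk_limit_size * chunk_full_threshold. Lowering the threshold leaves
  # headroom for the bytes added during formatting, which may reduce the
  # "chunk bytes limit exceeds for an emitted event stream" rollbacks.
  chunk_full_threshold 0.8
</buffer>
```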
My …
This issue has been automatically marked as stale because it has been open 90 days with no activity. Remove the stale label or comment, or this issue will be closed in 30 days.
TODO: #323 (comment): need to update the explanation.
config:
Fluentd works fine for a while, then gets stuck on a large chunk. It usually seems to happen when one of the logs it is matching has a spike in throughput.
The max message bytes limit on the Kafka brokers is 2000000.
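The config referenced above is not shown here; as a rough sketch only, buffer settings sized against that 2000000-byte broker limit, following the roughly 2:1 headroom suggested in the comments, might look like:

```
# Hypothetical values, sized against message.max.bytes=2000000 on the brokers.
<buffer topic>
  chunk_limit_size 1m        # roughly half of the broker's 2000000-byte limit
  chunk_full_threshold 0.8   # extra headroom for formatting growth
</buffer>
```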