
Handling fragmented packets #43

Open

hemachandray opened this issue Oct 1, 2020 · 4 comments

@hemachandray
We are doing DPI (deep packet inspection) and using subTemplateMultiLists, so we increased the NetFlow packet size to 2840 bytes, which is greater than the interface MTU of 1500. As a result, the NetFlow packets are fragmented by the time they reach ipfixcol2. In our scenario, the decoded IPFIX messages are then forwarded to Kafka.

After changing the NetFlow packet size to 2840, we observed a significant drop in the message count in Kafka.

Does this mean ipfixcol2 is not handling fragmented packets?

Any help is highly appreciated.

@Lukas955
Collaborator

Lukas955 commented Oct 1, 2020

Hi,

Your MTU change should not have any impact on the collector. It uses standard system TCP/UDP sockets, so from its point of view it always receives the whole packet from the system.

I assume that you are sending IPFIX over UDP. In that case, there might be packet drops on your network. You can check this by increasing the verbosity of the collector with -v (or -vv): if any NetFlow/IPFIX packets are dropped, you will see "Unexpected Sequence number (expected:....)" warnings.

By the way, approximately how many flow records are you processing per second?

Lukas

@Lukas955
Collaborator

Lukas955 commented Oct 7, 2020

Did you resolve the issue?

@hemachandray
Author

Thanks for your reply.

We process somewhere between 300k and 330k flows per second, and we do observe a lot of "Unexpected Sequence number" messages.

@Lukas955
Collaborator

Did you increase the maximum size of the UDP socket buffer on your system, as described in the UDP plugin description?

sysctl -w net.core.rmem_max=16777216          # 16MB

By default, the socket buffer is quite small, and some NetFlow/IPFIX packets might be lost before the collector is able to read them, especially since you increased the MTU. Increasing the buffer should help reduce the "Unexpected Sequence number..." messages.

Note: To permanently preserve the socket buffer size, store the setting in sysctl.conf; otherwise the value resets to the default after a system reboot.
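The two steps above (immediate change plus persistence) could look like this; a sketch assuming a typical Linux system where sysctl settings are kept in /etc/sysctl.conf (run as root):

```shell
# Raise the maximum UDP receive buffer right now (lost on reboot).
sysctl -w net.core.rmem_max=16777216          # 16 MB

# Persist the setting so it survives reboots.
echo 'net.core.rmem_max=16777216' >> /etc/sysctl.conf

# Reload sysctl settings from the config file to verify.
sysctl -p
```

On distributions that use /etc/sysctl.d/, a drop-in file such as /etc/sysctl.d/99-ipfixcol2.conf may be preferred over editing sysctl.conf directly.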
