HTTP2 connections not used efficiently #2262
@kitkars When you use one connection, this means that you are also running with one thread. Enable Reactor Netty logging
@violetagg First of all, thanks for helping everyone via SO / Gitter / GitHub. I have been using gRPC for a while and I love it. It might not be an apples-to-apples comparison, but the concept should be more or less the same: gRPC also uses Netty + HTTP/2 behind the scenes, and it can send multiple independent requests over a single persistent HTTP/2 connection (something like RSocket does). So I assumed that with WebFlux, HTTP/2, and a persistent connection, I should be able to send multiple requests as well. If you say I need to increase the connections - for example, to make 10 parallel requests, open 10 HTTP/2 connections - then I do not see the real benefit of HTTP/2, because that is what HTTP/1.1 does anyway. HTTP/2 removes head-of-line blocking via multiplexing over a single connection. So I am not sure how the thread count is connected to this, as things are non-blocking. Please correct me if my understanding is totally incorrect, and tell me what I should expect from WebFlux with HTTP/2.

Regarding the PR, I cloned it and tried to build. There are some unit test issues I am facing. Let me work on it and get back. Thanks again for the quick response.
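The expectation described above can be sketched as follows. This is a hypothetical illustration, not code from this thread: it assumes a Spring WebFlux `WebClient` and a server at `localhost:8080` with a slow endpoint (here called `/delay`); both names are made up for the example.

```java
import java.time.Duration;

import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;

public class MultiplexExpectation {
    public static void main(String[] args) {
        // Hypothetical endpoint names; the real controller in this thread was elided.
        WebClient client = WebClient.create("http://localhost:8080");

        long start = System.currentTimeMillis();
        Flux.range(1, 10)
            // Fire 10 requests concurrently; with HTTP/2 multiplexing they
            // should all share one persistent connection, like gRPC streams.
            .flatMap(i -> client.get().uri("/delay").retrieve().bodyToMono(String.class))
            .blockLast(Duration.ofSeconds(30));

        // If all 10 requests are multiplexed on one connection, the total time
        // should be close to a single server-side delay, not 10x that delay.
        System.out.println("took " + (System.currentTimeMillis() - start) + " ms");
    }
}
```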
Use
Thanks @violetagg. It provides much better performance now. 👍 I noticed one weird thing; I would appreciate it if you could clarify. That is, the performance is better if I use

My dependencies:

Server-Controller:
Server Config:
Client:
If you look at the server log, I am not sure what we mean by terminating the channel for every 2 responses (where does this 2 come from?). Server-side log:
Client side log:
Attached are the log files for your reference.
@kitkars I updated the PR #2257 and it is now in its final state. Thanks for the example that you provided. However, I need to clarify some things: the API below configures the HTTP/2 initial settings. https://datatracker.ietf.org/doc/html/rfc7540#section-5.1.2
On the server you have
On the client you have
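As a point of reference for the settings discussed above, the following is a hypothetical sketch of how the HTTP/2 initial `SETTINGS_MAX_CONCURRENT_STREAMS` value can be set on both sides with Reactor Netty's `http2Settings` API; the port and values are assumptions, and the API shape should be checked against the Reactor Netty version in use.

```java
import reactor.netty.http.HttpProtocol;
import reactor.netty.http.client.HttpClient;
import reactor.netty.http.server.HttpServer;

public class Http2SettingsSketch {
    public static void main(String[] args) {
        // Server side: advertises how many concurrent streams each
        // connected client may open towards the server.
        HttpServer.create()
                .port(8080)
                .protocol(HttpProtocol.H2C)
                .http2Settings(settings -> settings.maxConcurrentStreams(100));

        // Client side: per RFC 7540, a setting sent by a peer limits the
        // *other* peer, so this limits streams the server may open towards
        // the client - it does not cap how many requests the client
        // multiplexes on the connection.
        HttpClient.create()
                .protocol(HttpProtocol.H2C)
                .http2Settings(settings -> settings.maxConcurrentStreams(100));
    }
}
```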
If your intention was to further limit the streams that the client can open, then with the PR mentioned above we are introducing a new API so that you can do this
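A sketch of how that client-side limit might be wired up with the connection-provider allocation strategy is shown below. This is a hedged illustration, not the exact code from the PR: the pool name and numbers are made up, and the builder methods should be verified against the released API.

```java
import reactor.netty.http.Http2AllocationStrategy;
import reactor.netty.http.HttpProtocol;
import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

public class Http2PoolSketch {
    public static void main(String[] args) {
        // Allocation strategy for the HTTP/2 connection pool:
        // at most 1 TCP connection, with at most 10 streams
        // multiplexed on it by the client.
        ConnectionProvider provider = ConnectionProvider.builder("http2-pool")
                .allocationStrategy(Http2AllocationStrategy.builder()
                        .maxConnections(1)
                        .maxConcurrentStreams(10)
                        .build())
                .build();

        HttpClient client = HttpClient.create(provider)
                .protocol(HttpProtocol.H2C);
    }
}
```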
Or the
Can you please test with the final changes, and can you please clarify what your original intention was with
Your last PR update fixes all the problems I was facing. 🥇 👍 My setup:
My expectation was that whatever requests the client sends, they should all complete in more or less ~3 seconds. I have been playing with this for some time. Just to quickly summarize the scenarios:
Regarding your question about my original intention behind setting

But this is where I find it a little bit confusing (from a user's perspective). That is - why do we have
If you provide an allocation strategy, then you do not need to configure

I pointed this out in the javadoc, but if you think we need to add more clarification, please tell us.
Yeah, that would help.
When we use HTTP/2, a single connection is enough for a remote host to send multiple requests. Unlike HTTP/1.1, HTTP/2 does not wait for the previous request to complete before sending another request.
Server-Controller:
Server-Config:
Client:
Expected Behavior:
All the 10 requests should get completed in ~1 second.
Actual Behavior:
All the 10 requests get completed in ~5 seconds. (To be honest, I have no clue why it is 5 seconds; if we sent the requests one after another, it should have taken 10 seconds.)
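A minimal reproduction of this client might look like the sketch below. It is hypothetical (the original client code was not preserved in this export): it assumes a local server with a ~1-second endpoint, here called `/delay`, and uses the raw Reactor Netty `HttpClient` over cleartext HTTP/2.

```java
import java.time.Duration;

import reactor.core.publisher.Flux;
import reactor.netty.http.HttpProtocol;
import reactor.netty.http.client.HttpClient;

public class ParallelRequestsRepro {
    public static void main(String[] args) {
        // Hypothetical server address and endpoint; adjust to the real setup.
        HttpClient client = HttpClient.create()
                .protocol(HttpProtocol.H2C)
                .baseUrl("http://localhost:8080");

        long start = System.currentTimeMillis();
        Flux.range(1, 10)
            // 10 concurrent GETs; with full multiplexing over one HTTP/2
            // connection these should finish in roughly one server delay.
            .flatMap(i -> client.get()
                    .uri("/delay")
                    .responseContent()
                    .aggregate()
                    .asString())
            .blockLast(Duration.ofSeconds(30));

        // ~1s expected with multiplexing; the report above observed ~5s.
        System.out.println("took " + (System.currentTimeMillis() - start) + " ms");
    }
}
```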
I tried with
maxConcurrentStreams(10)
- no luck. Same behavior.