
avoid multiple wrapping #19

Open
costela opened this issue Jul 20, 2022 · 2 comments · May be fixed by #22

Comments

@costela
Contributor

costela commented Jul 20, 2022

Since it's possible to have multiple uses of httpsnoop in a project, coming from different imported modules (e.g. when using server-timing), it's also possible to end up wrapping a ResponseWriter multiple times. This is not ideal, because it has a measurable performance impact:

cpu: Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz
BenchmarkBaseline
BenchmarkBaseline-8                     745539247                1.477 ns/op           0 B/op          0 allocs/op
BenchmarkCaptureMetrics
BenchmarkCaptureMetrics-8                3790958               296.2 ns/op           225 B/op          7 allocs/op
BenchmarkCaptureMetricsTwice
BenchmarkCaptureMetricsTwice-8           1912039               581.3 ns/op           450 B/op         14 allocs/op

(these benchmarks were made after replacing httptest.NewServer with direct h.ServeHTTP calls, in order to avoid the unrelated overhead of the server; see #20)
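To make the double-wrapping concrete, here is a minimal stdlib-only sketch of how it arises. The `statusRecorder` and `record` names are hypothetical stand-ins for the kind of wrapper httpsnoop builds; the point is that two independent middlewares, each unaware of the other, bury the handler's writer under two layers, roughly doubling the per-request allocations seen in the benchmark above:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// statusRecorder is a minimal, hypothetical stand-in for the kind of
// wrapper httpsnoop builds around http.ResponseWriter.
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (r *statusRecorder) WriteHeader(code int) {
	r.status = code
	r.ResponseWriter.WriteHeader(code)
}

// record is a middleware that wraps the ResponseWriter; stacking two
// such middlewares yields a doubly wrapped writer.
func record(next http.Handler, layers *int) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		*layers++ // count one extra wrapping layer per middleware
		next.ServeHTTP(&statusRecorder{ResponseWriter: w}, r)
	})
}

// wrapDepth runs one request through two stacked wrapping middlewares
// and reports how many layers the handler's writer ends up under.
func wrapDepth() int {
	var layers int
	h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	// Two independent middlewares (e.g. metrics and server-timing),
	// each unaware the other has already wrapped w.
	stacked := record(record(h, &layers), &layers)
	stacked.ServeHTTP(httptest.NewRecorder(), httptest.NewRequest("GET", "/", nil))
	return layers
}

func main() {
	fmt.Println("wrap layers:", wrapDepth())
}
```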

Ideally we'd return the same Metrics instance when re-wrapping an already wrapped ResponseWriter.

If we explicitly want to measure at two different places in the middleware chain, we'd still be able to Unwrap before re-wrapping.
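A rough sketch of what this could look like, with hypothetical names (`snoopWriter`, `wrap` stand in for whatever httpsnoop would actually use): re-wrapping an already wrapped writer returns the same instance, and `Unwrap` recovers the inner writer for callers who genuinely want a second, independent measurement point:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// snoopWriter is a hypothetical sketch of the proposed behavior: the
// wrapper can be recognized on re-wrap and exposes Unwrap.
type snoopWriter struct {
	http.ResponseWriter
}

func (w *snoopWriter) Unwrap() http.ResponseWriter { return w.ResponseWriter }

// wrap returns an already wrapped writer unchanged, so accidental
// double wrapping becomes a no-op instead of adding another layer.
func wrap(w http.ResponseWriter) http.ResponseWriter {
	if ww, ok := w.(*snoopWriter); ok {
		return ww
	}
	return &snoopWriter{ResponseWriter: w}
}

// rewrapIsNoop shows that wrapping twice yields the same instance,
// while Unwrap restores the original inner writer.
func rewrapIsNoop() (sameInstance, unwrapRestores bool) {
	rec := httptest.NewRecorder()
	once := wrap(rec)
	twice := wrap(once) // no-op: the same instance comes back
	inner := once.(*snoopWriter).Unwrap()
	return once == twice, inner == http.ResponseWriter(rec)
}

func main() {
	same, restored := rewrapIsNoop()
	fmt.Println("re-wrap is no-op:", same)
	fmt.Println("unwrap restores original:", restored)
}
```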

WDYT?

@felixge
Owner

felixge commented Jul 24, 2022

This is not ideal, because it has a measurable performance impact:

300ns of overhead doesn't seem like a big deal to me in the context of an HTTP request. You'd need a program serving 3.3M requests/second to waste a full CPU core on this overhead.

Anyway, I'm not against getting rid of this overhead, I just don't have a lot of time right now to review your patch 😢. Could you explain your motivation for this optimization a little more? Did you find this overhead to be problematic in a real world situation?

If this patch is just for the fun of optimization, I think the requirements would be that it doesn't break any existing usage of the API. Is this the case for your PR? I wasn't sure based on the PR description.

@costela
Contributor Author

costela commented Jul 24, 2022

300ns of overhead doesn't seem like a big deal to me in the context of an HTTP request.

You're right, but the allocations add up and have knock-on effects. We're not quite at 1M RPS just yet, but we can already feel the effect that 7 extra allocations per request have on GC and latency. It's definitely not a make-or-break thing, but it's measurable.

Did you find this overhead to be problematic in a real world situation?

Kinda: we operate a video CDN, and we're trying to squeeze a bit more performance out of our edge services. When profiling these services, this double-wrap case is one of the things that caught our eye, and we thought it might be worth addressing.
Unfortunately, my original assessment wasn't quite right: even with this PR, we won't be able to completely get rid of the server-timing overhead, because it uses Wrap directly with its custom hooks and isn't interested in the metrics.
So, all in all, we're back to "kinda" 😉

the requirements would be that it doesn't break any existing usage of the API.

I can only imagine one case where this change might change people's metrics:

  • using httpsnoop in multiple places in the middleware chain, and
  • writing to the response writer on the way down the chain, before hitting the final wrapped handler.

Without this PR, both metrics would show different values. With this PR, both metrics would show the same values.
But this feels like a bit of an edge-case? Not sure.
