
fix insertion performance issues for our Group structures #168

Closed · wants to merge 3 commits

Conversation

yoshuawuyts (Owner)

smol-rs/futures-lite#93 (comment) showed a performance issue with FutureGroup. The issue turned out to be the bitvec crate performing very poorly when resize is called on every insertion, even when no resize is actually needed. By guarding the resize call so it only runs when the bitvec must actually grow, this issue goes away entirely.
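For illustration, here is a minimal sketch of that guard pattern. The `Occupancy` type and its method are hypothetical, not the crate's actual internals; only the bitvec calls (`len`, `resize`, `set`) are real API:

```rust
use bitvec::vec::BitVec;

/// Hypothetical wrapper tracking which slots in a group are occupied.
struct Occupancy {
    bits: BitVec,
}

impl Occupancy {
    /// Mark `index` as occupied, growing the bitvec only when needed.
    fn set_occupied(&mut self, index: usize) {
        // Guard: only resize when the index falls outside the current
        // length. Calling `resize` unconditionally on every insertion
        // is what made insertion so slow.
        if index >= self.bits.len() {
            self.bits.resize(index + 1, false);
        }
        self.bits.set(index, true);
    }
}
```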

This did not show up in our benchmarks because we never exercised insert as part of the actual benchmark; we had factored it out into the test setup. That is theoretically more "correct", but it also meant the issue never surfaced. We should probably add end-to-end benchmarks for all of our operations as well, to catch potential issues elsewhere, especially for our Group structures (see the sketch below). Thanks!
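A sketch of what such an end-to-end benchmark could look like with criterion, keeping insert inside the measured loop. The benchmark name, iteration count, and futures are illustrative, and it assumes `FutureGroup::new` and `FutureGroup::insert` as exposed by futures-concurrency:

```rust
use criterion::{criterion_group, criterion_main, Criterion};
use futures_concurrency::future::FutureGroup;

fn bench_insert(c: &mut Criterion) {
    c.bench_function("group/futures_concurrency::FutureGroup::insert", |b| {
        b.iter(|| {
            // Insertion happens inside the measured closure, so its
            // cost shows up in the results instead of being hidden
            // in the setup phase.
            let mut group = FutureGroup::new();
            for n in 0..100u32 {
                group.insert(std::future::ready(n));
            }
            group
        })
    });
}

criterion_group!(benches, bench_insert);
criterion_main!(benches);
```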

Benchmark results

Using the benchmarks authored by @notgull in this repo, run for 10,000 iterations (rather than 1 million):

before

group/futures_concurrency::FutureGroup
                        time:   [2.5047 ms 2.5065 ms 2.5082 ms]
Found 1 outliers among 10 measurements (10.00%)
  1 (10.00%) low mild

after

     Running benches/co.rs (target/release/deps/co-71ce600071789f52)
group/futures_concurrency::FutureGroup
                        time:   [672.93 µs 675.00 µs 678.70 µs]
                        change: [-73.137% -73.055% -72.953%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 1 outliers among 10 measurements (10.00%)
  1 (10.00%) high mild

yoshuawuyts (Owner, Author) commented Mar 19, 2024

Oh, CI is failing because we're not resizing when we need to. That should be easy enough to fix without reintroducing the case we were hitting before. I'm not going to debug this right now; if someone wants to snipe the fix, please feel free! I think what's happening is that we're forgetting to give our bitvecs an initial size. Either that, or there's an off-by-one error somewhere.
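If the missing-initial-size theory is right, the fix might look roughly like the following. This is a hypothetical sketch, not the actual patch; only the bitvec calls are real API:

```rust
use bitvec::vec::BitVec;

// Hypothetical constructor: size the occupancy bitvec up front to the
// group's capacity, so the guarded resize path never sees an index
// past the end for slots that are already within capacity.
fn new_occupancy(capacity: usize) -> BitVec {
    let mut bits = BitVec::with_capacity(capacity);
    bits.resize(capacity, false);
    bits
}
```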

yoshuawuyts (Owner, Author)

Filed #175 instead, closing in favor of the reworked approach.
