Fair chaining APIs #105
One idea I had was to add an additional trait, something like this:

```rust
trait SchedulingWeight {
    fn scheduling_weight(&self) -> usize;
}
```

Then for tuples we'd have something like:

```rust
impl<A, B> SchedulingWeight for (A, B)
where
    A: SchedulingWeight,
    B: SchedulingWeight,
{
    fn scheduling_weight(&self) -> usize {
        self.0.scheduling_weight() + self.1.scheduling_weight()
    }
}
```

In the implementation of … There are some significant challenges with this approach. Mainly, most futures do not implement … So I think a more workable solution is to extend the future trait with a …

Also, if Rust gets impl specialization or refinement, that might open some other options up to us.
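To make the weight arithmetic concrete, here is a minimal, self-contained sketch. The `Leaf` stand-in type and its impls are hypothetical (none of this is crate API); it only shows how nested tuples would report the number of leaf futures they contain:

```rust
// Hypothetical sketch (not crate API): a `SchedulingWeight` trait that
// reports how many leaf futures a value contains, so an outer combinator
// could divide its polls proportionally.
trait SchedulingWeight {
    fn scheduling_weight(&self) -> usize;
}

/// A stand-in for a single future; every leaf counts as weight 1.
struct Leaf;

impl SchedulingWeight for Leaf {
    fn scheduling_weight(&self) -> usize {
        1
    }
}

// A pair's weight is the sum of its members' weights, so the count
// composes through arbitrary nesting.
impl<A: SchedulingWeight, B: SchedulingWeight> SchedulingWeight for (A, B) {
    fn scheduling_weight(&self) -> usize {
        self.0.scheduling_weight() + self.1.scheduling_weight()
    }
}

fn main() {
    // `(a, (b, c))` contains three leaves, and its right-hand side two:
    // a fair outer scheduler would poll that side twice as often.
    let nested = (Leaf, (Leaf, Leaf));
    assert_eq!(nested.scheduling_weight(), 3);
    assert_eq!(nested.1.scheduling_weight(), 2);
    println!("nested weight = {}", nested.scheduling_weight());
}
```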
Adding something like …
I'm revisiting some of the outstanding issues on this repo, and I didn't get around to saying it last time: but I really like the idea of a …

Maybe we should ACP that to get an experiment going?
I think an experiment seems like a good idea. I wonder if we could do it as a separate crate to prove the idea, and then use that as evidence in an ACP?
Now that #104 exists to close #85, there is a real question about fairness and chaining. I've made the case before that in order to guarantee fairness, the scheduling algorithm needs to know about all the types it operates on. When we were still using permutations, and even just rng-based starting points, I believed this to be true. But I'm slowly coming around to @eholk's idea that this may not be the case.
## Benefits
If we resolve this, I think we may be able to improve our ergonomics. Take for example the following code, which I believe to be quite representative of `futures-concurrency`'s ergonomics:

…

The tuple instantiation imo looks quite foreign. In this repo's style, we'd probably instead choose to name the futures, flattening the operation somewhat:

…

But while I'd argue this is more pleasant to read, we can't expect people to always do this. The earlier example is often easier to write, and thus will be written as such. But a chaining API could probably be even easier to author as well:

…
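To illustrate why naive pairwise chaining conflicts with fairness, here is a hedged synchronous analogue using plain iterators. `Merge2` and `merge` are illustrative stand-ins, not futures-concurrency APIs:

```rust
// A synchronous stand-in for stream merging: `Merge2` round-robins
// between two iterators. It shows why naive pairwise chaining is unfair:
// in `merge(merge(a, b), c)` the outer merge gives `c` half of all turns,
// while `a` and `b` split the other half between them.
struct Merge2<A, B> {
    a: A,
    b: B,
    turn: bool,
}

impl<T, A, B> Iterator for Merge2<A, B>
where
    A: Iterator<Item = T>,
    B: Iterator<Item = T>,
{
    type Item = T;

    fn next(&mut self) -> Option<T> {
        // Alternate which side goes first, falling back to the other side.
        self.turn = !self.turn;
        if self.turn {
            self.a.next().or_else(|| self.b.next())
        } else {
            self.b.next().or_else(|| self.a.next())
        }
    }
}

fn merge<A, B>(a: A, b: B) -> Merge2<A, B> {
    Merge2 { a, b, turn: false }
}

fn main() {
    let (a, b, c) = (
        std::iter::repeat("a"),
        std::iter::repeat("b"),
        std::iter::repeat("c"),
    );
    // The chained shape: `a` and `b` are merged first, then `c` is added.
    let first_eight: Vec<_> = merge(merge(a, b), c).take(8).collect();
    // `c` wins every other slot; `a` and `b` each get only a quarter.
    assert_eq!(first_eight, ["a", "c", "b", "c", "a", "c", "b", "c"]);
    println!("{:?}", first_eight);
}
```

The nested shape bakes the imbalance into the structure itself, which is exactly the problem a weight-aware scheduler would have to undo.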
We used to have this API, but we ended up removing it. And I think there's definitely a case to be made to add this back. Just like we'd be expected to have both `async_iter::AsyncIterator::chain` and `impl async_iter::Chain for tuple`, so could we have this for both variants of `merge`.

## Implementation
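As a sketch of what offering both call shapes could look like, here is the same idea on plain (synchronous) iterators: the familiar method form `a.chain(b)` next to a tuple form `(a, b).chain()`. `TupleChain` is a hypothetical trait name, and nothing here reflects a settled `async_iter` design:

```rust
// Hypothetical tuple variant of `chain`, expressed on std iterators.
trait TupleChain {
    type Output;
    fn chain(self) -> Self::Output;
}

impl<T, A, B> TupleChain for (A, B)
where
    A: Iterator<Item = T>,
    B: Iterator<Item = T>,
{
    type Output = std::iter::Chain<A, B>;

    fn chain(self) -> Self::Output {
        // Delegate to the std method form: the tuple variant is sugar.
        self.0.chain(self.1)
    }
}

fn main() {
    // Method-chaining shape, as on `Iterator` today.
    let by_method: Vec<i32> = [1, 2].into_iter().chain([3, 4]).collect();
    // Tuple shape, as this crate favors for `merge`, `join`, and friends.
    let by_tuple: Vec<i32> = ([1, 2].into_iter(), [3, 4].into_iter()).chain().collect();
    assert_eq!(by_method, by_tuple);
    assert_eq!(by_tuple, [1, 2, 3, 4]);
    println!("{:?}", by_tuple);
}
```

Unlike `merge`, `chain` has no fairness question (it fully drains one side before the other), which is why shipping both shapes is uncontroversial there.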
I'd love to hear more from @eholk here. But my initial hunch is that perhaps something like `ExactSizeIterator` could help us. But rather than return how many items are contained in an iterator, it'd return the number of iterators contained within. That way outer iterators can track how often they should call inner iterators before moving on. I think this may need specialization to work though?

I think even if we can't make the API strictly fair, it might still be worth adding the chaining API - and we can possibly resolve the fairness issues in the stdlib? Or maybe we can add a nightly flag with the specializations on it as part of this lib? Do folks have thoughts on this?