CoreService refactor #169
```diff
@@ -80,7 +80,7 @@ where
     Hash::of((0b1, lhs, rhs))
 }

-pub trait SupportedDigest: Digest + private::Sealed + Sized + 'static {
+pub trait SupportedDigest: Digest + private::Sealed + Default + Sized + 'static {
```
`Sha256` already implements `Default`; this just makes `#[derive(Default)]` work in more places.
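A minimal sketch of why the `Default` supertrait helps (this is illustrative, not the actual registry code; `Sha256Like` stands in for `sha2::Sha256`): once every `SupportedDigest` is known to be `Default`, `#[derive(Default)]` on generic containers produces an impl that is usable for any supported digest.

```rust
// Sealed-trait pattern with `Default` as a supertrait, mirroring the
// shape of the diff above (names here are assumed for illustration).
mod private {
    pub trait Sealed {}
}

pub trait SupportedDigest: Default + Sized + private::Sealed + 'static {}

#[derive(Default)]
pub struct Sha256Like; // stand-in for sha2::Sha256, which implements Default

impl private::Sealed for Sha256Like {}
impl SupportedDigest for Sha256Like {}

// The derive generates `impl<D: SupportedDigest + Default> Default`;
// with `Default` as a supertrait, every `SupportedDigest` satisfies it.
#[derive(Default)]
pub struct Map<D: SupportedDigest> {
    digest: D,
    entries: Vec<u64>,
}

fn main() {
    let map: Map<Sha256Like> = Map::default();
    assert!(map.entries.is_empty());
    let _ = &map.digest; // silence unused-field lint
    println!("derived Default works");
}
```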
Much simpler. Really good work refactoring tricky code. Looks good to me, but I'd love @peterhuene review as well.
```rust
if last_checkpoint != checkpoint {
    state.checkpoint(); // for the side-effect of updating map_index
    last_checkpoint = checkpoint;
}
```
I think this contains a bug copied over from my initial implementation of server restart that I haven't gotten around to fixing.
Let's say the initial stream looks like (in record insert order):
| Leaf | Checkpoint |
|---|---|
| A | C1 |
| B | C2 |
| C | C3 |
| D | C3 |
| E | C3 |
| F | C4 |
I think this code would correctly create a checkpoint `C1` containing `A` (`C1 != <empty-hash>`) and a checkpoint `C2` containing `B` (`C2 != C1`), but would not correctly checkpoint `C3`: because we're checkpointing at the transition from leaf `B` to `C` (`C3 != C2`) in the stream, only leaf `C` ends up in that checkpoint. The next checkpoint would then incorrectly include leaves `D`, `E`, and `F` (`C4 != C3`).
Does that make sense or am I completely off-base?
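The behavior described above can be reproduced with a small simulation (this is not the server code; it just models a loop that applies each leaf and then cuts a checkpoint whenever the record's checkpoint label changes):

```rust
// Simulates the buggy checkpoint-on-transition loop: each record's leaf
// is applied first, then a checkpoint is cut if the record's checkpoint
// label differs from the last one seen.
fn simulate(stream: &[(&'static str, &'static str)]) -> Vec<(&'static str, Vec<&'static str>)> {
    let mut last_checkpoint = "<empty>";
    let mut pending = Vec::new();
    let mut emitted = Vec::new();

    for &(leaf, checkpoint) in stream {
        pending.push(leaf); // apply the leaf...
        if last_checkpoint != checkpoint {
            // ...then cut at the label transition. The bug: leaves that
            // belong to the *previous* label land in this cut too.
            emitted.push((checkpoint, pending.clone()));
            pending.clear();
            last_checkpoint = checkpoint;
        }
    }
    emitted
}

fn main() {
    // The stream from the table above, in record insert order.
    let stream = [("A", "C1"), ("B", "C2"), ("C", "C3"),
                  ("D", "C3"), ("E", "C3"), ("F", "C4")];
    for (cp, leaves) in simulate(&stream) {
        println!("{cp}: {leaves:?}");
    }
    // Prints:
    // C1: ["A"]
    // C2: ["B"]
    // C3: ["C"]            <- should contain C, D, E
    // C4: ["D", "E", "F"]  <- should contain only F
}
```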
Ah yeah I was a little confused about this but wanted to stick with existing behavior. I'm currently working on a followup refactor that avoids this problem entirely, but I can retrofit this PR if you'd prefer.
I'll approve and leave it up to you whether you want to include it now or wait to the refactor.
Other than what I believe to be that existing bug, these changes look really good and make this much easier to wrap one's brain around.
OK, since this is such a big diff I'm going to get it out of the way.

This flattens all of the (other) server services into `CoreService`. I've mostly kept the external interfaces the same for now, with some notable exceptions:

- `CoreService` now contains an `Arc<Inner>`, so it no longer needs a wrapping `Arc`.
- The `log` and `map` services went away; so too have `CoreService::log_data()` and `::map_data()`. The related proof methods have been flattened into `CoreService` as e.g. `::log_consistency_proof`.
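The `Arc<Inner>` change can be sketched as follows (field and method names here are assumed, not the actual `CoreService` API): because the service itself holds the `Arc`, cloning it only bumps a refcount, so callers no longer need their own wrapping `Arc`.

```rust
use std::sync::Arc;

// Hypothetical sketch of the "Arc inside the handle" pattern.
#[derive(Clone)]
struct CoreService {
    inner: Arc<Inner>, // shared state lives behind the internal Arc
}

struct Inner {
    name: String,
}

impl CoreService {
    fn new(name: &str) -> Self {
        Self { inner: Arc::new(Inner { name: name.into() }) }
    }

    fn name(&self) -> &str {
        &self.inner.name
    }
}

fn main() {
    let svc = CoreService::new("core");
    let svc2 = svc.clone(); // cheap clone; no outer Arc needed
    assert_eq!(svc.name(), svc2.name());
    assert_eq!(Arc::strong_count(&svc.inner), 2);
    println!("clones share inner: {}", svc.name());
}
```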
The `core.rs` diff is completely useless; see just the new version here: https://github.com/bytecodealliance/registry/blob/core-svc-refactor/crates/server/src/services/core.rs

The new implementation reuses a lot of the existing services' code but does make some functional changes:
- `CancellationToken` has gone away; the service can now be stopped by dropping all references to it. This also meant the `StopHandle` could be reduced to a normal tokio `JoinHandle`, which can be `await`ed to give the record processing loop a chance to finish any queued updates.
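A minimal sketch of that shutdown pattern, with std threads and channels standing in for tokio tasks so it runs without dependencies (the real service uses tokio; names and shapes here are assumed for illustration):

```rust
use std::sync::mpsc;
use std::thread;

// Stand-in for the record-processing loop: recv() fails once every
// Sender is dropped, which ends the loop -- no cancellation token needed.
fn run_service(records: &[u32]) -> u32 {
    let (tx, rx) = mpsc::channel::<u32>();

    let handle = thread::spawn(move || {
        let mut total = 0;
        while let Ok(record) = rx.recv() {
            total += record;
        }
        total
    });

    for &r in records {
        tx.send(r).unwrap();
    }
    drop(tx); // dropping the last sender stops the service

    // Joining gives queued records a chance to finish first, analogous
    // to awaiting the tokio JoinHandle.
    handle.join().unwrap()
}

fn main() {
    let total = run_service(&[1, 2, 3]);
    assert_eq!(total, 6);
    println!("processed total: {total}");
}
```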