[ECO-483] Add order history !!!!! #485
Conversation
src/rust/dbv2/migrations/2023-09-20-193413_add_limit_order_events/up.sql (resolved)
fn poll_interval(&self) -> Option<std::time::Duration> {
    Some(std::time::Duration::from_secs(60 * 60))
}
I feel like we're going to want a much lower update interval for this; an hour of latency is kind of high. I was thinking 1-5 seconds, since it's not like this is a huge amount of work that it's doing.
Yes, I intended it to be 5 seconds, as that's what I did in the `ready` function, but forgot to do it here. Thanks for spotting that.
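For reference, a minimal sketch of the corrected interval, using the same method shown in the diff:

```rust
// Poll every 5 seconds instead of every hour, matching the interval
// already used in the `ready` function (per the discussion above).
fn poll_interval(&self) -> Option<std::time::Duration> {
    Some(std::time::Duration::from_secs(5))
}
```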
for x in limit_events {
    sqlx::query!(
        r#"
        INSERT INTO aggregator.aggregated_events VALUES (
Something tells me we should be using a SQL transaction here, because we might insert half the events into user history before erroring out. Then it won't insert any events into this table, and we'll get duplicated work on the events that didn't error out.
Yes, just did that 👍
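A rough sketch of the transactional shape being agreed on here, assuming `sqlx` (the exact executor argument, `&mut *tx` vs `&mut tx`, depends on the `sqlx` version in use):

```rust
// Sketch: run the per-event inserts inside one SQL transaction so that
// either every event is written (to the user history tables and to
// aggregator.aggregated_events) or none of them are.
let mut tx = self
    .pool
    .begin()
    .await
    .map_err(|e| DataAggregationError::ProcessingError(anyhow!(e)))?;

for x in limit_events {
    // ... the same INSERT statements as before, but executed against
    // `&mut *tx` instead of `&self.pool` ...
}

// Nothing becomes visible to readers until the commit succeeds; on error
// the transaction is rolled back when `tx` is dropped.
tx.commit()
    .await
    .map_err(|e| DataAggregationError::ProcessingError(anyhow!(e)))?;
```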
.fetch_all(&self.pool)
.await
.map_err(|e| DataAggregationError::ProcessingError(anyhow!(e)))?;
let change_events = sqlx::query!(
We need to sort by the txn version and event idx so that we always aggregate the change-size events in the order that they happen.
Do we? I don't see the point; could you explain why this is a necessity?
The only case I could come up with where this could be an issue is if someone can close an order by changing its size to 0. Is that a possibility?
.map_err(|e| DataAggregationError::ProcessingError(anyhow!(e)))?;
    }
}
for x in &change_events {
I suspect that with this you may get a size underflow if an order is created at 30, then changes size to 50, then fills 45, then changes size to 60, then fills 60. The total filled size is 105, but the size you'll have in the user history before you process fill events is 60.
I believe you have to process/aggregate the events in the order that they happen. The txn version and event idx form a total ordering, so that should help you.
The underflow isn't a problem IMO. If the events are indexed in order, the underflow will only exist inside the transaction and will be gone once the transaction is done (DB transactions). If they are not, then ordering or not ordering will not make a difference; even so, the underflow would last at most 5 seconds between aggregator scans. We can talk more about this on a call.
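For concreteness, the ordering being asked for might look something like this; the table and column names (`change_order_size_events`, `txn_version`, `event_idx`) are illustrative assumptions:

```rust
// Sketch: fetch change-size events in the order they happened on-chain
// (txn version, then event index) so aggregation replays them
// deterministically. Table and column names are assumptions.
let change_events = sqlx::query!(
    r#"
        SELECT *
        FROM change_order_size_events
        ORDER BY txn_version, event_idx
    "#
)
.fetch_all(&self.pool)
.await
.map_err(|e| DataAggregationError::ProcessingError(anyhow!(e)))?;
```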
Need to update the processor, aggregate with a SQL transaction, and aggregate the events in the order they occur. See comments :-)
- add an `order_type` to general order representation
- update the swap logic to be correct
- `aggregated_events` table to keep track of which events were aggregated
- `user_history` 👉 general data
- `user_history_limit` 👉 limit order specific data
- `user_history_market` 👉 market order specific data
- `user_history_swap` 👉 swap order specific data
- `add_limit_order_events/up.sql`
- `event_types/up.sql`
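As a rough illustration of how the `aggregated_events` marker table described above could be used, here is a hypothetical sketch; the events table name and the `(txn_version, event_idx)` key are assumptions, not the actual schema:

```rust
// Hypothetical sketch: only pick up events that have not yet been recorded
// in aggregator.aggregated_events, so each event is aggregated exactly once.
let pending = sqlx::query!(
    r#"
        SELECT e.*
        FROM limit_order_events AS e
        WHERE NOT EXISTS (
            SELECT 1
            FROM aggregator.aggregated_events AS a
            WHERE a.txn_version = e.txn_version
              AND a.event_idx = e.event_idx
        )
        ORDER BY e.txn_version, e.event_idx
    "#
)
.fetch_all(&self.pool)
.await
.map_err(|e| DataAggregationError::ProcessingError(anyhow!(e)))?;
```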