
[ECO-483] Add order history !!!!! #485

Merged: 7 commits merged into main from ECO-483 on Sep 27, 2023

Conversation

@CRBl69 (Collaborator) commented Sep 25, 2023

  • Add a database-only Docker file to ease the development process.
  • Add an aggregated_events table to keep track of which events have been aggregated.
  • Add 4 tables for user history:
    • user_history 👉 general data.
    • user_history_limit 👉 limit order specific data.
    • user_history_market 👉 market order specific data.
    • user_history_swap 👉 swap order specific data.
  • Add a user history processor to the aggregator.
    • On each tick, it processes all events that were not already aggregated (place, change, cancel, fill) and creates/updates the corresponding order (see the sketch after this list).
  • Fix naming in add_limit_order_events/up.sql.
  • Add triggers in event_types/up.sql.
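
A minimal sketch of that per-tick flow, for orientation; `Event`, `UserHistoryProcessor`, and the helper methods are hypothetical names for illustration, not the PR's actual types:

```rust
// Sketch only: hypothetical names, not the PR's actual types.
struct Event;
struct UserHistoryProcessor;

impl UserHistoryProcessor {
    /// Would query for events absent from aggregator.aggregated_events.
    fn fetch_unaggregated_events(&self) -> Vec<Event> {
        Vec::new()
    }

    /// Would create or update the matching user_history row(s).
    fn apply_to_user_history(&self, _event: &Event) {}

    /// Would record the event in aggregator.aggregated_events so the
    /// next tick does not replay it.
    fn mark_aggregated(&self, _event: &Event) {}

    /// One aggregator tick: replay every not-yet-aggregated event.
    fn tick(&self) {
        for event in self.fetch_unaggregated_events() {
            self.apply_to_user_history(&event);
            self.mark_aggregated(&event);
        }
    }
}
```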

@vercel bot commented Sep 25, 2023

The latest updates on your projects:

| Name   | Status    | Updated (UTC)       |
| ------ | --------- | ------------------- |
| econia | ❌ Failed | Sep 27, 2023 7:50pm |

@CRBl69 CRBl69 requested a review from alnoki September 25, 2023 10:29
@vercel vercel bot temporarily deployed to Preview September 26, 2023 09:10 Inactive
@vercel vercel bot temporarily deployed to Preview September 26, 2023 09:54 Inactive
@vercel vercel bot temporarily deployed to Preview September 26, 2023 16:34 Inactive
```rust
}

fn poll_interval(&self) -> Option<std::time::Duration> {
    Some(std::time::Duration::from_secs(60 * 60))
```

Contributor commented:

I feel like we're going to want a much lower update interval for this; an hour of latency is kind of high. I was thinking 1-5 seconds; it's not like this is a huge amount of work that it's doing.

Collaborator Author (@CRBl69) replied:

Yes, I intended it to be 5 seconds, as that's what I did in the ready function, but forgot to do it here. Thanks for spotting that.
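
A minimal sketch of the intended fix, assuming the same hook shape as the snippet above:

```rust
fn poll_interval(&self) -> Option<std::time::Duration> {
    // 5 seconds instead of 60 * 60, matching the interval used in `ready`.
    Some(std::time::Duration::from_secs(5))
}
```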

```rust
for x in limit_events {
    sqlx::query!(
        r#"
        INSERT INTO aggregator.aggregated_events VALUES (
```

@elliottdehn (Contributor) commented Sep 26, 2023:

Something tells me we should be using a SQL transaction here, because we might insert half the events into user history before erroring out. Then nothing gets inserted into this table, and we'll get duplicated work on the events that didn't error out.

Collaborator Author (@CRBl69) replied:

Yes, just did that 👍
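
A minimal sketch of that change, assuming sqlx 0.7 with Postgres; the function name and the single bound column are hypothetical stand-ins for the PR's actual queries:

```rust
use sqlx::PgPool;

// Sketch only: hypothetical helper, assuming sqlx 0.7 with Postgres.
async fn mark_events_aggregated(
    pool: &PgPool,
    txn_versions: &[i64],
) -> Result<(), sqlx::Error> {
    // One transaction, so user-history writes and aggregated_events
    // markers commit or roll back together.
    let mut tx = pool.begin().await?;
    for &version in txn_versions {
        sqlx::query("INSERT INTO aggregator.aggregated_events VALUES ($1)")
            .bind(version)
            .execute(&mut *tx)
            .await?;
    }
    // Any error above rolls the whole batch back, so the aggregator
    // never sees a half-aggregated state.
    tx.commit().await?;
    Ok(())
}
```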

```rust
.fetch_all(&self.pool)
.await
.map_err(|e| DataAggregationError::ProcessingError(anyhow!(e)))?;
let change_events = sqlx::query!(
```

Contributor commented:

Need to sort by the version and event idx so that we always aggregate the change-size events in the order that they happen.

Collaborator Author (@CRBl69) replied:

Do we? I don't see the point; could you explain why this is a necessity?

Collaborator Author (@CRBl69) added:

The only case I could come up with where this could be an issue is if someone can close an order by changing its size to 0. Is that a possibility?

```rust
.map_err(|e| DataAggregationError::ProcessingError(anyhow!(e)))?;
}
}
for x in &change_events {
```

Contributor commented:

I suspect that with this you may get a size underflow if an order is created with size 30, then changes size to 50, then fills 45, then changes size to 60, then fills 60. The total filled size is 105, but the size you'll have in the user history before you process fill events is 60.

I believe that you have to process/aggregate the events in the order that they happen. The txn version and event idx form a total ordering, so that should help you.
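
A minimal sketch of the suggested ordering; the table and column names (`change_order_size_events`, `txn_version`, `event_idx`) are assumptions based on this thread, not necessarily the schema's actual names:

```rust
// Sketch only: fetch change events in the total order they occurred.
let change_events = sqlx::query(
    "SELECT * FROM change_order_size_events \
     ORDER BY txn_version, event_idx",
)
.fetch_all(&self.pool)
.await
.map_err(|e| DataAggregationError::ProcessingError(anyhow!(e)))?;
```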

Collaborator Author (@CRBl69) replied:

The underflow isn't a problem IMO. If the events are indexed in order, the underflow will only last inside the transaction and will be gone once the transaction is done (DB transactions). If they are not, then ordering or not ordering will not make a difference; either way, the underflow might last at most 5 seconds between aggregator scans. We can talk more about this on a call.

@elliottdehn (Contributor) left a review:

Need to update the processor, need to aggregate with a SQL transaction, and need to aggregate in the order of events as they occur. See comments :-)

@vercel vercel bot temporarily deployed to Preview September 27, 2023 07:19 Inactive
@vercel vercel bot temporarily deployed to Preview September 27, 2023 07:24 Inactive
@vercel vercel bot temporarily deployed to Preview September 27, 2023 19:50 Inactive
@elliottdehn elliottdehn merged commit 8dc71b3 into main Sep 27, 2023
2 of 3 checks passed
@elliottdehn elliottdehn deleted the ECO-483 branch September 27, 2023 19:53