perf: improve the performance of TPC-H Q9 #764
I cloned
Hi @xiaguan, thanks for your interest! The reason for the "skewed" tree in RisingLight is this rule:
risinglight/src/planner/rules/plan.rs Lines 95 to 99 in 604b4a1
As you can see, this rule is conditional and may not fire if the condition is not met after predicate pushdown. We can change it to an unconditional rule so that all combinations are covered:

```rust
rw!("join-reorder";
    "(join ?type ?cond2 (join ?type ?cond1 ?left ?mid) ?right)" =>
    "(join ?type (and ?cond1 ?cond2) ?left (join ?type true ?mid ?right))"
),
```

Another defect of the optimizer is that we don't swap the children of a join node now. That is to say, `A JOIN B` is never tried as `B JOIN A`. A possible solution may be like this:

```rust
rw!("join-swap";
    "(proj ?exprs (join ?type ?cond ?left ?right))" =>
    "(proj ?exprs (join ?type ?cond ?right ?left))"
),
```

Notice that we put a `proj` on top of the join in this rule. These are some of my ideas, but they have not been proven to work. If you are interested, feel free to continue this work!
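To make the effect of the two rules concrete, here is a toy, std-only sketch (not RisingLight code) of what they do to a plan tree. `Plan`, `reorder`, and `swap` are illustrative names I made up; join conditions and the `proj` wrapper are omitted:

```rust
// Toy plan tree: `reorder` is the unconditional "join-reorder" rule
// ((A ⋈ B) ⋈ C  →  A ⋈ (B ⋈ C)); `swap` is "join-swap" (L ⋈ R → R ⋈ L).
// Conditions and the `proj` wrapper from the real rules are omitted.

#[derive(Clone, Debug, PartialEq)]
enum Plan {
    Scan(&'static str),
    Join(Box<Plan>, Box<Plan>),
}

use Plan::*;

fn join(l: Plan, r: Plan) -> Plan {
    Join(Box::new(l), Box::new(r))
}

// (join (join A B) C) => (join A (join B C)); the real rule also
// merges the two join conditions, which is skipped here.
fn reorder(p: &Plan) -> Option<Plan> {
    if let Join(l, r) = p {
        if let Join(a, b) = &**l {
            return Some(join((**a).clone(), join((**b).clone(), (**r).clone())));
        }
    }
    None
}

// (join L R) => (join R L); the real rule wraps the result in a `proj`.
fn swap(p: &Plan) -> Option<Plan> {
    if let Join(l, r) = p {
        return Some(join((**r).clone(), (**l).clone()));
    }
    None
}

fn main() {
    // A left-deep ("skewed") tree: (part ⋈ supplier) ⋈ lineitem.
    let skewed = join(join(Scan("part"), Scan("supplier")), Scan("lineitem"));

    // join-reorder turns it right-deep: part ⋈ (supplier ⋈ lineitem).
    let reordered = reorder(&skewed).unwrap();
    assert_eq!(
        reordered,
        join(Scan("part"), join(Scan("supplier"), Scan("lineitem")))
    );

    // join-swap then exposes a different left child for further reordering.
    let swapped = swap(&reordered).unwrap();
    assert_eq!(
        swapped,
        join(join(Scan("supplier"), Scan("lineitem")), Scan("part"))
    );
    println!("reorder + swap ok");
}
```

Applying the two rewrites repeatedly lets the e-graph reach every join order, which is why the unconditional versions can cover combinations the conditional rule misses.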
risinglight/src/planner/rules/rows.rs Line 24 in a0882cd
For the first issue, yes, we should provide row-count information from storage to the optimizer. Currently, row-count statistics are available in disk storage but not in memory storage. For the second issue, we can reduce memory usage by optimizing the hash join executor. Currently it collects all input chunks from both sides at the beginning (code). This can be refactored into a streaming style. Besides, a better join order may also help reduce memory usage.
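The streaming refactor described above can be sketched with std only; `hash_join_streaming`, `Row`, and the chunk layout are hypothetical simplifications, not the actual executor:

```rust
// Sketch: a hash join that materializes only the build side into a hash
// table, then consumes probe-side chunks one at a time and emits matches
// immediately. Probe-side memory stays bounded by one chunk instead of
// the whole input, unlike collecting all chunks from both sides up front.

use std::collections::HashMap;

type Row = (i64, String); // (join key, payload) -- simplified schema

fn hash_join_streaming<I>(build: Vec<Row>, probe_chunks: I) -> Vec<(String, String)>
where
    I: IntoIterator<Item = Vec<Row>>,
{
    // Phase 1: build a key -> payloads table from the (ideally smaller) side.
    let mut table: HashMap<i64, Vec<String>> = HashMap::new();
    for (key, payload) in build {
        table.entry(key).or_default().push(payload);
    }

    // Phase 2: probe chunk by chunk; each chunk can be dropped as soon as
    // its matches have been produced.
    let mut out = Vec::new();
    for chunk in probe_chunks {
        for (key, payload) in chunk {
            if let Some(matches) = table.get(&key) {
                for m in matches {
                    out.push((m.clone(), payload.clone()));
                }
            }
        }
    }
    out
}

fn main() {
    let build = vec![(1, "supplier#1".to_string()), (2, "supplier#2".to_string())];
    let probe = vec![
        vec![(1, "lineitem#a".to_string())],
        vec![(2, "lineitem#b".to_string()), (3, "lineitem#c".to_string())],
    ];
    let joined = hash_join_streaming(build, probe);
    assert_eq!(joined.len(), 2); // key 3 has no match
    println!("{:?}", joined);
}
```

This is also where join order matters: a good plan puts the smaller input on the build side, so the only fully materialized structure stays small.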
The rule you provided above works. My other attempts have all failed. I plan to take a look at other problems.
This query takes 104s on the scale factor 1 dataset, while DuckDB needs less than 1s.
We should investigate the main overhead and optimize it.