Improve the robustness of the balance region scheduler #85
Conversation
Signed-off-by: bufferflies <[email protected]>
### Store pick strategy
It can group all the stores based on their labels (for example TiKV and TiFlash) and give a low-score group more chances to schedule, but the region with the highest score should still have the highest priority to be selected.
Please provide more detail about labels and about how a low-score group can get more chances to schedule.
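As one purely hypothetical illustration of the idea (the inverse-score weighting and the label values below are assumptions, not the proposal's actual algorithm), a sketch in Go could group stores by label and draw a group with probability inversely proportional to its score:

```go
// Hypothetical illustration only: draw a label group with weight 1/score, so
// low-score groups get more scheduling chances without starving the others.
package main

import (
	"fmt"
	"math/rand"
)

type group struct {
	label string  // e.g. "engine=tikv" or "engine=tiflash"
	score float64 // aggregate balance score of the group
}

// pickGroup draws a group with weight 1/score.
func pickGroup(groups []group) group {
	total := 0.0
	for _, g := range groups {
		total += 1 / g.score
	}
	r := rand.Float64() * total
	for _, g := range groups {
		r -= 1 / g.score
		if r <= 0 {
			return g
		}
	}
	return groups[len(groups)-1]
}

func main() {
	groups := []group{{"engine=tikv", 80}, {"engine=tiflash", 20}}
	counts := map[string]int{}
	for i := 0; i < 1000; i++ {
		counts[pickGroup(groups).label]++
	}
	// The low-score group is picked roughly 4x as often here.
	fmt.Println(counts)
}
```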
#### Consider influence to the leader
Normally, one operator consists of a region, a source store, and a target store. The key work, such as generating and sending the snapshot, is done by the region leader. It is not friendly to the leader if the majority of operators are add-follower.
"It is not friendly to the leader if majority operator is add follow"---could you explain a bit more detail regarding this? Because for a region leader, the add follower operator should be up to 1 or 2. Or you mean the whole store.
I mean that the region leader generating and sending snapshots will occupy CPU and IO resources.
$$Influence = \sum_{i=0}^{j} step_{i}.Influence$$

$$Cost = 200 \cdot \ln\frac{region_{size}}{100\,\mathrm{KiB}}$$
Cost equals 200 if operator influence is 1Mb, or equals 600 if operator influence is 1gb.
1Mb--->1MB
1gb --->1GB
Should ln be log10? Otherwise 200 * ln(1MB / 100KB) won't be 200.
Why use the formula log(region_size / 100KB)? It makes little difference between region sizes of 500MB and 1GB, for example, but the actual cost difference between 500MB and 1GB is much bigger.
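For reference, a quick sketch (not PD code, just evaluating the formula as written) that prints the cost for a few region sizes:

```go
// Evaluate Cost = 200 * ln(regionSize / 100KiB) for 1MiB, 500MiB and 1GiB.
package main

import (
	"fmt"
	"math"
)

func operatorCost(regionSizeKiB float64) float64 {
	return 200 * math.Log(regionSizeKiB/100) // natural log, baseline 100KiB
}

func main() {
	sizes := map[string]float64{"1MiB": 1 << 10, "500MiB": 500 << 10, "1GiB": 1 << 20}
	for name, kib := range sizes {
		fmt.Printf("%-6s -> cost %.1f\n", name, operatorCost(kib))
	}
}
```

With the natural log, a 1MiB region already costs about 466 rather than 200, and 500MiB vs 1GiB differ by only about 143, which matches both concerns above.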
#### Operator life cycle
The operator life cycle can be divided into several stages: created, executing (started), and complete. PD checks the operator's stage through region heartbeats and cancels an operator if its running time exceeds a fixed value (10m).
10m may not be enough. Why not make it configurable?
What if TiKV crashes? The heartbeat request may not carry the operator info anymore; how will PD handle that?
It would be better if we could calculate the expected execution duration of every step from the major factors, including region size, IO limit, and operator concurrency, like this:
There's no per-snapshot limit, so each snapshot's speed cannot simply be 100MB/6.
Also, snapshot generation duration cannot be ignored in the single RocksDB instance version, since we have to scan to get the region's snapshot.
I think the total time threshold should be pretty conservative, probably 1 hour at least.
I disagree that this threshold should be conservative. If a region has only two peers and needs one more, it would wait one hour before trying another target store when the original target store is down or fails for some other reason.
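A rough sketch of what such a per-step estimate might look like, assuming the store's snapshot IO limit is shared evenly by its in-flight operators; the function name, the padding factor, and the example numbers (a 100MB/s limit shared by 6 operators, as in the comment above) are illustrative assumptions:

```go
// Estimate a step's duration from region size, the per-store snapshot IO
// limit, and how many operators share that limit. Not PD's implementation.
package main

import (
	"fmt"
	"time"
)

func expectedStepDuration(regionSizeMB, ioLimitMBps float64, concurrency int) time.Duration {
	if concurrency < 1 {
		concurrency = 1
	}
	perOpMBps := ioLimitMBps / float64(concurrency) // bandwidth one operator can expect
	seconds := regionSizeMB / perOpMBps             // pure transfer time
	const padding = 2.0                             // headroom for snapshot generation and apply
	return time.Duration(seconds * padding * float64(time.Second))
}

func main() {
	// A 512MB region with a 100MB/s snapshot limit shared by 6 operators.
	fmt.Println(expectedStepDuration(512, 100, 6)) // roughly one minute with these numbers
}
```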
### Sync global config
There are some global configs that all components need to synchronize, like `region-max-size` and `io-limit`. Using the etcd API to implement global config may be a good idea, like [this](https://github.com/pingcap/tidb/pull/31010/files).
My concern is that such configuration is likely the same for the whole cluster. Is it worth asking every TiKV to report these values?
Even if some TiKV changes the value, which region size value will PD use for the formula above?
To me, region size should be a cluster-level config.
In PR 31010, TiKV doesn't need to report to PD; it just watches this config.
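A minimal sketch of that pattern with the etcd clientv3 API; the key prefix and the config value are made up here, not the exact interface from pingcap/tidb#31010:

```go
// One component writes a cluster-wide value under an etcd prefix; every other
// component watches the prefix instead of polling or reporting.
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"}, // PD's embedded etcd
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx := context.Background()
	const prefix = "/global/config/" // hypothetical prefix

	// Publish a cluster-wide value once, e.g. region-max-size.
	if _, err := cli.Put(ctx, prefix+"region-max-size", "144MiB"); err != nil {
		panic(err)
	}

	// Watch the prefix for changes made by any component.
	for resp := range cli.Watch(ctx, prefix, clientv3.WithPrefix()) {
		for _, ev := range resp.Events {
			fmt.Printf("%s changed to %s\n", ev.Kv.Key, ev.Kv.Value)
		}
	}
}
```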
Canceling an operator can be handled by TiKV rather than PD, but TiKV should notify PD after it cancels an operator.
## Questions
We noticed that when scaling out a new node, it's much faster to move the data over if the new node does not become leader until the data has been moved. But of course in some scenarios we hope the new node can act as leader ASAP, so it would be better to have an option that supports both scenarios.
For scaling in an old node, in the current implementation, is transferring leaders the first step before moving data?
In the past, scaling in a node would evict its leaders first.
Whether the new region peers act as leaders should depend on configuration for the different scenarios.
It will add a new store limit type to decrease the leader load on every store.
Can you provide more details about how we should use this new limit type to decrease the load?
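For illustration only, here is one way such a limit type might be wired up; the type name SendSnapshot, the token-bucket accounting, and the numbers are assumptions rather than PD's actual storelimit implementation:

```go
// Sketch of a leader-oriented store limit: an add-follower operator charges
// the target store's AddPeer budget and the leader store's SendSnapshot
// budget, so leaders with many outgoing snapshots stop being picked.
package main

import "fmt"

// LimitType distinguishes what an operator consumes on a store.
type LimitType int

const (
	AddPeer LimitType = iota
	RemovePeer
	SendSnapshot // hypothetical new type, charged to the leader's store
)

// StoreLimit is a simple per-type budget.
type StoreLimit struct {
	available map[LimitType]float64
	ratePerOp map[LimitType]float64
}

func NewStoreLimit() *StoreLimit {
	return &StoreLimit{
		available: map[LimitType]float64{AddPeer: 30, RemovePeer: 30, SendSnapshot: 30},
		ratePerOp: map[LimitType]float64{AddPeer: 1, RemovePeer: 1, SendSnapshot: 1},
	}
}

// Take reserves budget for one operator of the given type.
func (l *StoreLimit) Take(t LimitType) bool {
	if l.available[t] < l.ratePerOp[t] {
		return false
	}
	l.available[t] -= l.ratePerOp[t]
	return true
}

func main() {
	leaderStore := NewStoreLimit()
	for i := 0; i < 35; i++ {
		if !leaderStore.Take(SendSnapshot) {
			fmt.Println("leader store exhausted its snapshot budget at operator", i)
			break
		}
	}
}
```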
### Sync global config
There are some global configs that all components need to synchronize, like `region-max-size` and `io-limit`. Using the etcd API to implement global config may be a good idea, like [this](https://github.com/pingcap/tidb/pull/31010/files).
Do you mean sync the config from TiKV to PD, or from PD to TiKV?