gramma && rename title
Signed-off-by: bufferflies <[email protected]>
bufferflies committed Dec 29, 2021
1 parent 6da63b8 commit 509e480
Showing 1 changed file with 13 additions and 15 deletions.
28 changes: 13 additions & 15 deletions text/0083-Improve-the-robust-of-balance-scheduler.md
@@ -1,46 +1,44 @@
# Improve the robust of balance scheduler
# Improve the robustness of the balance region scheduler

- RFC PR: [https://github.com/tikv/rfcs/pull/83](https://github.com/tikv/rfcs/pull/83)
- Tracking Issue: [https://github.com/tikv/pd/issues/4428](https://github.com/tikv/pd/issues/4428)

## Summary

Make scheduler more robust for dynamic region size.
Make schedulers more robust to dynamic region sizes.

## Motivation

We have observed many different situations when the region size is different. The major drawback comes from this aspects:
We have observed many different situations when region sizes differ. The major drawbacks come from the following aspects:

1. Balance region scheduler pick source store in order of store's score, the second store will be picked after the first store has not met some filter or retry times exceed fixed value, this problem is also exist in target pick strategy.
2. Operator has an import effect on region leader, and the leader is responsible in the operator life cycle. But the region leader will not be limited by any filter.
3. There are some factor that influence execution time of operator such as region size, IO limit, cpu load. PD needs to be more flexible to manage operator's life not fixed config.
1. The balance region scheduler picks the source store in descending order of store score; a lower-scored store is picked only after a higher-scored store fails some filter or the retry count exceeds a fixed value. When there are many placement rules or many TiKV stores, lower-scored stores, such as TiFlash, have less chance to be balanced.
2. Splitting RocksDB data and sending snapshots by the region leader occupies CPU and IO resources.
3. Some factors influence the execution time of an operator, such as region size, IO limit, and CPU load. PD needs to manage the operator's timeout threshold more flexibly rather than using a fixed value.
4. PD should know some global TiKV configs such as `region-max-size` and `region-report-interval`. These configs should be synchronized with PD.

## Detailed design

### Store pick strategy

It can arrange all the store based on label, like TiKV and TiFlash and allow low score group has more chance to scheduler. But the first score region should has highest priority to be selected.
It can group all the stores by label, such as TiKV and TiFlash, and give low-score groups more chances to schedule. But the store with the highest score should still have the highest priority to be selected.
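A minimal Go sketch of this grouping idea, using a simplified, hypothetical `Store` type rather than PD's actual structures: candidates are grouped by an engine-like label, each group is sorted by score, and every group contributes a candidate so that a small, low-score group (for example TiFlash) is not starved.

```go
package scheduler

import "sort"

// Store is a simplified stand-in for a PD store descriptor.
type Store struct {
	ID    uint64
	Label string  // e.g. "tikv" or "tiflash", taken from the engine label
	Score float64 // balance score; higher means more loaded
}

// groupByLabel splits candidate stores into per-label groups and sorts each
// group by score in descending order, so the highest-scored store of every
// group is tried first.
func groupByLabel(stores []Store) map[string][]Store {
	groups := make(map[string][]Store)
	for _, s := range stores {
		groups[s.Label] = append(groups[s.Label], s)
	}
	for label := range groups {
		g := groups[label]
		sort.Slice(g, func(i, j int) bool { return g[i].Score > g[j].Score })
	}
	return groups
}

// pickSources returns the top candidate of every group, so a low-score group
// still gets a chance to schedule while the top-scored store keeps priority
// inside its own group.
func pickSources(groups map[string][]Store) []Store {
	var picked []Store
	for _, g := range groups {
		if len(g) > 0 {
			picked = append(picked, g[0])
		}
	}
	return picked
}
```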

#### Consider Influence to leader

Normally, one operator consists of a region, a source store, and a target store. The key work, such as generating and sending the snapshot, is finished by the region leader. It is not friendly to the leader if the majority of operators are add-follower operators.

It will add new store limit as new limit type to decrease leader loads of every store.
It will add a new store limit type to decrease the leader load of every store. Picking a region should check whether the leader token is available.
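A sketch of how such a leader-side limit could gate region picking, assuming a hypothetical token-bucket `StoreLimit` type and a made-up `SendSnapshot` limit name; these are illustrative, not the actual PD API:

```go
package scheduler

// LimitType distinguishes the existing peer limits from the proposed
// leader-side limit.
type LimitType int

const (
	AddPeer LimitType = iota
	RemovePeer
	// SendSnapshot is the proposed leader limit: it throttles how much
	// snapshot work the leaders on one store may take on.
	SendSnapshot
)

// StoreLimit is a simplified per-store token bucket keyed by limit type.
type StoreLimit struct {
	tokens map[LimitType]float64
}

// NewStoreLimit creates a limit with the same initial budget per type.
func NewStoreLimit(initial float64) *StoreLimit {
	return &StoreLimit{tokens: map[LimitType]float64{
		AddPeer:      initial,
		RemovePeer:   initial,
		SendSnapshot: initial,
	}}
}

// Available reports whether the store still has enough tokens of the type.
func (l *StoreLimit) Available(t LimitType, cost float64) bool {
	return l.tokens[t] >= cost
}

// Take consumes tokens; callers should check Available first.
func (l *StoreLimit) Take(t LimitType, cost float64) {
	l.tokens[t] -= cost
}

// allowPickRegion checks the leader store's token before building an
// add-follower operator, so heavily loaded leaders are skipped.
func allowPickRegion(leaderStore *StoreLimit, cost float64) bool {
	return leaderStore.Available(SendSnapshot, cost)
}
```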

### Operator control

#### Store limit cost

Different size region occupy tokens should be different. Maybe can use this formula:
Regions of different sizes should occupy different numbers of tokens. This formula could be used:

![](https://latex.codecogs.com/gif.image?\dpi{200}&space;\bg_white&space;Influence=\sum_{i=0}^{j}step_{i}.Influence&space;\newline&space;Cost&space;=&space;200*ln{\frac{region_{size}}{100KiB}})

Cost equals 200 if operator influence is 1Mb or equals 600 if operator influence is 1gb.
The cost equals 200 if the operator influence is 1MB, or 600 if the operator influence is 1GB.
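A small Go sketch of the formula above: the operator influence is the sum of its steps' influences, and the token cost grows logarithmically with the region size. The logarithmic form and the 200 / 100 KiB constants come from the formula; the clamping of small regions to zero is an illustrative assumption.

```go
package scheduler

import "math"

// operatorInfluence sums the influence of all steps of an operator,
// mirroring the first line of the formula above.
func operatorInfluence(stepInfluences []float64) float64 {
	var total float64
	for _, inf := range stepInfluences {
		total += inf
	}
	return total
}

// storeLimitCost scales the token cost with the logarithm of the region
// size, so a large region costs more tokens than a small one while the
// cost grows sub-linearly. regionSizeKiB is the region size in KiB.
func storeLimitCost(regionSizeKiB float64) float64 {
	const (
		scale       = 200.0 // scaling constant from the formula
		baseSizeKiB = 100.0 // 100 KiB baseline from the formula
	)
	if regionSizeKiB <= baseSizeKiB {
		return 0
	}
	return scale * math.Log(regionSizeKiB/baseSizeKiB)
}
```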

#### Operator life cycle

The operator life cycle can divide into some stages: create, executing(started), complete. PD will check operator stage by region heart beats and cancel operator if one operator‘s running time exceed the fixed value(10m).
The operator life cycle can be divided into several stages: create, executing (started), and complete. PD checks the operator stage through region heartbeats and cancels an operator if its running time exceeds a fixed value (10m).
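A sketch of that heartbeat-driven check, with a hypothetical simplified `Operator` type; the 10-minute constant mirrors the fixed value mentioned above.

```go
package scheduler

import "time"

// Operator is a simplified view of a scheduled operator.
type Operator struct {
	RegionID  uint64
	StartTime time.Time
	Canceled  bool
}

// fixedTimeout is the fixed life-cycle bound mentioned above.
const fixedTimeout = 10 * time.Minute

// checkOnHeartbeat is called when a region heartbeat arrives: if the
// operator has been running longer than the timeout, it is canceled.
func checkOnHeartbeat(op *Operator, now time.Time) {
	if op == nil || op.Canceled {
		return
	}
	if now.Sub(op.StartTime) > fixedTimeout {
		op.Canceled = true
	}
}
```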

It would be better if we could calculate the expected execution duration of every step from the major factors, including region size, IO limit, and operator concurrency, like this:

@@ -56,9 +54,9 @@ There are some global config that all components need to synchronize like `regio

## Alternatives

Removing peer may not influence the cluster performance, it can be replace by leader store limit.
Removing a peer may not influence cluster performance; it can be replaced by the leader store limit.

Canceling operator can depends on TiKV not by PD, but TiKV should notify PD after canceled one operator.
Canceling an operator can be handled by TiKV rather than PD, but TiKV should notify PD after canceling the operator.

## Questions

