Issue #398: decreasing replicas will make ZooKeeper unrecoverable when ZooKeeper is not running #406
base: master
Conversation
…make cluster of zookeeper Unrecoverable Signed-off-by: hongchunhua <[email protected]>
…recoverable when zookeeper not running. Signed-off-by: hongchunhua <[email protected]>
Signed-off-by: hongchunhua <[email protected]>
Signed-off-by: hongchunhua <[email protected]>
Signed-off-by: hongchunhua <[email protected]>
Codecov Report
@@ Coverage Diff @@
## master #406 +/- ##
==========================================
- Coverage 84.11% 84.04% -0.08%
==========================================
Files 12 12
Lines 1643 1667 +24
==========================================
+ Hits 1382 1401 +19
- Misses 177 185 +8
+ Partials 84 81 -3
Continue to review full report at Codecov.
Signed-off-by: hongchunhua <[email protected]>
}
// A node that has been removed with reconfig can still serve all clients that are already connected.
// So we can remove it first; this avoids the error that the client can't connect to the server on preStop.
r.log.Info("Do reconfig to remove node.", "Remove ids", strings.Join(removes, ","))
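For context, here is a minimal sketch of the kind of dynamic-reconfiguration call this log line refers to, written against the github.com/go-zookeeper/zk client and its IncrementalReconfig method. This is not the PR's code (the operator uses its own ZooKeeper client wrapper); the address, server ids, and timeouts below are placeholders.

```go
package main

import (
	"log"
	"time"

	"github.com/go-zookeeper/zk"
)

func main() {
	// Connect to the ensemble through the client service (placeholder address).
	conn, _, err := zk.Connect([]string{"zk-client:2181"}, 10*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Remove server ids 2 and 3 from the ensemble config. Version -1 accepts any
	// current config version. Dynamic reconfig must be enabled on the ensemble
	// (reconfigEnabled=true) and may require appropriate auth.
	leaving := []string{"2", "3"}
	if _, err := conn.IncrementalReconfig(nil, leaving, -1); err != nil {
		log.Fatalf("reconfig failed: %v", err)
	}
	log.Printf("removed servers %v from the ensemble", leaving)
}
```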
The check above, which returns an error at line 303, ensures ZooKeeper is running.
Later, the remove operation is performed in the teardown script (https://github.com/pravega/zookeeper-operator/blob/master/docker/bin/zookeeperTeardown.sh#L45). Do you still think removing the node is required here?
Whether or not the node is removed in the teardown script doesn't matter much; doing it or not will not affect the cluster.
But if the reconfig is only done in the teardown script, there is no chance to retry it after the pod has exited while ZooKeeper is unserviceable.
So I think it is better to do the reconfig when the reconciler detects the cluster scaling down.
@stop-coding Did you see the "Do reconfig to remove node." message in the logs in your use case?
I think @anishakj is suggesting that catching the UpdateNode error and returning on line 303 should be enough to fix the issue, hence lines 305 to 324 would never get executed.
@jkhalack Sorry for my late reply.
Only catching the UpdateNode error is not enough to ensure the reconfigure succeeds in preStop; for example, the pod may be exiting just as the cluster breaks again. Ideally, updating the node size and doing the reconfigure would be atomic, but that is not realistic.
As we know, updating "Spec.Replicas" in Kubernetes tells pods to be created or to exit. If the reconfigure fails on scale down, we can hold off updating "Spec.Replicas" until the cluster recovers, which ensures the reconfigure has been done before the pod exits.
So I think doing the reconfigure when checking for cluster scale down is better.
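To illustrate the ordering described in this thread, here is a minimal sketch (not the operator's actual code) in which the replica count is only lowered after the member-removing reconfig succeeds; removeMembers and applyReplicas are hypothetical stand-ins for the operator's ZooKeeper client call and StatefulSet update.

```go
package reconcile

import "fmt"

// scaleDown shrinks the ensemble from current to desired replicas.
// If the reconfig fails (for example because ZooKeeper has no quorum), the old
// replica count is kept and an error is returned so the reconcile loop retries.
func scaleDown(
	removeMembers func(ids []string) error, // hypothetical: reconfig-remove the given server ids
	applyReplicas func(n int32) error, // hypothetical: patch the StatefulSet replica count
	current, desired int32,
	removeIDs []string,
) error {
	if desired >= current {
		return nil // not a scale down; nothing to guard
	}
	if err := removeMembers(removeIDs); err != nil {
		// ZooKeeper is unserviceable: keep the old replica count and retry later.
		return fmt.Errorf("postponing scale down, reconfig failed: %w", err)
	}
	return applyReplicas(desired)
}
```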
I would like to know if this pull request will be included in the next release?
Change log description
Fixes the bug where decreasing replicas makes ZooKeeper unrecoverable when ZooKeeper is not running.
Purpose of the change
Fixes #398
What the code does
Adds protection for updating the StatefulSet when ZooKeeper is not running.
If ZooKeeper is not running, updating the replicas is prohibited until ZooKeeper resumes (a sketch of such a guard follows this list).
When the user decreases the replicas value, the node is first removed with reconfig.
The reconfig-based node removal in preStop, before the pod exits, is kept.
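A rough sketch of such a guard, assuming the github.com/go-zookeeper/zk client (the operator has its own client wrapper, and the timeouts here are arbitrary): the ensemble is treated as serviceable only once a session can be established, which requires quorum, so the reconciler can skip and later retry the replica update while this returns false.

```go
package zkguard

import (
	"time"

	"github.com/go-zookeeper/zk"
)

// ensembleServiceable reports whether a ZooKeeper session can be established
// within the given wait, which requires the ensemble to have quorum.
func ensembleServiceable(servers []string, wait time.Duration) bool {
	conn, events, err := zk.Connect(servers, 5*time.Second)
	if err != nil {
		return false
	}
	defer conn.Close()

	deadline := time.After(wait)
	for {
		select {
		case ev, ok := <-events:
			if !ok {
				return false // event channel closed; no session was established
			}
			if ev.State == zk.StateHasSession {
				return true
			}
		case <-deadline:
			return false
		}
	}
}
```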
How to verify it
Create a cluster of size 3 (kubectl create -f zk.yaml).
Wait until all pods are running, named zk-0, zk-1, zk-2.
Delete the zk-1 and zk-2 pods; this makes the ZooKeeper cluster unable to provide services.
Immediately run "kubectl edit zk" and change replicas to 1.
Wait some time; the replicas will decrease to 1.
Now check that:
Is zk-0 still all right?