Add deletion validation for IPPool webhook #137
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. The full list of commands accepted by this bot can be found here. |
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment. |
Hi @haijianyang. Thanks for your PR. I'm waiting for a metal3-io member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
/ok-to-test |
Force-pushed from dcf499d to 7ce225a (Compare)
/test-ubuntu-integration-main |
The test case needs to be updated as well. |
The E2E tests of cluster-api-provider-metal3 use clusterName to bind the IPPool, and IPPoolManager sets the CAPI cluster as the owner of the IPPool. So when the CAPI cluster is deleted, Kubernetes tries to delete the bound IPPools (provisioning-pool and baremetalv4-pool). However, the resources associated with the IPPool have not been released yet, so the ValidateDelete webhook rejects the deletion. The use of clusterName may cause the CAPI cluster and the IPPool to depend on each other. Do we have any solutions?

# IPPools of cluster-api-provider-metal3 e2e test
---
apiVersion: ipam.metal3.io/v1alpha1
kind: IPPool
metadata:
  name: provisioning-pool
  namespace: ${NAMESPACE}
spec:
  clusterName: ${CLUSTER_NAME}
  namePrefix: ${CLUSTER_NAME}-prov
  pools:
    - start: ${PROVISIONING_POOL_RANGE_START}
      end: ${PROVISIONING_POOL_RANGE_END}
      prefix: ${PROVISIONING_CIDR}
---
apiVersion: ipam.metal3.io/v1alpha1
kind: IPPool
metadata:
  name: baremetalv4-pool
  namespace: ${NAMESPACE}
spec:
  clusterName: ${CLUSTER_NAME}
  namePrefix: ${CLUSTER_NAME}-bmv4
  pools:
    - start: ${BAREMETALV4_POOL_RANGE_START}
      end: ${BAREMETALV4_POOL_RANGE_END}
      prefix: ${EXTERNAL_SUBNET_V4_PREFIX}
      gateway: ${EXTERNAL_SUBNET_V4_HOST}

// ip-address-manager
func (m *IPPoolManager) SetClusterOwnerRef(cluster *clusterv1.Cluster) error {
	if cluster == nil {
		return errors.New("Missing cluster")
	}
	// Verify that the owner reference is there, if not add it and update object,
	// if error requeue.
	_, err := findOwnerRefFromList(m.IPPool.OwnerReferences,
		cluster.TypeMeta, cluster.ObjectMeta)
	if err != nil {
		if ok := errors.As(err, &notFoundErr); !ok {
			return err
		}
		m.IPPool.OwnerReferences, err = setOwnerRefInList(
			m.IPPool.OwnerReferences, false, cluster.TypeMeta,
			cluster.ObjectMeta,
		)
		if err != nil {
			return err
		}
	}
	return nil
}
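For reference, here is a minimal sketch of the kind of deletion check this PR is adding. It assumes the v1alpha1 IPPool type tracks allocations in Status.Allocations and implements the controller-runtime webhook.Validator interface; the field names and error construction are illustrative, not the exact code in this PR.

// Sketch only: reject deletion of an IPPool while any address is still
// allocated. The error text mirrors the "IPPool cannot be deleted because
// it is in use" rejection seen in the CI logs, but this is not the exact
// implementation from the PR.
import (
	"errors"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func (c *IPPool) ValidateDelete() error {
	// Status.Allocations is non-empty while IPClaims still hold addresses
	// from this pool.
	if len(c.Status.Allocations) > 0 {
		return apierrors.NewForbidden(
			schema.GroupResource{Group: "ipam.metal3.io", Resource: "IPPool"},
			c.Name,
			errors.New("IPPool cannot be deleted because it is in use"),
		)
	}
	return nil
}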
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. /lifecycle stale |
@furkatgofurov7 https://jenkins.nordix.org/job/metal3_ipam_main_integration_test_ubuntu/32/ returns 404. Could you help check the cause please? |
/test-ubuntu-integration-main |
Hey, looks like we have some problems reaching Jenkins in the CI; let me see if I can find out the cause. |
/test-ubuntu-integration-main |
/retest |
@jessehu build 32 doesn't exist anymore, so that is why you get the 404 error; the oldest build log is build 65 at the moment. |
I believe the Jenkins workers are back up now. /test-ubuntu-integration-main |
/test-ubuntu-integration-main |
2 similar comments
/test-ubuntu-integration-main |
/test-ubuntu-integration-main |
@haijianyang can you please rebase the PR? |
/test-ubuntu-integration-main |
1 similar comment
/test-ubuntu-integration-main |
Force-pushed from 7ce225a to 765a4a2 (Compare)
/test-ubuntu-integration-main |
logs-jenkins-metal3_ipam_main_integration_test_ubuntu-79/k8s_management_cluster/kube-system/kube-apiserver-kind-control-plane/kube-apiserver/stdout.log
W0201 07:16:11.110250 1 dispatcher.go:195] rejected by webhook "validation.ippool.ipam.metal3.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validation.ippool.ipam.metal3.io\" denied the request: IPPool.ipam.metal3.io \"provisioning-pool\" is forbidden: IPPool cannot be deleted because it is in use", Reason:"Forbidden", Details:(*v1.StatusDetails)(0xc0165419e0), Code:403}}
W0201 07:16:11.798602 1 dispatcher.go:195] rejected by webhook "validation.ippool.ipam.metal3.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validation.ippool.ipam.metal3.io\" denied the request: IPPool.ipam.metal3.io \"provisioning-pool\" is forbidden: IPPool cannot be deleted because it is in use", Reason:"Forbidden", Details:(*v1.StatusDetails)(0xc016c382a0), Code:403}}
W0201 07:16:12.725203 1 dispatcher.go:195] rejected by webhook "validation.ippool.ipam.metal3.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validation.ippool.ipam.metal3.io\" denied the request: IPPool.ipam.metal3.io \"provisioning-pool\" is forbidden: IPPool cannot be deleted because it is in use", Reason:"Forbidden", Details:(*v1.StatusDetails)(0xc0163e7680), Code:403}}
https://jenkins.nordix.org/job/metal3_ipam_main_integration_test_ubuntu/79/consoleFull
Kind=IPPool\\\" metal3/provisioning-pool: admission webhook \\\"validation.ippool.ipam.metal3.io\\\" denied the request: IPPool.ipam.metal3.io \\\"provisioning-pool\\\" is forbidden: IPPool cannot be deleted because it is in use\"\nDeleting IPPool=\"provisioning-pool\" Namespace=\"metal3\"\nRetrying with backoff Cause=\"error deleting \\\"ipam.metal3.io/v1alpha1 |
It seems to be failing at the move step: https://github.com/metal3-io/metal3-dev-env/blob/df300e37653cc29e94c20dd06debd014b6a1854c/tests/roles/run_tests/tasks/move.yml#L171 |
cc @kashifest @haijianyang The move operation is a special case, and we always check whether pivoting is successful in every CI run. Technically, when move is performed, all provider-specific objects are deleted from the source cluster and recreated in the target cluster, meaning we were always able to delete the object (there was no webhook validation check). But now that the validation is in place, it won't be able to do a move properly, hence it fails. Two options I could think of could be:
|
Before deletion in |
Would it be possible to add a field (spec, annotation, or label) to the IPPool that describes whether the IPPool may be deleted without releasing its IP resources? For example:

apiVersion: ipam.metal3.io/v1alpha1
kind: IPPool
metadata:
  name: ip-pool-cluster
  namespace: default
  annotations:
    ipclaim.ipam.metal3.io/allow-force-delete: 'true'
spec:
  pools:
    - start: 10.255.160.1
      end: 10.255.160.1
      prefix: 16
      gateway: 10.255.0.1
  namePrefix: "ip-pool-cluster"
---
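For illustration, here is a sketch of how the deletion webhook could honor such an opt-out. The annotation key comes from the example above and the helper below is hypothetical, not an existing API; ValidateDelete could skip the allocation check when it returns true.

// Sketch only: the annotation key is taken from the example above and is
// hypothetical; it shows one way a force-delete opt-out could bypass the
// "in use" check in ValidateDelete.
const allowForceDeleteAnnotation = "ipclaim.ipam.metal3.io/allow-force-delete"

func allowForceDelete(pool *IPPool) bool {
	// An explicit "true" disables the allocation check during deletion.
	return pool.GetAnnotations()[allowForceDeleteAnnotation] == "true"
}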
/test-centos-e2e-integration-main |
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. /lifecycle stale |
Stale issues close after 30d of inactivity. Reopen the issue with /reopen. /close |
@metal3-io-bot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Can one of the admins verify this patch? |
1 similar comment
Can one of the admins verify this patch? |
The webhook should reject IPPool deletion if any IP has been allocated
Fixes #135
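For illustration, the expected behavior from the user's side once this validation is in place (hypothetical invocation; the error text is taken verbatim from the webhook rejection in the CI logs above):

$ kubectl delete ippool provisioning-pool -n metal3
Error from server (Forbidden): admission webhook "validation.ippool.ipam.metal3.io" denied the request: IPPool.ipam.metal3.io "provisioning-pool" is forbidden: IPPool cannot be deleted because it is in use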