Cassandra codebase refactor #697
Conversation
Force-pushed from c2ad0bd to 32eb62a.
func (s *CassandraDataCentreStatus) FromInstAPI(instModel *models.CassandraDataCentre) {
	s.GenericDataCentreStatus.FromInstAPI(&instModel.GenericDataCentreFields)

	s.Nodes = make([]*Node, 0, len(instModel.Nodes))
	for _, instNode := range instModel.Nodes {
		node := &Node{}
		node.FromInstAPI(instNode)
		s.Nodes = append(s.Nodes, node)
	}
}
I think you can remove the 0 from make. Also, if you preallocate the slice to the number of nodes, it is better to use direct assignment (s.Nodes[i] = node) instead of appending, so you leverage the fact that the slice is preallocated. This is a bit more efficient and arguably simpler, because it avoids the overhead of slice append operations.
Actually, it makes sense. Changed it to index-operator ([]) semantics.
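The suggested pattern can be sketched as follows. The types here are hypothetical stand-ins for the operator's Node and models node types, not the real definitions:

```go
package main

import "fmt"

// Hypothetical stand-ins for the operator's node types.
type instNode struct{ ID string }
type Node struct{ ID string }

// nodesFromInstAPI shows the reviewer's suggestion: make the slice with its
// final length (no 0 capacity argument) and assign by index instead of
// appending, so the preallocation is actually used.
func nodesFromInstAPI(instNodes []*instNode) []*Node {
	nodes := make([]*Node, len(instNodes))
	for i, in := range instNodes {
		nodes[i] = &Node{ID: in.ID}
	}
	return nodes
}

func main() {
	nodes := nodesFromInstAPI([]*instNode{{ID: "node-a"}, {ID: "node-b"}})
	fmt.Println(len(nodes), nodes[0].ID, nodes[1].ID)
}
```

Both variants run in linear time; index assignment simply skips append's length/capacity bookkeeping on each iteration.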
RestoreFrom        *CassandraRestoreFrom `json:"restoreFrom,omitempty"`
OnPremisesSpec     *OnPremisesSpec       `json:"onPremisesSpec,omitempty"`
Cluster            `json:",inline"`
GenericClusterSpec `json:",inline"`
Typically, the ClusterSpec naming is used; it is clear and avoids the redundancy of "Generic". The same applies to ClusterStatus, etc.
It is not possible to do this with ClusterStatus because that name still exists in the code. So for now I want to keep it as it is. When we refactor all the cluster resources to this pattern, it will be changed too.
ok
if s.CloudProvider != models.ONPREMISES {
	return fmt.Errorf("cloud provider %s is unavailable for data centre: %s, available value: %s",
		s.CloudProvider, s.Name, models.ONPREMISES)
}

if s.Region != models.CLIENTDC {
	return fmt.Errorf("region %s is unavailable for data centre: %s, available value: %s",
		s.Region, s.Name, models.CLIENTDC)
}
Maybe it is better to make it a single check and list the available values in the returned error?
The returned errors are a bit different, so I think the current implementation is better.
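For comparison, the reviewer's single-check variant might look like the sketch below. The constant and struct names are stand-ins for models.ONPREMISES, models.CLIENTDC, and the data centre spec, not the real identifiers:

```go
package main

import "fmt"

// Stand-ins for models.ONPREMISES and models.CLIENTDC.
const (
	onPremises = "ONPREMISES"
	clientDC   = "CLIENT_DC"
)

type dataCentre struct {
	Name          string
	CloudProvider string
	Region        string
}

// validateOnPrem folds both checks into one error listing all available
// values, as the reviewer suggested. The PR keeps two separate checks so
// that each error names only the field that is actually wrong.
func (s *dataCentre) validateOnPrem() error {
	if s.CloudProvider != onPremises || s.Region != clientDC {
		return fmt.Errorf(
			"data centre %s: got cloud provider %s and region %s, available values: %s/%s",
			s.Name, s.CloudProvider, s.Region, onPremises, clientDC)
	}
	return nil
}

func main() {
	bad := &dataCentre{Name: "dc1", CloudProvider: "AWS_VPC", Region: "us-east-1"}
	good := &dataCentre{Name: "dc2", CloudProvider: onPremises, Region: clientDC}
	fmt.Println(bad.validateOnPrem() != nil, good.validateOnPrem() == nil)
}
```

The trade-off is error precision versus brevity: the combined form cannot tell the user which of the two fields is wrong without extra logic.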
)

ctx := context.Background()

err = handleCreateOnPremisesClusterResources(ctx, bootstrap)
You can just pass ctx as a function argument and use it.
Fixed
if c.Status.State != models.DeletedStatus {
	err := r.startClusterJobs(c, l)
	if err != nil {
		return reconcile.Result{}, fmt.Errorf("failed to start cluster jobs, err: %w", err)
	}
}

if c.Spec.OnPremisesSpec != nil && c.Spec.OnPremisesSpec.EnableAutomation {
	return r.handleOnPremises(c, l)
}

return models.ExitReconcile, nil
}
So if the cluster status is deleted, with your logic it will just continue executing, right?
When I started refactoring, I decided to try to keep the flow as it was before. Anyway, I agree it looks a bit strange, and there is a Cassandra details API call inside the handleOnPremises method. I decided to move it inside the if statement above.
Force-pushed from fcddd6b to 1133a55.
apis/clusters/v1beta1/structs.go (outdated)
	*g = make(GenericResizeSettings, 0, len(instModels))
	for _, instModel := range instModels {
		*g = append(*g, &ResizeSettings{
			NotifySupportContacts: instModel.NotifySupportContacts,
			Concurrency:           instModel.Concurrency,
		})
	}
}

func (g *GenericResizeSettings) ToInstAPI() []*models.ResizeSettings {
	instaModels := make([]*models.ResizeSettings, 0, len(*g))
	for _, setting := range *g {
		instaModels = append(instaModels, &models.ResizeSettings{
			NotifySupportContacts: setting.NotifySupportContacts,
			Concurrency:           setting.Concurrency,
		})
	}

	return instaModels
}
Please remove the 0 from make here, as you did in the other cases.
Done
Force-pushed from 1133a55 to 6f4db10.
}

func (cs *CassandraDataCentre) debeziumFromInstAPI(instModels []*models.Debezium) {
	cs.Debezium = make([]DebeziumCassandraSpec, 0, len(instModels))
Improve the slice preallocation here as well.
Debezium:      cs.DebeziumFromInstAPI(iDC.Debezium),
ShotoverProxy: cs.ShotoverProxyFromInstAPI(iDC.ShotoverProxy),

func (cs *CassandraDataCentre) shotoverProxyFromInstAPI(instModels []*models.ShotoverProxy) {
	cs.ShotoverProxy = make([]ShotoverProxySpec, 0, len(instModels))
Same here.
	return nil
}

func (s *GenericDataCentreSpec) validateImmutableCloudProviderSettingsUpdate(oldSettings []*CloudProviderSettings) error {
This function is similar to the equals funcs.
for i, dc := range c.Spec.DataCentres {
	for j, debezium := range dc.Debezium {
		if debezium.ClusterRef != nil {
			cdcID, err := clusterresources.GetDataCentreID(r.Client, ctx, debezium.ClusterRef)
			if err != nil {
				l.Error(err, "Cannot get cluster ID",
					"Cluster reference", debezium.ClusterRef,
				)
				return nil, err
			}
			iCassandraSpec.DataCentres[i].Debezium[j].KafkaDataCentreID = cdcID
		}
	}
}
Please extract this into a separate function.
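The extracted helper might look like the sketch below. The types are stand-ins for the operator's v1beta1 and models types, and the lookup callback stands in for clusterresources.GetDataCentreID:

```go
package main

import "fmt"

// Stand-in types mirroring the snippet above.
type debezium struct {
	ClusterRef        *string
	KafkaDataCentreID string
}

type dataCentre struct{ Debezium []debezium }

// resolveDebeziumDCIDs is a sketch of the helper the reviewer asked for:
// it fills in KafkaDataCentreID for every Debezium entry that carries a
// ClusterRef, delegating the actual ID resolution to the lookup callback.
func resolveDebeziumDCIDs(dcs []dataCentre, lookup func(ref string) (string, error)) error {
	for i := range dcs {
		for j := range dcs[i].Debezium {
			ref := dcs[i].Debezium[j].ClusterRef
			if ref == nil {
				continue
			}
			id, err := lookup(*ref)
			if err != nil {
				return err
			}
			dcs[i].Debezium[j].KafkaDataCentreID = id
		}
	}
	return nil
}

func main() {
	ref := "kafka-cluster"
	dcs := []dataCentre{{Debezium: []debezium{{ClusterRef: &ref}}}}
	if err := resolveDebeziumDCIDs(dcs, func(string) (string, error) { return "cdc-123", nil }); err != nil {
		panic(err)
	}
	fmt.Println(dcs[0].Debezium[0].KafkaDataCentreID)
}
```

Keeping the nested loop behind one named function shortens the reconcile body and makes the ID-resolution step testable on its own.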
func (r *CassandraReconciler) startClusterJobs(c *v1beta1.Cassandra, l logr.Logger) error {
	err := r.startSyncJob(c)
	if err != nil {
		l.Error(err, "Cannot start cluster status job",
status -> sync
Cannot start cluster synchronizer
c.Status.FromInstAPI(instaModel)
err = r.Status().Patch(context.Background(), c, patch)
if err != nil {
	return err
}

if !c.Status.DCEquals(iCassandra.Status.DataCentres) {
Move c.Status.DCEquals(iCassandra.Status.DataCentres) before c.Status.FromInstAPI(instaModel).
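The point of the ordering is that FromInstAPI overwrites the local status from the API model, so a comparison made afterwards would always report equality. A minimal sketch with hypothetical stand-in types:

```go
package main

import "fmt"

// status is a hypothetical stand-in for the CRD status with its data centres.
type status struct{ DCs []string }

// dcEquals stands in for c.Status.DCEquals.
func dcEquals(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

func main() {
	local := status{DCs: []string{"dc1"}}
	remote := status{DCs: []string{"dc1", "dc2"}}

	// Capture the comparison BEFORE overwriting the local status;
	// afterwards the two sides always match.
	equalBefore := dcEquals(local.DCs, remote.DCs)
	local.DCs = remote.DCs // stand-in for c.Status.FromInstAPI(instaModel)
	equalAfter := dcEquals(local.DCs, remote.DCs)

	fmt.Println(equalBefore, equalAfter)
}
```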
Force-pushed from 6f4db10 to f4e65c3.
Force-pushed from f4e65c3 to 1cdc862.
	return instaModel, nil
}

func (r *CassandraReconciler) applyDebeziumDependenciesIfNecessary(c *v1beta1.Cassandra, spec *models.CassandraCluster) error {
mergeDebezium
Done
func (r *CassandraReconciler) startSyncJob(c *v1beta1.Cassandra) error {
	job := r.newWatchStatusJob(c)

	err := r.Scheduler.ScheduleJob(c.GetJobID(scheduler.StatusChecker), scheduler.ClusterStatusInterval, job)
StatusChecker -> Synchronizer
I think we should change it after the refactor of all the cluster resources.
reminder: #661
func (r *CassandraReconciler) startClusterJobs(c *v1beta1.Cassandra, l logr.Logger) error {
	err := r.startSyncJob(c)
	if err != nil {
		l.Error(err, "Cannot start cluster status job",
Cannot start cluster synchronizer
err = r.Status().Patch(context.Background(), c, patch)
if err != nil {
	return err
}

if !dcsDiffer {
The name should match the condition: with dcsDiffer, the check should read `if dcsDiffer` -> then do something. Otherwise, change the variable name to dcsEqual.
Done
Force-pushed from 1cdc862 to 113d526.
Force-pushed from 113d526 to 2d7a827.
This PR refactors the Cassandra CR codebase.