
Cassandra codebase refactor #697

Merged 1 commit into main on Feb 6, 2024
Conversation

@worryg0d (Collaborator) commented on Feb 2, 2024

This PR refactors the Cassandra CR codebase.

@worryg0d added the refactor label (Code improvements or refactorings) on Feb 2, 2024
@worryg0d self-assigned this on Feb 2, 2024
Comment on lines 179 to 188
func (s *CassandraDataCentreStatus) FromInstAPI(instModel *models.CassandraDataCentre) {
s.GenericDataCentreStatus.FromInstAPI(&instModel.GenericDataCentreFields)

s.Nodes = make([]*Node, 0, len(instModel.Nodes))
for _, instNode := range instModel.Nodes {
node := &Node{}
node.FromInstAPI(instNode)
s.Nodes = append(s.Nodes, node)
}
}
Contributor:

I think you can remove the 0, from make.
Also, if you want to preallocate the slice based on the number of nodes, it is better to use direct assignment (s.Nodes[i] = node) instead of appending, since that leverages the fact that the slice is preallocated. This is a bit more efficient and arguably simpler, because it avoids the overhead of slice append operations.

Collaborator Author:

Actually, that makes sense. Changed it to use index-operator ([]) semantics.
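Presumably the change looks something like this minimal sketch, assuming the same types as in the snippet above:

func (s *CassandraDataCentreStatus) FromInstAPI(instModel *models.CassandraDataCentre) {
	s.GenericDataCentreStatus.FromInstAPI(&instModel.GenericDataCentreFields)

	// Allocate the slice with its final length and assign by index,
	// avoiding the bookkeeping of repeated append calls.
	s.Nodes = make([]*Node, len(instModel.Nodes))
	for i, instNode := range instModel.Nodes {
		node := &Node{}
		node.FromInstAPI(instNode)
		s.Nodes[i] = node
	}
}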

RestoreFrom *CassandraRestoreFrom `json:"restoreFrom,omitempty"`
OnPremisesSpec *OnPremisesSpec `json:"onPremisesSpec,omitempty"`
Cluster `json:",inline"`
GenericClusterSpec `json:",inline"`
Contributor:

Typically, the ClusterSpec naming is used; it is clear and avoids the redundancy of "Generic".
The same goes for ClusterStatus, etc.

Collaborator Author:

It isn't possible to do that with ClusterStatus because the old type still exists in the code, so for now I want to keep it as it is. When we refactor all the cluster resources to this pattern, it will be renamed too.

Contributor:

ok

Comment on lines +397 to +406
if s.CloudProvider != models.ONPREMISES {
return fmt.Errorf("cloud provider %s is unavailable for data centre: %s, available value: %s",
s.CloudProvider, s.Name, models.ONPREMISES)
}

if s.Region != models.CLIENTDC {
return fmt.Errorf("region %s is unavailable for data centre: %s, available value: %s",
s.Region, s.Name, models.CLIENTDC)
}
Contributor:

Maybe it is better to combine these into one check and list the available values in the returned error?
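A hedged sketch of what such a combined check might look like (not the PR's actual code; the author keeps the checks separate, per the reply below):

// Combined variant: one condition, one error listing both expected values.
if s.CloudProvider != models.ONPREMISES || s.Region != models.CLIENTDC {
	return fmt.Errorf("data centre %s requires cloud provider %s and region %s, got: %s, %s",
		s.Name, models.ONPREMISES, models.CLIENTDC, s.CloudProvider, s.Region)
}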

Collaborator Author:

The returned errors are a bit different, so I think the current implementation is better.

Comment on lines 331 to 335
)

ctx := context.Background()

err = handleCreateOnPremisesClusterResources(ctx, bootstrap)
Contributor:

You can just pass ctx in as a function argument and use it.
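A minimal sketch of the idea; the enclosing function's name, receiver, and bootstrap type are assumptions, since the real signature isn't shown in the diff. Accepting the caller's context lets cancellation and deadlines propagate instead of being cut off by a fresh context.Background():

// Hypothetical enclosing function; names and types are assumptions.
func (r *CassandraReconciler) createOnPremisesResources(ctx context.Context, bootstrap *onPremisesBootstrap) error {
	// Use the caller's ctx rather than creating one locally.
	return handleCreateOnPremisesClusterResources(ctx, bootstrap)
}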

Collaborator Author:

Fixed

Comment on lines 387 to 405
if c.Status.State != models.DeletedStatus {
err := r.startClusterJobs(c, l)
if err != nil {
return reconcile.Result{}, fmt.Errorf("failed to start cluster jobs, err: %w", err)
}
}

if c.Spec.OnPremisesSpec != nil && c.Spec.OnPremisesSpec.EnableAutomation {
return r.handleOnPremises(c, l)
}

return models.ExitReconcile, nil
}
Contributor:

So if the cluster status is deleted, with your logic it will just continue executing, right?

Collaborator Author:

When I started refactoring, I decided to try to keep the flow as it was before. Anyway, I agree it looks a bit strange, and there is a Cassandra details API call inside the handleOnPremises method.
I decided to move it inside the if statement above.
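A minimal sketch of the described move, assuming the rest of the snippet above is unchanged:

if c.Status.State != models.DeletedStatus {
	err := r.startClusterJobs(c, l)
	if err != nil {
		return reconcile.Result{}, fmt.Errorf("failed to start cluster jobs, err: %w", err)
	}

	// Moved inside the state check so a deleted cluster skips the
	// on-premises handling (and its Cassandra details API call) too.
	if c.Spec.OnPremisesSpec != nil && c.Spec.OnPremisesSpec.EnableAutomation {
		return r.handleOnPremises(c, l)
	}
}

return models.ExitReconcile, nil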

@worryg0d force-pushed the cassandra-codebase-refactor branch 2 times, most recently from fcddd6b to 1133a55 on February 5, 2024 at 09:43
Comment on lines 768 to 785
*g = make(GenericResizeSettings, 0, len(instModels))
for _, instModel := range instModels {
*g = append(*g, &ResizeSettings{
NotifySupportContacts: instModel.NotifySupportContacts,
Concurrency: instModel.Concurrency,
})
}
}

func (g *GenericResizeSettings) ToInstAPI() []*models.ResizeSettings {
instaModels := make([]*models.ResizeSettings, 0, len(*g))
for _, setting := range *g {
instaModels = append(instaModels, &models.ResizeSettings{
NotifySupportContacts: setting.NotifySupportContacts,
Concurrency: setting.Concurrency,
})
}

Contributor:

Please remove the 0 from make here, as you did in the other cases.

Collaborator Author:

Done
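Presumably the result mirrors the earlier change; a sketch, assuming GenericResizeSettings is a slice of *ResizeSettings (as the append above implies):

func (g *GenericResizeSettings) FromInstAPI(instModels []*models.ResizeSettings) {
	// Allocate with the final length and assign through the pointer receiver.
	*g = make(GenericResizeSettings, len(instModels))
	for i, instModel := range instModels {
		(*g)[i] = &ResizeSettings{
			NotifySupportContacts: instModel.NotifySupportContacts,
			Concurrency:           instModel.Concurrency,
		}
	}
}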

}

func (cs *CassandraDataCentre) debeziumFromInstAPI(instModels []*models.Debezium) {
cs.Debezium = make([]DebeziumCassandraSpec, 0, len(instModels))
Contributor:

Improve the slice preallocation here as well.

Debezium: cs.DebeziumFromInstAPI(iDC.Debezium),
ShotoverProxy: cs.ShotoverProxyFromInstAPI(iDC.ShotoverProxy),
func (cs *CassandraDataCentre) shotoverProxyFromInstAPI(instModels []*models.ShotoverProxy) {
cs.ShotoverProxy = make([]ShotoverProxySpec, 0, len(instModels))
Contributor:

Same here.

return nil
}

func (s *GenericDataCentreSpec) validateImmutableCloudProviderSettingsUpdate(oldSettings []*CloudProviderSettings) error {
Contributor:

This function is similar to the equals funcs.

Comment on lines 164 to 167
for i, dc := range c.Spec.DataCentres {
for j, debezium := range dc.Debezium {
if debezium.ClusterRef != nil {
cdcID, err := clusterresources.GetDataCentreID(r.Client, ctx, debezium.ClusterRef)
if err != nil {
l.Error(err, "Cannot get cluster ID",
"Cluster reference", debezium.ClusterRef,
)
return nil, err
}
iCassandraSpec.DataCentres[i].Debezium[j].KafkaDataCentreID = cdcID
}
}
}
Contributor:

Please extract this into a function.
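A sketch of the extracted helper; judging by a later thread, the PR names it applyDebeziumDependenciesIfNecessary and then renames it to mergeDebezium, so the exact signature here is an assumption:

// mergeDebezium copies the Kafka data centre IDs referenced by Debezium
// specs into the Instaclustr API model (signature is an assumption).
func (r *CassandraReconciler) mergeDebezium(ctx context.Context, c *v1beta1.Cassandra, iCassandraSpec *models.CassandraCluster, l logr.Logger) error {
	for i, dc := range c.Spec.DataCentres {
		for j, debezium := range dc.Debezium {
			if debezium.ClusterRef == nil {
				continue
			}
			cdcID, err := clusterresources.GetDataCentreID(r.Client, ctx, debezium.ClusterRef)
			if err != nil {
				l.Error(err, "Cannot get cluster ID",
					"Cluster reference", debezium.ClusterRef,
				)
				return err
			}
			iCassandraSpec.DataCentres[i].Debezium[j].KafkaDataCentreID = cdcID
		}
	}
	return nil
}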

func (r *CassandraReconciler) startClusterJobs(c *v1beta1.Cassandra, l logr.Logger) error {
err := r.startSyncJob(c)
if err != nil {
l.Error(err, "Cannot start cluster status job",
Contributor:

status -> sync

Contributor:

Cannot start cluster synchronizer

Comment on lines 754 to 760
c.Status.FromInstAPI(instaModel)
err = r.Status().Patch(context.Background(), c, patch)
if err != nil {
return err
}

if !areDCsEqual {
if !c.Status.DCEquals(iCassandra.Status.DataCentres) {
Contributor:

Move c.Status.DCEquals(iCassandra.Status.DataCentres) before c.Status.FromInstAPI(instaModel); otherwise the comparison runs against the freshly overwritten status.
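A minimal sketch of the suggested ordering, capturing the comparison before FromInstAPI overwrites the status:

// Compare against the remote state first...
dcsEqual := c.Status.DCEquals(iCassandra.Status.DataCentres)

// ...then overwrite the local status and patch it.
c.Status.FromInstAPI(instaModel)
err = r.Status().Patch(context.Background(), c, patch)
if err != nil {
	return err
}

if !dcsEqual {
	// handle changed data centres
}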

return instaModel, nil
}

func (r *CassandraReconciler) applyDebeziumDependenciesIfNecessary(c *v1beta1.Cassandra, spec *models.CassandraCluster) error {
Contributor:

mergeDebezium

Collaborator Author:

Done

Comment on lines +681 to 684
func (r *CassandraReconciler) startSyncJob(c *v1beta1.Cassandra) error {
job := r.newWatchStatusJob(c)

err := r.Scheduler.ScheduleJob(c.GetJobID(scheduler.StatusChecker), scheduler.ClusterStatusInterval, job)
Contributor:

StatusChecker -> Synchronizer

Collaborator Author:

I think we should change it after the refactor of all the cluster resources.

Contributor:

reminder: #661

err = r.Status().Patch(context.Background(), c, patch)
if err != nil {
return err
}

if !areDCsEqual {
if !dcsDiffer {
Contributor:

The condition should match the name: if dcsDiffer -> then do something.
Or change the var name to dcsEqual.

Collaborator Author:

Done

@testisnullus merged commit 055fe4f into main on Feb 6, 2024. 1 check passed.