build configurations for state syncer #314

Merged
merged 4 commits on Aug 13, 2024
23 changes: 11 additions & 12 deletions CONTRIBUTING.md
@@ -3,7 +3,7 @@
We welcome contributions :)

## Submitting PRs
* Make sure to check existing issues and file an issue before starting to work on a feature/bug. This will help prevent duplication of work.
Also refer to [Collaboration](./README.md) for communication channels.

## Setting up for local Development
@@ -20,12 +20,12 @@
$ADMIRAL_HOME/tests/create_cluster.sh 1.16.8
export KUBECONFIG=~/.kube/config
```
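A quick sanity check that the new cluster is reachable before continuing (a minimal sketch; assumes the script above left the kubeconfig at `~/.kube/config`):

```bash
# Hypothetical verification step, not part of the Admiral scripts:
# confirm kubectl can reach the freshly created test cluster.
kubectl get nodes
kubectl cluster-info
```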
* Install [Prerequisites](./docs/Examples.md#Prerequisite) and make sure to install the Istio control plane in the cluster. Alternatively, you can use the script to install the Istio control plane on the cluster created in the previous step (a quick verification sketch follows the platform-specific commands below):

Mac: `$ADMIRAL_HOME/tests/install_istio.sh 1.10.4 osx`

Mac (Apple Silicon): `$ADMIRAL_HOME/tests/install_istio.sh 1.7.4 osx-arm64`
Mac: `$ADMIRAL_HOME/tests/install_istio.sh 1.20.2 osx`

Linux: `$ADMIRAL_HOME/tests/install_istio.sh 1.7.4 linux`
Mac (Apple Silicon): `$ADMIRAL_HOME/tests/install_istio.sh 1.20.2 osx-arm64`

Linux: `$ADMIRAL_HOME/tests/install_istio.sh 1.20.2 linux`
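Once the script finishes, a minimal way to confirm the control plane is healthy (assumes `kubectl` points at the test cluster and `istioctl` is on the PATH; both commands are standard Istio/Kubernetes tooling, not part of the Admiral scripts):

```bash
# Hypothetical sanity check: istiod and the ingress gateway should be Running in istio-system.
kubectl get pods -n istio-system
# Optionally confirm the installed control-plane version matches the one passed to install_istio.sh.
istioctl version
```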

* Set up necessary permissions and configurations for Admiral

@@ -82,7 +82,7 @@
go install sigs.k8s.io/[email protected]
go install k8s.io/code-generator@v0.24.2
go install google.golang.org/[email protected]
make setup
```

### Generate `*.pb.go` files from `*.proto` files
```bash
@@ -94,7 +94,7 @@
go generate ./...
make model-gen
```
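Before checking generated files in, it can help to confirm exactly what was regenerated (plain git, no project-specific assumptions):

```bash
# Hypothetical check: list the generated files that changed so they can be reviewed and committed.
git status --short
git diff --stat
```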

* If you've made changes to protobuf model objects and need to re-generate their clientsets, use the following steps and check in the generated files
### Generate clientsets
```bash
sh hack/update-codegen.sh
@@ -114,25 +114,24 @@
make gen-yaml
cd $ADMIRAL_HOME/tests
./run.sh "1.16.8" "1.7.4" "../out"
```
* Multi-cluster
```
TODO
```

## Before PR
1. Clone the repository
1. Add unit tests and FMEA tests (if applicable) along with the checked-in code.
1. Confirm that the unit test coverage did not drop with your change.
1. Run the regression suite and make sure it is not failing.
1. Please update any BDD tests if applicable.

## During PR
1. Create a Pull Request from your branch to the master branch.
1. Make sure the build succeeds.
1. Maintainers on the Admiral repository will review the pull request.
1. The PR will be merged after the code is reviewed and all checks are passing.

## After PR
1. When merging the PR, ensure that all commits are squashed into a single commit. (This can be done in advance via interactive rebase, as sketched below, or through the GitHub UI.)
1. Once the changes are deployed to the QAL environment, verify that the fix looks good and the BDDs are successful.
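A minimal sketch of squashing ahead of time with an interactive rebase (standard git; the commit count and branch name are illustrative, and GitHub's "Squash and merge" button achieves the same result):

```bash
# Hypothetical example: squash the last 3 commits on the PR branch into one.
git rebase -i HEAD~3
# In the editor, keep "pick" on the first commit and change the rest to "squash" (or "fixup").
# The PR branch then needs a force push; --force-with-lease avoids clobbering unseen remote work.
git push --force-with-lease origin my-feature-branch
```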

2 changes: 1 addition & 1 deletion admiral/cmd/admiral/cmd/root.go
@@ -81,7 +81,7 @@ func GetRootCmd(args []string) *cobra.Command {
// This is required for PERF tests only.
// Perf tests requires remote registry object for validations.
// There is no way to inject this object
// There is no other away to propogate this object to perf suite
// There is no other away to propagate this object to perf suite
if params.KubeconfigPath == loader.FakeKubeconfigPath {
cmd.SetContext(context.WithValue(cmd.Context(), "remote-registry", remoteRegistry))
}
44 changes: 0 additions & 44 deletions admiral/pkg/clusters/clusterIdentitySyncer.go

This file was deleted.

77 changes: 77 additions & 0 deletions admiral/pkg/clusters/configSyncer.go
@@ -0,0 +1,77 @@
package clusters

import (
"fmt"
"github.com/istio-ecosystem/admiral/admiral/pkg/controller/util"
"github.com/pkg/errors"

"github.com/istio-ecosystem/admiral/admiral/pkg/controller/common"
"github.com/istio-ecosystem/admiral/admiral/pkg/registry"
"github.com/sirupsen/logrus"
)

func updateClusterIdentityCache(
remoteRegistry *RemoteRegistry,
sourceClusters []string,
identity string) error {

if remoteRegistry == nil {
return fmt.Errorf("remote registry is not initialized")
}
if remoteRegistry.AdmiralCache == nil {
return fmt.Errorf("admiral cache is not initialized")
}

if remoteRegistry.AdmiralCache.SourceToDestinations == nil {
return fmt.Errorf("source to destination cache is not populated")
}
// find assets this identity needs to call
destinationAssets := remoteRegistry.AdmiralCache.SourceToDestinations.Get(identity)
for _, cluster := range sourceClusters {
sourceClusterIdentity := registry.NewClusterIdentity(identity, true)
err := remoteRegistry.ClusterIdentityStoreHandler.AddUpdateIdentityToCluster(sourceClusterIdentity, cluster)
if err != nil {
return err
}
for _, destinationAsset := range destinationAssets {
destinationClusterIdentity := registry.NewClusterIdentity(destinationAsset, false)
err := remoteRegistry.ClusterIdentityStoreHandler.AddUpdateIdentityToCluster(destinationClusterIdentity, cluster)
if err != nil {
return err
}
}
}
logrus.Infof("source asset=%s is present in clusters=%v, and has destinations=%v",
identity, sourceClusters, destinationAssets)
return nil
}

func updateRegistryConfigForClusterPerEnvironment(ctxLogger *logrus.Entry, remoteRegistry *RemoteRegistry, registryConfig registry.IdentityConfig) error {
task := "updateRegistryConfigForClusterPerEnvironment"
defer util.LogElapsedTimeForTask(ctxLogger, task, registryConfig.IdentityName, "", "", "processingTime")()
k8sClient, err := remoteRegistry.ClientLoader.LoadKubeClientFromPath(common.GetKubeconfigPath())
if err != nil && common.GetSecretFilterTags() == "admiral/syncrtay" {
ctxLogger.Infof(common.CtxLogFormat, task, registryConfig.IdentityName, "", "", "unable to get kube client")
return errors.Wrap(err, "unable to get kube client")
}
for _, clusterConfig := range registryConfig.Clusters {
clusterName := clusterConfig.Name
ctxLogger.Infof(common.CtxLogFormat, task, registryConfig.IdentityName, "", clusterName, "processing cluster")
for _, environmentConfig := range clusterConfig.Environment {
environmentName := environmentConfig.Name
ctxLogger.Infof(common.CtxLogFormat, task, registryConfig.IdentityName, "", clusterName, "processing environment="+environmentName)
err := remoteRegistry.ConfigSyncer.UpdateEnvironmentConfigByCluster(
ctxLogger,
environmentName,
clusterName,
registryConfig,
registry.NewConfigMapWriter(k8sClient, ctxLogger),
)
if err != nil {
ctxLogger.Errorf(common.CtxLogFormat, task, registryConfig.IdentityName, "", clusterName, "processing environment="+environmentName+" error="+err.Error())
return err
}
}
}
return nil
}
@@ -12,8 +12,8 @@ import (
func TestUpdateClusterIdentityState(t *testing.T) {
var (
sourceCluster1 = "cluster1"
foobarIdentity = "intuit.foobar.service"
helloWorldIdentity = "intuit.helloworld.service"
foobarIdentity = "org.foobar.service"
helloWorldIdentity = "org.helloworld.service"
remoteRegistryHappyCase = &RemoteRegistry{
ClusterIdentityStoreHandler: registry.NewClusterIdentityStoreHandler(),
AdmiralCache: &AdmiralCache{
80 changes: 33 additions & 47 deletions admiral/pkg/clusters/configwriter.go
@@ -1,16 +1,15 @@
package clusters

import (
"errors"
"sort"
"strconv"
"strings"

"github.com/istio-ecosystem/admiral/admiral/pkg/controller/admiral"
"github.com/istio-ecosystem/admiral/admiral/pkg/controller/common"
"github.com/istio-ecosystem/admiral/admiral/pkg/registry"
"github.com/istio-ecosystem/admiral/admiral/pkg/util"
"github.com/sirupsen/logrus"
networkingV1Alpha3 "istio.io/api/networking/v1alpha3"
"sort"
"strconv"
"strings"
)

// IstioSEBuilder is an interface to construct Service Entry objects
@@ -54,14 +53,10 @@ func (b *ServiceEntryBuilder) BuildServiceEntriesFromIdentityConfig(ctxLogger *l
if err != nil {
return serviceEntries, err
}
ports, err := getServiceEntryPorts(identityConfigEnvironment)
if err != nil {
return serviceEntries, err
}
if se, ok := seMap[env]; !ok {
tmpSe = &networkingV1Alpha3.ServiceEntry{
Hosts: []string{common.GetCnameVal([]string{env, strings.ToLower(identity), common.GetHostnameSuffix()})},
Ports: ports,
Ports: identityConfigEnvironment.Ports,
Location: networkingV1Alpha3.ServiceEntry_MESH_INTERNAL,
Resolution: networkingV1Alpha3.ServiceEntry_DNS,
SubjectAltNames: []string{common.SpiffePrefix + common.GetSANPrefix() + common.Slash + identity},
@@ -72,15 +67,19 @@
tmpSe = se
tmpSe.Endpoints = append(tmpSe.Endpoints, ep)
}
serviceEntries = append(serviceEntries, tmpSe)
sort.Sort(WorkloadEntrySorted(tmpSe.Endpoints))
seMap[env] = tmpSe
}
}
for _, se := range seMap {
serviceEntries = append(serviceEntries, se)
}
return serviceEntries, err
}

// getIngressEndpoints constructs the endpoint of the ingress gateway/remote endpoint for an identity
// by reading the information directly from the IdentityConfigCluster.
func getIngressEndpoints(clusters []registry.IdentityConfigCluster) (map[string]*networkingV1Alpha3.WorkloadEntry, error) {
func getIngressEndpoints(clusters map[string]*registry.IdentityConfigCluster) (map[string]*networkingV1Alpha3.WorkloadEntry, error) {
ingressEndpoints := map[string]*networkingV1Alpha3.WorkloadEntry{}
var err error
for _, cluster := range clusters {
@@ -99,46 +98,32 @@
return ingressEndpoints, err
}

// getServiceEntryPorts constructs the ServicePorts of the service entry that should be built
// for the given identityConfigEnvironment.
func getServiceEntryPorts(identityConfigEnvironment registry.IdentityConfigEnvironment) ([]*networkingV1Alpha3.ServicePort, error) {
port := &networkingV1Alpha3.ServicePort{Number: uint32(common.DefaultServiceEntryPort), Name: util.Http, Protocol: util.Http}
var err error
if len(identityConfigEnvironment.Ports) == 0 {
err = errors.New("identityConfigEnvironment had no ports for: " + identityConfigEnvironment.Name)
}
for _, servicePort := range identityConfigEnvironment.Ports {
//TODO: 8090 is supposed to be set as the common.SidecarEnabledPorts (includeInboundPorts) which we check that in the rollout, but we don't have that information here so assume it is 8090
if servicePort.TargetPort.IntValue() == 8090 {
protocol := util.GetPortProtocol(servicePort.Name)
port.Name = protocol
port.Protocol = protocol
}
}
ports := []*networkingV1Alpha3.ServicePort{port}
return ports, err
}

// getServiceEntryEndpoint constructs the remote or local endpoints of the service entry that
// should be built for the given identityConfigEnvironment.
func getServiceEntryEndpoint(ctxLogger *logrus.Entry, clientCluster string, serverCluster string, ingressEndpoints map[string]*networkingV1Alpha3.WorkloadEntry, identityConfigEnvironment registry.IdentityConfigEnvironment) (*networkingV1Alpha3.WorkloadEntry, error) {
func getServiceEntryEndpoint(
ctxLogger *logrus.Entry,
clientCluster string,
serverCluster string,
ingressEndpoints map[string]*networkingV1Alpha3.WorkloadEntry,
identityConfigEnvironment *registry.IdentityConfigEnvironment) (*networkingV1Alpha3.WorkloadEntry, error) {
//TODO: Verify Local and Remote Endpoints are constructed correctly
var err error
endpoint := ingressEndpoints[serverCluster]
tmpEp := endpoint.DeepCopy()
tmpEp.Labels["type"] = identityConfigEnvironment.Type
if clientCluster == serverCluster {
//Local Endpoint Address if the identity is deployed on the same cluster as it's client and the endpoint is the remote endpoint for the cluster
tmpEp.Address = identityConfigEnvironment.ServiceName + common.Sep + identityConfigEnvironment.Namespace + common.GetLocalDomainSuffix()
for _, servicePort := range identityConfigEnvironment.Ports {
//There should only be one mesh port here (http-service-mesh), but we are preserving ability to have multiple ports
protocol := util.GetPortProtocol(servicePort.Name)
if _, ok := tmpEp.Ports[protocol]; ok {
tmpEp.Ports[protocol] = uint32(servicePort.Port)
ctxLogger.Infof(common.CtxLogFormat, "LocalMeshPort", servicePort.Port, "", serverCluster, "Protocol: "+protocol)
} else {
err = errors.New("failed to get Port for protocol: " + protocol)
for _, service := range identityConfigEnvironment.Services {
if service.Weight == -1 {
// its not a weighted service, which means we should have only one service endpoint
//Local Endpoint Address if the identity is deployed on the same cluster as it's client and the endpoint is the remote endpoint for the cluster
tmpEp.Address = service.Name + common.Sep + identityConfigEnvironment.Namespace + common.GetLocalDomainSuffix()
tmpEp.Ports = service.Ports
break
}
// TODO: this needs fixing, because there can be multiple services when we have weighted endpoints, and this
// will only choose one service
tmpEp.Address = service.Name + common.Sep + identityConfigEnvironment.Namespace + common.GetLocalDomainSuffix()
tmpEp.Ports = service.Ports
}
}
return tmpEp, err
@@ -147,15 +132,16 @@ func getExportTo(ctxLogger *logrus.Entry, clientCluster string, serv
// getExportTo constructs a sorted list of unique namespaces for a given cluster, client assets,
// and cname, where each namespace is where a client asset of the cname is deployed on the cluster. If the cname
// is also deployed on the cluster then the istio-system namespace is also in the list.
func getExportTo(ctxLogger *logrus.Entry, registryClient registry.IdentityConfiguration, clientCluster string, isServerOnClientCluster bool, clientAssets []map[string]string) ([]string, error) {
func getExportTo(ctxLogger *logrus.Entry, registryClient registry.IdentityConfiguration, clientCluster string, isServerOnClientCluster bool, clientAssets map[string]string) ([]string, error) {
clientNamespaces := []string{}
var err error
var clientIdentityConfig registry.IdentityConfig
for _, clientAsset := range clientAssets {
for clientAsset := range clientAssets {
// For each client asset of cname, we fetch its identityConfig
clientIdentityConfig, err = registryClient.GetIdentityConfigByIdentityName(clientAsset["name"], ctxLogger)
clientIdentityConfig, err = registryClient.GetIdentityConfigByIdentityName(clientAsset, ctxLogger)
if err != nil {
ctxLogger.Infof(common.CtxLogFormat, "buildServiceEntry", clientAsset["name"], common.GetSyncNamespace(), "", "could not fetch IdentityConfig: "+err.Error())
// TODO: this should return an error.
ctxLogger.Infof(common.CtxLogFormat, "buildServiceEntry", clientAsset, common.GetSyncNamespace(), "", "could not fetch IdentityConfig: "+err.Error())
continue
}
for _, clientIdentityConfigCluster := range clientIdentityConfig.Clusters {