
Certify multi-namespace support #61

Open
LucasRoesler opened this issue Mar 20, 2021 · 11 comments

@LucasRoesler
Member

The certifier currently does not know about multi-namespace support in faas-netes. Additionally, we would like to eventually bring namespace support to faasd. We should update the certifier to validate that multi-namespace support works as expected. It is likely to find bugs for us and help verify when faasd is done with its own version of namespace support.

I think the ideal flow for this is to add a new config or env variable that is a list of namespaces, for example

CERTIFIER_NAMESPACES=testa,testb,testc

when this value is set, the tests (where it makes sense) would run multiple times, once per namespace in the list.

When the value is empty, the certifier should default to its current behavior.
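To make the intended behaviour concrete, here is a small self-contained sketch (`namespacesToTest` is a hypothetical helper name; the empty string stands for the certifier's current default-namespace behaviour):

```go
package main

import "fmt"

// namespacesToTest models the proposal: with CERTIFIER_NAMESPACES set the
// suite runs once per listed namespace; with it empty the certifier keeps
// its current single default run, modelled here as the empty string.
func namespacesToTest(configured []string) []string {
	if len(configured) == 0 {
		return []string{""} // default behaviour: one run, default namespace
	}
	return configured
}

func main() {
	fmt.Println(namespacesToTest([]string{"testa", "testb", "testc"})) // prints "[testa testb testc]"
	fmt.Println(namespacesToTest(nil))
}
```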

@nitishkumar71
Member

nitishkumar71 commented Mar 22, 2021

I will work on it.

@LucasRoesler
Member Author

Cool 👍 @nitishkumar71, I think the first step is to create a PR that adds a config object and parsing from env variables, something like

type Config struct {
   Namespaces []string
}

// FromEnv reads the CERTIFIER_NAMESPACES variable and parses it as a csv string
// (needs the "os" and "strings" imports)
func FromEnv() Config {
   c := Config{}
   if v := os.Getenv("CERTIFIER_NAMESPACES"); v != "" {
      c.Namespaces = strings.Split(v, ",")
   }
   return c
}

Once we have that, we can start using it in the test cases, starting with the deploy test. I think it is simplest to take it all in small steps, essentially one test at a time.

@LucasRoesler
Member Author

@nitishkumar71 just want to follow up on how this is going. The expected change should be extending this section of the existing code

var (
    config           = Config{}
    defaultNamespace = ""
    swarm            = flag.Bool("swarm", false, "helper flag to run only swarm-compatible tests only")
    token            = flag.String("token", "", "authentication Bearer token override, enables auth automatically")
)

func init() {
    flag.StringVar(&config.Gateway, "gateway", "", "set the gateway URL, if empty use the gateway_url env variable")
    flag.BoolVar(
        &config.AuthEnabled,
        "enableAuth",
        false,
        fmt.Sprintf("enable/disable authentication. The auth will be parsed from the default config in %s", filepath.Join(sdkConfig.DefaultDir, sdkConfig.DefaultFile)),
    )
    flag.BoolVar(&config.SecretUpdate, "secretUpdate", true, "enable/disable secret update tests")
    flag.BoolVar(&config.ScaleToZero, "scaleToZero", true, "enable/disable scale from zero tests")
}
to add the new parsing

We don't need to use the new value in the tests yet, since we want to go through each test one by one, but we can add a log line to pretty print the Config here

so that we can see the parsing is working as expected.

@nitishkumar71
Member

@LucasRoesler Sorry for the delay. I have added the logic to get the namespace names from the env variable. Which test do you think would be the best first stage for it?

@LucasRoesler
Member Author

> @LucasRoesler Sorry for the delay. I have added the logic to get the namespace names from the env variable. Which test do you think would be the best first stage for it?

No worries, I will take a look at your PR tomorrow. I wrote up #63 as a suggested first endpoint we should verify.

@LucasRoesler
Member Author

@nitishkumar71 Now that we have the verification for ListNamespaces, I think we need to look at the function lifecycle. I think there are a few ways we can approach this

  1. always pick a non-default namespace, if one is provided in the config. This has the benefit of simplicity. But it means we would need to run the test suite twice to verify the behavior for the empty/default case as well.

  2. we can automatically run the whole suite twice by moving all tests to sub-tests and then running them in a loop, essentially treating it like a table-driven test (see https://dave.cheney.net/2019/05/07/prefer-table-driven-tests), for example

    func TestCertify(t *testing.T) {
    
        testNamespaces := []string{""}
        if len(config.Namespaces) > 0 {
            testNamespaces = append(testNamespaces, config.Namespaces[0])
        }
        
        for _, namespace := range testNamespaces {
            // all of the tests we identify to include 
            // t.Run(fmt.Sprintf("test function deployment in %s", namespace), func(t *testing.T) {
            //    
            // })
        }
    
    }
    
    // then other tests that don't need to verify namespace behavior, really only those methods/endpoints that can't accept a namespace as an input

    this saves some repetition of logic, but it is a little less friendly for running individual tests via the IDE integration.

  3. a variation of (2): identify which tests need to verify namespace behavior (which is most of them) and then do a minimal table-driven test in each of those. It would look similar to (2), but per test. This means we repeat the looping logic in each of these tests, but individual test cases are easier to run via the IDE.

  4. create one "happy path" test for the function lifecycle with a non-default namespace, instead of doubling all of the tests. This requires the least amount of refactoring of existing tests and avoids doubling the test time, while still verifying most of the namespace behavior. However, it will probably miss some things.

  5. some other idea? I am open to alternative suggestions
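As a rough illustration of option (3), assuming the shared config from earlier, each test file could wrap its body in a small per-namespace loop (`runForNamespaces` and `subtestName` are hypothetical names, not existing certifier helpers):

```go
package main

import (
	"fmt"
	"testing"
)

// subtestName labels a namespace-specific sub-test run; the empty string
// stands for the certifier's current default-namespace behaviour.
func subtestName(namespace string) string {
	if namespace == "" {
		return "default namespace"
	}
	return fmt.Sprintf("namespace %s", namespace)
}

// runForNamespaces repeats the body as one sub-test per namespace:
// the default (empty) namespace first, then every configured namespace.
// Each test file calls this once, keeping individual cases runnable
// from the IDE.
func runForNamespaces(t *testing.T, configured []string, body func(t *testing.T, namespace string)) {
	for _, namespace := range append([]string{""}, configured...) {
		namespace := namespace // capture for the closure
		t.Run(subtestName(namespace), func(t *testing.T) {
			body(t, namespace)
		})
	}
}

func main() {
	fmt.Println(subtestName("testa")) // prints "namespace testa"
}
```

A deploy test would then look like `runForNamespaces(t, config.Namespaces, func(t *testing.T, ns string) { /* deploy into ns and assert */ })`.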

@nitishkumar71
Member

nitishkumar71 commented Apr 8, 2021

Ideally, options 3 and 4 look good to me.
The only drawbacks I can think of are:

  1. In option 3 we will add more time to the tests.
  2. In option 4 we will have to add additional repeated code, assuming my understanding is correct.

Possibly we can find some other project that needs to perform similar tests and look at their approach.

@LucasRoesler
Member Author

Ok, let's try option 3 then, with the deploy portion of the tests; they are a natural place to start. I'll create a new issue. In the meantime we can also look at some other approaches and then compare them with how option 3 feels in the deploy tests.

@LucasRoesler
Member Author

@nitishkumar71 now that we have a pattern to work with based on the Deploy tests, this leaves invoke_test.go, scaling_test.go, logs_test.go, and secretCRUD_test.go.

Do you want to do one of those and I can take one of the others?

@nitishkumar71
Member

@LucasRoesler Yes, I can. Do let me know which one I should take.

@LucasRoesler
Member Author

@nitishkumar71 you can take Secrets (#67) or invoke (#69)

I will take logs (#68)
