
Brainstorm meeting (Dec 16th 2020) testing

  • Some aspects of tests are site-specific; how do we deal with that?
    • Option 1: Only define parameterized tests, and require that sites fill in the blanks (in some type of "library") => lots of work for sites
    • Option 2: Only have tests with hardcoded configurations => easy, but not very flexible
    • Option 3: ReFrame "test libraries"
    • Option 4: ...
    • Option 5 (Victor): add site-specific stuff to each test, as needed => pretty big investment
  • Functionality tests may be difficult if the tests behave differently based on
  • "Press-and-play" use vs easy-to-contribute are orthogonal aspects
  • Functionality testing is definitely the more important type of testing
    • Performance testing is sort of secondary (still important though)
    • Performance testing is a lot harder to make portable across sites
  • Auto-detection of site-specific stuff
    • Vasileios suggests using this only to generate a site-specific configuration file, which should then be reviewed/tweaked
  • Generic names for "partitions" to use in tests, so tests that don't work can be skipped by ReFrame?
    • another option is to use tags in tests (to allow filtering of tests)
      • via self.tags = {'cuda'}
      • test filtering is then done via -t option to reframe command
    • partitions are used to indicate on which parts of the system tests should be run (via valid_systems in ReFrame tests)
      • self.valid_systems = ['*:cpu_haswell'] => all systems that have a partition named cpu_haswell
      • partition names are only defined in ReFrame; the mapping to site-specific system aspects is done in the ReFrame config (see the sketch below)
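As an illustration of both mechanisms, here is a minimal sketch of a ReFrame test that combines a generic partition name in valid_systems with a tag for filtering; the partition name gpu_nvidia, the executable and the sanity check are made-up assumptions for this example:

```python
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class GpuSmokeTest(rfm.RunOnlyRegressionTest):
    def __init__(self):
        # generic partition name; the mapping to an actual site partition
        # is done in the (site-specific) ReFrame configuration
        self.valid_systems = ['*:gpu_nvidia']
        self.valid_prog_environs = ['*']
        # tag-based filtering: select this test with `reframe -t cuda ...`
        self.tags = {'cuda'}
        self.executable = 'nvidia-smi'
        self.sanity_patterns = sn.assert_found(r'NVIDIA-SMI', self.stdout)
```

Systems whose configuration does not define a gpu_nvidia partition simply never run this test, and reframe -t cuda -r selects only the tests tagged with cuda.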
  • site_configuration in ReFrame can be generated programmatically
    • specifies valid systems and partitions + (programming) environments
      • this part could be done in a site-agnostic way (with a naming scheme defined by EESSI)
    • also specifies mapping of partitions to site-specific stuff
      • this has to be provided by each site (but should be minimal effort); a sketch of such a configuration follows below
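A minimal sketch of what such a configuration could look like, assuming a made-up site example_site with a Slurm partition haswell; only the generic partition name cpu_haswell would follow an EESSI-defined naming scheme, the rest is the site-specific part that each site provides (or generates):

```python
site_configuration = {
    'systems': [
        {
            'name': 'example_site',            # site-specific
            'hostnames': ['login.*'],          # site-specific
            'modules_system': 'lmod',          # site-specific
            'partitions': [
                {
                    'name': 'cpu_haswell',     # generic name defined by EESSI
                    'scheduler': 'slurm',      # site-specific
                    'launcher': 'srun',        # site-specific
                    'access': ['-p haswell'],  # site-specific
                    'environs': ['default'],
                },
            ],
        },
    ],
    'environments': [
        {'name': 'default', 'cc': 'gcc', 'cxx': 'g++', 'ftn': 'gfortran'},
    ],
}
```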
  • sites could (eventually) choose to run the performance tests with just logging enabled (i.e. without comparing against reference numbers)
    • maybe some sets of performance references could be included with the EESSI tests, so a comparison can be made when a site runs the tests (see the sketch below)
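A rough sketch of what logging-only performance checking could look like, assuming a hypothetical benchmark script benchmark.sh that prints a line like "Performance: 123.4 ns/day": with empty reference thresholds the value ends up in ReFrame's performance log but can never fail the test, and real reference tuples could be plugged in later by EESSI or by the site.

```python
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class PerfLoggingExample(rfm.RunOnlyRegressionTest):
    def __init__(self):
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        self.executable = './benchmark.sh'  # hypothetical benchmark script
        self.sanity_patterns = sn.assert_found(r'Performance:', self.stdout)
        # extract the performance number from the output
        self.perf_patterns = {
            'perf': sn.extractsingle(r'Performance:\s+(\S+)\s+ns/day',
                                     self.stdout, 1, float),
        }
        # logging-only: no lower/upper thresholds, so the value is recorded
        # but never causes a failure; reference values can be added later
        self.reference = {
            '*': {'perf': (0, None, None, 'ns/day')},
        }
```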
  • upcoming changes in ReFrame with "test libraries" will make it a lot simpler to separate the test implementation from site-specific details (tweaks for specific tests, performance reference data, etc.)
  • how "large" should tests be?
    • different sizes; CSCS only does small vs large
    • small tests are usually also used as correctness tests
    • max. runtime for tests should probably be in the order of minutes
  • for now, we'll work in the tests subdirectory of the software-layer repo
    • CI could be done in GitHub Actions (for mickey-mouse tests), but also on generoso (a VM cluster @ CSCS) for larger tests
  • tests could also be run through the EESSI pilot client container we have
    • that's important w.r.t. allowing sites that don't have CernVM-FS installed natively to also run tests...
    • cf. Alan's experiments with GROMACS in EESSI at JUWELS
    • this does require two different test variants (both sketched below):
      • one native variant which prepares the environment by loading modules from EESSI
      • one container variant which includes "module load" commands in the tests to run
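A minimal sketch of those two variants, assuming a hypothetical GROMACS benchmark input (bench.tpr), a hypothetical EESSI client image (eessi-client.sif) and Singularity as the container runtime:

```python
import reframe as rfm
import reframe.utility.sanity as sn


class GromacsBase(rfm.RunOnlyRegressionTest):
    """Shared part of both variants; the sanity check is an assumption."""
    def __init__(self):
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        # GROMACS prints a performance summary at the end of a successful run
        self.sanity_patterns = sn.assert_found(r'Performance:', self.stderr)


@rfm.simple_test
class GromacsNative(GromacsBase):
    """Native variant: the GROMACS module from EESSI is loaded via the host's module system."""
    def __init__(self):
        super().__init__()
        self.modules = ['GROMACS']
        self.executable = 'gmx_mpi'
        self.executable_opts = ['mdrun', '-s', 'bench.tpr']


@rfm.simple_test
class GromacsContainer(GromacsBase):
    """Container variant: the 'module load' happens inside the EESSI client container."""
    def __init__(self):
        super().__init__()
        self.executable = 'singularity'
        self.executable_opts = [
            'exec', 'eessi-client.sif', 'bash', '-c',
            '"module load GROMACS && gmx_mpi mdrun -s bench.tpr"',
        ]
```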