Testing autotest

This document tracks the progress of unit testing across the autotest code base.

Unit test goals

We now have a fair number of unit tests throughout the code, with roughly 30% test coverage. We don't currently have concrete goals to increase coverage, but we are aiming for a few things:

  • Avoid any unit test failures. utils/unittest_suite.py makes it easy to check this across the whole tree; see the documentation on unittest_suite.py for details on running it.
  • When writing brand-new code, include unit tests.
  • Make sure to update tests whenever functionality is added to modules already under test, to keep those modules under test.
  • When working on new, untested modules (or parts of modules), add new unit tests for the code under development. Over time this should bring more of the frequently updated code under test.

Standards

Our current standard is that unit tests should be isolated in their own files, located in the same directory as the class/module they test, and named after that module with a _unittest suffix (e.g. global_config.py has as its unit test global_config_unittest.py). This isolates the test code so it doesn't clutter the production code, and makes it easy to distinguish unittest modules from regular code (for the test runner). Keeping the unittest code in the same directory as the code it tests ensures that the unittest has as similar an environment as possible to the code under test.

We use PyUnit for our framework. It is the Python version of JUnit and works essentially the same way. This means that you need to import unittest into your testing code, and that your test class needs to inherit from unittest.TestCase. Other than that, the only thing you need to remember is that all tests must be methods whose names begin with test (e.g. def test_foo(self):). You can also define the two methods setUp() and tearDown() in your class; setUp is run before every test, and tearDown is run after every test. Unit tests should be capable of being run standalone, which requires the following boilerplate at the beginning and end of the unittest module:

#!/usr/bin/python2.4
import unittest

... All of your unittest code ...

if __name__ == "__main__":
        unittest.main()

The unittest.main() will automatically find all test methods in the unittest class and run them.
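
For concreteness, here is a minimal sketch of what a complete unittest module following these conventions might look like. The class and method names are made up purely for illustration and do not correspond to real autotest code.

#!/usr/bin/python2.4
import unittest


class FrobnicatorTest(unittest.TestCase):
    # setUp is run before every individual test method.
    def setUp(self):
        self.values = [1, 2, 3]

    # tearDown is run after every individual test method.
    def tearDown(self):
        self.values = None

    # Every test method's name must begin with "test".
    def test_sum(self):
        self.assertEqual(sum(self.values), 6)

    def test_max(self):
        self.assertEqual(max(self.values), 3)


if __name__ == "__main__":
    unittest.main()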

For more documentation on PyUnit, see http://www.python.org/doc/2.4.3/lib/module-unittest.

Mocking out dependencies

For mocking out dependencies, we have a powerful and easy to use mocking library in client/common_lib/test_utils/mock.py. We heavily favor appropriate usage of this library in unit tests. See MockHowTo for a nice tutorial.
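
As a rough illustration of the record/playback style this library supports, a test might look something like the sketch below. The exact import paths and method names (mock_god, stub_function, expect_call, check_playback, unstub_all) should be verified against mock.py and the MockHowTo tutorial, and the "code under test" here is just a placeholder.

import unittest
from autotest_lib.client.common_lib import utils
from autotest_lib.client.common_lib.test_utils import mock


class CommandRunnerTest(unittest.TestCase):
    def setUp(self):
        self.god = mock.mock_god()
        # Replace utils.system with a recording stub for the duration of the test.
        self.god.stub_function(utils, 'system')

    def tearDown(self):
        self.god.unstub_all()

    def test_runs_expected_command(self):
        # Record the call we expect the code under test to make.
        utils.system.expect_call('uptime')
        # ... exercise the code under test here; as a stand-in we call it directly ...
        utils.system('uptime')
        # Verify that exactly the recorded calls were made, in order.
        self.god.check_playback()


if __name__ == '__main__':
    unittest.main()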

When defining mock objects explicitly (i.e. not using the mocking library), if the mock classes have broad usage, they should be defined in their own file, located in the same directory as the object/module they mock, and named after that module with a _mock suffix (e.g. ssh_host.py has as its mock ssh_host_mock.py). They need only define as much behavior as is needed by the test. Keeping them in separate files makes it clear that a mock is in use when it is imported by a unittest. If the mock is used only within the test under consideration, then the mock class or functionality can be placed in the unittest file itself, declared at the top of the file.
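
As a small sketch of such a hand-rolled mock living inside a unittest file, consider the following; the host class, the function under test, and all names are hypothetical and only demonstrate implementing just the behavior the test needs.

import unittest


def reboot_if_needed(host, needs_reboot):
    # Toy stand-in for real code under test that drives a host object.
    if needs_reboot:
        host.run('reboot')


class MockHost(object):
    """Implements only the behavior the test needs from a real host."""
    def __init__(self):
        self.commands = []

    def run(self, command):
        # Record the command instead of executing anything remotely.
        self.commands.append(command)


class RebootIfNeededTest(unittest.TestCase):
    def test_reboots_when_needed(self):
        host = MockHost()
        reboot_if_needed(host, True)
        self.assertEqual(host.commands, ['reboot'])

    def test_does_not_reboot_when_not_needed(self):
        host = MockHost()
        reboot_if_needed(host, False)
        self.assertEqual(host.commands, [])


if __name__ == '__main__':
    unittest.main()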

Things that will make all our lives easier

  1. If you see a "from blah import *" while working with code, fix it. Wildcard imports make it much harder to figure out what we need to do to get the code under test.
  2. Document methods to indicate the argument types that are being passed in. We all like to think our code is self-documenting, but that is a lie (see the sketch after this list).
  3. Use better variable names (this does not contradict the point above).
  4. Write unit tests for all new code. If that is not possible, document why, or post a bug indicating what is preventing you from writing unit tests.
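
A short sketch of points 1 and 2 together; the function, its arguments, and the module names in the comment are made up purely for illustration.

import time

# Point 1: prefer explicit imports over wildcards, so it is obvious which
# names a module really depends on (and therefore what a test must stub out).
# Using hypothetical names:
#     bad:  from common_utils import *
#     good: from common_utils import run, poll_until


# Point 2: state the argument types, even when they feel obvious.
def wait_for_host(host, timeout):
    """Block until the host answers, or give up.

    Args:
        host: an object exposing an is_up() method that returns a bool.
        timeout: maximum number of seconds to wait (int or float).

    Returns:
        True if the host came up within the timeout, False otherwise.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if host.is_up():
            return True
        time.sleep(1)
    return False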

Major classes to get under test

  1. SSHHost: this is the main class that needs to be put under test soon.
  2. The parser components -- many of the individual components are under test, but the main algorithm is not yet.
  3. The scheduler components -- the scheduler is presently at 55% test coverage
  4. The new CLI -- new CLI code has excellent (>90%) test coverage
  5. AFE and TKO: the large base of GWT code for these web frontends is completely untested.

Functional Tests

Besides unit tests, we will need a suite of functional tests that ensure good end-to-end coverage of Autotest. Going forward we would like an automated mechanism for running these tests, but to start we will be running them by hand. So we need a first stab at a document which describes the types of tests that we would like to run.

Basic autotest functionality

  1. A simple test that involves doing a clean pull of the client code from the repository and manually running a number of standard tests (e.g. sleeptest) through the client/bin/autotest executable (a sketch of such a run follows this list).
  2. As a second test, we can put only a subset of the client code base on a machine along with a control file that lists deps, and verify that autotest correctly pulls the required test packages and then runs the tests.
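
As an illustration of the first case, the control file for such a run might look like the minimal sketch below. The file name and the exact command line are assumptions and should be checked against the client documentation.

# control.sleeptest -- hypothetical control file for a basic client run,
# invoked by hand with something like:
#     client/bin/autotest control.sleeptest
# (the exact invocation should be verified against the client docs).

# Run the standard sleeptest, sleeping for 60 seconds.
job.run_test('sleeptest', seconds=60)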

Basic autoserv functionality

  1. The simplest server functional test is to run a number of tests (e.g. sleeptest again) through the command-line 'server/autoserv' command on a set of dedicated (and fresh and clean) machines (see the sketch after this list).
  2. We could also check that the various command line arguments that can be passed to autoserv work as expected.
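
A rough sketch of what such a manual autoserv run could look like, modeled on the style of the sample server control files; the machine names, the -m flag, and helper names like hosts.create_host, autotest.Autotest and job.parallel_simple are assumptions that should be verified against the server code.

# server_control.sleeptest -- hypothetical server-side control file,
# invoked by hand with something like:
#     server/autoserv -m testmachine1,testmachine2 server_control.sleeptest
# (flags and paths should be checked against autoserv --help).

def run(machine):
    # Create a host object for the machine and run the client-side sleeptest on it.
    host = hosts.create_host(machine)
    at = autotest.Autotest(host)
    at.run_test('sleeptest', seconds=60)

# Run the above on every machine handed to autoserv, in parallel.
job.parallel_simple(run, machines)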

Functional tests of the web frontend

  1. For now we need to establish a simple protocol for creating a job, monitoring that job, and viewing the results. Later we can make use of Selenium (http://selenium.openqa.org/); a Firefox plugin for Selenium exists that will make creating automated UI tests easy.