This project is a simple CI/build bot for patchwork.
Patchwork is a web interface for patches posted to mailing lists, and can also handle test results being reported against said patches.
Currently this project only includes simple checks and build testing, all Linux kernel-centric. Patches are not tested against existing kernel selftests.
Please see the wiki for how to interact with NIPA.
The main goal of NIPA is to minimize the amount of time netdev and BPF maintainers have to spend validating patches.
As soon as patches hit the mailing list, NIPA needs to validate them and report errors to patchwork. If a patch is deemed bad, maintainers can simply discard it in patchwork.
Because of the load generated on the mailing list and the test systems, results are not reported directly to the authors of patches; we don't want to facilitate "post just to get it tested" scenarios.
The system needs to be easy for individual developers to run. The intention is to package it as a container in due course. Having everyone test their patches locally allows for better scaling (no need for big central infrastructure) and hopefully creates an incentive for contributing.
The project is split into multiple programs with different uses.
pw_poller.py fetches emails from patchwork and runs tests in worker threads. There is one worker thread per tree, which enables testing multiple series at a time (although admittedly the concurrency is limited, because pw_poller.py itself also needs the trees to de-mux patches). The poller creates a results directory for each series, with a sub-directory for each patch.
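Schematically, the per-tree worker setup can be pictured roughly like the sketch below. This is an illustration only, not the actual pw_poller.py code; the tree names are examples and test_series() is a stand-in for "apply the series and run the tests in that tree's checkout".

```python
# Illustration of the one-worker-per-tree model; not the actual pw_poller.py
# code, and test_series() is just a stand-in for the real apply-and-test step.
import queue
import threading

TREES = ["net", "net-next"]                     # one worker and one git tree each
work = {tree: queue.Queue() for tree in TREES}

def test_series(tree, series_id):
    print(f"[{tree}] testing series {series_id}")

def worker(tree):
    while True:
        series_id = work[tree].get()            # block until the poller queues work
        test_series(tree, series_id)
        work[tree].task_done()

for tree in TREES:
    threading.Thread(target=worker, args=(tree,), daemon=True).start()

# The poller still has to use the trees to decide which queue a new series
# belongs to, which is what limits the overall concurrency.
work["net"].put(12345)
work["net-next"].put(12346)
for q in work.values():
    q.join()
```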
Once tests are done, another daemon, pw_upload.py, uploads the results to patchwork as checks.
ingest_mdir.py serves the purpose of testing patches locally: it can be pointed at a directory and will run all the checks on the patches that directory contains (patches are expected to be generated by git format-patch). ingest_mdir.py has not been tested in a while, so it's probably broken.
Configuration is read from INI files in the main project directory.
There is a main config file called nipa.config, but each script also accepts script-specific settings (see the sources).
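For example, layered configuration could be read along these lines; the section and key names below are made up, so check the sources for the options each script actually honors.

```python
# Rough illustration of layered INI configuration; the section and key names
# here are invented, see the sources for what each script really reads.
import configparser

config = configparser.ConfigParser()
# Files listed later override earlier ones, so a script-specific file can
# refine whatever the main nipa.config provides (missing files are skipped).
config.read(["nipa.config", "pw_poller.config"])

result_dir = config.get("results", "dir", fallback="/tmp/nipa-results")
print("results will be written to", result_dir)
```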
NIPA supports the org mode file format for easy reading in Emacs, as well as XML-based output.
Tests can either be written in Python, in which case they are passed the Series / Patch objects, or written as scripts, which return 0 on success, 250 on warning, and any other value on error.
Tests also return (or print to a special file descriptor) the info which will be displayed in patchwork's short summary.
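As an illustration of the exit-code convention, a harness could map a script test's result roughly like this; run_test() is hypothetical and it ignores the special file descriptor mentioned above.

```python
# Hypothetical harness for a script-based test; the 0 / 250 / other-value
# convention is described above, run_test() itself is illustrative only.
import subprocess

def run_test(script, patch_file):
    proc = subprocess.run([script, patch_file], capture_output=True, text=True)
    if proc.returncode == 0:
        return "success"
    if proc.returncode == 250:
        return "warning"
    return "fail"

# Example (assuming such a script exists):
# print(run_test("./tests/patch/my_check.sh", "0001-example.patch"))
```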
Series tests are run once on the entire series. pw_upload.py duplicates their results onto each patch, since patchwork does not support "series checks".
Check if the subject prefix contains the tree name. Check that the number of patches in the series does not exceed 15. Check if the series has a cover letter (one is required only if there are more than two patches; otherwise the series is trivial).
Check if any of the patches in the series contains a Fixes tag.
If the tree name does not contain "next", assume the patches target the current release cycle and are therefore expected to be fixes.
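A toy rendition of these series-level checks might look as follows; only the 15-patch limit, the cover-letter rule and the "next" heuristic come from the description above, the function shapes are made up.

```python
# Toy versions of the series-level checks described above.
import re

def check_subject_prefix(series_subject, tree_name):
    # e.g. "[PATCH net-next 0/3] ..." should mention the target tree
    return tree_name in series_subject

def check_patch_count(num_patches, limit=15):
    return num_patches <= limit

def check_cover_letter(num_patches, has_cover):
    return has_cover or num_patches <= 2   # trivial series may skip the cover

def check_fixes_expected(tree_name, patch_bodies):
    if "next" in tree_name:
        return True                        # -next trees don't have to carry fixes
    return any(re.search(r"^Fixes: [0-9a-f]{12,}", body, re.M)
               for body in patch_bodies)

print(check_cover_letter(num_patches=5, has_cover=False))  # -> False
```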
Check that the Signed-off-by tag matches the From field. This test was taken from GregKH's repo, but there are a number of versions of this check circulating.
The check may be a little looser than some expect, because it is satisfied if the author's name or email address matches between From and Signed-off-by, not necessarily both.
The original test also validates that the committer has signed off on the commit as well as the author, which is obviously meaningless when the test infra applies the patches to the tree itself.
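A loose sketch of the name-or-email comparison (this is not GregKH's original script, just an illustration of the behaviour described above):

```python
# Passes if either the name or the email address matches between the From
# header and any Signed-off-by line in the patch body.
import re

def sob_matches_from(from_hdr, body):
    m = re.match(r"(.*)<(.*)>", from_hdr)
    name, email = m.group(1).strip(), m.group(2).strip()
    sobs = re.findall(r"^Signed-off-by:\s*(.*)<(.*)>", body, re.M)
    return any(name == s_name.strip() or email == s_email.strip()
               for s_name, s_email in sobs)

print(sob_matches_from("Jane Dev <jane@example.com>",
                       "Fix the thing\n\nSigned-off-by: Jane Dev <jane@corp.example.com>\n"))
# -> True, because the name matches even though the email does not
```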
Check that the Fixes tag is correct. This test was taken from GregKH's repo, but there are a number of versions of this check circulating.
The hash is expected to be present in the tree to which the patch is being applied. This is a slight departure from GregKH's original, where the hash is checked against Linus's tree.
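The presence check boils down to asking git about the hash, for example like this (the tree path is a placeholder):

```python
# Does the Fixes: hash exist as a commit in the tree the patch targets?
import subprocess

def commit_in_tree(tree_path, sha):
    # `git cat-file -e <sha>^{commit}` exits 0 only if the object exists
    # and resolves to a commit in this repository.
    res = subprocess.run(["git", "-C", tree_path, "cat-file", "-e",
                          sha + "^{commit}"], capture_output=True)
    return res.returncode == 0

# print(commit_in_tree("/path/to/net-next", "1a2b3c4d5e6f"))
```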
Check if the patch adds any inline keywords in C source files.
Try to catch static functions without the inline keyword in headers.
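A very rough approximation of these two checks, scanning only the lines a patch adds (real diff parsing is more involved):

```python
# Flag "inline" in .c files and "static" functions without "inline" in .h
# files among the added lines of a patch. Intentionally simplistic.
import re

def check_inline(filename, added_lines):
    problems = []
    for line in added_lines:
        if filename.endswith(".c") and re.search(r"\binline\b", line):
            problems.append(f"{filename}: 'inline' in a C source file: {line.strip()}")
        if filename.endswith(".h") and re.match(r"\s*static\s+(?!inline\b)\w", line):
            problems.append(f"{filename}: static function in header without 'inline'?")
    return problems

print(check_inline("foo.h", ["static int foo_get(struct foo *f)"]))
```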
Run selected checks from the kernel's scripts/checkpatch.pl on the patches.
Check if an allmodconfig-configured kernel builds with the patch applied. Catch new errors and warnings using the W=1 C=1 flags.
For now the comparison is done only by warning count, so a warning may get silently replaced by a different one.
Check if an allmodconfig-configured kernel builds for 32-bit platforms.
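A simplified view of the warning-count comparison; the log parsing here is intentionally naive and the logs are stand-ins.

```python
# Count warnings in the build logs before and after applying the patch and
# compare only the totals, which is why a warning can be silently swapped
# for a different one without the check noticing.
def count_warnings(build_log):
    return sum(1 for line in build_log.splitlines() if ": warning:" in line)

log_before = "drivers/net/foo.c:10:5: warning: unused variable 'x'\n"
log_after = ("drivers/net/foo.c:10:5: warning: unused variable 'x'\n"
             "drivers/net/foo.c:99:1: warning: no previous prototype for 'bar'\n")

delta = count_warnings(log_after) - count_warnings(log_before)
print("new warnings:", delta)   # -> new warnings: 1
```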
Check if the addresses pointed out by get_maintainer.pl are included in the To/Cc of the emails.
Warn if not all of them are included; error if nobody is included, or if the author of a change blamed by a Fixes tag is missing.
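A sketch of how the To/Cc coverage could be computed; the kernel tree path is a placeholder and the output parsing is deliberately simplistic (real get_maintainer.pl output also carries names, roles and bare list addresses).

```python
# Compare get_maintainer.pl suggestions with the addresses the mail was sent to.
import re
import subprocess

def missing_maintainers(kernel_dir, patch_file, to_cc_addrs):
    out = subprocess.run(["./scripts/get_maintainer.pl", patch_file],
                         cwd=kernel_dir, capture_output=True, text=True).stdout
    wanted = set(re.findall(r"<([^>]+@[^>]+)>", out))     # bare email addresses
    present = {addr.lower() for addr in to_cc_addrs}
    return {w for w in wanted if w.lower() not in present}

# missing = missing_maintainers("/path/to/net-next", "0001-example.patch",
#                               ["netdev@vger.kernel.org", "davem@davemloft.net"])
```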
Run kernel-doc and check for warnings/errors. Similarly to the build tests, only the number of errors is compared for now.
Run get_maintainer.pl --self-test.
Currently disabled because it's extremely slow.
Warn if the patch adds uses of deprecated APIs.
Warn if the patch explicitly CCs the stable tree, which is against netdev policy.
Check for patch attestation (as generated by patatt). Warn when there is no signature or if the key for a signature isn't available. Fail if the signature doesn't match the attestation.
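One possible way to drive patatt from a test is sketched below. Exit code 0 means the message validated; patatt uses distinct non-zero codes for a missing signature, a missing key and a bad signature, but those values are not enumerated here, so this sketch only separates "pass" from "needs a closer look".

```python
# Run patatt validation on a message file and treat exit code 0 as success.
import subprocess

def attestation_ok(mbox_file):
    res = subprocess.run(["patatt", "validate", mbox_file], capture_output=True)
    return res.returncode == 0

# print(attestation_ok("0001-example.patch"))
```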
We'd like to thank Netronome Systems and Meta Platforms for allowing their employees to work on NIPA as part of their employment.
Ideas for future improvements:
- build one-by-one for a PR
- add tree aliases (bpf, bpf-next, ipsec, ipsec-next, etc.)
- run coccicheck
- check reverse xmas tree ordering of variable declarations
- make a better MAINTAINERS check than checkpatch
- add a marker for patches with replies from buildbot
- split the apply try from the test tree
- on a pull request, Fixes tags may point to commits within the pull itself
- series ID injection
- misspell-fixer
- make htmldocs
- split out uploader to separate user
- add async tests