This repository contains algorithms developed by the MaGeZ team submitted to the 2024 PETRIC reconstruction challenge.
- Matthias Ehrhardt, University of Bath, United Kingdom
- Georg Schramm, KU Leuven, Belgium
- Zeljko Kereta, University College London, United Kingdom
Better-preconditioned SVRG based on the works of:
- R. Twyman et al., "An Investigation of Stochastic Variance Reduction Algorithms for Relative Difference Penalized 3D PET Image Reconstruction," IEEE TMI, vol. 42, 2023
- J. Nuyts et al., "A concave prior penalizing relative differences for maximum-a-posteriori reconstruction in emission tomograms," IEEE TNS, vol. 49, 2002
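For reference, the relative difference prior (RDP) penalty from the second reference has the standard form (neighbourhood weights omitted for brevity):

$$
R(x) = \sum_{j} \sum_{k \in N_j} \frac{(x_j - x_k)^2}{x_j + x_k + \gamma \lvert x_j - x_k \rvert},
$$

where $N_j$ is the neighbourhood of voxel $j$ and $\gamma$ controls edge preservation.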
The update can be summarized as a preconditioned SVRG ascent step; a hedged sketch of its general form is given below.
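A minimal sketch, assuming a generic preconditioned SVRG gradient estimator; the actual preconditioner $P$, step size $\alpha_k$, and subset/scaling conventions used in the submission may differ:

$$
x_{k+1} = x_k + \alpha_k \, P(x_k)\, \tilde{g}_k,
\qquad
\tilde{g}_k = \nabla f_{i_k}(x_k) - \nabla f_{i_k}(\bar{x}) + \frac{1}{N}\sum_{i=1}^{N} \nabla f_i(\bar{x}),
$$

where $f_i$ is the objective restricted to subset $i$, $i_k$ is the subset chosen at iteration $k$, $\bar{x}$ is the anchor image at which the full gradient was last evaluated, and $N$ is the number of subsets.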
Similar to ALG1, but the step size is set to 2.2 in the first epoch and is then computed using the Barzilai-Borwein rule described in Algorithm 2 of https://arxiv.org/abs/1605.04131 with m = 1, using the short step size (see https://en.wikipedia.org/wiki/Barzilai-Borwein_method) adapted to preconditioned gradient ascent methods.
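For reference, the standard "short" Barzilai-Borwein step size is shown below; how it is adapted to the preconditioned ascent setting (e.g. via a preconditioner-weighted inner product) is an assumption here, not the exact rule used in the submission:

$$
\alpha_k^{\mathrm{BB,short}} = \frac{s_{k-1}^{\top} y_{k-1}}{y_{k-1}^{\top} y_{k-1}},
\qquad
s_{k-1} = x_k - x_{k-1},
\quad
y_{k-1} = \nabla f(x_k) - \nabla f(x_{k-1}).
$$

For maximisation, the sign of this quantity is typically flipped or its absolute value taken.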
Using the same setup as in ALG2, but with two minor differences. First, we use a slightly smaller number of subsets. Second, we use a non-stochastic selection of subset indices: we consider the cyclic group corresponding to the given number of subsets, find all of its generators (i.e. all integers smaller than the number of subsets that are coprime to it), and then create the index sequence by considering specific generators one at a time.
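As an illustration, here is a small Python sketch of this deterministic ordering; the function name `generator_orderings` is hypothetical, and the submission's actual schedule (e.g. which generator is used in which epoch) may differ:

```python
import math


def generator_orderings(num_subsets: int):
    """Sketch: deterministic subset orderings from the cyclic group Z_num_subsets.

    Each generator g (an integer 1 <= g < num_subsets with gcd(g, num_subsets) == 1)
    yields a full permutation of the subset indices via (k * g) % num_subsets.
    """
    generators = [g for g in range(1, num_subsets) if math.gcd(g, num_subsets) == 1]
    return {g: [(k * g) % num_subsets for k in range(num_subsets)] for g in generators}


# e.g. for 8 subsets the generators are 1, 3, 5, 7;
# generator 3 gives the ordering [0, 3, 6, 1, 4, 7, 2, 5]
print(generator_orderings(8)[3])
```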
The organisers will provide GPU-enabled cloud runners which have access to larger private datasets for evaluation. To gain access, you must register. The organisers will then create a private team submission repository for you.
The organisers will import your submitted algorithm from `main.py` and then run & evaluate it. Please create this file! See the example `main_*.py` files for inspiration.

SIRF, CIL, and CUDA are already installed (using `synerbi/sirf`). Additional dependencies may be specified via `apt.txt`, `environment.yml`, and/or `requirements.txt`.
- (required) `main.py`: must define a `class Submission(cil.optimisation.algorithms.Algorithm)` and a (potentially empty) list of `submission_callbacks`; a hedged sketch is given after this list
- `apt.txt`: passed to `apt install`
- `environment.yml`: passed to `conda install`, e.g.:

  ```yaml
  name: winning-submission
  channels: [conda-forge, nvidia]
  dependencies:
    - cupy
    - cuda-version =11.8
    - pip
    - pip:
      - git+https://github.com/MyResearchGroup/prize-winning-algos
  ```

- `requirements.txt`: passed to `pip install`, e.g.:

  ```
  cupy-cuda11x
  git+https://github.com/MyResearchGroup/prize-winning-algos
  ```
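For orientation only, a minimal hedged sketch of a `main.py`; it assumes the usual CIL `Algorithm` pattern (an `update`/`update_objective` pair and a `configured` flag) and a `data.OSEM_image` attribute, which are assumptions based on the challenge template rather than a guaranteed API:

```python
from cil.optimisation.algorithms import Algorithm

submission_callbacks = []  # potentially empty list of extra callbacks


class Submission(Algorithm):
    """Sketch: start from the provided OSEM image and apply a placeholder update."""

    def __init__(self, data, update_objective_interval=10, **kwargs):
        super().__init__(update_objective_interval=update_objective_interval, **kwargs)
        self.x = data.OSEM_image.clone()  # assumed attribute name on the challenge data object
        self.configured = True            # tells CIL the algorithm is ready to run

    def update(self):
        # one iteration of your reconstruction algorithm goes here
        pass

    def update_objective(self):
        # placeholder objective value for logging
        self.loss.append(0.0)
```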
Tip
You probably should create either an `environment.yml` or a `requirements.txt` file (but not both).
You can also find some example notebooks which should help you with your development.
The organisers will execute (after installing nvidia-docker & downloading https://petric.tomography.stfc.ac.uk/data/ to `/path/to/data`):
```sh
# 1. git clone & cd to your submission repository
# 2. mount `.` to container `/workdir`:
docker run --rm -it --gpus all -p 6006:6006 \
  -v /path/to/data:/mnt/share/petric:ro \
  -v .:/workdir -w /workdir synerbi/sirf:edge-gpu /bin/bash
# 3. install metrics & GPU libraries
conda install monai tensorboard tensorboardx jupytext cudatoolkit=11.8
pip uninstall torch  # monai installs pytorch (CPU), so remove it
pip install tensorflow[and-cuda]==2.14  # last to support cu118
pip install torch --index-url https://download.pytorch.org/whl/cu118
pip install git+https://github.com/TomographicImaging/Hackathon-000-Stochastic-QualityMetrics
# 4. optionally, conda/pip/apt install environment.yml/requirements.txt/apt.txt
# 5. run your submission
python petric.py &
# 6. optionally, serve logs at <http://localhost:6006>
tensorboard --bind_all --port 6006 --logdir ./output
```
See the wiki/Home and wiki/FAQ for more info.
Tip
`petric.py` will effectively execute:

```python
from main import Submission, submission_callbacks  # your submission (`main.py`)
from petric import data, metrics  # our data & evaluation
assert issubclass(Submission, cil.optimisation.algorithms.Algorithm)
Submission(data).run(numpy.inf, callbacks=metrics + submission_callbacks)
```
Warning
To avoid timing out (currently 10 min runtime, will likely be increased a bit for the final evaluation after submissions close), please disable any debugging/plotting code before submitting!
This includes removing any progress/logging from `submission_callbacks` and any debugging from `Submission.__init__`.
- `data` to test/train your `Algorithm`s is available at https://petric.tomography.stfc.ac.uk/data/ and is likely to grow (more info to follow soon)
  - fewer datasets will be available during the submission phase, but more will be available for the final evaluation after submissions close
  - please contact us if you'd like to contribute your own public datasets!
- `metrics` are calculated by `class QualityMetrics` within `petric.py`
  - this does not contribute to your runtime limit
  - effectively, only `Submission(data).run(np.inf, callbacks=submission_callbacks)` is timed
- when using the temporary leaderboard, it is best to:
  - change `Horizontal Axis` to `Relative`
  - untick `Ignore outliers in chart scaling`
  - see the wiki for details
Any modifications to `petric.py` are ignored.