This repository contains all of the code required to replicate my (Matt Bilton’s) Masters thesis results on Bayesian Optimal Experimental Design (OED). The code in this repository depends on four Python packages which I wrote over the course of my Masters:

- `arraytainers`, which are basically dictionaries or lists of Numpy arrays that act like arrays under some circumstances (e.g. when being added together, or when being acted on by a Numpy function), but which act like dictionaries/lists under other circumstances (e.g. when accessing an element using a key).
- `approx_post`, which is used to create amortised variational inference posterior approximations.
- `oed_toolbox`, which provides some simple wrapper routines to compute and optimise some common OED criteria.
- `surrojax_gp`, which is used to create simple Gaussian Process (GP) surrogate models.
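To give a feel for the `arraytainers` idea, here is a minimal sketch of the concept written with plain dictionaries and `jax.tree_util`; it is purely illustrative and does not use the actual `arraytainers` API.

```python
import jax.numpy as jnp
from jax import tree_util

# Two 'containers' of arrays keyed by name (illustrative only - the real
# arraytainers package wraps this behaviour in a single container class).
a = {'theta': jnp.array([1.0, 2.0]), 'noise': jnp.array(0.1)}
b = {'theta': jnp.array([0.5, 0.5]), 'noise': jnp.array(0.05)}

# Array-like behaviour: apply an elementwise operation across every entry.
summed = tree_util.tree_map(lambda x, y: x + y, a, b)

# Dictionary-like behaviour: access an individual entry by key.
print(summed['theta'])  # [1.5 2.5]
```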
There are five items of note within the main repository:

- Folders named `chapter_i`, each of which contains the code and figures corresponding to Chapter `i` of the thesis.
- The `beam` and `breast` folders, which contain all of the code required to run the Dolfinx hyperelasticity simulations for the cantilever beam and breast. Unlike the `chapter_i` folders, the code in these folders must be run in the `dolfinx/lab` Docker container; we’ll explain how to do this shortly.
- `computation_helpers.py`, which contains helper functions for computations repeated across different notebooks (e.g. computing approximate posterior distributions through simple quadrature; see the sketch after this list).
- `plotting_helpers.py`, which contains helper functions to produce plots repeated across different notebooks (e.g. plotting multiple probability distributions against one another).
- `requirements.txt`, which contains a list of the dependencies one must install to run the code in this repository. Importantly, these do not include the dependencies required to run the code in the `fenics_models` folder: once again, the code in this folder must be run in the dolfinx Docker container.
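As an illustration of the kind of quadrature-based posterior computation `computation_helpers.py` refers to, here is a minimal sketch for a scalar parameter on a grid; the function and model below are hypothetical and are not taken from the repository.

```python
import jax.numpy as jnp
from jax.scipy.stats import norm

def grid_posterior(y_obs, theta_grid, prior_logpdf, noise_std=0.1):
    # Hypothetical example: the 'model' is simply the identity map theta -> y.
    log_lik = norm.logpdf(y_obs, loc=theta_grid, scale=noise_std)
    log_post = log_lik + prior_logpdf(theta_grid)
    post = jnp.exp(log_post - jnp.max(log_post))   # subtract max for numerical stability
    dtheta = theta_grid[1] - theta_grid[0]
    return post / (jnp.sum(post) * dtheta)         # normalise by simple quadrature

theta_grid = jnp.linspace(-3.0, 3.0, 501)
posterior = grid_posterior(y_obs=0.7, theta_grid=theta_grid,
                           prior_logpdf=lambda t: norm.logpdf(t, 0.0, 1.0))
```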
Within each of the `chapter_i` folders, as well as within the `fenics_models` folder, one can find:

- A series of Jupyter notebooks, along with the `json` data produced by those notebooks. Each notebook corresponds to a particular piece of analysis performed within the chapter, and the notebooks are numbered to indicate the order in which they were initially run. Although the notebooks have a definite ordering, they don’t need to be re-run in this order, since the outputs produced by each notebook (which may be required for other notebooks to run) have already been saved. For example, notebook `[3]` in the `chapter_5` folder requires access to the `nonlinear_beam_gp.json` file produced by notebook `[4]` in the `chapter_4` folder; this data, however, has already been saved within the `chapter_4` folder.
- A `figs` folder, which contains all of the raw images produced by the Jupyter notebooks in that folder, along with figures created by ‘combining’ these raw images and ‘hand-drawn’ images created using Inkscape. The figures in these `figs` folders are organised into further subcategory folders.
In addition to what was previously mentioned, the `fenics_models` folder also contains:

- `fenics_helpers.py`, which defines a series of helper functions used by the notebooks in this folder.
- A `data` folder, which contains all of the data produced by the notebooks in this folder.
The set-up steps required to run the code in this repository depend on whether or not you wish to run the Fenics code included in the `fenics_models` folder:

- If you don’t want to run the Fenics code, you don’t need to worry about creating a Docker container.
- If you do want to run the Fenics code, you will need to set up the appropriate Docker container.

Let’s now go over these two situations in more detail.
If you’re not interested in running the Fenics code, it’s sufficient just to install the requirements listed in the `requirements.txt` file; this can be done by navigating your terminal to the repository and executing:

`pip install -r requirements.txt`
Importantly, since this repository uses the Jax package (which you can read about here), you need to make sure your system is capable of installing Jax, which is described in detail here. Put simply, running Jax basically requires you to use a Linux system. If you’re using a Windows-based system, we highly recommend you install Windows Subsystem for Linux (WSL), which will provide you with a light-weight version of Ubuntu within which you can run this repository’s code; instructions on how to install WSL can be found here.
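To quickly check that Jax is working after installing the requirements, a snippet along the following lines should run without errors; it is just an illustration and is not part of the repository.

```python
import jax
import jax.numpy as jnp

# Differentiate a trivial function to confirm Jax's autodiff is working.
f = lambda x: jnp.sum(x ** 2)
grad_f = jax.grad(f)

print(jax.devices())                        # lists the devices Jax can see (CPU/GPU)
print(grad_f(jnp.array([1.0, 2.0, 3.0])))   # expected output: [2. 4. 6.]
```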
To run the Fenics code included in this repository, you’ll need to use the `dolfinx/lab` Docker image, which can be found here. Unfortunately, the Fenics project has a habit of regularly updating the dolfinx API (which you can read about here) in a backwards-incompatible manner. Consequently, we cannot guarantee that the code in the `fenics_models` folder will work with the current version of the `dolfinx/lab` image. To get around this, we’ve created our own Docker image which contains the appropriate version of the `dolfinx/lab` image to run the `fenics_models` code; this image can be found here. This image also contains all of the dependencies listed in the `requirements.txt` file. Although the instructions to install this Docker image can be found in the `README` of the aforementioned Docker Hub repository, we’ll also give them here:
- Install Docker; instructions on how to do this can be found here.
- Clone this repository by running:
  `git clone https://github.com/MABilton/bayesian_oed_bilton_masters`
- Navigate your terminal to inside of the cloned repository.
- Run the command:
  `docker run --init -ti -p 8888:8888 -v "$(pwd)":/root/shared mabilton/bayesian_oed_bilton_masters:latest`
  This will download the Docker image, create an accompanying container, and then launch that container. Note that Docker must be running for this command to work.
- Open the Jupyter notebook link which appears in the terminal. If the first link doesn’t work, try using the second.
- Run as many of the notebooks as you see fit – no other installations should be required.
- Once you’re done, the container can simply be closed with `Ctrl + C` in your terminal.
- Should you want to open the container again from where you left off, just run the command:
  `docker start -i CONTAINER_ID`
  where `CONTAINER_ID` is the ID of the Docker container created by the `docker run` command above. To find the ID of your container, execute `docker ps -a` in your terminal; the `CONTAINER_ID` will be shown in the first column, next to the name of the image (i.e. `mabilton/bayesian_oed_bilton_masters:latest`).