diff --git a/docs/Usage/Preprocessing.md b/docs/Usage/Preprocessing.md index 084d3c8..2c4f953 100644 --- a/docs/Usage/Preprocessing.md +++ b/docs/Usage/Preprocessing.md @@ -12,6 +12,7 @@ This module currently allows you to use the following preprocessing methods: - **Pixel size matching**: Rescaling of your tomogram to a similar pixel size as the training data - **Fourier amplitude matching**: Rescaling of Fourier components to pronounce different features in the tomograms (adapted from [DeePiCt](https://github.com/ZauggGroup/DeePiCt)) +- **Deconvolution**: Deconvolution filter to enhance tomogram contrast (described in the [Warp publication](https://www.nature.com/articles/s41592-019-0580-y)). ## Table of Contents - [When to use what?](#when-to-use-what) @@ -26,12 +27,12 @@ This module currently allows you to use the following preprocessing methods: We are still exploring when it makes sense to use which preprocessing technique. But here are already some rules of thumb: -1. Whenever your pixel sizes differs by a lot from around 10-12Å / pixel, you should consider using pixel size matching. We recommend to match to a pixel size of 10Å. +1. Whenever your pixel size differs by a lot from around 10-12Å / pixel, you should consider using pixel size matching. We recommend matching to a pixel size of 10Å.
It is also possible to do this rescaling on-the-fly; see our [segmentation instructions](https://teamtomo.org/membrain-seg/Usage/Segmentation/#on-the-fly-rescaling). 2. The Fourier amplitude matching only works in some cases, depending on the CTFs of input and target tomograms. Our current recommendation is: If you're not satisfied with MemBrain's segmentation performance, why not give the amplitude matching a shot? +3. Deconvolution: This can make sense if your input tomogram has a very low signal-to-noise ratio. We still recommend [Cryo-CARE](https://github.com/juglab/cryoCARE_pip) as a denoising method, but this deconvolution can provide an easy-to-use alternative. -More detailed guidelines are in progress! ## Usage You can control all commands of this preprocessing module by typing `tomo_preprocessing`+ some options. @@ -51,17 +52,31 @@ tomo_preprocessing --help - **match_pixel_size**: Tomogram rescaling to specified pixel size. Example: -`tomo_preprocessing match_pixel_size --input-tomogram --output-path --pixel-size-out 10.0 --pixel-size-in ` +```shell +tomo_preprocessing match_pixel_size --input-tomogram --output-path --pixel-size-out 10.0 --pixel-size-in +``` - **match_seg_to_tomo**: Segmentation rescaling to fit to target tomogram's shape. Example: -`tomo_preprocessing match_seg_to_tomo --seg-path --orig-tomo-path --output-path ` +```shell +tomo_preprocessing match_seg_to_tomo --seg-path --orig-tomo-path --output-path +``` - **extract_spectrum**: Extracts the radially averaged amplitude spectrum from the input tomogram. Example: -`tomo_preprocessing extract_spectrum --input-path --output-path ` +```shell +tomo_preprocessing extract_spectrum --input-path --output-path +``` - **match_spectrum**: Match amplitude of Fourier spectrum from input tomogram to target spectrum. 
Example: -`tomo_preprocessing match_spectrum --input --target --output ` - +```shell +tomo_preprocessing match_spectrum --input --target --output +``` +- **deconvolve**: Apply a deconvolution filter to enhance tomogram contrast. Example: +```shell +tomo_preprocessing deconvolve --input --output --pixel-size +``` ### **Pixel Size Matching** -Pixel size matching is recommended when your tomogram pixel sizes differs strongly from the training pixel size range (roughly 10-14Å). You can perform it using the command +Pixel size matching is recommended when your tomogram pixel size differs strongly from the training pixel size range (roughly 10-14Å).
+**IMPORTANT NOTE**: MemBrain-seg can now also perform the rescaling on-the-fly during segmentation, making the workflow below redundant if you are not interested in the rescaled tomograms. You can check the on-the-fly rescaling at our [segmentation instructions](https://teamtomo.org/membrain-seg/Usage/Segmentation/#on-the-fly-rescaling). + +If you prefer not to do it on-the-fly, you can perform the pixel size matching using the command ```shell tomo_preprocessing match_pixel_size --input-tomogram --output-path --pixel-size-out 10.0 --pixel-size-in @@ -91,4 +106,12 @@ This extracts the radially averaged Fourier spectrum and stores it into a .tsv f ```shell tomo_preprocessing match_spectrum --input --target --output ``` -Now, the input tomograms Fourier components are re-scaled based on the equalization kernel computed from the input tomogram's radially averaged Fourier intensities, and the previously extracted .tsv file. \ No newline at end of file +Now, the input tomogram's Fourier components are re-scaled based on the equalization kernel computed from the input tomogram's radially averaged Fourier intensities and the previously extracted .tsv file. + + +### **Deconvolution** + +Deconvolution can be applied as a single preprocessing step before segmentation using the command ```shell tomo_preprocessing deconvolve --input --output --pixel-size ``` \ No newline at end of file diff --git a/docs/Usage/Segmentation.md b/docs/Usage/Segmentation.md index 993c57a..bf78fdf 100644 --- a/docs/Usage/Segmentation.md +++ b/docs/Usage/Segmentation.md @@ -71,11 +71,17 @@ You can also compute the connected components [after you have segmented your tom ### more membrain segment arguments: -**--tomogram-path**: TEXT Path to the tomogram to be segmented [default: None] +**--tomogram-path:** Path to the tomogram to be segmented [default: None] -**--ckpt-path** TEXT Path to the pre-trained model checkpoint that should be used. 
[default: None] +**--ckpt-path:** Path to the pre-trained model checkpoint that should be used. [default: None] -**--out-folder** TEXT Path to the folder where segmentations should be stored. [default: ./predictions] +**--out-folder:** Path to the folder where segmentations should be stored. [default: ./predictions] + +**--rescale-patches / --no-rescale-patches:** Should patches be rescaled on-the-fly during inference? + +**--in-pixel-size:** Pixel size of your tomogram (only relevant if the --rescale-patches flag is set) + +**--out-pixel-size:** Pixel size to which patches will be rescaled internally (should normally be 10) **--store-probabilities / --no-store-probabilities**: Should probability maps be output in addition to segmentations? [default: no-store-probabilities] @@ -101,6 +107,20 @@ Running MemBrain-seg on a GPU requires at least roughly 8GB of GPU space. ### Emergency tip: In case you don't have enough GPU space, you can also try adjusting the `--sliding-window-size` parameter. By default, it is set to 160. Smaller values will require less GPU space, but also lead to worse segmentation results! +## On-the-fly rescaling +Since v0.0.2, we provide the option to rescale patches on-the-fly during inference. That means that if your tomogram pixel size is very different from our training pixel size (10Å), you do not need to rescale your tomograms to the corresponding pixel size in advance. + +Instead, you can set the `--rescale-patches` flag and membrain-seg will do everything for you internally. + +Example: Your tomogram has pixel size 17.92: +```shell +membrain segment --tomogram-path --ckpt-path --rescale-patches --in-pixel-size 17.92 +``` + +This will rescale small patches of your tomogram internally to 10Å, feed them into our network, and scale the predictions back to the original pixel size. This means your output segmentation mask corresponds directly to your input tomogram. + +Note: MemBrain-seg also automatically reads the pixel size from your tomogram header. 
That means you only need to pass the `--in-pixel-size` flag if your header is corrupt, e.g. after processing in Cryo-CARE. + ## Connected components If you have segmented your tomograms already, but would still like to extract the connected components of the segmentation, you don't need to re-do the segmentation, but can simply use the following command: ```shell @@ -118,6 +138,15 @@ membrain thresholds --scoremap-path ``` In this way, you can pass as many thresholds as you would like and the function will output one segmentation for each. +## Skeletonization +It is now also possible to generate a skeletonized version of the membrane segmentations, similar to the output of [TomoSegMemTV](https://github.com/anmartinezs/pyseg_system/tree/master/code/tomosegmemtv). + +For this, you can use the `membrain skeletonize` command: +```shell +membrain skeletonize --label-path +``` + +You only need to input the path to the segmentation that has been generated by MemBrain-seg. The output of this function will be a skeletonized version of this segmentation. ## Post-Processing If you have pre-processed your tomogram using pixel size matching, you may want to [rescale](./Preprocessing.md#pixel-size-matching) your diff --git a/docs/index.md b/docs/index.md index bd6b648..322e98b 100644 --- a/docs/index.md +++ b/docs/index.md @@ -1,43 +1,49 @@ -# Membrain-Seg -[Membrain-Seg](https://github.com/teamtomo/membrain-seg/) is a Python project developed by [teamtomo](https://github.com/teamtomo) for membrane segmentation in 3D for cryo-electron tomography (cryo-ET). This tool aims to provide researchers with an efficient and reliable method for segmenting membranes in 3D microscopic images. Membrain-Seg is currently under early development, so we may make breaking changes between releases. +# MemBrain-seg +MemBrain-seg is a practical tool for membrane segmentation in cryo-electron tomograms. It's built on the U-Net architecture and makes use of a pre-trained model for efficient performance. 
+The U-Net architecture and training parameters are largely inspired by nnU-Net [2]. -

- -

-# Overview -MemBrain-seg is a practical tool for membrane segmentation in cryo-electron tomograms. It's built on the U-Net architecture and makes use of a pre-trained model for efficient performance. -The U-Net architecture and training parameters are largely inspired by nnUNet1. +Our current best model is available for download [here](https://drive.google.com/file/d/1tSQIz_UCsQZNfyHg0RxD-4meFgolszo8/view?usp=sharing). Please let us know how it works for you. +If the given model does not work properly, you may want to try one of our previous versions: + +Other (older) model versions: +- [v9 -- best model until 10th Aug 2023](https://drive.google.com/file/d/15ZL5Ao7EnPwMHa8yq5CIkanuNyENrDeK/view?usp=sharing) +- [v9b -- model for non-denoised data until 10th Aug 2023](https://drive.google.com/file/d/1TGpQ1WyLHgXQIdZ8w4KFZo_Kkoj0vIt7/view?usp=sharing) -If you wish, you can also train a new model using your own data, or combine it with our available public dataset. (soon to come!) +If you wish, you can also train a new model using your own data, or combine it with our (soon to come!) publicly-available dataset. To enhance segmentation, MemBrain-seg includes preprocessing functions. These help to adjust your tomograms so they're similar to the data our network was trained on, making the process smoother and more efficient. Explore MemBrain-seg, use it for your needs, and let us know how it works for you! + +Preliminary [documentation](https://teamtomo.org/membrain-seg/) is available, but far from perfect. Please let us know if you encounter any issues, and we are more than happy to help (and get feedback on what does not work yet). + ```
+[1] Lamm, L., Zufferey, S., Righetto, R.D., Wietrzynski, W., Yamauchi, K.A., Burt, A., Liu, Y., Zhang, H., Martinez-Sanchez, A., Ziegler, S., Isensee, F., Schnabel, J.A., Engel, B.D., and Peng, T., 2024. MemBrain v2: an end-to-end tool for the analysis of membranes in cryo-electron tomography. bioRxiv, https://doi.org/10.1101/2024.01.05.574336 + +[2] Isensee, F., Jaeger, P.F., Kohl, S.A.A., Petersen, J., Maier-Hein, K.H., 2021. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods 18, 203-211. https://doi.org/10.1038/s41592-020-01008-z ``` # Installation -For detailed installation instructions, please look [here](./installation.md). +For detailed installation instructions, please look [here](https://teamtomo.org/membrain-seg/installation/). # Features ## Segmentation Segmenting the membranes in your tomograms is the main feature of this repository. -Please find more detailed instructions [here](./Usage/Segmentation.md). +Please find more detailed instructions [here](https://teamtomo.org/membrain-seg/Usage/Segmentation/). ## Preprocessing -Currently, we provide the following two [preprocessing](https://github.com/teamtomo/membrain-seg/tree/main/src/tomo_preprocessing) options: -- pixel size matching: Rescale your tomogram to match the training pixel sizes +Currently, we provide the following three [preprocessing](https://github.com/teamtomo/membrain-seg/tree/main/src/membrain_seg/tomo_preprocessing) options: +- Pixel size matching: Rescale your tomogram to match the training pixel sizes - Fourier amplitude matching: Scale Fourier components to match the "style" of different tomograms +- Deconvolution: Denoise the tomogram by applying the deconvolution filter from Warp -For more information, see the [Preprocessing](Usage/Preprocessing.md) subsection. +For more information, see the [Preprocessing](https://teamtomo.org/membrain-seg/Usage/Preprocessing/) subsection. 
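The Fourier amplitude matching option rescales the tomogram's Fourier components per radial frequency shell. As a rough conceptual sketch of that idea (an illustration only — the function names and the plain per-shell equalization kernel are assumptions, not membrain-seg's actual implementation):

```python
# Conceptual sketch of Fourier amplitude matching (NOT membrain-seg's
# actual code): rescale the Fourier components of an input volume so
# that its radially averaged amplitude spectrum approaches a target one.
import numpy as np


def radial_average(amplitudes):
    """Radially average a 3D amplitude spectrum (zero frequency centered)."""
    shape = amplitudes.shape
    grids = np.meshgrid(*[np.arange(s) - s // 2 for s in shape], indexing="ij")
    radii = np.sqrt(sum(g.astype(float) ** 2 for g in grids)).astype(int)
    n_bins = radii.max() + 1
    sums = np.bincount(radii.ravel(), weights=amplitudes.ravel(), minlength=n_bins)
    counts = np.bincount(radii.ravel(), minlength=n_bins)
    return sums / np.maximum(counts, 1)


def match_spectrum(input_vol, target_spectrum):
    """Rescale the input's Fourier amplitudes toward a target radial spectrum."""
    ft = np.fft.fftshift(np.fft.fftn(input_vol))
    input_spectrum = radial_average(np.abs(ft))
    # Equalization kernel: per-shell ratio of target to input amplitudes.
    kernel = target_spectrum[: len(input_spectrum)] / np.maximum(input_spectrum, 1e-12)
    shape = input_vol.shape
    grids = np.meshgrid(*[np.arange(s) - s // 2 for s in shape], indexing="ij")
    radii = np.sqrt(sum(g.astype(float) ** 2 for g in grids)).astype(int)
    ft *= kernel[np.minimum(radii, len(kernel) - 1)]
    return np.fft.ifftn(np.fft.ifftshift(ft)).real
```

In the actual workflow, the target spectrum would come from the .tsv file written by `extract_spectrum`; here it is simply a 1D array of per-shell amplitudes.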
## Model training -It is also possible to use this package to train your own model. Instructions can be found [here](./Usage/Training.md). +It is also possible to use this package to train your own model. Instructions can be found [here](https://teamtomo.org/membrain-seg/Usage/Training/). ## Patch annotations In case you would like to train a model that works better for your tomograms, it may be beneficial to add some more patches from your tomograms to the training dataset. -Recommendations on how to to this can be found [here](Usage/Annotations.md). \ No newline at end of file +Recommendations on how to do this can be found [here](https://teamtomo.org/membrain-seg/Usage/Annotations/). diff --git a/docs/installation.md b/docs/installation.md index cd47231..a91e020 100644 --- a/docs/installation.md +++ b/docs/installation.md @@ -3,14 +3,7 @@ These installation instructions are very preliminary, and surely will not work on all systems. But if any problems come up, do not hesitate to contact us (lorenz.lamm@helmholtz-munich.de). -## Step 1: Clone repository - -Make sure to have git installed, then run ```shell git clone https://github.com/teamtomo/membrain-seg.git ``` - -## Step 2: Create a virtual environment +## Step 1: Create a virtual environment Before running any scripts, you should create a virtual Python environment. In these instructions, we use Miniconda for managing your virtual environments, but any alternative like Conda, Mamba, virtualenv, venv, ... should be fine. @@ -27,18 +20,18 @@ In order to use it, you need to activate the environment: conda activate ``` -## Step 3: Install MemBrain-seg and its dependencies -Move to the folder "membrain-seg" (from the cloned repository in Step 1) that contains the "src" folder. -Here, run +## Step 2: Install membrain-seg via PyPI + +**New:** MemBrain-seg is now pip-installable.
+ +That means you can install membrain-seg by typing ```shell -cd membrain-seg -pip install . +pip install membrain-seg ``` - This will install MemBrain-seg and all dependencies required for segmenting your tomograms. -## Step 4: Validate installation +## Step 3: Validate installation As a first check whether the installation was successful, you can run ```shell membrain ``` @@ -50,7 +43,7 @@ This should display the different options you can choose from MemBrain, like "se

-## Step 5: Download pre-trained segmentation model (optional) +## Step 4: Download pre-trained segmentation model (optional) We recommend to use denoised (ideally Cryo-CARE1) tomograms for segmentation. However, our current best model is available for download [here](https://drive.google.com/file/d/1tSQIz_UCsQZNfyHg0RxD-4meFgolszo8/view?usp=sharing) and should also work on non-denoised data. Please let us know how it works for you. NOTE: Previous model files are not compatible with MONAI v1.3.0 or higher. So if you're using v1.3.0 or higher, consider downgrading to MONAI v1.2.0 or downloading this [adapted version](https://drive.google.com/file/d/1Tfg2Ju-cgSj_71_b1gVMnjqNYea7L1Hm/view?usp=sharing) of our most recent model file.
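The on-the-fly rescaling described in the segmentation instructions (patches resampled from the tomogram's pixel size to the 10 Å training pixel size for inference, then scaled back) can be sketched as follows. This is a conceptual illustration with assumed function names, not the package's internal code:

```python
# Conceptual sketch of on-the-fly patch rescaling (an assumption, not
# membrain-seg's internal implementation): a patch is resampled from the
# tomogram's pixel size (e.g. 17.92 Å) to the training pixel size (10 Å)
# before inference, and the prediction is resampled back so the output
# segmentation aligns with the input tomogram's voxel grid.
import numpy as np
from scipy.ndimage import zoom


def rescale_patch(patch, pixel_size_in, pixel_size_out=10.0, order=1):
    """Resample a 3D patch so its voxels have size `pixel_size_out` (Å)."""
    factor = pixel_size_in / pixel_size_out
    return zoom(patch, factor, order=order)


patch = np.zeros((32, 32, 32), dtype=np.float32)
# Resample to a finer 10 Å grid for the network ...
fine = rescale_patch(patch, pixel_size_in=17.92)
# ... and back to the original 17.92 Å grid after inference.
back = rescale_patch(fine, pixel_size_in=10.0, pixel_size_out=17.92)
```

Trilinear interpolation (`order=1`) is used here as a neutral default; the round trip returns an array with the original patch shape, which is why the final mask corresponds directly to the input tomogram.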