T-MIDAS was created with a focus on the reproducibility of batch image processing and quantification:
- Batch processing pipelines for image format conversion, preprocessing, segmentation, and ROI analysis
- Executable with a simple, text-based user interface
- Runs on any low-end workstation with a single GPU
- Modular and simple codebase with few dependencies for easy maintenance
- Supported imaging modalities: Confocal microscopy, slidescanner, multicolor, brightfield
- Logs all your workflows and parameter choices to a simple CSV file (see the sketch after this list)
- You can fork this repository to adapt the batch processing scripts to your own image analysis workflows
- Quick installation
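As a rough illustration of the logging idea only (the file name, columns, and helper function below are assumptions, not T-MIDAS's actual log format), appending one CSV row per batch run could look like this:

```python
# Illustrative only: the log file name and column names are assumptions,
# not the CSV schema that T-MIDAS actually writes.
import csv
from datetime import datetime
from pathlib import Path

def log_run(logfile: Path, workflow: str, parameters: dict) -> None:
    """Append one row describing a batch run to a CSV log."""
    write_header = not logfile.exists()
    with logfile.open("a", newline="") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(["timestamp", "workflow", "parameters"])
        writer.writerow([datetime.now().isoformat(), workflow, parameters])

# Hypothetical workflow name and parameters, for illustration only.
log_run(Path("tmidas_log.csv"), "segment_blobs", {"threshold": "otsu", "ndim": 3})
```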
T-MIDAS is built on established image processing libraries such as scikit-image, py-clesperanto and CuPy.
All dependencies are listed here.
See selected references for more information.
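As a hedged sketch of the kind of batch step these libraries enable (folder names are placeholders, and this is not code from T-MIDAS itself), a scikit-image-only Otsu-threshold-and-label loop over a folder of images might look like this:

```python
# Minimal sketch with placeholder paths: Otsu-threshold and label every 2D TIFF
# in a folder with scikit-image, writing one label image per input image.
from pathlib import Path

from skimage import filters, io, measure

input_dir = Path("images")    # placeholder folder of 2D TIFFs
output_dir = Path("labels")   # placeholder output folder
output_dir.mkdir(exist_ok=True)

for tif in sorted(input_dir.glob("*.tif")):
    image = io.imread(tif)
    mask = image > filters.threshold_otsu(image)   # automatic (Otsu) threshold
    labels = measure.label(mask)                   # connected-component instance labels
    io.imsave(output_dir / tif.name, labels.astype("uint16"), check_contrast=False)
```

py-clesperanto and CuPy provide GPU-accelerated counterparts for many of the same array operations.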
The text-based user interface currently offers the following workflows:

- [1] Image Preprocessing
  - [1] File Conversion to TIFF
    - [1] Convert .ndpi
    - [2] Convert bioformats-compatible series images (.lif, .czi, ...)
    - [3] Convert brightfield .czi
  - [2] Cropping Largest Objects from Images /w Segment Anything
    - [1] Slidescanner images (fluorescent, .ndpi)
    - [2] Slidescanner images (brightfield, .ndpi)
    - [3] Multicolor image stacks (.lif)
  - [3] Extract intersecting regions of two images
  - [4] Sample Random Image Subregions
  - [5] Enhance contrast of single color image using CLAHE
  - [6] Restore images /w Cellpose 3 (single or multiple color channel, 2D or 3D, also time series)
  - [7] Split color channels (2D or 3D, also time series)
  - [8] Merge color channels (2D or 3D, also time series)
  - [9] Convert RGB images to label images
- [2] Image Segmentation
  - [1] Segment bright spots (2D or 3D, also time series)
  - [2] Segment blobs (2D or 3D, also time series)
    - [1] User-defined or automatic (Otsu) thresholding
    - [2] Cellpose's (generalist) cyto3 model
  - [4] Semantic segmentation (2D; fluorescence or brightfield)
  - [5] Semantic segmentation (2D; Segment Anything)
  - [6] Semantic segmentation (3D; requires dark background and good SNR)
  - [7] Improve instance segmentation using CLAHE
- [3] Regions of Interest (ROI) Analysis
  - [1] Heart slices: Add 100um boundary zone to [intact+injured] ventricle masks
  - [2] Count spots within ROI (2D)
  - [3] Count blobs within ROI (3D)
  - [4] Count Colocalization of ROI in 2 or 3 color channels
  - [5] Get properties of objects within ROI (two channels)
  - [6] Get basic ROI properties (single channel)
- [4] Image Segmentation Validation
  - [1] Validate spot counts (2D)
  - [2] Validate blobs (2D or 3D; global F1 score)
- [n] Start Napari (with useful plugins)
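To make the ROI analysis entries above more concrete, here is a hedged conceptual sketch (placeholder file names; not the script T-MIDAS runs) of counting blobs whose centroids fall inside an ROI mask:

```python
# Conceptual sketch with placeholder file names: count labeled blobs whose
# centroid falls inside a region of interest. Works for 2D or 3D label images,
# assuming both inputs are integer label images of matching shape.
from skimage import io, measure

roi = io.imread("roi_labels.tif") > 0    # ROI mask (placeholder file name)
blobs = io.imread("blob_labels.tif")     # blob label image (placeholder file name)

count = 0
for region in measure.regionprops(blobs):
    centroid = tuple(int(round(c)) for c in region.centroid)
    if roi[centroid]:                    # blob centroid lies inside the ROI
        count += 1
print(f"blobs within ROI: {count}")
```

Counting by centroid rather than by pixel overlap keeps each blob assigned to at most one ROI.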
In development:
- AI for ROI detection in brightfield and fluorescence images
- Code stability
A prerequisite is the Conda package and environment management system.
The minimal Conda installer miniforge is preferable for its simplicity and speed.
After installing miniforge, you can use mamba in your Linux terminal. Next, download the T-MIDAS repository, either with
`git clone https://github.com/MercaderLabAnatomy/T-MIDAS.git`
or by downloading and unpacking the ZIP. In your terminal, change directory to the T-MIDAS folder and type
`python ./scripts/install_dependencies.py`
This will create the T-MIDAS environment and install all its dependencies.
To start the text-based user interface in your terminal, change directory to the T-MIDAS folder and type
`python ./scripts/user_welcome.py`