feat(TOPUP): an initial implementation of SD estimation.
Adds a new subworkflow based on FSL TOPUP to integrate susceptibility
distortion (SD) estimation for the ds001771 dataset.

- [x] Pin niworkflows to current master (until I release 1.2.0rc5, which
  contains nipreps/niworkflows#503 and nipreps/niworkflows#504, both
  used here).
- [x] Create a new SDC estimation workflow, with the expectation of
  upstreaming it to SDCFlows.
- [x] Implement the bare bones of what nipreps/sdcflows#101 could look
  like (a rough sketch follows below). Also to be upstreamed to SDCFlows when mature.
- [x] Stick TOPUP from FSL 6.0.3 into the Docker image, since topup from
  FSL 5.0.x is really unstable (for instance, it fails with a
  segmentation fault on the workflow for ds001771).

Resolves: nipreps#92
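
For context, a minimal sketch of the shape such a TOPUP-based PEPolar estimator could take. This is not the committed implementation (that lives under `dmriprep/workflows/fmap/`); the node names, the `inputnode`/`outputnode` fields, and the `encoding_direction`/`readout_times` values are illustrative assumptions, while the nipype/FSL interfaces are real:

```python
from nipype.pipeline import engine as pe
from nipype.interfaces import utility as niu, fsl


def init_pepolar_topup_sketch_wf(name="pepolar_topup_sketch_wf"):
    """Sketch: estimate the B0 field from opposed phase-encoding references."""
    wf = pe.Workflow(name=name)
    inputnode = pe.Node(
        niu.IdentityInterface(fields=["dwi_reference", "dwi_mask"]),
        name="inputnode")
    outputnode = pe.Node(
        niu.IdentityInterface(fields=["fieldmap", "corrected_ref"]),
        name="outputnode")

    # Concatenate the opposed phase-encoding ("blip-up/blip-down")
    # references into a single 4D file, as TOPUP expects.
    merge_blips = pe.Node(fsl.Merge(dimension="t"), name="merge_blips")

    # TOPUP estimates the off-resonance field from the blips; in a real
    # implementation, encoding_direction and readout_times would be
    # derived from the BIDS metadata of each run.
    topup = pe.Node(
        fsl.TOPUP(config="b02b0.cnf",
                  encoding_direction=["y-", "y"],
                  readout_times=[0.05, 0.05]),
        name="topup")

    wf.connect([
        (inputnode, merge_blips, [("dwi_reference", "in_files")]),
        (merge_blips, topup, [("merged_file", "in_file")]),
        (topup, outputnode, [("out_field", "fieldmap"),
                             ("out_corrected", "corrected_ref")]),
    ])
    return wf
```

In the actual changes below, the estimator is instantiated once per subject with `init_fmap_estimation_wf(subject_data["dwi"])` and fed the per-run references collected by a `JoinNode`.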
oesteban committed May 5, 2020
1 parent 634d89a commit 8a020a6
Showing 15 changed files with 315 additions and 22 deletions.
Binary file added .docker/fsl-6.0/bin/topup
Binary file added .docker/fsl-6.0/lib/libgfortran.so.3
Binary file added .docker/fsl-6.0/lib/libopenblas.so.0
3 changes: 3 additions & 0 deletions Dockerfile
@@ -92,6 +92,9 @@ ENV FSLDIR="/usr/share/fsl/5.0" \
AFNI_PLUGINPATH="/usr/lib/afni/plugins"
ENV PATH="/usr/lib/fsl/5.0:/usr/lib/afni/bin:$PATH"

+COPY .docker/fsl-6.0/bin/topup /usr/share/fsl/5.0/bin/topup
+COPY .docker/fsl-6.0/lib/* /usr/lib/fsl/5.0/

# Installing ANTs 2.2.0 (NeuroDocker build)
ENV ANTSPATH=/usr/lib/ants
RUN mkdir -p $ANTSPATH && \
13 changes: 13 additions & 0 deletions dmriprep/config/reports-spec.yml
@@ -26,6 +26,19 @@ sections:
    caption: Surfaces (white and pial) reconstructed with FreeSurfer (<code>recon-all</code>)
      overlaid on the participant's T1w template.
    subtitle: Surface reconstruction
+- name: Fieldmaps
+  ordering: session,acquisition,run
+  reportlets:
+  - bids: {datatype: dwi, desc: pepolar, suffix: dwi}
+    caption: Inhomogeneities of the *B0* field introduce (oftentimes severe) spatial distortions
+      along the phase-encoding direction of the image. Utilizing two or more images with different
+      phase-encoding polarities (PEPolar) or directions, it is possible to estimate the inhomogeneity
+      of the field. The plot below shows a reference EPI (echo-planar imaging) volume generated
+      using two or more EPI images with varying phase-encoding blips.
+    description: Hover on the panels with the mouse pointer to also visualize the intensity of the
+      inhomogeneity of the field in Hertz.
+    static: false
+    subtitle: "Susceptibility-derived Distortion Correction (SDC): field inhomogeneity estimation"
- name: Diffusion
  ordering: session,acquisition,run
  reportlets:
26 changes: 26 additions & 0 deletions dmriprep/data/flirtsch/b02b0.cnf
@@ -0,0 +1,26 @@
# Resolution (knot-spacing) of warps in mm
--warpres=20,16,14,12,10,6,4,4,4
# Subsampling level (a value of 2 indicates that a 2x2x2 neighbourhood is collapsed to 1 voxel)
--subsamp=2,2,2,2,2,1,1,1,1
# FWHM of gaussian smoothing
--fwhm=8,6,4,3,3,2,1,0,0
# Maximum number of iterations
--miter=5,5,5,5,5,10,10,20,20
# Relative weight of regularisation
--lambda=0.005,0.001,0.0001,0.000015,0.000005,0.0000005,0.00000005,0.0000000005,0.00000000001
# If set to 1 lambda is multiplied by the current average squared difference
--ssqlambda=1
# Regularisation model
--regmod=bending_energy
# If set to 1 movements are estimated along with the field
--estmov=1,1,1,1,1,0,0,0,0
# 0=Levenberg-Marquardt, 1=Scaled Conjugate Gradient
--minmet=0,0,0,0,0,1,1,1,1
# Quadratic or cubic splines
--splineorder=3
# Precision for calculation and storage of Hessian
--numprec=double
# Linear or spline interpolation
--interp=spline
# If set to 1 the images are individually scaled to a common mean intensity
--scale=1
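
These configuration files are registration schedules that TOPUP consumes through its `--config` option (the variants below differ only in their `--subsamp` and `--lambda` schedules). A hedged example of driving TOPUP with this schedule from nipype; file names are placeholders, and a configuration name given without a path is also resolved against FSL's standard directory (`$FSLDIR/etc/flirtsch`):

```python
from nipype.interfaces import fsl

topup = fsl.TOPUP()
topup.inputs.in_file = "blipped_b0s.nii.gz"   # 4D file, one volume per PE blip
topup.inputs.encoding_file = "acqparams.txt"  # one row per volume: PE vector + readout time
topup.inputs.config = "b02b0.cnf"             # the schedule defined above
result = topup.run()                          # requires FSL on $PATH

# result.outputs.out_field     -> estimated field, in Hz
# result.outputs.out_corrected -> unwarped input volumes
```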
26 changes: 26 additions & 0 deletions dmriprep/data/flirtsch/b02b0_1.cnf
@@ -0,0 +1,26 @@
# Resolution (knot-spacing) of warps in mm
--warpres=20,16,14,12,10,6,4,4,4
# Subsampling level (a value of 2 indicates that a 2x2x2 neighbourhood is collapsed to 1 voxel)
--subsamp=1,1,1,1,1,1,1,1,1
# FWHM of gaussian smoothing
--fwhm=8,6,4,3,3,2,1,0,0
# Maximum number of iterations
--miter=5,5,5,5,5,10,10,20,20
# Relative weight of regularisation
--lambda=0.0005,0.0001,0.00001,0.0000015,0.0000005,0.0000005,0.00000005,0.0000000005,0.00000000001
# If set to 1 lambda is multiplied by the current average squared difference
--ssqlambda=1
# Regularisation model
--regmod=bending_energy
# If set to 1 movements are estimated along with the field
--estmov=1,1,1,1,1,0,0,0,0
# 0=Levenberg-Marquardt, 1=Scaled Conjugate Gradient
--minmet=0,0,0,0,0,1,1,1,1
# Quadratic or cubic splines
--splineorder=3
# Precision for calculation and storage of Hessian
--numprec=double
# Linear or spline interpolation
--interp=spline
# If set to 1 the images are individually scaled to a common mean intensity
--scale=1
26 changes: 26 additions & 0 deletions dmriprep/data/flirtsch/b02b0_2.cnf
@@ -0,0 +1,26 @@
# Resolution (knot-spacing) of warps in mm
--warpres=20,16,14,12,10,6,4,4,4
# Subsampling level (a value of 2 indicates that a 2x2x2 neighbourhood is collapsed to 1 voxel)
--subsamp=2,2,2,2,2,1,1,1,1
# FWHM of gaussian smoothing
--fwhm=8,6,4,3,3,2,1,0,0
# Maximum number of iterations
--miter=5,5,5,5,5,10,10,20,20
# Relative weight of regularisation
--lambda=0.005,0.001,0.0001,0.000015,0.000005,0.0000005,0.00000005,0.0000000005,0.00000000001
# If set to 1 lambda is multiplied by the current average squared difference
--ssqlambda=1
# Regularisation model
--regmod=bending_energy
# If set to 1 movements are estimated along with the field
--estmov=1,1,1,1,1,0,0,0,0
# 0=Levenberg-Marquardt, 1=Scaled Conjugate Gradient
--minmet=0,0,0,0,0,1,1,1,1
# Quadratic or cubic splines
--splineorder=3
# Precision for calculation and storage of Hessian
--numprec=double
# Linear or spline interpolation
--interp=spline
# If set to 1 the images are individually scaled to a common mean intensity
--scale=1
26 changes: 26 additions & 0 deletions dmriprep/data/flirtsch/b02b0_4.cnf
@@ -0,0 +1,26 @@
# Resolution (knot-spacing) of warps in mm
--warpres=20,16,14,12,10,6,4,4,4
# Subsampling level (a value of 2 indicates that a 2x2x2 neighbourhood is collapsed to 1 voxel)
--subsamp=4,4,2,2,2,1,1,1,1
# FWHM of gaussian smoothing
--fwhm=8,6,4,3,3,2,1,0,0
# Maximum number of iterations
--miter=5,5,5,5,5,10,10,20,20
# Relative weight of regularisation
--lambda=0.035,0.006,0.0001,0.000015,0.000005,0.0000005,0.00000005,0.0000000005,0.00000000001
# If set to 1 lambda is multiplied by the current average squared difference
--ssqlambda=1
# Regularisation model
--regmod=bending_energy
# If set to 1 movements are estimated along with the field
--estmov=1,1,1,1,1,0,0,0,0
# 0=Levenberg-Marquardt, 1=Scaled Conjugate Gradient
--minmet=0,0,0,0,0,1,1,1,1
# Quadratic or cubic splines
--splineorder=3
# Precision for calculation and storage of Hessian
--numprec=double
# Linear or spline interpolation
--interp=spline
# If set to 1 the images are individually scaled to a common mean intensity
--scale=1
42 changes: 31 additions & 11 deletions dmriprep/workflows/base.py
@@ -12,13 +12,14 @@
BIDSInfo, BIDSFreeSurferDir
)
from niworkflows.utils.misc import fix_multi_T1w_source_name
+from niworkflows.utils.spaces import Reference
from smriprep.workflows.anatomical import init_anat_preproc_wf

-from niworkflows.utils.spaces import Reference
from ..interfaces import DerivativesDataSink, BIDSDataGrabber
from ..interfaces.reports import SubjectSummary, AboutSummary
from ..utils.bids import collect_data
from .dwi.base import init_early_b0ref_wf
+from .fmap.base import init_fmap_estimation_wf


def init_dmriprep_wf():
@@ -244,21 +245,27 @@ def init_single_subject_wf(subject_id):
anat_preproc_wf.__postdesc__ = (anat_preproc_wf.__postdesc__ or "") + f"""
Diffusion data preprocessing
-: For each of the {len(subject_data["dwi"])} dwi scans found per subject
-(across all sessions), the following preprocessing was performed."""
+: For each of the {len(subject_data["dwi"])} DWI scans found per subject
+(across all sessions), the gradient table was vetted and converted into the *RASb*
+format (i.e., given in RAS+ scanner coordinates, normalized b-vectors and scaled b-values),
+and a *b=0* average for reference to the subsequent steps of preprocessing was calculated.
+"""

layout = config.execution.layout
+dwi_data = tuple([
+    (dwi, layout.get_metadata(dwi), layout.get_bvec(dwi), layout.get_bval(dwi))
+    for dwi in subject_data["dwi"]
+])
inputnode = pe.Node(niu.IdentityInterface(fields=["dwi_data"]),
                    name="inputnode")
-inputnode.iterables = [(
-    "dwi_data", tuple([
-        (dwi, layout.get_bvec(dwi), layout.get_bval(dwi),
-         layout.get_metadata(dwi)["PhaseEncodingDirection"])
-        for dwi in subject_data["dwi"]
-    ])
-)]
+inputnode.iterables = [("dwi_data", dwi_data)]

+referencenode = pe.JoinNode(niu.IdentityInterface(
+    fields=["dwi_file", "metadata", "dwi_reference", "dwi_mask", "gradients_rasb"]),
+    name="referencenode", joinsource="inputnode", run_without_submitting=True)
+
split_info = pe.Node(niu.Function(
-    function=_unpack, output_names=["dwi_file", "bvec", "bval", "pedir"]),
+    function=_unpack, output_names=["dwi_file", "metadata", "bvec", "bval"]),
    name="split_info", run_without_submitting=True)

early_b0ref_wf = init_early_b0ref_wf()
@@ -267,8 +274,21 @@ def init_single_subject_wf(subject_id):
(split_info, early_b0ref_wf, [("dwi_file", "inputnode.dwi_file"),
                              ("bvec", "inputnode.in_bvec"),
                              ("bval", "inputnode.in_bval")]),
+(split_info, referencenode, [("dwi_file", "dwi_file"),
+                             ("metadata", "metadata")]),
+(early_b0ref_wf, referencenode, [
+    ("outputnode.dwi_reference", "dwi_reference"),
+    ("outputnode.dwi_mask", "dwi_mask"),
+    ("outputnode.gradients_rasb", "gradients_rasb"),
+])
])

+fmap_estimation_wf = init_fmap_estimation_wf(subject_data["dwi"])
+workflow.connect([
+    (referencenode, fmap_estimation_wf, [
+        ("dwi_reference", "inputnode.dwi_reference"),
+        ("dwi_mask", "inputnode.dwi_mask")]),
+])
return workflow
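
The hunk above combines two nipype constructs: setting `iterables` on `inputnode` fans the graph out into one branch per DWI run, and the `JoinNode` with `joinsource="inputnode"` collapses the per-run results back into lists once every branch has finished. A minimal, self-contained illustration of the pattern with a toy function (not dMRIPrep code):

```python
from nipype.pipeline import engine as pe
from nipype.interfaces import utility as niu


def _mock_reference(dwi_file):
    """Toy stand-in for the per-run b=0 reference extraction."""
    return f"ref_of_{dwi_file}"


wf = pe.Workflow(name="fan_out_fan_in")

# Fan out: the graph is expanded into one branch per value of ``dwi_file``.
inputnode = pe.Node(niu.IdentityInterface(fields=["dwi_file"]), name="inputnode")
inputnode.iterables = [("dwi_file", ["run-1.nii.gz", "run-2.nii.gz"])]

per_run = pe.Node(
    niu.Function(function=_mock_reference, output_names=["dwi_reference"]),
    name="per_run")

# Fan in: the JoinNode collects one ``dwi_reference`` per branch into a list.
joiner = pe.JoinNode(niu.IdentityInterface(fields=["dwi_reference"]),
                     name="joiner", joinsource="inputnode")

wf.connect([
    (inputnode, per_run, [("dwi_file", "dwi_file")]),
    (per_run, joiner, [("dwi_reference", "dwi_reference")]),
])
# wf.run() executes per_run twice and hands joiner the list of both references.
```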


7 changes: 1 addition & 6 deletions dmriprep/workflows/dwi/base.py
@@ -53,11 +53,6 @@ def init_early_b0ref_wf(
"""
# Build workflow
workflow = Workflow(name=name)
-workflow.__postdesc__ = """
-For every run and subject, the gradient table was vetted and converted into the *RASb*
-format (i.e., given in RAS+ scanner coordinates, normalized b-vectors and scaled b-values),
-and a *b=0* average for reference to the subsequent steps of preprocessing was calculated.
-"""

inputnode = pe.Node(niu.IdentityInterface(
fields=['dwi_file', 'in_bvec', 'in_bval']),
@@ -84,7 +79,7 @@ def init_early_b0ref_wf(
(dwi_reference_wf, outputnode, [
('outputnode.ref_image', 'dwi_reference'),
('outputnode.dwi_mask', 'dwi_mask')]),
-(gradient_table, outputnode, [('out_rasb', 'out_rasb')])
+(gradient_table, outputnode, [('out_rasb', 'gradients_rasb')])
])

# REPORTING ############################################################
29 changes: 25 additions & 4 deletions dmriprep/workflows/dwi/util.py
@@ -6,6 +6,7 @@
from niworkflows.engine.workflows import LiterateWorkflow as Workflow
from niworkflows.interfaces.images import ValidateImage
from niworkflows.interfaces.fixes import FixN4BiasFieldCorrection as N4BiasFieldCorrection
+from niworkflows.interfaces.nibabel import ApplyMask
from niworkflows.interfaces.utils import CopyXForm

from ...interfaces.images import ExtractB0, RescaleB0
@@ -212,8 +213,10 @@ def init_enhance_and_skullstrip_dwi_wf(
combine_masks = pe.Node(fsl.BinaryMaths(operation='mul'),
name='combine_masks')

+normalize = pe.Node(niu.Function(function=_normalize), name="normalize")
+
# Compute masked brain
-apply_mask = pe.Node(fsl.ApplyMask(), name='apply_mask')
+apply_mask = pe.Node(ApplyMask(), name='apply_mask')

workflow.connect([
(inputnode, n4_correct, [('in_file', 'input_image'),
@@ -230,11 +233,29 @@
(skullstrip_first_pass, combine_masks, [('mask_file', 'in_file')]),
(skullstrip_second_pass, fixhdr_skullstrip2, [('out_file', 'in_file')]),
(fixhdr_skullstrip2, combine_masks, [('out_file', 'operand_file')]),
-(fixhdr_unifize, apply_mask, [('out_file', 'in_file')]),
-(combine_masks, apply_mask, [('out_file', 'mask_file')]),
+(combine_masks, apply_mask, [('out_file', 'in_mask')]),
(combine_masks, outputnode, [('out_file', 'mask_file')]),
+(n4_correct, normalize, [('output_image', 'in_file')]),
+(normalize, apply_mask, [('out', 'in_file')]),
+(normalize, outputnode, [('out', 'bias_corrected_file')]),
(apply_mask, outputnode, [('out_file', 'skull_stripped_file')]),
-(n4_correct, outputnode, [('output_image', 'bias_corrected_file')]),
])

return workflow


+def _normalize(in_file):
+    from pathlib import Path
+    import numpy as np
+    import nibabel as nb
+
+    nii = nb.load(in_file)
+    data = nii.get_fdata()
+    data[data < 0] = 0
+    if data.max() >= 2**15 - 1:
+        data *= 2000 / np.percentile(data.reshape(-1), 98.0)
+
+    hdr = nii.header.copy()
+    hdr.set_data_dtype('int16')
+    nii.__class__(data.astype('int16'), nii.affine, hdr).to_filename("normalized.nii.gz")
+    return Path("normalized.nii.gz").absolute()
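
The new `_normalize` helper clips negative intensities and, whenever the data would overflow the `int16` range, rescales the volume so that its 98th percentile maps to 2000 before casting. A quick synthetic check of that arithmetic (toy data, not part of the commit):

```python
import numpy as np

data = np.linspace(0.0, 40000.0, num=1000)  # synthetic intensities exceeding int16
assert data.max() >= 2**15 - 1              # would trigger the rescaling branch
scaled = data * 2000 / np.percentile(data, 98.0)
print(round(np.percentile(scaled, 98.0)))   # -> 2000
print(scaled.max() < 2**15 - 1)             # -> True for these data
```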
Empty file.
