A basic configuration comes with the pipeline and runs by default (the `standard` config profile - see `conf/base.config`). This means that you only need to configure the specifics for your system and override any defaults that you want to change.
If you think that there are other people using the pipeline who would benefit from your configuration (e.g. other common cluster setups), please let us know. We can add a new configuration profile which can be used by specifying `-profile <name>` when running the pipeline.
To run the pipeline on your own dataset, you will need to create your own config file, e.g. `your_project.config`, and supply it each time you run Nextflow.
When running the pipeline with the `test` profile, the test configuration file `test.config` will be copied into your output folder (`./output`). Simply update `test.config` with parameters pertaining to your data, save the file anywhere, and reference it when running the pipeline with `-c path/to/test.config` (see the Nextflow documentation for more).
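Assuming the pipeline is launched by repository name (the `h3abionet/chipimputation` name below is illustrative - substitute however you normally invoke the pipeline), the workflow described above might look like:

```bash
# Run the test profile once to obtain a copy of test.config in ./output
nextflow run h3abionet/chipimputation -profile test,docker

# Edit the copied file for your own data, then re-run the pipeline with it
nextflow run h3abionet/chipimputation -c path/to/test.config -profile docker
```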
Use this parameter to choose a configuration profile. Profiles provide configuration presets for different compute environments. Note that multiple profiles can be loaded, for example: `-profile standard,docker` - the order of arguments is important!
`standard`
- The default profile, used if `-profile` is not specified at all.
- Runs locally and expects all software to be installed and available on the `PATH`.
`singularity`
- A generic configuration profile to be used with Singularity
- Pulls the container `quay.io/h3abionet_org/imputation_tools` from http://quay.io/h3abionet_org
`docker`
- A generic configuration profile to be used with Docker
- Pulls the container `quay.io/h3abionet_org/imputation_tools` from http://quay.io/h3abionet_org
`conda`
- A generic configuration profile to be used with Conda
`test`
- A profile with a complete configuration for automated testing
- Includes links to test data, so no other parameters are required
- Copies the test configuration file into your current directory
Use this to specify the location of your input target dataset files in VCF format. Multiple target datasets can be specified in `target_datasets` in the format `name = dataset`; each target dataset will be processed separately.
The syntax for this is:

```nextflow
target_datasets {
    Study_name1 = "https://github.com/h3abionet/chipimputation_test_data/raw/master/testdata_imputation/target_testdata.vcf.gz"
}
```
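Multiple datasets can be listed in the same block; the second entry below is a hypothetical local file, shown only to illustrate the syntax:

```nextflow
target_datasets {
    Study_name1 = "https://github.com/h3abionet/chipimputation_test_data/raw/master/testdata_imputation/target_testdata.vcf.gz"
    Study_name2 = "/path/to/my_study.vcf.gz"  // hypothetical local dataset
}
```

Each named entry will be imputed separately.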
Test data is provided at https://github.com/h3abionet/chipimputation_test_data/raw/master/testdata_imputation/target_testdata.vcf.gz, and can be used for testing only.
Please note the following requirements:
- This is required by the pipeline
- The path must be enclosed in quotes and must exist, otherwise the pipeline will stop.
- The VCF file can contain a single or multiple chromosomes
The pipeline uses minimac4 to impute genotypes. Reference panels are therefore expected in the M3VCF format generated by minimac3.
You need to specify both `VCF` and `M3VCF` files for `vcfFile` and `m3vcfFile` respectively in the configuration file before you launch the pipeline. A normal glob pattern, enclosed in quotation marks, can then be used.
The syntax for this :
ref_panels {
RefPanel_name1 {
m3vcfFile = "refPanel_testdata_22_phased.m3vcf.gz"
vcfFile = "refPanel_testdata_22_phased.vcf.gz"
}
}
Test data is provided in the https://github.com/h3abionet/chipimputation_test_data repo:
- M3VCF: https://github.com/h3abionet/chipimputation_test_data/raw/master/testdata_imputation/refPanel_testdata_22_phased.m3vcf.gz
- VCF: https://github.com/h3abionet/chipimputation_test_data/raw/master/testdata_imputation/refPanel_testdata_22_phased.vcf.gz
Please note the following requirements:
- Both `VCF` and `M3VCF` files must be split by chromosome. String interpolation of `%s` will be used to substitute the chromosome
- The `VCF` files will be used during the phasing step by `eagle2` and the allele frequency comparison step by `bcftools`
- The `M3VCF` files will be used during the imputation step by `minimac4`
- The path must be enclosed in quotes and must exist, otherwise the pipeline will stop
- Must be of the same build as the target dataset
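Since `%s` is substituted with the chromosome at runtime, a per-chromosome panel might be declared as follows (the panel name and file paths are illustrative):

```nextflow
ref_panels {
    KGP_phase3 {
        // %s is replaced by the chromosome when the pipeline runs
        m3vcfFile = "/data/refs/1000GP_chr%s.m3vcf.gz"
        vcfFile = "/data/refs/1000GP_chr%s.vcf.gz"
    }
}
```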
Some commonly used reference panels are available for download from the minimac3 website, including 1000 Genomes Phase 1 (version 3) and 1000 Genomes Phase 3 (version 5).
To generate your own `M3VCF` files from `VCF` files using `minimac3`, please follow the instructions described at https://genome.sph.umich.edu/wiki/Minimac3_Examples:
```bash
Minimac3 --refHaps refPanel.vcf \
         --processReference \
         --rounds 0 \
         --prefix testRun
```
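Because the pipeline expects the reference panel to be split by chromosome, the same command can be wrapped in a loop; the file names below are illustrative and assume one input VCF per chromosome:

```bash
# Convert each per-chromosome VCF into the matching M3VCF
for chr in $(seq 1 22); do
  Minimac3 --refHaps refPanel_chr${chr}.vcf \
           --processReference \
           --rounds 0 \
           --prefix refPanel_chr${chr}
done
```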
A human reference genome in `fasta` format, of the same build as the target dataset, is required by the pipeline during the QC step to check for REF allele mismatches in the target dataset. This can be downloaded from the GATK bundle resource, including the commonly used `human_g1k_v37` genome.
A test fasta file that can be used with the test dataset is provided at https://github.com/h3abionet/chipimputation_test_data/raw/master/testdata_imputation/hg19_testdata.fasta.gz
The syntax for this is:

```nextflow
reference_genome = "hg19_testdata.fasta.gz"
```
A genetic map file is required during the phasing step. Full genetic maps can be downloaded from http://data.broadinstitute.org/alkesgroup/Eagle/downloads/tables/.
Each step in the pipeline has a default set of requirements for number of CPUs, memory and time. For most of the steps in the pipeline, if the job exits with an error code of 143
(exceeded requested resources) it will automatically resubmit with higher requests (2 x original, then 3 x original). If it still fails after three times then the pipeline is stopped.
Wherever process-specific requirements are set in the pipeline, the default value can be changed by creating a custom config file. See the files in conf
for examples.
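As a sketch, a custom config passed with `-c` could raise the resources of a single process. The process name `impute_minimac4` below is an assumption for illustration - check the actual process names in the `conf` files or the pipeline script before using it:

```nextflow
process {
    withName: impute_minimac4 {
        // Override the defaults for this process only
        memory = 16.GB
        cpus = 4
        time = 12.h
    }
}
```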
Please make sure to also set the `-w/--work-dir` and `--outdir` parameters to an S3 storage bucket of your choice - you'll get an error message notifying you if you didn't.
The output directory where the results will be saved.
Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. If set in your user config file (`~/.nextflow/config`) then you don't need to specify this on the command line for every run.
Name for the pipeline run. If not specified, Nextflow will automatically generate a random mnemonic.
NB: Single hyphen (core Nextflow option)
Specify this when restarting a pipeline. Nextflow will use cached results from any pipeline steps where the inputs are the same, continuing from where it got to previously.
You can also supply a run name to resume a specific run: -resume [run-name]
. Use the nextflow log
command to show previous run names.
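For instance (the run name `boring_euler` below is illustrative - use a name reported by `nextflow log`):

```bash
# List previous runs and their mnemonic names
nextflow log

# Resume the latest run, or a specific run by name
nextflow run h3abionet/chipimputation -resume
nextflow run h3abionet/chipimputation -resume boring_euler
```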
NB: Single hyphen (core Nextflow option)
Specify the path to a specific config file (this is a core Nextflow option).
NB: Single hyphen (core Nextflow option)
Note - you can use this to override defaults. For example, you can specify a config file using -c
that contains the following:
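For example, a config supplied with `-c` might override the default resource ceilings; the values shown are purely illustrative:

```nextflow
params {
    max_memory = 32.GB
    max_cpus = 8
    max_time = 24.h
}
```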
Use to set a top-limit for the default memory requirement for each process. Should be a string in the format integer-unit, e.g. `--max_memory '8.GB'`.
Use to set a top-limit for the default time requirement for each process. Should be a string in the format integer-unit, e.g. `--max_time '2.h'`.
Use to set a top-limit for the default CPU requirement for each process. Should be an integer, e.g. `--max_cpus 1`.
Set to receive plain-text e-mails instead of HTML formatted.
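The three limits can also be combined in a single invocation:

```bash
nextflow run h3abionet/chipimputation --max_memory '8.GB' --max_time '2.h' --max_cpus 1
```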