diff --git a/sites/docs/src/content/docs/tutorials/external_usage/nf-core_configs_outside_nf-core.md b/sites/docs/src/content/docs/tutorials/external_usage/nf-core_configs_outside_nf-core.md index e318ba0dd5..7e2cbc0dc2 100644 --- a/sites/docs/src/content/docs/tutorials/external_usage/nf-core_configs_outside_nf-core.md +++ b/sites/docs/src/content/docs/tutorials/external_usage/nf-core_configs_outside_nf-core.md @@ -7,84 +7,128 @@ When you've been working with nf-core pipelines for a while, you often will get One such thing is the centralised nf-core/configs repository of pre-configured configuration files that allow nf-core pipelines to run optimally on institutional clusters via the `-profile` parameter, e.g. `-profile uppmax`. A list of existing institutional profiles can be seen on the [nf-core website](https://nf-co.re/configs). +:::tip +If you want to write your own institutional profile, see the [guide on how to write a new institutional profile](/docs/tutorials/use_nf-core_pipelines/writing_institutional_profiles). +::: + One great thing about nf-core/configs is that they aren't just restricted to nf-core pipelines, they can also be used in [fully fledged 'unofficial' nf-core pipelines](/docs/guidelines/external_use) but also in your own custom mini-scripts and pipelines! -Here we will describe the four things you will need to do in your custom script or pipeline to use nf-core institutional configs. - -1. Set default basic resources for all processes, e.g. in a `conf/base.config` - -At the bare minimum, the file should contain: - -```groovy -process { - cpus = { check_max( 1 * task.attempt, 'cpus' ) } - memory = { check_max( 7.GB * task.attempt, 'memory' ) } - time = { check_max( 4.h * task.attempt, 'time' ) } -} -``` - -(or other sensible default values) - -For a more sophisticated `base.config`, see the full [nf-core template](https://github.com/nf-core/tools/blob/master/nf_core/pipeline-template/conf/base.config) - -2. 
Load the base config in the `nextflow.config` - -The following should be placed outside all scopes: - -```groovy -includeConfig 'conf/base.config' -``` - -3. Load nf-core's institutional profile repository in the pipeline's `nextflow.config` - -```groovy -// Load nf-core custom profiles from different Institutions -try { - includeConfig "https://raw.githubusercontent.com/nf-core/configs/master/nfcore_custom.config" -} catch (Exception e) { - System.err.println("WARNING: Could not load nf-core/config profiles: https://raw.githubusercontent.com/nf-core/configs/master/nfcore_custom.config") -} -``` - -Note nf-core pipelines use a [parameter for the URL](https://github.com/nf-core/tools/blob/0912990a63ef29e44e07cc2ba6ab81113684e0ae/nf_core/pipeline-template/nextflow.config#L67-L72) to allow debugging. - -4. Finally Add the the [`check_max`](https://github.com/nf-core/tools/blob/0912990a63ef29e44e07cc2ba6ab81113684e0ae/nf_core/pipeline-template/nextflow.config#L233-L264) function to `nextflow.config` - -The following should be placed outside all scopes, typically at the bottom of the file: - -```groovy -// Function to ensure that resource requirements don't go beyond -// a maximum limit -def check_max(obj, type) { - if (type == 'memory') { - try { - if (obj.compareTo(params.max_memory as nextflow.util.MemoryUnit) == 1) - return params.max_memory as nextflow.util.MemoryUnit - else - return obj - } catch (all) { - println " ### ERROR ### Max memory '${params.max_memory}' is not valid! Using default value: $obj" - return obj - } - } else if (type == 'time') { - try { - if (obj.compareTo(params.max_time as nextflow.util.Duration) == 1) - return params.max_time as nextflow.util.Duration - else - return obj - } catch (all) { - println " ### ERROR ### Max time '${params.max_time}' is not valid! 
Using default value: $obj"
-            return obj
-        }
-    } else if (type == 'cpus') {
-        try {
-            return Math.min( obj, params.max_cpus as int )
-        } catch (all) {
-            println " ### ERROR ### Max cpus '${params.max_cpus}' is not valid! Using default value: $obj"
-            return obj
-        }
-    }
-}
-```
+Here we will describe the steps you will need to perform in your custom script or pipeline to use nf-core institutional configs.
+
+1. In a `conf/base.config` file (the file can also be named something else), set default basic resources for all processes.
+
+   At the bare minimum, the file should contain:
+
+   ```groovy
+   process {
+       cpus = 1
+       memory = 7.GB
+       time = 4.h
+   }
+   ```
+
+   (or other sensible default values)
+
+   :::note{collapse title="Note on older nf-core template/Nextflow versions"}
+
+   When running pipelines generated with the nf-core template before version v3.0.0 or with Nextflow before version 24.04.0, you may need to use an older syntax for setting resources. The following closures prevent resources from exceeding a maximum limit.
+
+   Set the default values in `conf/base.config`:
+
+   ```groovy
+   process {
+       cpus = { check_max( 1 * task.attempt, 'cpus' ) }
+       memory = { check_max( 7.GB * task.attempt, 'memory' ) }
+       time = { check_max( 4.h * task.attempt, 'time' ) }
+   }
+   ```
+
+   Then add the [`check_max`](https://github.com/nf-core/tools/blob/0912990a63ef29e44e07cc2ba6ab81113684e0ae/nf_core/pipeline-template/nextflow.config#L233-L264) function to `nextflow.config`.
+
+   The following should be placed outside all scopes, typically at the bottom of the file:
+
+   ```groovy
+   // Function to ensure that resource requirements don't go beyond
+   // a maximum limit
+   def check_max(obj, type) {
+       if (type == 'memory') {
+           try {
+               if (obj.compareTo(params.max_memory as nextflow.util.MemoryUnit) == 1)
+                   return params.max_memory as nextflow.util.MemoryUnit
+               else
+                   return obj
+           } catch (all) {
+               println " ### ERROR ### Max memory '${params.max_memory}' is not valid! Using default value: $obj"
+               return obj
+           }
+       } else if (type == 'time') {
+           try {
+               if (obj.compareTo(params.max_time as nextflow.util.Duration) == 1)
+                   return params.max_time as nextflow.util.Duration
+               else
+                   return obj
+           } catch (all) {
+               println " ### ERROR ### Max time '${params.max_time}' is not valid! Using default value: $obj"
+               return obj
+           }
+       } else if (type == 'cpus') {
+           try {
+               return Math.min( obj, params.max_cpus as int )
+           } catch (all) {
+               println " ### ERROR ### Max cpus '${params.max_cpus}' is not valid! Using default value: $obj"
+               return obj
+           }
+       }
+   }
+   ```
+
+   :::
+
+   For a more sophisticated `base.config`, see the full [nf-core template](https://github.com/nf-core/tools/blob/master/nf_core/pipeline-template/conf/base.config).
+
+2. In the top level `nextflow.config`, specify two `params` that define the URL from which to load nf-core/configs.
+
+   ```groovy
+   params {
+       custom_config_version = 'master'
+       custom_config_base = "https://raw.githubusercontent.com/nf-core/configs/${params.custom_config_version}"
+   }
+   ```
+
+   Together, the two parameters make it easy to test configs from forks and development profiles on specific branches.
+
+3. In the top level `nextflow.config`, load the newly made base config.
+
+   The following should be placed outside all scopes:
+
+   ```groovy
+   includeConfig 'conf/base.config'
+   ```
+
+4. In the top level `nextflow.config`, load nf-core's institutional profile repository based on the new `params`. This should be placed _after_ the `conf/base.config` include.
+
+   ```groovy
+   includeConfig !System.getenv('NXF_OFFLINE') && params.custom_config_base ? "${params.custom_config_base}/nfcore_custom.config" : "/dev/null"
+   ```
+
+   Note that instead of the parameters, you can just directly specify a URL, e.g.
+
+   ```groovy
+   includeConfig !System.getenv('NXF_OFFLINE') ? 
'https://raw.githubusercontent.com/nf-core/configs/master/nfcore_custom.config' : "/dev/null"
+   ```
+
+   :::note{collapse title="Note on older nf-core template/Nextflow versions"}
+   If you wish to use pipelines generated with the nf-core template before `v3.0.0`, and/or when running with Nextflow versions earlier than 24.04.0, you may need to use an older syntax for loading the configs:
+
+   ```groovy
+   // Load nf-core custom profiles from different Institutions
+   try {
+       includeConfig "https://raw.githubusercontent.com/nf-core/configs/master/nfcore_custom.config"
+   } catch (Exception e) {
+       System.err.println("WARNING: Could not load nf-core/config profiles: https://raw.githubusercontent.com/nf-core/configs/master/nfcore_custom.config")
+   }
+   ```
+
+   :::

With this, you should be able to run `nextflow run main.nf -profile `, and your custom script/pipeline should integrate nicely with your cluster!
diff --git a/sites/docs/src/content/docs/tutorials/use_nf-core_pipelines/config_institutional_profile.md b/sites/docs/src/content/docs/tutorials/use_nf-core_pipelines/writing_institutional_profiles.md
similarity index 91%
rename from sites/docs/src/content/docs/tutorials/use_nf-core_pipelines/config_institutional_profile.md
rename to sites/docs/src/content/docs/tutorials/use_nf-core_pipelines/writing_institutional_profiles.md
index 36ae4423f6..846812961d 100644
--- a/sites/docs/src/content/docs/tutorials/use_nf-core_pipelines/config_institutional_profile.md
+++ b/sites/docs/src/content/docs/tutorials/use_nf-core_pipelines/writing_institutional_profiles.md
@@ -10,6 +10,8 @@ parentWeight: 5
👨‍👩‍👧 What this means is that you can specify common Nextflow pipeline configurations and options that can be shared across all users of your particular institutional cluster.
+💪 These configuration profiles can be used [outside of nf-core](/docs/tutorials/external_usage/nf-core_configs_outside_nf-core) in any Nextflow pipeline!
+
nf-core offers two levels of profile sharing: global institutional and pipeline institutional profiles via [nf-core/configs](https://github.com/nf-core/configs)

- **Global** institutional profiles represent configuration options that apply to users of _all_ nf-core pipelines. These typically define settings regarding the cluster itself, such as the type of scheduler being used, maximum resource limits and so on.
@@ -183,22 +185,23 @@ params {
Note that for the `config_profile_contact`, it is best to indicate a specific person. This will typically be someone who wrote the config (via their name & github handle) or whoever will maintain it at the institution (e.g. email of IT Department, Institution X), i.e. someone who can be contacted if there are questions or problems and how to contact them.
-Next, in the same scope, we can also specify the `max_*` series of params.
-
-These are used by nf-core pipelines to limit automatic resubmission of resource-related failed jobs to ensure submitted retries do not exceed the maximum available on your cluster. These values should be the ones you found for the largest node of your cluster (i.e., the largest node a user's job can be submitted to). For example:
+Finally, if you have a common resource directory for the AWS `iGenomes` collection of reference genomes, this can also go in the `params` scope.

```nextflow
params {
    config_profile_description = ' cluster profile provided by nf-core/configs.'
    config_profile_contact = ' ()'
    config_profile_url = 'https://.com'
-   max_memory = 2.TB
-   max_cpus = 128
-   max_time = 720.h
+   igenomes_base = '///igenomes/'
}
```

-Finally, if you have a common resource directory for the AWS `iGenomes` collection of reference genomes, this can can also go in the `params` scope.
+:::note{collapse title="Note on older nf-core pipelines"}
+
+If using older pipelines, you will also need to specify the `max_*` series of params (now not used in the pipeline template).
+ +These are used by nf-core pipelines to limit the automatic resubmission of resource-related failed jobs to ensure submitted retries do not exceed the maximum available on your cluster. +These values should be the ones you found for the largest node of your cluster (i.e., the largest node a user's job can be submitted to). For example: ```nextflow params { @@ -208,13 +211,31 @@ params { max_memory = 2.TB max_cpus = 128 max_time = 720.h - igenomes_base = '///igenomes/' } ``` +::: + #### process scope -Next, we can use the `process` scope to define which scheduler to use and associated options. Any option specified in this scope means that all processes in a pipeline will use the settings defined here. +Next, we can use the `process` scope to define certain resource specifications, schedulers to use, and associated options. Any option specified in this scope means that all processes in a pipeline will use the settings defined here. + +First, you should specify a `resourceLimits` list based on the maximum resources available on your machine or infrastructure. + +These values are used by nf-core pipelines to limit the automatic resubmission of resource-related failed jobs to ensure submitted retries do not exceed the maximum available on your cluster. +These values should be the ones you found for the largest node of your cluster (i.e., the largest node to which a user's job can be submitted). + +For example: + +```nextflow +process { + resourceLimits = [ + cpus: 128, + memory: 2.TB, + time: 720.h + ] +} +``` Normally, you only need to specify which scheduler you use. For example, if using SLURM ๐Ÿ›: @@ -223,16 +244,17 @@ params { config_profile_description = ' cluster profile provided by nf-core/configs.' 
config_profile_contact = ' ()' config_profile_url = 'https://.com' - max_memory = 2.TB - max_cpus = 128 - max_time = 720.h igenomes_base = '///igenomes/' } process { + resourceLimits = [ + cpus: 128, + memory: 2.TB, + time: 720.h + ] executor = 'slurm' } - ``` If you need to specify more cluster-specific information regarding your cluster, this can also go here. @@ -244,13 +266,15 @@ params { config_profile_description = ' cluster profile provided by nf-core/configs.' config_profile_contact = ' ()' config_profile_url = 'https://.com' - max_memory = 2.TB - max_cpus = 128 - max_time = 720.h igenomes_base = '///igenomes/' } process { + resourceLimits = [ + cpus: 128, + memory: 2.TB, + time: 720.h + ] executor = 'slurm' queue = 'all' } @@ -265,13 +289,15 @@ params { config_profile_description = ' cluster profile provided by nf-core/configs.' config_profile_contact = ' ()' config_profile_url = 'https://.com' - max_memory = 2.TB - max_cpus = 128 - max_time = 720.h igenomes_base = '///igenomes/' } process { + resourceLimits = [ + cpus: 128, + memory: 2.TB, + time: 720.h + ] executor = 'slurm' queue = { task.time <= 2.h ? 'short' : task.time <= 24.h ? 'medium': 'long' } } @@ -285,13 +311,15 @@ params { config_profile_description = ' cluster profile provided by nf-core/configs.' config_profile_contact = ' ()' config_profile_url = 'https://.com' - max_memory = 2.TB - max_cpus = 128 - max_time = 720.h igenomes_base = '///igenomes/' } process { + resourceLimits = [ + cpus: 128, + memory: 2.TB, + time: 720.h + ] executor = 'slurm' queue = { task.cpus > 24 ? 'big' : 'small' } } @@ -307,13 +335,15 @@ params { config_profile_description = ' cluster profile provided by nf-core/configs.' config_profile_contact = ' ()' config_profile_url = 'https://.com' - max_memory = 2.TB - max_cpus = 128 - max_time = 720.h igenomes_base = '///igenomes/' } process { + resourceLimits = [ + cpus: 128, + memory: 2.TB, + time: 720.h + ] executor = 'slurm' queue = { task.cpus > 24 ? 
'big' : 'small' } maxRetries = 2 @@ -332,13 +362,15 @@ If you normally need to specify additional 'non-standard' options in the headers > config_profile_description = ' cluster profile provided by nf-core/configs.' > config_profile_contact = ' ()' > config_profile_url = 'https://.com' -> max_memory = 2.TB -> max_cpus = 128 -> max_time = 720.h > igenomes_base = '///igenomes/' > } > > process { +> resourceLimits = [ +> cpus: 128, +> memory: 2.TB, +> time: 720.h +> ] > executor = 'sge' > queue = { task.cpus > 24 ? 'big' : 'small' } > maxRetries = 2 @@ -357,13 +389,15 @@ params { config_profile_description = ' cluster profile provided by nf-core/configs.' config_profile_contact = ' ()' config_profile_url = 'https://.com' - max_memory = 2.TB - max_cpus = 128 - max_time = 720.h igenomes_base = '///igenomes/' } process { + resourceLimits = [ + cpus: 128, + memory: 2.TB, + time: 720.h + ] executor = 'sge' queue = { task.cpus > 24 ? 'big' : 'small' } maxRetries = 2 @@ -385,13 +419,15 @@ params { config_profile_description = ' cluster profile provided by nf-core/configs.' config_profile_contact = ' ()' config_profile_url = 'https://.com' - max_memory = 2.TB - max_cpus = 128 - max_time = 720.h igenomes_base = '///igenomes/' } process { + resourceLimits = [ + cpus: 128, + memory: 2.TB, + time: 720.h + ] executor = 'sge' queue = { task.cpus > 24 ? 'big' : 'small' } maxRetries = 2 @@ -413,13 +449,15 @@ params { config_profile_description = ' cluster profile provided by nf-core/configs.' config_profile_contact = ' ()' config_profile_url = 'https://.com' - max_memory = 2.TB - max_cpus = 128 - max_time = 720.h igenomes_base = '///igenomes/' } process { + resourceLimits = [ + cpus: 128, + memory: 2.TB, + time: 720.h + ] executor = 'sge' queue = { task.cpus > 24 ? 'big' : 'small' } maxRetries = 2 @@ -446,13 +484,15 @@ params { config_profile_description = ' cluster profile provided by nf-core/configs.' 
config_profile_contact = ' ()'
config_profile_url = 'https://.com'
-   max_memory = 2.TB
-   max_cpus = 128
-   max_time = 720.h
igenomes_base = '///igenomes/'
}

process {
+   resourceLimits = [
+       cpus: 128,
+       memory: 2.TB,
+       time: 720.h
+   ]
executor = 'sge'
queue = { task.cpus > 24 ? 'big' : 'small' }
maxRetries = 2
@@ -486,6 +526,11 @@ Using our example above, maybe our institution has two clusters named red and bl

```nextflow
process {
+   resourceLimits = [
+       cpus: 128,
+       memory: 2.TB,
+       time: 720.h
+   ]
executor = 'sge'
queue = { task.cpus > 24 ? 'big' : 'small' }
maxRetries = 2
@@ -506,6 +551,14 @@ singularity {

profiles {
    red {
+       process {
+           resourceLimits = [
+               cpus: 128,
+               memory: 2.TB,
+               time: 720.h
+           ]
+       }
+
        params {
            config_profile_description = ' 'red' cluster profile provided by nf-core/configs.'
            config_profile_contact = ' ()'
@@ -518,20 +571,25 @@ profiles {
    }

    blue {
+       process {
+           resourceLimits = [
+               cpus: 64,
+               memory: 256.GB,
+               time: 24.h
+           ]
+       }
+
        params {
            config_profile_description = ' 'blue' cluster profile provided by nf-core/configs.'
            config_profile_contact = ' ()'
            config_profile_url = 'https://.com'
-           max_memory = 256.GB
-           max_cpus = 64
-           max_time = 24.h
            igenomes_base = '///igenomes/'
        }
    }
}
```

-You can see here we have moved the `params` block into each of the _internal profiles_, and updated the `config_profile_description` and `max_*` parameters accordingly.
+You can see here we have moved the `process` and `params` blocks into each of the _internal profiles_, and updated the `resourceLimits` and `config_profile_description` settings accordingly.

:::warning
Important: you should **not** define scopes both in the global profile AND in the internal profile. Internal profiles do _not_ inherit directives/settings defined in scopes in the base config, so anything defined in the base global profile file will be _ignored_ in the internal profile.
See the [Nextflow documentation](https://www.nextflow.io/docs/latest/config.html#config-profiles) for more information.
diff --git a/sites/docs/src/content/docs/usage/Getting_started/configuration.md b/sites/docs/src/content/docs/usage/Getting_started/configuration.md
index 3609d24309..df776688a8 100644
--- a/sites/docs/src/content/docs/usage/Getting_started/configuration.md
+++ b/sites/docs/src/content/docs/usage/Getting_started/configuration.md
@@ -112,18 +112,42 @@ For more information on how to specify an executor, please refer to the [Nextflo
In addition to the executor, you may find that pipeline runs occasionally fail due to a particular step of the pipeline requesting more resources than you have on your system.
+To avoid these failures, you can tell Nextflow to cap pipeline-step resource requests using a list called `resourceLimits`, specified in a Nextflow config file. These values should represent the maximum possible resources of a machine or node.
+
+For example, you can place the following into a file:
+
+```groovy
+process {
+    resourceLimits = [
+        cpus: 32,
+        memory: 256.GB,
+        time: 24.h
+    ]
+}
+```
+
+Supply this file in your pipeline run command with `-c .config`. Then, during a pipeline run, if, for example, a job exceeds the default memory request, it will be retried, increasing the memory each time until either the job completes or until it reaches a request of `256.GB`.
+
+This setting therefore only acts as a _cap_, to prevent Nextflow from submitting a single job requesting more resources than are available on your system and to keep retried requests from getting out of hand.
+
+Note that specifying these values will not _increase_ the resources available to the pipeline tasks! See [Tuning workflow resources](#tuning-workflow-resources) for more information.
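+To make the capping behaviour concrete, here is a sketch combining `resourceLimits` with a retried process (the process name and starting request below are illustrative, not from any specific pipeline):
+
+```groovy
+process {
+    resourceLimits = [
+        cpus: 32,
+        memory: 256.GB,
+        time: 24.h
+    ]
+    // Hypothetical task: requests 64.GB on the first attempt, 128.GB on the
+    // second, 192.GB on the third, and is always capped at the 256.GB limit.
+    withName: 'EXAMPLE_TASK' {
+        memory = { 64.GB * task.attempt }
+    }
+}
+```
+
+Because `resourceLimits` is an ordinary process setting, it can live in the same custom config file as any `withName`/`withLabel` overrides you already use.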
:::note{collapse title="Note on older nf-core pipelines"}
+
+If you wish to use pipelines generated with the nf-core template before `v3.0.0`, and/or when running with Nextflow versions earlier than 24.04.0, you may need to use a different syntax to prevent the resources from exceeding a maximum limit.
+
+In addition to the executor, you may find that pipeline runs occasionally fail due to a particular step of the pipeline requesting more resources than you have on your system.
+
To avoid these failures, all nf-core pipelines [check](https://github.com/nf-core/tools/blob/99961bedab1518f592668727a4d692c4ddf3c336/nf_core/pipeline-template/nextflow.config#L206-L237) pipeline-step resource requests against parameters called `--max_cpus`, `--max_memory` and `--max_time`. These should represent the maximum possible resources of a machine or node.
These parameters only act as a _cap_, to prevent Nextflow submitting a single job requesting resources more than what is possible on your system.
-:::warning
Increasing these values from the defaults will not _increase_ the resources available to the pipeline tasks! See [Tuning workflow resources](#tuning-workflow-resources) for this.
-:::
Most pipelines will attempt to automatically restart jobs that fail due to lack of resources with double-requests, these caps keep those requests from getting out of hand and crashing the entire pipeline run.
If a particular job exceeds the process-specific default resources and is retried, only resource requests (cpu, memory, or time) that have not yet reached the value set with `--max_` will be increased during the retry.
-:::warning
Setting the `--max_` parameters do not represent the total sum of resource usage of the pipeline at a given time - only a single pipeline job!
+ :::

## Tuning workflow resources
@@ -138,7 +162,38 @@ Where possible we try to get tools to make use of the resources available, for e
To tune workflow resources to better match your requirements, we can tweak these through [custom configuration files](#custom-configuration-files) or [shared nf-core/configs](#shared-nf-coreconfigs).
-By default, most process resources are specified using process _labels_, for example with the following base config:
+By default, most process resources are specified using process _labels_, as in the following example base config:
+
+```groovy
+process {
+    resourceLimits = [
+        cpus: 32,
+        memory: 256.GB,
+        time: 24.h
+    ]
+    withLabel:process_low {
+        cpus = { 2 * task.attempt }
+        memory = { 14.GB * task.attempt }
+        time = { 6.h * task.attempt }
+    }
+    withLabel:process_medium {
+        cpus = { 6 * task.attempt }
+        memory = { 42.GB * task.attempt }
+        time = { 8.h * task.attempt }
+    }
+    withLabel:process_high {
+        cpus = { 12 * task.attempt }
+        memory = { 84.GB * task.attempt }
+        time = { 10.h * task.attempt }
+    }
+}
+```
+
+The `resourceLimits` list sets the absolute maximums any pipeline job can request (typically corresponding to the maximum available resources on your machine). The label blocks indicate the initial 'default' resources a pipeline job will request. For most nf-core pipelines, if a job runs out of memory, the pipeline will retry that job, increasing the amount of resources requested up to the `resourceLimits` maximum.
+
+:::note{collapse title="Note on older nf-core pipelines"}
+
+If you wish to use pipelines generated with the nf-core template before `v3.0.0`, and/or when running with Nextflow versions earlier than 24.04.0, you may need to use a different syntax to prevent the resources from exceeding a maximum limit.

```groovy
process {
@@ -164,15 +219,15 @@ process {
- If you want to use `check_max()` in a **custom config** file, you must copy the function to the end of your config _outside_ of any configuration scopes!
It will _not_ be inherited from `base.config`.
- The `* task.attempt` means that these values are doubled if a process is automatically retried after failing with an exit code that corresponds to a lack of resources.

-:::warning
If you want to use the `check_max()` function in your custom config, you must copy the function in the link above to the bottom of your custom config
+ :::

:::warning
You don't need to copy all of the labels into your own custom config file, only overwrite the things you wish to change
:::

-If you want to give more memory to _all_ large tasks across most nf-core pipelines, would would specify in a custom config file:
+If you want to give a hard-coded memory value to _all_ large tasks across most nf-core pipelines (without increases during retries), we would specify in a custom config file:

```groovy
process {
@@ -182,12 +237,12 @@
}
```

-You can be more specific than this by targeting a given process name instead of it's label using `withName`. You can see the process names in your console log when the pipeline is running For example:
+You can also be more specific than this by targeting a given _process_ (job) name instead of its label using `withName`. You can see the process names in your console log when the pipeline is running. For example:

```groovy
process {
    withName: STAR_ALIGN {
-        cpus = 32
+        cpus = { 32 * task.attempt }
    }
}
```
diff --git a/sites/docs/src/content/docs/usage/troubleshooting/overview.md b/sites/docs/src/content/docs/usage/troubleshooting/overview.md
index 350ee49f41..50d750306a 100644
--- a/sites/docs/src/content/docs/usage/troubleshooting/overview.md
+++ b/sites/docs/src/content/docs/usage/troubleshooting/overview.md
@@ -17,6 +17,7 @@ parentWeight: 20
1. [Unable to acquire lock error](/docs/usage/troubleshooting/aquire_lock_error)
1. [Docker permission errors](/docs/usage/troubleshooting/docker_permissions)
1. [IPv6 network errors](/docs/usage/troubleshooting/ipv6)
+1. [Processes are retrying](/docs/usage/troubleshooting/retries)

## How to use these pages
diff --git a/sites/docs/src/content/docs/usage/troubleshooting/retries.md b/sites/docs/src/content/docs/usage/troubleshooting/retries.md
new file mode 100644
index 0000000000..aa8ff6f615
--- /dev/null
+++ b/sites/docs/src/content/docs/usage/troubleshooting/retries.md
@@ -0,0 +1,15 @@
+## Processes are retrying
+
+### Why did processes report an error but then retry?
+
+One of the nice things about Nextflow is that it offers the ability to retry processes if they encounter an error and fail with certain bash [exit statuses](https://en.wikipedia.org/wiki/Exit_status) or codes.
+Some of these errors have common causes, which allows us to provide solutions to these problems.
+
+A common issue is a tool requiring more memory than is (initially) made available to it based on the default memory specifications set in the pipeline.
+Such errors (out of memory, or OOM) are often identified by exit code `104` or codes falling between `130` and `145`.
+
+Therefore all nf-core pipelines [by default will retry](https://github.com/nf-core/tools/blob/930ece572bf23b68c7a7c5259e918a878ba6499e/nf_core/pipeline-template/conf/base.config#L18) a process if it hits one of those exit codes, while requesting more resources (memory, CPUs, and time) for the re-submitted job.
+
+All other exit codes will cause the pipeline to fail immediately, and will not be retried.
+
+However, some pipelines may extend this list or provide different retry conditions based on the behaviour of the specific tools used in the pipeline.
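+The retry behaviour described above boils down to an `errorStrategy` closure in the pipeline's `base.config` (see the link above). A minimal sketch, with the exit-code list treated as illustrative of the nf-core template defaults, looks like:
+
+```groovy
+process {
+    // Retry on resource-related exit codes (104 and 130-145);
+    // for any other failure, finish pending tasks and stop the run
+    errorStrategy = { task.exitStatus in ((130..145) + 104) ? 'retry' : 'finish' }
+    maxRetries    = 1
+    maxErrors     = '-1'
+}
+```
+
+Individual pipelines can override this closure per process, which is how the extended retry conditions mentioned above are implemented.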
diff --git a/sites/main-site/public/_redirects b/sites/main-site/public/_redirects
index 970d2fee79..36d889a01b 100644
--- a/sites/main-site/public/_redirects
+++ b/sites/main-site/public/_redirects
@@ -40,7 +40,8 @@
/docs/usage/offline https://nf-core-docs.netlify.app/docs/usage/getting_started/offline 200
/docs/usage/troubleshooting https://nf-core-docs.netlify.app/docs/usage/troubleshooting/overview 200
/docs/usage/tutorials https://nf-core-docs.netlify.app/docs/tutorials 200
-/docs/usage/tutorials/step_by_step_institutional_profile https://nf-core-docs.netlify.app/docs/tutorials/use_nf-core_pipelines/config_institutional_profile 200
+/docs/usage/tutorials/step_by_step_institutional_profile https://nf-core-docs.netlify.app/docs/tutorials/use_nf-core_pipelines/writing_institutional_profiles 200
+/docs/tutorials/use_nf-core_pipelines/config_institutional_profile https://nf-core-docs.netlify.app/docs/tutorials/use_nf-core_pipelines/writing_institutional_profiles 200
/docs/usage/tutorials/nf_core_usage_tutorial https://nf-core-docs.netlify.app/docs/tutorials/nf-core_contributing_overview 200
/docs/usage/tutorials/institution https://nf-core-docs.netlify.app/docs/tutorials/contributing_to_nf-core/contributors_institution 200
/docs/usage/data_management /blog/2020/data_management
diff --git a/sites/main-site/src/content/events/2021/bytesize-13-tuning-pipeline-performance.md b/sites/main-site/src/content/events/2021/bytesize-13-tuning-pipeline-performance.md
index 046fd7dd65..b335bd88ae 100644
--- a/sites/main-site/src/content/events/2021/bytesize-13-tuning-pipeline-performance.md
+++ b/sites/main-site/src/content/events/2021/bytesize-13-tuning-pipeline-performance.md
@@ -15,6 +15,12 @@ locations:
- https://doi.org/10.6084/m9.figshare.14680260.v1
---
+:::warning
+The information presented in this bytesize may not reflect the latest way of configuring Nextflow!
+
+Please check the Nextflow and nf-core docs for the most up-to-date information.
+::: + This week, Gisela Gabernet ([@ggabernet](http://github.com/ggabernet/)) will present: _**Tuning pipeline performance**_. This will cover: diff --git a/sites/main-site/src/content/events/2021/bytesize-2-configs.md b/sites/main-site/src/content/events/2021/bytesize-2-configs.md index c0702e2b94..8be0d37374 100644 --- a/sites/main-site/src/content/events/2021/bytesize-2-configs.md +++ b/sites/main-site/src/content/events/2021/bytesize-2-configs.md @@ -15,6 +15,12 @@ locations: - https://www.bilibili.com/video/BV1M54y1a7Uy --- +:::warning +The information presented in this bytesize may not reflect the latest way of configuring Nextflow! + +Please check the Nextflow and nf-core docs for the most up-to-date information. +::: + This week, Maxime Garcia ([@MaxUlysse](http://github.com/MaxUlysse/)) will present: _**How nf-core configs work.**_ This talk will cover: - Making your own Nextflow config file