
First Release PR Changes #29

Merged 36 commits on Aug 23, 2024
- `75e3056` Upgrading isoquant (atrull314, Aug 7, 2024)
- `26696f3` Removing high_memory label from isoquant (atrull314, Aug 8, 2024)
- `45c1672` Removing check_samplesheet.py (atrull314, Aug 12, 2024)
- `bc0bc9f` Removing unnecessary exclusions in ci (atrull314, Aug 12, 2024)
- `d6fe57e` Fixing input validation (atrull314, Aug 12, 2024)
- `4b392a4` Improving documentation (atrull314, Aug 12, 2024)
- `ebf7af2` Removing commented line (atrull314, Aug 12, 2024)
- `0e1e4e2` Fixing comment (atrull314, Aug 12, 2024)
- `fa4d24c` Changing to long-format parameters for readability (atrull314, Aug 12, 2024)
- `97c431c` Removing useMamba default for conda profile (atrull314, Aug 12, 2024)
- `f87f8dc` Fixing formatting (atrull314, Aug 12, 2024)
- `9a149c6` Removing local versions of minimap align and index in favor of the nf… (atrull314, Aug 14, 2024)
- `e51e864` Removing local paftools module (atrull314, Aug 14, 2024)
- `22c10cc` Adding version output to read_counts process (atrull314, Aug 14, 2024)
- `a084083` Removing local nanocomp module in favor of nf-core module (atrull314, Aug 14, 2024)
- `f3afd56` Updating seurat image (atrull314, Aug 14, 2024)
- `2f6f766` Updating BLAZE citation (atrull314, Aug 14, 2024)
- `8cbecfa` Removing local pigz module and nf-core gunzip in favor for the nf-cor… (atrull314, Aug 14, 2024)
- `0ac8ce0` Adding read_counts image (atrull314, Aug 15, 2024)
- `4e945d9` Adding read_counts to output.md (atrull314, Aug 15, 2024)
- `501e0ea` Replacing gunzip with pigz (atrull314, Aug 15, 2024)
- `ca2c2b6` syntax clean-up and removal of outdated comments (lianov, Aug 16, 2024)
- `9945b86` reducing process_high_mem back to nf-core default (lianov, Aug 16, 2024)
- `580ed5e` patch: nf-core module minimap index, nanocomp to promethion sensitive… (lianov, Aug 16, 2024)
- `070144f` fix ch_multiqc_config variable (lianov, Aug 16, 2024)
- `6c722f0` Fixing output channel name when unzipping ref files (atrull314, Aug 16, 2024)
- `3c9ccc5` Fixing issue with pigz uncompress (atrull314, Aug 16, 2024)
- `9fb6aa3` Linting (atrull314, Aug 19, 2024)
- `07f2700` Enabling min_q_score and min_length on Nanofilt (atrull314, Aug 19, 2024)
- `c3ff71a` changes to docs (lianov, Aug 20, 2024)
- `b75151f` Merge branch 'dev' of https://github.com/U-BDS/scnanoseq into dev (lianov, Aug 20, 2024)
- `576214e` Params cleanup (atrull314, Aug 21, 2024)
- `23a7648` Linting (atrull314, Aug 22, 2024)
- `a28bd0b` Changing default value of min_length (atrull314, Aug 22, 2024)
- `86b0556` updated read counts and seurat images with trimming enabled (lianov, Aug 23, 2024)
- `94f03f6` Setting version to 1.0.0 (atrull314, Aug 23, 2024)
1 change: 0 additions & 1 deletion .github/workflows/ci.yml
Expand Up @@ -4,7 +4,6 @@ on:
push:
branches:
- dev
- template_update
pull_request:
release:
types: [published]
6 changes: 0 additions & 6 deletions .nf-core.yml
@@ -1,18 +1,12 @@
repository_type: pipeline
nf_core_version: "2.14.1"

pipeline_todos: false

lint:
template_strings: False # "Jinja string found in" bin/create_regex.py and bin/seurat_qc.R
files_unchanged:
- CODE_OF_CONDUCT.md
- .github/CONTRIBUTING.md
- .github/workflows/linting.yml
- lib/NfcoreTemplate.groovy
- docs/images/nf-core-scnanoseq_logo_dark.png
pipeline_todos:
- README.md
- main.nf
multiqc_config:
- report_comment
1 change: 0 additions & 1 deletion .prettierignore
@@ -10,4 +10,3 @@ testing/
testing*
*.pyc
bin/
docs/output.md
4 changes: 2 additions & 2 deletions CITATIONS.md
@@ -10,9 +10,9 @@

## Pipeline tools

- [BLAZE](https://www.biorxiv.org/content/10.1101/2022.08.16.504056v1)
- [BLAZE](https://pubmed.ncbi.nlm.nih.gov/37024980/)

> You Y, Prawer Y D, De Paoli-Iseppi R, Hunt C P, Parish C L, Shim H, Clark M B. Identification of cell barcodes from long-read single-cell RNA-seq with BLAZE. bioRxiv 2022 Aug .08.16.504056; doi: 10.1101/2022.08.16.504056.
> You Y, Prawer YDJ, De Paoli-Iseppi R, Hunt CPJ, Parish CL, Shim H, Clark MB. Identification of cell barcodes from long-read single-cell RNA-seq with BLAZE. Genome Biol. 2023 Apr 6;24(1):66. doi: 10.1186/s13059-023-02907-y. PMID: 37024980; PMCID: PMC10077662.

- [FastQC](https://www.bioinformatics.babraham.ac.uk/projects/fastqc/)

47 changes: 29 additions & 18 deletions README.md
@@ -19,7 +19,7 @@

## Introduction

**nf-core/scnanoseq** is a bioinformatics best-practice analysis pipeline for 10X Genomics single-cell/nuclei RNA-seq for data derived from Oxford Nanopore Q20+ chemistry ([R10.4 flow cells (>Q20)](https://nanoporetech.com/about-us/news/oxford-nanopore-announces-technology-updates-nanopore-community-meeting)). Due to the expectation of >Q20 quality, the input data for the pipeline is not dependent on Illumina paired data. Please note `scnanoseq` can also process Oxford data with older chemistry, but we encourage usage of the Q20+ chemistry.
**nf-core/scnanoseq** is a bioinformatics best-practice analysis pipeline for 10X Genomics single-cell/nuclei RNA-seq for data derived from Oxford Nanopore Q20+ chemistry ([R10.4 flow cells (>Q20)](https://nanoporetech.com/about-us/news/oxford-nanopore-announces-technology-updates-nanopore-community-meeting)). Due to the expectation of >Q20 quality, the input data for the pipeline is not dependent on Illumina paired data. **Please note `scnanoseq` can also process Oxford data with older chemistry, but we encourage usage of the Q20+ chemistry when possible**.

The pipeline is built using [Nextflow](https://www.nextflow.io), a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers making installation trivial and results highly reproducible. The [Nextflow DSL2](https://www.nextflow.io/docs/latest/dsl2.html) implementation of this pipeline uses one container per process which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from [nf-core/modules](https://github.com/nf-core/modules) in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!

@@ -30,8 +30,8 @@ On release, automated continuous integration tests run the pipeline on a full-si
![scnanoseq diagram](assets/scnanoseq_diagram.png)

1. Raw read QC ([`FastQC`](https://www.bioinformatics.babraham.ac.uk/projects/fastqc/), [`NanoPlot`](https://github.com/wdecoster/NanoPlot), [`NanoComp`](https://github.com/wdecoster/nanocomp) and [`ToulligQC`](https://github.com/GenomiqueENS/toulligQC))
2. Unzip and split FastQ ([`gunzip`](https://linux.die.net/man/1/gunzip))
1. Optional: Split fastq for faster processing ([`split`](https://linux.die.net/man/1/split))
2. Unzip and split FASTQ ([`pigz`](https://github.com/madler/pigz))
1. Optional: Split FASTQ for faster processing ([`split`](https://linux.die.net/man/1/split))
3. Trim and filter reads. ([`Nanofilt`](https://github.com/wdecoster/nanofilt))
4. Post trim QC ([`FastQC`](https://www.bioinformatics.babraham.ac.uk/projects/fastqc/), [`NanoPlot`](https://github.com/wdecoster/NanoPlot) and [`ToulligQC`](https://github.com/GenomiqueENS/toulligQC))
5. Barcode detection using a custom whitelist or 10X whitelist. [`BLAZE`](https://github.com/shimlab/BLAZE)
@@ -43,7 +43,7 @@ On release, automated continuous integration tests run the pipeline on a full-si
9. Alignment ([`minimap2`](https://github.com/lh3/minimap2))
10. Post-alignment filtering of mapped reads and gathering mapping QC ([`SAMtools`](http://www.htslib.org/doc/samtools.html))
11. Post-alignment QC in unfiltered BAM files ([`NanoComp`](https://github.com/wdecoster/nanocomp), [`RSeQC`](https://rseqc.sourceforge.net/))
12. Barcode tagging with read quality, BC, BC quality, UMI, and UMI quality (custom script `./bin/tag_barcodes.py`)
12. Barcode (BC) tagging with read quality, BC quality, UMI, and UMI quality (custom script `./bin/tag_barcodes.py`)
13. UMI-based deduplication [`UMI-tools`](https://github.com/CGATOxford/UMI-tools)
14. Gene and transcript level matrices generation. [`IsoQuant`](https://github.com/ablab/IsoQuant)
15. Preliminary matrix QC ([`Seurat`](https://github.com/satijalab/seurat))
@@ -56,9 +56,7 @@ On release, automated continuous integration tests run the pipeline on a full-si

First, prepare a samplesheet with your input data that looks as follows:

`samplesheet.csv`:

```csv
```csv title="samplesheet.csv"
sample,fastq,cell_count
CONTROL_REP1,AEG588A1_S1.fastq.gz,5000
CONTROL_REP1,AEG588A1_S2.fastq.gz,5000
@@ -86,24 +84,29 @@ For more details and further functionality, please refer to the [usage documenta

## Pipeline output

This pipeline produces feature barcode matrices at both the gene and transcript level and can be configured to retain introns within the counts themselves. These feature-barcode matrices are able to be ingested directly by most packages used for downstream analyses such as `Seurat`. Additionally, the pipeline produces a number of quality control metrics to ensure that the samples processed meet expected metrics for single-cell/nuclei data.

To see the results of an example test run with a full size dataset refer to the [results](https://nf-co.re/scnanoseq/results) tab on the nf-core website pipeline page.
For more details about the output files and reports, please refer to the
For more details about the full set of output files and reports, please refer to the
[output documentation](https://nf-co.re/scnanoseq/output).

This pipeline produces feature barcode matrices at both the gene and transcript level and can retain introns within the counts themselves. These files can be ingested directly by most packages used for downstream analyses, such as `Seurat`. In addition, the pipeline produces a number of quality control metrics to ensure that the samples processed meet expected metrics for single-cell/nuclei data.

## Troubleshooting

If you experience any issues, please make sure to submit an issue above. However, some resolutions for common issues will be noted below:
If you experience any issues, please reach out on the [#scnanoseq slack channel](https://nfcore.slack.com/archives/C03TUE2K6NS) or [open an issue on our GitHub repository](https://github.com/nf-core/scnanoseq/issues/new/choose). Some resolutions for common issues are noted below:

- Due to the nature of the data this pipeline analyzes, some tools can experience increased runtimes. For some of the custom tools made for this pipeline (`preextract_fastq.py` and `correct_barcodes.py`), we have leveraged the splitting that is done via the `split_amount` param to decrease their overall runtimes. The `split_amount` parameter will split the input fastqs into a number of fastq files that each have a number of lines based on the value used for this parameter. As a result, it is important not to set this parameter to be too low as it would cause the creation of a large number of files the pipeline will be processed. While this value can be highly dependent on the data, a good starting point for an analysis would be to set this value to `500000`. If you find that `PREEXTRACT_FASTQ` and `CORRECT_BARCODES` are still taking long amounts of time to run, it would be worth reducing this parameter to `200000` or `100000`, but keeping the value on the order of hundred of thousands or tens of thousands should help with with keeping the total number of processes minimal.
- One issue that has been observed is a recurrent node failure on slurm clusters that does seem to be related to submission of nextflow jobs. This issue is not related to this pipeline itself, but rather to nextflow itself. Our reserach computing are currently working on a resolution. But we have two methods that appear to help overcome should this issue arise:
1. The first is to create a custom config that increases the memory request for the job that failed. This may take a couple attempts to find the correct requests, but we have noted that there does appear to be a memory issue occassionally with this errors.
2. The second resolution is to request an interactive session with a decent amount of time and memory and cpus in order to run the pipeline on the single node. Note that this will take time as there will be minimal parallelization, but this does seem to resolve the issue.
- We acknowledge that analyzing PromethION is a common use case for this pipeline. Currently, the pipeline has been developed with defaults to analyze GridION and average sized PromethION data. For cases, where jobs have failed due for larger PromethION datasets, the defaults have been overwritten by a custom configuation file (provided by the `-c` Nextflow option) where resources were increased (substantially in some cases). Below are some of the overrides we have used, while these amounts may not work on every dataset, these will hopefully at least note which processes will need to have their resources increased:
- Due to the nature of the data this pipeline analyzes, some tools can experience increased runtimes. For some of the custom tools made for this pipeline (`preextract_fastq.py` and `correct_barcodes.py`), we have leveraged the splitting that is done via the `split_amount` parameter to decrease their overall runtimes. The `split_amount` parameter splits each input FASTQ into a number of FASTQ files, each containing at most the number of lines given by this parameter. As a result, it is important not to set this parameter too low, as that would create a very large number of files for the pipeline to process. While the best value is highly dependent on the data, a good starting point for an analysis is `500000`. If you find that `PREEXTRACT_FASTQ` and `CORRECT_BARCODES` are still taking a long time to run, it is worth reducing this parameter to `200000` or `100000`, but keeping the value on the order of hundreds of thousands or tens of thousands should help keep the total number of processes minimal. An example of setting this parameter to `500000` is shown below:

```yml title="params.yml"
split_amount: 500000
```

- We have seen a recurrent node failure on SLURM clusters that does seem to be related to submission of Nextflow jobs. This issue is not related to this pipeline per se, but rather to Nextflow itself. We are currently working on a resolution, but we have two methods that appear to help overcome this issue should it arise:
1. Provide a custom config that increases the memory request for the job that failed. This may take a couple attempts to find the correct requests, but we have noted that there does appear to be a memory issue occasionally with these errors.
2. Request an interactive session with a decent amount of time and memory and CPUs in order to run the pipeline on the single node. Note that this will take time as there will be minimal parallelization, but this does seem to resolve the issue.
- We acknowledge that analyzing PromethION data is a common use case for this pipeline. Currently, the pipeline has been developed with defaults suited to GridION and average-sized PromethION data. For cases where jobs have failed on larger PromethION datasets, the defaults can be overwritten by a custom configuration file (provided by the `-c` Nextflow option) in which resources are increased (substantially in some cases). Below are some of the overrides we have used; while these amounts may not work on every dataset, they will hopefully at least indicate which processes need their resources increased:

```groovy title="custom.config"

process
{
withName: '.*:.*FASTQC.*'
@@ -126,6 +129,14 @@ process
}
}

process
{
withName: '.*:TAG_BARCODES'
{
memory = '60.GB'
}
}

process
{
withName: '.*:SAMTOOLS_SORT'
@@ -146,8 +157,8 @@ process
{
withName: '.*:ISOQUANT'
{
cpus = 40
time = '135.h'
cpus = 30
memory = '85.GB'
}
}
```
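To build intuition for choosing `split_amount` (which, per the description above, counts FASTQ *lines*, four per read), here is a rough back-of-the-envelope sketch. It is not part of the pipeline, just an illustrative helper:

```python
import math

def estimate_chunks(total_reads: int, split_amount: int) -> int:
    """Estimate how many split FASTQ files a given split_amount yields.

    FASTQ stores each read as 4 lines, and split_amount is a line
    count, so total lines = 4 * total_reads.
    """
    total_lines = 4 * total_reads
    return math.ceil(total_lines / split_amount)

# e.g. a 50-million-read run with split_amount = 500000
print(estimate_chunks(50_000_000, 500_000))  # 400 chunks
```

Each chunk becomes its own `PREEXTRACT_FASTQ` / `CORRECT_BARCODES` task, which is why values in the tens or hundreds of thousands keep the process count manageable.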
4 changes: 2 additions & 2 deletions assets/multiqc_config.yml
@@ -1,7 +1,7 @@
report_comment: >
This report has been generated by the <a href="https://github.com/nf-core/scnanoseq/0.1.0dev" target="_blank">nf-core/scnanoseq</a>
This report has been generated by the <a href="https://github.com/nf-core/scnanoseq/dev" target="_blank">nf-core/scnanoseq</a>
analysis pipeline. For information about how to interpret these results, please see the
<a href="https://nf-co.re/scnanoseq/0.1.0dev/output" target="_blank">documentation</a>.
<a href="https://nf-co.re/scnanoseq/dev/output" target="_blank">documentation</a>.

report_section_order:
"nf-core-scnanoseq-methods-description":
2 changes: 1 addition & 1 deletion assets/schema_input.json
@@ -21,7 +21,7 @@
"errorMessage": "FastQ file for reads 1 must be provided, cannot contain spaces and must have extension '.fq.gz' or '.fastq.gz'"
},
"cell_count": {
"type": "string"
"type": "integer"
}
},
"required": ["sample", "fastq", "cell_count"]
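The `cell_count` type change from string to integer means samplesheet validation now expects a numeric value in that column. A minimal sketch of the equivalent check in plain Python (a hypothetical helper for illustration, not pipeline code):

```python
import csv
import io

def parse_samplesheet(text: str) -> list:
    """Parse a samplesheet CSV, coercing cell_count to int to mirror
    the schema's "type": "integer" requirement."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        # int() raises ValueError for non-numeric values, analogous
        # to a schema validation failure.
        row["cell_count"] = int(row["cell_count"])
        rows.append(row)
    return rows

sheet = "sample,fastq,cell_count\nCONTROL_REP1,AEG588A1_S1.fastq.gz,5000\n"
print(parse_samplesheet(sheet)[0]["cell_count"])  # 5000
```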