
The purpose of this tool is to provide convenient utilities to evaluate calling and annotation differences between VCF files.

The common use case is that different technical approaches have been used for the same biological samples, and that the impact of these choices needs to be understood.

## Install dependencies

Installing inside a virtual environment (venv) is recommended (link).

```bash
pip install -r requirements.txt
```
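
For example, a minimal setup on Linux/macOS (the environment name `.venv` is only a convention):

```bash
# Create and activate an isolated environment, then install the dependencies
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```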

## Usage

Input VCF files can be gzipped.
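
For instance, a plain-text VCF can be compressed before it is passed in (plain gzip shown; whether the tool also expects bgzf-compressed files is not stated here):

```bash
# -k keeps the uncompressed original; produces run1.vcf.gz alongside it
gzip -k run1.vcf
```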

The functionality is organized into sub-commands. Currently there are two: `overview` and `rankmodels`.

```bash
python compare_vcf.py overview \
    --inputs run1.vcf.gz run2.vcf.gz run3.vcf.gz \
    --labels first second third \
    --outdir testout \
    --contig chr20    # Optional: quickly run on a subset of the data
```

The `rankmodels` sub-command compares rank models and rank model scores among variants:

```bash
python compare_vcf.py rankmodels \
    --inputs run1.vcf.gz run2.vcf.gz run3.vcf.gz \
    --labels first second third \
    --outdir testout \
    --topn 200 \
    --contig chr20 \
    --rankmodels "" run2_model.ini run3_model.ini    # Optional; use "" for runs without a rank model
```

## Subset of outputs

FIXME: Update to show latest outputs for each of the sub commands.

Number of called variants among the different VCF files.

*(figure: Total counts)*

Overlaps among the called variants. More info on the UpSet chart can be found here.

*(figure: Overlaps among counts)*

If a scoring metric is provided, histograms of the scores are generated for each dataset.

*(figure: Score histograms)*
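
As a sanity check outside the tool, the raw scores can be pulled straight from a VCF. A minimal sketch assuming the scores live in an INFO field and that bcftools is installed; the field name `RankScore` is an assumption, not something this tool prescribes:

```bash
# Crude score distribution: count how often each score value occurs.
# INFO/RankScore is a placeholder name; substitute the actual score field.
bcftools query -f '%INFO/RankScore\n' run1.vcf.gz | sort -n | uniq -c
```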

If a scoring metric is provided, heatmaps comparing the number of features with shared scores are also generated.

*(figure: Score heatmaps)*

A score table is also produced; it will be extended.

*(figure: Score table)*

## Planned extensions

- More detailed annotation information in the output table.
- Use GIAB as a reference baseline.