Commit: Fix for #200
satyaog committed Aug 16, 2023
1 parent 0350916 commit 3792d9d
Showing 5 changed files with 24 additions and 19 deletions.
2 changes: 1 addition & 1 deletion docs/examples/data/index.rst
@@ -3,4 +3,4 @@ Data Handling during Training
 *****************************
 
 
-.. include:: examples/data/torchvision/_index.rst
+.. include:: torchvision/index.rst
26 changes: 15 additions & 11 deletions docs/examples/data/torchvision/README.rst
@@ -1,3 +1,7 @@
+.. NOTE: This file is auto-generated from examples/data/torchvision/index.rst
+.. This is done so this file can be easily viewed from the GitHub UI.
+.. **DO NOT EDIT**
+
 Torchvision
 ===========
 
@@ -7,8 +11,8 @@ Torchvision
 Make sure to read the following sections of the documentation before using this
 example:
 
-* :ref:`pytorch_setup`
-* :ref:`001 - Single GPU Job`
+* `examples/frameworks/pytorch_setup <https://github.com/mila-iqia/mila-docs/tree/master/docs/examples/frameworks/pytorch_setup>`_
+* `examples/distributed/single_gpu <https://github.com/mila-iqia/mila-docs/tree/master/docs/examples/distributed/single_gpu>`_

The full source code for this example is available on `the mila-docs GitHub
repository.
@@ -19,7 +23,7 @@ repository.
 
 .. code:: diff
 
-    # distributed/001_single_gpu/job.sh -> data/torchvision/job.sh
+    # distributed/single_gpu/job.sh -> data/torchvision/job.sh
     #!/bin/bash
     #SBATCH --gpus-per-task=rtx8000:1
     #SBATCH --cpus-per-task=4
@@ -84,7 +88,7 @@ repository.
 
 .. code:: diff
 
-    # distributed/001_single_gpu/main.py -> data/torchvision/main.py
+    # distributed/single_gpu/main.py -> data/torchvision/main.py
     -"""Single-GPU training example."""
     +"""Torchvision training example."""
     import logging
@@ -198,7 +202,8 @@ repository.
     logger.debug(f"Accuracy: {accuracy.item():.2%}")
     logger.debug(f"Average Loss: {loss.item()}")
-    # Advance the progress bar one step and update the progress bar text.
+    # Advance the progress bar one step, and update the "postfix" () the progress bar. (nicer than just)
     progress_bar.update(1)
     progress_bar.set_postfix(loss=loss.item(), accuracy=accuracy.item())
     progress_bar.close()
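For context, `set_postfix` is the tqdm call that appends metric text after the bar, which is what the comment in the hunk above refers to. A minimal standalone sketch of the pattern (the loop and metric values here are placeholders, not the example's real training loop):

```python
from tqdm import tqdm

progress_bar = tqdm(total=10, desc="train", unit="batch")
for step in range(10):
    loss = 1.0 / (step + 1)   # placeholder metrics for illustration
    accuracy = step / 10
    progress_bar.update(1)    # advance the bar by one step
    # set_postfix renders "loss=..., accuracy=..." after the bar
    progress_bar.set_postfix(loss=loss, accuracy=accuracy)
progress_bar.close()
```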
@@ -243,7 +248,8 @@ repository.
-    """Returns the training, validation, and test splits for CIFAR10.
+    """Returns the training, validation, and test splits for iNat.
-    NOTE: We don't use image transforms here for simplicity.
+    NOTE: We use the same image transforms here for train/val/test just to keep things simple.
+    Having different transformations for train and validation would complicate things a bit.
     Later examples will show how to do the train/val/test split properly when using transforms.
     """
@@ -308,13 +314,11 @@ repository.
 from torchvision.datasets import INaturalist
 
-def link_file(src:str, dest:str):
-    Path(src).symlink_to(dest)
+def link_file(src: Path, dest: Path) -> None:
+    src.symlink_to(dest)
 
-def link_files(src:str, dest:str, workers=4):
-    src = Path(src)
-    dest = Path(dest)
+def link_files(src: Path, dest: Path, workers: int = 4) -> None:
     os.makedirs(dest, exist_ok=True)
     with Pool(processes=workers) as pool:
         for path, dnames, fnames in os.walk(str(src)):
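The hunk above only shows the top of `link_files`; a self-contained sketch of the parallel-symlink pattern it uses (walk the source tree, mirror directories, fan symlink creation out to a worker pool) might look like the following. The helper names mirror the example, but the body is an illustrative guess, not the repository's exact code:

```python
import os
from multiprocessing import Pool
from pathlib import Path

def link_file(link_path: Path, target: Path) -> None:
    # Create a symlink at link_path pointing at target.
    link_path.symlink_to(target)

def link_files(src: Path, dest: Path, workers: int = 4) -> None:
    # Recreate src's directory tree under dest, symlinking every file.
    os.makedirs(dest, exist_ok=True)
    with Pool(processes=workers) as pool:
        for root, _dirs, fnames in os.walk(src):
            rel = Path(root).relative_to(src)
            (dest / rel).mkdir(parents=True, exist_ok=True)
            pool.starmap(
                link_file,
                [(dest / rel / name, Path(root) / name) for name in fnames],
            )
```

Symlinking instead of copying is the point of the example: the dataset archive stays in shared storage while jobs see a local-looking directory tree.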
docs/examples/data/torchvision/index.rst
@@ -7,8 +7,8 @@ Torchvision
 Make sure to read the following sections of the documentation before using this
 example:
 
-* :ref:`pytorch_setup`
-* :ref:`001 - Single GPU Job`
+* :doc:`/examples/frameworks/pytorch_setup/index`
+* :doc:`/examples/distributed/single_gpu/index`

The full source code for this example is available on `the mila-docs GitHub
repository.
@@ -17,19 +17,19 @@ repository.

**job.sh**

-.. literalinclude:: examples/data/torchvision/job.sh.diff
+.. literalinclude:: job.sh.diff
:language: diff


**main.py**

-.. literalinclude:: examples/data/torchvision/main.py.diff
+.. literalinclude:: main.py.diff
:language: diff


**data.py**

-.. literalinclude:: examples/data/torchvision/data.py
+.. literalinclude:: data.py
:language: python


4 changes: 2 additions & 2 deletions docs/examples/generate_diffs.sh
@@ -32,8 +32,8 @@ generate_diff distributed/multi_gpu/job.sh distributed/multi_node/job.sh
generate_diff distributed/multi_gpu/main.py distributed/multi_node/main.py

# single_gpu -> torchvision
-generate_diff distributed/001_single_gpu/job.sh data/torchvision/job.sh
-generate_diff distributed/001_single_gpu/main.py data/torchvision/main.py
+generate_diff distributed/single_gpu/job.sh data/torchvision/job.sh
+generate_diff distributed/single_gpu/main.py data/torchvision/main.py

# single_gpu -> checkpointing
generate_diff distributed/single_gpu/job.sh good_practices/checkpointing/job.sh
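The `generate_diff` helper itself is defined earlier in `generate_diffs.sh` and is not shown in this hunk. Its effect can be sketched in Python with `difflib`; the output location, the `root` parameter, and the `"# src -> dest"` header format are assumptions inferred from the rendered `.diff` files above, not the script's actual implementation:

```python
import difflib
from pathlib import Path

def generate_diff(src: str, dest: str, root: Path = Path("docs/examples")) -> Path:
    """Write a unified diff of root/src against root/dest to root/<dest>.diff."""
    a = (root / src).read_text().splitlines(keepends=True)
    b = (root / dest).read_text().splitlines(keepends=True)
    out = root / (dest + ".diff")
    # Label the diff like the rendered examples' "# src -> dest" header.
    header = f"# {src} -> {dest}\n"
    out.write_text(header + "".join(difflib.unified_diff(a, b, fromfile=src, tofile=dest)))
    return out
```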
1 change: 1 addition & 0 deletions docs/index.rst
@@ -45,6 +45,7 @@ recommend you start by checking out the :ref:`short quick start guide

 examples/frameworks/index
 examples/distributed/index
+examples/data/index
 examples/good_practices/index

