Change learning rate for segmentation configs to 0.002 (#722)
nkaenzig authored Dec 6, 2024
1 parent 9f73e79 commit 9c1c29d
Showing 8 changed files with 24 additions and 11 deletions.
2 changes: 1 addition & 1 deletion configs/vision/pathology/offline/segmentation/bcss.yaml
@@ -66,7 +66,7 @@ model:
optimizer:
class_path: torch.optim.AdamW
init_args:
-lr: ${oc.env:LR_VALUE, 0.0001}
+lr: ${oc.env:LR_VALUE, 0.002}
lr_scheduler:
class_path: torch.optim.lr_scheduler.PolynomialLR
init_args:
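The same `${oc.env:LR_VALUE, ...}` pattern appears in every segmentation config touched here: the new 0.002 default applies only when the `LR_VALUE` environment variable is unset. A minimal sketch of how that interpolation resolves, assuming OmegaConf's built-in `oc.env` resolver handles it (a standalone illustration, not eva's actual config-loading code):

```python
# Sketch: resolving `${oc.env:LR_VALUE, 0.002}` with OmegaConf's built-in `oc.env`
# resolver. Standalone illustration only; eva's own config loading may differ.
import os
from omegaconf import OmegaConf

cfg = OmegaConf.create({"optimizer": {"init_args": {"lr": "${oc.env:LR_VALUE, 0.002}"}}})

# With LR_VALUE unset, the interpolation falls back to the new default.
print(OmegaConf.to_container(cfg, resolve=True))  # lr -> 0.002

# Exporting LR_VALUE overrides the default for every config that references it.
os.environ["LR_VALUE"] = "0.0005"
print(OmegaConf.to_container(cfg, resolve=True))  # lr -> "0.0005" (env values resolve as strings)
```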
2 changes: 1 addition & 1 deletion configs/vision/pathology/offline/segmentation/consep.yaml
@@ -66,7 +66,7 @@ model:
optimizer:
class_path: torch.optim.AdamW
init_args:
-lr: ${oc.env:LR_VALUE, 0.0001}
+lr: ${oc.env:LR_VALUE, 0.002}
lr_scheduler:
class_path: torch.optim.lr_scheduler.PolynomialLR
init_args:
2 changes: 1 addition & 1 deletion configs/vision/pathology/offline/segmentation/monusac.yaml
@@ -68,7 +68,7 @@ model:
optimizer:
class_path: torch.optim.AdamW
init_args:
-lr: ${oc.env:LR_VALUE, 0.0001}
+lr: ${oc.env:LR_VALUE, 0.002}
lr_scheduler:
class_path: torch.optim.lr_scheduler.PolynomialLR
init_args:
2 changes: 1 addition & 1 deletion configs/vision/pathology/online/segmentation/consep.yaml
@@ -59,7 +59,7 @@ model:
optimizer:
class_path: torch.optim.AdamW
init_args:
-lr: ${oc.env:LR_VALUE, 0.0001}
+lr: ${oc.env:LR_VALUE, 0.002}
lr_scheduler:
class_path: torch.optim.lr_scheduler.PolynomialLR
init_args:
2 changes: 1 addition & 1 deletion configs/vision/pathology/online/segmentation/monusac.yaml
@@ -60,7 +60,7 @@ model:
optimizer:
class_path: torch.optim.AdamW
init_args:
-lr: ${oc.env:LR_VALUE, 0.0001}
+lr: ${oc.env:LR_VALUE, 0.002}
lr_scheduler:
class_path: torch.optim.lr_scheduler.PolynomialLR
init_args:
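For orientation, the optimizer and scheduler classes these configs point at can be instantiated directly in PyTorch. A rough sketch with the new default learning rate; the segmentation head is a hypothetical stand-in and the scheduler arguments are assumptions, since the diff only shows the class paths and the LR:

```python
# Rough sketch of the AdamW + PolynomialLR pair named in the configs above.
# The head below is a hypothetical stand-in; `total_iters`/`power` are assumed,
# as the diff does not show them.
import torch

head = torch.nn.Conv2d(384, 8, kernel_size=1)  # hypothetical segmentation decoder head

optimizer = torch.optim.AdamW(head.parameters(), lr=0.002)  # new default LR
scheduler = torch.optim.lr_scheduler.PolynomialLR(optimizer, total_iters=2_000, power=1.0)

for _ in range(3):  # toy loop: the LR decays polynomially towards zero over total_iters steps
    optimizer.step()
    scheduler.step()
    print(scheduler.get_last_lr())
```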
2 changes: 1 addition & 1 deletion docs/leaderboards.md
@@ -40,7 +40,7 @@ We selected this approach to prioritize reliable, robust and fair FM-evaluation
| **Output activation function** | none | none | none |
| **Number of steps** | 12,500 | 12,500 (1) | 2,000 |
| **Base batch size** | 256 | 32 | 64 |
-| **Base learning rate** | 0.0003 | 0.001 | 0.0001 |
+| **Base learning rate** | 0.0003 | 0.001 | 0.002 |
| **Early stopping** | 5% * [Max epochs] | 10% * [Max epochs] (2) | 10% * [Max epochs] (2) |
| **Optimizer** | SGD | AdamW | AdamW |
| **Momentum** | 0.9 | n/a | n/a |
9 changes: 8 additions & 1 deletion src/eva/core/models/__init__.py
@@ -2,7 +2,13 @@

from eva.core.models.modules import HeadModule, InferenceModule
from eva.core.models.networks import MLP
-from eva.core.models.wrappers import BaseModel, HuggingFaceModel, ModelFromFunction, ONNXModel
+from eva.core.models.wrappers import (
+    BaseModel,
+    HuggingFaceModel,
+    ModelFromFunction,
+    ONNXModel,
+    TorchHubModel,
+)

__all__ = [
"HeadModule",
@@ -12,4 +18,5 @@
"HuggingFaceModel",
"ModelFromFunction",
"ONNXModel",
"TorchHubModel",
]
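With the re-export above, the new wrapper can be imported from the package root alongside the existing ones; a minimal import check, assuming an environment with eva installed:

```python
# Minimal import check for the package-level re-export (assumes eva is installed).
from eva.core.models import HuggingFaceModel, ONNXModel, TorchHubModel

print(TorchHubModel)  # the torch.hub wrapper now sits next to the other model wrappers
```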
14 changes: 10 additions & 4 deletions src/eva/core/models/wrappers/from_torchhub.py
@@ -1,6 +1,6 @@
"""Model wrapper for torch.hub models."""

-from typing import Any, Callable, Dict, Tuple
+from typing import Any, Callable, Dict, List, Tuple

import torch
import torch.nn as nn
@@ -72,16 +72,22 @@ def load_model(self) -> None:
        TorchHubModel.__name__ = self._model_name

    @override
-    def model_forward(self, tensor: torch.Tensor) -> torch.Tensor:
+    def model_forward(self, tensor: torch.Tensor) -> torch.Tensor | List[torch.Tensor]:
        if self._out_indices is not None:
            if not hasattr(self._model, "get_intermediate_layers"):
                raise ValueError(
                    "Only models with `get_intermediate_layers` are supported "
                    "when using `out_indices`."
                )

-            return self._model.get_intermediate_layers(
-                tensor, self._out_indices, reshape=True, return_class_token=False, norm=self._norm
+            return list(
+                self._model.get_intermediate_layers(
+                    tensor,
+                    self._out_indices,
+                    reshape=True,
+                    return_class_token=False,
+                    norm=self._norm,
+                )
            )

        return self._model(tensor)
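The behaviour the updated `model_forward` implements can be reproduced outside the wrapper against any backbone that exposes `get_intermediate_layers`, e.g. a DINOv2 model from torch.hub. A rough sketch; the hub repo/model and the indices are illustrative, and loading the model requires network access:

```python
# Standalone sketch of the new behaviour: with `out_indices` set, the tuple from
# `get_intermediate_layers` is materialised as a list of reshaped feature maps.
# The hub repo/model and indices below are illustrative examples.
import torch

backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")  # downloads weights
out_indices = (8, 9, 10, 11)  # example indices, not taken from the diff

with torch.no_grad():
    features = list(
        backbone.get_intermediate_layers(
            torch.randn(1, 3, 224, 224),
            out_indices,
            reshape=True,
            return_class_token=False,
            norm=True,
        )
    )

print([f.shape for f in features])  # expected: four feature maps, e.g. [1, 384, 16, 16] each
```

Returning a plain list rather than the raw tuple presumably lets downstream consumers that expect a list-typed feature pyramid (e.g. segmentation decoders) use the output without an extra conversion.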
