Modify finetune function #69

Merged
merged 14 commits into teamtomo:finetuning on Sep 17, 2024

Conversation

@Hanyi11 (Contributor) commented on Jun 3, 2024

Main Changes:

  1. Fixed the issue of losing header information by loading the tomogram before calling the skeletonization function: tomo.data is now passed into skeletonization, and the resulting skeleton replaces the original tomo.data before saving.
  2. Removed the PrintLearningRate callback from the fine-tuning function.
  3. Added the Fine-tuning.md file.

# Diff excerpt under review (the excerpt starts mid-statement; earlier context lines were truncated):
    os.path.splitext(os.path.basename(label_path))[0] + "_skel.mrc",
)

store_tomogram(filename=out_file, tomogram=ske)
Collaborator commented:

As raised in #82, this will lose the header information.
Can we load the tomogram before calling skeletonization and pass it instead of the label path?

load_tomogram gives you a Tomogram instance. If you simply alter tomo.data, you can pass the Tomogram into store_tomogram instead of ske.
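
Put together, a minimal sketch of what this suggestion amounts to; the import paths and the skeletonization call signature are assumptions, while load_tomogram, Tomogram, tomo.data, and store_tomogram are taken from the thread:

```python
# Minimal sketch of the suggested fix (import paths and the exact
# skeletonization signature are assumptions, not the PR's literal code).
import os

from membrain_seg.segmentation.dataloading.data_utils import (  # assumed module path
    load_tomogram,
    store_tomogram,
)
from membrain_seg.segmentation.skeletonize import skeletonization  # assumed module path

label_path = "membrane_segmentation.mrc"  # hypothetical input label file
out_file = os.path.splitext(os.path.basename(label_path))[0] + "_skel.mrc"

tomo = load_tomogram(label_path)  # Tomogram instance: voxel data plus header metadata
ske = skeletonization(tomo.data)  # skeletonize the array rather than re-reading the file
tomo.data = ske                   # swap in the skeleton; the header stays untouched
store_tomogram(filename=out_file, tomogram=tomo)  # pass the Tomogram, not the bare array
```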

# Diff excerpt under review: PrintLearningRate's hook (above) and its instantiation in the fine-tuning function (below)
def on_epoch_start(self, trainer, pl_module):
    current_lr = trainer.optimizers[0].param_groups[0]["lr"]
    print(f"Epoch {trainer.current_epoch}: Learning Rate = {current_lr}")

print_lr_cb = PrintLearningRate()
Collaborator commented:

Do we need this callback in the standard function for the user?

Collaborator replied:

I think no

@@ -266,3 +272,112 @@ def forward(
        # Normalize loss
        loss = loss / sum(self.weights)
        return loss


class PrintLearningRate(Callback):
@LorenzLamm (Collaborator) commented on Sep 16, 2024:

Okay to keep this here, but I don't think it needs to be actively used in the fine-tune functionality. I guess this is only useful for debugging.
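
For reference, a minimal sketch of how the callback shown in the diff could still be attached as an opt-in debugging aid instead of being wired into the fine-tuning function by default; the Trainer arguments and the commented-out fit call are illustrative placeholders, not the repository's actual training code:

```python
# Illustrative only: keep PrintLearningRate as an opt-in debugging callback.
# The Trainer setup and fit call are placeholders, not membrain-seg's API.
import pytorch_lightning as pl
from pytorch_lightning import Callback


class PrintLearningRate(Callback):
    """Print the current learning rate at the start of each epoch (debugging aid)."""

    # Hook name taken from the diff; newer Lightning versions use on_train_epoch_start.
    def on_epoch_start(self, trainer, pl_module):
        current_lr = trainer.optimizers[0].param_groups[0]["lr"]
        print(f"Epoch {trainer.current_epoch}: Learning Rate = {current_lr}")


trainer = pl.Trainer(
    max_epochs=10,
    callbacks=[PrintLearningRate()],  # only add this when inspecting the LR schedule
)
# trainer.fit(model, datamodule=datamodule)  # placeholders for the actual model/data
```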

LorenzLamm merged commit 3989e37 into teamtomo:finetuning on Sep 17, 2024
10 checks passed
LorenzLamm added a commit that referenced this pull request on Sep 17, 2024:
* add skeletonization code

* Second commit

* Second commit

* Second commit

* Second commit

* Third commit

* Third commit

* Fourth commit

* Fourth commit

* Fix data type warning and absolute value error

* Add finetune function

* Modify finetune function

* Add Fine-tuning.md file

---------

Co-authored-by: Hanyi11 <[email protected]>
Co-authored-by: Hanyi Zhang <[email protected]>
Co-authored-by: Hanyi Zhang <[email protected]>
Co-authored-by: Hanyi Zhang <[email protected]>