
[doc] add FAQ placeholder, add note on arguments for training #927

Merged: 1 commit into OptimalScale:data-challenge on Jan 1, 2025

Conversation

@lxaw commented Dec 31, 2024

Added a small note under the training section and a placeholder for the to-be-added FAQ section.

@shizhediao (Contributor) left a comment


LGTM! Thanks!

@@ -65,6 +66,7 @@ LoRA is a parameter-efficient finetuning algorithm and is more efficient than fu
```sh
bash run_finetune_with_lora.sh
```
Note: Please double-check that you have updated the [training script](https://raw.githubusercontent.com/OptimalScale/LMFlow/refs/heads/data-challenge/run_finetune_with_lora.sh) with the correct arguments for your use case.
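For reference, a minimal sketch of what an invocation with explicit arguments could look like. The flag names and values below (`--model_name_or_path`, `--dataset_path`, `--output_lora_path`) are assumptions for illustration only; the authoritative argument list is the one provided for the challenge and defined in the script itself.

```sh
# Hypothetical invocation: the flags and values here are illustrative
# assumptions, not the official challenge settings. Check
# run_finetune_with_lora.sh for the arguments it actually accepts.
bash run_finetune_with_lora.sh \
  --model_name_or_path facebook/galactica-1.3b \
  --dataset_path data/alpaca/train \
  --output_lora_path output_models/finetuned_lora
```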
@shizhediao (Contributor) commented on the added note:
I think we expect every participant to use exactly the same arguments, which should be provided by us.
Let's mark this in our note. We will do that.

@lxaw (Author) replied Jan 1, 2025

@shizhediao I see, makes sense!

> Let's mark this in our note

Marked in the internal note, thanks for the heads up!

@shizhediao merged commit 1b80d9a into OptimalScale:data-challenge on Jan 1, 2025