
Hi, thanks for your code, but I got very low accuracy when I used LoRA as follows. #16

Open
LiZhangMing opened this issue Jul 16, 2024 · 0 comments

Comments


LiZhangMing commented Jul 16, 2024

The GPUs are 3090s. The command profile is:

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python finetune.py \
  --base_model 'yahma/llama-7b-hf' \
  --data_path 'dataset/openbookqa/train.json' \
  --output_dir ./finetuned_result/dora_r32_epoch_test1 \
  --batch_size 16 --micro_batch_size 16 --num_epochs 3 --scaling 4.0 \
  --learning_rate 2e-4 --cutoff_len 256 --val_set_size 120 --bottleneck_size 32 \
  --eval_step 80 --save_step 80 --adapter_name lora \
  --target_modules '["q_proj", "k_proj", "v_proj", "up_proj", "down_proj"]' \
  --lora_r 16 --lora_alpha 32 --use_gradient_checkpointing
```

**Then the resulting accuracy on OBQA is 0.08333333333333333.**
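
For reference, a minimal sketch of what the LoRA hyperparameters above roughly correspond to in a HuggingFace PEFT-style setup. This is an assumption about how `finetune.py` wires the adapter (the dropout value is also assumed, since it is not set on the command line), not the repo's actual code:

```python
# Hypothetical PEFT-style LoRA setup matching the command-line flags above.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("yahma/llama-7b-hf")

lora_config = LoraConfig(
    r=16,                 # --lora_r 16
    lora_alpha=32,        # --lora_alpha 32
    target_modules=["q_proj", "k_proj", "v_proj", "up_proj", "down_proj"],
    lora_dropout=0.05,    # assumed default; not specified in the command
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```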