
[WIP]Feat/refactor3 #2030

Open

tastelikefeet wants to merge 122 commits into main
Conversation

tastelikefeet
Collaborator

PR type

  • Bug Fix
  • New Feature
  • Document Updates
  • More Models or Datasets Support
  • Refactor

PR information

Write the detailed information for this PR.

Experiment results

Paste your experiment results here (if needed).

…actor3

* commit '2d1aba96281c8f646881427fa857388b07fdcbef':
  Add FAQ Document (#2013)
  fix lmdeploy qwen_vl (#2009)
…actor3

* commit 'e46cda27abc5122402394b50e03b3d61ec04d0dc':
  Fix olora and pissa file saving, which caused the second save to fail (#2032)
  fix deploy eval kill (#2029)
  update code (#2028)
  refactor rlhf (#1975)
  Florence use post_encode & template support encoder-decoder (#2019)

# Conflicts:
#	examples/customization/custom.py
#	examples/pytorch/llm/scripts/dpo/lora/dpo.sh
#	examples/pytorch/llm/scripts/dpo/lora_ddp_mp/dpo.sh
#	swift/llm/model/model.py
#	swift/trainers/push_to_ms.py
1. remove quantization config from function_kwargs
2. temporarily disable the awq and gptq patches
3. read quantization config from config.py (see the sketch below)
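A hedged illustration of point 3 — the general pattern of reading the quantization block from the checkpoint's config instead of threading it through function_kwargs, not swift's actual config.py; `get_quantization_config` is a hypothetical helper name:

```python
# Hedged sketch only: `get_quantization_config` is an illustrative name,
# not swift's API. It reads the quantization block from the model config
# rather than receiving it via function_kwargs.
from transformers import AutoConfig


def get_quantization_config(model_dir: str):
    """Return the quantization block of the checkpoint config, or None."""
    config = AutoConfig.from_pretrained(model_dir)
    # AWQ/GPTQ checkpoints store their settings under `quantization_config`;
    # unquantized models simply lack the attribute.
    return getattr(config, 'quantization_config', None)
```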
…actor3

* commit '3028145d966d3dac038a2f844e2d3aca022d1348':
  support multi bbox grounding (#2045)
  fix mplug-owl3 (#2042)
  add (#2037)
  fix rlhf & zero3 (#2034)

# Conflicts:
#	swift/llm/dataset/dataset.py
…actor3

* commit 'd12f9bce106580c4719083608fb18041686258ff':
  llama3 tool calling (#2048)
  fix (#2047)
…der to classes, one for hub datasets, one for local datasets
…rocess function to a subclass of RowPreprocessor (sketched below)
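A hedged sketch of that refactor: a free-standing preprocess function becomes a RowPreprocessor subclass. Only the RowPreprocessor name comes from this PR; the method signature and the AlpacaPreprocessor example are assumptions.

```python
# Hedged sketch only: RowPreprocessor is the class named above, but the
# method name, signature, and AlpacaPreprocessor are illustrative.
from typing import Any, Dict, Optional


class RowPreprocessor:
    def preprocess(self, row: Dict[str, Any]) -> Optional[Dict[str, Any]]:
        raise NotImplementedError


class AlpacaPreprocessor(RowPreprocessor):
    """Formerly a free function; as a subclass, hub and local datasets
    can share one preprocessing path."""

    def preprocess(self, row: Dict[str, Any]) -> Optional[Dict[str, Any]]:
        response = row.get('output')
        if not response:
            return None  # drop rows without a response
        parts = [row.get('instruction'), row.get('input')]
        query = '\n'.join(p for p in parts if p)
        return {'query': query, 'response': response}
```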
1. split model/vllm/lmdeploy from the template base class
2. create InferTemplate to wrap the template used in inference (sketched after these notes)
Seriously doubt that passing the model into encode will solve the multiprocessing problem
2. remove the llm/utils folder and move its contents into other folders
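A hedged sketch of the wrapper idea from point 2 above: backend-specific inference handling moves out of the base template. InferTemplate is the name given in the notes; every method and field below is an assumption, not swift's actual code.

```python
# Hedged sketch only: InferTemplate is named in the notes above, but the
# methods, fields, and backend handling here are illustrative assumptions.
class Template:
    """Base template: encoding shared by training and inference."""

    def encode(self, example: dict) -> dict:
        input_ids = list(range(len(example.get('query', ''))))  # placeholder
        return {'input_ids': input_ids, 'labels': list(input_ids)}


class InferTemplate:
    """Wraps a Template for inference so vllm/lmdeploy specifics stay out
    of the base class."""

    def __init__(self, template: Template, backend: str = 'pt'):
        self.template = template
        self.backend = backend  # e.g. 'pt', 'vllm', or 'lmdeploy'

    def encode(self, example: dict) -> dict:
        encoded = self.template.encode(example)
        encoded.pop('labels', None)  # labels are training-only
        return encoded
```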