Replies: 1 comment 1 reply
-
I'm working with the vLLM team on this going forward. Stay tuned!
-
Adapter support is a great feature that adds a lot of value by optimizing GPU usage across multiple use cases. However, as of the latest release (0.4.34), adapters only work with the PyTorch backend, which will be deprecated going forward (per the changelog).
I was wondering whether the note in the README (stating that support for fine-tuning layers + vLLM is WIP) is still up to date, and whether you plan to support both features simultaneously.
Thanks for all the work you put into OpenLLM :)
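For concreteness, here's the kind of invocation I mean — a minimal sketch only: the model name and adapter ID are placeholders, and the flags reflect my reading of the 0.4.x CLI, so treat them as assumptions rather than the exact current interface:

```shell
# Serve a model with a LoRA adapter attached, using the PyTorch backend
# (the only backend where adapters currently work, per the changelog).
# Model and adapter IDs below are illustrative placeholders.
openllm start facebook/opt-6.7b \
  --backend pt \
  --adapter-id aarnphm/opt-6-7b-quotes
```

The question is essentially whether this `--adapter-id` path will still be available once the vLLM backend is the only one supported.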