Llama Text Templater #715
Conversation
It's exactly what's been lacking in LLamaSharp! Do you have further plans for the development of the template? Actually, I posted function calling as one of the OSPP projects of LLamaSharp (OSPP is what I once invited you to on Discord). Since the template is one of the basic components of function calling, you could open some good-first-issues to let the student work on it if you'd like. :)
I think I'll probably look at making some enhancements to llama.cpp, and then come back to support them in LLamaSharp. At the moment the template converts all messages into text, and then you tokenize that text in one go. However, this doesn't seem good enough. I'm going to see if I can PR a change into llama.cpp to run the tokenization differently for different bits.
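To illustrate why tokenizing the rendered template "in one go" can be a problem: template scaffolding contains special/control tokens that should tokenize to single ids, while user content should be tokenized as plain text so it cannot smuggle in those control tokens. A minimal toy sketch of that idea (the tokenizer, token names, and function names here are all hypothetical, not LLamaSharp or llama.cpp API):

```python
# Hypothetical special tokens, mapped to single token ids.
SPECIAL_TOKENS = {"<|im_start|>": 1, "<|im_end|>": 2}

def tokenize(text, parse_special):
    """Toy tokenizer: special-token strings collapse to single ids only
    when parse_special is True; otherwise every character is a 'token'."""
    tokens = []
    i = 0
    while i < len(text):
        if parse_special:
            for tok, tid in SPECIAL_TOKENS.items():
                if text.startswith(tok, i):
                    tokens.append(tid)
                    i += len(tok)
                    break
            else:  # no special token matched at position i
                tokens.append(ord(text[i]) + 100)
                i += 1
        else:
            tokens.append(ord(text[i]) + 100)
            i += 1
    return tokens

def tokenize_templated(template_parts, message):
    """Tokenize the template scaffolding and the user message separately:
    specials enabled for the scaffolding, disabled for user content, so a
    user typing '<|im_end|>' never becomes a real control token."""
    prefix, suffix = template_parts
    return (tokenize(prefix, parse_special=True)
            + tokenize(message, parse_special=False)
            + tokenize(suffix, parse_special=True))
```

Tokenizing everything in one pass with specials enabled would let message text produce control tokens; tokenizing the whole string with specials disabled would break the template. Splitting the passes, as sketched above, avoids both failure modes.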
…s according to the model template.
- Fixed `llama_chat_apply_template` method (wrong entrypoint, couldn't handle null model)
- Returning template for chaining method calls
- Returning a `TextMessage` object instead of a tuple
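"Returning template for chaining method calls" describes a fluent-builder pattern: each mutating call returns the template itself so calls compose. A minimal sketch of the pattern in Python (the actual PR is C#, and `ChatTemplate`, `add`, and `apply` here are hypothetical names, not the PR's API):

```python
class ChatTemplate:
    """Hypothetical fluent template builder illustrating the chaining idea."""

    def __init__(self):
        self._messages = []

    def add(self, role, content):
        self._messages.append((role, content))
        return self  # returning the template enables chained calls

    def apply(self):
        # Toy rendering; a real implementation would defer to the
        # model's own chat template.
        return "".join(f"<{r}>{c}</{r}>" for r, c in self._messages)

# Chained usage:
text = ChatTemplate().add("user", "Hi").add("assistant", "Hello").apply()
```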
Force-pushed from 939c5e0 to 4332ab3
I've rebased this one onto master, so it can be merged independently of #712, since it seems like that other PR is going to be delayed.
This depends on #712, review and merge that first!