
[Feature] Upgrade llama.cpp to support Phi-3-mini-128k-instruct and IBM Granite #2668

Closed
dlippold opened this issue Jul 14, 2024 · 2 comments · May be fixed by #2790
Labels
enhancement New feature or request

Comments

@dlippold

dlippold commented Jul 14, 2024

Feature Request

It would be nice if llama.cpp could be upgraded, because recent versions support new models, e.g. Phi-3-mini-128k-instruct and IBM Granite.

@dlippold dlippold added the enhancement New feature or request label Jul 14, 2024
@manyoso
Collaborator

manyoso commented Jul 14, 2024

This is being worked on, and we'll have a new release out soon that includes the latest llama.cpp.

@ThiloteE ThiloteE changed the title [Feature] Upgrade of llama.cpp [Feature] Upgrade llama.cpp to support Phi-3-mini-128k-instruct and IBM Granite Aug 3, 2024
@ThiloteE
Collaborator

ThiloteE commented Aug 3, 2024

llama.cpp has been upgraded, hence I will close this issue.

  • I am raising a PR for the phi-3 model.
  • The IBM Granite models are untested and may require custom quantizations. If you do the necessary testing and provide quants that are proven to work with GPT4All, I am sure PRs are welcome. Benchmarks of that model family look good; it seems to be better than llama-3-8b at coding.

@ThiloteE ThiloteE closed this as completed Aug 3, 2024
ThiloteE added a commit that referenced this issue Aug 3, 2024
Resolves #2668

Adds model support for [Phi-3-mini-128k-instruct](https://huggingface.co/GPT4All-Community/Phi-3-mini-128k-instruct)

### Description of Model

As of this writing, the model posts strong benchmark results for its parameter size. It claims to support a context window of up to 128k tokens.

- The model was trained/finetuned on English
- License: MIT

### Personal Impression:
For 3.8 billion parameters, the model produces reasonable output. It can hold a conversation and follow tasks. I held a conversation spanning 24k characters, and even at that context length it still answered "what is 2x2?" correctly, although responses understandably degrade slightly at that size. I have seen refusals when it was tasked with certain things, and it appears to be finetuned with a particular alignment. Its long context and quality of responses make it a good model, provided you can bear its alignment or your use case falls within the model's originally intended uses. It will mainly appeal to English-speaking users.

### Critique:

This model does not support Grouped Query Attention (GQA), which means models that do support GQA may need less RAM/VRAM for the same number of tokens in the context window. It has been claimed that llama-3-8b (which supports GQA) needs less RAM beyond a certain point (~8k context).
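
As a rough illustration of why GQA matters here, below is a back-of-envelope fp16 KV-cache calculation. This is only a sketch: the layer counts, head counts, and head dimensions are taken from the two models' published Hugging Face configs, and real memory use also depends on the runtime and any KV-cache quantization.

```python
# Rough fp16 KV-cache size: 2 tensors (K and V) per layer, each holding
# kv_heads * head_dim values per token, at 2 bytes per fp16 value.
def kv_cache_gib(layers, kv_heads, head_dim, n_ctx, bytes_per_val=2):
    return 2 * layers * kv_heads * head_dim * n_ctx * bytes_per_val / 2**30

# Phi-3-mini: 32 layers, head_dim 96, no GQA, so all 32 heads are KV heads.
# Llama-3-8B: 32 layers, head_dim 128, GQA with only 8 KV heads.
for name, kv_heads, head_dim in [("Phi-3-mini", 32, 96), ("Llama-3-8B", 8, 128)]:
    print(f"{name}: {kv_cache_gib(32, kv_heads, head_dim, 8192):.1f} GiB at 8k context")

# Phi-3-mini: 3.0 GiB vs. Llama-3-8B: 1.0 GiB -- the much smaller model
# carries a 3x larger cache, consistent with the crossover claim above.
```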

### Motivation for this pull-request

- The model is small: it fits into 3 GB of VRAM, or about 4 GB of RAM when run on the CPU. (I set 8 GB of RAM as the minimum requirement, since the operating system and other apps also need some; see the back-of-envelope estimate after this list.)
- The model claims long context and it delivers (although with high RAM usage in longer conversations).
- AFAIK, apart from the Qwen1.5 and Qwen2 model families, this is the only general-purpose model family below 4B parameters that delivers that large a context window and is also compatible with GPT4All.
- For its size, it ranks high on the Hugging Face Open LLM Leaderboard.
- Made by Microsoft, the model comes from a reputable source.
- Users were asking for this model
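
For the curious, here is the arithmetic behind the memory claim in the first bullet. This is a sketch: ~4.5 bits per weight is an assumed average for a Q4_0-style GGUF quant, and actual totals vary by quant type and runtime buffers.

```python
# Approximate in-memory size of a 4-bit quant of a 3.8B-parameter model.
params = 3.8e9
bits_per_weight = 4.5          # assumed average, incl. block scales
weights_gb = params * bits_per_weight / 8 / 1e9
print(f"~{weights_gb:.1f} GB for the weights alone")  # ~2.1 GB
# The KV cache and compute buffers come on top, which is how the totals
# land near the 3 GB VRAM / 4 GB RAM figures quoted above.
```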


## Checklist before requesting a review
- [x] I have performed a self-review of my code.
- [ ] If it is a core feature, I have added thorough tests.
- [ ] I have added thorough documentation for my code.
- [x] I have tagged PR with relevant project labels. I acknowledge that a PR without labels may be dismissed.
- [ ] If this PR addresses a bug, I have provided both a screenshot/video of the original bug and the working solution.

Signed-off-by: ThiloteE <[email protected]>