Considering that TGI now supports the Messages API compatible with OpenAI API specs, it would be great to have native support in the Inference package.
```shell
curl localhost:3000/v1/chat/completions \
    -X POST \
    -d '{
      "model": "tgi",
      "messages": [
        { "role": "system", "content": "You are a helpful assistant." },
        { "role": "user", "content": "What is deep learning?" }
      ],
      "stream": true,
      "max_tokens": 20
    }' \
    -H 'Content-Type: application/json'
```
https://huggingface.co/docs/text-generation-inference/messages_api
I tried this, but `model` is not sent, and the server raises a backend error.
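For reference, a minimal Python sketch of the same request built with only the standard library, showing where the `model` field must appear in the payload (this assumes a TGI server on `localhost:3000`; `"tgi"` is the placeholder model name from the Messages API docs, since TGI serves a single model):

```python
import json
import urllib.request

# Payload mirroring the curl example above. The "model" key is required
# by the OpenAI-compatible endpoint; omitting it is what triggers the
# backend error described in this issue.
payload = {
    "model": "tgi",  # placeholder name accepted by TGI's Messages API
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is deep learning?"},
    ],
    "stream": True,
    "max_tokens": 20,
}


def send_chat_request(payload, url="http://localhost:3000/v1/chat/completions"):
    """POST the chat-completion payload to a running TGI server."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)
```

A native client in the Inference package would presumably construct this payload itself, defaulting `model` to `"tgi"` so callers don't hit the backend error.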