Get custom headers in ChatResponse #377

Open
lasseedfast opened this issue Dec 13, 2024 · 5 comments
@lasseedfast

I'm using an Nginx server to direct requests made with the Ollama Python library to different running Ollama servers. The Nginx server selects an Ollama server based on availability, among other factors, and sends that information in the X-Choosen-Server header. Using Python's requests library, I can receive this header in the response. However, when using Ollama, it seems like I can't.
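Roughly what I mean – the proxy URL, model name and message below are just placeholders for my setup:

```python
import requests
from ollama import Client

# With requests, the proxy's header is visible on the response object:
r = requests.post(
    "http://my-nginx-proxy/api/chat",  # placeholder for my Nginx endpoint
    json={
        "model": "llama3.2",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": False,
    },
)
print(r.headers.get("X-Choosen-Server"))  # the internal IP:port chosen by Nginx

# With the Ollama library, I only get the fields of the ChatResponse back:
client = Client(host="http://my-nginx-proxy")
response = client.chat(model="llama3.2", messages=[{"role": "user", "content": "Hello"}])
# response exposes model, message, timings, etc. – but not the HTTP headers
```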
Have I overlooked something, or would it be possible to expose custom response headers in the ChatResponse?

@ParthSareen
Contributor

Hey! You should be able to pass httpx client options directly into the Client instantiation, as they are passed through. Give that a shot and let me know if you run into issues :)
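Something along these lines (the host and header values below are just examples), since extra keyword arguments to Client are forwarded to the underlying httpx.Client:

```python
from ollama import Client

# Keyword arguments beyond `host` are forwarded to httpx.Client
client = Client(
    host="http://my-nginx-proxy",           # example host
    headers={"X-My-Header": "some-value"},  # custom request headers
    timeout=30.0,                           # any other httpx.Client option
)
```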

@lasseedfast
Author

Thanks! Maybe I wasn't clear about the problem – I have tried passing options into the Client, but I don't understand how to receive any data beyond what is included in the Ollama ChatResponse object, even though the Nginx server in the middle is sending things like custom X- headers along with the Ollama response.

@ParthSareen
Contributor

Ahhh, I see what you mean – sorry about that. Let me dig into this a bit and see if there's a good pattern we can expose. We definitely do not support this right now. For my own knowledge – do you need the value that Nginx returns? Is this throwing an error, or is it more of a "good to know / nice to have"?
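In the meantime, here's a rough, untested sketch of a possible workaround – it relies on the extra keyword arguments being forwarded to httpx.Client (as mentioned above), so an httpx response event hook can capture the headers before the body is parsed into a ChatResponse:

```python
from ollama import Client

captured = {}

def remember_server(response):
    # httpx calls this hook for every response it receives;
    # header lookup is case-insensitive
    captured["chosen_server"] = response.headers.get("X-Choosen-Server")

client = Client(
    host="http://my-nginx-proxy",                 # placeholder for your proxy
    event_hooks={"response": [remember_server]},  # httpx.Client option, passed through
)

reply = client.chat(model="llama3.2", messages=[{"role": "user", "content": "Hello"}])
print(captured["chosen_server"])
```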

@lasseedfast
Author

lasseedfast commented Dec 19, 2024

I would love to know the return! In this case it's a string, more precisely an internal IP and port number (one of the running Ollama servers), but I imagine there are other use cases as well. And it's not throwing an error. As I understand the ChatResponse.model_dump method, it checks the response from the server – normally an Ollama instance, but in this case the Nginx proxy – and includes all the values that are specified. I'm not very familiar with the typing library, which is also why I'm not trying to solve this myself...

@ParthSareen
Contributor

Okay! Appreciate the explanation. Will leave this issue open for now and think about how we can possibly do this. Thank you!

@ParthSareen ParthSareen self-assigned this Dec 19, 2024