I could not figure out whether there is a way to load embedding inferences into Phoenix without stopping and restarting it. I am running Phoenix in a Docker container, so I would be interested in a way to load them into a running instance rather than during initialisation. Am I missing something, or is there currently no way to achieve this?

Replies: 2 comments

- @mauri3112 Thanks for using Phoenix. We don't support this flow at the moment. Inferences are not currently persisted in Phoenix, and adding database support for inferences is not currently on our roadmap, although that may change at a later time. If you are interested in storing and visualizing your embedding data, please check out the Arize platform.

- @mauri3112 I'm curious, in what context are you using embeddings? Are you building an LLM app, or some other use case?