Prediction Guard allows you to utilize hosted open-access LLMs, LVMs, and embedding functionality with seamlessly integrated safeguards. In addition to providing scalable access to open models, Prediction Guard lets you configure factual consistency checks, toxicity filters, PII filters, and prompt injection blocking. Join the Prediction Guard Discord channel and request an API key to get started.
This embedding microservice efficiently converts text into vector embeddings using the BridgeTower model, making it well suited for both RAG and semantic search applications.

Note: The BridgeTower model served by Prediction Guard can embed text, images, or text + images (jointly). For now, this microservice embeds only text; a follow-on contribution will enable the multimodal functionality.
Set the following environment variable first:

```bash
export PREDICTIONGUARD_API_KEY=${your_predictionguard_api_key}
```
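Before continuing, you can sanity-check that the variable is set in your current shell (a plain shell test, nothing service-specific):

```bash
# Prints a confirmation only if the variable is non-empty
test -n "$PREDICTIONGUARD_API_KEY" && echo "API key is set"
```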
Build the Docker image:

```bash
cd ../../..
docker build -t opea/embedding-predictionguard:latest -f comps/embeddings/predictionguard/Dockerfile .
```
Start the microservice:

```bash
docker run -d --name="embedding-predictionguard" -p 6000:6000 -e PREDICTIONGUARD_API_KEY=$PREDICTIONGUARD_API_KEY opea/embedding-predictionguard:latest
```
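Once the container is up, you can confirm it started cleanly by tailing its logs (the container name comes from the `docker run` command above):

```bash
# Follow the microservice logs; Ctrl+C to stop tailing
docker logs -f embedding-predictionguard
```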
Send a test request to the microservice:

```bash
curl localhost:6000/v1/embeddings \
  -X POST \
  -d '{"text":"Hello, world!"}' \
  -H 'Content-Type: application/json'
```
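The service responds with JSON. To pretty-print the response for inspection, you can pipe the same request through `jq` (assuming `jq` is installed; `python3 -m json.tool` works as an alternative):

```bash
# -s silences curl's progress output; jq pretty-prints the JSON response
curl -s localhost:6000/v1/embeddings \
  -X POST \
  -d '{"text":"Hello, world!"}' \
  -H 'Content-Type: application/json' | jq .
```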