In the MiniAutoGen framework, the Large Language Model (LLM) client interfaces are designed to interact with various LLMs, such as OpenAI's GPT models. They provide a unified way to retrieve model responses across different backends, such as OpenAI's API and LiteLLM.
An interface for LLM clients, defining the method to get model responses based on a given prompt.
- Purpose: Retrieves a model response based on the input prompt.
- Parameters:
  - `prompt` (str): The input prompt for the model.
  - `model_name` (str, optional): The name of the model to use. Defaults to `"gpt-4"`.
  - `temperature` (float, optional): The temperature parameter for model generation. Defaults to `1`.
- Returns: `str` representing the generated model response.
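Concretely, such an interface can be expressed as an abstract base class. The sketch below is illustrative only, assuming Python's `abc` module; the exact definition in MiniAutoGen may differ:

```python
from abc import ABC, abstractmethod

class LLMClientInterface(ABC):
    """Contract that every LLM client implementation must fulfill."""

    @abstractmethod
    def get_model_response(self, prompt: str, model_name: str = "gpt-4",
                           temperature: float = 1) -> str:
        """Return the generated model response for the given prompt."""
        raise NotImplementedError
```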
A concrete implementation of `LLMClientInterface` for interacting with OpenAI's LLMs.
- Purpose: Initializes the OpenAI client with the given API key.
- Parameters:
  - `api_key` (str): The API key for accessing the OpenAI service.
- Functionality: Retrieves a model response using the OpenAI API.
- Parameters:
  - `model` (str, optional): The name of the model to use. Defaults to `"gpt-3.5-turbo"`.
- Error Handling: Logs an error message if the API call fails.
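A minimal sketch of what this client might look like, assuming the official `openai` Python package (v1 client); the actual MiniAutoGen implementation may structure the call and the error fallback differently:

```python
import logging
from openai import OpenAI

class OpenAIClient(LLMClientInterface):
    def __init__(self, api_key: str):
        # The v1 OpenAI SDK wraps the API key in a client object.
        self.client = OpenAI(api_key=api_key)

    def get_model_response(self, prompt: str, model: str = "gpt-3.5-turbo",
                           temperature: float = 1) -> str:
        try:
            response = self.client.chat.completions.create(
                model=model,
                temperature=temperature,
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except Exception as exc:
            # Mirrors the documented behavior: log the error on failure.
            # Returning an empty string here is an assumption.
            logging.error("OpenAI API call failed: %s", exc)
            return ""
```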
A concrete implementation of `LLMClientInterface` for interacting with LiteLLM.
- Purpose: Initializes the LiteLLM client with a specified model.
- Parameters:
  - `model`: The LiteLLM model to be used.
- Functionality: Retrieves a model response using the LiteLLM API.
- Error Handling: Logs an error message if the API call fails.
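A comparable sketch for the LiteLLM-backed client, assuming the `litellm` package's `completion` function, which returns an OpenAI-style response object:

```python
import logging
from litellm import completion

class LiteLLMClient(LLMClientInterface):
    def __init__(self, model: str):
        # LiteLLM addresses models by identifier string, e.g. "ollama/phi".
        self.model = model

    def get_model_response(self, prompt: str, temperature: float = 1) -> str:
        try:
            response = completion(
                model=self.model,
                temperature=temperature,
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except Exception as exc:
            # As documented above: log the failure instead of raising.
            logging.error("LiteLLM API call failed: %s", exc)
            return ""
```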
Example usage of `OpenAIClient`:

```python
api_key = "your_openai_api_key"
openai_client = OpenAIClient(api_key)
response = openai_client.get_model_response("Hello, world!")
```
Example usage of `LiteLLMClient`, where the model is given as a LiteLLM identifier string (e.g. `"ollama/phi"`, as in the integration example below):

```python
lite_llm_model = "ollama/phi"  # any LiteLLM-supported model identifier
litellm_client = LiteLLMClient(lite_llm_model)
response = litellm_client.get_model_response("How's the weather today?")
```
Integrating the client with `LLMResponseComponent`:

```python
from miniautogen.llms.llm_client import LiteLLMClient
# Import path for LLMResponseComponent assumed from MiniAutoGen's pipeline module:
from miniautogen.pipeline.components import LLMResponseComponent

# Initialize the LiteLLM client with the specified model
litellm_client = LiteLLMClient(model="ollama/phi")

# Create an instance of LLMResponseComponent using the LiteLLM client
llm_component = LLMResponseComponent(litellm_client)
```