
# LLM Client Interfaces

## Overview

In the MiniAutoGen framework, the Large Language Model (LLM) client interfaces are designed to interact with various LLMs, such as OpenAI's GPT models. These interfaces provide a unified way to retrieve model responses across different implementations, such as OpenAI's API and LiteLLM.

## Core Interfaces and Implementations

### class `LLMClientInterface`

**Description:** An interface for LLM clients, defining the method for retrieving model responses from a given prompt.

**Abstract method:** `get_model_response(self, prompt, model_name="gpt-4", temperature=1)`

- **Purpose:** Retrieves a model response based on the input prompt.
- **Parameters:**
  - `prompt` (str): The input prompt for the model.
  - `model_name` (str, optional): The name of the model to use. Defaults to `"gpt-4"`.
  - `temperature` (float, optional): The temperature parameter for model generation. Defaults to `1`.
- **Returns:** `str` containing the generated model response.
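
As a minimal sketch, the interface could be defined with Python's `abc` module; the source does not show the actual base-class machinery, so the details below are assumptions:

```python
from abc import ABC, abstractmethod


class LLMClientInterface(ABC):
    """Interface for LLM clients (sketch; exact definition not shown in the source)."""

    @abstractmethod
    def get_model_response(self, prompt, model_name="gpt-4", temperature=1):
        """Return the model's response to `prompt` as a string."""
        raise NotImplementedError
```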

### class `OpenAIClient(LLMClientInterface)`

**Description:** A concrete implementation of `LLMClientInterface` for interacting with OpenAI's LLMs.

**Constructor:** `__init__(self, api_key)`

- **Purpose:** Initializes the OpenAI client with the given API key.
- **Parameters:**
  - `api_key` (str): The API key for accessing the OpenAI service.

Method: get_model_response(self, prompt, model="gpt-3.5-turbo", temperature=1)

  • Functionality: Retrieves a model response using the OpenAI API.
  • Parameters:
    • model (str, optional): The name of the model to use. Defaults to "gpt-3.5-turbo".
  • Error Handling: Logs an error message if the API call fails.
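
A sketch of how this client could be implemented, assuming the modern `openai` Python SDK (an `OpenAI` client with `chat.completions.create`); the actual implementation is not shown here, and returning `None` on failure is an assumption:

```python
import logging

from openai import OpenAI

# Assuming the interface lives alongside the clients, as in the usage example below.
from miniautogen.llms.llm_client import LLMClientInterface

logger = logging.getLogger(__name__)


class OpenAIClient(LLMClientInterface):
    def __init__(self, api_key):
        # Store a configured SDK client for later calls.
        self.client = OpenAI(api_key=api_key)

    def get_model_response(self, prompt, model="gpt-3.5-turbo", temperature=1):
        try:
            response = self.client.chat.completions.create(
                model=model,
                temperature=temperature,
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except Exception as exc:
            # Logged per the description above; the failure return value is an assumption.
            logger.error("OpenAI API call failed: %s", exc)
            return None
```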

### class `LiteLLMClient(LLMClientInterface)`

**Description:** A concrete implementation of `LLMClientInterface` for interacting with LiteLLM.

**Constructor:** `__init__(self, model)`

- **Purpose:** Initializes the LiteLLM client with a specified model.
- **Parameters:**
  - `model` (str): The identifier of the LiteLLM model to use (e.g., `"ollama/phi"`).

**Method:** `get_model_response(self, prompt, model_name=None)`

- **Functionality:** Retrieves a model response using the LiteLLM API.
- **Error handling:** Logs an error message if the API call fails.
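
A comparable sketch, assuming the client wraps `litellm.completion` (which accepts OpenAI-style message lists) and that a `model_name` of `None` falls back to the model passed to the constructor; both are assumptions, not documented behavior:

```python
import logging

from litellm import completion

# Assuming the interface lives alongside the clients, as in the usage example below.
from miniautogen.llms.llm_client import LLMClientInterface

logger = logging.getLogger(__name__)


class LiteLLMClient(LLMClientInterface):
    def __init__(self, model):
        # Model identifier string, e.g. "ollama/phi".
        self.model = model

    def get_model_response(self, prompt, model_name=None):
        try:
            response = completion(
                model=model_name or self.model,  # assumed fallback to the constructor's model
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except Exception as exc:
            logger.error("LiteLLM API call failed: %s", exc)
            return None
```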

## Example Usage

### Using `OpenAIClient`

```python
api_key = "your_openai_api_key"
openai_client = OpenAIClient(api_key)
response = openai_client.get_model_response("Hello, world!")
```

### Using `LiteLLMClient`

```python
# LiteLLM models are addressed by string identifiers (see the component example below).
litellm_client = LiteLLMClient(model="ollama/phi")
response = litellm_client.get_model_response("How's the weather today?")
```

### Using an LLM Client with a Component

```python
from miniautogen.llms.llm_client import LiteLLMClient
# LLMResponseComponent must also be imported; its module path is not shown in this document.

# Initialize the LiteLLM client with the specified model
litellm_client = LiteLLMClient(model="ollama/phi")

# Create an instance of LLMResponseComponent using the LiteLLM client
llm_component = LLMResponseComponent(litellm_client)
```