The agent does not use the tool to make a response? #766

Open

Jimmy-L99 opened this issue Sep 14, 2024 · 3 comments

Jimmy-L99 commented Sep 14, 2024

I referred to https://github.com/langchain-ai/langserve/blob/main/examples/agent/server.py to build my own RAG agent.

My test code is as follows:

from fastapi import FastAPI
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langserve import add_routes
from langchain_openai import ChatOpenAI
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_milvus import Milvus
from pydantic.v1 import BaseModel
from typing import Any
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_to_openai_functions
from langchain_core.utils.function_calling import format_tool_to_openai_function
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain_core.tools import tool

llm_end_point_url = "http://***:***/v1/"
model = ChatOpenAI(model="glm4v-9b", base_url=llm_end_point_url, api_key="api_key")

### embedding ###
embedding_model = HuggingFaceEmbeddings(model_name='/root/ljm/bge/bge-large-zh-v1.5')

### milvus ###
milvus_host = "***"
milvus_port = ***
collection_name = "langchain_lichi_txt"

vector_store = Milvus(
    embedding_function=embedding_model,
    collection_name=collection_name,
    connection_args={"host": milvus_host, "port": milvus_port, "db_name": "glm3"},  # "db_name" selects the Milvus database
)

retriever = vector_store.as_retriever(search_type="similarity", search_kwargs={"k": 3})

@tool
def litchi_rag(query: str) -> list:
    """该工具能够对关于荔枝的专业知识进行总结和介绍,并回答问题。"""
    return retriever.get_relevant_documents(query)

tools = [litchi_rag]

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "你是一位农业知识助手。"),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)
llm_with_tools = model.bind(functions=[format_tool_to_openai_function(t) for t in tools])
agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_functions(
            x["intermediate_steps"],
        ),
    }
    | prompt
    | llm_with_tools
    | OpenAIFunctionsAgentOutputParser()
)

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

class Input(BaseModel):
    input: str

class Output(BaseModel):
    output: Any

app = FastAPI(
    title="GLM4 LangChain Server",
    version="1.0",
    description="A simple api server using Langchain's Runnable interfaces",
)

add_routes(
    app,
    agent_executor.with_types(input_type=Input, output_type=Output).with_config(
        {"run_name": "agent"}
    ),
    path="/Litchi_RAG",
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="****", port=8010)

Then I make a query from the client:

from langserve import RemoteRunnable

remote_runnable = RemoteRunnable("http://**:***/Litchi_RAG")
query = "What litchi varieties are grown in Shenzhen?"
response = remote_runnable.invoke({"input": query})
print(response)

The chain's verbose output is as follows:

> Entering new AgentExecutor chain...
Shenzhen, in Guangdong Province, China, has a warm climate that is very well suited to growing litchis. The litchi varieties most widely planted around Shenzhen include, but are not limited to, the following:

1. **Feizixiao (妃子笑)**: one of the most common litchi varieties in Shenzhen, popular for its sweet taste and early ripening.
2. **Guiwei (桂味)**: also a common variety in Shenzhen, known for its crisp flesh and sweet flavor.
3. **Baitangying (白糖罂)**: plump, brightly colored flesh with a refreshingly sweet taste, much loved by consumers.
4. **Heiye (黑叶)**: larger fruit with firm, sweet flesh; an excellent litchi variety.

Besides the varieties above, Shenzhen may also grow other locally distinctive litchi varieties. The specific varieties can change over time with market demand; for the latest information, consult a local farmers' cooperative or agricultural extension office.

> Finished chain.

I notice that the agent didn't use the tool to perform RAG; it just responded directly.
Did I miss anything? I am stuck here.
Any help would be greatly appreciated.

eyurtsev (Collaborator) commented:

Hi @2500035435,

Apologies, we haven't updated the docs in a while. The currently recommended way to create agents is with langgraph.

LangServe at the moment doesn't help with deploying langgraph primitives, but you should be able to set up a vanilla FastAPI deployment pretty quickly (see the sketch below).

Eugene
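
A minimal sketch of what such a vanilla FastAPI deployment might look like, assuming langgraph's prebuilt create_react_agent and reusing the model and litchi_rag tool defined in the original post (the route path and payload shape here are illustrative assumptions, not a LangServe API):

from fastapi import FastAPI
from langgraph.prebuilt import create_react_agent

# Build the agent with langgraph instead of the legacy AgentExecutor;
# `model` and `litchi_rag` are the objects defined in the original post.
graph_agent = create_react_agent(model, [litchi_rag])

app = FastAPI(title="GLM4 LangGraph Server")

@app.post("/Litchi_RAG/invoke")  # hypothetical route served by plain FastAPI, not add_routes
async def invoke(body: dict):
    # langgraph agents take a list of messages rather than {"input": ...}
    result = await graph_agent.ainvoke({"messages": [("user", body["input"])]})
    return {"output": result["messages"][-1].content}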

1vash commented Oct 2, 2024

@eyurtsev Hi, just to confirm,

  1. Do you recommend using LangGraph for all agent development, even for simple RAG agents? In my case, LangServe with a basic LangChain RAG agent works well.
  2. Does this mean the LangServe team doesn't intend to support deployment of LangChain agents going forward? If so, that seems like an important update.

Ivan

eyurtsev (Collaborator) commented Oct 3, 2024

  1. Do you recommend using LangGraph for all agent development, even for simple RAG agents? In my case, LangServe with a basic LangChain RAG agent works well.

The langgraph RAG agent is just as simple: https://python.langchain.com/docs/tutorials/qa_chat_history/#tying-it-together-1
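
Condensed, the pattern in that tutorial is roughly the following (a sketch reusing the litchi_rag tool and model from this issue, not the tutorial's exact code):

from langgraph.prebuilt import create_react_agent

# The whole prompt / bind / output-parser / AgentExecutor pipeline above
# collapses into one prebuilt call; `model` and `litchi_rag` are as defined earlier.
rag_agent = create_react_agent(model, [litchi_rag])

result = rag_agent.invoke(
    {"messages": [("user", "What litchi varieties are grown in Shenzhen?")]}
)
print(result["messages"][-1].content)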

LangServe may just not work for deploying it since it's not aware of langgraph constructs.

If you have working code and it's good, then keep it as is -- no reason to change.

If you're writing new code, I'd go with langgraph.

  2. Does this mean the LangServe team doesn't intend to support deployment of LangChain agents going forward? If so, that seems like an important update.

LangServe is designed to work for runnables constructed using LCEL (this includes the old LangChain AgentExecutor).

In the example above, there's likely some issue in the code (e.g., using a deprecated function-calling API from OpenAI, or maybe some issue with the system prompt) -- so this isn't so much a deployment question as a question of whether the agent code is correct.
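
One hedged guess at a fix, if that diagnosis is right: the original code binds tools via the legacy OpenAI functions API (format_tool_to_openai_function plus OpenAIFunctionsAgentOutputParser), which many OpenAI-compatible servers for local models silently ignore. The same agent on the current tool-calling API might look like this, assuming the served glm4v-9b endpoint supports OpenAI-style tool calls at all:

from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an agricultural knowledge assistant."),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

# create_tool_calling_agent binds tools via model.bind_tools(), i.e. the
# current `tools` request field rather than the deprecated `functions` one.
agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)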
