Commit

Move to prompts.py
hinthornw committed Sep 13, 2024
1 parent 23408cc commit fcfacd1
Showing 11 changed files with 232 additions and 61 deletions.
Empty file added .codespellignore
Empty file.
41 changes: 41 additions & 0 deletions .github/workflows/integration-tests.yml
@@ -0,0 +1,41 @@
# This workflow will run integration tests for the current project once per day

name: Integration Tests

on:
  schedule:
    - cron: "37 14 * * *" # Run at 7:37 AM Pacific Time (14:37 UTC) every day
  workflow_dispatch: # Allows triggering the workflow manually in GitHub UI

# If another scheduled run starts while this workflow is still running,
# cancel the earlier run in favor of the next run.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  integration-tests:
    name: Integration Tests
    strategy:
      matrix:
        os: [ubuntu-latest]
        python-version: ["3.11", "3.12"]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          curl -LsSf https://astral.sh/uv/install.sh | sh
          uv venv
          uv pip install -r pyproject.toml
          uv pip install -U pytest-asyncio
      - name: Run integration tests
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
        run: |
          uv run pytest tests/integration_tests
57 changes: 57 additions & 0 deletions .github/workflows/unit-tests.yml
@@ -0,0 +1,57 @@
# This workflow will run unit tests for the current project

name: CI

on:
  push:
    branches: ["main"]
  pull_request:
  workflow_dispatch: # Allows triggering the workflow manually in GitHub UI

# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  unit-tests:
    name: Unit Tests
    strategy:
      matrix:
        os: [ubuntu-latest]
        python-version: ["3.11", "3.12"]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          curl -LsSf https://astral.sh/uv/install.sh | sh
          uv venv
          uv pip install -r pyproject.toml
      - name: Lint with ruff
        run: |
          uv pip install ruff
          uv run ruff check .
      - name: Lint with mypy
        run: |
          uv pip install mypy
          uv run mypy --strict src/
      - name: Check README spelling
        uses: codespell-project/actions-codespell@v2
        with:
          ignore_words_file: .codespellignore
          path: README.md
      - name: Check code spelling
        uses: codespell-project/actions-codespell@v2
        with:
          ignore_words_file: .codespellignore
          path: src/
      - name: Run tests with pytest
        run: |
          uv pip install pytest
          uv run pytest tests/unit_tests
91 changes: 68 additions & 23 deletions README.md
@@ -1,33 +1,74 @@
# LangGraph ReAct Agent Template

This LangGraph template implements a simple, extensible ReAct agent.
[![CI](https://github.com/langchain-ai/react-agent/actions/workflows/unit-tests.yml/badge.svg)](https://github.com/langchain-ai/react-agent/actions/workflows/unit-tests.yml)
[![Integration Tests](https://github.com/langchain-ai/react-agent/actions/workflows/integration-tests.yml/badge.svg)](https://github.com/langchain-ai/react-agent/actions/workflows/integration-tests.yml)

This template defines a simple [ReAct agent](https://arxiv.org/abs/2210.03629) using [LangGraph](https://github.com/langchain-ai/langgraph), made for [LangGraph Studio](https://github.com/langchain-ai/langgraph-studio).

![Graph view in LangGraph studio UI](./static/studio_ui.png)

## Repo Structure
It contains an example graph exported from `src/react_agent/graph.py` that implements a simple, extensible ReAct agent capable of reasoning and acting based on user inputs.

## What it does

The ReAct agent:

1. Takes a user **query** as input
2. Reasons about the query and decides on an action
3. Executes the chosen action using available tools
4. Observes the result of the action
5. Repeats steps 2-4 until it can provide a final answer

By default, it's set up with a basic set of tools, but can be easily extended with custom tools to suit various use cases.
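For example, once dependencies and API keys are in place, the loop above can be exercised directly from Python. This is a minimal sketch that assumes the compiled graph exported from `src/react_agent/graph.py`; the input shape matches the integration test added later in this commit.

```python
import asyncio

from react_agent import graph

# Invoke the ReAct loop once with a single user message.
result = asyncio.run(
    graph.ainvoke({"messages": [("user", "Who founded LangChain?")]})
)
print(result["messages"][-1].content)  # the agent's final answer
```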

## Getting Started

Assuming you have already [installed LangGraph Studio](https://github.com/langchain-ai/langgraph-studio?tab=readme-ov-file#download), to set up:

```txt
├── LICENSE
├── README.md
├── langgraph.json
├── poetry.lock
├── pyproject.toml
├── react_agent
│   ├── __init__.py
│   ├── graph.py
│   └── utils
│       ├── __init__.py
│       ├── configuration.py  # Define the configurable variables
│       ├── state.py          # Define state variables and how they're updated
│       ├── tools.py          # Define the tools your agent can access
│       └── utils.py          # Other sundry utilities
└── tests                     # Add whatever tests you'd like here
    ├── integration_tests
    │   └── __init__.py
    └── unit_tests
        └── __init__.py
```
1. Create a `.env` file.

```bash
cp .env.example .env
```

2. Define required API keys in your `.env` file (an optional sanity check is sketched after this list).

The primary [search tool](./src/react_agent/tools.py) [^1] used is [Tavily](https://tavily.com/). Create an API key [here](https://app.tavily.com/sign-in).

<!--
Setup instruction auto-generated by `langgraph template lock`. DO NOT EDIT MANUALLY.
-->

Set up your LLM API keys. This repo defaults to using [Claude](https://console.anthropic.com/login).

<!--
End setup instructions
-->

3. Customize whatever you'd like in the code.
4. Open the folder in LangGraph Studio!
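Before opening Studio, you can sanity-check step 2. This is a small sketch, not part of the template; it only assumes `python-dotenv` (already listed in `pyproject.toml`) and the two keys used by the integration-tests workflow.

```python
import os

from dotenv import load_dotenv

load_dotenv()  # reads the .env file created in step 1
for key in ("ANTHROPIC_API_KEY", "TAVILY_API_KEY"):
    assert os.environ.get(key), f"{key} is missing from .env"
```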

## How to customize

1. **Add new tools**: Extend the agent's capabilities by adding new tools in [tools.py](./src/react_agent/tools.py). These can be any Python functions that perform specific tasks; see the sketch at the end of this section.
2. **Select a different model**: We default to Anthropic's Claude 3.5 Sonnet. You can select a compatible chat model using `provider/model-name` via configuration. Example: `openai/gpt-4-turbo-preview`.
3. **Customize the prompt**: We provide a default system prompt in [configuration.py](./src/react_agent/configuration.py). You can easily update this via configuration in the studio.

You can also quickly extend this template by:

- Modifying the agent's reasoning process in [graph.py](./src/react_agent/graph.py).
- Adjusting the ReAct loop or adding additional steps to the agent's decision-making process.
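As a concrete illustration of item 1, here is a minimal sketch of what an extra tool might look like in [tools.py](./src/react_agent/tools.py). The `multiply` function is invented for this example and is not part of the template; the only assumption is that any plain (async) Python function listed in `TOOLS` is exposed to the agent, the same way `scrape_webpage` and `search` are.

```python
from typing import Any, Callable, List


async def multiply(a: float, b: float) -> float:
    """Multiply two numbers so the agent does not have to guess arithmetic."""
    return a * b


# In tools.py you would extend the existing list, e.g.:
# TOOLS: List[Callable[..., Any]] = [scrape_webpage, search, multiply]
TOOLS: List[Callable[..., Any]] = [multiply]
```

Because tool selection is driven by the function name and docstring, a clear one-line docstring is usually enough for the model to pick the right tool.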

## Development

While iterating on your graph, you can edit past state and rerun your app from past states to debug specific nodes. Local changes will be automatically applied via hot reload. Try adding an interrupt before the agent calls tools, updating the default system message in `src/react_agent/configuration.py` to take on a persona, or adding additional nodes and edges!
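The "interrupt before the agent calls tools" idea corresponds to LangGraph's `interrupt_before` compile option. The sketch below is self-contained and uses invented node names and a toy state; the template's real graph in `src/react_agent/graph.py` will differ.

```python
from typing import Annotated, TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list, add_messages]


def call_model(state: State) -> dict:
    # Stand-in for the model-calling node; the real node invokes the chat model.
    return {"messages": [("ai", "deciding whether to call a tool...")]}


def tools(state: State) -> dict:
    # Stand-in for the tool-executing node.
    return {"messages": [("ai", "tool output would be appended here")]}


builder = StateGraph(State)
builder.add_node("call_model", call_model)
builder.add_node("tools", tools)
builder.add_edge(START, "call_model")
builder.add_edge("call_model", "tools")
builder.add_edge("tools", END)

# Interrupts require a checkpointer; execution pauses before the "tools" node runs.
graph = builder.compile(checkpointer=MemorySaver(), interrupt_before=["tools"])
```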

Follow-up requests will be appended to the same thread. You can create an entirely new thread, clearing previous history, using the `+` button in the top right.

You can find the latest (under construction) [LangGraph](https://github.com/langchain-ai/langgraph) docs, including examples and other references, in that repository. Those guides can help you pick the right patterns to adapt for your use case.

LangGraph Studio also integrates with [LangSmith](https://smith.langchain.com/) for more in-depth tracing and collaboration with teammates.

<!--
Configuration auto-generated by `langgraph template lock`. DO NOT EDIT MANUALLY.
{
@@ -37,7 +78,7 @@ Configuration auto-generated by `langgraph template lock`. DO NOT EDIT MANUALLY.
"properties": {
"system_prompt": {
"type": "string",
"default": "You are a helpful AI assistant.\nSystem time: {system_time}"
"default": "You are a helpful AI assistant.\n\nSystem time: {system_time}"
},
"model_name": {
"type": "string",
@@ -410,6 +451,10 @@ Configuration auto-generated by `langgraph template lock`. DO NOT EDIT MANUALLY.
"variables": "OPENAI_API_KEY"
}
]
},
"max_search_results": {
"type": "integer",
"default": 10
}
}
}
4 changes: 4 additions & 0 deletions pyproject.toml
@@ -15,6 +15,8 @@ dependencies = [
"langchain>=0.2.14",
"langchain-fireworks>=0.1.7",
"python-dotenv>=1.0.1",
"langchain-community>=0.2.17",
"tavily-python>=0.4.0",
]


@@ -54,5 +56,7 @@ lint.ignore = [
"D417",
"E501",
]
[tool.ruff.lint.per-file-ignores]
"tests/*" = ["D", "UP"]
[tool.ruff.lint.pydocstyle]
convention = "google"
22 changes: 20 additions & 2 deletions src/react_agent/configuration.py
@@ -2,23 +2,41 @@

from __future__ import annotations

from dataclasses import dataclass, fields
from dataclasses import dataclass, field, fields
from typing import Annotated, Optional

from langchain_core.runnables import RunnableConfig, ensure_config

from react_agent import prompts


@dataclass(kw_only=True)
class Configuration:
    """The configuration for the agent."""

    system_prompt: str = "You are a helpful AI assistant.\nSystem time: {system_time}"
    system_prompt: str = field(default=prompts.SYSTEM_PROMPT)
    """The system prompt to use for the agent's interactions.
    This prompt sets the context and behavior for the agent.
    """

    model_name: Annotated[str, {"__template_metadata__": {"kind": "llm"}}] = (
        "anthropic/claude-3-5-sonnet-20240620"
    )
    """The name of the language model to use for the agent's main interactions.
    Should be in the form: provider/model-name.
    """

    scraper_tool_model_name: Annotated[
        str, {"__template_metadata__": {"kind": "llm"}}
    ] = "accounts/fireworks/models/firefunction-v2"
    """The name of the language model to use for the web scraping tool.
    This model is specifically used for summarizing and extracting information from web pages.
    """
    max_search_results: int = 10
    """The maximum number of search results to return for each search query."""

    @classmethod
    def from_runnable_config(
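The body of `from_runnable_config` is collapsed in this view. As a usage sketch (not part of the commit), and assuming `from_runnable_config` reads the standard `configurable` mapping of a `RunnableConfig` (the usual LangChain pattern), the new `max_search_results` field can be overridden per run like any other field:

```python
from react_agent.configuration import Configuration

cfg = Configuration.from_runnable_config(
    {
        "configurable": {
            "model_name": "openai/gpt-4-turbo-preview",  # provider/model-name form
            "max_search_results": 5,
        }
    }
)
assert cfg.max_search_results == 5
assert cfg.scraper_tool_model_name.startswith("accounts/fireworks/")  # defaults still apply
```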
2 changes: 1 addition & 1 deletion src/react_agent/graph.py
@@ -1,4 +1,4 @@
"""Define a custom Reasoning and Action agent.
"""Define a custom Reasoning and Action agent.
Works with a chat model with tool calling support.
"""
5 changes: 5 additions & 0 deletions src/react_agent/prompts.py
@@ -0,0 +1,5 @@
"""Default prompts used by the agent."""

SYSTEM_PROMPT = """You are a helpful AI assistant.

System time: {system_time}"""
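As a small usage sketch (the real call site is in `src/react_agent/graph.py` and may differ in detail), the `{system_time}` placeholder is typically filled with the current UTC timestamp before the prompt is sent to the model:

```python
from datetime import datetime, timezone

from react_agent import prompts

# Format the default system prompt with the current time.
system_message = prompts.SYSTEM_PROMPT.format(
    system_time=datetime.now(tz=timezone.utc).isoformat()
)
print(system_message)
```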
52 changes: 17 additions & 35 deletions src/react_agent/tools.py
@@ -2,18 +2,21 @@
It includes:
- A web scraper that uses an LLM to summarize content based on instructions
- A basic DuckDuckGo search function
- A basic Tavily search function
These tools are intended as free examples to get started. For production use,
consider implementing more robust and specialized tools tailored to your needs.
"""

from datetime import datetime, timezone
from typing import Any, Callable, Dict, List, cast
from typing import Any, Callable, List, Optional, cast

import httpx
from langchain.chat_models import init_chat_model
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import InjectedToolArg
from typing_extensions import Annotated

from react_agent.configuration import Configuration
from react_agent.utils import get_message_text
@@ -53,40 +56,19 @@ async def scrape_webpage(url: str, instructions: str, *, config: RunnableConfig)
    return get_message_text(response_msg)


# Note, in a real use case, you'd want to use a more robust search API.
async def search_duckduckgo(query: str) -> Dict[str, Any]:
    """Search DuckDuckGo for the given query and return the JSON response.
async def search(
    query: str, *, config: Annotated[RunnableConfig, InjectedToolArg]
) -> Optional[list[dict[str, Any]]]:
    """Search for general web results.
    Results are limited, as this is the free public API.
    This function performs a search using the Tavily search engine, which is designed
    to provide comprehensive, accurate, and trusted results. It's particularly useful
    for answering questions about current events.
    """
    async with httpx.AsyncClient() as client:
        response = await client.get(
            "https://api.duckduckgo.com/", params={"q": query, "format": "json"}
        )
        result = cast(Dict[str, Any], response.json())

    result.pop("meta", None)
    return result


async def search_wikipedia(query: str) -> Dict[str, Any]:
    """Search Wikipedia for the given query and return the JSON response."""
    url = "https://en.wikipedia.org/w/api.php"
    async with httpx.AsyncClient() as client:
        response = await client.get(
            url,
            params={
                "action": "query",
                "list": "search",
                "srsearch": query,
                "format": "json",
            },
        )
    return cast(Dict[str, Any], response.json())
    configuration = Configuration.from_runnable_config(config)
    wrapped = TavilySearchResults(max_results=configuration.max_search_results)
    result = await wrapped.ainvoke({"query": query})
    return cast(list[dict[str, Any]], result)


TOOLS: List[Callable[..., Any]] = [
    scrape_webpage,
    search_duckduckgo,
    search_wikipedia,
]
TOOLS: List[Callable[..., Any]] = [scrape_webpage, search]
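A quick way to try the new Tavily-backed tool outside the graph is to call it directly. This is a sketch, not part of the commit; it assumes `TAVILY_API_KEY` is set (see the integration-tests workflow above) and that result items carry a `url` key, as `TavilySearchResults` returns.

```python
import asyncio

from react_agent.tools import search

results = asyncio.run(
    search(
        "What is LangGraph?",
        config={"configurable": {"max_search_results": 3}},
    )
)
for item in results or []:
    print(item.get("url"))
```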
14 changes: 14 additions & 0 deletions tests/integration_tests/test_graph.py
@@ -0,0 +1,14 @@
import pytest
from langsmith import unit

from react_agent import graph


@pytest.mark.asyncio
@unit
async def test_react_agent_simple_passthrough() -> None:
res = await graph.ainvoke(
{"messages": [("user", "Who is the founder of LangChain?")]}
)

assert "harrison" in str(res["messages"][-1].content).lower()
5 changes: 5 additions & 0 deletions tests/unit_tests/test_configuration.py
@@ -0,0 +1,5 @@
from react_agent.configuration import Configuration


def test_configuration_empty():
Configuration.from_runnable_config({})
