Toolkits/issue949 generate openai tool schema #1070

Open
wants to merge 14 commits into master
Conversation

Zhangzeyu97
Collaborator

This commit refactors the FunctionTool class and adds a new function, generate_docstring, to the function_tool.py file.

Description

It adds a new method generate_openai_tool_schema that uses an optional assistant model to generate the OpenAI tool schema for the specified function. If no assistant model is provided, it defaults to creating a GPT_4O_MINI model. The function's source code is used to generate a docstring and a schema, which are validated before the final schema is returned; the docstring itself is produced by generate_docstring. If schema generation or validation fails, the process retries up to two times. This refactor improves the functionality and flexibility of the FunctionTool class.
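As a rough illustration of the flow described above, here is a minimal sketch (not the PR's actual implementation): the `generate` and `validate` callables stand in for the real docstring/schema generation (`generate_docstring`) and `validate_openai_tool_schema` helpers, and the default GPT_4O_MINI assistant would be created inside the real method.

```python
import inspect
from typing import Any, Callable, Dict, Optional


def generate_schema_with_retries(
    func: Callable[..., Any],
    generate: Callable[[str], Dict[str, Any]],   # source code -> candidate schema
    validate: Callable[[Dict[str, Any]], None],  # raises if the schema is invalid
    max_retries: int = 2,                        # the PR retries up to two times
) -> Dict[str, Any]:
    """Generate and validate an OpenAI tool schema from a function's source."""
    source = inspect.getsource(func)  # the function's source drives generation
    last_error: Optional[Exception] = None
    for _ in range(1 + max_retries):
        try:
            schema = generate(source)
            validate(schema)
            return schema
        except Exception as exc:  # retry on generation or validation failure
            last_error = exc
    raise ValueError(f"Could not produce a valid tool schema: {last_error}")
```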

Motivation and Context

https://github.com//issues/949

  • I have raised an issue to propose this change (required for new features and bug fixes)

Types of changes

What types of changes does your code introduce? Put an x in all the boxes that apply:

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds core functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update in the documentation)
  • Example (update in the folder of example)

Implemented Tasks

  • Uses an optional assistant model to generate the OpenAI tool schema for the specified function (see the usage sketch below).
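A minimal usage sketch of the opt-in behaviour (the `camel.toolkits.FunctionTool` import path is assumed; the `use_schema_assistant` flag, defaulting to `False`, is described in the commit notes further down):

```python
from camel.toolkits import FunctionTool


def add(a: int, b: int) -> int:
    """Add two numbers and return the sum."""
    return a + b


# Default behaviour: the schema is derived from the signature and docstring.
plain_tool = FunctionTool(add)

# Opt in to LLM-assisted schema generation introduced by this PR.
assisted_tool = FunctionTool(add, use_schema_assistant=True)
print(assisted_tool.openai_tool_schema)
```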

Checklist

Go over all the following points, and put an x in all the boxes that apply.
If you are unsure about any of these, don't hesitate to ask. We are here to help!

  • I have read the CONTRIBUTION guide. (required)
  • My change requires a change to the documentation.
  • I have updated the tests accordingly. (required for a bug fix or a new feature)
  • I have updated the documentation accordingly.

Adjusted the code to comply with PEP 8 line-length standards. Added the `use_schema_assistant` parameter in `FunctionTool` to control whether to use an LLM to generate the schema, with a default value of `False`.
@CaelumF
Collaborator

CaelumF commented Oct 18, 2024

My thought is that for direct calling (e.g. from another project that has camel as a dependency), the retries aren't appropriate because they are surprising; it would be better to throw an exception for the caller to handle as they like. An idea for the future is that we could have these tool implementations written in a way that is appropriate to expose as APIs, with wrappers that do things like automatically retrying certain types of exceptions when the tools are given to AI agents.

For now, of course, the AI agent use case's needs should be prioritised, I think.
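A hypothetical sketch of that wrapper idea (names are illustrative, not from the PR): the raw tool function raises as usual, and an agent-facing wrapper retries selected exception types before surfacing the error.

```python
import functools
from typing import Callable, Optional, Tuple, Type, TypeVar

T = TypeVar("T")


def retry_for_agents(
    exceptions: Tuple[Type[Exception], ...],
    attempts: int = 3,
) -> Callable[[Callable[..., T]], Callable[..., T]]:
    """Wrap a tool so selected transient failures are retried for agent use."""

    def decorator(func: Callable[..., T]) -> Callable[..., T]:
        @functools.wraps(func)
        def wrapper(*args: object, **kwargs: object) -> T:
            last_exc: Optional[Exception] = None
            for _ in range(attempts):
                try:
                    return func(*args, **kwargs)
                except exceptions as exc:
                    last_exc = exc
            assert last_exc is not None
            raise last_exc  # out of attempts: surface the original error

        return wrapper

    return decorator
```

Direct callers would import the undecorated function and handle exceptions themselves, while the decorated variant would be the one registered with AI agents.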

@lightaime (Member) left a comment

Thanks @Zhangzeyu97. The PR looks great!!

I left some minor comments and changed the validation of the tool schema, removing the validation of the description: #1085.

Review comments on camel/toolkits/function_tool.py and examples/tool_call/generate_openai_tool_schema_example.py (outdated; resolved).
) -> None:
    self.func = func
    # Use the caller-supplied schema, or derive one from the function's
    # signature and docstring.
    self.openai_tool_schema = openai_tool_schema or get_openai_tool_schema(
        func
    )

    if use_schema_assistant:
        try:
            # Check whether the derived schema passes validation.
            self.validate_openai_tool_schema(self.openai_tool_schema)
Member


The OpenAI tool call API actually supports descriptions as optional. I removed that check from the validation in #1085. So maybe we should remove the try/except and generate the schema directly if use_schema_assistant is set.
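A sketch of the suggested shape (hypothetical names; the real logic would live in the `FunctionTool` constructor): generate directly with the assistant when requested, otherwise fall back to the explicit or derived schema, with no try/except.

```python
from typing import Any, Callable, Dict, Optional


def resolve_tool_schema(
    func: Callable[..., Any],
    explicit_schema: Optional[Dict[str, Any]],
    use_schema_assistant: bool,
    derive_from_signature: Callable[[Callable[..., Any]], Dict[str, Any]],
    generate_with_assistant: Callable[[Callable[..., Any]], Dict[str, Any]],
) -> Dict[str, Any]:
    # No try/except fallback: use the assistant directly when asked to.
    if use_schema_assistant:
        return generate_with_assistant(func)
    return explicit_schema or derive_from_signature(func)
```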

Review comment on camel/toolkits/function_tool.py (outdated; resolved).
@lightaime
Member

> My thought is that for direct calling (e.g. from another project that has camel as a dependency), the retries aren't appropriate because they are surprising; it would be better to throw an exception for the caller to handle as they like. An idea for the future is that we could have these tool implementations written in a way that is appropriate to expose as APIs, with wrappers that do things like automatically retrying certain types of exceptions when the tools are given to AI agents.
>
> For now, of course, the AI agent use case's needs should be prioritised, I think.

Sorry, I do not fully get the point. Here the retries are for the docstring generation rather than for tool calling. What would be the issue?

@CaelumF
Collaborator

CaelumF commented Oct 19, 2024

> Sorry, I do not fully get the point. Here the retries are for the docstring generation rather than for tool calling. What would be the issue?

Yeah, you're right, this is fine actually.
