
attachment_behavior not working for accessing remote files #95

Open
frieda-huang opened this issue Oct 20, 2024 · 3 comments

@frieda-huang

Hi, I ran the following examples in agents, but it looks like there is a problem with accessing the attached file content when executing:

iterator = agent.execute_turn(
    [turn.message],
    turn.attachments,
)
  • podcast_transcript.py
  • rag_as_attachments.py
  • inflation.py

Output

For example, with podcast_transcript.py, I got the following output:

inference> I'd be happy to help you summarize the podcast transcript. 
Can you please provide me with the contents of the file at 
"/var/folders/2g/07kbk1350b98fd_msglwdr440000gn/T/tmp354ikrbz/CAMZO2Cjtranscript_shorter.txt"?

Inspecting further, it looks like the following code in `common/execute_with_custom_tools.py` is not returning the correct response.

response = self.client.agents.turn.create(
    agent_id=self.agent_id,
    session_id=self.session_id,
    messages=current_messages,
    attachments=attachments,
    stream=True,
)

I got the following in `response.response.content`:

Traceback (most recent call last):
  File "/Users/friedahuang/.vscode/extensions/ms-python.debugpy-2024.12.0-darwin-arm64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_resolver.py", line 193, in _get_py_dictionary
    attr = getattr(var, name)
  File "/opt/anaconda3/envs/llamastack-csye7230-searchagent-stack/lib/python3.10/site-packages/httpx/_models.py", line 572, in content
    raise ResponseNotRead()
httpx.ResponseNotRead: Attempted to access streaming response content, without having called `read()`.
@frieda-huang

frieda-huang commented Oct 21, 2024

The problem is due to the code here. I'm using pgvector as the memory bank provider, but it is not able to retrieve that context from upstream. A temporary workaround is to add `provider_id="remote::pgvector"` manually.

There is another problem in `PGVectorIndex(EmbeddingIndex)` here; it resulted in:

pgvector/pgvector.py", line 60, in __init__
    self.cursor.execute(
psycopg2.errors.SyntaxError: syntax error at or near "-"
LINE 2: ...LE IF NOT EXISTS vector_store_memory_bank_b2acc8c5-5722-415b...

It could be fixed by using `self.table_name = f"vector_store_{bank.identifier.replace('-', '_')}"`
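That syntax error is generic Postgres behavior: an unquoted SQL identifier cannot contain `-`, and UUID-style bank identifiers do. A quick sketch of the sanitization (the UUID below is an illustrative placeholder, not the one from the traceback):

```python
# Hypothetical bank identifier in UUID form; Postgres rejects unquoted
# identifiers containing '-', so CREATE TABLE vector_store_<uuid> fails
# with "syntax error at or near \"-\"".
bank_identifier = "b2acc8c5-1234-5678-9abc-def012345678"  # illustrative only

broken_name = f"vector_store_{bank_identifier}"                   # invalid unquoted identifier
fixed_name = f"vector_store_{bank_identifier.replace('-', '_')}"  # valid identifier

assert "-" not in fixed_name
```

An alternative would be to double-quote the identifier in the SQL, but replacing the hyphens keeps the table name usable without quoting everywhere.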

This is my yaml file:

version: '2'
built_at: '2024-10-14T22:51:08.838065'
image_name: csye7230-searchagent-stack
docker_image: null
conda_env: csye7230-searchagent-stack
apis:
- inference
- safety
- agents
- memory
- telemetry
providers:
  inference:
  - provider_id: remote::ollama
    provider_type: remote::ollama
    config:
      host: 127.0.0.1
      port: 11434
  safety:
  - provider_id: meta-reference
    provider_type: meta-reference
    config:
      llama_guard_shield: null
      enable_prompt_guard: false
  agents:
  - provider_id: meta-reference
    provider_type: meta-reference
    config:
      persistence_store:
        namespace: null
        type: sqlite
        db_path: /Users/friedahuang/.llama/runtime/kvstore.db
  memory:
  - provider_id: remote::pgvector
    provider_type: remote::pgvector
    config:
      host: 127.0.0.1
      port: 5432
      db: remote::pgvector
      user: llamastack_user
      password: PASSWORD
  telemetry:
  - provider_id: meta-reference
    provider_type: meta-reference
    config: {}

@ashwinb

ashwinb commented Oct 21, 2024

Uh oh, thanks for the helpful details. I will take a look at this today and add a test so we don't break these things again.

@yanxi0830

yanxi0830 commented Oct 21, 2024

Thanks for raising the issue! meta-llama/llama-stack#264 - This should fix the first issue.
