ERROR:mem0.memory.graph_memory:Error in search tool: 'entity_type' #2054

Open
zlht812 opened this issue Nov 26, 2024 · 2 comments

Comments


zlht812 commented Nov 26, 2024

🐛 Describe the bug

When replacing the LLM with a locally deployed mixtral:8x22b and executing m.add("I like pizza", user_id="alice123"), an error is reported: ERROR:mem0.memory.graph_memory:Error in search tool: 'entity_type'
TypeError Traceback (most recent call last)
Cell In[22], line 1
----> 1 m.add("I like pizza", user_id="alice123")

File ~/opt/anaconda3/envs/mem0/lib/python3.10/site-packages/mem0/memory/main.py:120, in Memory.add(self, messages, user_id, agent_id, run_id, metadata, filters, prompt)
117 concurrent.futures.wait([future1, future2])
119 vector_store_result = future1.result()
--> 120 graph_result = future2.result()
122 if self.api_version == "v1.1":
123 return {
124 "results": vector_store_result,
125 "relations": graph_result,
126 }

File ~/opt/anaconda3/envs/mem0/lib/python3.10/concurrent/futures/_base.py:451, in Future.result(self, timeout)
449 raise CancelledError()
450 elif self._state == FINISHED:
--> 451 return self.__get_result()
453 self._condition.wait(timeout)
455 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:

File ~/opt/anaconda3/envs/mem0/lib/python3.10/concurrent/futures/_base.py:403, in Future.__get_result(self)
401 if self._exception:
402 try:
--> 403 raise self._exception
404 finally:
405 # Break a reference cycle with the exception in self._exception
406 self = None

File ~/opt/anaconda3/envs/mem0/lib/python3.10/concurrent/futures/thread.py:58, in _WorkItem.run(self)
55 return
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)

File ~/opt/anaconda3/envs/mem0/lib/python3.10/site-packages/mem0/memory/main.py:252, in Memory._add_to_graph(self, messages, filters)
250 self.graph.user_id = "USER"
251 data = "\n".join([msg["content"] for msg in messages if "content" in msg and msg["role"] != "system"])
--> 252 added_entities = self.graph.add(data, filters)
254 return added_entities

File ~/opt/anaconda3/envs/mem0/lib/python3.10/site-packages/mem0/memory/graph_memory.py:69, in MemoryGraph.add(self, data, filters)
66 extracted_relations = self._extract_relations(data, filters, entity_type_map)
68 search_output_string = format_entities(search_output)
---> 69 extracted_relations_string = format_entities(extracted_relations)
70 update_memory_prompt = get_update_memory_messages(search_output_string, extracted_relations_string)
72 _tools = [UPDATE_MEMORY_TOOL_GRAPH, ADD_MEMORY_TOOL_GRAPH, NOOP_TOOL]

File ~/opt/anaconda3/envs/mem0/lib/python3.10/site-packages/mem0/memory/utils.py:27, in format_entities(entities)
25 formatted_lines = []
26 for entity in entities:
---> 27 simplified = f"{entity['source']} -- {entity['relation'].upper()} -- {entity['destination']}"
28 formatted_lines.append(simplified)
30 return "\n".join(formatted_lines)
TypeError: string indices must be integers
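
For context, a minimal config along these lines reproduces the setup (a sketch only: it assumes the local model is served through Ollama, and the base URL and Neo4j credentials below are placeholders, not values from this report):

from mem0 import Memory

config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "mixtral:8x22b",
            "ollama_base_url": "http://localhost:11434",  # placeholder local endpoint
        },
    },
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "bolt://localhost:7687",  # placeholder
            "username": "neo4j",
            "password": "your_password",     # placeholder
        },
    },
    "version": "v1.1",
}

m = Memory.from_config(config)
m.add("I like pizza", user_id="alice123")  # raises the TypeError traced above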


Cirr0e commented Nov 27, 2024

I can see that you're encountering an issue when trying to use Mixtral 8x22B with mem0's graph memory component. The error occurs because the graph memory component expects a specific structured output from the LLM that Mixtral isn't providing correctly.

From the codebase, I can see that the format_entities() function expects each entity to be a dictionary with 'source', 'relation', and 'destination' keys, but it seems Mixtral is returning strings instead.

Here's what's happening:

  1. The graph memory component uses LLMs to extract entities and relationships from text
  2. It expects the LLM to return structured data in a specific format
  3. The Mixtral model appears to be returning unstructured string output instead of the required JSON format (illustrated in the snippet below)
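
To make the mismatch concrete, format_entities() (see the traceback above) assumes each extracted relation is a dict with exactly these three keys; a bare string in its place is what triggers the TypeError (values here are illustrative):

# What format_entities() expects for each item:
entity = {"source": "alice123", "relation": "likes", "destination": "pizza"}
line = f"{entity['source']} -- {entity['relation'].upper()} -- {entity['destination']}"
# -> "alice123 -- LIKES -- pizza"

# What a non-structured model effectively hands back: a plain string.
# Indexing it as entity['source'] raises:
# TypeError: string indices must be integers
broken = "alice123 -- LIKES -- pizza"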

To fix this, you have two options:

  1. Use a supported structured model:
config = {
    "llm": {
        "provider": "openai_structured",  # or "azure_openai_structured"
        "config": {
            "model": "gpt-4o-mini",  # or another supported structured model
            "temperature": 0.1,
            "max_tokens": 2000
        }
    },
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "your_neo4j_url",
            "username": "neo4j",
            "password": "your_password"
        }
    },
    "version": "v1.1"
}
  2. If you must use Mixtral, you'll need to create a custom wrapper that formats its output to match the expected structure (a fuller sketch follows the stub below):
class MixtralWrapper:
    def generate_response(self, messages, tools):
        # Add formatting logic here to convert Mixtral's output
        # to the expected structure with source/relation/destination
        pass
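
One way to fill in that wrapper, as a rough sketch (nothing here is an official mem0 API; the split on "--" simply mirrors the relation format that format_entities() produces):

import json

def coerce_relation(item):
    """Best-effort conversion of one model output item into the
    {source, relation, destination} dict that format_entities() expects."""
    if isinstance(item, dict):
        return item  # already structured
    text = str(item).strip()
    # Some models emit JSON as plain text instead of a proper tool call.
    try:
        parsed = json.loads(text)
        if isinstance(parsed, dict):
            return parsed
    except json.JSONDecodeError:
        pass
    # Fall back to a "source -- RELATION -- destination" line.
    parts = [p.strip() for p in text.split("--")]
    if len(parts) == 3:
        return {"source": parts[0], "relation": parts[1].lower(), "destination": parts[2]}
    return None  # caller should drop items that cannot be coerced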

References:

  1. From the codebase (mem0/memory/graph_memory.py): The graph memory component expects structured output with specific tools and formats
  2. From issue #1822 ("[graph_memory]: The current tools does not support non-structured models."): a similar report with non-structured models, confirming this is a known limitation

Important notes:

  • The graph memory feature requires structured output from the LLM
  • Not all models support the required structured output format
  • Using unsupported models may lead to format compatibility issues

Let me know if you'd like to try either of these solutions or if you need help implementing a custom wrapper for Mixtral.

zlht812 (Author) commented Nov 27, 2024

Thank you for your response. If possible, we hope you can add support for mixtral:8x22b and qwen2.5:72b.
