Option to remove navigation/embodiment requirements for a task #4

Open
PeterAJansen opened this issue Aug 19, 2024 · 3 comments
Labels: enhancement (New feature or request)

@PeterAJansen
Collaborator

We've had an external feature request asking whether it's possible to remove the navigation/embodiment (e.g. object manipulation) aspects of tasks by setting a flag (e.g. embodiment = false), to try to distill the scientific discovery aspects of the tasks from other skills (even more so than the unit tests do).

Adding this as a feature request so we can start a thread on how this might be accomplished, since there are a number of implementation routes/challenges:

  • Navigation: Currently we have 'teleport_to_location' and 'teleport_to_object' actions that greatly reduce the navigation needs of an agent. What might it look like to have zero navigation needs, i.e. not needing to move, search for items, etc? Would this look like, e.g., being able to see and manipulate all objects in the environment, regardless of the agent's current location (i.e. an omniscient agent)? If so, this changes the model (e.g. it's no longer a partially-observable environment).
  • Object manipulation: It's not immediately clear to me how to remove these requirements -- e.g. picking up objects that you need, etc. -- unless we also make the agent able to interact with every object regardless of its location (i.e. no checks for whether the objects that an action is performed on are accessible), which might be a simple change (a rough sketch of what that might look like is below).
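For concreteness, here is one hypothetical shape this could take. This is only a sketch; DisembodiedEnv, is_accessible, apply_action, and action_failed are illustrative names and are not part of the current DiscoveryWorld API.

```python
# Hypothetical sketch only -- the names below (DisembodiedEnv, is_accessible,
# apply_action, action_failed) are illustrative, not the DiscoveryWorld API.

class DisembodiedEnv:
    """Wraps a base environment so that an embodiment=False flag disables the
    accessibility check normally performed before an action is applied."""

    def __init__(self, base_env, embodiment: bool = True):
        self.base_env = base_env
        self.embodiment = embodiment

    def step(self, action, target_object):
        if self.embodiment and not self.base_env.is_accessible(target_object):
            # Embodied behaviour: the action fails if the object is out of
            # reach, so the agent must first navigate/teleport to it.
            return self.base_env.action_failed("object not accessible")
        # With embodiment disabled, every object is treated as reachable,
        # which removes the navigation and pick-up requirements.
        return self.base_env.apply_action(action, target_object)
```

Whether a change like this preserves the discovery difficulty of the tasks is exactly one of the challenges listed below.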

Some of the challenges here are:

  • Would these be faithful tests of removing navigation/object manipulation requirements, while maintaining discovery task difficulty? If not, what might alternate modifications be?
  • Observation: If the agent is omniscient and able to view many more(/all) objects in the environment, the observation returned by the environment could become very large if it has to enumerate ~1000 objects instead of only the objects within a short distance of the agent. That would substantially increase the load on an LLM/other agent model.
  • There are tasks that have steps that are spatial in nature (e.g. rocket science), and it's not clear how these modifications would translate.
  • How do we maintain a faithful representation of the visual output if the agent isn't moving / can manipulate everything? Possibly teleport the camera to the last object that the agent interacted with, and use that as the visual observation (a rough sketch is below).
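For that last point, a rough sketch of the "center the view on the last interacted object" idea. Again, render_centered_on and last_interacted_object are illustrative names, not the actual DiscoveryWorld rendering API.

```python
# Hypothetical sketch -- render_centered_on() and last_interacted_object are
# illustrative names, not the actual DiscoveryWorld rendering API.

def get_visual_observation(env, agent):
    """Center the rendered frame on the most recently manipulated object, so
    the visual channel stays meaningful even if the agent never moves."""
    anchor = agent.last_interacted_object
    if anchor is None:
        # Before the first interaction, fall back to the agent's own position.
        anchor = agent
    return env.render_centered_on(anchor.position)
```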
PeterAJansen added the enhancement label on Aug 19, 2024
@Dandelionym

Dear author,

I really appreciate this excellent work! However, I hope the following feedback is valuable.

  1. The benchmark offers many actions for an agent to take, but actions such as moving left and right seem meaningless (or at least not a good setting, in my view).
  2. An LLM with a long prompt and too much prompt engineering generally fails to finish the game.
  3. It is hard to plug in a novel algorithm when there is no clear API documentation or set of examples.
  4. The provided random agent is useful, but it is not straightforward to adapt it to other cases, including the prompting needed for location transitions (as in the issue above).
  5. I hope the authors can release a code example and modify the codebase to remove the location-transition requirement. That would be helpful.

Thank you all for this excellent work. :-)

@MarcCote
Collaborator

@Dandelionym thanks for sharing additional feedback. Do you have any ideas on how best to remove the navigation actions while still making sense in a multi-modal environment?

Even for pure text-based games (see ScienceWorld and TextWorld), a minimum of spatial navigation is needed. If we completely abstract it away, it means all objects can be interacted with at all times, i.e. we remove a big chunk of the partial observability.

@PeterAJansen
Collaborator Author

@Dandelionym thanks again -- as an additional follow-up re: API documentation and agents, do you have specific questions about the API documentation (https://github.com/allenai/discoveryworld , Section 3: "API Documentation"), or about the agents other than the random agent (e.g. the LLM agent pseudocode scaffold in the API documentation, or the agents from the paper whose code is included in this repository at https://github.com/allenai/discoveryworld/tree/main/agents ), that we can help with?
