This repository has been archived by the owner on Apr 18, 2024. It is now read-only.
API documentation #12
Hi!
I couldn't find documentation for the API. Does it exist?
Also, the README.md states the API is similar to OpenAI Gym. Would someone please share why OpenAI Gym was not sufficient for this project?
Thanks!
Comments
Hi,
Unfortunately, what's here is what there is. The example agents hopefully give some insight into the API.
As to your other question, I'm not exactly sure what it's asking -- OpenAI Gym doesn't provide a Hanabi environment per se. AFAIK Gym also doesn't support multiplayer games.
Best,
- Marc
|
To the first question I would just add: as a starting point, check out game_example.cc or game_example.py. They are simple and demonstrate the core methods.
Also, pyhanabi.py has docstrings for most of the non-obvious methods.
Is there anything in particular that is unclear?
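Roughly, the core loop in game_example.py looks something like the sketch below (reconstructed from memory, so treat the exact names as approximate and the checked-in file as authoritative):

import random
import pyhanabi  # import path depends on how the library was built/installed

def run_random_game():
    # Game parameters are passed as a dict, as in game_example.py.
    game = pyhanabi.HanabiGame({"players": 2, "random_start_player": True})
    state = game.new_initial_state()
    while not state.is_terminal():
        if state.cur_player() == pyhanabi.CHANCE_PLAYER_ID:
            # Card deals are chance events that must be resolved explicitly.
            state.deal_random_card()
            continue
        # Only moves legal for the current player are returned here.
        move = random.choice(state.legal_moves())
        state.apply_move(move)
    print("Final score:", state.score())

if __name__ == "__main__":
    run_random_game()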
|
Thanks for your explanations. It wasn't clear to me where to start, so the guidance to begin with game_example.cc or game_example.py is very helpful. WRT my second question, the Hanabi Learning Environment has potential to be used as a template or gold standard for developing learning environments for other games. I was hoping to get a sense of whether this is encouraged, especially for multiplayer games. |
No problem, maybe we can add a pointer to those files in the README.
On the comment to the second question: I think this code base will continue to focus on Hanabi, but yes, I agree that it seems like a good way to do multiagent games generally. To the best of my knowledge, as Marc said, part of the problem with gym compatibility is limited support for multiagent games. For example, the game environments, like Blackjack and (previously?) Hex, assume specific policies for the opponents. From my experience, to do MARL in games you need two things: (i) the environment needs to handle the specific use case of multiple decision-makers, and (ii) the algorithms/agents need specific support for this as well. For example, in (i) the environment may have to handle turn-based games and simultaneous games (like gridworlds) quite differently. For games specifically, in (ii) you need the learning algorithms to handle things like only a subset of the actions being legal, with the rest illegal.
It would be great to have gym environments for multiagent games, but AFAICT these features do not exist yet. People have been talking about adding support for it, though: see openai/gym#934.
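To make (ii) a bit more concrete, here is a purely hypothetical sketch (not code from this repository; the helper name and observation layout are made up) of how an agent could mask out illegal actions when forming its policy in a turn-based game:

import numpy as np

def masked_policy(action_logits, legal_actions, num_actions):
    # Restrict a softmax policy to the legal actions by forcing illegal
    # logits to -inf before normalizing.
    mask = np.full(num_actions, -np.inf)
    mask[legal_actions] = 0.0
    logits = action_logits + mask
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

# Example: 5 actions total, but only actions 0, 2, and 4 are legal this turn.
logits = np.array([0.1, 2.0, -0.5, 1.5, 0.3])
print(masked_policy(logits, legal_actions=[0, 2, 4], num_actions=5))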
|
Does anybody understand what the output of train.py means? |