Multi-Agent RL #648
Comments
Thanks! Personally, I know nothing about reinforcement learning.
Well, the best you can do is make an integration example like we do with DiffEq, BlackBox, etc., and then open a PR in Agents.jl docs!
This statement was made without any reasoning as to why this would be hard. The integration examples with DiffEq, BlackBox, and practically any other package were as straightforward as they could be. What is so special here?
Hm, after looking at the Wikipedia page for Reinforcement Learning https://en.wikipedia.org/wiki/Reinforcement_learning it seems to me that the process described there can already be done with Agents.jl without writing any new code...? What is missing that you couldn't do out of the box in Agents.jl? Can you be specific about the actual difficulties you encountered while trying to model reinforcement learning in Agents.jl? (Agents.jl also supports multi-agent environments)
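For reference, the basic loop described on that Wikipedia page can indeed be written directly inside an Agents.jl stepping function. Below is a minimal sketch of tabular Q-learning, where `observe`, `act!`, and `get_reward` are hypothetical, model-specific helpers, and the Q-table and hyperparameters are assumed to live in the model properties:

```julia
using Agents

# Sketch: tabular Q-learning inside an Agents.jl agent stepping function.
# `observe`, `act!`, and `get_reward` are hypothetical, model-specific helpers;
# `model.Q` is a Dict{Tuple{Int,Int},Float64}, and ε, α, γ, and the discrete
# action set `model.actions` are assumed model properties.
function rl_agent_step!(agent, model)
    Q = model.Q
    s = observe(agent, model)              # encode the agent's local state as an Int
    # ε-greedy selection over the discrete action set
    a = rand() < model.ε ? rand(model.actions) :
        argmax(a -> get(Q, (s, a), 0.0), model.actions)
    act!(agent, model, a)                  # apply the chosen action to the ABM
    r = get_reward(agent, model)           # model-specific reward signal
    s′ = observe(agent, model)
    # standard Q-learning update toward the best next-state value
    best = maximum(a′ -> get(Q, (s′, a′), 0.0), model.actions)
    Q[(s, a)] = get(Q, (s, a), 0.0) + model.α * (r + model.γ * best - get(Q, (s, a), 0.0))
end
```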
That is a fair criticism; therefore, I will add some context. Before writing this issue, I started working on integrating my own model with ReinforcementLearning.jl, as mentioned above.
Below is an example of the methods you would need to implement in order to interface with ReinforcementLearning.jl's RLBase environment API:
```julia
# Requires Agents.jl and ReinforcementLearning.jl (which exports the RLBase
# interface used below).
using Agents
using ReinforcementLearning

mutable struct ABMEnv{T} <: AbstractEnv
    abm::ABM
    reward::T
end
ABMEnv(abm::ABM) = ABMEnv(abm, 0.0)

# Observed state, for the whole environment and per player:
function RLBase.state(env::ABMEnv, ::Observation{Int}, player::Nothing) end
function RLBase.state(env::ABMEnv, ::Observation{Int}, player::Int) end

# Reward accumulated for a given player:
function RLBase.reward(env::ABMEnv, player::Int)
    return env.reward
end

# Apply `action` for `player` and advance the underlying ABM:
function (env::ABMEnv)(action, player::Int) end

# Action and state spaces, for the whole environment and per player:
function RLBase.action_space(env::ABMEnv, player::Nothing) end
function RLBase.action_space(env::ABMEnv, player::Int) end
function RLBase.state_space(env::ABMEnv, ::Observation{Int}, player::Nothing) end
function RLBase.state_space(env::ABMEnv, ::Observation{Int}, player::Int) end

# Episode termination, player bookkeeping, and resetting:
function RLBase.is_terminated(env::ABMEnv) end
function RLBase.is_terminated(env::ABMEnv, player::Int) end
function RLBase.players(env::ABMEnv) end
function RLBase.current_player(env::ABMEnv) end
function RLBase.reset!(env::ABMEnv) end

# Environment traits; one player per ABM agent:
RLBase.NumAgentStyle(env::ABMEnv) = MultiAgent(nagents(env.abm))
RLBase.DynamicStyle(::ABMEnv) = SEQUENTIAL
RLBase.ActionStyle(::ABMEnv) = MINIMAL_ACTION_SET
RLBase.InformationStyle(::ABMEnv) = IMPERFECT_INFORMATION
RLBase.StateStyle(::ABMEnv) = Observation{Int}()
RLBase.RewardStyle(::ABMEnv) = TERMINAL_REWARD
RLBase.UtilityStyle(::ABMEnv) = IDENTICAL_UTILITY
RLBase.ChanceStyle(::ABMEnv) = DETERMINISTIC
```
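To make the skeleton slightly more concrete, here is a hedged sketch of how two of the stubs could be filled in. `apply_action!` and `population_score` are hypothetical, model-specific helpers, `agent_step!`/`model_step!` are the model's own stepping functions, and the stepping call assumes the pre-v6 Agents.jl API where stepping functions are passed explicitly:

```julia
# Sketch only: `apply_action!` and `population_score` are hypothetical helpers.
function (env::ABMEnv)(action, player::Int)
    apply_action!(env.abm[player], env.abm, action)     # mutate the acting agent
    Agents.step!(env.abm, agent_step!, model_step!, 1)  # advance the ABM one step
    env.reward = population_score(env.abm)              # reward from a macroscopic observable
    return nothing
end

# One natural terminal condition: the episode ends when no agents remain.
RLBase.is_terminated(env::ABMEnv) = nagents(env.abm) == 0
```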
I am not sure we are on the same page yet, unfortunately. Let's first take a step back, because I am not working in your field and therefore I don't know why you need reinforcement learning. In the DiffEq/BlackBox integration examples it is easy for me to understand why interplay with a different package is required: Agents.jl can't solve ODEs or minimize functions. For the case of ReinforcementLearning.jl, what are the specific things you need it for? I am asking because, out of my ignorance, so far it seems to me that ReinforcementLearning.jl is an alternative way to do agent based simulations. So I am trying to figure out why, or how, one would combine two alternative ways of doing the same thing.
That's another point on which I think we are not on the same page. So far it seems to me that the problem is that all of these methods depend on the specific scientific problem you are trying to simulate. How would you define them generally? Isn't the reward, for example, problem-specific as well?
In any case, to be clear: if you can come up with a low-dependency way to establish the interfacing you need between these two packages, it is welcome in Agents.jl as a submodule. I don't have to understand your field to welcome such an addition :D
A typical example is here: https://github.com/Farama-Foundation/MAgent. And I believe Agents.jl is flexible enough to create such environments.
Reinforcement learning is about how agents learn to take actions so as to maximize their reward. So it's not about studying how agents with a fixed set of rules behave, but about setting some goals and letting the agents figure out how to achieve them. This is kind of intriguing, because you could tie the reward signal of the agents to some macroscopic behavior and then let the agents learn to behave microscopically in such a way as to get the macroscopic behavior right.
For reference, on the type of problems that could utilize such an approach (multi-agent reinforcement learning), in the Python world there is, e.g., PettingZoo.
@Datseris, taking PettingZoo as a reference, what's your view on Agents.jl's suitability for such a use case?
I don't have any background in reinforcement learning, so I am not qualified to answer this at the moment. I would have to go through the repositories in detail, and unfortunately I do not have the capacity to do this right now. User @findmyway claimed that Agents.jl is suitable for such tasks; perhaps they can expand on this.
I know a decent amount of reinforcement learning, and I think @simsurace gives a good example of how to use reinforcement learning in the ABM world. I really think the way to go is to interface with ReinforcementLearning.jl, as @mplemay suggested. One way would be to take one of the standard models, such as WolfSheep, and use reinforcement learning to set global goals for the sheep and wolves. It would be very cool. If someone has the time to work on this, I will take the time to review the integration, because I don't really have time to work on it myself at the moment.
E.g., this is a good paper, in my opinion, for more details: https://www.nature.com/articles/s41598-020-68447-8
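To make the WolfSheep idea concrete, here is a minimal sketch of a reward tied to a macroscopic goal (stable coexistence of both populations). `Sheep` and `Wolf` are the agent types from the standard predator-prey example, and the population targets are arbitrary assumptions:

```julia
using Agents

# Count how many agents of a given species are alive in the model.
count_species(model, Species) = count(a -> a isa Species, allagents(model))

# Reward both species for keeping the populations near target sizes, so that
# a learned microscopic policy is pushed toward macroscopic coexistence.
# The target values are arbitrary assumptions for illustration.
function population_reward(model; sheep_target = 100, wolf_target = 20)
    -abs(count_species(model, Sheep) - sheep_target) -
        abs(count_species(model, Wolf) - wolf_target)
end
```

Such a reward could be plugged into the `ABMEnv` skeleton above, e.g. assigned to `env.reward` after each model step.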
@Tortar shall we close this issue? Since ReinforcementLearning.jl exists, and is better suited for the simulation scenario under discussion, is there any point in leaving this open? This isn't an integration example, nor a request for one. In fact, I don't even know if there is a request in this issue anymore.
There is surely a lot that can be done via integration; e.g., there was a GSoC project this year in Mesa for an RL integration with that framework: https://github.com/harshmahesheka/mesa_rl
and they added those examples to their examples library: https://github.com/projectmesa/mesa-examples/tree/main/rl
TBH, it would be great to have an example, but while I know my fair share about ABM, I am a newbie with RL, so I'm not sure I can bring anything to the discussion beyond "another user is interested"!
**Is your feature request related to a problem? Please describe.**
First off, I would like to thank you for building and maintaining an amazing project! One feature I would be interested in adding/contributing is a functionality/pathway for building RL agents with the `Agents.jl` framework, which was mentioned in passing by another individual here.

**Describe the solution you'd like**
For casual users of `Agents.jl` it can be daunting to build reinforcement learning agents. If it is within the scope of the project, I think it would be valuable to provide some guidance or tools for creating RL agents. Depending on the desired scope, there are many ways to go about this.

**Describe alternatives you've considered**
Below is a short list of options I have considered. If it makes sense to explore any of these options, it may be worth a more in-depth study.

- … (e.g. `ReinforcementLearningAgents.jl`) within the `Agents.jl`/JuliaDynamics ecosystem
- … `Agents.jl` …
- … `Agents.jl` popularity

Please let me know if there is anything I could do to provide more clarification or insight.