Different questions about Thespian and its functionalities #77
I don't mind multiple questions, although it might get somewhat confusing to have multiple threads of responses, so let us both feel free to split a question into a separate issue if things get confusing here.
1: Keeping an actor alive
There are a couple of ways you could do this:
2: No ask for the Actor class
The reason there is no ask is that ask is a blocking operation. At the fundamental level, an actor should:
If the message received in step 1 cannot be handled in that single invocation, but must get a response from another actor to continue the processing, it should arrange to resume processing of the request when that response is received; it should not block waiting for that response, because that prevents it from handling other messages.
To do this, you can either (a) store the original message internally to the actor, where it can be retrieved and processing can continue when the other actor's response is received, or (b) attach the original message to the request to the other actor and ensure that the other actor returns that original message as an attachment to the response.
In general, (b) is the preferred mechanism, because it keeps the state management in the messages and not in the actors themselves; this is important because if an actor dies and is restarted, it can simply resume processing messages and doesn't require re-initialization. Method (b) is also preferred because if the second actor never responds for some reason, the original request is not still consuming resources in the first actor (although an alternative is to use a
In short, the "await" technique is a mechanism used for process-based or thread-based synchronization, whereas an actor-based approach does not use those types of mechanisms and is fundamentally oriented towards a receive-respond message-handling technique. You will need to deal with "message/response hell", but you will never need to deal with synchronization and deadlocks. |
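The technique-(b) flow described above can be sketched as follows. This is a minimal synchronous stand-in for the Thespian runtime (real actors would subclass thespian.actors.Actor and be created via self.createActor); the System, FrontActor, BankActor, and BankQuery names are illustrative, not Thespian API, while BankAccountData and AskForBankAccountData reuse the names from the question in this thread.

```python
# Sketch of technique (b): the original request rides along with the
# inter-actor messages, so the front actor keeps no per-request state.
from dataclasses import dataclass

@dataclass
class AskForBankAccountData:   # client -> front actor
    client_number: int

@dataclass
class BankQuery:               # front actor -> bank actor (illustrative name)
    client_number: int
    original: object           # the request this query will eventually answer
    reply_to: str

@dataclass
class BankAccountData:         # bank actor -> front actor
    balance: float
    original: object           # the original request, echoed back unchanged

class System:
    """Stand-in runtime: delivers messages immediately instead of via mailboxes."""
    def __init__(self):
        self.actors = {}
    def send(self, name, msg, sender):
        self.actors[name].receiveMessage(msg, sender)

class FrontActor:
    def __init__(self, system):
        self.system = system
        self.replies = []
    def receiveMessage(self, msg, sender):
        if isinstance(msg, AskForBankAccountData):
            # (b): attach the original message instead of storing it locally
            self.system.send('bank', BankQuery(msg.client_number, msg, 'front'), 'front')
        elif isinstance(msg, BankAccountData):
            # resume processing of msg.original now that the answer has arrived
            self.replies.append((msg.original.client_number, msg.balance))

class BankActor:
    def __init__(self, system):
        self.system = system
    def receiveMessage(self, msg, sender):
        if isinstance(msg, BankQuery):
            # return the original request as an attachment to the response
            self.system.send(msg.reply_to, BankAccountData(100.0, msg.original), 'bank')

sys_ = System()
sys_.actors = {'front': FrontActor(sys_), 'bank': BankActor(sys_)}
sys_.send('front', AskForBankAccountData(42), 'client')
```

Note that FrontActor never records the pending request in its own state: if it were killed and restarted between the two messages, the in-flight BankAccountData still carries everything needed to finish the work.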
hi, thanks for the answers. I will look into the Thespian Director documentation; it looks very promising. This part is not correct:
it should be:
|
Is there any possibility to start the actor system in the foreground? I am building an application where there is one main "manager" node and an undefined number of worker nodes. Each worker node is a docker image with Thespian inside. I need to keep the docker container alive, and it would be best to have the main actor process: |
Do I correctly understand this scenario?
I have one main Thespian node running in a docker container. Thespian inside of it is started by:
I have several Thespian worker nodes running in their own docker containers. Thespian in each of them is run by:
All of the nodes have joined a convention with the main Thespian node as the leader.
If I prepare a python package with some actor implementations and preprocess it with the command "gensrc" |
Thank you for catching the issue in the director documentation. Just to be sure I didn't miss anything: the main issue was the missing comma between the handlers and filters sections? Currently there is no provision to run the actor system as the main process; I would recommend just using a dummy action as your main container process. The scenario you describe is correct: the loaded sources will automatically be transferred between ActorSystems running in different containers on an as-needed basis to satisfy |
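The "dummy action as your main container process" suggestion can be sketched as a simple keep-alive loop. Here a threading.Event stands in for a termination signal (e.g. SIGTERM) so the sketch actually returns; the ActorSystem start/stop calls are shown only as comments, assuming the standard multiprocTCPBase system base.

```python
# Sketch: keep a docker container's main (foreground) process alive while the
# actor system runs in background processes.
import threading
import time

def keep_alive(stop_event: threading.Event, poll_secs: float = 0.01) -> int:
    # In a real entrypoint you would first start the system, e.g.:
    #   from thespian.actors import ActorSystem
    #   asys = ActorSystem('multiprocTCPBase')
    ticks = 0
    while not stop_event.is_set():   # the "dummy" foreground loop
        time.sleep(poll_secs)
        ticks += 1
    # ... and on the way out:  asys.shutdown()
    return ticks

stop = threading.Event()
threading.Timer(0.05, stop.set).start()   # simulate a shutdown signal after 50 ms
ticks = keep_alive(stop)
```

In a container, the while loop is what keeps PID 1 from exiting; wiring stop_event to a SIGTERM handler gives the container a clean shutdown path.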
hi, I see the following statement in the documentation:
Can I ask for the same behaviour, but for other config files?
and after starting the actor system I see:
so either eval is not applied here, or this command shows just the plain text of the loaded configuration
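To illustrate the question: if config values are eval'd as Python expressions at load time (as this thread later concludes they are for Director config files), then a tool that prints the raw file shows the expression while the running system uses its result. A toy stdlib sketch, where load_value is a hypothetical helper, not Director API:

```python
# Toy illustration (not Director API): the config file displays the raw
# expression; the system uses the evaluated value.
import os

def load_value(raw: str):
    return eval(raw)   # Director-style: treat the config value as Python

os.environ["ADMIN_PORT"] = "1900"          # value injected via the environment
raw = "int(os.environ['ADMIN_PORT'])"      # what a plain dump of the file shows
port = load_value(raw)                     # what the system actually uses
```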
The |
yes, exactly ;-) I saw this after I finally was able to run two nodes in docker containers and connect them in a convention:
so I suggest updating the documentation to emphasize the fact that config files are also evaluated ;-) It was not clear to me after reading the current docs. OK, so now I have a big success:
and a very, very minor problem:
I don't know where those lines come from. OK, maybe I do know that they come from the Thespian Director, but I don't know why ;-) And the documentation needs one more correction. Here is the final and working logging configuration for the Thespian Director:
What did I have to change additionally to make this work?
|
I have a question:
is the value of this interval somehow configurable? I would like to lower it to 5 minutes |
I think the
And you have my continued appreciation and apologies for the bad logging example in the director documentation: my guess is that perhaps I hand-copied it (badly!) from somewhere and introduced several syntax errors in the process. The upcoming release will include all your fixes.
If the convention member starts after the convention leader, there will be a delay. This is not currently configurable, although that's not unreasonable to consider. The current timing values are managed here: https://github.com/kquick/Thespian/blob/master/thespian/system/admin/convention.py#L20-L23. The current functionality is designed to be relatively conservative to avoid excessive network load during convention startup/shutdown; the assumption is that the convention can be quite large (these values were tested in a configuration with ~10,000 nodes) and that the convention tends to be long-running and reasonably well established prior to use (as opposed to your use case, which is a fast startup-and-join scenario). Please feel free to experiment with changing the above values in your local copy; we can then determine the best way to make these configurable if the modifications provide the behavior you are looking for. |
I've released version 3.10.5; let me know if that still has unusual output about log transmit records. |
It looks like this version has resolved that problem. But I have another one ;-) Below you can find output from an example project I have published here:
Of course I do not stop my worker node; it is still working. So why, every 10 minutes, does it seem to leave the convention and immediately join it once again? :
@kquick, could you run my example project and confirm that you can observe the same behaviour? This behaviour exists in both version 3.10.5 and 3.10.4 |
hi,
I hope you do not mind this type of task: collecting all my questions in one thread.
so:
Looking into the documentation, I see that named actors do not have parents, and therefore there is no actor that could receive the "ChildActorExited" message. But I guess that the actor system itself is receiving such a notification?
So i am looking for something like this:
ActorSystem().createActor(actor_class, globalName=attr, keep_alive=True)
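Absent a built-in keep_alive flag, the usual actor-model answer is a supervisor: a parent that re-creates its child whenever it is notified of the child's exit. A pure-stdlib sketch of that pattern follows; in real Thespian the parent's receiveMessage would match thespian.actors.ChildActorExited and call self.createActor again, while ChildExited and Supervisor here are stand-ins.

```python
# Pure-stdlib sketch of a keep-alive supervisor (stand-in for the Thespian
# pattern of handling ChildActorExited by re-creating the child).
class ChildExited:
    """Stand-in for thespian.actors.ChildActorExited."""
    def __init__(self, child_addr):
        self.child_addr = child_addr

class Supervisor:
    def __init__(self, child_factory):
        self.child_factory = child_factory
        self.child = child_factory()      # create the initial child
        self.restarts = 0

    def receiveMessage(self, msg, sender=None):
        if isinstance(msg, ChildExited):
            # the wished-for "keep_alive" behaviour: just make a new child
            self.child = self.child_factory()
            self.restarts += 1

sup = Supervisor(object)
sup.receiveMessage(ChildExited(sup.child))
```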
There is an actor system method "ask" but no corresponding method on an actor class. What is the reason for this?
Without that, what should normal processing like this look like:
Without something like "await", are we going to end up in some kind of callback hell? (or a message/response hell? ;-))
or is there something like:
self.await(
    BankAccountData,
    self.send(BankActorAddress, AskForBankAccountData(client_number))
)
self.await would stop the actor's processing until the actor receives a message of type BankAccountData from BankActorAddress