
Fix for client listener threads async callback deadlock #55

Open · wants to merge 1 commit into master

Conversation

robert-warmka

There is currently a potential deadlock in the Kinetic C Client, stemming from box_execute_cb. At the moment, the client's listener threads execute user-defined callback functions directly, and only a limited number of listener threads are spawned when the C Client starts (currently 4).

Here is the failure mode: if user callbacks block all listener threads and a synchronous Kinetic command is then issued, the client deadlocks.

An example of how this can happen: a user writes a callback that locks a mutex protecting a shared resource and then PUTs that resource to the connected Kinetic drive. If one listener thread holds the mutex and the others are blocked waiting for it, only one listener thread is still active. When that last active thread issues the synchronous PUT, it blocks on line 86 of KineticController_ExecuteOperation in kinetic_controller.c. The PUT is sent, but its response is never read because every listener thread is blocked, hence deadlock. A sketch of the pattern follows below.
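
To make the scenario concrete, here is a minimal sketch of that callback pattern. The callback signature, `on_completion`, and `blocking_put` are hypothetical stand-ins, not the actual kinetic-c API; the point is only that user code holding a mutex issues a synchronous operation from a listener thread:

```c
#include <pthread.h>

static pthread_mutex_t shared_resource_lock = PTHREAD_MUTEX_INITIALIZER;

/* Hypothetical synchronous PUT; in kinetic-c this would block inside
 * KineticController_ExecuteOperation until a listener thread reads the
 * response message. */
extern void blocking_put(const void *data);

/* User-defined completion callback, currently run on a listener thread. */
static void on_completion(void *client_data)
{
    pthread_mutex_lock(&shared_resource_lock);  /* 3 of 4 listeners park here   */
    blocking_put(client_data);                  /* 4th listener blocks waiting
                                                   for a response that no free
                                                   listener is left to read     */
    pthread_mutex_unlock(&shared_resource_lock);
}
```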

My fix is to spawn a new thread for each user callback. It is a small design change, but it relieves the listener threads of the responsibility of executing arbitrary user callback code, letting them do what they were created to do: handle incoming Kinetic messages.
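
For reference, a minimal sketch of the idea, with the listener handing the callback off to a short-lived detached thread instead of running it inline from box_execute_cb. Everything except box_execute_cb is a hypothetical name used only for illustration, not code from this PR:

```c
#include <pthread.h>
#include <stdlib.h>

typedef void (*user_cb)(void *arg);

struct cb_job {
    user_cb cb;
    void   *arg;
};

static void *run_user_cb(void *p)
{
    struct cb_job *job = p;
    job->cb(job->arg);        /* arbitrary user code runs off the listener thread */
    free(job);
    return NULL;
}

/* Called from the listener thread in place of a direct cb(arg) invocation. */
static int dispatch_user_cb(user_cb cb, void *arg)
{
    struct cb_job *job = malloc(sizeof *job);
    if (job == NULL) return -1;
    job->cb = cb;
    job->arg = arg;

    pthread_t tid;
    int rc = pthread_create(&tid, NULL, run_user_cb, job);
    if (rc != 0) {
        free(job);
        return rc;
    }
    pthread_detach(tid);      /* no join needed; the thread cleans up on exit */
    return 0;
}
```

With this, even if every user callback blocks, the listener threads stay free to read incoming responses, so a synchronous command issued from inside a callback can still complete.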
