You have 24 workers, so why do you expect fewer than that number of processes?
I don't really see any proof of a memory leak. Peak memory use is not the same thing as a leak.
You have 24 workers and a prefetch value of 24, which means you can have up to 24 * 24 = 576 messages delivered and unacknowledged at any given moment.
Depending on message size, that backlog can have a massive effect on the amount of memory used.
Perhaps try using fewer workers, and if your messages can be large (say, in megabytes), a lower prefetch of 8-16.
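A minimal sketch of what that tuning could look like in a Sneakers configuration (the option names are real Sneakers settings; the URL and the exact numbers are illustrative assumptions, not a universal recommendation):

```ruby
require 'sneakers'

Sneakers.configure(
  amqp: 'amqp://guest:guest@localhost:5672', # placeholder connection URL
  workers: 8,   # fewer worker processes than the original 24
  threads: 1,   # one consumer thread per worker
  prefetch: 10, # caps in-flight messages at workers * prefetch = 80
  ack: true     # messages stay unacked until the worker calls ack!/reject!
)
```

With 8 workers and a prefetch of 10, the ceiling on delivered-but-unacknowledged messages drops from 576 to 80.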
@michaelklishin thanks for your reply!
I've added monitoring with Node Exporter and Prometheus; here are some metrics:
To elaborate on the screenshot: we rebooted the sneakers Docker container at around 19:00, and it had some minor tasks to process. Then, just past 00:00, a major workload ran for roughly 45 minutes. After it finished, the memory was not released back.
Could you please explain how a Worker class instance handles tasks? Is it destroyed after its messages are acked/nacked? If not, it looks like we should avoid storing any data in those workers.
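For what it's worth, my understanding (an assumption on my part, not something confirmed by the maintainers in this thread) is that Sneakers reuses long-lived worker instances rather than destroying them per message, so anything stored in instance variables sticks around. A minimal sketch of a worker that keeps all per-message state in locals:

```ruby
require 'sneakers'
require 'json'

class TaskWorker # class and queue names are placeholders
  include Sneakers::Worker
  from_queue 'tasks'

  def work(msg)
    # Keep per-message state in locals so it becomes garbage
    # as soon as work returns; avoid @instance variables here.
    payload = JSON.parse(msg)
    handle(payload)
    ack!
  rescue JSON::ParserError
    reject!
  end

  private

  def handle(payload)
    # actual task logic goes here
  end
end
```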
I'm starting the sneakers (2.12.0) process like this:
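For reference, a typical way to start Sneakers 2.x is through the Rake task it ships with (the worker class name below is a placeholder):

```ruby
# Rakefile
require 'sneakers/tasks'
# Run with: WORKERS=TaskWorker bundle exec rake sneakers:run
```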
with this configuration:
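From the numbers mentioned in this thread (24 workers, a prefetch of 24, and 1 thread per worker), the configuration presumably resembled something like:

```ruby
Sneakers.configure(
  amqp: ENV['AMQP_URL'], # placeholder; the actual connection URL is unknown
  workers: 24,
  threads: 1,
  prefetch: 24
)
```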
Even if I specify 1 thread per worker, I still see 19 processes per worker:
And as a result, there is a huge memory leak after the first job is started:
I'm not familiar with Ruby, so could you please tell me how to profile such an issue?
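For anyone hitting the same question later: one common approach is the memory_profiler gem (my assumption; any Ruby allocation profiler works). Wrap a representative unit of work and print the report; instantiating a worker and calling work directly is a common testing pattern for Sneakers:

```ruby
require 'memory_profiler'

worker = TaskWorker.new        # TaskWorker and the message are placeholders
sample_message = '{"id": 1}'

report = MemoryProfiler.report do
  worker.work(sample_message)  # one representative job
end

report.pretty_print(to_file: 'memory_report.txt')
```

A cheaper first check is comparing `GC.stat(:heap_live_slots)` before and after the big workload; if it keeps growing across full GC runs, objects really are being retained.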