Description
We've seen our Arcane operator's memory usage grow until it hits our configured limits, at which point the pod is restarted by Kubernetes.
There are only 5 streams attached to the operator, so this looks like it could be a memory leak.
Steps to reproduce the issue
Monitor the Arcane operator's memory usage over time; see the sketch below.
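A minimal monitoring sketch, assuming the `kubernetes` Python client, a metrics-server exposing `metrics.k8s.io/v1beta1`, and a hypothetical namespace `arcane` with label selector `app=arcane-operator` (adjust both to your deployment):

```python
import time

from kubernetes import client, config


def watch_operator_memory(namespace: str = "arcane",
                          label_selector: str = "app=arcane-operator") -> None:
    """Print the operator pod's memory usage once a minute."""
    config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
    core = client.CoreV1Api()
    metrics = client.CustomObjectsApi()

    while True:
        pods = core.list_namespaced_pod(namespace, label_selector=label_selector)
        for pod in pods.items:
            # metrics.k8s.io exposes per-container usage for each pod
            usage = metrics.get_namespaced_custom_object(
                "metrics.k8s.io", "v1beta1", namespace, "pods", pod.metadata.name
            )
            for container in usage["containers"]:
                print(f"{pod.metadata.name}/{container['name']}: "
                      f"{container['usage']['memory']}")
        time.sleep(60)


if __name__ == "__main__":
    watch_operator_memory()
```

If the reported memory climbs steadily while the number of attached streams stays fixed, that is consistent with the leak described above.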
Describe the results you expected
Memory usage should stay more or less constant with a fixed number of streams.
System information
v0.0.10
Need to investigate. I don't see this behavior on our cluster, but we did not perform a massive migration of our streams. I will add more streams and take a look at how much memory Arcane.Operator consumes. A problem may also exist with the cache and event deduplication.
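Arcane.Operator is a .NET service, so the following Python sketch is only a generic illustration of that hypothesis, not the project's actual code: if every processed event ID is remembered forever for deduplication, memory grows with each reconciled event, whereas a bounded, LRU-style set keeps the footprint roughly constant.

```python
from collections import OrderedDict


class BoundedDedupCache:
    """Remembers only the most recent `max_size` event IDs for deduplication."""

    def __init__(self, max_size: int = 10_000) -> None:
        self.max_size = max_size
        self._seen = OrderedDict()  # event_id -> None, ordered by recency

    def seen_before(self, event_id: str) -> bool:
        if event_id in self._seen:
            # Refresh recency so frequently repeated events stay cached.
            self._seen.move_to_end(event_id)
            return True
        self._seen[event_id] = None
        if len(self._seen) > self.max_size:
            # Evict the oldest entry to keep memory bounded.
            self._seen.popitem(last=False)
        return False
```

An unbounded variant (a plain `set` that is never trimmed) would show exactly the symptom reported here: memory growth proportional to the total number of events observed, independent of the number of streams.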