Describe the bug
We have been running tyk-gateway inside the docker container, on Kubernetes, for several years now, going all the way back to 3.2.3.
We upgrade to new versions every couple of weeks or months to keep our vulnerability scans (and hence our compliance team) happy.
Recently we upgraded from 5.2 to 5.3, and then to 5.4. That required almost tripling the memory allocation on the Kubernetes pod to keep tyk-gateway from crashing.
I'm trying to understand if there is a new setting in 5.3 and beyond that we need to adhere to, if there is something we are missing, or if this is expected behavior.
I did see that versions from 5.3 onward use a newer version of Go, along with some changes to the Redis drivers.
We are using Redis and thought maybe the new drivers were causing this behavior. We switched from redis-haproxy to redis-sentinel, but that had no effect.
Our install is largely a "configure and run it" type of setup. It's used for rate limiting along with proxying API calls. The production install runs about 600-700 requests per minute.
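For reference, the haproxy-to-sentinel switch described above would typically be made in the gateway's `storage` section. The sketch below is a minimal illustration of a Sentinel-style setup; the hostnames, ports, and `master_name` value are placeholders, not our actual configuration:

```json
{
  "storage": {
    "type": "redis",
    "addrs": [
      "sentinel-0:26379",
      "sentinel-1:26379",
      "sentinel-2:26379"
    ],
    "master_name": "mymaster",
    "database": 0
  }
}
```

Pointing `addrs` at the Sentinel nodes and setting `master_name` lets the driver discover the current Redis master itself, rather than routing through an haproxy frontend.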
Reproduction steps
See above, this is just a question on the new version's memory usage.
Actual behavior
After upgrading from 5.2 to 5.3/5.4, the gateway's memory usage roughly tripled, and the pod crashed until we raised its memory allocation.
Expected behavior
Memory usage comparable to 5.2 after upgrading, without needing to change the pod's memory allocation.
Thanks for raising this observation. We've been looking into similar reports of excessive memory usage since 5.3 and have identified a change where we can inadvertently log too much data from each request. So it's not a new config flag you've missed, but a bug we introduced accidentally. 🙈
We've got a fix for this in the works and aim to include it in the next available release, probably in November.
Thanks again for flagging this to us - and for supporting Tyk!
EDIT: Clarification - we store too much data in memory, not "log too much data"