Performance issues under load #78
Thanks for trying it out. A couple of questions:
I think this might have been resolved by the following change. Give it a try.
Is that right? I've run much bigger workloads than this without issue. Maybe the request/response payloads are much bigger?
Thanks for all the info. I'll try to reproduce and see what I can find.
This is something I have not tried yet.
I ran some tests using 1MB payloads (reads) at ~70 requests/s. Here are the results (percentiles in microseconds):
direct (no proxy):
cql-proxy:
I'm currently running a one-hour test at ~70 requests/s using cql-proxy. I'll post the results when done. Results:
Raw data: https://gist.github.com/mpenick/75bdd66e2699e6564e87d0e6140154a1
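For reproducibility, a driver-side harness along these lines should approximate the workload above. This is only a minimal sketch in Go, assuming the gocql driver and a hypothetical `ks.large_blobs` table holding ~1MB rows; it is not necessarily the harness used to produce these numbers:

```go
package main

import (
	"log"
	"sort"
	"time"

	"github.com/gocql/gocql"
)

func main() {
	// Point at cql-proxy (or directly at a node for the "no proxy" baseline).
	cluster := gocql.NewCluster("127.0.0.1")
	cluster.Port = 9042
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	var latencies []time.Duration
	ticker := time.NewTicker(time.Second / 70) // ~70 requests/s
	defer ticker.Stop()

	for i := 0; i < 70*60; i++ { // roughly one minute of traffic
		<-ticker.C
		start := time.Now()
		var payload []byte
		// Hypothetical table holding ~1MB blobs.
		if err := session.Query(`SELECT payload FROM ks.large_blobs WHERE id = ?`, 1).Scan(&payload); err != nil {
			log.Printf("read failed: %v", err)
			continue
		}
		latencies = append(latencies, time.Since(start))
	}

	if len(latencies) == 0 {
		log.Fatal("no successful reads")
	}
	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
	for _, p := range []float64{0.50, 0.95, 0.99} {
		idx := int(p * float64(len(latencies)-1))
		log.Printf("p%v: %d µs", p*100, latencies[idx].Microseconds())
	}
}
```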
I noticed this: ea8421b. I'm seeing a huge difference in performance (from reduced system calls). Give that a try; it's now in v0.1.1.
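For anyone curious what "reduced system calls" means in practice: the usual approach is to buffer small frame writes and push them to the socket in one write. The following is only a generic Go sketch of that idea, not the actual change in ea8421b:

```go
package proxyio

import (
	"bufio"
	"net"
)

// batchedWriter coalesces many small frame writes into a single write
// syscall per flush, instead of one syscall per frame.
type batchedWriter struct {
	buf *bufio.Writer
}

func newBatchedWriter(conn net.Conn) *batchedWriter {
	// 64 KiB buffer; frames accumulate here until Flush (or until the buffer fills).
	return &batchedWriter{buf: bufio.NewWriterSize(conn, 64*1024)}
}

// WriteFrame buffers an already-encoded frame without touching the socket.
func (w *batchedWriter) WriteFrame(frame []byte) error {
	_, err := w.buf.Write(frame)
	return err
}

// Flush sends everything buffered so far to the kernel in one write.
func (w *batchedWriter) Flush() error {
	return w.buf.Flush()
}
```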
We just tried v0.1.1 and we're seeing the same issue.
Initially it came in spurts, almost exactly (within a minute) every 20 minutes: 10:21, 10:51, 11:11. But then it became fairly constant, with occasional peaks of over 1000 failures within two minutes.
@mpenick Do you have any updates or suggestions?
I haven't been able to reproduce the issue yet, so it's hard for me to know how to proceed here. I tried to reproduce it in the tests above. What request timeout setting are you using?
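For reference, assuming the client application uses the gocql driver (an assumption on my part), the per-request timeout is the `Timeout` field on the cluster config, something like:

```go
package main

import (
	"log"
	"time"

	"github.com/gocql/gocql"
)

func main() {
	// Contact point is the cql-proxy sidecar in this setup.
	cluster := gocql.NewCluster("127.0.0.1")
	cluster.Port = 9042
	// Per-request timeout: reads through the proxy need headroom above the
	// workload's tail latency, otherwise slow responses surface as timeouts.
	cluster.Timeout = 10 * time.Second
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
}
```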
Thinking about this a bit more, maybe it makes sense to add some initial metrics to cql-proxy. We could then see where the extra latency is coming from by pointing a Grafana instance at it. |
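A minimal sketch of what that could look like, assuming Prometheus's client_golang and an HTTP metrics endpoint for Grafana to read from; the metric name and port here are hypothetical, not an existing cql-proxy feature:

```go
package main

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// proxyLatency would be observed around each proxied request so the proxy's
// own overhead can be separated from server-side latency.
var proxyLatency = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "cqlproxy_request_duration_seconds",
		Help:    "Time spent handling a request inside the proxy.",
		Buckets: prometheus.DefBuckets,
	},
	[]string{"op"},
)

func init() {
	prometheus.MustRegister(proxyLatency)
}

// observe wraps a request handler and records its duration.
func observe(op string, handle func() error) error {
	start := time.Now()
	err := handle()
	proxyLatency.WithLabelValues(op).Observe(time.Since(start).Seconds())
	return err
}

func main() {
	// Expose /metrics for Prometheus to scrape; Grafana then visualizes it.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":9103", nil)
}
```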
The cql-proxy sidecar is having trouble handling larger loads. We brought it up in our staging environment, which has consistent loads averaging about 0.25 million reads per hour (roughly 70 requests/s), and we noticed that our requests started timing out.