Currently we're using e2-highmem-8 (8 vCPUs and 64GB RAM) for perf testing.
e2-highmem-16 (16 vCPUs and 128GB RAM) is also a good fit for larger workloads or higher density.
General recommendations for running Cassandra in production:
8 to 16 vCPUs
32GB to 128GB RAM, with no more than a 31GB heap so the JVM keeps compressed object pointers; the rest of the RAM is used for off-heap memory and the OS page cache (see the sketch after this list)
~2TB of live data per node (not a hard limit; Cassandra can support much more, but time to recovery becomes fairly long at those densities).
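To make the mapping concrete, here is a minimal sketch of how these numbers could translate into a cass-operator `CassandraDatacenter`, sized for e2-highmem-8 worker nodes. The cluster and datacenter names, the Cassandra version, and the exact resource values are assumptions for illustration; the `jvm-server-options` keys should be verified against the cass-operator release in use.

```yaml
# Hypothetical sizing sketch for e2-highmem-8 worker nodes (8 vCPUs, 64GB RAM).
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc1                     # hypothetical datacenter name
spec:
  clusterName: perf-test        # hypothetical cluster name
  serverType: cassandra
  serverVersion: "4.0.1"        # assumed version
  size: 3
  resources:
    requests:
      cpu: "7"                  # leave headroom for kubelet and system daemons
      memory: 56Gi
    limits:
      cpu: "7"
      memory: 56Gi
  config:
    jvm-server-options:
      initial_heap_size: "31G"  # stay at or below 31GB to keep compressed oops
      max_heap_size: "31G"      # remaining RAM serves off-heap structures
                                # and the OS page cache
```

Setting requests equal to limits gives the pods Guaranteed QoS, which is generally what you want for a stateful workload like Cassandra.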
Zero copy streaming, as implemented in 4.0, only works for tables using LCS and requires few vnodes (if any), which reduces the benefits we can expect from it.
The compaction and repair improvements, though, should help handle higher densities. Even without zero copy streaming, I'd guess the lighter memory footprint of 4.0 should reduce GC pressure during streaming and make it faster.
It could be interesting to benchmark this specific aspect and compare time to recovery between 4.0 and 3.11.
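For reference, here is a sketch of the cassandra.yaml knobs that zero copy streaming depends on, assuming Cassandra 4.0; the values below are illustrative rather than recommendations, and the table itself must also use LCS (e.g. `compaction = {'class': 'LeveledCompactionStrategy'}` in CQL).

```yaml
# cassandra.yaml fragment (Cassandra 4.0) relevant to zero copy streaming.
stream_entire_sstables: true   # 4.0 option gating zero copy streaming (defaults to true)
num_tokens: 8                  # illustrative low vnode count; many vnodes fragment
                               # token ownership so fewer SSTables can be streamed whole
```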
An open issue for documenting GKE instance sizing
┆Issue is synchronized with this Jira Task by Unito
┆friendlyId: K8SSAND-175
┆priority: Medium