"ideal" server spec per node. #289
Comments
This heavily depends on your application's workload, e.g. how many proposals you need to make per second, how expensive it is to apply a proposal to your state machine, and whether you actually need the proposal's execution result or only need it to be correctly ordered and stored.
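For anyone reading later, a rough sketch (mine, not from the thread) of what that distinction looks like in code, written against the dragonboat v3 API as I understand it (`GetNoOPSession`, `SyncPropose`); the cluster ID and timeout are placeholders and exact signatures may differ between releases:

```go
// Sketch only: two proposal patterns, assuming the dragonboat v3 API.
package example

import (
	"context"
	"time"

	"github.com/lni/dragonboat/v3"
)

// proposeAndWaitForResult is for workloads that need the value computed by
// IStateMachine.Update(): it blocks until the entry is committed AND applied,
// so a slow Update() directly limits proposal latency.
func proposeAndWaitForResult(nh *dragonboat.NodeHost, clusterID uint64, cmd []byte) (uint64, error) {
	sess := nh.GetNoOPSession(clusterID)
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	res, err := nh.SyncPropose(ctx, sess, cmd)
	if err != nil {
		return 0, err
	}
	return res.Value, nil // whatever your state machine's Update() returned
}

// proposeOrderedOnly is for workloads that only need the command durably
// ordered and stored; the applied result is ignored. (dragonboat also has an
// asynchronous Propose API if you don't want to block here at all.)
func proposeOrderedOnly(nh *dragonboat.NodeHost, clusterID uint64, cmd []byte) error {
	sess := nh.GetNoOPSession(clusterID)
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	_, err := nh.SyncPropose(ctx, sess, cmd)
	return err
}
```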
Using only one node (replica) per server is going to cause you trouble later down the road. The main purpose of having more nodes (replicas) per server is to allow a portion of them to be migrated to other servers when the load gets too high. It also allows high parallelism, and you get the benefit of only needing to snapshot the busy replicas more often. Spanner, TiDB and CockroachDB all follow this approach for good reasons.
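To make that "many replicas per server" layout concrete, a hedged sketch of one NodeHost process starting replicas of several independent Raft groups; it assumes the dragonboat v3 `StartCluster` API, and the shard count, node IDs and addresses are invented for illustration:

```go
// Sketch only: one NodeHost hosting replicas of many Raft groups.
package example

import (
	"github.com/lni/dragonboat/v3"
	"github.com/lni/dragonboat/v3/config"
	sm "github.com/lni/dragonboat/v3/statemachine"
)

// startShards starts 16 Raft groups on this server. Each group is an
// independent replica: it can be snapshotted on its own schedule, applied
// in parallel with the others, and later moved to another server if this
// one becomes too hot.
func startShards(nh *dragonboat.NodeHost, nodeID uint64,
	newSM func(clusterID uint64, nodeID uint64) sm.IStateMachine) error {
	// in this sketch the same three servers participate in every group
	initialMembers := map[uint64]string{
		1: "10.0.0.1:63000",
		2: "10.0.0.2:63000",
		3: "10.0.0.3:63000",
	}
	for clusterID := uint64(1); clusterID <= 16; clusterID++ {
		rc := config.Config{
			NodeID:             nodeID,
			ClusterID:          clusterID,
			ElectionRTT:        10,
			HeartbeatRTT:       1,
			CheckQuorum:        true,
			SnapshotEntries:    100000,
			CompactionOverhead: 10000,
		}
		if err := nh.StartCluster(initialMembers, false, newSM, rc); err != nil {
			return err
		}
	}
	return nil
}
```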
Also note that dragonboat uses a very limited number of CPUs. To get the best performance, make sure you use NVMe SSDs with fast fsync performance. If you really decide to use one replica per server, please understand that dragonboat is not optimized for that use case; it targets many replicas per server with a large number of concurrent requests spread across those replicas. I haven't worked on any project that uses only one replica per server.
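And a minimal configuration sketch for the NVMe / fast-fsync advice, assuming dragonboat v3's `config.NodeHostConfig`; the paths, RTT value and address below are placeholders:

```go
// Sketch only: put the Raft WAL on the fastest-fsync device available.
package example

import (
	"github.com/lni/dragonboat/v3"
	"github.com/lni/dragonboat/v3/config"
)

func newNodeHost() (*dragonboat.NodeHost, error) {
	nhc := config.NodeHostConfig{
		WALDir:         "/nvme/dragonboat/wal", // Raft WAL on the device with the fastest fsync
		NodeHostDir:    "/data/dragonboat",     // snapshots and metadata can live on larger, slower storage
		RTTMillisecond: 200,
		RaftAddress:    "10.0.0.1:63000",
	}
	return dragonboat.NewNodeHost(nhc)
}
```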
@lni I meant 1 Raft group per server with 3 replicas. The benchmark "looks good". At the end of the day, I guess I'm complaining because there are no actual production use-case studies for reference. @kevburnsjr yes, I know about ScyllaDB's limitations, but I'll be using TerarkDB so I guess I can have more. 60 TB per CPU core leans more towards "warm" storage, which is the way to use it as a data store with a CDN in front.
I've been thinking about this Raft implementation a lot and wondering what ratio of CPU cores to disk space (and amount of RAM) is the "ideal" optimum for ingestion, throughput, etc.
We all know Go's multithreading is not that great compared with C++ (Seastar) or Rust (monoio/glommio).
Based on your experience, what do you think is best for achieving the optimal cores / disk space (and maybe RAM) ratio?
Your benchmark is measured using this spec:
https://github.com/lni/dragonboat/blob/master/docs/test.md
But real-world setups don't run that many Raft groups per server; it's usually only 1 node per server. So my question is: what is the ideal server spec per node?
I'm asking because I need to write my underlying application logic to account for CPU processing, memory allocation, the choice of database, etc.
P.S.: I'm using this Raft implementation mostly for resilient storage space.
For storage, from my experience, and for simple Raft coordination, I wonder if it has been tested on a Raspberry Pi 4 or something smaller.
In conclusion, that's my problem / question; that's all I'm asking. Hope to get some clarification, thanks.