# Kotlin implementation of the raft consensus algorithm
Raft is a consensus algorithm designed to be easy to understand. It is equivalent to Paxos in fault tolerance and performance.

This repository provides an example implementation and a showcase usage with an in-memory key-value storage.
```sh
# Clone the repository
git clone https://github.com/AChepurnoi/raft-kotlin.git

# Build the jar file
./gradlew jar

# Start the cluster
docker-compose up --build
```
To interact with the cluster you can use cURL:
```sh
# Set test=hello
docker run --rm --net=raft-kt_default hortonworks/alpine-curl:3.1 curl --request POST --url node_one:8000/test --data 'hello' ; echo

# Read the value of test
docker run --rm --net=raft-kt_default hortonworks/alpine-curl:3.1 curl --request GET --url node_one:8000/test ; echo
```
The key-value implementation uses a 307 Redirect to redirect requests from slaves to the master. This requires that you are able to resolve the IPs from the configuration, so you should interact with the HTTP server only from inside the Docker network (e.g. from your own container).
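Since a 307 preserves the request method and body, curl can follow the redirect from a slave for you with `-L`. A sketch of such a call (`node_two` is an assumed slave hostname, not taken from the repo's compose file):

```sh
# POST to a slave; -L follows the 307 redirect to the master,
# re-issuing the same POST with the same body
docker run --rm --net=raft-kt_default hortonworks/alpine-curl:3.1 curl -L --request POST --url node_two:8000/test --data 'hello' ; echo
```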
Another option is to run the jar files locally with the proper environment configuration.
To read the node list from the environment, the raft env configuration uses the following notation:

```sh
NODES=[ID]:[HOST]:[PORT],[ID]:[HOST]:[PORT]...
```
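For example, a three-node cluster could be described like this (the ids, hostnames, and port are illustrative, not defaults from the repo):

```sh
NODES=1:node_one:5000,2:node_two:5000,3:node_three:5000
```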
The key-value example shows how the raft module can be used to implement a distributed in-memory key-value storage. The current implementation exposes two endpoints:
```sh
# Set key={request_body}
POST HOST/{key}

# Returns the value of key, or Nil if the key does not exist
GET HOST/{key}
```
The key-value HTTP server uses port 8000 by default.
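With a node reachable locally, the endpoints can be exercised directly (assuming the default port; a GET for a missing key returns `Nil`):

```sh
# Set foo=bar, then read it back
curl --request POST --url localhost:8000/foo --data 'bar'
curl --request GET --url localhost:8000/foo
```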
The raft module exposes the `RaftNode` class for clients to create a cluster node, along with actions to mutate the state of the cluster and a method to fetch the current state.
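As a rough sketch, client code might look like this; the constructor shape and the `start`/`set`/`get` method names are assumptions for illustration, not the actual raft-kotlin API:

```kotlin
// Hypothetical usage sketch -- the constructor arguments and the
// start/set/get methods are assumptions, not the actual raft-kotlin API.
fun main() {
    // Node id plus peers, mirroring the NODES notation above
    val node = RaftNode(id = 1, peers = listOf("2:node_two:5000", "3:node_three:5000"))
    node.start()               // join the cluster and participate in elections
    node.set("test", "hello")  // action that mutates the cluster state
    println(node.get("test"))  // fetch the current state
}
```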
Components:
- State
- Log
- gRPC Client/Server
- Clock
- Actions
- Raft Controller
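To give a feel for how the `Clock` and `State` components relate, below is a toy, self-contained sketch of a follower's election timeout; all names and behavior are simplified assumptions, not code from this repo:

```kotlin
import kotlin.random.Random

// Toy model of the clock-driven role transition at the heart of raft.
// Everything here is a simplified assumption, not this repo's code.
enum class Role { FOLLOWER, CANDIDATE, LEADER }

class ToyNode {
    var role = Role.FOLLOWER
        private set
    var term = 0
        private set
    private var lastHeartbeatMs = System.currentTimeMillis()

    // Randomized timeout so that followers do not all start elections at once
    private val electionTimeoutMs = Random.nextLong(150, 300)

    // Called on every clock tick: a follower that has not heard from the
    // leader within its timeout becomes a candidate and starts a new term
    // (a real node would then request votes from its peers, e.g. over gRPC).
    fun onTick(nowMs: Long) {
        if (role == Role.FOLLOWER && nowMs - lastHeartbeatMs > electionTimeoutMs) {
            role = Role.CANDIDATE
            term += 1
        }
    }

    // Called whenever a heartbeat from the leader arrives
    fun onHeartbeat(nowMs: Long) {
        lastHeartbeatMs = nowMs
    }
}
```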
Testing:

- Unit tests - the `actions`, `clock`, `log`, `state` and `RaftController` classes are tested
- Integration tests - the `RaftClusterTesting` class contains different test cases for a living cluster (with `LocalRaftNode` instead of `GrpcClusterNode`)
- Key-value container testing - KV cluster testing is not implemented yet
This is not a production-ready implementation, and there are very likely bugs. Remaining work:

- Refactoring
- Revisit `@Volatile` and `Mutex` usages
- Implement persistent log storage
- Implement snapshotting