## Overview
NodeSync is a Python-based distributed key-value store that demonstrates core distributed systems concepts such as leader-based replication, heartbeat-driven failure detection, and dynamic consistency selection.
## Features
- Distributed key-value storage using TCP sockets
- Multi-node replication
- Heartbeat-based liveness monitoring
- Runtime consistency switching
- Eventual and strong (quorum-based) consistency
- Concurrent client handling using threads
- Leader election (Raft-lite)
- Automatic request forwarding to leader
- Automatic leader re-election on node failure
## Architecture Overview

Leader election is deterministic: the alive node with the highest port number becomes the leader.
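This rule can be sketched in a few lines of Python. The name `alive_ports` is illustrative — it stands in for whatever set of heartbeat-passing peers a node tracks, not NodeSync's actual API:

```python
def elect_leader(alive_ports):
    """Deterministic election: the alive node with the highest port wins.

    `alive_ports` is assumed to hold the ports of all nodes currently
    passing heartbeat checks (illustrative sketch, not the real API).
    """
    if not alive_ports:
        raise RuntimeError("no alive nodes to elect from")
    return max(alive_ports)
```

Because every node applies the same rule to the same membership view, no extra coordination round is needed to agree on the winner.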
## Running the System
Each node runs as an independent process.
- The first argument is the node's port (used as its ID)
- Remaining arguments are peer addresses
Start three nodes (example):

```shell
python nodes/node.py 5000 127.0.0.1:5001 127.0.0.1:5002
python nodes/node.py 5001 127.0.0.1:5000 127.0.0.1:5002
python nodes/node.py 5002 127.0.0.1:5000 127.0.0.1:5001
```
The node with the highest port becomes the leader, as the startup log shows:

```
16:55:49 [NodeSync ] Node 5000 running on 127.0.0.1:5000
16:55:49 [NodeSync ] Peers: [('127.0.0.1', 5001), ('127.0.0.1', 5002)]
16:55:49 [ELECTION ] New leader elected: 5002
```
## Client Interaction (PowerShell Example)

```powershell
$client = New-Object System.Net.Sockets.TcpClient("127.0.0.1", 5002)
$stream = $client.GetStream()
$writer = New-Object System.IO.StreamWriter($stream)
$reader = New-Object System.IO.StreamReader($stream)
$writer.AutoFlush = $true
$writer.WriteLine("SET hello world")
$reader.ReadLine()
$writer.WriteLine("GET hello")
$reader.ReadLine()
```
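The same interaction can be driven from Python. This is a minimal sketch assuming the line-based text protocol the PowerShell example suggests (newline-terminated commands, single-line replies); `send_command` is an illustrative helper, not part of NodeSync:

```python
import socket

def send_command(host, port, command):
    """Send one newline-terminated command and return one reply line.

    Assumes a simple line-based text protocol (inferred from the
    PowerShell example; not necessarily NodeSync's exact framing).
    """
    with socket.create_connection((host, port)) as sock:
        sock.sendall((command + "\n").encode())
        buf = b""
        while not buf.endswith(b"\n"):
            chunk = sock.recv(1024)
            if not chunk:  # server closed the connection
                break
            buf += chunk
        return buf.decode().strip()

# Example usage against a running node:
# send_command("127.0.0.1", 5002, "SET hello world")
# send_command("127.0.0.1", 5002, "GET hello")
```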
## Consistency Modes

NodeSync supports runtime-switchable consistency models, allowing experimentation with CAP trade-offs.
```
CONSISTENCY eventual
CONSISTENCY strong
```
- Eventual: low latency, asynchronous replication
- Strong: quorum-based replication before ACK
If a quorum cannot be reached in strong mode, the write fails with:

```
FAIL: quorum not reached
```
This enables direct comparison of consistency-performance trade-offs.
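A strong-mode write can be sketched as the leader applying the write locally, pushing it to each replica, and acknowledging only once a majority responds. All names here are illustrative; `write_fn` stands in for whatever replication call NodeSync actually uses:

```python
def replicate_strong(write_fn, replicas, cluster_size):
    """ACK a write only after a majority of the cluster confirms it.

    `write_fn(replica)` returns True if that replica acknowledged the
    write; the leader's own local write counts as the first ACK.
    Illustrative sketch, not NodeSync's actual implementation.
    """
    acks = 1  # the leader's local write
    for replica in replicas:
        try:
            if write_fn(replica):
                acks += 1
        except OSError:
            pass  # an unreachable replica simply contributes no ACK
    quorum = cluster_size // 2 + 1
    return "OK" if acks >= quorum else "FAIL: quorum not reached"
```

With a majority quorum, any two successful write quorums overlap in at least one node, which is what makes reads after a quorum write see the latest value.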
## Fault Tolerance
Node failures are detected using missed heartbeats. When a leader fails, a new leader is automatically elected, allowing the system to continue operating.
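A common way to implement missed-heartbeat detection is to record when each peer last sent a heartbeat and declare the peer dead once several intervals pass without one. The interval and miss limit below are illustrative values, not NodeSync's actual configuration:

```python
import time

HEARTBEAT_INTERVAL = 1.0  # seconds between heartbeats (illustrative)
MISSED_LIMIT = 3          # missed beats before a peer is declared dead

def is_alive(last_heartbeat, now=None):
    """True if the peer's last heartbeat is within the allowed window."""
    now = time.monotonic() if now is None else now
    return (now - last_heartbeat) < HEARTBEAT_INTERVAL * MISSED_LIMIT
```

When `is_alive` turns False for the current leader, the node would trigger a new election under the highest-port rule above.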
## Performance Benchmarking
NodeSync was benchmarked under both eventual and strong consistency modes.
Results show that quorum-based strong consistency increases write latency due to replica acknowledgements, while eventual consistency provides lower latency.
CLI results:

```
python benchmark/benchmark.py
[Benchmark] Running Eventual Consistency test...
[Benchmark] Running Strong Consistency test...
======== RESULTS ========
Eventual Consistency:
  Avg latency: 0.0139s
  Max latency: 0.0371s
Strong Consistency:
  Avg latency: 0.0153s
  Max latency: 0.0429s
```
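Latency numbers like these are typically gathered by timing each request and aggregating the samples. A minimal sketch of that measurement loop (illustrative, not the actual `benchmark/benchmark.py`):

```python
import time

def measure_latencies(op, n=100):
    """Run `op()` n times and return (avg, max) latency in seconds.

    `op` is any zero-argument callable, e.g. a closure that issues one
    SET request against a node (illustrative, not the real script).
    """
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        op()
        samples.append(time.perf_counter() - start)
    return sum(samples) / len(samples), max(samples)
```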
## Future Improvements
- Persistent write-ahead logs
- Strong consistency using Raft log replication
- Sharding and partitioning
- Client library abstraction
## Why NodeSync?
NodeSync bridges theoretical concepts such as the CAP theorem and quorum consensus with a practical implementation, making it suitable for academic evaluation and experimentation.