A high-performance implementation of the ABD (Attiya-Bar-Noy-Dolev) atomic register protocol and its multi-writer variant (MW-ABD), using eBPF for in-kernel packet processing, with a userspace reference implementation for comparison.
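Both protocols order writes with a logical tag and complete every operation by contacting a majority quorum of replicas; a read additionally writes the value it observed back to a quorum before returning, which is what makes the register atomic. The following is a minimal, self-contained sketch of that read path; the `Tag`/`Register` names and the in-memory model are invented for illustration and are not this repository's types:

```rust
// Illustrative sketch of the ABD read path (not the code in this repository).
// A tag orders writes; MW-ABD breaks ties with the writer's id.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct Tag {
    counter: u64,
    writer_id: u32, // effectively constant in the single-writer variant
}

#[derive(Clone, Debug)]
struct Register {
    tag: Tag,
    value: u64,
}

/// Phase 1: ask a majority of replicas for their current (tag, value)
/// and keep the highest-tagged one.
fn query_phase(replicas: &[Register], quorum: usize) -> Register {
    replicas[..quorum]
        .iter()
        .cloned()
        .max_by_key(|r| r.tag)
        .expect("quorum must be non-empty")
}

/// Phase 2: write the highest (tag, value) back to a majority so a later
/// read can never observe an older value than one already returned.
fn propagate_phase(replicas: &mut [Register], quorum: usize, latest: &Register) {
    for r in replicas[..quorum].iter_mut() {
        if latest.tag > r.tag {
            *r = latest.clone();
        }
    }
}

fn main() {
    let mut replicas = vec![
        Register { tag: Tag { counter: 3, writer_id: 0 }, value: 30 },
        Register { tag: Tag { counter: 2, writer_id: 0 }, value: 20 },
        Register { tag: Tag { counter: 3, writer_id: 0 }, value: 30 },
    ];
    let quorum = replicas.len() / 2 + 1;
    let latest = query_phase(&replicas, quorum);
    propagate_phase(&mut replicas, quorum, &latest);
    println!("read returns {} (tag {:?})", latest.value, latest.tag);
}
```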
Repository layout:

abdBPF/
├── abd/ # Userspace coordinator and loaders
├── abd-common/ # Shared types, constants, and message definitions
├── abd-ebpf/ # eBPF programs (XDP/TC) implementing ABD logic
├── abd-userspace/ # Userspace protocol implementation
├── bench/ # Benchmarking and performance measurement tools
├── client/ # CLI client for testing and interaction
├── scripts/ # Orchestration, evaluation, and utility scripts
├── evaluation_results_*/ # Generated benchmark results and analysis
└── logs/ # Runtime logs and monitoring data
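The abd-common crate exists because the eBPF programs and the userspace implementation must parse exactly the same wire format. The snippet below is a hypothetical example of the kind of shared, fixed-layout definition such a crate holds; the field names, sizes, and port constant are assumptions, not the crate's actual contents:

```rust
// Hypothetical shared message layout (illustrative; not the actual abd-common types).
// #[repr(C)] fixes field order and padding so the eBPF programs and the
// userspace implementation parse packets identically.
#[repr(C)]
#[derive(Clone, Copy, Debug)]
pub struct AbdMessage {
    pub op: u8,        // e.g. 0 = read query, 1 = write, 2 = ack
    pub sender_id: u8, // replica or writer id
    pub counter: u64,  // logical tag; ties broken by sender_id in MW-ABD
    pub value: u64,    // register payload
}

pub const ABD_PORT: u16 = 4242; // hypothetical UDP port for protocol traffic

fn main() {
    // Both sides must agree on this size for in-kernel parsing to be safe.
    println!("message size: {} bytes", core::mem::size_of::<AbdMessage>());
}
```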
Ensure you have the following dependencies installed:
- Linux Kernel: 5.4+ with BPF support enabled
- Build Tools: build-essential, pkg-config, clang, llvm
- System Libraries: libbpf-dev
- Network Tools: iproute2 (for network namespace management)
# Install Rust via rustup
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Install required toolchains
rustup toolchain install stable
rustup toolchain install nightly --component rust-src
# Install bpf-linker for eBPF compilation
cargo install bpf-linker

# Core dependencies for evaluation and monitoring
pip3 install matplotlib numpy scipy pandas
# Optional: for CPU monitoring
pip3 install psutil
# Or install all at once
pip3 install -r scripts/requirements.txt

sudo apt update
sudo apt install build-essential pkg-config clang llvm libbpf-dev iproute2

# Build all components (debug mode)
cargo build
# Build optimized release version
cargo build --release
# Build with multi-writer support
cargo build --features multi-writer
# Check compilation without building
cargo check
# Clean build artifacts
cargo clean

The main orchestration script handles cluster setup, network namespaces, and service coordination (a rough sketch of the namespace plumbing this entails appears after the options below):
# Basic cluster with 3 nodes (eBPF implementation)
sudo python3 scripts/run.py
# Userspace implementation with 5 nodes
sudo python3 scripts/run.py -n 5 -u
# Run with debug builds and wait for manual termination
sudo python3 scripts/run.py -d -w
# Run quick functionality test
sudo python3 scripts/run.py test
# Run latency benchmark
sudo python3 scripts/run.py bench latency
# Run throughput benchmark
sudo python3 scripts/run.py bench throughput

- -n, --num-nodes NUM_NODES: Number of replica nodes (default: 3)
- -u, --userspace: Use userspace implementation instead of eBPF
- -d: Use debug builds
- -w, --wait: Wait for background services after client finishes
- test: Run basic functionality test with read/write operations
- bench {latency|throughput}: Run specific benchmark type
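For context, the cluster setup and network namespaces handled by scripts/run.py boil down to iproute2 plumbing of roughly the following shape per replica. This sketch is hypothetical and written in Rust only for consistency with the other examples here; the namespace name, interface names, and address are made up, and the real script may differ:

```rust
// Hypothetical illustration of per-replica namespace setup: a dedicated
// network namespace connected to the host through a veth pair.
// The real orchestration lives in scripts/run.py.
use std::process::Command;

fn sh(args: &[&str]) {
    let status = Command::new(args[0])
        .args(&args[1..])
        .status()
        .expect("failed to spawn command");
    assert!(status.success(), "command failed: {args:?}");
}

fn main() {
    let ns = "abd-node1"; // hypothetical namespace name
    sh(&["ip", "netns", "add", ns]);
    // veth pair: one end stays on the host, the other moves into the namespace
    sh(&["ip", "link", "add", "veth-host1", "type", "veth", "peer", "name", "veth-ns1"]);
    sh(&["ip", "link", "set", "veth-ns1", "netns", ns]);
    sh(&["ip", "-n", ns, "addr", "add", "192.168.1.1/24", "dev", "veth-ns1"]);
    sh(&["ip", "-n", ns, "link", "set", "veth-ns1", "up"]);
    sh(&["ip", "link", "set", "veth-host1", "up"]);
}
```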
The evaluation script provides automated benchmarking and analysis:
# Full evaluation (both eBPF and userspace implementations)
sudo python3 scripts/evaluate.py
# Analyze existing results without re-running benchmarks
python3 scripts/evaluate.py --skip-benchmarks
# Custom output directory
sudo python3 scripts/evaluate.py --output my_evaluation
# Scalability analysis across different node counts
sudo python3 scripts/evaluate.py --scalability
# Custom node counts for scalability testing
sudo python3 scripts/evaluate.py --scalability --node-counts 3 5 7 9 11
# Enable sweep load testing for throughput analysis
sudo python3 scripts/evaluate.py --sweep
# Use debug builds for evaluation
sudo python3 scripts/evaluate.py --debug
# Disable LaTeX rendering (for systems without LaTeX)
sudo python3 scripts/evaluate.py --skip-latex

- --output OUTPUT: Output directory for results (default: evaluation_results)
- --skip-benchmarks: Skip running benchmarks and analyze existing results
- --debug: Use debug builds for benchmarking
- --skip-latex: Disable LaTeX rendering in plots
- --num-nodes NUM_NODES: Number of nodes for benchmarking (default: 3)
- --sweep: Enable sweep load testing for throughput benchmarks
- --scalability: Run scalability evaluation across different node counts
- --node-counts: Custom node counts for scalability (default: 3 5 7 9 11)
# Format code
cargo fmt
# Run lints
cargo clippy
# Manual client testing (requires running cluster)
sudo python3 scripts/run.py -w
# In another terminal:
cargo run --bin client -- write 192.168.1.1 "test_value"
cargo run --bin client -- read 192.168.1.1

The system supports different protocol modes via Cargo features:
# Single-writer mode (default)
cargo build
# Multi-writer mode (concurrent writers)
cargo build --features multi-writer

To tear down the network namespaces and background services left by a previous run:

scripts/cleanup -a
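As a rough illustration of the single-writer/multi-writer split above, a Cargo feature such as multi-writer typically gates code at compile time along these lines. This is a generic sketch (with the feature declared under [features] in Cargo.toml), not this repository's actual source:

```rust
// Generic sketch of a compile-time feature gate (not this repository's code).
// Build with `cargo build --features multi-writer` to compile the MW path.

#[cfg(feature = "multi-writer")]
fn next_tag(counter: u64, my_id: u32) -> (u64, u32) {
    // MW-ABD: any node may write; ties between concurrent writers
    // are broken by the writer id embedded in the tag.
    (counter + 1, my_id)
}

#[cfg(not(feature = "multi-writer"))]
fn next_tag(counter: u64, _my_id: u32) -> (u64, u32) {
    // Single-writer ABD: only one designated writer, so the id is fixed.
    (counter + 1, 0)
}

fn main() {
    let (counter, writer) = next_tag(7, 3);
    println!("new tag: counter={counter}, writer={writer}");
}
```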