Manual Benchmark Guide

Performance Testing and Benchmarking

This guide provides comprehensive instructions for running performance benchmarks and analyzing system performance characteristics.

Overview

The Nerve Framework includes a sophisticated benchmarking system that measures:

  • Latency - Message routing and processing times
  • Throughput - Messages per second under various loads
  • Memory Usage - Buffer efficiency and memory footprint
  • Concurrency - Performance under high thread counts
  • QoS Behavior - Performance across different quality-of-service levels

Running Benchmarks

Basic Benchmark Suite

# Run all benchmarks
cargo bench

# Run specific benchmark categories
cargo bench --bench latency_benchmarks
cargo bench --bench throughput_benchmarks
cargo bench --bench memory_benchmarks

# Run with specific parameters
cargo bench --bench latency_benchmarks -- --message-count 100000

Custom Benchmark Configuration

Create a custom benchmark configuration:

# benches/custom_benchmark.rs
use criterion::{criterion_group, criterion_main, Criterion};
use nerve::communication::pubsub::{Publisher, Subscriber};

fn benchmark_message_routing(c: &mut Criterion) {
    c.bench_function("message_routing_small", |b| {
        b.iter(|| {
            // Benchmark small message routing
        })
    });

    c.bench_function("message_routing_large", |b| {
        b.iter(|| {
            // Benchmark large message routing
        })
    });
}

criterion_group!(benches, benchmark_message_routing);
criterion_main!(benches);
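For Cargo to pick up a custom Criterion bench like the one above, it must be registered in `Cargo.toml` with the default test harness disabled (the criterion version shown is illustrative; use whatever version the project pins):

```toml
[dev-dependencies]
criterion = "0.5"

[[bench]]
name = "custom_benchmark"   # matches benches/custom_benchmark.rs
harness = false             # let Criterion provide the harness
```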

Benchmark Categories

1. Latency Benchmarks

Measure message routing latency:

# One-way latency
cargo bench --bench latency_benchmarks test_one_way_latency

# Round-trip latency
cargo bench --bench latency_benchmarks test_round_trip_latency

# Latency with various message sizes
cargo bench --bench latency_benchmarks test_latency_various_sizes
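To illustrate what the one-way latency benchmark measures, here is a minimal in-process sketch using only the standard library. It stands in for a Nerve publish/receive pair, whose exact API is not shown in this guide; the message carries its own send timestamp, so no clock synchronization is needed within a single process:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

/// Measure one-way latency for a single message sent between two threads.
/// A real benchmark would replace the channel with the Nerve routing path.
fn measure_one_way_latency() -> Duration {
    let (tx, rx) = mpsc::channel();

    // The receiver computes elapsed time from the timestamp carried
    // in the message itself.
    let receiver = thread::spawn(move || {
        let sent_at: Instant = rx.recv().unwrap();
        sent_at.elapsed()
    });

    tx.send(Instant::now()).unwrap();
    receiver.join().unwrap()
}

fn main() {
    println!("one-way latency: {:?}", measure_one_way_latency());
}
```

Measuring one-way latency *across* machines is harder, since it requires synchronized clocks; round-trip latency divided by two is the usual workaround.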

2. Throughput Benchmarks

Measure system throughput:

# Single publisher throughput
cargo bench --bench throughput_benchmarks test_single_publisher_throughput

# Multiple publisher throughput
cargo bench --bench throughput_benchmarks test_multi_publisher_throughput

# Concurrent access throughput
cargo bench --bench throughput_benchmarks test_concurrent_throughput
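The multi-publisher case can be sketched with standard-library threads and a channel. This is a hedged illustration of the measurement shape, not the Nerve implementation: several producer threads share one consumer, and throughput is total messages divided by elapsed wall time:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Instant;

/// Send `msgs_per_producer` messages from each of `producers` threads
/// through one channel and return the observed messages per second.
fn measure_throughput(producers: usize, msgs_per_producer: usize) -> f64 {
    let (tx, rx) = mpsc::channel();
    let start = Instant::now();

    let handles: Vec<_> = (0..producers)
        .map(|id| {
            let tx = tx.clone();
            thread::spawn(move || {
                for i in 0..msgs_per_producer {
                    tx.send((id, i)).unwrap();
                }
            })
        })
        .collect();
    drop(tx); // close the channel so the receive loop ends when producers finish

    let received = rx.iter().count();
    for h in handles {
        h.join().unwrap();
    }
    received as f64 / start.elapsed().as_secs_f64()
}

fn main() {
    println!("~{:.0} msgs/sec", measure_throughput(4, 100_000));
}
```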

3. Memory Benchmarks

Measure memory efficiency:

# Buffer memory usage
cargo bench --bench memory_benchmarks test_buffer_memory_usage

# Memory leak detection
cargo bench --bench memory_benchmarks test_memory_leak_detection

# Allocation performance
cargo bench --bench memory_benchmarks test_allocation_performance
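One way to approximate what the allocation benchmarks report, without any external crates, is a counting wrapper around the system allocator. This is a sketch of the general technique, not how the Nerve memory benchmarks are necessarily implemented:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

/// Wraps the system allocator and counts every heap allocation.
struct CountingAlloc;

static ALLOCATIONS: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCATIONS.fetch_add(1, Ordering::Relaxed);
        System.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static A: CountingAlloc = CountingAlloc;

fn main() {
    let before = ALLOCATIONS.load(Ordering::Relaxed);
    let buf: Vec<u8> = Vec::with_capacity(4096); // one heap allocation
    let after = ALLOCATIONS.load(Ordering::Relaxed);
    println!("allocations during buffer creation: {}", after - before);
    drop(buf);
}
```

Comparing the counter before and after a code path gives a quick allocation count; leak detection additionally requires pairing this with a deallocation counter and checking the two converge over time.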

Performance Analysis

Interpreting Results

Benchmark output includes:

  • Time per operation - Average, minimum, maximum times
  • Throughput - Operations per second
  • Memory allocations - Heap usage and allocations
  • Statistical significance - Confidence intervals

Example Output Analysis

message_routing_small
  time: [85.123 ns 85.456 ns 85.789 ns]
  throughput: ~11.7M ops/sec
  change: -2.3% (improvement)

message_routing_large
  time: [245.678 ns 246.123 ns 246.567 ns]
  throughput: ~4.06M ops/sec
  change: +1.2% (regression)
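The throughput figure follows directly from the per-iteration time: operations per second is simply one second (10⁹ ns) divided by the nanoseconds per operation. A quick sanity check:

```rust
/// Convert a per-iteration time in nanoseconds to operations per second.
fn ops_per_sec(ns_per_op: f64) -> f64 {
    1e9 / ns_per_op
}

fn main() {
    // 85.456 ns/op  -> ~11.7 million ops/sec
    println!("{:.0}", ops_per_sec(85.456));
    // 246.123 ns/op -> ~4.06 million ops/sec
    println!("{:.0}", ops_per_sec(246.123));
}
```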

Advanced Benchmarking

Stress Testing

# High-load stress test
cargo bench --bench stress_tests -- --duration 60s --threads 32

# Memory stress test
cargo bench --bench stress_tests -- --memory-limit 1GB

# Network simulation
cargo bench --bench stress_tests -- --latency 10ms --jitter 2ms
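The high-load stress pattern (many threads hammering shared state for a fixed duration) can be sketched as follows; the shared atomic counter here is a placeholder for whatever Nerve operation is under stress:

```rust
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

/// Run `threads` workers hammering a shared counter for `duration`
/// and report the total number of operations completed.
fn stress(threads: usize, duration: Duration) -> u64 {
    let counter = Arc::new(AtomicU64::new(0));
    let stop = Arc::new(AtomicBool::new(false));

    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            let stop = Arc::clone(&stop);
            thread::spawn(move || {
                while !stop.load(Ordering::Relaxed) {
                    counter.fetch_add(1, Ordering::Relaxed); // the operation under stress
                }
            })
        })
        .collect();

    thread::sleep(duration);
    stop.store(true, Ordering::Relaxed);
    for h in handles {
        h.join().unwrap();
    }
    counter.load(Ordering::Relaxed)
}

fn main() {
    let ops = stress(8, Duration::from_millis(200));
    println!("{} ops in 200 ms across 8 threads", ops);
}
```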

Comparative Analysis

Compare performance across:

  • Different QoS levels
  • Various buffer sizes
  • Multiple router implementations
  • Different thread counts

Best Practices

Benchmark Environment

  1. Isolated System - Run on dedicated hardware
  2. Consistent Conditions - Same hardware and software configuration
  3. Warm-up Period - Allow system to stabilize
  4. Multiple Runs - Average results across multiple executions
  5. Statistical Analysis - Use confidence intervals
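Criterion handles warm-up and statistics automatically, but the idea behind points 3 and 4 can be sketched in a few lines, assuming a hand-rolled timing loop:

```rust
use std::time::{Duration, Instant};

/// Time `f` with a warm-up phase followed by several measured runs,
/// returning the mean duration of the measured runs only.
fn bench<F: FnMut()>(mut f: F, warmup: usize, runs: usize) -> Duration {
    for _ in 0..warmup {
        f(); // warm-up: populate caches and stabilize the system; results discarded
    }
    let mut total = Duration::ZERO;
    for _ in 0..runs {
        let start = Instant::now();
        f();
        total += start.elapsed();
    }
    total / runs as u32 // average across multiple measured runs
}

fn main() {
    // black_box keeps the compiler from optimizing the workload away.
    let mean = bench(|| { std::hint::black_box((0..10_000u64).sum::<u64>()); }, 10, 50);
    println!("mean: {:?}", mean);
}
```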

Performance Tuning

Based on benchmark results:

  • Optimize buffer sizes for your workload
  • Choose appropriate QoS for your requirements
  • Configure thread pools for optimal concurrency
  • Monitor memory usage and adjust accordingly

Integration with CI/CD

Include benchmarks in your CI pipeline:

# .github/workflows/benchmarks.yml
name: Performance Benchmarks

on:
  push:
    branches: [main]
  schedule:
    - cron: '0 2 * * 0'  # Weekly on Sunday at 2 AM

jobs:
  benchmarks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
      - run: cargo bench

