This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
```bash
cargo build                                 # debug build
cargo build --release                       # release build
cargo test                                  # run all tests
cargo test <test_name>                      # run a single test
cargo clippy --all-targets -- -D warnings   # lint
cargo fmt --check                           # format check
cargo fmt                                   # auto-format
```
Tracing is controlled via RUST_LOG env var (uses tracing-subscriber with env-filter).
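A minimal sketch of what such an init typically looks like (the actual subscriber setup in this repo may differ):

```rust
// Sketch: initialize tracing-subscriber with an EnvFilter that reads RUST_LOG,
// e.g. RUST_LOG=debug or RUST_LOG=my_crate=trace.
use tracing_subscriber::EnvFilter;

fn init_tracing() {
    tracing_subscriber::fmt()
        .with_env_filter(EnvFilter::from_default_env())
        .init();
}
```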
This is a Rust CLI (clap derive) for testing LLM relay compatibility, benchmark accuracy, and RPM rate-limit behavior. It supports OpenAI-compatible and Anthropic-compatible relay protocols.
- cli.rs: CLI definition and command dispatch. Contains all subcommand handlers (check, dataset fetch, bench, rpm). The RPM command builds a schedule of timed probes, executes them concurrently, then aggregates the results into a mode-specific report.
- config.rs: YAML config loading with ${ENV_VAR} expansion. Provider tokens are resolved lazily (only the requested provider's env vars need to be set). AppConfig::load() is the entry point.
- runner.rs: Thin orchestrator that dispatches to the correct protocol adapter and measures elapsed time. run_model_request() is the single call site for all LLM requests.
- protocols/: Protocol adapters (openai.rs, anthropic.rs). Each implements send(client, request) -> ModelResponse. Shared URL normalization and error extraction live in protocols/mod.rs.
- benchmarks/: Dataset fetching from Hugging Face (fetch_dataset), case loaders (aime.rs, gpqa.rs), and answer judging (judge.rs). AIME rows are normalized from HF API JSON to local JSONL.
- metrics.rs: HDR histogram-based latency tracking plus success/failure/accuracy counters. Metrics::summary() produces the final MetricsSummary.
- rpm_modes.rs: Schedule generators for each RPM test mode (sustained, burst, token-bucket, sliding-window, window-boundary, diagnose). Each returns a Vec<ScheduledProbe> with offsets and phase labels.
- report.rs: Report structs and JSON serialization. Reports are written to the reports/ directory.

Request flow: Cli::parse() → cli::dispatch() → builds ModelRequest → runner::run_model_request() → protocols::{openai,anthropic}::send() → returns ModelResponse (sketched below).
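A compact sketch of that dispatch, with stub types standing in for the real ModelRequest/ModelResponse (the enum, struct fields, and exact signatures here are assumptions, not the repo's definitions):

```rust
use std::time::{Duration, Instant};

pub enum Protocol { OpenAi, Anthropic }            // assumed enum
pub struct ModelRequest { pub protocol: Protocol } // stub; real fields not shown
pub struct ModelResponse;                          // stub placeholder

async fn send_openai(_req: &ModelRequest) -> anyhow::Result<ModelResponse> { Ok(ModelResponse) }
async fn send_anthropic(_req: &ModelRequest) -> anyhow::Result<ModelResponse> { Ok(ModelResponse) }

// Single call site for all LLM requests: pick the adapter, measure elapsed time.
pub async fn run_model_request(req: &ModelRequest) -> anyhow::Result<(ModelResponse, Duration)> {
    let start = Instant::now();
    let resp = match req.protocol {
        Protocol::OpenAi => send_openai(req).await?,
        Protocol::Anthropic => send_anthropic(req).await?,
    };
    Ok((resp, start.elapsed()))
}
```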
For RPM tests, requests are scheduled via tokio::time::sleep_until with stream::buffer_unordered for concurrency control.
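A runnable sketch of that scheduling pattern; ScheduledProbe, the probe count, and the concurrency limit are illustrative, not the repo's real values:

```rust
// Each probe sleeps until its scheduled offset, then fires; buffer_unordered
// caps the number of probes in flight at once.
use futures::{stream, StreamExt};
use tokio::time::{sleep_until, Duration, Instant};

struct ScheduledProbe { offset: Duration } // stub; the real struct also carries phase labels

#[tokio::main]
async fn main() {
    let base = Instant::now();
    let probes: Vec<ScheduledProbe> =
        (0..10).map(|i| ScheduledProbe { offset: Duration::from_millis(100 * i) }).collect();

    let results: Vec<Duration> = stream::iter(probes)
        .map(|p| async move {
            sleep_until(base + p.offset).await; // fire at the scheduled instant
            base.elapsed()                      // stand-in for the real request
        })
        .buffer_unordered(4) // at most 4 probes in flight
        .collect()
        .await;

    println!("completed {} probes", results.len());
}
```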
Config is loaded from config.yaml (YAML). Values like ${ENV_NAME} are expanded at load time. Provider tokens are expanded only when that provider is resolved, so unused providers don’t require their env vars to be set.
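A minimal sketch of such an expansion, assuming a simple left-to-right scan (the repo's actual implementation and error handling may differ); the lazy provider resolution means this runs only on the strings belonging to the provider actually requested:

```rust
// Expand every ${NAME} occurrence in a config string from the environment.
use std::env;

fn expand_env(input: &str) -> anyhow::Result<String> {
    let mut out = String::with_capacity(input.len());
    let mut rest = input;
    while let Some(start) = rest.find("${") {
        let after = &rest[start + 2..];
        let end = after
            .find('}')
            .ok_or_else(|| anyhow::anyhow!("unclosed ${{ in config value"))?;
        let name = &after[..end];
        out.push_str(&rest[..start]);
        out.push_str(&env::var(name).map_err(|_| anyhow::anyhow!("env var {name} is not set"))?);
        rest = &after[end + 1..];
    }
    out.push_str(rest);
    Ok(out)
}
```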
Errors: anyhow::Result for application errors; thiserror for typed errors in config.rs. Tests: tempfile for filesystem isolation and wiremock for HTTP mocking.
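A hedged sketch of a wiremock-based test, assuming a reqwest client posting to an OpenAI-style path; the endpoint, request body, and response shape are illustrative only:

```rust
use wiremock::matchers::{method, path};
use wiremock::{Mock, MockServer, ResponseTemplate};

#[tokio::test]
async fn relay_returns_completion() {
    // Stand up a local mock relay and stub one endpoint.
    let server = MockServer::start().await;
    Mock::given(method("POST"))
        .and(path("/v1/chat/completions"))
        .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
            "choices": [{ "message": { "role": "assistant", "content": "4" } }]
        })))
        .mount(&server)
        .await;

    // Point the client at the mock server instead of a real relay.
    let resp = reqwest::Client::new()
        .post(format!("{}/v1/chat/completions", server.uri()))
        .json(&serde_json::json!({ "model": "test", "messages": [] }))
        .send()
        .await
        .unwrap();
    assert!(resp.status().is_success());
}
```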