# lq_token_test

`lq_token_test` is a Rust CLI for checking LLM relay compatibility, running small benchmark probes, and testing request pacing against RPM targets. It supports OpenAI-compatible and Anthropic-compatible relay protocols, fetches benchmark datasets to a local data directory, prints terminal summaries, and writes JSON reports for later comparison.

## Build and Test

```bash
cargo fmt --check
cargo test
cargo clippy --all-targets -- -D warnings
cargo run -- --help
```

Build a release binary with:

```bash
cargo build --release
```

## Configuration

Copy `config.example.yaml` to `config.yaml`, then set relay URLs, model names, and token environment variables for your environment.

```yaml
default_provider: openai

providers:
  openai:
    protocol: openai
    base_url: "https://relay.example.com/v1"
    api_token: "${OPENAI_RELAY_TOKEN}"
    default_model: "gpt-4o-mini"
  anthropic:
    protocol: anthropic
    base_url: "https://relay.example.com"
    api_token: "${ANTHROPIC_RELAY_TOKEN}"
    default_model: "claude-3-5-sonnet-latest"

benchmarks:
  data_dir: "data/benchmarks"
  aime2026:
    source: "huggingface:MathArena/aime_2026"
    split: "train"
  gpqa_diamond:
    source: "huggingface:Idavidrein/gpqa"
    split: "gpqa_diamond"
```

Values written as `${ENV_NAME}` are expanded when the config is loaded. For example:

```bash
export OPENAI_RELAY_TOKEN="..."
export ANTHROPIC_RELAY_TOKEN="..."
```

## Dataset Fetching

Fetch AIME 2026 into the configured benchmark data directory:

```bash
cargo run -- dataset fetch aime2026
```

The current AIME fetch path reads Hugging Face rows and normalizes them to local JSONL at `data/benchmarks/aime2026/aime2026.jsonl`, with `id`, `problem`, and `answer` fields. Dataset metadata is written next to the downloaded file.

Fetch GPQA-Diamond:

```bash
cargo run -- dataset fetch gpqa-diamond
```

GPQA-Diamond is downloaded as `data/benchmarks/gpqa_diamond/gpqa_diamond.csv`.

If Hugging Face requires authentication, set `HF_TOKEN` before fetching:

```bash
export HF_TOKEN="..."
```

Dataset files under `data/benchmarks` are ignored by git.

## Relay Checks

Run a simple OpenAI-compatible relay check:

```bash
cargo run -- check \
  --config config.yaml \
  --provider openai \
  --model gpt-4o-mini \
  --prompt "Reply with the word ready."
```

Run a simple Anthropic-compatible relay check:

```bash
cargo run -- check \
  --config config.yaml \
  --provider anthropic \
  --model claude-3-5-sonnet-latest \
  --prompt "Reply with the word ready."
```

The check command prints the HTTP status, elapsed milliseconds, and the model text.

## Benchmarks

Run an AIME 2026 benchmark:

```bash
cargo run -- bench aime2026 \
  --config config.yaml \
  --provider openai \
  --model gpt-4o-mini \
  --concurrency 4 \
  --limit 10
```

Run a GPQA-Diamond benchmark:

```bash
cargo run -- bench gpqa-diamond \
  --config config.yaml \
  --provider openai \
  --model gpt-4o-mini \
  --concurrency 4 \
  --limit 10
```

Omit `--limit` to run all locally available cases.

## RPM Testing

Run a sustained RPM test. This is the default mode and starts requests at a stable interval:

```bash
cargo run -- rpm \
  --config config.yaml \
  --provider openai \
  --model gpt-4o-mini \
  --mode sustained \
  --rpm 60 \
  --duration 60s \
  --prompt "Reply with pong."
```

Durations use `s` or `m`, such as `30s` or `5m`.
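For intuition, sustained mode behaves roughly like the shell loop below. This is a minimal illustrative sketch, not the tool's implementation; it assumes the release binary built above is named `lq_token_test`, and it fires one request in the background every `60 / rpm` seconds so that a slow response never delays the next send:

```bash
#!/usr/bin/env bash
# Illustrative model of sustained pacing only, not how the tool works internally.
rpm=120
interval=$(awk -v r="$rpm" 'BEGIN { print 60 / r }')
end=$((SECONDS + 10))                 # pace for roughly 10 seconds
while [ "$SECONDS" -lt "$end" ]; do
  ./target/release/lq_token_test check \
    --config config.yaml \
    --provider openai \
    --model gpt-4o-mini \
    --prompt "Reply with pong." &     # fire and forget
  sleep "$interval"
done
wait                                  # let in-flight requests finish
```

The real tool paces requests internally from a single process, which keeps the send interval far more stable than spawning one process per request.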
### RPM Modes

Use a specific mode when you already know the backend limiter shape. Use `diagnose` when you do not know the backend shape and want a combined probe.

Stable RPM:

```bash
cargo run -- rpm --mode sustained --provider openai --rpm 120 --duration 60s --prompt "hello"
```

Instant burst capacity:

```bash
cargo run -- rpm --mode burst --provider openai --burst 120 --prompt "hello"
```

Token bucket capacity plus refill behavior:

```bash
cargo run -- rpm \
  --mode token-bucket \
  --provider openai \
  --rpm 120 \
  --burst 120 \
  --probe-seconds 30 \
  --prompt "hello"
```

Sliding 60-second window recovery:

```bash
cargo run -- rpm \
  --mode sliding-window \
  --provider openai \
  --rpm 120 \
  --burst 120 \
  --probe-seconds 90 \
  --prompt "hello"
```

Fixed window boundary behavior:

```bash
cargo run -- rpm \
  --mode window-boundary \
  --provider openai \
  --rpm 120 \
  --burst 120 \
  --window-offset-ms 500 \
  --prompt "hello"
```

Unknown backend diagnosis:

```bash
cargo run -- rpm \
  --mode diagnose \
  --provider openai \
  --rpm 120 \
  --burst 120 \
  --probe-seconds 90 \
  --prompt "hello"
```

Mode meanings:

- `sustained`: starts one request every `60 / rpm` seconds and measures stable throughput.
- `burst`: starts `burst` requests at the same time and measures immediate burst capacity.
- `token-bucket`: bursts first, then probes refill behavior based on the target RPM.
- `sliding-window`: bursts first, then probes recovery across a rolling 60-second window.
- `window-boundary`: sends batches before and after a minute boundary to check fixed-window reset behavior.
- `diagnose`: combines several probes and writes a best-effort limiter inference into the report.

Real LLM services often combine multiple limiters, such as RPM, TPM, maximum concurrency, queue length, account-level limits, model-level limits, and region-level limits. Treat mode inference as a useful signal rather than proof of a single backend algorithm.

## Reports

Benchmark and RPM commands print a terminal summary with success counts, failures, latency percentiles, errors, and the report path. JSON reports are written under `reports/*.json`; the `reports` directory is ignored by git.

Benchmark reports include `wrong_cases`; each wrong case contains the case id, question, expected answer, extracted actual answer, and raw model output. RPM reports include request counts, mode, target RPM, observed RPM, latency, error counts, and mode-specific details such as burst summaries, probe summaries, window-boundary summaries, and optional limiter inference.

Use `--debug-raw` with `check`, `bench`, or `rpm` to write upstream raw responses under `outputs/debug/`. Non-streaming requests save the raw JSON body, and streaming requests save the raw SSE lines. The directory is ignored by git and can help diagnose relay-side response rewriting.

## Comparing Scores

Treat these results as relay benchmark signals, not as absolute proof on their own. To compare against official scores or another run, hold the dataset and source, prompt text, temperature, `max_tokens`, sample limit, and scoring logic constant. A difference in any of those inputs can make the reported accuracy diverge from official numbers or from other benchmark harnesses.
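As a concrete example of an aligned comparison, run the same dataset with the same limit and concurrency against both providers, then compare the two JSON reports written under `reports/`. Only flags documented above are used; the values are illustrative:

```bash
# Same dataset, same sample limit, same concurrency; only the provider
# and its default model differ, so report deltas are easier to attribute.
cargo run -- bench aime2026 \
  --config config.yaml \
  --provider openai \
  --model gpt-4o-mini \
  --concurrency 4 \
  --limit 30

cargo run -- bench aime2026 \
  --config config.yaml \
  --provider anthropic \
  --model claude-3-5-sonnet-latest \
  --concurrency 4 \
  --limit 30
```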