Omit `--limit` to run all locally available cases.
## RPM Testing
Run a sustained RPM test. This is the default mode and starts requests at a stable interval:
```bash
cargo run -- rpm \
  --config config.yaml \
  --provider openai \
  --model gpt-4o-mini \
  --mode sustained \
  --rpm 60 \
  --duration 60s \
  --prompt "Reply with pong."
```
Durations use `s` or `m`, such as `30s` or `5m`.
### RPM Modes
Use a specific mode when you already know the backend limiter shape. Use `diagnose` when you do not know the backend shape and want a combined probe.
Stable RPM:
```bash
cargo run -- rpm --mode sustained --provider openai --rpm 120 --duration 60s --prompt "hello"
```
Instant burst capacity:
```bash
cargo run -- rpm --mode burst --provider openai --burst 120 --prompt "hello"
```
Token bucket capacity plus refill behavior:
```bash
cargo run -- rpm \
  --mode token-bucket \
  --provider openai \
  --rpm 120 \
  --burst 120 \
  --probe-seconds 30 \
  --prompt "hello"
```
Sliding 60-second window recovery:
```bash
cargo run -- rpm \
  --mode sliding-window \
  --provider openai \
  --rpm 120 \
  --burst 120 \
  --probe-seconds 90 \
  --prompt "hello"
```
Fixed window boundary behavior:
```bash
cargo run -- rpm \
  --mode window-boundary \
  --provider openai \
  --rpm 120 \
  --burst 120 \
  --window-offset-ms 500 \
  --prompt "hello"
```
Unknown backend diagnosis:
```bash
cargo run -- rpm \
  --mode diagnose \
  --provider openai \
  --rpm 120 \
  --burst 120 \
  --probe-seconds 90 \
  --prompt "hello"
```
Mode meanings:
- `sustained`: starts one request every `60 / rpm` seconds and measures stable throughput. At `--rpm 120`, for example, a request starts every 0.5 seconds.
- `burst`: starts `burst` requests at the same time and measures immediate burst capacity.
- `token-bucket`: bursts first, then probes refill behavior based on the target RPM.
- `sliding-window`: bursts first, then probes recovery across a rolling 60-second window.
- `window-boundary`: sends batches before and after a minute boundary to check fixed-window reset behavior.
- `diagnose`: combines several probes and writes a best-effort limiter inference into the report.
Real LLM services often combine multiple limiters, such as RPM, TPM, maximum concurrency, queue length, account-level limits, model-level limits, and region-level limits. Treat mode inference as a useful signal rather than proof of a single backend algorithm.
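To make the probed shapes concrete, the sketch below models a token bucket, the limiter that `token-bucket` mode targets: capacity bounds the instantaneous burst, and the refill rate bounds sustained throughput. This is an illustrative model under those assumptions, not this tool's implementation, and every name in it is invented for the example.
```rust
use std::time::{Duration, Instant};

// Illustrative token bucket (not this tool's code): `capacity`
// bounds the instantaneous burst; `refill_per_sec` (rpm / 60)
// bounds sustained throughput.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec, last: Instant::now() }
    }

    // Returns true if a request may start now.
    fn try_acquire(&mut self) -> bool {
        let now = Instant::now();
        // Refill in proportion to elapsed time, capped at capacity.
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        self.last = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // `--rpm 120 --burst 120` corresponds to capacity 120, refill 2 tokens/sec.
    let mut bucket = TokenBucket::new(120.0, 120.0 / 60.0);
    // The first 120 immediate requests pass (what `burst` measures) ...
    let granted = (0..200).filter(|_| bucket.try_acquire()).count();
    println!("granted {granted} of 200 immediate requests");
    // ... afterwards roughly 2 requests/sec pass (what `sustained` measures).
    std::thread::sleep(Duration::from_secs(1));
    println!("admitted after a 1s refill: {}", bucket.try_acquire());
}
```
A sliding-window limiter differs in that admission depends on how many requests started in the trailing 60 seconds, so recovery after a burst is gradual; that difference is what the longer `--probe-seconds 90` run in `sliding-window` mode is designed to expose.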
## Reports
Benchmark and RPM commands print a terminal summary with success counts, failures, latency percentiles, errors, and the report path. JSON reports are written under `reports/*.json`; the `reports` directory is ignored by git.
Benchmark reports include `wrong_cases`, with each wrong case containing the case id, question, expected answer, extracted actual answer, and raw model output. RPM reports include request counts, mode, target RPM, observed RPM, latency, error counts, and mode-specific details such as burst summaries, probe summaries, window-boundary summaries, and optional limiter inference.
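Because reports are plain JSON, they are easy to post-process. The sketch below prints the wrong cases from a benchmark report; it assumes `serde_json` as a dependency, and the path `reports/example.json` plus the subfield names `id`, `expected_answer`, and `actual_answer` are illustrative guesses, so check a generated report for the exact keys.
```rust
use std::fs;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Hypothetical path; real report filenames are generated by the tool.
    let raw = fs::read_to_string("reports/example.json")?;
    let report: serde_json::Value = serde_json::from_str(&raw)?;

    // `wrong_cases` is documented above; the subfield names are assumptions.
    if let Some(cases) = report["wrong_cases"].as_array() {
        for case in cases {
            println!(
                "case {}: expected {}, got {}",
                case["id"], case["expected_answer"], case["actual_answer"]
            );
        }
    }
    Ok(())
}
```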
## Comparing Scores