- Modify: `src/benchmarks/aime.rs`
- Modify: `src/benchmarks/gpqa.rs`
- Modify: `src/cli.rs`
- Create: `src/report.rs`

- [ ] **Step 1: Add metrics tests**

Test success/error counts, accuracy, and latency percentile calculation.

Use `hdrhistogram::Histogram<u64>` for elapsed milliseconds and counters for success, failure, correct, and total judged.
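
A minimal sketch of that metrics shape, assuming the `hdrhistogram` crate; the `Metrics` struct and method names below are illustrative placeholders, not the final API:

```rust
use hdrhistogram::Histogram;

/// Illustrative metrics container; field and method names are placeholders.
pub struct Metrics {
    latency_ms: Histogram<u64>,
    success: u64,
    failure: u64,
    correct: u64,
    judged: u64,
}

impl Metrics {
    pub fn new() -> Self {
        Self {
            // 3 significant figures is plenty of resolution for millisecond latencies.
            latency_ms: Histogram::new(3).expect("valid histogram parameters"),
            success: 0,
            failure: 0,
            correct: 0,
            judged: 0,
        }
    }

    /// Record a completed, judged request.
    pub fn record_success(&mut self, elapsed_ms: u64, correct: bool) {
        let _ = self.latency_ms.record(elapsed_ms);
        self.success += 1;
        self.judged += 1;
        if correct {
            self.correct += 1;
        }
    }

    pub fn record_failure(&mut self) {
        self.failure += 1;
    }

    pub fn accuracy(&self) -> f64 {
        if self.judged == 0 {
            0.0
        } else {
            self.correct as f64 / self.judged as f64
        }
    }

    /// e.g. `percentile_ms(0.95)` for p95.
    pub fn percentile_ms(&self, quantile: f64) -> u64 {
        self.latency_ms.value_at_quantile(quantile)
    }
}
```

Keeping plain counters alongside the histogram keeps accuracy independent of latency sampling.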

- [ ] **Step 2.5: Implement terminal and JSON reports**

Create `src/report.rs` with serializable report structs and a writer that creates `reports/` when needed. Benchmark runs must print a terminal summary and also save a JSON report.

Terminal output for benchmark runs must include:
```text
accuracy: 83.33%
success: 30/30
latency:
  p50: 1240 ms
  p95: 2860 ms
  p99: 3120 ms
errors:
  http_429: 2
wrong cases:
  - id: aime2026_007
    expected: 42
    actual: 41
report: reports/aime2026-openai-gpt-4o-mini-20260506-153012.json
```

Benchmark report JSON must include:
```json
{
  "benchmark": "aime2026",
  "provider": "openai",
  "model": "gpt-4o-mini",
  "dataset": {
    "source": "huggingface:MathArena/aime_2026",
    "split": "train",
    "revision": null,
    "local_path": "data/benchmarks/aime2026/train-00000-of-00001.parquet"
  },
  "run": {
    "started_at": "2026-05-06T15:30:12Z",
    "duration_ms": 48210,
    "concurrency": 4,
    "limit": 30,
    "temperature": 0.0,
    "max_tokens": 1024
  },
  "summary": {
    "accuracy": 0.8333,
    "success": 30,
    "total": 30,
    "correct": 25,
    "wrong": 5,
    "failed": 0,
    "latency_ms": {
      "p50": 1240,
      "p95": 2860,
      "p99": 3120
    }
  },
  "errors": [
    {
      "code": "http_429",
      "count": 2
    }
  ],
  "wrong_cases": [
    {
      "id": "aime2026_007",
      "question": "problem text",
      "expected": "42",
      "actual": "41",
      "raw_output": "41"
    }
  ]
}
```

Default report behavior (a writer sketch follows this list):
- Save complete `wrong_cases`.
- Do not save every successful case.
- Use a readable filename shape: `<benchmark>-<provider>-<model>-<timestamp>.json`.
- Sanitize model names for filenames.
- Print the report path after writing the file.
- RPM reports should use the same writer style, but omit accuracy and wrong cases.
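
A rough sketch of the writer under these defaults, assuming `serde`/`serde_json`; `BenchmarkReport`, `sanitize`, and `write_report` are placeholder names, and the real struct would mirror the JSON schema above:

```rust
use std::{fs, path::PathBuf};
use serde::Serialize;

/// Placeholder top-level report; the real struct mirrors the JSON schema above.
#[derive(Serialize)]
pub struct BenchmarkReport {
    pub benchmark: String,
    pub provider: String,
    pub model: String,
    // dataset, run, summary, errors, wrong_cases ...
}

/// Map anything that is not filename-safe to '-', so a model name like
/// "org/model:latest" becomes "org-model-latest".
fn sanitize(name: &str) -> String {
    name.chars()
        .map(|c| if c.is_ascii_alphanumeric() || c == '-' || c == '.' { c } else { '-' })
        .collect()
}

/// Write the JSON report, creating `reports/` when needed, and print its path.
pub fn write_report(report: &BenchmarkReport, timestamp: &str) -> std::io::Result<PathBuf> {
    fs::create_dir_all("reports")?;
    let path = PathBuf::from(format!(
        "reports/{}-{}-{}-{}.json",
        sanitize(&report.benchmark),
        sanitize(&report.provider),
        sanitize(&report.model),
        timestamp
    ));
    fs::write(&path, serde_json::to_string_pretty(report).expect("report serializes"))?;
    println!("report: {}", path.display());
    Ok(path)
}
```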

- [ ] **Step 3: Implement local benchmark loading**

For AIME 2026 and GPQA-Diamond, read local files under `data/benchmarks`. If data is missing, return a clear error: `missing local dataset; run lq_token_test dataset fetch <name>`.
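
A small sketch of the missing-data check, assuming `anyhow` for error handling; `local_dataset_path` is a hypothetical helper:

```rust
use std::path::{Path, PathBuf};
use anyhow::{bail, Result};

/// Resolve a benchmark's local dataset file, failing with the documented
/// error message when it has not been fetched yet.
fn local_dataset_path(name: &str, file: &str) -> Result<PathBuf> {
    let path = Path::new("data/benchmarks").join(name).join(file);
    if !path.exists() {
        bail!("missing local dataset; run lq_token_test dataset fetch {name}");
    }
    Ok(path)
}
```

For example, `local_dataset_path("aime2026", "train-00000-of-00001.parquet")` resolves the `local_path` recorded in the report JSON above.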

Use `futures::stream` with `buffer_unordered(concurrency)` to run cases concurrently. Apply `limit` before execution. Judge each response and update metrics.

After benchmark execution finishes, populate the report model and write the JSON report even when some cases fail.
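
A sketch of that execution shape, assuming a tokio runtime; `Case`, `Outcome`, and `run_case` are hypothetical stand-ins, and `Metrics` refers to the sketch above:

```rust
use anyhow::Result;
use futures::stream::{self, StreamExt};

// Hypothetical shapes; the real types live in the benchmark modules.
struct Case;
struct Outcome { elapsed_ms: u64, correct: bool }

async fn run_case(_case: Case) -> Result<Outcome> {
    unimplemented!("send request, judge answer, measure latency")
}

async fn run_all(cases: Vec<Case>, concurrency: usize, limit: usize, metrics: &mut Metrics) {
    // Apply `limit` before execution, then fan out with bounded concurrency.
    let mut results = stream::iter(cases.into_iter().take(limit))
        .map(run_case)
        .buffer_unordered(concurrency);

    // Drain results as they complete; failed cases still count toward the report.
    while let Some(result) = results.next().await {
        match result {
            Ok(o) => metrics.record_success(o.elapsed_ms, o.correct),
            Err(_) => metrics.record_failure(),
        }
    }
}
```

Because failures only increment counters, the report can always be populated and written after the stream is drained.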

- [ ] **Step 5: Implement RPM command**

Parse duration strings like `60s` and `5m`, calculate the delay between requests from `rpm`, run repeated requests, and print a latency/error summary.

RPM terminal output and JSON report must include target RPM, actual request count, success count, failure count, latency p50/p95/p99, and errors by status/code.
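
A sketch of the parsing and pacing math; `parse_duration` and `delay_between_requests` are placeholder helpers, with `anyhow` assumed for errors:

```rust
use std::time::Duration;
use anyhow::{bail, Result};

/// Parse durations like "60s" or "5m".
fn parse_duration(s: &str) -> Result<Duration> {
    if s.len() < 2 || !s.is_ascii() {
        bail!("invalid duration {s:?}; expected forms like 60s or 5m");
    }
    let (num, unit) = s.split_at(s.len() - 1);
    let n: u64 = num.parse()?;
    match unit {
        "s" => Ok(Duration::from_secs(n)),
        "m" => Ok(Duration::from_secs(n * 60)),
        _ => bail!("unsupported unit {unit:?}; use s or m"),
    }
}

/// Evenly pace request starts: rpm = 120 -> one request every 500 ms.
fn delay_between_requests(rpm: u32) -> Duration {
    Duration::from_millis(60_000 / rpm.max(1) as u64)
}
```

Pacing by request start time, rather than sleeping after each response, keeps the actual rate near the target even when latencies vary.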

- [ ] **Step 6: Run tests**

Run: `cargo test`

Expected: unit tests pass.

- [ ] **Step 7: Commit**

```bash
git add src/metrics.rs src/report.rs src/benchmarks src/cli.rs
git commit -m "feat: run benchmark and rpm tests"
```