# lq_token_test Design

Date: 2026-05-06

## Goal

Build a Rust CLI tool for testing an LLM relay service. The tool should start with reliable single-request checks, then grow into RPM/concurrency testing and dataset-based accuracy evaluation.

The first implementation will use a modular single-crate architecture. This keeps initialization light while still giving clear boundaries for config loading, CLI parsing, protocol adapters, request execution, benchmarks, and metrics.

## Scope

First phase:

- Read relay configuration from YAML.
- Manage relay base URLs, API tokens, default models, and provider settings.
- Support OpenAI-compatible chat requests.
- Add Anthropic-compatible request structure behind the same internal runner boundary.
- Provide CLI subcommands for single checks, benchmark smoke tests, and RPM/concurrency tests.
- Support local JSONL benchmark files.
- Include simple exact-match and GSM8K-style numeric answer judging.
- Report success count, error count, latency summaries, and benchmark accuracy.

Later phase:

- Add stricter benchmark profiles for comparing against official model reports.
- Record prompt template, dataset version, sampling parameters, model identity, and scoring method.
- Add larger public dataset ingestion only after the local JSONL format is stable.
## Architecture

Use scheme B: one binary crate split into internal modules.

Planned structure:

```text
src/
  main.rs
  cli.rs
  config.rs
  runner.rs
  metrics.rs
  protocols/
    mod.rs
    openai.rs
    anthropic.rs
  benchmark/
    mod.rs
    dataset.rs
    judge.rs
```

Module responsibilities:

- `cli`: Defines subcommands and flags using `clap`.
- `config`: Loads YAML config, resolves environment variable references, and validates provider settings.
- `protocols`: Converts internal request data into provider-specific HTTP requests and parses responses.
- `runner`: Executes one logical model request and returns response text, elapsed time, provider metadata, and errors (a sketch of this boundary follows the list).
- `benchmark`: Reads JSONL test cases, runs them through the runner, and judges answers.
- `metrics`: Aggregates latency, success rate, error distribution, and accuracy summaries.
- `main`: Wires the CLI, config, runner, and command handlers together.
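A minimal sketch of the runner/protocols boundary. All names here (`ChatRequest`, `ChatOutcome`, `ProtocolAdapter`) are placeholders, not settled API:

```rust
use std::time::Duration;

/// Provider-agnostic request data the runner hands to a protocol adapter.
pub struct ChatRequest {
    pub model: String,
    pub prompt: String,
}

/// What `runner` returns to command handlers; errors travel via `Result`.
pub struct ChatOutcome {
    pub text: String,
    pub elapsed: Duration,
}

/// Implemented by `protocols::openai` and `protocols::anthropic`.
/// Native `async fn` in traits is stable (Rust 1.75+); if the runner needs
/// `dyn ProtocolAdapter`, the `async-trait` crate is one way to get there.
pub trait ProtocolAdapter {
    fn name(&self) -> &'static str;
    async fn send(&self, req: &ChatRequest) -> anyhow::Result<ChatOutcome>;
}
```
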
## CLI Shape

Initial commands:

```bash
lq_token_test check --config config.yaml --provider openai --model gpt-4o-mini --prompt "hello"
lq_token_test bench --config config.yaml --provider openai --dataset benchmarks/smoke.jsonl --concurrency 4
lq_token_test rpm --config config.yaml --provider openai --rpm 60 --duration 60s --prompt "hello"
```

The `check` command proves that a relay, token, model, and protocol shape work.
The `bench` command runs local JSONL cases and reports accuracy plus request metrics.
The `rpm` command sends repeated requests at a target rate and reports latency and error behavior.
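A minimal `clap` sketch of these subcommands. Flag names mirror the examples above; the `--config` default and the optional `--provider`/`--model` fallbacks are assumptions:

```rust
use clap::{Parser, Subcommand};

#[derive(Parser)]
#[command(name = "lq_token_test")]
struct Cli {
    /// Path to the YAML config file.
    #[arg(long, global = true, default_value = "config.yaml")]
    config: String,

    #[command(subcommand)]
    command: Command,
}

#[derive(Subcommand)]
enum Command {
    /// Single request against one provider/model.
    Check {
        #[arg(long)]
        provider: Option<String>,
        #[arg(long)]
        model: Option<String>,
        #[arg(long)]
        prompt: String,
    },
    /// Run a local JSONL benchmark.
    Bench {
        #[arg(long)]
        provider: Option<String>,
        #[arg(long)]
        dataset: String,
        #[arg(long, default_value_t = 1)]
        concurrency: usize,
    },
    /// Sustained request-rate test.
    Rpm {
        #[arg(long)]
        provider: Option<String>,
        #[arg(long)]
        rpm: u32,
        /// Duration such as "60s"; parsing is left to the implementation.
        #[arg(long)]
        duration: String,
        #[arg(long)]
        prompt: String,
    },
}
```
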
## Config Format

Example:

```yaml
default_provider: openai
providers:
  openai:
    protocol: openai
    base_url: "https://relay.example.com/v1"
    api_token: "${OPENAI_RELAY_TOKEN}"
    default_model: "gpt-4o-mini"
  anthropic:
    protocol: anthropic
    base_url: "https://relay.example.com"
    api_token: "${ANTHROPIC_RELAY_TOKEN}"
    default_model: "claude-3-5-sonnet-latest"
benchmarks:
  smoke:
    path: "benchmarks/smoke.jsonl"
    kind: "qa_exact"
```

API tokens should be allowed directly in YAML for local testing, but environment variable references are preferred.
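A sketch of the matching config types plus a hypothetical `${VAR}` expansion helper; `expand_env` and `load_config` are illustrative names, not crate APIs:

```rust
use std::collections::HashMap;
use serde::Deserialize;

#[derive(Debug, Deserialize)]
pub struct Config {
    pub default_provider: String,
    pub providers: HashMap<String, ProviderConfig>,
    #[serde(default)]
    pub benchmarks: HashMap<String, BenchmarkConfig>,
}

#[derive(Debug, Deserialize)]
pub struct ProviderConfig {
    pub protocol: String,
    pub base_url: String,
    pub api_token: String,
    pub default_model: String,
}

#[derive(Debug, Deserialize)]
pub struct BenchmarkConfig {
    pub path: String,
    pub kind: String,
}

/// Parse the YAML file; `${VAR}` expansion is left to the caller.
pub fn load_config(path: &str) -> anyhow::Result<Config> {
    Ok(serde_yaml::from_str(&std::fs::read_to_string(path)?)?)
}

/// Expand a `${VAR}` reference; any other string passes through unchanged.
pub fn expand_env(value: &str) -> anyhow::Result<String> {
    match value.strip_prefix("${").and_then(|v| v.strip_suffix('}')) {
        Some(name) => std::env::var(name)
            .map_err(|_| anyhow::anyhow!("environment variable {name} is not set")),
        None => Ok(value.to_owned()),
    }
}
```
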
## Benchmark Format

Use JSONL so datasets can be streamed and edited without custom tooling.

Example:

```jsonl
{"id":"smoke_001","question":"What is 2 + 2?","answer":"4","kind":"exact"}
{"id":"gsm8k_001","question":"Natalia sold clips and earned 72 dollars. How much did she earn?","answer":"72","kind":"gsm8k"}
```

Judging rules:

- `exact`: normalize whitespace and compare the full answer.
- `gsm8k`: extract the final numeric answer from model output and compare with the expected answer (both rules are sketched after this list).
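A sketch of both rules under those definitions. `Case`, `judge_exact`, and `judge_gsm8k` are placeholder names, and thousands separators in model output are deliberately left unhandled here:

```rust
use serde::Deserialize;

/// One JSONL line in the format shown above.
#[derive(Debug, Deserialize)]
pub struct Case {
    pub id: String,
    pub question: String,
    pub answer: String,
    pub kind: String,
}

/// `exact`: collapse runs of whitespace, then compare the full answer.
pub fn judge_exact(expected: &str, got: &str) -> bool {
    let norm = |s: &str| s.split_whitespace().collect::<Vec<_>>().join(" ");
    norm(expected) == norm(got)
}

/// `gsm8k`: take the last number in the model output and compare numerically.
pub fn judge_gsm8k(expected: &str, got: &str) -> bool {
    let re = regex::Regex::new(r"-?\d+(?:\.\d+)?").expect("static regex");
    let last = re.find_iter(got).last().map(|m| m.as_str());
    match (last, expected.trim().parse::<f64>()) {
        (Some(n), Ok(e)) => n
            .parse::<f64>()
            .map(|v| (v - e).abs() < 1e-9)
            .unwrap_or(false),
        _ => false,
    }
}
```
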
The first-phase accuracy score is an internal evaluation signal. It should not be presented as matching or disproving official model accuracy until dataset version, prompt template, sample count, temperature, and scoring method are aligned with the official report.

## Dependencies

Recommended crates:

- `clap`: CLI parser and subcommands.
- `serde`, `serde_yaml`, `serde_json`: config and dataset parsing.
- `tokio`: async runtime.
- `reqwest`: HTTP client.
- `anyhow`: simple top-level CLI error handling.
- `thiserror`: structured module-level errors.
- `tracing`, `tracing-subscriber`: logs.
- `indicatif`: progress display for benchmark and RPM runs.
- `hdrhistogram`: latency percentiles (sketched at the end of this section).
- `regex`: answer extraction for GSM8K-style judging.

Prefer `reqwest` with Rustls TLS. Avoid provider SDKs in the first phase so protocol compatibility remains transparent and easy to inspect.
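A sketch of latency aggregation with `hdrhistogram`. Recording in milliseconds with a 60-second upper bound and 3 significant figures is an assumption, not a settled choice:

```rust
use hdrhistogram::Histogram;

pub struct LatencyStats {
    hist: Histogram<u64>,
}

impl LatencyStats {
    pub fn new() -> Self {
        // Track 1 ms .. 60 s with 3 significant figures of precision.
        Self {
            hist: Histogram::new_with_bounds(1, 60_000, 3).expect("static bounds"),
        }
    }

    pub fn record_ms(&mut self, ms: u64) {
        // saturating_record clamps out-of-range values instead of erroring.
        self.hist.saturating_record(ms);
    }

    pub fn summary(&self) -> String {
        format!(
            "p50={}ms p95={}ms p99={}ms max={}ms n={}",
            self.hist.value_at_quantile(0.50),
            self.hist.value_at_quantile(0.95),
            self.hist.value_at_quantile(0.99),
            self.hist.max(),
            self.hist.len(),
        )
    }
}
```
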
## Error Handling

The CLI should surface concise user-facing errors:

- Missing config file.
- Unknown provider.
- Missing API token or unresolved environment variable.
- Unsupported protocol.
- HTTP status failures.
- Provider response parse failures.
- Invalid benchmark JSONL lines.

Benchmark and RPM commands should continue after per-request failures and include failures in the final summary.
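A sketch of a module-level error enum with `thiserror` covering the cases above; the variant names are illustrative, and the CLI layer can wrap these in `anyhow` for display:

```rust
use thiserror::Error;

#[derive(Debug, Error)]
pub enum RunError {
    #[error("config file not found: {0}")]
    ConfigMissing(String),
    #[error("unknown provider: {0}")]
    UnknownProvider(String),
    #[error("environment variable {0} is not set")]
    UnresolvedEnv(String),
    #[error("unsupported protocol: {0}")]
    UnsupportedProtocol(String),
    #[error("HTTP status {status} from {url}")]
    HttpStatus { status: u16, url: String },
    #[error("failed to parse provider response: {0}")]
    ResponseParse(String),
    #[error("invalid benchmark line {line}: {reason}")]
    BadDatasetLine { line: usize, reason: String },
}
```
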
## Testing

Initial tests should cover:

- YAML config loading and environment variable expansion.
- JSONL benchmark parsing.
- Exact-match judging.
- GSM8K-style numeric extraction.
- Metrics aggregation.

Network tests should be kept opt-in because they require real relay credentials.
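A sketch of the kind of unit tests meant here, exercising the hypothetical `expand_env` and `judge_exact` helpers from the earlier sketches:

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn plain_tokens_pass_through_unchanged() {
        assert_eq!(expand_env("plain-token").unwrap(), "plain-token");
    }

    #[test]
    fn exact_judging_normalizes_whitespace() {
        assert!(judge_exact("4", "  4 "));
        assert!(!judge_exact("4", "five"));
    }
}
```
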
## First Implementation Decisions

- OpenAI-compatible support starts with `/chat/completions` (a request sketch follows this list).
- Anthropic support includes a real adapter boundary and request shape in the first pass, but end-to-end verification can wait until a real Anthropic-compatible relay config is available.
- Benchmark cases use a minimal default prompt template: ask the question and request only the final answer. Dataset-specific templates can be added later.
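A sketch of that first call. The request and response shapes follow the public `/chat/completions` wire format; `chat_once` is a placeholder name and error handling is trimmed:

```rust
use serde_json::json;

/// Send one chat request through the relay and return the reply text.
pub async fn chat_once(
    client: &reqwest::Client,
    base_url: &str,
    token: &str,
    model: &str,
    prompt: &str,
) -> anyhow::Result<String> {
    let body = json!({
        "model": model,
        "messages": [{ "role": "user", "content": prompt }],
    });
    let resp = client
        .post(format!("{base_url}/chat/completions"))
        .bearer_auth(token)
        .json(&body)
        .send()
        .await?
        .error_for_status()?;
    let v: serde_json::Value = resp.json().await?;
    v["choices"][0]["message"]["content"]
        .as_str()
        .map(str::to_owned)
        .ok_or_else(|| anyhow::anyhow!("unexpected response shape"))
}
```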