lq_token_test is a Rust CLI for checking LLM relay compatibility, running small benchmark probes, and testing request pacing against RPM targets. It supports OpenAI-compatible, Anthropic-compatible, and Google Gemini relay protocols, fetches local benchmark datasets, prints terminal summaries, and writes JSON reports for later comparison.
Common development commands:
cargo fmt --check
cargo test
cargo clippy --all-targets -- -D warnings
cargo run -- --help
Build a release binary with:
cargo build --release
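You can then run the compiled binary directly; a minimal sketch, assuming the binary name matches the crate name lq_token_test:
./target/release/lq_token_test --help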
Copy config.example.yaml to config.yaml, then set relay URLs, model names, and token environment variables for your environment.
default_provider: openai
providers:
  openai:
    protocol: openai
    base_url: "https://relay.example.com/v1"
    api_token: "${OPENAI_RELAY_TOKEN}"
    default_model: "gpt-4o-mini"
    thinking:
      enabled: false
      reasoning_effort: "medium"
      reasoning_summary: "auto"
  anthropic:
    protocol: anthropic
    base_url: "https://relay.example.com"
    api_token: "${ANTHROPIC_RELAY_TOKEN}"
    default_model: "claude-3-5-sonnet-latest"
    thinking:
      enabled: false
      type: "enabled"
      budget_tokens: 10000
      display: "omitted"
  google:
    protocol: google
    base_url: "https://generativelanguage.googleapis.com/v1beta"
    api_token: "${GOOGLE_API_KEY}"
    default_model: "gemini-3-pro-preview"
    stream: true
    thinking:
      enabled: true
      budget_tokens: 5000
      effort: "high"
      display: "summarized"
benchmarks:
  data_dir: "data/benchmarks"
  aime2026:
    source: "huggingface:MathArena/aime_2026"
    split: "train"
  gpqa_diamond:
    source: "huggingface:Idavidrein/gpqa"
    split: "gpqa_diamond"
Values written as ${ENV_NAME} are expanded when the config is loaded. For example:
export OPENAI_RELAY_TOKEN="..."
export ANTHROPIC_RELAY_TOKEN="..."
export GOOGLE_API_KEY="..."
Thinking can also be enabled per run with CLI overrides such as --thinking true, --thinking-type enabled, --thinking-budget-tokens 10000, --thinking-effort high, --thinking-display omitted, --reasoning-effort high, and --reasoning-summary auto. For Anthropic, enabling thinking omits temperature from the upstream request.
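For example, a per-run override on the check command described below, using only the flags listed above (a sketch; exact flag combinations may vary by provider):
cargo run -- check \
  --config config.yaml \
  --provider anthropic \
  --model claude-3-5-sonnet-latest \
  --thinking true \
  --thinking-type enabled \
  --thinking-budget-tokens 10000 \
  --prompt "Reply with the word ready."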
For Google Gemini, thinking settings are sent under generationConfig.thinkingConfig:
- budget_tokens maps to thinkingBudget.
- effort maps to thinkingLevel.
- display: summarized sends includeThoughts: true.
- display: omitted sends includeThoughts: false.

If both budget_tokens and effort are configured, both fields are sent. Some Gemini backends or relays may reject a mixed Gemini 2.5/Gemini 3 style request, so prefer model-specific configs.
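A minimal sketch of that mapping, annotated on the config keys (request paths per the list above; mixing budget_tokens and effort here is only to show the mapping, so follow the model-specific configs below in practice):
thinking:
  enabled: true
  budget_tokens: 5000     # -> generationConfig.thinkingConfig.thinkingBudget
  effort: "high"          # -> generationConfig.thinkingConfig.thinkingLevel
  display: "summarized"   # -> generationConfig.thinkingConfig.includeThoughts: true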
Recommended Gemini 3 config:
google:
  protocol: google
  base_url: "https://generativelanguage.googleapis.com/v1beta"
  api_token: "${GOOGLE_API_KEY}"
  default_model: "gemini-3-pro-preview"
  stream: true
  thinking:
    enabled: true
    effort: "high"
    display: "summarized"
Recommended Gemini 2.5 config:
google:
  protocol: google
  base_url: "https://generativelanguage.googleapis.com/v1beta"
  api_token: "${GOOGLE_API_KEY}"
  default_model: "gemini-2.5-pro"
  stream: true
  thinking:
    enabled: true
    budget_tokens: 5000
    display: "summarized"
If enabled: true is set without budget_tokens, effort, or display, the Google adapter does not send thinkingConfig; add display when you want to explicitly request or suppress thought summaries.
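A per-run sketch that requests summaries on top of a bare enabled: true config, assuming --thinking-display accepts the same values as the config display field:
cargo run -- check \
  --config config.yaml \
  --provider google \
  --model gemini-3-pro-preview \
  --thinking true \
  --thinking-display summarized \
  --prompt "Reply with the word ready."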
Fetch AIME 2026 into the configured benchmark data directory:
cargo run -- dataset fetch aime2026
The AIME fetch currently reads Hugging Face rows and normalizes them into a local JSONL file at data/benchmarks/aime2026/aime2026.jsonl with id, problem, and answer fields. Dataset metadata is written next to the downloaded file.
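To spot-check the normalized rows after a fetch, a sketch assuming jq is installed (field names as described above):
head -n 1 data/benchmarks/aime2026/aime2026.jsonl | jq '{id, problem, answer}'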
Fetch GPQA-Diamond:
cargo run -- dataset fetch gpqa-diamond
GPQA-Diamond is downloaded as data/benchmarks/gpqa_diamond/gpqa_diamond.csv. If Hugging Face requires authentication, set HF_TOKEN before fetching:
export HF_TOKEN="..."
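After a successful fetch you can peek at the CSV header row; the column names depend on the upstream Hugging Face dataset:
head -n 1 data/benchmarks/gpqa_diamond/gpqa_diamond.csv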
Dataset files under data/benchmarks are ignored by git.
Run a simple OpenAI-compatible relay check:
cargo run -- check \
--config config.yaml \
--provider openai \
--model gpt-4o-mini \
--prompt "Reply with the word ready."
Run a simple Anthropic-compatible relay check:
cargo run -- check \
--config config.yaml \
--provider anthropic \
--model claude-3-5-sonnet-latest \
--prompt "Reply with the word ready."
Run a simple Google Gemini relay check:
cargo run -- check \
--config config.yaml \
--provider google \
--model gemini-3-pro-preview \
--prompt "Reply with the word ready."
The check command prints the HTTP status, elapsed milliseconds, and model text.
Run an AIME 2026 benchmark:
cargo run -- bench aime2026 \
--config config.yaml \
--provider openai \
--model gpt-4o-mini \
--concurrency 4 \
--limit 10
AIME prompts ask the model to put the final answer in \boxed{}. Scoring extracts the last boxed integer first, then falls back only to explicit final-answer forms such as Final answer: 393, Answer: 393, or The answer is 393; unconstrained trailing integers are treated as no_answer.
Run a GPQA-Diamond benchmark:
cargo run -- bench gpqa-diamond \
--config config.yaml \
--provider openai \
--model gpt-4o-mini \
--concurrency 4 \
--limit 10
Omit --limit to run all locally available cases.
GPQA-Diamond prompts and scoring follow the OpenAI simple-evals style: the model is instructed to put Answer: $LETTER on the last line, where LETTER is one of A, B, C, or D. Scoring extracts answers with the same strict Answer: pattern, so a bare final C or prose such as I choose C is treated as no_answer.
Run a sustained RPM test. This is the default mode and starts requests at a stable interval:
cargo run -- rpm \
--config config.yaml \
--provider openai \
--model gpt-4o-mini \
--mode sustained \
--rpm 60 \
--duration 60s \
--prompt "Reply with pong."
Durations use s or m, such as 30s or 5m.
Use a specific mode when you already know the backend limiter shape. Use diagnose when you do not know the backend shape and want a combined probe.
Stable RPM:
cargo run -- rpm --mode sustained --provider openai --rpm 120 --duration 60s --prompt "hello"
Instant burst capacity:
cargo run -- rpm --mode burst --provider openai --burst 120 --prompt "hello"
Token bucket capacity plus refill behavior:
cargo run -- rpm \
--mode token-bucket \
--provider openai \
--rpm 120 \
--burst 120 \
--probe-seconds 30 \
--prompt "hello"
Sliding 60 second window recovery:
cargo run -- rpm \
--mode sliding-window \
--provider openai \
--rpm 120 \
--burst 120 \
--probe-seconds 90 \
--prompt "hello"
Fixed window boundary behavior:
cargo run -- rpm \
--mode window-boundary \
--provider openai \
--rpm 120 \
--burst 120 \
--window-offset-ms 500 \
--prompt "hello"
Unknown backend diagnosis:
cargo run -- rpm \
--mode diagnose \
--provider openai \
--rpm 120 \
--burst 120 \
--probe-seconds 90 \
--prompt "hello"
Mode meanings:
- sustained: starts one request every 60 / rpm seconds and measures stable throughput.
- burst: starts burst requests at the same time and measures immediate burst capacity.
- token-bucket: bursts first, then probes refill behavior based on the target RPM.
- sliding-window: bursts first, then probes recovery across a rolling 60 second window.
- window-boundary: sends batches before and after a minute boundary to check fixed-window reset behavior.
- diagnose: combines several probes and writes a best-effort limiter inference into the report.

Real LLM services often combine multiple limiters, such as RPM, TPM, maximum concurrency, queue length, account-level limits, model-level limits, and region-level limits. Treat mode inference as a useful signal rather than proof of a single backend algorithm.
Benchmark and RPM commands print a terminal summary with success counts, failures, latency percentiles, errors, and the report path. JSON reports are written under reports/*.json; the reports directory is ignored by git.
Benchmark reports include params.request, a non-sensitive summary of the protocol-specific request body parameters that are actually sent upstream, excluding prompts and tokens. They also include wrong_cases, with each wrong case containing the case id, question, expected answer, extracted actual answer, and raw model output. RPM reports include request counts, mode, target RPM, observed RPM, latency, error counts, and mode-specific details such as burst summaries, probe summaries, window-boundary summaries, and optional limiter inference.
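A sketch for skimming a benchmark report with jq; params.request and wrong_cases are the documented keys, while the per-case field names are assumptions based on the description above:
# Adjust the wrong_cases field names to the actual report schema.
jq '.params.request' reports/<your-report>.json
jq '.wrong_cases[] | {id, expected, actual}' reports/<your-report>.json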
When an upstream request returns a non-success HTTP status such as 400, 429, or 504, check, bench, and rpm automatically write a request/response debug JSON file under outputs/debug/. The debug file includes the full request URL, redacted request headers, full request body including the prompt, response status, response headers, and full response body. If the request fails before an HTTP response is available, for example a connect timeout, read failure, or streaming interruption counted as request_error, the same directory gets a *-request-error debug JSON with response.status: null, response.error_kind: "request_error", and the local error message. API tokens are redacted, but prompts and model outputs are preserved for troubleshooting.
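A sketch for pulling the response status and error kind out of a debug file, using the key paths described above (the file name is whatever check, bench, or rpm wrote under outputs/debug/):
jq '{status: .response.status, error_kind: .response.error_kind}' outputs/debug/<debug-file>.json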
Use --debug-raw with check, bench, or rpm when you also want to save successful upstream raw responses. Non-streaming success responses save the raw JSON body, and streaming success responses save the raw SSE lines. The directory is ignored by git and can help diagnose relay-side response rewriting.
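For example, the earlier OpenAI-compatible check with raw-response capture added; --debug-raw is the only change:
cargo run -- check \
  --config config.yaml \
  --provider openai \
  --model gpt-4o-mini \
  --debug-raw \
  --prompt "Reply with the word ready."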
Use these results as relay benchmark signals, not absolute proof by themselves. To compare against official scores or another run, align the same dataset and source, prompt text, temperature, max_tokens, sample limit, and scoring logic. Differences in any of those inputs can make the reported accuracy diverge from official numbers or other benchmark harnesses.