lq_token_test

lq_token_test is a Rust CLI for checking LLM relay compatibility, running small benchmark probes, and testing request pacing against RPM targets. It supports OpenAI-compatible and Anthropic-compatible relay protocols, fetches benchmark datasets to local storage, prints terminal summaries, and writes JSON reports for later comparison.

Build and Test

Run the formatting check, tests, lints, and a CLI smoke test:

cargo fmt --check
cargo test
cargo clippy --all-targets -- -D warnings
cargo run -- --help

Build a release binary with:

cargo build --release

Configuration

Copy config.example.yaml to config.yaml, then set relay URLs, model names, and token environment variables for your environment:
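
cp config.example.yaml config.yaml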

default_provider: openai

providers:
  openai:
    protocol: openai
    base_url: "https://relay.example.com/v1"
    api_token: "${OPENAI_RELAY_TOKEN}"
    default_model: "gpt-4o-mini"

  anthropic:
    protocol: anthropic
    base_url: "https://relay.example.com"
    api_token: "${ANTHROPIC_RELAY_TOKEN}"
    default_model: "claude-3-5-sonnet-latest"

benchmarks:
  data_dir: "data/benchmarks"
  aime2026:
    source: "huggingface:MathArena/aime_2026"
    split: "train"
  gpqa_diamond:
    source: "huggingface:Idavidrein/gpqa"
    split: "gpqa_diamond"

Values written as ${ENV_NAME} are expanded when the config is loaded. For example:

export OPENAI_RELAY_TOKEN="..."
export ANTHROPIC_RELAY_TOKEN="..."

Dataset Fetching

Fetch AIME 2026 into the configured benchmark data directory:

cargo run -- dataset fetch aime2026

The AIME fetcher reads rows from Hugging Face and normalizes them into a local JSONL file at data/benchmarks/aime2026/aime2026.jsonl with id, problem, and answer fields. Dataset metadata is written next to the downloaded file.
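
Each JSONL line holds one case. The values below are illustrative, not real dataset rows; only the field names follow the schema above:

{"id": "aime2026-1", "problem": "Compute the remainder when ... is divided by 1000.", "answer": "123"}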

Fetch GPQA-Diamond:

cargo run -- dataset fetch gpqa-diamond

GPQA-Diamond is downloaded to data/benchmarks/gpqa_diamond/gpqa_diamond.csv. If the dataset requires Hugging Face authentication, set HF_TOKEN before fetching:

export HF_TOKEN="..."

Dataset files under data/benchmarks are ignored by git.

Relay Checks

Run a simple OpenAI-compatible relay check:

cargo run -- check \
  --config config.yaml \
  --provider openai \
  --model gpt-4o-mini \
  --prompt "Reply with the word ready."

Run a simple Anthropic-compatible relay check:

cargo run -- check \
  --config config.yaml \
  --provider anthropic \
  --model claude-3-5-sonnet-latest \
  --prompt "Reply with the word ready."

The check command prints the HTTP status, the elapsed time in milliseconds, and the model's response text.
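
The exact layout is not pinned down here; a successful run might print something along these lines (illustrative only):

status: 200
elapsed_ms: 412
text: ready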

Benchmarks

Run an AIME 2026 benchmark:

cargo run -- bench aime2026 \
  --config config.yaml \
  --provider openai \
  --model gpt-4o-mini \
  --concurrency 4 \
  --limit 10

Run a GPQA-Diamond benchmark:

cargo run -- bench gpqa-diamond \
  --config config.yaml \
  --provider openai \
  --model gpt-4o-mini \
  --concurrency 4 \
  --limit 10

Omit --limit to run all locally available cases.

RPM Testing

Run an RPM test:

cargo run -- rpm \
  --config config.yaml \
  --provider openai \
  --model gpt-4o-mini \
  --rpm 60 \
  --duration 60s \
  --prompt "Reply with pong."

Durations take an s (seconds) or m (minutes) suffix, such as 30s or 5m.

Reports

Benchmark and RPM commands print a terminal summary with success counts, failures, latency percentiles, errors, and the report path. JSON reports are written under reports/*.json; the reports directory is ignored by git.

Benchmark reports include wrong_cases, with each wrong case containing the case id, question, expected answer, extracted actual answer, and raw model output. RPM reports include request counts, target RPM, latency, and error counts.
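
As a sketch of the benchmark report shape: only the wrong_cases field is confirmed above, and the inner key names below are illustrative stand-ins for the documented fields:

{
  "wrong_cases": [
    {
      "id": "aime2026-1",
      "question": "Compute the remainder when ... is divided by 1000.",
      "expected": "123",
      "actual": "120",
      "raw_output": "The answer is 120."
    }
  ]
}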

Comparing Scores

Treat these results as signals about relay behavior, not as absolute proof by themselves. To compare against official scores or another run, hold the dataset and its source, the prompt text, temperature, max_tokens, the sample limit, and the scoring logic constant. A difference in any of those inputs can make the reported accuracy diverge from official numbers or from other benchmark harnesses.
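
One way to eyeball two runs side by side, assuming jq is installed and that the report exposes a top-level accuracy field (the field name and report file names here are illustrative):

jq '.accuracy' reports/run_a.json reports/run_b.json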