# lq_token_test

`lq_token_test` is a Rust CLI for checking LLM relay compatibility, running small benchmark probes, and testing request pacing against RPM targets. It supports OpenAI-compatible and Anthropic-compatible relay protocols, fetches benchmark datasets to local storage, prints terminal summaries, and writes JSON reports for later comparison.

## Build and Test

```bash
cargo fmt --check
cargo test
cargo clippy --all-targets -- -D warnings
cargo run -- --help
```

Build a release binary with:

```bash
cargo build --release
```

## Configuration

Copy `config.example.yaml` to `config.yaml`, then set relay URLs, model names, and token environment variables for your environment.

```yaml
default_provider: openai

providers:
  openai:
    protocol: openai
    base_url: "https://relay.example.com/v1"
    api_token: "${OPENAI_RELAY_TOKEN}"
    default_model: "gpt-4o-mini"

  anthropic:
    protocol: anthropic
    base_url: "https://relay.example.com"
    api_token: "${ANTHROPIC_RELAY_TOKEN}"
    default_model: "claude-3-5-sonnet-latest"

benchmarks:
  data_dir: "data/benchmarks"
  aime2026:
    source: "huggingface:MathArena/aime_2026"
    split: "train"
  gpqa_diamond:
    source: "huggingface:Idavidrein/gpqa"
    split: "gpqa_diamond"
```

Values written as `${ENV_NAME}` are expanded when the config is loaded. For example:

```bash
export OPENAI_RELAY_TOKEN="..."
export ANTHROPIC_RELAY_TOKEN="..."
```

## Dataset Fetching

Fetch AIME 2026 into the configured benchmark data directory:

```bash
cargo run -- dataset fetch aime2026
```

The current AIME fetch path reads Hugging Face rows and normalizes them to local JSONL at `data/benchmarks/aime2026/aime2026.jsonl`, with `id`, `problem`, and `answer` fields. Dataset metadata is written next to the downloaded file.
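
As an illustration of that shape, a single normalized line might look like the following; the `id` format and the values shown are made up for illustration, not real AIME 2026 data:

```json
{"id": "aime2026-i-03", "problem": "Find the number of ordered pairs (a, b) such that ...", "answer": "204"}
```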

Fetch GPQA-Diamond:

```bash
cargo run -- dataset fetch gpqa-diamond
```

GPQA-Diamond is downloaded as `data/benchmarks/gpqa_diamond/gpqa_diamond.csv`. If Hugging Face requires authentication, set `HF_TOKEN` before fetching:

```bash
export HF_TOKEN="..."
```

Dataset files under `data/benchmarks` are ignored by git.

## Relay Checks

Run a simple OpenAI-compatible relay check:

```bash
cargo run -- check \
  --config config.yaml \
  --provider openai \
  --model gpt-4o-mini \
  --prompt "Reply with the word ready."
```

Run a simple Anthropic-compatible relay check:

```bash
cargo run -- check \
  --config config.yaml \
  --provider anthropic \
  --model claude-3-5-sonnet-latest \
  --prompt "Reply with the word ready."
```

The check command prints the HTTP status, the elapsed time in milliseconds, and the model's response text.

## Benchmarks

Run an AIME 2026 benchmark:

```bash
cargo run -- bench aime2026 \
  --config config.yaml \
  --provider openai \
  --model gpt-4o-mini \
  --concurrency 4 \
  --limit 10
```

Run a GPQA-Diamond benchmark:

```bash
cargo run -- bench gpqa-diamond \
  --config config.yaml \
  --provider openai \
  --model gpt-4o-mini \
  --concurrency 4 \
  --limit 10
```

Omit `--limit` to run all locally available cases.
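
For example, a GPQA-Diamond run over every locally fetched case uses the same flags as above, just without `--limit`:

```bash
cargo run -- bench gpqa-diamond \
  --config config.yaml \
  --provider openai \
  --model gpt-4o-mini \
  --concurrency 4
```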

## RPM Testing

Run an RPM test:

```bash
cargo run -- rpm \
  --config config.yaml \
  --provider openai \
  --model gpt-4o-mini \
  --rpm 60 \
  --duration 60s \
  --prompt "Reply with pong."
```

Durations use `s` or `m`, such as `30s` or `5m`.

## Reports

Benchmark and RPM commands print a terminal summary with success counts, failures, latency percentiles, errors, and the report path. JSON reports are written under `reports/*.json`; the `reports` directory is ignored by git.

Benchmark reports include `wrong_cases`, with each wrong case containing the case id, question, expected answer, extracted actual answer, and raw model output. RPM reports include request counts, target RPM, latency, and error counts.
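
As a rough sketch of the benchmark report shape, a `wrong_cases` entry might look like the following; the inner key names (`id`, `question`, `expected`, `actual`, `raw_output`) and all values are illustrative guesses based on the field list above, not confirmed names:

```json
{
  "wrong_cases": [
    {
      "id": "aime2026-i-03",
      "question": "Find the number of ordered pairs (a, b) such that ...",
      "expected": "204",
      "actual": "210",
      "raw_output": "Working through the recurrence ... so the answer is 210."
    }
  ]
}
```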

## Comparing Scores

Treat these results as relay benchmark signals, not as absolute proof on their own. To compare against official scores or another run, keep the dataset and source, prompt text, temperature, `max_tokens`, sample limit, and scoring logic identical. Differences in any of those inputs can make the reported accuracy diverge from official numbers or other benchmark harnesses.
