
feat: report effective benchmark request params

codex/lq-token-test-init
orangels committed 6 days ago
parent
commit 739524628c
5 changed files with 202 additions and 4 deletions
1. README.md (+1, -1)
2. docs/USAGE.zh-CN.md (+1, -0)
3. docs/testing-guide.md (+1, -0)
4. src/cli.rs (+184, -3)
5. src/report.rs (+15, -0)

README.md (+1, -1)

@@ -305,7 +305,7 @@ Real LLM services often combine multiple limiters, such as RPM, TPM, maximum con
 
 Benchmark and RPM commands print a terminal summary with success counts, failures, latency percentiles, errors, and the report path. JSON reports are written under `reports/*.json`; the `reports` directory is ignored by git.
 
-Benchmark reports include `wrong_cases`, with each wrong case containing the case id, question, expected answer, extracted actual answer, and raw model output. RPM reports include request counts, mode, target RPM, observed RPM, latency, error counts, and mode-specific details such as burst summaries, probe summaries, window-boundary summaries, and optional limiter inference.
+Benchmark reports include `params.request`, a non-sensitive summary of the protocol-specific request body parameters that are actually sent upstream, excluding prompts and tokens. They also include `wrong_cases`, with each wrong case containing the case id, question, expected answer, extracted actual answer, and raw model output. RPM reports include request counts, mode, target RPM, observed RPM, latency, error counts, and mode-specific details such as burst summaries, probe summaries, window-boundary summaries, and optional limiter inference.
 
 When an upstream request returns a non-success HTTP status such as 400, 429, or 504, `check`, `bench`, and `rpm` automatically write a request/response debug JSON file under `outputs/debug/`. The debug file includes the full request URL, redacted request headers, full request body including the prompt, response status, response headers, and full response body. If the request fails before an HTTP response is available, for example a connect timeout, read failure, or streaming interruption counted as `request_error`, the same directory gets a `*-request-error` debug JSON with `response.status: null`, `response.error_kind: "request_error"`, and the local error message. API tokens are redacted, but prompts and model outputs are preserved for troubleshooting.
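Note (not part of the diff): given the `SentRequestParamsReport` struct this commit adds in `src/report.rs` below, the new `params.request` field serializes roughly as follows. A minimal sketch with placeholder values, redefining the struct locally so it runs standalone:

```rust
// Minimal sketch of how `params.request` serializes, mirroring the
// `SentRequestParamsReport` struct added in src/report.rs by this commit.
use serde::Serialize;
use serde_json::{Value, json};

#[derive(Debug, Clone, Serialize)]
struct SentRequestParamsReport {
    protocol: String,
    body: Value,
}

fn main() {
    let request = SentRequestParamsReport {
        protocol: "openai".to_string(),
        body: json!({ "temperature": 0.0, "max_tokens": 1024 }),
    };
    // Yields a JSON object with `protocol` and `body` keys, e.g.
    // {"protocol":"openai","body":{"max_tokens":1024,"temperature":0.0}}
    println!("{}", serde_json::to_string(&request).unwrap());
}
```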


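Note (not part of the diff): the README paragraph above names only `response.status` and `response.error_kind` as concrete keys; the other key names in this sketch of a `*-request-error` debug file are hypothetical:

```rust
// Hypothetical shape of a `*-request-error` debug JSON under outputs/debug/.
// Only `response.status` and `response.error_kind` are named in the README;
// the remaining key names and values are illustrative assumptions.
use serde_json::json;

fn main() {
    let debug = json!({
        "request": {
            "url": "https://example.test/v1/chat/completions",   // hypothetical URL
            "headers": { "authorization": "[REDACTED]" },         // tokens are redacted
            "body": { "model": "example-model", "prompt": "..." } // prompt is preserved
        },
        "response": {
            "status": null,                 // no HTTP response was received
            "error_kind": "request_error",  // connect timeout, read failure, etc.
            "error": "connection timed out" // local error message (hypothetical key)
        }
    });
    println!("{}", serde_json::to_string_pretty(&debug).unwrap());
}
```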


docs/USAGE.zh-CN.md (+1, -0)

@@ -451,6 +451,7 @@ The benchmark report contains:
 - benchmark
 - provider
 - model
+- params.request: a summary of the protocol parameters actually sent upstream, excluding prompt and token
 - dataset
 - run parameters
 - accuracy
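Note (not part of the diff): assembling the fields listed above into one skeleton, a benchmark report might look roughly like this; all values are hypothetical placeholders and nested shapes are abbreviated:

```rust
// Sketch of the benchmark report shape listed above; values are hypothetical
// placeholders, and the `run` and `dataset` contents are abbreviated.
use serde_json::json;

fn main() {
    let report = json!({
        "benchmark": "aime",              // hypothetical
        "provider": "example-provider",   // hypothetical
        "model": "example-model",         // hypothetical
        "params": {
            "request": {                  // the field this commit adds
                "protocol": "openai",
                "body": { "temperature": 0.0, "max_tokens": 1024 }
            }
        },
        "dataset": { "source": "local" },
        "run": {},                        // run parameters, elided
        "accuracy": 0.93                  // hypothetical
    });
    println!("{}", serde_json::to_string_pretty(&report).unwrap());
}
```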


docs/testing-guide.md (+1, -0)

@@ -211,6 +211,7 @@ GPQA-Diamond prompts and grading follow the OpenAI `simple-evals` style: require
 
 Reports are written automatically to the `reports/` directory in JSON format and contain:
 - overall accuracy (accuracy)
+- a summary of the protocol parameters actually sent upstream (params.request, excluding prompt/token)
 - per-question right/wrong details (wrong_cases)
 - latency percentiles (latency_ms, ttft_ms)
 - error statistics (errors)
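Note (not part of the diff): the README describes each `wrong_cases` entry as carrying the case id, question, expected answer, extracted actual answer, and raw model output; the exact JSON key names in this sketch are hypothetical:

```rust
// Hypothetical sketch of a single wrong_cases entry; the fields follow the
// README's description, but these exact key names are assumptions.
use serde_json::json;

fn main() {
    let wrong_case = json!({
        "id": "aime-2024-i-07",               // hypothetical case id
        "question": "Find the number of ...", // hypothetical
        "expected": "110",                    // hypothetical
        "actual": "116",                      // extracted model answer
        "raw_output": "... so the answer is 116."
    });
    println!("{}", serde_json::to_string_pretty(&wrong_case).unwrap());
}
```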


src/cli.rs (+184, -3)

@@ -1,13 +1,13 @@
 use crate::benchmarks;
 use crate::benchmarks::judge;
-use crate::config::{AppConfig, ProviderThinkingConfig};
+use crate::config::{AppConfig, ProtocolKind, ProviderThinkingConfig};
 use crate::metrics::{LatencySummary, Metrics, MetricsSummary};
 use crate::report::{
     BenchmarkParamsReport, BenchmarkReport, BenchmarkSummaryReport, CorrectCaseReport,
     DatasetReport, LatencyReport, LimiterInferenceKind, LimiterInferenceReport, PhaseSummaryReport,
     ProbeSecondReport, RpmModeDetailReport, RpmParamsReport, RpmReport, RpmRunReport,
-    RpmSummaryReport, RunReport, ThinkingParamsReport, WindowBoundaryReport, WrongCaseReport,
-    write_benchmark_report, write_rpm_report,
+    RpmSummaryReport, RunReport, SentRequestParamsReport, ThinkingParamsReport,
+    WindowBoundaryReport, WrongCaseReport, write_benchmark_report, write_rpm_report,
 };
 use crate::rpm_modes::{
     ProbePhase, RpmMode, ScheduledProbe, burst_schedule, sliding_window_schedule,
@@ -20,6 +20,7 @@ use clap::{Parser, Subcommand};
 use futures::{StreamExt, stream};
 use indicatif::{ProgressBar, ProgressStyle};
 use regex::Regex;
+use serde_json::{Value, json};
 use std::collections::BTreeMap;
 use std::path::{Path, PathBuf};
 use std::time::{Duration, Instant};
@@ -492,6 +493,7 @@ async fn run_aime_benchmark(options: BenchmarkCommandOptions) -> Result<()> {
         model,
         stream: base_request.stream,
         thinking: thinking_report(base_request.thinking.as_ref()),
+        request: sent_request_params(protocol, &base_request),
         dataset,
         started_at,
         duration_ms: started.elapsed().as_millis(),
@@ -603,6 +605,7 @@ async fn run_gpqa_benchmark(options: BenchmarkCommandOptions) -> Result<()> {
         model,
         stream: base_request.stream,
         thinking: thinking_report(base_request.thinking.as_ref()),
+        request: sent_request_params(protocol, &base_request),
         dataset,
         started_at,
         duration_ms: started.elapsed().as_millis(),
@@ -1237,6 +1240,7 @@ struct BenchmarkReportInput {
     model: String,
     stream: bool,
     thinking: Option<ThinkingParamsReport>,
+    request: SentRequestParamsReport,
     dataset: DatasetReport,
     started_at: chrono::DateTime<Utc>,
     duration_ms: u128,
@@ -1256,6 +1260,7 @@ fn benchmark_report(input: BenchmarkReportInput) -> BenchmarkReport {
         params: BenchmarkParamsReport {
             stream: input.stream,
             thinking: input.thinking,
+            request: input.request,
         },
         dataset: input.dataset,
         run: RunReport {
@@ -1282,6 +1287,106 @@ fn benchmark_report(input: BenchmarkReportInput) -> BenchmarkReport {
     }
 }
 
+fn sent_request_params(protocol: ProtocolKind, request: &ModelRequest) -> SentRequestParamsReport {
+    let protocol_name = match protocol {
+        ProtocolKind::Openai => "openai",
+        ProtocolKind::Anthropic => "anthropic",
+        ProtocolKind::Google => "google",
+    };
+    SentRequestParamsReport {
+        protocol: protocol_name.to_string(),
+        body: sent_request_body_params(protocol, request),
+    }
+}
+
+fn sent_request_body_params(protocol: ProtocolKind, request: &ModelRequest) -> Value {
+    match protocol {
+        ProtocolKind::Openai => {
+            let mut body = serde_json::Map::new();
+            body.insert("temperature".to_string(), json!(request.temperature));
+            body.insert("max_tokens".to_string(), json!(request.max_tokens));
+            if request.stream {
+                body.insert("stream".to_string(), json!(true));
+            }
+            if let Some(thinking) = &request.thinking
+                && thinking.enabled
+            {
+                if let Some(reasoning_effort) = thinking
+                    .reasoning_effort
+                    .as_ref()
+                    .or(thinking.effort.as_ref())
+                {
+                    body.insert("reasoning_effort".to_string(), json!(reasoning_effort));
+                }
+                if let Some(reasoning_summary) = &thinking.reasoning_summary {
+                    body.insert("reasoning_summary".to_string(), json!(reasoning_summary));
+                }
+            }
+            Value::Object(body)
+        }
+        ProtocolKind::Anthropic => {
+            let mut body = serde_json::Map::new();
+            body.insert("max_tokens".to_string(), json!(request.max_tokens));
+            if request.stream {
+                body.insert("stream".to_string(), json!(true));
+            }
+            if let Some(thinking) = &request.thinking
+                && thinking.enabled
+            {
+                let mut thinking_body = serde_json::Map::new();
+                let thinking_type = thinking.kind.as_deref().unwrap_or("enabled");
+                thinking_body.insert("type".to_string(), json!(thinking_type));
+                if thinking_type == "adaptive" {
+                    if let Some(effort) = &thinking.effort {
+                        thinking_body.insert("effort".to_string(), json!(effort));
+                    }
+                } else if let Some(budget_tokens) = thinking.budget_tokens {
+                    thinking_body.insert("budget_tokens".to_string(), json!(budget_tokens));
+                }
+                if let Some(display) = &thinking.display {
+                    thinking_body.insert("display".to_string(), json!(display));
+                }
+                body.insert("thinking".to_string(), Value::Object(thinking_body));
+            } else {
+                body.insert("temperature".to_string(), json!(request.temperature));
+            }
+            Value::Object(body)
+        }
+        ProtocolKind::Google => {
+            let mut generation_config = serde_json::Map::new();
+            generation_config.insert("temperature".to_string(), json!(request.temperature));
+            generation_config.insert("maxOutputTokens".to_string(), json!(request.max_tokens));
+            if let Some(thinking) = &request.thinking
+                && thinking.enabled
+            {
+                let mut thinking_config = serde_json::Map::new();
+                if let Some(display) = &thinking.display {
+                    thinking_config.insert(
+                        "includeThoughts".to_string(),
+                        json!(display != "omitted" && display != "false"),
+                    );
+                }
+                if let Some(budget_tokens) = thinking.budget_tokens {
+                    thinking_config.insert("thinkingBudget".to_string(), json!(budget_tokens));
+                }
+                if let Some(effort) = &thinking.effort {
+                    thinking_config.insert("thinkingLevel".to_string(), json!(effort));
+                }
+                if !thinking_config.is_empty() {
+                    generation_config
+                        .insert("thinkingConfig".to_string(), Value::Object(thinking_config));
+                }
+            }
+            let mut body = serde_json::Map::new();
+            body.insert(
+                "generationConfig".to_string(),
+                Value::Object(generation_config),
+            );
+            Value::Object(body)
+        }
+    }
+}
+
 fn latency_report(summary: &LatencySummary) -> LatencyReport {
     LatencyReport {
         p50: summary.p50,
@@ -1712,6 +1817,82 @@ mod tests {
         assert_eq!(merged.budget_tokens, Some(20000));
     }
 
+    #[test]
+    fn sent_anthropic_adaptive_request_params_omit_temperature_and_nulls() {
+        let request = ModelRequest {
+            base_url: "https://example.test".to_string(),
+            api_token: "secret".to_string(),
+            model: "claude-test".to_string(),
+            prompt: "hello".to_string(),
+            temperature: 0.0,
+            max_tokens: 32_768,
+            stream: true,
+            raw_debug: None,
+            thinking: Some(ThinkingConfig {
+                enabled: true,
+                kind: Some("adaptive".to_string()),
+                budget_tokens: None,
+                effort: Some("high".to_string()),
+                display: Some("summarized".to_string()),
+                reasoning_effort: None,
+                reasoning_summary: None,
+            }),
+        };
+
+        let params = sent_request_params(ProtocolKind::Anthropic, &request);
+
+        assert_eq!(params.protocol, "anthropic");
+        assert_eq!(params.body["max_tokens"], 32_768);
+        assert_eq!(params.body["stream"], true);
+        assert_eq!(params.body["thinking"]["type"], "adaptive");
+        assert_eq!(params.body["thinking"]["effort"], "high");
+        assert_eq!(params.body["thinking"]["display"], "summarized");
+        assert!(params.body.get("temperature").is_none());
+        assert!(params.body["thinking"].get("budget_tokens").is_none());
+        assert!(params.body["thinking"].get("reasoning_effort").is_none());
+    }
+
+    #[test]
+    fn sent_google_request_params_use_generation_config_names() {
+        let request = ModelRequest {
+            base_url: "https://example.test".to_string(),
+            api_token: "secret".to_string(),
+            model: "gemini-test".to_string(),
+            prompt: "hello".to_string(),
+            temperature: 0.0,
+            max_tokens: 32_768,
+            stream: true,
+            raw_debug: None,
+            thinking: Some(ThinkingConfig {
+                enabled: true,
+                kind: None,
+                budget_tokens: Some(5000),
+                effort: Some("high".to_string()),
+                display: Some("summarized".to_string()),
+                reasoning_effort: None,
+                reasoning_summary: None,
+            }),
+        };
+
+        let params = sent_request_params(ProtocolKind::Google, &request);
+
+        assert_eq!(params.protocol, "google");
+        assert_eq!(params.body["generationConfig"]["temperature"], 0.0);
+        assert_eq!(params.body["generationConfig"]["maxOutputTokens"], 32_768);
+        assert_eq!(
+            params.body["generationConfig"]["thinkingConfig"]["thinkingBudget"],
+            5000
+        );
+        assert_eq!(
+            params.body["generationConfig"]["thinkingConfig"]["thinkingLevel"],
+            "high"
+        );
+        assert_eq!(
+            params.body["generationConfig"]["thinkingConfig"]["includeThoughts"],
+            true
+        );
+    }
+
     #[test]
     fn rpm_command_parses_window_boundary_offset() {
         let cli = Cli::try_parse_from([
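Note (not part of the diff): the two new tests above cover the Anthropic and Google branches of `sent_request_body_params`. A companion test for the OpenAI branch, written in the same style but not included in this commit, might look like this; the request values are hypothetical:

```rust
// Hypothetical companion test for the OpenAI branch, following the same
// pattern as the Anthropic and Google tests this commit adds.
#[test]
fn sent_openai_request_params_report_reasoning_fields() {
    let request = ModelRequest {
        base_url: "https://example.test".to_string(),
        api_token: "secret".to_string(),
        model: "gpt-test".to_string(),
        prompt: "hello".to_string(),
        temperature: 0.0,
        max_tokens: 32_768,
        stream: true,
        raw_debug: None,
        thinking: Some(ThinkingConfig {
            enabled: true,
            kind: None,
            budget_tokens: None,
            effort: Some("high".to_string()),
            display: None,
            reasoning_effort: None, // falls back to `effort`
            reasoning_summary: Some("auto".to_string()),
        }),
    };

    let params = sent_request_params(ProtocolKind::Openai, &request);

    assert_eq!(params.protocol, "openai");
    assert_eq!(params.body["temperature"], 0.0);
    assert_eq!(params.body["max_tokens"], 32_768);
    assert_eq!(params.body["stream"], true);
    assert_eq!(params.body["reasoning_effort"], "high"); // via `effort` fallback
    assert_eq!(params.body["reasoning_summary"], "auto");
}
```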


src/report.rs (+15, -0)

@@ -2,6 +2,7 @@ use crate::metrics::ErrorCount;
 use anyhow::{Context, Result};
 use chrono::{DateTime, Utc};
 use serde::Serialize;
+use serde_json::Value;
 use std::path::{Path, PathBuf};
 
 #[derive(Debug, Clone, Serialize)]
@@ -76,6 +77,13 @@ pub struct BenchmarkReport {
 pub struct BenchmarkParamsReport {
     pub stream: bool,
     pub thinking: Option<ThinkingParamsReport>,
+    pub request: SentRequestParamsReport,
+}
+
+#[derive(Debug, Clone, Serialize)]
+pub struct SentRequestParamsReport {
+    pub protocol: String,
+    pub body: Value,
 }
 
 #[derive(Debug, Clone, Serialize)]
@@ -279,6 +287,13 @@ mod tests {
             params: BenchmarkParamsReport {
                 stream: false,
                 thinking: None,
+                request: SentRequestParamsReport {
+                    protocol: "openai".to_string(),
+                    body: serde_json::json!({
+                        "temperature": 0.0,
+                        "max_tokens": 1024
+                    }),
+                },
             },
             dataset: DatasetReport {
                 source: "local".to_string(),
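Note (not part of the diff): because `body` is a raw `serde_json::Value`, the report mirrors each protocol's own key naming rather than normalizing it. A sketch of the three body shapes produced by `sent_request_body_params` for comparable settings, with hypothetical values:

```rust
// Sketch of the protocol-shaped bodies produced by sent_request_body_params;
// key names are taken from the diff, values are hypothetical.
use serde_json::json;

fn main() {
    // openai: flat snake_case keys
    let openai = json!({ "temperature": 0.0, "max_tokens": 1024, "stream": true });
    // anthropic: a `thinking` block replaces `temperature` when thinking is on
    let anthropic = json!({
        "max_tokens": 1024,
        "stream": true,
        "thinking": { "type": "enabled", "budget_tokens": 5000 }
    });
    // google: everything nested under camelCase `generationConfig`
    let google = json!({
        "generationConfig": {
            "temperature": 0.0,
            "maxOutputTokens": 1024,
            "thinkingConfig": { "thinkingBudget": 5000 }
        }
    });
    for body in [openai, anthropic, google] {
        println!("{body}");
    }
}
```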

