
Merge pull request 'codex/lq-token-test-init' (#2) from codex/lq-token-test-init into main

Reviewed-on: https://git.malls.iformall.com/server/new-api-check/pulls/2
Branch: main
liushen committed 3 days ago
commit ffb25b0126
9 files changed, 718 insertions(+), 104 deletions(-)
  1. README.md (+1, -1)
  2. docs/USAGE.zh-CN.md (+1, -0)
  3. docs/testing-guide.md (+1, -0)
  4. src/cli.rs (+385, -15)
  5. src/protocols/anthropic.rs (+124, -32)
  6. src/protocols/google.rs (+42, -20)
  7. src/protocols/openai.rs (+43, -26)
  8. src/report.rs (+68, -5)
  9. src/runner.rs (+53, -5)

README.md (+1, -1)

@@ -305,7 +305,7 @@ Real LLM services often combine multiple limiters, such as RPM, TPM, maximum con
 
 Benchmark and RPM commands print a terminal summary with success counts, failures, latency percentiles, errors, and the report path. JSON reports are written under `reports/*.json`; the `reports` directory is ignored by git.
 
-Benchmark reports include `wrong_cases`, with each wrong case containing the case id, question, expected answer, extracted actual answer, and raw model output. RPM reports include request counts, mode, target RPM, observed RPM, latency, error counts, and mode-specific details such as burst summaries, probe summaries, window-boundary summaries, and optional limiter inference.
+Benchmark reports include `params.request`, a non-sensitive summary of the protocol-specific request body parameters that are actually sent upstream, excluding prompts and tokens. They also include `wrong_cases`, with each wrong case containing the case id, question, expected answer, extracted actual answer, and raw model output. RPM reports include request counts, mode, target RPM, observed RPM, latency, error counts, and mode-specific details such as burst summaries, probe summaries, window-boundary summaries, and optional limiter inference.
 
 When an upstream request returns a non-success HTTP status such as 400, 429, or 504, `check`, `bench`, and `rpm` automatically write a request/response debug JSON file under `outputs/debug/`. The debug file includes the full request URL, redacted request headers, full request body including the prompt, response status, response headers, and full response body. If the request fails before an HTTP response is available, for example a connect timeout, read failure, or streaming interruption counted as `request_error`, the same directory gets a `*-request-error` debug JSON with `response.status: null`, `response.error_kind: "request_error"`, and the local error message. API tokens are redacted, but prompts and model outputs are preserved for troubleshooting.
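For orientation, `params.request` pairs a protocol name with the non-sensitive body parameters, matching the `SentRequestParamsReport { protocol, body }` shape this PR adds in src/report.rs. A minimal sketch of what the summary might look like for an OpenAI-protocol request (built with serde_json; the concrete values are illustrative, not taken from the patch):

    // Illustrative only: the kind of value `sent_request_params` records for an
    // OpenAI-style request with streaming on and reasoning_effort "high".
    // Prompts and API tokens are deliberately absent from this summary.
    let request_params = serde_json::json!({
        "protocol": "openai",
        "body": {
            "temperature": 0.0,
            "max_tokens": 64_000,
            "stream": true,
            "reasoning_effort": "high"
        }
    });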




docs/USAGE.zh-CN.md (+1, -0)

@@ -451,6 +451,7 @@ the benchmark report contains:
 - benchmark
 - provider
 - model
+- params.request: a summary of the protocol parameters actually sent upstream, excluding prompt and token
 - dataset
 - run parameters
 - accuracy


docs/testing-guide.md (+1, -0)

@@ -211,6 +211,7 @@ GPQA-Diamond prompts and grading follow the OpenAI `simple-evals` style: require
 
 Reports are written automatically to the `reports/` directory in JSON format and contain:
 - overall accuracy (accuracy)
+- a summary of the protocol parameters actually sent upstream (params.request, excluding prompt/token)
 - per-question correctness details (wrong_cases)
 - latency percentiles (latency_ms, ttft_ms)
 - error statistics (errors)


src/cli.rs (+385, -15)

@@ -1,25 +1,28 @@
use crate::benchmarks; use crate::benchmarks;
use crate::benchmarks::judge; use crate::benchmarks::judge;
use crate::config::{AppConfig, ProviderThinkingConfig};
use crate::config::{AppConfig, ProtocolKind, ProviderThinkingConfig};
use crate::metrics::{LatencySummary, Metrics, MetricsSummary}; use crate::metrics::{LatencySummary, Metrics, MetricsSummary};
use crate::report::{ use crate::report::{
BenchmarkParamsReport, BenchmarkReport, BenchmarkSummaryReport, CorrectCaseReport, BenchmarkParamsReport, BenchmarkReport, BenchmarkSummaryReport, CorrectCaseReport,
DatasetReport, LatencyReport, LimiterInferenceKind, LimiterInferenceReport, PhaseSummaryReport, DatasetReport, LatencyReport, LimiterInferenceKind, LimiterInferenceReport, PhaseSummaryReport,
ProbeSecondReport, RpmModeDetailReport, RpmParamsReport, RpmReport, RpmRunReport,
RpmSummaryReport, RunReport, ThinkingParamsReport, WindowBoundaryReport, WrongCaseReport,
write_benchmark_report, write_rpm_report,
ProbeSecondReport, RequestDebugReport, RpmModeDetailReport, RpmParamsReport, RpmReport,
RpmRunReport, RpmSummaryReport, RunReport, SentRequestParamsReport, ThinkingParamsReport,
WindowBoundaryReport, WrongCaseReport, write_benchmark_report, write_rpm_report,
}; };
use crate::rpm_modes::{ use crate::rpm_modes::{
ProbePhase, RpmMode, ScheduledProbe, burst_schedule, sliding_window_schedule, ProbePhase, RpmMode, ScheduledProbe, burst_schedule, sliding_window_schedule,
sustained_schedule, token_bucket_schedule, window_boundary_plan, sustained_schedule, token_bucket_schedule, window_boundary_plan,
}; };
use crate::runner::{ModelRequest, RawDebugConfig, ThinkingConfig, run_model_request};
use crate::runner::{
ModelRequest, ModelResponse, RawDebugConfig, ThinkingConfig, run_model_request,
};
use anyhow::{Context, Result, bail}; use anyhow::{Context, Result, bail};
use chrono::Utc; use chrono::Utc;
use clap::{Parser, Subcommand}; use clap::{Parser, Subcommand};
use futures::{StreamExt, stream}; use futures::{StreamExt, stream};
use indicatif::{ProgressBar, ProgressStyle}; use indicatif::{ProgressBar, ProgressStyle};
use regex::Regex; use regex::Regex;
use serde_json::{Value, json};
use std::collections::BTreeMap; use std::collections::BTreeMap;
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
use std::time::{Duration, Instant}; use std::time::{Duration, Instant};
@@ -138,7 +141,7 @@ pub enum BenchCommand {
         limit: Option<usize>,
         #[arg(long, num_args = 0..=1, default_missing_value = "true")]
         stream: Option<bool>,
-        #[arg(long, default_value_t = 32_768)]
+        #[arg(long, default_value_t = 64_000)]
         max_tokens: u32,
         #[arg(long)]
         debug_raw: bool,
@@ -170,7 +173,7 @@ pub enum BenchCommand {
         limit: Option<usize>,
         #[arg(long, num_args = 0..=1, default_missing_value = "true")]
         stream: Option<bool>,
-        #[arg(long, default_value_t = 32_768)]
+        #[arg(long, default_value_t = 64_000)]
         max_tokens: u32,
         #[arg(long)]
         debug_raw: bool,
@@ -448,10 +451,21 @@ async fn run_aime_benchmark(options: BenchmarkCommandOptions) -> Result<()> {
     let mut metrics = Metrics::new();
     let mut wrong_cases = Vec::new();
     let mut correct_samples = Vec::new();
+    let mut debug_requests = Vec::new();
     while let Some((case, result)) = results.next().await {
         pb.inc(1);
         match result {
             Ok(response) => {
+                let prompt = case.prompt();
+                if let Some(debug) =
+                    response_debug_report(Some(case.id.clone()), None, None, prompt, &response)
+                {
+                    debug_requests.push(debug);
+                }
+                if response_has_no_content(&response) {
+                    metrics.record_failure("request_error");
+                    continue;
+                }
                 metrics.record_success(
                     response.status,
                     response.elapsed_ms as u64,
@@ -492,6 +506,7 @@ async fn run_aime_benchmark(options: BenchmarkCommandOptions) -> Result<()> {
         model,
         stream: base_request.stream,
         thinking: thinking_report(base_request.thinking.as_ref()),
+        request: sent_request_params(protocol, &base_request),
         dataset,
         started_at,
         duration_ms: started.elapsed().as_millis(),
@@ -501,6 +516,7 @@ async fn run_aime_benchmark(options: BenchmarkCommandOptions) -> Result<()> {
         summary,
         correct_samples,
         wrong_cases,
+        debug_requests,
     });
     let report_path = write_benchmark_report(Path::new("."), &report)?;
     print_benchmark_report(&report, &report_path);
@@ -557,10 +573,21 @@ async fn run_gpqa_benchmark(options: BenchmarkCommandOptions) -> Result<()> {
     let mut metrics = Metrics::new();
     let mut wrong_cases = Vec::new();
     let mut correct_samples = Vec::new();
+    let mut debug_requests = Vec::new();
     while let Some((case, result)) = results.next().await {
         pb.inc(1);
         match result {
             Ok(response) => {
+                let prompt = case.prompt();
+                if let Some(debug) =
+                    response_debug_report(Some(case.id.clone()), None, None, prompt, &response)
+                {
+                    debug_requests.push(debug);
+                }
+                if response_has_no_content(&response) {
+                    metrics.record_failure("request_error");
+                    continue;
+                }
                 metrics.record_success(
                     response.status,
                     response.elapsed_ms as u64,
@@ -603,6 +630,7 @@ async fn run_gpqa_benchmark(options: BenchmarkCommandOptions) -> Result<()> {
         model,
         stream: base_request.stream,
         thinking: thinking_report(base_request.thinking.as_ref()),
+        request: sent_request_params(protocol, &base_request),
         dataset,
         started_at,
         duration_ms: started.elapsed().as_millis(),
@@ -612,6 +640,7 @@ async fn run_gpqa_benchmark(options: BenchmarkCommandOptions) -> Result<()> {
         summary,
         correct_samples,
         wrong_cases,
+        debug_requests,
     });
     let report_path = write_benchmark_report(Path::new("."), &report)?;
     print_benchmark_report(&report, &report_path);
@@ -680,6 +709,8 @@ async fn run_rpm(config_path: PathBuf, options: RpmCommandOptions) -> Result<()>
     let started = Instant::now();
     let mut metrics = Metrics::new();
     let mut mode_summary = RpmModeSummaryBuilder::default();
+    let mut debug_requests = Vec::new();
+    let prompt_for_debug = request.prompt.clone();
 
     let results = run_scheduled_requests(
         provider_config.protocol,
@@ -690,14 +721,32 @@ async fn run_rpm(config_path: PathBuf, options: RpmCommandOptions) -> Result<()>
         .await;
 
     for result in results {
-        let success = result.result.is_ok();
+        let success = result
+            .result
+            .as_ref()
+            .is_ok_and(|response| !response_has_no_content(response));
         mode_summary.record(result.phase, result.second, success);
         match result.result {
-            Ok(response) => metrics.record_success(
-                response.status,
-                response.elapsed_ms as u64,
-                response.first_token_ms.map(|ms| ms as u64),
-            ),
+            Ok(response) => {
+                if let Some(debug) = response_debug_report(
+                    None,
+                    Some(phase_name(result.phase).to_string()),
+                    result.second,
+                    prompt_for_debug.clone(),
+                    &response,
+                ) {
+                    debug_requests.push(debug);
+                }
+                if response_has_no_content(&response) {
+                    metrics.record_failure("request_error");
+                    continue;
+                }
+                metrics.record_success(
+                    response.status,
+                    response.elapsed_ms as u64,
+                    response.first_token_ms.map(|ms| ms as u64),
+                );
+            }
             Err(error) => metrics.record_failure(error_code(&error)),
         }
     }
@@ -735,6 +784,7 @@ async fn run_rpm(config_path: PathBuf, options: RpmCommandOptions) -> Result<()>
         mode: mode_plan.mode_name.to_string(),
         mode_detail: mode_summary.into_report(options.mode),
         errors: summary.errors,
+        debug_requests,
     };
     let report_path = write_rpm_report(Path::new("."), &report)?;
     print_rpm_report(&report, &report_path);
@@ -1237,6 +1287,7 @@ struct BenchmarkReportInput {
     model: String,
     stream: bool,
     thinking: Option<ThinkingParamsReport>,
+    request: SentRequestParamsReport,
     dataset: DatasetReport,
     started_at: chrono::DateTime<Utc>,
     duration_ms: u128,
@@ -1246,6 +1297,7 @@ struct BenchmarkReportInput {
     summary: MetricsSummary,
     correct_samples: Vec<CorrectCaseReport>,
     wrong_cases: Vec<WrongCaseReport>,
+    debug_requests: Vec<RequestDebugReport>,
 }
 
 fn benchmark_report(input: BenchmarkReportInput) -> BenchmarkReport {
@@ -1256,6 +1308,7 @@ fn benchmark_report(input: BenchmarkReportInput) -> BenchmarkReport {
     params: BenchmarkParamsReport {
         stream: input.stream,
         thinking: input.thinking,
+        request: input.request,
     },
     dataset: input.dataset,
     run: RunReport {
@@ -1279,6 +1332,173 @@ fn benchmark_report(input: BenchmarkReportInput) -> BenchmarkReport {
         errors: input.summary.errors,
         correct_samples: input.correct_samples,
         wrong_cases: input.wrong_cases,
+        debug_requests: input.debug_requests,
+    }
+}
+
+const NEAR_TTFT_LATENCY_THRESHOLD_MS: u128 = 1_000;
+
+fn response_debug_report(
+    id: Option<String>,
+    phase: Option<String>,
+    second: Option<u64>,
+    prompt: String,
+    response: &ModelResponse,
+) -> Option<RequestDebugReport> {
+    let ttft_latency_delta_ms = response
+        .first_token_ms
+        .map(|ttft_ms| response.elapsed_ms.abs_diff(ttft_ms));
+    let near_ttft_latency =
+        ttft_latency_delta_ms.is_some_and(|delta_ms| delta_ms < NEAR_TTFT_LATENCY_THRESHOLD_MS);
+    let has_think_tags = contains_think_tags(&response.text);
+    let no_content = response_has_no_content(response);
+
+    let mut reasons = Vec::new();
+    if no_content {
+        reasons.push("no_content");
+    }
+    if near_ttft_latency {
+        reasons.push("near_ttft_latency");
+    }
+    if has_think_tags {
+        reasons.push("contains_think_tags");
+    }
+    if reasons.is_empty() {
+        return None;
+    }
+
+    Some(RequestDebugReport {
+        reason: reasons.join(","),
+        id,
+        phase,
+        second,
+        prompt,
+        output: response.text.clone(),
+        latency_ms: response.elapsed_ms,
+        ttft_ms: response.first_token_ms,
+        ttft_latency_delta_ms,
+        chunk_count: response.chunk_elapsed_ms.len(),
+        chunk_elapsed_ms: response.chunk_elapsed_ms.clone(),
+        has_think_tags,
+    })
+}
+
+fn response_has_no_content(response: &ModelResponse) -> bool {
+    response.text.trim().is_empty()
+}
+
+fn contains_think_tags(text: &str) -> bool {
+    let lower = text.to_ascii_lowercase();
+    lower.contains("<think>") && lower.contains("</think>")
+}
+
+fn phase_name(phase: ProbePhase) -> &'static str {
+    match phase {
+        ProbePhase::Burst => "burst",
+        ProbePhase::RefillProbe => "refill_probe",
+        ProbePhase::SlidingProbe => "sliding_probe",
+        ProbePhase::BeforeBoundary => "before_boundary",
+        ProbePhase::AfterBoundary => "after_boundary",
+    }
+}
+
+fn sent_request_params(protocol: ProtocolKind, request: &ModelRequest) -> SentRequestParamsReport {
+    let protocol_name = match protocol {
+        ProtocolKind::Openai => "openai",
+        ProtocolKind::Anthropic => "anthropic",
+        ProtocolKind::Google => "google",
+    };
+    SentRequestParamsReport {
+        protocol: protocol_name.to_string(),
+        body: sent_request_body_params(protocol, request),
+    }
+}
+
+fn sent_request_body_params(protocol: ProtocolKind, request: &ModelRequest) -> Value {
+    match protocol {
+        ProtocolKind::Openai => {
+            let mut body = serde_json::Map::new();
+            body.insert("temperature".to_string(), json!(request.temperature));
+            body.insert("max_tokens".to_string(), json!(request.max_tokens));
+            if request.stream {
+                body.insert("stream".to_string(), json!(true));
+            }
+            if let Some(thinking) = &request.thinking
+                && thinking.enabled
+            {
+                if let Some(reasoning_effort) = thinking
+                    .reasoning_effort
+                    .as_ref()
+                    .or(thinking.effort.as_ref())
+                {
+                    body.insert("reasoning_effort".to_string(), json!(reasoning_effort));
+                }
+                if let Some(reasoning_summary) = &thinking.reasoning_summary {
+                    body.insert("reasoning_summary".to_string(), json!(reasoning_summary));
+                }
+            }
+            Value::Object(body)
+        }
+        ProtocolKind::Anthropic => {
+            let mut body = serde_json::Map::new();
+            body.insert("max_tokens".to_string(), json!(request.max_tokens));
+            if request.stream {
+                body.insert("stream".to_string(), json!(true));
+            }
+            if let Some(thinking) = &request.thinking
+                && thinking.enabled
+            {
+                let mut thinking_body = serde_json::Map::new();
+                let thinking_type = thinking.kind.as_deref().unwrap_or("enabled");
+                thinking_body.insert("type".to_string(), json!(thinking_type));
+                if thinking_type == "adaptive" {
+                    if let Some(effort) = &thinking.effort {
+                        thinking_body.insert("effort".to_string(), json!(effort));
+                    }
+                } else if let Some(budget_tokens) = thinking.budget_tokens {
+                    thinking_body.insert("budget_tokens".to_string(), json!(budget_tokens));
+                }
+                if let Some(display) = &thinking.display {
+                    thinking_body.insert("display".to_string(), json!(display));
+                }
+                body.insert("thinking".to_string(), Value::Object(thinking_body));
+            } else {
+                body.insert("temperature".to_string(), json!(request.temperature));
+            }
+            Value::Object(body)
+        }
+        ProtocolKind::Google => {
+            let mut generation_config = serde_json::Map::new();
+            generation_config.insert("temperature".to_string(), json!(request.temperature));
+            generation_config.insert("maxOutputTokens".to_string(), json!(request.max_tokens));
+            if let Some(thinking) = &request.thinking
+                && thinking.enabled
+            {
+                let mut thinking_config = serde_json::Map::new();
+                if let Some(display) = &thinking.display {
+                    thinking_config.insert(
+                        "includeThoughts".to_string(),
+                        json!(display != "omitted" && display != "false"),
+                    );
+                }
+                if let Some(budget_tokens) = thinking.budget_tokens {
+                    thinking_config.insert("thinkingBudget".to_string(), json!(budget_tokens));
+                }
+                if let Some(effort) = &thinking.effort {
+                    thinking_config.insert("thinkingLevel".to_string(), json!(effort));
+                }
+                if !thinking_config.is_empty() {
+                    generation_config
+                        .insert("thinkingConfig".to_string(), Value::Object(thinking_config));
+                }
+            }
+            let mut body = serde_json::Map::new();
+            body.insert(
+                "generationConfig".to_string(),
+                Value::Object(generation_config),
+            );
+            Value::Object(body)
+        }
     }
 }
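A note on the Anthropic branch above: `temperature` is recorded only when thinking is disabled, mirroring the request builder, presumably because the Anthropic API constrains sampling parameters while extended thinking is enabled. That rationale is an inference from the code, not something the patch states.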


@@ -1576,7 +1796,7 @@ mod tests {
     }
 
     #[test]
-    fn bench_command_defaults_max_tokens_to_32768() {
+    fn bench_command_defaults_max_tokens_to_64000() {
         let cli = Cli::try_parse_from(["lq_token_test", "bench", "gpqa-diamond"])
             .expect("parse gpqa bench");
 
@@ -1587,7 +1807,7 @@ mod tests {
             panic!("expected gpqa-diamond bench command");
         };
 
-        assert_eq!(max_tokens, 32_768);
+        assert_eq!(max_tokens, 64_000);
     }
 
     #[test]
@@ -1712,6 +1932,156 @@ mod tests {
         assert_eq!(merged.budget_tokens, Some(20000));
     }
 
+    #[test]
+    fn sent_anthropic_adaptive_request_params_omit_temperature_and_nulls() {
+        let request = ModelRequest {
+            base_url: "https://example.test".to_string(),
+            api_token: "secret".to_string(),
+            model: "claude-test".to_string(),
+            prompt: "hello".to_string(),
+            temperature: 0.0,
+            max_tokens: 32_768,
+            stream: true,
+            raw_debug: None,
+            thinking: Some(ThinkingConfig {
+                enabled: true,
+                kind: Some("adaptive".to_string()),
+                budget_tokens: None,
+                effort: Some("high".to_string()),
+                display: Some("summarized".to_string()),
+                reasoning_effort: None,
+                reasoning_summary: None,
+            }),
+        };
+
+        let params = sent_request_params(ProtocolKind::Anthropic, &request);
+
+        assert_eq!(params.protocol, "anthropic");
+        assert_eq!(params.body["max_tokens"], 32_768);
+        assert_eq!(params.body["stream"], true);
+        assert_eq!(params.body["thinking"]["type"], "adaptive");
+        assert_eq!(params.body["thinking"]["effort"], "high");
+        assert_eq!(params.body["thinking"]["display"], "summarized");
+        assert!(params.body.get("temperature").is_none());
+        assert!(params.body["thinking"].get("budget_tokens").is_none());
+        assert!(params.body["thinking"].get("reasoning_effort").is_none());
+    }
+
+    #[test]
+    fn sent_google_request_params_use_generation_config_names() {
+        let request = ModelRequest {
+            base_url: "https://example.test".to_string(),
+            api_token: "secret".to_string(),
+            model: "gemini-test".to_string(),
+            prompt: "hello".to_string(),
+            temperature: 0.0,
+            max_tokens: 32_768,
+            stream: true,
+            raw_debug: None,
+            thinking: Some(ThinkingConfig {
+                enabled: true,
+                kind: None,
+                budget_tokens: Some(5000),
+                effort: Some("high".to_string()),
+                display: Some("summarized".to_string()),
+                reasoning_effort: None,
+                reasoning_summary: None,
+            }),
+        };
+
+        let params = sent_request_params(ProtocolKind::Google, &request);
+
+        assert_eq!(params.protocol, "google");
+        assert_eq!(params.body["generationConfig"]["temperature"], 0.0);
+        assert_eq!(params.body["generationConfig"]["maxOutputTokens"], 32_768);
+        assert_eq!(
+            params.body["generationConfig"]["thinkingConfig"]["thinkingBudget"],
+            5000
+        );
+        assert_eq!(
+            params.body["generationConfig"]["thinkingConfig"]["thinkingLevel"],
+            "high"
+        );
+        assert_eq!(
+            params.body["generationConfig"]["thinkingConfig"]["includeThoughts"],
+            true
+        );
+    }
+
+    #[test]
+    fn response_debug_report_records_near_ttft_latency_with_full_prompt_and_output() {
+        let response = ModelResponse {
+            text: "final answer".to_string(),
+            status: 200,
+            elapsed_ms: 1_500,
+            first_token_ms: Some(750),
+            chunk_elapsed_ms: vec![100, 750],
+        };
+
+        let debug = response_debug_report(
+            Some("case-1".to_string()),
+            None,
+            None,
+            "full prompt".to_string(),
+            &response,
+        )
+        .expect("near ttft/latency should be debugged");
+
+        assert_eq!(debug.reason, "near_ttft_latency");
+        assert_eq!(debug.id.as_deref(), Some("case-1"));
+        assert_eq!(debug.prompt, "full prompt");
+        assert_eq!(debug.output, "final answer");
+        assert_eq!(debug.latency_ms, 1_500);
+        assert_eq!(debug.ttft_ms, Some(750));
+        assert_eq!(debug.ttft_latency_delta_ms, Some(750));
+        assert_eq!(debug.chunk_count, 2);
+        assert_eq!(debug.chunk_elapsed_ms, vec![100, 750]);
+        assert!(!debug.has_think_tags);
+    }
+
+    #[test]
+    fn response_debug_report_marks_think_tags() {
+        let response = ModelResponse {
+            text: "<think>hidden</think>\nanswer".to_string(),
+            status: 200,
+            elapsed_ms: 5_000,
+            first_token_ms: Some(100),
+            chunk_elapsed_ms: vec![100],
+        };
+
+        let debug = response_debug_report(None, None, None, "prompt".to_string(), &response)
+            .expect("think tags should be debugged");
+
+        assert_eq!(debug.reason, "contains_think_tags");
+        assert!(debug.has_think_tags);
+        assert_eq!(debug.ttft_latency_delta_ms, Some(4_900));
+    }
+
+    #[test]
+    fn response_debug_report_records_empty_content_as_request_debug() {
+        let response = ModelResponse {
+            text: " ".to_string(),
+            status: 200,
+            elapsed_ms: 2_000,
+            first_token_ms: None,
+            chunk_elapsed_ms: Vec::new(),
+        };
+
+        let debug = response_debug_report(
+            None,
+            Some("burst".to_string()),
+            Some(0),
+            "prompt".to_string(),
+            &response,
+        )
+        .expect("empty content should be debugged");
+
+        assert_eq!(debug.reason, "no_content");
+        assert_eq!(debug.phase.as_deref(), Some("burst"));
+        assert_eq!(debug.second, Some(0));
+        assert!(response_has_no_content(&response));
+    }
+
     #[test]
     fn rpm_command_parses_window_boundary_offset() {
         let cli = Cli::try_parse_from([
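A note on the `near_ttft_latency` heuristic these tests exercise: a response is flagged when the gap between total latency and time-to-first-token is below `NEAR_TTFT_LATENCY_THRESHOLD_MS` (1,000 ms), i.e. nearly the whole request was spent waiting for the first token. A self-contained restatement of the check (the helper name `near_ttft` is ours, not the patch's):

    // Restates the threshold check from response_debug_report: flag responses
    // whose total latency is within 1s of their TTFT.
    fn near_ttft(elapsed_ms: u128, first_token_ms: Option<u128>) -> bool {
        first_token_ms
            .map(|ttft| elapsed_ms.abs_diff(ttft))
            .is_some_and(|delta| delta < 1_000)
    }

    fn main() {
        assert!(near_ttft(1_500, Some(750))); // delta 750 ms: flagged
        assert!(!near_ttft(5_000, Some(100))); // delta 4_900 ms: not flagged
        assert!(!near_ttft(2_000, None)); // no TTFT (non-streaming response): never flagged
    }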


src/protocols/anthropic.rs (+124, -32)

@@ -114,6 +114,7 @@ pub async fn send(client: &Client, request: &ModelRequest) -> Result<ModelRespon
         status: status_code,
         elapsed_ms: 0,
         first_token_ms: None,
+        chunk_elapsed_ms: Vec::new(),
     })
 }


@@ -187,22 +188,28 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
     let mut text = String::new();
     let mut raw_stream = String::new();
     let mut first_token_ms: Option<u128> = None;
+    let mut chunk_elapsed_ms = Vec::new();
     let mut current_event = String::new();
     let mut done = false;
 
     while let Some(chunk) = stream.next().await {
-        let chunk = chunk
-            .map_err(|error| {
-                write_request_error_debug_blocking(
+        let chunk = match chunk {
+            Ok(chunk) => chunk,
+            Err(error) => {
+                write_response_request_error_debug(
                     request,
                     &debug_url,
                     request_body.clone(),
+                    status_code,
+                    &response_headers,
+                    &raw_stream,
                     format!("Anthropic stream interrupted: {error}"),
-                    "anthropic-request-error",
-                );
-                error
-            })
-            .context("Anthropic stream interrupted")?;
+                )
+                .await?;
+                return Err(error).context("Anthropic stream interrupted");
+            }
+        };
+        chunk_elapsed_ms.push(started.elapsed().as_millis());
         for line in buffer.feed(&chunk) {
             raw_stream.push_str(&line);
             raw_stream.push('\n');
@@ -264,6 +271,7 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
         status: status_code,
         elapsed_ms: 0,
         first_token_ms,
+        chunk_elapsed_ms,
     })
 }


@@ -414,9 +422,32 @@ fn write_request_error_debug_blocking(
 mod tests {
     use crate::runner::{ModelRequest, RawDebugConfig, ThinkingConfig};
     use reqwest::Client;
+    use std::path::{Path, PathBuf};
     use wiremock::matchers::{body_json, header, method, path};
     use wiremock::{Mock, MockServer, ResponseTemplate};
 
+    fn read_single_debug_file(root: &Path) -> String {
+        let mut files = Vec::new();
+        collect_debug_files(root, &mut files);
+        assert_eq!(files.len(), 1);
+        std::fs::read_to_string(&files[0]).expect("read raw debug file")
+    }
+
+    fn collect_debug_files(dir: &Path, files: &mut Vec<PathBuf>) {
+        for entry in std::fs::read_dir(dir)
+            .expect("read debug dir")
+            .collect::<Result<Vec<_>, _>>()
+            .expect("debug entries")
+        {
+            let path = entry.path();
+            if path.is_dir() {
+                collect_debug_files(&path, files);
+            } else {
+                files.push(path);
+            }
+        }
+    }
+
     #[tokio::test]
     async fn extracts_first_text_block() {
         let server = MockServer::start().await;
@@ -647,12 +678,7 @@ mod tests {
         .await
         .expect_err("non-success should fail");
 
-        let debug_files = std::fs::read_dir(temp_dir.path())
-            .expect("read debug dir")
-            .collect::<Result<Vec<_>, _>>()
-            .expect("debug entries");
-        assert_eq!(debug_files.len(), 1);
-        let raw = std::fs::read_to_string(debug_files[0].path()).expect("read raw debug file");
+        let raw = read_single_debug_file(temp_dir.path());
         let debug: serde_json::Value = serde_json::from_str(&raw).expect("debug json");
 
         assert_eq!(debug["request"]["headers"]["x-api-key"], "[REDACTED]");
@@ -696,12 +722,7 @@ mod tests {
         .await
         .expect_err("connection should fail");
 
-        let debug_files = std::fs::read_dir(temp_dir.path())
-            .expect("read debug dir")
-            .collect::<Result<Vec<_>, _>>()
-            .expect("debug entries");
-        assert_eq!(debug_files.len(), 1);
-        let raw = std::fs::read_to_string(debug_files[0].path()).expect("read raw debug file");
+        let raw = read_single_debug_file(temp_dir.path());
         let debug: serde_json::Value = serde_json::from_str(&raw).expect("debug json");
 
         assert_eq!(debug["request"]["method"], "POST");
@@ -811,13 +832,9 @@ mod tests {
         assert_eq!(response.status, 200);
         assert_eq!(response.text, "hi there");
         assert!(response.first_token_ms.is_some());
+        assert_eq!(response.chunk_elapsed_ms.len(), 1);
 
-        let debug_files = std::fs::read_dir(temp_dir.path())
-            .expect("read debug dir")
-            .collect::<Result<Vec<_>, _>>()
-            .expect("debug entries");
-        assert_eq!(debug_files.len(), 1);
-        let raw = std::fs::read_to_string(debug_files[0].path()).expect("read raw debug file");
+        let raw = read_single_debug_file(temp_dir.path());
         assert!(raw.contains("event: content_block_delta"));
         assert!(raw.contains("\"text\":\"hi \""));
     }
@@ -868,12 +885,7 @@ mod tests {
             .contains("completed without producing any content")
         );
 
-        let debug_files = std::fs::read_dir(temp_dir.path())
-            .expect("read debug dir")
-            .collect::<Result<Vec<_>, _>>()
-            .expect("debug entries");
-        assert_eq!(debug_files.len(), 1);
-        let raw = std::fs::read_to_string(debug_files[0].path()).expect("read raw debug file");
+        let raw = read_single_debug_file(temp_dir.path());
         let debug: serde_json::Value = serde_json::from_str(&raw).expect("debug json");
 
         assert_eq!(
@@ -891,6 +903,86 @@ mod tests {
         assert!(!raw.contains("anthropic-secret-token"));
     }
 
+    #[tokio::test]
+    async fn interrupted_stream_debug_records_status_headers_and_partial_stream() {
+        use tokio::io::{AsyncReadExt, AsyncWriteExt};
+        use tokio::net::TcpListener;
+
+        let listener = TcpListener::bind("127.0.0.1:0")
+            .await
+            .expect("bind listener");
+        let addr = listener.local_addr().expect("listener addr");
+        let server = tokio::spawn(async move {
+            let (mut socket, _) = listener.accept().await.expect("accept request");
+            let mut buffer = [0_u8; 4096];
+            let _ = socket.read(&mut buffer).await.expect("read request");
+            let chunk = "event: ping\ndata: {\"type\":\"ping\"}\n\n";
+            let response = format!(
+                "HTTP/1.1 200 OK\r\n\
+                 content-type: text/event-stream\r\n\
+                 transfer-encoding: chunked\r\n\
+                 x-request-id: broken-stream\r\n\
+                 \r\n\
+                 {:x}\r\n{}\r\n\
+                 zz\r\nbroken\r\n",
+                chunk.len(),
+                chunk
+            );
+            socket
+                .write_all(response.as_bytes())
+                .await
+                .expect("write malformed response");
+        });
+
+        let temp_dir = tempfile::tempdir().expect("create temp dir");
+        let request = ModelRequest {
+            base_url: format!("http://{addr}"),
+            api_token: "anthropic-secret-token".to_string(),
+            model: "claude-test".to_string(),
+            prompt: "prompt before broken stream".to_string(),
+            temperature: 0.0,
+            max_tokens: 1024,
+            stream: true,
+            raw_debug: Some(
+                RawDebugConfig::new(
+                    temp_dir.path().to_path_buf(),
+                    "anthropic-claude-test".to_string(),
+                )
+                .with_success_raw(false),
+            ),
+            thinking: None,
+        };
+
+        let error = super::send_stream(&Client::new(), &request)
+            .await
+            .expect_err("malformed chunked stream should fail");
+        server.await.expect("server task");
+        assert!(error.to_string().contains("stream interrupted"));
+
+        let raw = read_single_debug_file(temp_dir.path());
+        let debug: serde_json::Value = serde_json::from_str(&raw).expect("debug json");
+
+        assert_eq!(debug["response"]["status"], 200);
+        assert_eq!(
+            debug["response"]["headers"]["x-request-id"],
+            "broken-stream"
+        );
+        assert_eq!(debug["response"]["error_kind"], "request_error");
+        assert!(
+            debug["response"]["body"]
+                .as_str()
+                .expect("partial stream")
+                .contains("event: ping")
+        );
+        assert!(
+            debug["response"]["error"]
+                .as_str()
+                .expect("stream error")
+                .contains("Anthropic stream interrupted")
+        );
+        assert!(!raw.contains("anthropic-secret-token"));
+    }
+
     fn anthropic_request(base_url: String) -> ModelRequest {
         ModelRequest {
             base_url,
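Worth noting about the interrupted-stream test above: the trailing `zz\r\n` is not a valid hexadecimal chunk-size line, so the HTTP client's chunked-transfer decoder fails mid-body after the first well-formed SSE event. That is what lets the test observe a `request_error` debug file that still carries the 200 status, the captured response headers, and the partial `event: ping` stream.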


src/protocols/google.rs (+42, -20)

@@ -110,6 +110,7 @@ pub async fn send(client: &Client, request: &ModelRequest) -> Result<ModelRespon
         status: status_code,
         elapsed_ms: 0,
         first_token_ms: None,
+        chunk_elapsed_ms: Vec::new(),
     })
 }


@@ -182,20 +183,26 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
     let mut text = String::new();
     let mut raw_stream = String::new();
     let mut first_token_ms: Option<u128> = None;
+    let mut chunk_elapsed_ms = Vec::new();
 
     while let Some(chunk) = stream.next().await {
-        let chunk = chunk
-            .map_err(|error| {
-                write_request_error_debug_blocking(
+        let chunk = match chunk {
+            Ok(chunk) => chunk,
+            Err(error) => {
+                write_response_request_error_debug(
                     request,
                     &debug_url,
                     request_body.clone(),
+                    status_code,
+                    &response_headers,
+                    &raw_stream,
                     format!("Google stream interrupted: {error}"),
-                    "google-request-error",
-                );
-                error
-            })
-            .context("Google stream interrupted")?;
+                )
+                .await?;
+                return Err(error).context("Google stream interrupted");
+            }
+        };
+        chunk_elapsed_ms.push(started.elapsed().as_millis());
         for line in buffer.feed(&chunk) {
             raw_stream.push_str(&line);
             raw_stream.push('\n');
@@ -244,6 +251,7 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
         status: status_code,
         elapsed_ms: 0,
         first_token_ms,
+        chunk_elapsed_ms,
     })
 }


@@ -411,9 +419,32 @@ struct GooglePart {
 mod tests {
     use crate::runner::{ModelRequest, RawDebugConfig, ThinkingConfig};
     use reqwest::Client;
+    use std::path::{Path, PathBuf};
     use wiremock::matchers::{body_json, header, method, path};
     use wiremock::{Mock, MockServer, ResponseTemplate};
 
+    fn read_single_debug_file(root: &Path) -> String {
+        let mut files = Vec::new();
+        collect_debug_files(root, &mut files);
+        assert_eq!(files.len(), 1);
+        std::fs::read_to_string(&files[0]).expect("read raw debug file")
+    }
+
+    fn collect_debug_files(dir: &Path, files: &mut Vec<PathBuf>) {
+        for entry in std::fs::read_dir(dir)
+            .expect("read debug dir")
+            .collect::<Result<Vec<_>, _>>()
+            .expect("debug entries")
+        {
+            let path = entry.path();
+            if path.is_dir() {
+                collect_debug_files(&path, files);
+            } else {
+                files.push(path);
+            }
+        }
+    }
+
     #[tokio::test]
     async fn sends_generate_content_with_thinking_config_and_extracts_text() {
         let server = MockServer::start().await;
@@ -517,6 +548,7 @@ mod tests {
         assert_eq!(response.status, 200);
         assert_eq!(response.text, "hi there");
         assert!(response.first_token_ms.is_some());
+        assert_eq!(response.chunk_elapsed_ms.len(), 1);
     }
 
     #[tokio::test]
@@ -556,12 +588,7 @@ mod tests {
         .await
         .expect_err("non-success should fail");
 
-        let debug_files = std::fs::read_dir(temp_dir.path())
-            .expect("read debug dir")
-            .collect::<Result<Vec<_>, _>>()
-            .expect("debug entries");
-        assert_eq!(debug_files.len(), 1);
-        let raw = std::fs::read_to_string(debug_files[0].path()).expect("read raw debug file");
+        let raw = read_single_debug_file(temp_dir.path());
         let debug: serde_json::Value = serde_json::from_str(&raw).expect("debug json");
 
         assert_eq!(debug["request"]["headers"]["x-goog-api-key"], "[REDACTED]");
@@ -602,12 +629,7 @@ mod tests {
         .await
         .expect_err("connection should fail");
 
-        let debug_files = std::fs::read_dir(temp_dir.path())
-            .expect("read debug dir")
-            .collect::<Result<Vec<_>, _>>()
-            .expect("debug entries");
-        assert_eq!(debug_files.len(), 1);
-        let raw = std::fs::read_to_string(debug_files[0].path()).expect("read raw debug file");
+        let raw = read_single_debug_file(temp_dir.path());
         let debug: serde_json::Value = serde_json::from_str(&raw).expect("debug json");
 
         assert_eq!(debug["request"]["method"], "POST");


src/protocols/openai.rs (+43, -26)

@@ -116,6 +116,7 @@ pub async fn send(client: &Client, request: &ModelRequest) -> Result<ModelRespon
         status: status_code,
         elapsed_ms: 0,
         first_token_ms: None,
+        chunk_elapsed_ms: Vec::new(),
     })
 }


@@ -178,21 +179,27 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
     let mut text = String::new();
     let mut raw_stream = String::new();
     let mut first_token_ms: Option<u128> = None;
+    let mut chunk_elapsed_ms = Vec::new();
     let mut done = false;
 
     while let Some(chunk) = stream.next().await {
-        let chunk = chunk
-            .map_err(|error| {
-                write_request_error_debug_blocking(
+        let chunk = match chunk {
+            Ok(chunk) => chunk,
+            Err(error) => {
+                write_response_request_error_debug(
                     request,
                     &debug_url,
                     request_body.clone(),
+                    status_code,
+                    &response_headers,
+                    &raw_stream,
                     format!("OpenAI stream interrupted: {error}"),
-                    "openai-request-error",
-                );
-                error
-            })
-            .context("OpenAI stream interrupted")?;
+                )
+                .await?;
+                return Err(error).context("OpenAI stream interrupted");
+            }
+        };
+        chunk_elapsed_ms.push(started.elapsed().as_millis());
         for line in buffer.feed(&chunk) {
             raw_stream.push_str(&line);
             raw_stream.push('\n');
@@ -248,6 +255,7 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
         status: status_code,
         elapsed_ms: 0,
         first_token_ms,
+        chunk_elapsed_ms,
     })
 }


@@ -399,9 +407,32 @@ fn write_request_error_debug_blocking(
 mod tests {
     use crate::runner::{ModelRequest, RawDebugConfig, ThinkingConfig};
     use reqwest::Client;
+    use std::path::{Path, PathBuf};
     use wiremock::matchers::{body_json, header, method, path};
     use wiremock::{Mock, MockServer, ResponseTemplate};
 
+    fn read_single_debug_file(root: &Path) -> String {
+        let mut files = Vec::new();
+        collect_debug_files(root, &mut files);
+        assert_eq!(files.len(), 1);
+        std::fs::read_to_string(&files[0]).expect("read raw debug file")
+    }
+
+    fn collect_debug_files(dir: &Path, files: &mut Vec<PathBuf>) {
+        for entry in std::fs::read_dir(dir)
+            .expect("read debug dir")
+            .collect::<Result<Vec<_>, _>>()
+            .expect("debug entries")
+        {
+            let path = entry.path();
+            if path.is_dir() {
+                collect_debug_files(&path, files);
+            } else {
+                files.push(path);
+            }
+        }
+    }
+
     #[tokio::test]
     async fn extracts_chat_completion_text() {
         let server = MockServer::start().await;
@@ -591,12 +622,7 @@ mod tests {
         .await
         .expect_err("non-success should fail");
 
-        let debug_files = std::fs::read_dir(temp_dir.path())
-            .expect("read debug dir")
-            .collect::<Result<Vec<_>, _>>()
-            .expect("debug entries");
-        assert_eq!(debug_files.len(), 1);
-        let raw = std::fs::read_to_string(debug_files[0].path()).expect("read raw debug file");
+        let raw = read_single_debug_file(temp_dir.path());
         let debug: serde_json::Value = serde_json::from_str(&raw).expect("debug json");
 
         assert_eq!(debug["request"]["method"], "POST");
@@ -647,12 +673,7 @@ mod tests {
         .await
         .expect_err("connection should fail");
 
-        let debug_files = std::fs::read_dir(temp_dir.path())
-            .expect("read debug dir")
-            .collect::<Result<Vec<_>, _>>()
-            .expect("debug entries");
-        assert_eq!(debug_files.len(), 1);
-        let raw = std::fs::read_to_string(debug_files[0].path()).expect("read raw debug file");
+        let raw = read_single_debug_file(temp_dir.path());
         let debug: serde_json::Value = serde_json::from_str(&raw).expect("debug json");
 
         assert_eq!(debug["request"]["method"], "POST");
@@ -763,13 +784,9 @@ mod tests {
         assert_eq!(response.status, 200);
         assert_eq!(response.text, "hi there");
         assert!(response.first_token_ms.is_some());
+        assert_eq!(response.chunk_elapsed_ms.len(), 1);
 
-        let debug_files = std::fs::read_dir(temp_dir.path())
-            .expect("read debug dir")
-            .collect::<Result<Vec<_>, _>>()
-            .expect("debug entries");
-        assert_eq!(debug_files.len(), 1);
-        let raw = std::fs::read_to_string(debug_files[0].path()).expect("read raw debug file");
+        let raw = read_single_debug_file(temp_dir.path());
         assert!(raw.contains("data: {\"choices\""));
         assert!(raw.contains("data: [DONE]"));
     }


src/report.rs (+68, -5)

@@ -1,7 +1,8 @@
 use crate::metrics::ErrorCount;
 use anyhow::{Context, Result};
-use chrono::{DateTime, Utc};
+use chrono::{DateTime, Local, Utc};
 use serde::Serialize;
+use serde_json::Value;
 use std::path::{Path, PathBuf};
 
 #[derive(Debug, Clone, Serialize)]
@@ -70,12 +71,20 @@ pub struct BenchmarkReport {
     pub errors: Vec<ErrorCount>,
     pub correct_samples: Vec<CorrectCaseReport>,
     pub wrong_cases: Vec<WrongCaseReport>,
+    pub debug_requests: Vec<RequestDebugReport>,
 }
 
 #[derive(Debug, Clone, Serialize)]
 pub struct BenchmarkParamsReport {
     pub stream: bool,
     pub thinking: Option<ThinkingParamsReport>,
+    pub request: SentRequestParamsReport,
+}
+
+#[derive(Debug, Clone, Serialize)]
+pub struct SentRequestParamsReport {
+    pub protocol: String,
+    pub body: Value,
 }
 
 #[derive(Debug, Clone, Serialize)]
@@ -132,6 +141,23 @@ pub struct RpmReport {
     pub summary: RpmSummaryReport,
     pub mode_detail: Option<RpmModeDetailReport>,
     pub errors: Vec<ErrorCount>,
+    pub debug_requests: Vec<RequestDebugReport>,
+}
+
+#[derive(Debug, Clone, Serialize)]
+pub struct RequestDebugReport {
+    pub reason: String,
+    pub id: Option<String>,
+    pub phase: Option<String>,
+    pub second: Option<u64>,
+    pub prompt: String,
+    pub output: String,
+    pub latency_ms: u128,
+    pub ttft_ms: Option<u128>,
+    pub ttft_latency_delta_ms: Option<u128>,
+    pub chunk_count: usize,
+    pub chunk_elapsed_ms: Vec<u128>,
+    pub has_think_tags: bool,
 }
 
 #[derive(Debug, Clone, Serialize)]
@@ -210,14 +236,17 @@ fn write_report<T: Serialize>(
     started_at: DateTime<Utc>,
     report: &T,
 ) -> Result<PathBuf> {
-    let reports_dir = root.join("reports");
+    let local_started_at = started_at.with_timezone(&Local);
+    let reports_dir = root
+        .join("reports")
+        .join(local_started_at.format("%Y%m%d").to_string());
     std::fs::create_dir_all(&reports_dir).with_context(|| {
         format!(
             "failed to create reports directory {}",
             reports_dir.display()
         )
     })?;
-    let timestamp = started_at.format("%Y%m%dT%H%M%SZ");
+    let timestamp = local_started_at.format("%Y%m%dT%H%M%S%z");
     let filename = format!(
         "{}-{}-{}-{timestamp}.json",
         sanitize_filename_component(benchmark),
@@ -272,6 +301,7 @@
     #[test]
     fn writes_benchmark_report_under_reports_dir() {
         let temp_dir = tempfile::tempdir().expect("create temp dir");
+        let started_at = Utc.with_ymd_and_hms(2026, 5, 6, 1, 2, 3).unwrap();
         let report = BenchmarkReport {
             benchmark: "aime2026".to_string(),
             provider: "openai".to_string(),
@@ -279,6 +309,13 @@
             params: BenchmarkParamsReport {
                 stream: false,
                 thinking: None,
+                request: SentRequestParamsReport {
+                    protocol: "openai".to_string(),
+                    body: serde_json::json!({
+                        "temperature": 0.0,
+                        "max_tokens": 1024
+                    }),
+                },
             },
             dataset: DatasetReport {
                 source: "local".to_string(),
@@ -287,7 +324,7 @@
                 local_path: "data/benchmarks/aime2026/aime2026.jsonl".to_string(),
             },
             run: RunReport {
-                started_at: Utc.with_ymd_and_hms(2026, 5, 6, 1, 2, 3).unwrap(),
+                started_at,
                 duration_ms: 123,
                 concurrency: 2,
                 limit: Some(1),
@@ -315,11 +352,17 @@
             errors: vec![],
             correct_samples: vec![],
             wrong_cases: vec![],
+            debug_requests: vec![],
         };
 
         let path = write_benchmark_report(temp_dir.path(), &report).expect("write report");
 
-        assert!(path.ends_with("reports/aime2026-openai-gpt-test-20260506T010203Z.json"));
+        let local_started_at = started_at.with_timezone(&chrono::Local);
+        let expected_date = local_started_at.format("%Y%m%d").to_string();
+        let expected_timestamp = local_started_at.format("%Y%m%dT%H%M%S%z").to_string();
+        assert!(path.ends_with(format!(
+            "reports/{expected_date}/aime2026-openai-gpt-test-{expected_timestamp}.json"
+        )));
         assert!(path.exists());
     }


@@ -365,6 +408,20 @@
             mode: "sustained".to_string(),
             mode_detail: None,
             errors: vec![],
+            debug_requests: vec![RequestDebugReport {
+                reason: "near_ttft_latency".to_string(),
+                id: None,
+                phase: Some("refill_probe".to_string()),
+                second: Some(1),
+                prompt: "Hi".to_string(),
+                output: "<think>x</think>done".to_string(),
+                latency_ms: 1000,
+                ttft_ms: Some(1000),
+                ttft_latency_delta_ms: Some(0),
+                chunk_count: 2,
+                chunk_elapsed_ms: vec![100, 1000],
+                has_think_tags: true,
+            }],
         };
 
         let json = serde_json::to_string(&report).expect("serialize report");
@@ -372,6 +429,11 @@
         assert!(json.contains("\"prompt\":\"Hi\""));
         assert!(json.contains("\"stream\":false"));
         assert!(json.contains("\"duration\":\"60s\""));
+        assert!(json.contains("\"debug_requests\""));
+        assert!(json.contains("\"reason\":\"near_ttft_latency\""));
+        assert!(json.contains("\"chunk_count\":2"));
+        assert!(json.contains("\"chunk_elapsed_ms\":[100,1000]"));
+        assert!(json.contains("\"has_think_tags\":true"));
     }
 
     #[test]
@@ -443,6 +505,7 @@
                 }),
             }),
             errors: vec![],
+            debug_requests: vec![],
         };
 
         let json = serde_json::to_string(&report).expect("serialize report");
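For reference, under the new layout a report started at 2026-05-06 09:02:03 +08:00 local time would land at something like `reports/20260506/aime2026-openai-gpt-test-20260506T090203+0800.json` (timestamp and offset illustrative). A minimal sketch of the dated-path construction, mirroring `write_report` but omitting its filename sanitization:

    use chrono::{DateTime, Local, Utc};
    use std::path::PathBuf;

    // Sketch of the reports/<local date>/<name>-<local timestamp>.json layout.
    fn dated_report_path(name: &str, started_at: DateTime<Utc>) -> PathBuf {
        let local = started_at.with_timezone(&Local);
        PathBuf::from("reports")
            .join(local.format("%Y%m%d").to_string())
            .join(format!("{name}-{}.json", local.format("%Y%m%dT%H%M%S%z")))
    }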


src/runner.rs (+53, -5)

@@ -1,7 +1,7 @@
 use crate::config::ProtocolKind;
 use crate::protocols;
 use anyhow::{Context, Result};
-use chrono::Utc;
+use chrono::Local;
 use reqwest::Client;
 use reqwest::header::HeaderMap;
 use serde::Serialize;
@@ -102,23 +102,25 @@ impl RawDebugConfig {
     }
 
     async fn write_debug_file(&self, response_kind: &str, contents: &str) -> Result<PathBuf> {
-        tokio::fs::create_dir_all(&self.output_dir)
+        let now = Local::now();
+        let dated_output_dir = self.output_dir.join(now.format("%Y%m%d").to_string());
+        tokio::fs::create_dir_all(&dated_output_dir)
             .await
             .with_context(|| {
                 format!(
                     "failed to create raw debug dir {}",
-                    self.output_dir.display()
+                    dated_output_dir.display()
                 )
             })?;
         let sequence = self.counter.fetch_add(1, Ordering::Relaxed) + 1;
-        let timestamp = Utc::now().format("%Y%m%dT%H%M%S%.3fZ");
+        let timestamp = now.format("%Y%m%dT%H%M%S%.3f%z");
         let filename = format!(
             "{}-{}-{sequence:06}-{}.txt",
             self.prefix,
             timestamp,
             sanitize_filename_component(response_kind)
         );
-        let path = self.output_dir.join(filename);
+        let path = dated_output_dir.join(filename);
         tokio::fs::write(&path, contents)
             .await
             .with_context(|| format!("failed to write raw debug response {}", path.display()))?;
@@ -241,6 +243,7 @@ pub struct ModelResponse {
     pub status: u16,
     pub elapsed_ms: u128,
     pub first_token_ms: Option<u128>,
+    pub chunk_elapsed_ms: Vec<u128>,
 }
 
 pub async fn run_model_request(
@@ -298,4 +301,49 @@ mod tests {
         assert!(debug.contains("[REDACTED]"));
         assert!(!debug.contains("sk-secret-token"));
     }
+
+    #[tokio::test]
+    async fn raw_debug_writes_files_under_date_directory() {
+        let temp_dir = tempfile::tempdir().expect("create temp dir");
+        let raw_debug = RawDebugConfig::new(temp_dir.path().join("debug"), "model/test".into());
+
+        let path = raw_debug
+            .write_response("response-kind", "body")
+            .await
+            .expect("write debug response");
+
+        let relative = path
+            .strip_prefix(temp_dir.path().join("debug"))
+            .expect("path under debug dir");
+        let components = relative
+            .components()
+            .map(|component| component.as_os_str().to_string_lossy().into_owned())
+            .collect::<Vec<_>>();
+
+        assert_eq!(components.len(), 2);
+        assert_eq!(components[0].len(), 8);
+        assert!(components[0].chars().all(|ch| ch.is_ascii_digit()));
+        assert!(components[1].ends_with("-response-kind.txt"));
+    }
+
+    #[tokio::test]
+    async fn raw_debug_uses_local_time_for_directory_and_filename() {
+        let temp_dir = tempfile::tempdir().expect("create temp dir");
+        let raw_debug = RawDebugConfig::new(temp_dir.path().join("debug"), "model/test".into());
+
+        let path = raw_debug
+            .write_response("response-kind", "body")
+            .await
+            .expect("write debug response");
+
+        let local_now = chrono::Local::now();
+        let local_date = local_now.format("%Y%m%d").to_string();
+        let local_hour_prefix = local_now.format("%Y%m%dT%H").to_string();
+        let filename = path.file_name().expect("debug filename").to_string_lossy();
+
+        assert!(path.starts_with(temp_dir.path().join("debug").join(local_date)));
+        assert!(filename.contains(&local_hour_prefix));
+        assert!(filename.contains(&local_now.format("%z").to_string()));
+        assert!(!filename.contains('Z'));
+    }
 }
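The same local-date grouping now applies to raw debug output: with the prefix `anthropic-claude-test`, a file written at 09:02:03.123 +08:00 would land at, for example, `outputs/debug/20260506/anthropic-claude-test-20260506T090203.123+0800-000001-response-kind.txt` (date, time, and sequence number illustrative).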
