
feat: log request errors for debug

pull/1/head
orangels 5 days ago
Parent
Commit
948e32d64a
8 changed files with 457 additions and 65 deletions
  1. README.md (+1 -1)
  2. docs/USAGE.zh-CN.md (+5 -4)
  3. docs/testing-guide.md (+5 -4)
  4. src/cli.rs (+46 -46)
  5. src/protocols/anthropic.rs (+127 -3)
  6. src/protocols/google.rs (+127 -3)
  7. src/protocols/openai.rs (+120 -3)
  8. src/runner.rs (+26 -1)

+1 -1 README.md

@@ -307,7 +307,7 @@ Benchmark and RPM commands print a terminal summary with success counts, failure

Benchmark reports include `wrong_cases`, with each wrong case containing the case id, question, expected answer, extracted actual answer, and raw model output. RPM reports include request counts, mode, target RPM, observed RPM, latency, error counts, and mode-specific details such as burst summaries, probe summaries, window-boundary summaries, and optional limiter inference.

When an upstream request returns a non-success HTTP status such as 400, 429, or 504, `check`, `bench`, and `rpm` automatically write a request/response debug JSON file under `outputs/debug/`. The debug file includes the full request URL, redacted request headers, full request body including the prompt, response status, response headers, and full response body. API tokens are redacted, but prompts and model outputs are preserved for troubleshooting.
When an upstream request returns a non-success HTTP status such as 400, 429, or 504, `check`, `bench`, and `rpm` automatically write a request/response debug JSON file under `outputs/debug/`. The debug file includes the full request URL, redacted request headers, full request body including the prompt, response status, response headers, and full response body. If the request fails before an HTTP response is available, for example a connect timeout, read failure, or streaming interruption counted as `request_error`, the same directory gets a `*-request-error` debug JSON with `response.status: null`, `response.error_kind: "request_error"`, and the local error message. API tokens are redacted, but prompts and model outputs are preserved for troubleshooting.

Use `--debug-raw` with `check`, `bench`, or `rpm` when you also want to save successful upstream raw responses. Non-streaming success responses save the raw JSON body, and streaming success responses save the raw SSE lines. The directory is ignored by git and can help diagnose relay-side response rewriting.
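
As a rough illustration of the debug file layout described above (not literal output; the URL, model, header set, and body values here are placeholders), a non-success HTTP status produces a JSON roughly like:

```json
{
  "request": {
    "method": "POST",
    "url": "https://relay.example.com/v1/chat/completions",
    "headers": {
      "authorization": "Bearer [REDACTED]"
    },
    "body": {
      "model": "gpt-test",
      "messages": [{ "role": "user", "content": "example prompt" }]
    }
  },
  "response": {
    "status": 429,
    "headers": {
      "content-type": "application/json"
    },
    "body": "{\"error\":{\"message\":\"rate limit exceeded\"}}"
  }
}
```

A `*-request-error` file reuses the same envelope but with `status: null`, empty response headers and body, and the additional `error_kind` and `error` fields; the redacted header name depends on the protocol (`authorization` for OpenAI, `x-api-key` for Anthropic, `x-goog-api-key` for Google).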



+5 -4 docs/USAGE.zh-CN.md

@@ -414,16 +414,17 @@ jq . reports/<report-file>.json

### Error Request/Response Debug

When the upstream returns a non-success HTTP status such as 400, 429, or 504, `check`, `bench`, and `rpm` automatically write a request/response debug JSON under `outputs/debug/`; no extra flag is required.
When the upstream returns a non-success HTTP status such as 400, 429, or 504, `check`, `bench`, and `rpm` automatically write a request/response debug JSON under `outputs/debug/`; no extra flag is required. If the request fails before an HTTP response is received, for example a connect timeout, a failure reading the response, or an interrupted streaming response counted as `request_error`, a `*-request-error` debug JSON is also written.

The debug JSON contains:

- the full request URL
- the redacted request headers
- the full request body, including the prompt
- the response status
- the full response headers
- the full response body
- the response status; `null` in the `request_error` case
- the full response headers; an empty object in the `request_error` case
- the full response body; an empty string in the `request_error` case
- in the `request_error` case, additionally `response.error_kind: "request_error"` and the local error message in `response.error` (see the sketch after this section)

API tokens are redacted, but prompts, question content, and model output are preserved verbatim. Clean up `outputs/debug/` as needed once troubleshooting is done.
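
The following is an illustrative sketch of a `*-request-error` debug JSON, not actual output; the field names follow the list above, while the URL, model, message content, and error text are made-up placeholder values (the Anthropic-style `x-api-key` header is just one case):

```json
{
  "request": {
    "method": "POST",
    "url": "https://relay.example.com/v1/messages",
    "headers": {
      "x-api-key": "[REDACTED]"
    },
    "body": {
      "model": "claude-test",
      "messages": [{ "role": "user", "content": "example prompt" }]
    }
  },
  "response": {
    "status": null,
    "headers": {},
    "body": "",
    "error_kind": "request_error",
    "error": "failed to send Anthropic messages request: connection refused"
  }
}
```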



+5 -4 docs/testing-guide.md

@@ -290,16 +290,17 @@ With thinking enabled for Google Gemini, the request body is populated according to the following rules in `generatio

## Error Request/Response Debug

When the upstream returns a non-success HTTP status such as 400, 429, or 504, `check`, `bench`, and `rpm` automatically write a request/response debug JSON under `outputs/debug/`; no extra flag needs to be added.
When the upstream returns a non-success HTTP status such as 400, 429, or 504, `check`, `bench`, and `rpm` automatically write a request/response debug JSON under `outputs/debug/`; no extra flag needs to be added. If the request fails before an HTTP response is received, for example a connect timeout, a failure reading the response, or an interrupted streaming response counted as `request_error`, a `*-request-error` debug JSON is also written.

The debug JSON contains:

- the full request URL
- the redacted request headers
- the full request body, including the prompt
- the response status
- the full response headers
- the full response body
- the response status; `null` in the `request_error` case
- the full response headers; an empty object in the `request_error` case
- the full response body; an empty string in the `request_error` case
- in the `request_error` case, additionally `response.error_kind: "request_error"` and the local error message in `response.error`

API tokens are redacted, but prompts, question content, and model output are preserved verbatim; clean up `outputs/debug/` as needed once troubleshooting is done.



+46 -46 src/cli.rs

@@ -324,7 +324,7 @@ async fn dispatch_bench(command: BenchCommand) -> Result<()> {
reasoning_effort,
reasoning_summary,
} => {
run_aime_benchmark(
run_aime_benchmark(BenchmarkCommandOptions {
config,
provider,
model,
@@ -333,7 +333,7 @@ async fn dispatch_bench(command: BenchCommand) -> Result<()> {
stream,
max_tokens,
debug_raw,
ThinkingOverrides {
thinking_overrides: ThinkingOverrides {
thinking,
thinking_type,
thinking_budget_tokens,
@@ -342,7 +342,7 @@ async fn dispatch_bench(command: BenchCommand) -> Result<()> {
reasoning_effort,
reasoning_summary,
},
)
})
.await
}
BenchCommand::GpqaDiamond {
@@ -362,7 +362,7 @@ async fn dispatch_bench(command: BenchCommand) -> Result<()> {
reasoning_effort,
reasoning_summary,
} => {
run_gpqa_benchmark(
run_gpqa_benchmark(BenchmarkCommandOptions {
config,
provider,
model,
@@ -371,7 +371,7 @@ async fn dispatch_bench(command: BenchCommand) -> Result<()> {
stream,
max_tokens,
debug_raw,
ThinkingOverrides {
thinking_overrides: ThinkingOverrides {
thinking,
thinking_type,
thinking_budget_tokens,
@@ -380,14 +380,14 @@ async fn dispatch_bench(command: BenchCommand) -> Result<()> {
reasoning_effort,
reasoning_summary,
},
)
})
.await
}
}
}

async fn run_aime_benchmark(
config_path: PathBuf,
struct BenchmarkCommandOptions {
config: PathBuf,
provider: Option<String>,
model: Option<String>,
concurrency: usize,
@@ -396,11 +396,15 @@ async fn run_aime_benchmark(
max_tokens: u32,
debug_raw: bool,
thinking_overrides: ThinkingOverrides,
) -> Result<()> {
let config = AppConfig::load(&config_path)?;
let provider_name = provider_name(&config, provider.as_deref())?;
}

async fn run_aime_benchmark(options: BenchmarkCommandOptions) -> Result<()> {
let config = AppConfig::load(&options.config)?;
let provider_name = provider_name(&config, options.provider.as_deref())?;
let provider_config = config.resolved_provider(Some(&provider_name))?;
let model = model.unwrap_or_else(|| provider_config.default_model.clone());
let model = options
.model
.unwrap_or_else(|| provider_config.default_model.clone());
let loaded = benchmarks::aime::load_cases(Path::new(&config.benchmarks.data_dir))?;
let dataset = dataset_report(
config
@@ -410,15 +414,17 @@ async fn run_aime_benchmark(
.map(|dataset| (dataset.source.as_str(), dataset.split.as_str())),
&loaded.local_path,
);
let cases = apply_limit(loaded.cases, limit);
let cases = apply_limit(loaded.cases, options.limit);
let total = cases.len() as u64;
let started_at = Utc::now();
let started = Instant::now();
let mut base_request = request_template(&provider_config, &model, 0.0, max_tokens);
base_request.stream = stream.unwrap_or(provider_config.stream);
base_request.raw_debug = raw_debug_config(debug_raw, &provider_name, &model);
base_request.thinking =
merged_thinking_config(provider_config.thinking.as_ref(), thinking_overrides);
let mut base_request = request_template(&provider_config, &model, 0.0, options.max_tokens);
base_request.stream = options.stream.unwrap_or(provider_config.stream);
base_request.raw_debug = raw_debug_config(options.debug_raw, &provider_name, &model);
base_request.thinking = merged_thinking_config(
provider_config.thinking.as_ref(),
options.thinking_overrides,
);
let protocol = provider_config.protocol;

let pb = ProgressBar::new(total);
@@ -437,7 +443,7 @@ async fn run_aime_benchmark(
(case, result)
}
})
.buffer_unordered(nonzero_concurrency(concurrency));
.buffer_unordered(nonzero_concurrency(options.concurrency));

let mut metrics = Metrics::new();
let mut wrong_cases = Vec::new();
@@ -489,9 +495,9 @@ async fn run_aime_benchmark(
dataset,
started_at,
duration_ms: started.elapsed().as_millis(),
concurrency,
limit,
max_tokens,
concurrency: options.concurrency,
limit: options.limit,
max_tokens: options.max_tokens,
summary,
correct_samples,
wrong_cases,
@@ -501,21 +507,13 @@ async fn run_aime_benchmark(
Ok(())
}

async fn run_gpqa_benchmark(
config_path: PathBuf,
provider: Option<String>,
model: Option<String>,
concurrency: usize,
limit: Option<usize>,
stream: Option<bool>,
max_tokens: u32,
debug_raw: bool,
thinking_overrides: ThinkingOverrides,
) -> Result<()> {
let config = AppConfig::load(&config_path)?;
let provider_name = provider_name(&config, provider.as_deref())?;
async fn run_gpqa_benchmark(options: BenchmarkCommandOptions) -> Result<()> {
let config = AppConfig::load(&options.config)?;
let provider_name = provider_name(&config, options.provider.as_deref())?;
let provider_config = config.resolved_provider(Some(&provider_name))?;
let model = model.unwrap_or_else(|| provider_config.default_model.clone());
let model = options
.model
.unwrap_or_else(|| provider_config.default_model.clone());
let loaded = benchmarks::gpqa::load_cases(Path::new(&config.benchmarks.data_dir))?;
let dataset = dataset_report(
config
@@ -525,15 +523,17 @@ async fn run_gpqa_benchmark(
.map(|dataset| (dataset.source.as_str(), dataset.split.as_str())),
&loaded.local_path,
);
let cases = apply_limit(loaded.cases, limit);
let cases = apply_limit(loaded.cases, options.limit);
let total = cases.len() as u64;
let started_at = Utc::now();
let started = Instant::now();
let mut base_request = request_template(&provider_config, &model, 0.0, max_tokens);
base_request.stream = stream.unwrap_or(provider_config.stream);
base_request.raw_debug = raw_debug_config(debug_raw, &provider_name, &model);
base_request.thinking =
merged_thinking_config(provider_config.thinking.as_ref(), thinking_overrides);
let mut base_request = request_template(&provider_config, &model, 0.0, options.max_tokens);
base_request.stream = options.stream.unwrap_or(provider_config.stream);
base_request.raw_debug = raw_debug_config(options.debug_raw, &provider_name, &model);
base_request.thinking = merged_thinking_config(
provider_config.thinking.as_ref(),
options.thinking_overrides,
);
let protocol = provider_config.protocol;

let pb = ProgressBar::new(total);
@@ -552,7 +552,7 @@ async fn run_gpqa_benchmark(
(case, result)
}
})
.buffer_unordered(nonzero_concurrency(concurrency));
.buffer_unordered(nonzero_concurrency(options.concurrency));

let mut metrics = Metrics::new();
let mut wrong_cases = Vec::new();
@@ -606,9 +606,9 @@ async fn run_gpqa_benchmark(
dataset,
started_at,
duration_ms: started.elapsed().as_millis(),
concurrency,
limit,
max_tokens,
concurrency: options.concurrency,
limit: options.limit,
max_tokens: options.max_tokens,
summary,
correct_samples,
wrong_cases,


+127 -3 src/protocols/anthropic.rs

@@ -20,6 +20,16 @@ pub async fn send(client: &Client, request: &ModelRequest) -> Result<ModelRespon
.json(&request_body)
.send()
.await
.map_err(|error| {
write_request_error_debug_blocking(
request,
&debug_url,
request_body.clone(),
format!("failed to send Anthropic messages request: {error}"),
"anthropic-request-error",
);
error
})
.context("failed to send Anthropic messages request")?;

let status = response.status();
@@ -28,6 +38,16 @@ pub async fn send(client: &Client, request: &ModelRequest) -> Result<ModelRespon
let body = response
.text()
.await
.map_err(|error| {
write_request_error_debug_blocking(
request,
&debug_url,
request_body.clone(),
format!("failed to read Anthropic response body: {error}"),
"anthropic-request-error",
);
error
})
.context("failed to read Anthropic response body")?;
if status.is_success()
&& let Some(raw_debug) = &request.raw_debug
@@ -87,6 +107,16 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
.json(&request_body)
.send()
.await
.map_err(|error| {
write_request_error_debug_blocking(
request,
&debug_url,
request_body.clone(),
format!("failed to send Anthropic streaming request: {error}"),
"anthropic-request-error",
);
error
})
.context("failed to send Anthropic streaming request")?;

let status = response.status();
@@ -97,6 +127,16 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
let body = response
.text()
.await
.map_err(|error| {
write_request_error_debug_blocking(
request,
&debug_url,
request_body.clone(),
format!("failed to read Anthropic error response body: {error}"),
"anthropic-request-error",
);
error
})
.context("failed to read Anthropic error response body")?;
if let Some(raw_debug) = &request.raw_debug {
raw_debug
@@ -104,9 +144,11 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
"anthropic-error-http",
debug_request(request, &debug_url, request_body),
HttpDebugResponse {
status: status_code,
status: Some(status_code),
headers: response_headers_for_debug(&response_headers),
body: body.clone(),
error_kind: None,
error: None,
},
)
.await
@@ -127,7 +169,18 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
let mut done = false;

while let Some(chunk) = stream.next().await {
let chunk = chunk.context("Anthropic stream interrupted")?;
let chunk = chunk
.map_err(|error| {
write_request_error_debug_blocking(
request,
&debug_url,
request_body.clone(),
format!("Anthropic stream interrupted: {error}"),
"anthropic-request-error",
);
error
})
.context("Anthropic stream interrupted")?;
for line in buffer.feed(&chunk) {
raw_stream.push_str(&line);
raw_stream.push('\n');
@@ -256,9 +309,11 @@ async fn write_error_debug(
response_kind,
debug_request(request, url, request_body),
HttpDebugResponse {
status,
status: Some(status),
headers: response_headers_for_debug(response_headers),
body: response_body.to_string(),
error_kind: None,
error: None,
},
)
.await
@@ -279,6 +334,22 @@ fn debug_request(request: &ModelRequest, url: &str, body: Value) -> HttpDebugReq
}
}

fn write_request_error_debug_blocking(
request: &ModelRequest,
url: &str,
request_body: Value,
error: String,
response_kind: &str,
) {
if let Some(raw_debug) = &request.raw_debug {
let _ = futures::executor::block_on(raw_debug.write_request_error(
response_kind,
debug_request(request, url, request_body),
error,
));
}
}

#[cfg(test)]
mod tests {
use crate::runner::{ModelRequest, RawDebugConfig, ThinkingConfig};
@@ -543,6 +614,59 @@ mod tests {
assert!(!raw.contains("anthropic-secret-token"));
}

#[tokio::test]
async fn request_error_debug_records_request_and_local_error_without_token() {
let temp_dir = tempfile::tempdir().expect("create temp dir");
let request = ModelRequest {
base_url: "http://127.0.0.1:9".to_string(),
api_token: "anthropic-secret-token".to_string(),
model: "claude-test".to_string(),
prompt: "prompt before anthropic connect error".to_string(),
temperature: 0.0,
max_tokens: 1024,
stream: false,
raw_debug: Some(RawDebugConfig::new(
temp_dir.path().to_path_buf(),
"anthropic-claude-test".to_string(),
)),
thinking: None,
};

let _ = super::send(&Client::new(), &request)
.await
.expect_err("connection should fail");

let debug_files = std::fs::read_dir(temp_dir.path())
.expect("read debug dir")
.collect::<Result<Vec<_>, _>>()
.expect("debug entries");
assert_eq!(debug_files.len(), 1);
let raw = std::fs::read_to_string(debug_files[0].path()).expect("read raw debug file");
let debug: serde_json::Value = serde_json::from_str(&raw).expect("debug json");

assert_eq!(debug["request"]["method"], "POST");
assert!(
debug["request"]["url"]
.as_str()
.expect("request url")
.ends_with("/v1/messages")
);
assert_eq!(debug["request"]["headers"]["x-api-key"], "[REDACTED]");
assert_eq!(
debug["request"]["body"]["messages"][0]["content"],
"prompt before anthropic connect error"
);
assert_eq!(debug["response"]["status"], serde_json::Value::Null);
assert_eq!(debug["response"]["error_kind"], "request_error");
assert!(
debug["response"]["error"]
.as_str()
.expect("local error")
.contains("failed to send Anthropic messages request")
);
assert!(!raw.contains("anthropic-secret-token"));
}

#[tokio::test]
async fn base_url_with_v1_prefix_does_not_duplicate_messages_path() {
let server = MockServer::start().await;


+127 -3 src/protocols/google.rs

@@ -19,6 +19,16 @@ pub async fn send(client: &Client, request: &ModelRequest) -> Result<ModelRespon
.json(&request_body)
.send()
.await
.map_err(|error| {
write_request_error_debug_blocking(
request,
&debug_url,
request_body.clone(),
format!("failed to send Google generateContent request: {error}"),
"google-request-error",
);
error
})
.context("failed to send Google generateContent request")?;

let status = response.status();
@@ -27,6 +37,16 @@ pub async fn send(client: &Client, request: &ModelRequest) -> Result<ModelRespon
let body = response
.text()
.await
.map_err(|error| {
write_request_error_debug_blocking(
request,
&debug_url,
request_body.clone(),
format!("failed to read Google response body: {error}"),
"google-request-error",
);
error
})
.context("failed to read Google response body")?;
if status.is_success()
&& let Some(raw_debug) = &request.raw_debug
@@ -78,6 +98,16 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
.json(&request_body)
.send()
.await
.map_err(|error| {
write_request_error_debug_blocking(
request,
&debug_url,
request_body.clone(),
format!("failed to send Google streamGenerateContent request: {error}"),
"google-request-error",
);
error
})
.context("failed to send Google streamGenerateContent request")?;

let status = response.status();
@@ -88,6 +118,16 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
let body = response
.text()
.await
.map_err(|error| {
write_request_error_debug_blocking(
request,
&debug_url,
request_body.clone(),
format!("failed to read Google error response body: {error}"),
"google-request-error",
);
error
})
.context("failed to read Google error response body")?;
if let Some(raw_debug) = &request.raw_debug {
raw_debug
@@ -95,9 +135,11 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
"google-error-http",
debug_request(request, &debug_url, request_body),
HttpDebugResponse {
status: status_code,
status: Some(status_code),
headers: response_headers_for_debug(&response_headers),
body: body.clone(),
error_kind: None,
error: None,
},
)
.await
@@ -116,7 +158,18 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
let mut first_token_ms: Option<u128> = None;

while let Some(chunk) = stream.next().await {
let chunk = chunk.context("Google stream interrupted")?;
let chunk = chunk
.map_err(|error| {
write_request_error_debug_blocking(
request,
&debug_url,
request_body.clone(),
format!("Google stream interrupted: {error}"),
"google-request-error",
);
error
})
.context("Google stream interrupted")?;
for line in buffer.feed(&chunk) {
raw_stream.push_str(&line);
raw_stream.push('\n');
@@ -230,9 +283,11 @@ async fn write_error_debug(
response_kind,
debug_request(request, url, request_body),
HttpDebugResponse {
status,
status: Some(status),
headers: response_headers_for_debug(response_headers),
body: response_body.to_string(),
error_kind: None,
error: None,
},
)
.await
@@ -250,6 +305,22 @@ fn debug_request(request: &ModelRequest, url: &str, body: Value) -> HttpDebugReq
}
}

fn write_request_error_debug_blocking(
request: &ModelRequest,
url: &str,
request_body: Value,
error: String,
response_kind: &str,
) {
if let Some(raw_debug) = &request.raw_debug {
let _ = futures::executor::block_on(raw_debug.write_request_error(
response_kind,
debug_request(request, url, request_body),
error,
));
}
}

#[derive(Debug, Deserialize)]
struct GoogleResponse {
candidates: Vec<GoogleCandidate>,
@@ -445,6 +516,59 @@ mod tests {
assert!(!raw.contains("google-secret-token"));
}

#[tokio::test]
async fn request_error_debug_records_request_and_local_error_without_token() {
let temp_dir = tempfile::tempdir().expect("create temp dir");
let request = ModelRequest {
base_url: "http://127.0.0.1:9".to_string(),
api_token: "google-secret-token".to_string(),
model: "gemini-test".to_string(),
prompt: "prompt before google connect error".to_string(),
temperature: 0.0,
max_tokens: 1024,
stream: false,
raw_debug: Some(RawDebugConfig::new(
temp_dir.path().to_path_buf(),
"google-gemini-test".to_string(),
)),
thinking: None,
};

let _ = super::send(&Client::new(), &request)
.await
.expect_err("connection should fail");

let debug_files = std::fs::read_dir(temp_dir.path())
.expect("read debug dir")
.collect::<Result<Vec<_>, _>>()
.expect("debug entries");
assert_eq!(debug_files.len(), 1);
let raw = std::fs::read_to_string(debug_files[0].path()).expect("read raw debug file");
let debug: serde_json::Value = serde_json::from_str(&raw).expect("debug json");

assert_eq!(debug["request"]["method"], "POST");
assert!(
debug["request"]["url"]
.as_str()
.expect("request url")
.ends_with("/models/gemini-test:generateContent")
);
assert_eq!(debug["request"]["headers"]["x-goog-api-key"], "[REDACTED]");
assert_eq!(
debug["request"]["body"]["contents"][0]["parts"][0]["text"],
"prompt before google connect error"
);
assert_eq!(debug["response"]["status"], serde_json::Value::Null);
assert_eq!(debug["response"]["error_kind"], "request_error");
assert!(
debug["response"]["error"]
.as_str()
.expect("local error")
.contains("failed to send Google generateContent request")
);
assert!(!raw.contains("google-secret-token"));
}

fn google_request(base_url: String) -> ModelRequest {
ModelRequest {
base_url,


+120 -3 src/protocols/openai.rs

@@ -19,6 +19,16 @@ pub async fn send(client: &Client, request: &ModelRequest) -> Result<ModelRespon
.json(&request_body)
.send()
.await
.map_err(|error| {
write_request_error_debug_blocking(
request,
&debug_url,
request_body.clone(),
format!("failed to send OpenAI chat completion request: {error}"),
"openai-request-error",
);
error
})
.context("failed to send OpenAI chat completion request")?;

let status = response.status();
@@ -27,6 +37,16 @@ pub async fn send(client: &Client, request: &ModelRequest) -> Result<ModelRespon
let body = response
.text()
.await
.map_err(|error| {
write_request_error_debug_blocking(
request,
&debug_url,
request_body.clone(),
format!("failed to read OpenAI response body: {error}"),
"openai-request-error",
);
error
})
.context("failed to read OpenAI response body")?;
if status.is_success()
&& let Some(raw_debug) = &request.raw_debug
@@ -84,6 +104,16 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
.json(&request_body)
.send()
.await
.map_err(|error| {
write_request_error_debug_blocking(
request,
&debug_url,
request_body.clone(),
format!("failed to send OpenAI streaming request: {error}"),
"openai-request-error",
);
error
})
.context("failed to send OpenAI streaming request")?;

let status = response.status();
@@ -101,9 +131,11 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
"openai-error-http",
debug_request(request, &debug_url, request_body),
HttpDebugResponse {
status: status_code,
status: Some(status_code),
headers: response_headers_for_debug(&response_headers),
body: body.clone(),
error_kind: None,
error: None,
},
)
.await
@@ -123,7 +155,18 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
let mut done = false;

while let Some(chunk) = stream.next().await {
let chunk = chunk.context("OpenAI stream interrupted")?;
let chunk = chunk
.map_err(|error| {
write_request_error_debug_blocking(
request,
&debug_url,
request_body.clone(),
format!("OpenAI stream interrupted: {error}"),
"openai-request-error",
);
error
})
.context("OpenAI stream interrupted")?;
for line in buffer.feed(&chunk) {
raw_stream.push_str(&line);
raw_stream.push('\n');
@@ -247,9 +290,11 @@ async fn write_error_debug(
response_kind,
debug_request(request, url, request_body),
HttpDebugResponse {
status,
status: Some(status),
headers: response_headers_for_debug(response_headers),
body: response_body.to_string(),
error_kind: None,
error: None,
},
)
.await
@@ -270,6 +315,22 @@ fn debug_request(request: &ModelRequest, url: &str, body: Value) -> HttpDebugReq
}
}

fn write_request_error_debug_blocking(
request: &ModelRequest,
url: &str,
request_body: Value,
error: String,
response_kind: &str,
) {
if let Some(raw_debug) = &request.raw_debug {
let _ = futures::executor::block_on(raw_debug.write_request_error(
response_kind,
debug_request(request, url, request_body),
error,
));
}
}

#[cfg(test)]
mod tests {
use crate::runner::{ModelRequest, RawDebugConfig, ThinkingConfig};
@@ -500,6 +561,62 @@ mod tests {
assert!(!raw.contains("sk-real-token"));
}

#[tokio::test]
async fn request_error_debug_records_request_and_local_error_without_token() {
let temp_dir = tempfile::tempdir().expect("create temp dir");
let request = ModelRequest {
base_url: "http://127.0.0.1:9".to_string(),
api_token: "sk-real-token".to_string(),
model: "gpt-test".to_string(),
prompt: "prompt before connect error".to_string(),
temperature: 0.0,
max_tokens: 1024,
stream: false,
raw_debug: Some(RawDebugConfig::new(
temp_dir.path().to_path_buf(),
"openai-gpt-test".to_string(),
)),
thinking: None,
};

let _ = super::send(&Client::new(), &request)
.await
.expect_err("connection should fail");

let debug_files = std::fs::read_dir(temp_dir.path())
.expect("read debug dir")
.collect::<Result<Vec<_>, _>>()
.expect("debug entries");
assert_eq!(debug_files.len(), 1);
let raw = std::fs::read_to_string(debug_files[0].path()).expect("read raw debug file");
let debug: serde_json::Value = serde_json::from_str(&raw).expect("debug json");

assert_eq!(debug["request"]["method"], "POST");
assert!(
debug["request"]["url"]
.as_str()
.expect("request url")
.ends_with("/chat/completions")
);
assert_eq!(
debug["request"]["headers"]["authorization"],
"Bearer [REDACTED]"
);
assert_eq!(
debug["request"]["body"]["messages"][0]["content"],
"prompt before connect error"
);
assert_eq!(debug["response"]["status"], serde_json::Value::Null);
assert_eq!(debug["response"]["error_kind"], "request_error");
assert!(
debug["response"]["error"]
.as_str()
.expect("local error")
.contains("failed to send OpenAI chat completion request")
);
assert!(!raw.contains("sk-real-token"));
}

#[tokio::test]
async fn base_url_with_v1_prefix_keeps_chat_completion_path() {
let server = MockServer::start().await;


+26 -1 src/runner.rs

@@ -80,6 +80,27 @@ impl RawDebugConfig {
self.write_debug_file(response_kind, &contents).await
}

pub async fn write_request_error(
&self,
response_kind: &str,
request: HttpDebugRequest,
error: String,
) -> Result<PathBuf> {
let envelope = HttpDebugEnvelope {
request,
response: HttpDebugResponse {
status: None,
headers: BTreeMap::new(),
body: String::new(),
error_kind: Some("request_error".to_string()),
error: Some(error),
},
};
let contents = serde_json::to_string_pretty(&envelope)
.context("failed to serialize raw debug request error")?;
self.write_debug_file(response_kind, &contents).await
}

async fn write_debug_file(&self, response_kind: &str, contents: &str) -> Result<PathBuf> {
tokio::fs::create_dir_all(&self.output_dir)
.await
@@ -121,9 +142,13 @@ pub struct HttpDebugRequest {

#[derive(Debug, Serialize)]
pub struct HttpDebugResponse {
pub status: u16,
pub status: Option<u16>,
pub headers: BTreeMap<String, String>,
pub body: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub error_kind: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub error: Option<String>,
}

pub fn request_headers_for_debug(headers: &[(&str, String)]) -> BTreeMap<String, String> {

