feat: record upstream error debug payloads

pull/1/head
orangels 5 days ago
parent
commit
898d02f7ef
8 changed files with 677 additions and 32 deletions
  1. +47 -2 README.md
  2. +19 -4 docs/USAGE.zh-CN.md
  3. +83 -5 docs/testing-guide.md
  4. +10 -2 src/cli.rs
  5. +140 -6 src/protocols/anthropic.rs
  6. +135 -7 src/protocols/google.rs
  7. +147 -6 src/protocols/openai.rs
  8. +96 -0 src/runner.rs

+47 -2 README.md View File

@@ -76,7 +76,50 @@ export ANTHROPIC_RELAY_TOKEN="..."
export GOOGLE_API_KEY="..."
```

Thinking can also be enabled per run with CLI overrides such as `--thinking true`, `--thinking-type enabled`, `--thinking-budget-tokens 10000`, `--thinking-display omitted`, `--reasoning-effort high`, and `--reasoning-summary auto`. For Anthropic, enabling thinking omits `temperature` from the upstream request. For Google Gemini, `budget_tokens` maps to `generationConfig.thinkingConfig.thinkingBudget`, `effort` maps to `thinkingLevel`, and `display: summarized` enables `includeThoughts`.
Thinking can also be enabled per run with CLI overrides such as `--thinking true`, `--thinking-type enabled`, `--thinking-budget-tokens 10000`, `--thinking-effort high`, `--thinking-display omitted`, `--reasoning-effort high`, and `--reasoning-summary auto`. For Anthropic, enabling thinking omits `temperature` from the upstream request.
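
For example, a single `check` run combining several of these overrides (illustrative invocation; `check` accepts the same thinking overrides as `bench` and `rpm`):

```bash
cargo run -- check --provider google --stream \
  --thinking true --thinking-effort high --thinking-display summarized \
  --prompt "hello"
```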

### Google Gemini Thinking

For Google Gemini, thinking settings are sent under `generationConfig.thinkingConfig`:

- `budget_tokens` maps to `thinkingBudget`.
- `effort` maps to `thinkingLevel`.
- `display: summarized` sends `includeThoughts: true`.
- `display: omitted` sends `includeThoughts: false`.

If both `budget_tokens` and `effort` are configured, both fields are sent. Some Gemini backends or relays may reject a request that mixes Gemini 2.5-style and Gemini 3-style fields, so prefer model-specific configs.
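
As a sketch, the Gemini 3 style config below (`effort: high` plus `display: summarized`) produces an upstream request containing:

```json
{
  "generationConfig": {
    "thinkingConfig": {
      "thinkingLevel": "high",
      "includeThoughts": true
    }
  }
}
```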

Recommended Gemini 3 config:

```yaml
google:
  protocol: google
  base_url: "https://generativelanguage.googleapis.com/v1beta"
  api_token: "${GOOGLE_API_KEY}"
  default_model: "gemini-3-pro-preview"
  stream: true
  thinking:
    enabled: true
    effort: "high"
    display: "summarized"
```

Recommended Gemini 2.5 config:

```yaml
google:
  protocol: google
  base_url: "https://generativelanguage.googleapis.com/v1beta"
  api_token: "${GOOGLE_API_KEY}"
  default_model: "gemini-2.5-pro"
  stream: true
  thinking:
    enabled: true
    budget_tokens: 5000
    display: "summarized"
```

If `enabled: true` is set without `budget_tokens`, `effort`, or `display`, the Google adapter does not send `thinkingConfig`; add `display` when you want to explicitly request or suppress thought summaries.
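
For example, a minimal sketch that still sends `thinkingConfig`:

```yaml
thinking:
  enabled: true
  display: "summarized"
```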

## Dataset Fetching

@@ -260,7 +303,9 @@ Benchmark and RPM commands print a terminal summary with success counts, failure

Benchmark reports include `wrong_cases`, with each wrong case containing the case id, question, expected answer, extracted actual answer, and raw model output. RPM reports include request counts, mode, target RPM, observed RPM, latency, error counts, and mode-specific details such as burst summaries, probe summaries, window-boundary summaries, and optional limiter inference.
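
A sketch of one `wrong_cases` entry; the field names here are illustrative, not the exact report schema:

```json
{
  "id": "aime2026-07",
  "question": "…",
  "expected": "42",
  "actual": "41",
  "raw_output": "…"
}
```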

Use `--debug-raw` with `check`, `bench`, or `rpm` to write upstream raw responses under `outputs/debug/`. Non-streaming requests save the raw JSON body, and streaming requests save the raw SSE lines. The directory is ignored by git and can help diagnose relay-side response rewriting.
When an upstream request returns a non-success HTTP status such as 400, 429, or 504, `check`, `bench`, and `rpm` automatically write a request/response debug JSON file under `outputs/debug/`. The debug file includes the full request URL, redacted request headers, full request body including the prompt, response status, response headers, and full response body. API tokens are redacted, but prompts and model outputs are preserved for troubleshooting.
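
A sketch of the debug file layout, matching the fields above (values illustrative):

```json
{
  "request": {
    "method": "POST",
    "url": "https://relay.example.com/v1/messages",
    "headers": {
      "anthropic-version": "2023-06-01",
      "x-api-key": "[REDACTED]"
    },
    "body": { "model": "claude-test", "messages": [{ "role": "user", "content": "full prompt" }] }
  },
  "response": {
    "status": 429,
    "headers": { "x-request-id": "req-123" },
    "body": "{\"error\":{\"message\":\"rate limited\"}}"
  }
}
```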

Use `--debug-raw` with `check`, `bench`, or `rpm` when you also want to save successful upstream raw responses. Non-streaming success responses save the raw JSON body, and streaming success responses save the raw SSE lines. The directory is ignored by git and can help diagnose relay-side response rewriting.

## Comparing Scores



+19 -4 docs/USAGE.zh-CN.md View File

@@ -408,9 +408,24 @@ ls -lt reports | head
jq . reports/<report-file>.json
```

### Raw Response Debug
### Error Request/Response Debug

If you need to check whether a relay rewrote the model response, you can enable `--debug-raw`:
When the upstream returns a non-success HTTP status such as 400, 429, or 504, `check`, `bench`, and `rpm` automatically write a request/response debug JSON under `outputs/debug/`; no extra flag is needed.

The debug JSON contains:

- Full request URL
- Redacted request headers
- Full request body, including the prompt
- Response status
- Full response headers
- Full response body

API tokens are redacted, but prompts, question content, and model output are saved as-is. Clean up `outputs/debug/` as needed after troubleshooting.
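
To inspect a debug file quickly, `jq` works well (file name illustrative):

```bash
jq '.request.url, .response.status' outputs/debug/<debug-file>.json
```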

### Raw Success Response Debug

If you need to check whether a relay rewrote successful responses, you can enable `--debug-raw`:

```bash
cargo run -- check --provider anthropic --stream --debug-raw --prompt "hello"
@@ -418,13 +433,13 @@ cargo run -- bench aime2026 --provider anthropic --stream --debug-raw --limit 3
cargo run -- rpm --provider anthropic --rpm 60 --duration 30s --stream --debug-raw --prompt "hello"
```

Once enabled, raw responses are written to:
Once enabled, raw responses of successful requests are written to:

```text
outputs/debug/
```

Non-streaming requests save the full JSON body; streaming requests save raw SSE lines, including `event:` and `data:`. Files do not contain API tokens but may contain model output, so `outputs/` is not committed to git.
Non-streaming successful requests save the full JSON body; streaming successful requests save raw SSE lines, including `event:` and `data:`. Files do not contain API tokens but may contain model output, so `outputs/` is not committed to git.
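
For reference, saved SSE lines look like this (Anthropic-style events, content illustrative):

```text
event: message_start
data: {"type":"message_start","message":{"id":"msg_01...","role":"assistant"}}

event: content_block_delta
data: {"type":"content_block_delta","delta":{"type":"text_delta","text":"pong"}}
```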

The benchmark report contains:



+83 -5 docs/testing-guide.md View File

@@ -6,6 +6,7 @@
# Set environment variables
export ANTHROPIC_RELAY_TOKEN="your-token-here"
export OPENAI_RELAY_TOKEN="your-token-here"
export GOOGLE_API_KEY="your-token-here"

# Build
cargo build --release
@@ -13,6 +14,24 @@ cargo build --release

Confirm the provider configuration in `config.yaml` is correct (base_url, protocol, default_model).

Recommended Google Gemini provider config:

```yaml
providers:
  google:
    protocol: google
    base_url: "https://generativelanguage.googleapis.com/v1beta"
    api_token: "${GOOGLE_API_KEY}"
    default_model: "gemini-3-pro-preview"
    stream: true
    thinking:
      enabled: true
      effort: high
      display: summarized
```

If you go through a relay, set `base_url` to the Gemini base url provided by the relay.

---

## Phase 1: RPM Rate-Limit Testing (Anthropic Protocol)
@@ -32,6 +51,15 @@ cargo run -- check --provider anthropic --stream --prompt "Reply with pong."
- Non-streaming: returns text + elapsed_ms
- Streaming: additionally outputs first_token_ms, with first_token_ms < elapsed_ms

Google Gemini connectivity check:

```bash
cargo run -- check --provider google --prompt "Reply with pong."
cargo run -- check --provider google --stream --prompt "Reply with pong."
```

The verification points are the same. Google streaming requests go through `streamGenerateContent`; non-streaming requests go through `generateContent`.
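
Concretely, the two request shapes are (paths from the Google adapter; base_url illustrative):

```text
POST {base_url}/models/{model}:generateContent
POST {base_url}/models/{model}:streamGenerateContent
```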

### 1.2 Sustained Mode: Continuous Sending

Send at the target RPM at a steady pace and observe the success rate.
@@ -145,6 +173,9 @@ cargo run -- bench aime2026 --provider anthropic

# Specify a stronger model
cargo run -- bench aime2026 --provider anthropic --model anthropic/claude-sonnet-4-20250514

# Google Gemini quick check
cargo run -- bench aime2026 --provider google --model gemini-3-pro-preview --limit 10
```

### 2.3 GPQA Diamond (graduate-level multiple choice)
@@ -158,6 +189,9 @@ cargo run -- bench gpqa-diamond --provider anthropic

# Specify a model
cargo run -- bench gpqa-diamond --provider anthropic --model anthropic/claude-sonnet-4-20250514

# Google Gemini quick check
cargo run -- bench gpqa-diamond --provider google --model gemini-3-pro-preview --limit 10
```

### 2.4 Accuracy Reference Baselines
@@ -217,13 +251,57 @@ cargo run -- bench aime2026 \
--thinking true \
--reasoning-effort high \
--reasoning-summary auto

# Google Gemini 3 thinkingLevel
cargo run -- bench aime2026 \
--provider google \
--model gemini-3-pro-preview \
--limit 5 \
--thinking true \
--thinking-effort high \
--thinking-display summarized

# Google Gemini 2.5 thinkingBudget
cargo run -- bench aime2026 \
--provider google \
--model gemini-2.5-pro \
--limit 5 \
--thinking true \
--thinking-budget-tokens 5000 \
--thinking-display summarized
```

With thinking enabled for Anthropic, the request body does not send `temperature`. The benchmark and RPM JSON reports record the run's thinking parameters, making experiments easy to reproduce.

## Raw Response Debug Mode
With thinking enabled for Google Gemini, the request body writes `generationConfig.thinkingConfig` according to the following rules:

- `budget_tokens` maps to `thinkingBudget`, recommended for Gemini 2.5.
- `effort` maps to `thinkingLevel`, recommended for Gemini 3.
- `display: summarized` sends `includeThoughts: true`.
- `display: omitted` sends `includeThoughts: false`.

If both `budget_tokens` and `effort` are configured, both fields are sent. To reduce backend or relay compatibility issues, configure only `effort` for Gemini 3 and only `budget_tokens` for Gemini 2.5.

If only `enabled: true` is configured without `budget_tokens`, `effort`, or `display`, the current implementation does not send `thinkingConfig`. To explicitly request or disable thought summaries, add at least `display: summarized` or `display: omitted`.
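
As a sketch, the Gemini 2.5 command above (budget 5000, summarized display) produces a request body containing:

```json
{
  "generationConfig": {
    "thinkingConfig": {
      "thinkingBudget": 5000,
      "includeThoughts": true
    }
  }
}
```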

## Error Request/Response Debug

When the upstream returns a non-success HTTP status such as 400, 429, or 504, `check`, `bench`, and `rpm` automatically write a request/response debug JSON to `outputs/debug/`; no extra flag is needed.

The debug JSON contains:

- Full request URL
- Redacted request headers
- Full request body, including the prompt
- Response status
- Full response headers
- Full response body

API tokens are redacted, but prompts, question content, and model output are saved as-is; clean up `outputs/debug/` as needed after troubleshooting.

## Raw Success Response Debug Mode

To check whether a relay rewrote responses, add `--debug-raw`:
To check whether a relay rewrote successful responses, add `--debug-raw`:

```bash
cargo run -- check --provider anthropic --stream --debug-raw --prompt "Reply with pong."
@@ -231,10 +309,10 @@ cargo run -- bench aime2026 --provider anthropic --stream --debug-raw --limit 3
cargo run -- rpm --provider anthropic --rpm 60 --duration 30s --stream --debug-raw --prompt "Hi"
```

Once enabled, the program writes upstream raw responses to `outputs/debug/`:
Once enabled, the program writes upstream raw responses of successful requests to `outputs/debug/`:

- Non-streaming requests save the full JSON body
- Streaming requests save raw SSE lines, including `event:` and `data:`
- Non-streaming successful requests save the full JSON body
- Streaming successful requests save raw SSE lines, including `event:` and `data:`
- Files do not contain API tokens
- Files may contain model output; `outputs/` is not committed to git



+10 -2 src/cli.rs View File

@@ -1096,12 +1096,13 @@ fn provider_name(config: &AppConfig, provider: Option<&str>) -> Result<String> {
}

fn raw_debug_config(enabled: bool, provider: &str, model: &str) -> Option<RawDebugConfig> {
enabled.then(|| {
Some(
RawDebugConfig::new(
PathBuf::from("outputs/debug"),
format!("{provider}-{model}"),
)
})
.with_success_raw(enabled),
)
}

#[derive(Debug, Default)]
@@ -1594,6 +1595,13 @@ mod tests {
assert!(debug_raw);
}

#[test]
fn raw_debug_config_is_created_even_when_success_raw_is_disabled() {
let raw_debug = raw_debug_config(false, "google", "gemini-test");

assert!(raw_debug.is_some());
}

#[test]
fn check_command_parses_thinking_overrides() {
let cli = Cli::try_parse_from([


+140 -6 src/protocols/anthropic.rs View File

@@ -1,4 +1,7 @@
use crate::runner::{ModelRequest, ModelResponse};
use crate::runner::{
HttpDebugRequest, HttpDebugResponse, ModelRequest, ModelResponse, request_headers_for_debug,
response_headers_for_debug,
};
use anyhow::{Context, Result, bail};
use futures::StreamExt;
use reqwest::Client;
@@ -8,22 +11,28 @@ use std::time::Instant;

pub async fn send(client: &Client, request: &ModelRequest) -> Result<ModelResponse> {
let url = super::endpoint_url(&request.base_url, "/v1/messages")?;
let debug_url = url.to_string();
let request_body = request_body(request, false);
let response = client
.post(url)
.header("x-api-key", &request.api_token)
.header("anthropic-version", "2023-06-01")
.json(&request_body(request, false))
.json(&request_body)
.send()
.await
.context("failed to send Anthropic messages request")?;

let status = response.status();
let status_code = status.as_u16();
let response_headers = response.headers().clone();
let body = response
.text()
.await
.context("failed to read Anthropic response body")?;
if let Some(raw_debug) = &request.raw_debug {
if status.is_success()
&& let Some(raw_debug) = &request.raw_debug
&& raw_debug.write_success_raw()
{
raw_debug
.write_response("anthropic-json", &body)
.await
@@ -31,6 +40,16 @@ pub async fn send(client: &Client, request: &ModelRequest) -> Result<ModelRespon
}

if !status.is_success() {
write_error_debug(
request,
&debug_url,
request_body,
status_code,
&response_headers,
&body,
"anthropic-error-http",
)
.await?;
bail!(
"{}",
super::upstream_error_message("Anthropic", status_code, &body)
@@ -58,18 +77,21 @@ pub async fn send(client: &Client, request: &ModelRequest) -> Result<ModelRespon

pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<ModelResponse> {
let url = super::endpoint_url(&request.base_url, "/v1/messages")?;
let debug_url = url.to_string();
let started = Instant::now();
let request_body = request_body(request, true);
let response = client
.post(url)
.header("x-api-key", &request.api_token)
.header("anthropic-version", "2023-06-01")
.json(&request_body(request, true))
.json(&request_body)
.send()
.await
.context("failed to send Anthropic streaming request")?;

let status = response.status();
let status_code = status.as_u16();
let response_headers = response.headers().clone();

if !status.is_success() {
let body = response
@@ -78,7 +100,15 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
.context("failed to read Anthropic error response body")?;
if let Some(raw_debug) = &request.raw_debug {
raw_debug
.write_response("anthropic-error", &body)
.write_http_error(
"anthropic-error-http",
debug_request(request, &debug_url, request_body),
HttpDebugResponse {
status: status_code,
headers: response_headers_for_debug(&response_headers),
body: body.clone(),
},
)
.await
.context("failed to write Anthropic raw debug error response")?;
}
@@ -131,7 +161,9 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
}
}

if let Some(raw_debug) = &request.raw_debug {
if let Some(raw_debug) = &request.raw_debug
&& raw_debug.write_success_raw()
{
raw_debug
.write_response("anthropic-sse", &raw_stream)
.await
@@ -209,6 +241,44 @@ fn request_body(request: &ModelRequest, stream: bool) -> Value {
body
}

async fn write_error_debug(
request: &ModelRequest,
url: &str,
request_body: Value,
status: u16,
response_headers: &reqwest::header::HeaderMap,
response_body: &str,
response_kind: &str,
) -> Result<()> {
if let Some(raw_debug) = &request.raw_debug {
raw_debug
.write_http_error(
response_kind,
debug_request(request, url, request_body),
HttpDebugResponse {
status,
headers: response_headers_for_debug(response_headers),
body: response_body.to_string(),
},
)
.await
.context("failed to write Anthropic raw debug error response")?;
}
Ok(())
}

fn debug_request(request: &ModelRequest, url: &str, body: Value) -> HttpDebugRequest {
HttpDebugRequest {
method: "POST".to_string(),
url: url.to_string(),
headers: request_headers_for_debug(&[
("x-api-key", request.api_token.clone()),
("anthropic-version", "2023-06-01".to_string()),
]),
body,
}
}

#[cfg(test)]
mod tests {
use crate::runner::{ModelRequest, RawDebugConfig, ThinkingConfig};
@@ -409,6 +479,70 @@ mod tests {
assert!(!message.contains("sk-leaked-token"));
}

#[tokio::test]
async fn non_success_error_debug_records_request_and_response_without_token() {
let server = MockServer::start().await;
let temp_dir = tempfile::tempdir().expect("create temp dir");
Mock::given(method("POST"))
.and(path("/v1/messages"))
.respond_with(
ResponseTemplate::new(504)
.insert_header("x-request-id", "anthropic-req")
.set_body_json(serde_json::json!({
"error": {
"message": "gateway timeout"
}
})),
)
.mount(&server)
.await;

let request = ModelRequest {
base_url: server.uri(),
api_token: "anthropic-secret-token".to_string(),
model: "claude-test".to_string(),
prompt: "full anthropic prompt".to_string(),
temperature: 0.0,
max_tokens: 1024,
stream: false,
raw_debug: Some(RawDebugConfig::new(
temp_dir.path().to_path_buf(),
"anthropic-claude-test".to_string(),
)),
thinking: None,
};

let _ = super::send(&Client::new(), &request)
.await
.expect_err("non-success should fail");

let debug_files = std::fs::read_dir(temp_dir.path())
.expect("read debug dir")
.collect::<Result<Vec<_>, _>>()
.expect("debug entries");
assert_eq!(debug_files.len(), 1);
let raw = std::fs::read_to_string(debug_files[0].path()).expect("read raw debug file");
let debug: serde_json::Value = serde_json::from_str(&raw).expect("debug json");

assert_eq!(debug["request"]["headers"]["x-api-key"], "[REDACTED]");
assert_eq!(
debug["request"]["body"]["messages"][0]["content"],
"full anthropic prompt"
);
assert_eq!(debug["response"]["status"], 504);
assert_eq!(
debug["response"]["headers"]["x-request-id"],
"anthropic-req"
);
assert!(
debug["response"]["body"]
.as_str()
.expect("response body")
.contains("gateway timeout")
);
assert!(!raw.contains("anthropic-secret-token"));
}

#[tokio::test]
async fn base_url_with_v1_prefix_does_not_duplicate_messages_path() {
let server = MockServer::start().await;


+135 -7 src/protocols/google.rs View File

@@ -1,4 +1,7 @@
use crate::runner::{ModelRequest, ModelResponse};
use crate::runner::{
HttpDebugRequest, HttpDebugResponse, ModelRequest, ModelResponse, request_headers_for_debug,
response_headers_for_debug,
};
use anyhow::{Context, Result, bail};
use futures::StreamExt;
use reqwest::Client;
@@ -8,21 +11,27 @@ use std::time::Instant;

pub async fn send(client: &Client, request: &ModelRequest) -> Result<ModelResponse> {
let url = google_endpoint(&request.base_url, &request.model, "generateContent")?;
let debug_url = url.to_string();
let request_body = request_body(request);
let response = client
.post(url)
.header("x-goog-api-key", &request.api_token)
.json(&request_body(request))
.json(&request_body)
.send()
.await
.context("failed to send Google generateContent request")?;

let status = response.status();
let status_code = status.as_u16();
let response_headers = response.headers().clone();
let body = response
.text()
.await
.context("failed to read Google response body")?;
if let Some(raw_debug) = &request.raw_debug {
if status.is_success()
&& let Some(raw_debug) = &request.raw_debug
&& raw_debug.write_success_raw()
{
raw_debug
.write_response("google-json", &body)
.await
@@ -30,6 +39,16 @@ pub async fn send(client: &Client, request: &ModelRequest) -> Result<ModelRespon
}

if !status.is_success() {
write_error_debug(
request,
&debug_url,
request_body,
status_code,
&response_headers,
&body,
"google-error-http",
)
.await?;
bail!(
"{}",
super::upstream_error_message("Google", status_code, &body)
@@ -50,17 +69,20 @@ pub async fn send(client: &Client, request: &ModelRequest) -> Result<ModelRespon

pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<ModelResponse> {
let url = google_endpoint(&request.base_url, &request.model, "streamGenerateContent")?;
let debug_url = url.to_string();
let started = Instant::now();
let request_body = request_body(request);
let response = client
.post(url)
.header("x-goog-api-key", &request.api_token)
.json(&request_body(request))
.json(&request_body)
.send()
.await
.context("failed to send Google streamGenerateContent request")?;

let status = response.status();
let status_code = status.as_u16();
let response_headers = response.headers().clone();

if !status.is_success() {
let body = response
@@ -69,7 +91,15 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
.context("failed to read Google error response body")?;
if let Some(raw_debug) = &request.raw_debug {
raw_debug
.write_response("google-error", &body)
.write_http_error(
"google-error-http",
debug_request(request, &debug_url, request_body),
HttpDebugResponse {
status: status_code,
headers: response_headers_for_debug(&response_headers),
body: body.clone(),
},
)
.await
.context("failed to write Google raw debug error response")?;
}
@@ -107,7 +137,9 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
}
}

if let Some(raw_debug) = &request.raw_debug {
if let Some(raw_debug) = &request.raw_debug
&& raw_debug.write_success_raw()
{
raw_debug
.write_response("google-sse", &raw_stream)
.await
@@ -183,6 +215,41 @@ fn response_text(response: GoogleResponse) -> Option<String> {
(!text.is_empty()).then_some(text)
}

async fn write_error_debug(
request: &ModelRequest,
url: &str,
request_body: Value,
status: u16,
response_headers: &reqwest::header::HeaderMap,
response_body: &str,
response_kind: &str,
) -> Result<()> {
if let Some(raw_debug) = &request.raw_debug {
raw_debug
.write_http_error(
response_kind,
debug_request(request, url, request_body),
HttpDebugResponse {
status,
headers: response_headers_for_debug(response_headers),
body: response_body.to_string(),
},
)
.await
.context("failed to write Google raw debug error response")?;
}
Ok(())
}

fn debug_request(request: &ModelRequest, url: &str, body: Value) -> HttpDebugRequest {
HttpDebugRequest {
method: "POST".to_string(),
url: url.to_string(),
headers: request_headers_for_debug(&[("x-goog-api-key", request.api_token.clone())]),
body,
}
}

#[derive(Debug, Deserialize)]
struct GoogleResponse {
candidates: Vec<GoogleCandidate>,
@@ -207,7 +274,7 @@ struct GooglePart {

#[cfg(test)]
mod tests {
use crate::runner::{ModelRequest, ThinkingConfig};
use crate::runner::{ModelRequest, RawDebugConfig, ThinkingConfig};
use reqwest::Client;
use wiremock::matchers::{body_json, header, method, path};
use wiremock::{Mock, MockServer, ResponseTemplate};
@@ -317,6 +384,67 @@ mod tests {
assert!(response.first_token_ms.is_some());
}

#[tokio::test]
async fn non_success_error_debug_records_request_and_response_without_token() {
let server = MockServer::start().await;
let temp_dir = tempfile::tempdir().expect("create temp dir");
Mock::given(method("POST"))
.and(path("/models/gemini-test:generateContent"))
.respond_with(
ResponseTemplate::new(400)
.insert_header("x-request-id", "google-req")
.set_body_json(serde_json::json!({
"error": {
"message": "bad google request"
}
})),
)
.mount(&server)
.await;

let request = ModelRequest {
base_url: server.uri(),
api_token: "google-secret-token".to_string(),
model: "gemini-test".to_string(),
prompt: "full google prompt".to_string(),
temperature: 0.0,
max_tokens: 1024,
stream: false,
raw_debug: Some(RawDebugConfig::new(
temp_dir.path().to_path_buf(),
"google-gemini-test".to_string(),
)),
thinking: None,
};

let _ = super::send(&Client::new(), &request)
.await
.expect_err("non-success should fail");

let debug_files = std::fs::read_dir(temp_dir.path())
.expect("read debug dir")
.collect::<Result<Vec<_>, _>>()
.expect("debug entries");
assert_eq!(debug_files.len(), 1);
let raw = std::fs::read_to_string(debug_files[0].path()).expect("read raw debug file");
let debug: serde_json::Value = serde_json::from_str(&raw).expect("debug json");

assert_eq!(debug["request"]["headers"]["x-goog-api-key"], "[REDACTED]");
assert_eq!(
debug["request"]["body"]["contents"][0]["parts"][0]["text"],
"full google prompt"
);
assert_eq!(debug["response"]["status"], 400);
assert_eq!(debug["response"]["headers"]["x-request-id"], "google-req");
assert!(
debug["response"]["body"]
.as_str()
.expect("response body")
.contains("bad google request")
);
assert!(!raw.contains("google-secret-token"));
}

fn google_request(base_url: String) -> ModelRequest {
ModelRequest {
base_url,


+147 -6 src/protocols/openai.rs View File

@@ -1,4 +1,7 @@
use crate::runner::{ModelRequest, ModelResponse};
use crate::runner::{
HttpDebugRequest, HttpDebugResponse, ModelRequest, ModelResponse, request_headers_for_debug,
response_headers_for_debug,
};
use anyhow::{Context, Result, bail};
use futures::StreamExt;
use reqwest::Client;
@@ -8,21 +11,27 @@ use std::time::Instant;

pub async fn send(client: &Client, request: &ModelRequest) -> Result<ModelResponse> {
let url = super::endpoint_url(&request.base_url, "/chat/completions")?;
let debug_url = url.to_string();
let request_body = request_body(request, false);
let response = client
.post(url)
.bearer_auth(&request.api_token)
.json(&request_body(request, false))
.json(&request_body)
.send()
.await
.context("failed to send OpenAI chat completion request")?;

let status = response.status();
let status_code = status.as_u16();
let response_headers = response.headers().clone();
let body = response
.text()
.await
.context("failed to read OpenAI response body")?;
if let Some(raw_debug) = &request.raw_debug {
if status.is_success()
&& let Some(raw_debug) = &request.raw_debug
&& raw_debug.write_success_raw()
{
raw_debug
.write_response("openai-json", &body)
.await
@@ -30,6 +39,16 @@ pub async fn send(client: &Client, request: &ModelRequest) -> Result<ModelRespon
}

if !status.is_success() {
write_error_debug(
request,
&debug_url,
request_body,
status_code,
&response_headers,
&body,
"openai-error-http",
)
.await?;
bail!(
"{}",
super::upstream_error_message("OpenAI", status_code, &body)
@@ -56,17 +75,20 @@ pub async fn send(client: &Client, request: &ModelRequest) -> Result<ModelRespon

pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<ModelResponse> {
let url = super::endpoint_url(&request.base_url, "/chat/completions")?;
let debug_url = url.to_string();
let started = Instant::now();
let request_body = request_body(request, true);
let response = client
.post(url)
.bearer_auth(&request.api_token)
.json(&request_body(request, true))
.json(&request_body)
.send()
.await
.context("failed to send OpenAI streaming request")?;

let status = response.status();
let status_code = status.as_u16();
let response_headers = response.headers().clone();

if !status.is_success() {
let body = response
@@ -75,7 +97,15 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
.context("failed to read OpenAI error response body")?;
if let Some(raw_debug) = &request.raw_debug {
raw_debug
.write_response("openai-error", &body)
.write_http_error(
"openai-error-http",
debug_request(request, &debug_url, request_body),
HttpDebugResponse {
status: status_code,
headers: response_headers_for_debug(&response_headers),
body: body.clone(),
},
)
.await
.context("failed to write OpenAI raw debug error response")?;
}
@@ -121,7 +151,9 @@ pub async fn send_stream(client: &Client, request: &ModelRequest) -> Result<Mode
}
}

if let Some(raw_debug) = &request.raw_debug {
if let Some(raw_debug) = &request.raw_debug
&& raw_debug.write_success_raw()
{
raw_debug
.write_response("openai-sse", &raw_stream)
.await
@@ -200,6 +232,44 @@ fn request_body(request: &ModelRequest, stream: bool) -> Value {
body
}

async fn write_error_debug(
request: &ModelRequest,
url: &str,
request_body: Value,
status: u16,
response_headers: &reqwest::header::HeaderMap,
response_body: &str,
response_kind: &str,
) -> Result<()> {
if let Some(raw_debug) = &request.raw_debug {
raw_debug
.write_http_error(
response_kind,
debug_request(request, url, request_body),
HttpDebugResponse {
status,
headers: response_headers_for_debug(response_headers),
body: response_body.to_string(),
},
)
.await
.context("failed to write OpenAI raw debug error response")?;
}
Ok(())
}

fn debug_request(request: &ModelRequest, url: &str, body: Value) -> HttpDebugRequest {
HttpDebugRequest {
method: "POST".to_string(),
url: url.to_string(),
headers: request_headers_for_debug(&[(
"authorization",
format!("Bearer {}", request.api_token),
)]),
body,
}
}

#[cfg(test)]
mod tests {
use crate::runner::{ModelRequest, RawDebugConfig, ThinkingConfig};
@@ -359,6 +429,77 @@ mod tests {
assert!(!message.contains("sk-leaked-token"));
}

#[tokio::test]
async fn non_success_error_debug_records_request_and_response_without_token() {
let server = MockServer::start().await;
let temp_dir = tempfile::tempdir().expect("create temp dir");
Mock::given(method("POST"))
.and(path("/chat/completions"))
.respond_with(
ResponseTemplate::new(429)
.insert_header("x-request-id", "req-test")
.set_body_json(serde_json::json!({
"error": {
"message": "rate limited with full response"
}
})),
)
.mount(&server)
.await;

let request = ModelRequest {
base_url: server.uri(),
api_token: "sk-real-token".to_string(),
model: "gpt-test".to_string(),
prompt: "full prompt should be recorded".to_string(),
temperature: 0.0,
max_tokens: 1024,
stream: false,
raw_debug: Some(RawDebugConfig::new(
temp_dir.path().to_path_buf(),
"openai-gpt-test".to_string(),
)),
thinking: None,
};

let _ = super::send(&Client::new(), &request)
.await
.expect_err("non-success should fail");

let debug_files = std::fs::read_dir(temp_dir.path())
.expect("read debug dir")
.collect::<Result<Vec<_>, _>>()
.expect("debug entries");
assert_eq!(debug_files.len(), 1);
let raw = std::fs::read_to_string(debug_files[0].path()).expect("read raw debug file");
let debug: serde_json::Value = serde_json::from_str(&raw).expect("debug json");

assert_eq!(debug["request"]["method"], "POST");
assert!(
debug["request"]["url"]
.as_str()
.expect("request url")
.ends_with("/chat/completions")
);
assert_eq!(
debug["request"]["headers"]["authorization"],
"Bearer [REDACTED]"
);
assert_eq!(
debug["request"]["body"]["messages"][0]["content"],
"full prompt should be recorded"
);
assert_eq!(debug["response"]["status"], 429);
assert_eq!(debug["response"]["headers"]["x-request-id"], "req-test");
assert!(
debug["response"]["body"]
.as_str()
.expect("response body")
.contains("rate limited with full response")
);
assert!(!raw.contains("sk-real-token"));
}

#[tokio::test]
async fn base_url_with_v1_prefix_keeps_chat_completion_path() {
let server = MockServer::start().await;


+96 -0 src/runner.rs View File

@@ -3,6 +3,10 @@ use crate::protocols;
use anyhow::{Context, Result};
use chrono::Utc;
use reqwest::Client;
use reqwest::header::HeaderMap;
use serde::Serialize;
use serde_json::Value;
use std::collections::BTreeMap;
use std::fmt;
use std::path::PathBuf;
use std::sync::Arc;
@@ -38,6 +42,7 @@ pub struct RawDebugConfig {
output_dir: PathBuf,
prefix: String,
counter: Arc<AtomicU64>,
write_success_raw: bool,
}

impl RawDebugConfig {
@@ -46,10 +51,36 @@ impl RawDebugConfig {
output_dir,
prefix: sanitize_filename_component(&prefix),
counter: Arc::new(AtomicU64::new(0)),
write_success_raw: true,
}
}

pub fn with_success_raw(mut self, write_success_raw: bool) -> Self {
self.write_success_raw = write_success_raw;
self
}

pub fn write_success_raw(&self) -> bool {
self.write_success_raw
}

pub async fn write_response(&self, response_kind: &str, contents: &str) -> Result<PathBuf> {
self.write_debug_file(response_kind, contents).await
}

pub async fn write_http_error(
&self,
response_kind: &str,
request: HttpDebugRequest,
response: HttpDebugResponse,
) -> Result<PathBuf> {
let envelope = HttpDebugEnvelope { request, response };
let contents = serde_json::to_string_pretty(&envelope)
.context("failed to serialize raw debug HTTP error")?;
self.write_debug_file(response_kind, &contents).await
}

async fn write_debug_file(&self, response_kind: &str, contents: &str) -> Result<PathBuf> {
tokio::fs::create_dir_all(&self.output_dir)
.await
.with_context(|| {
@@ -74,6 +105,71 @@ impl RawDebugConfig {
}
}

#[derive(Debug, Serialize)]
pub struct HttpDebugEnvelope {
pub request: HttpDebugRequest,
pub response: HttpDebugResponse,
}

#[derive(Debug, Serialize)]
pub struct HttpDebugRequest {
pub method: String,
pub url: String,
pub headers: BTreeMap<String, String>,
pub body: Value,
}

#[derive(Debug, Serialize)]
pub struct HttpDebugResponse {
pub status: u16,
pub headers: BTreeMap<String, String>,
pub body: String,
}

pub fn request_headers_for_debug(headers: &[(&str, String)]) -> BTreeMap<String, String> {
headers
.iter()
.map(|(name, value)| {
let name = name.to_ascii_lowercase();
let value = if is_sensitive_header(&name) {
redact_header_value(&name, value)
} else {
value.clone()
};
(name, value)
})
.collect()
}

pub fn response_headers_for_debug(headers: &HeaderMap) -> BTreeMap<String, String> {
headers
.iter()
.map(|(name, value)| {
(
name.as_str().to_ascii_lowercase(),
value.to_str().unwrap_or("<non-utf8>").to_string(),
)
})
.collect()
}

fn is_sensitive_header(name: &str) -> bool {
matches!(
name,
"authorization" | "x-api-key" | "x-goog-api-key" | "api-key"
)
}

fn redact_header_value(name: &str, value: &str) -> String {
if name == "authorization"
&& let Some((scheme, _)) = value.split_once(' ')
{
format!("{scheme} [REDACTED]")
} else {
"[REDACTED]".to_string()
}
}

impl fmt::Debug for ModelRequest {
fn fmt(&self, formatter: &mut fmt::Formatter<'_>) -> fmt::Result {
formatter

