把不同格式的工具调用请求,统一成一种内部格式,执行本地函数,再把结果包装回去。再往上一层,把不同 provider 的 request / response 整包转换、路由和回填串起来。这就是这份 demo 的完整使命。本文按"先跑通最小骨架,再一层层加能力"的方式,从零拆解这个模块的设计与实现——包括为什么要 normalize、怎么分层、为什么还要有 adapter/route/gateway loop。

它其实有两条链路:

call_envelope → normalize → dispatch → execute → wrap_result → append_context

provider request → transform_request → upstream response → transform_response → extract_tool_calls → append_tool_results


一、这个模块到底解决什么问题#

先别写代码,先写目标。你这版代码其实同时解决了两类问题——本地 dispatch 和 provider gateway,拆开来是下面七件事:

  1. 接收不同厂商风格的工具调用信封(envelope)
  2. 统一成内部标准格式
  3. 根据名字找到本地处理函数
  4. 执行函数并把结果重新包装成对应格式
  5. 在 provider gateway 层按 route 选择 adapter
  6. 在上下游格式之间双向转换 request / response
  7. 把调用和结果追加到 provider-native 历史里,驱动下一轮 tool loop

(交互演示:工具调用分发的主链路)

上面这个交互图展示的是第一层,也就是本地 host runtime。对应的第二层是 provider gateway runtime。两层职责不一样:

| 层 | 关注点 | 核心函数 |
| --- | --- | --- |
| Local Host Runtime | 一个 tool call 进来以后,怎么 normalize、执行、回包、记账 | normalize_tool_call()、dispatch_tool_call()、build_tool_result_envelope()、append_tool_interaction_to_context() |
| Provider Gateway Runtime | 一整包 provider request/response 怎么按 route 转换、抽取 tool call、把 tool result 回填回去 | get_adapter()、transform_gateway_exchange()、extract_tool_calls_from_provider_response()、append_tool_results_to_provider_messages()、run_gateway_tool_loop() |

第一层的主链路如下,后面的本地执行函数基本都在服务它:

| 阶段 | 函数 | 回答的问题 |
| --- | --- | --- |
| Normalize | normalize_tool_call() | 模型到底调用了哪个工具,参数是什么? |
| Dispatch | dispatch_tool_call() | 把整条链路串起来执行 |
| Wrap | build_tool_result_envelope() | 执行结果要按哪种格式回给模型? |
| Append | append_tool_interaction_to_context() | 这次调用和结果怎么写进会话历史? |

二、数据结构先行:异常类、路由枚举与标准化对象#

第一步不要急着写大函数,先把最基础的类型定义好。

2.1 先写异常类#

后面很多地方都可能失败——envelope 格式不对、参数不是 dict、注册表里没 handler、handler 参数不匹配。所以先定义统一异常:

class EnvelopeDispatchError(Exception):
    """Raised when a tool call envelope cannot be normalized or dispatched."""
python

这样后面所有分发错误都能抛这个,逻辑会很干净。

2.2 先把 provider / format / route 说清楚#

如果你只做本地 dispatch,NormalizedToolCall 就够了。但你这版代码已经扩展成了双层 runtime,所以还需要显式建模“谁对谁说话”:

class Provider(str, Enum):
    OPENAI = "openai"
    CLAUDE = "claude"
    GEMINI = "gemini"
    CODEX = "codex"


class ApiFormat(str, Enum):
    OPENAI_CHAT_COMPLETIONS = "openai_chat_completions"
    OPENAI_RESPONSES = "openai_responses"
    ANTHROPIC_MESSAGES = "anthropic_messages"
    GEMINI_CONTENTS = "gemini_contents"
    CODEX_TERMINAL = "codex_terminal"


@dataclass(frozen=True)
class AdapterRoute:
    downstream_provider: Provider
    downstream_format: ApiFormat
    upstream_provider: Provider
    upstream_format: ApiFormat
python

这里最重要的不是枚举本身,而是 AdapterRoute 这个四元组。它把“下游是谁、下游说什么格式、上游是谁、上游说什么格式”定义成一个显式路由键。后面 adapter registry 就围着它转。

2.3 再写标准化后的数据结构#

你最终想把所有不同格式的 envelope 变成统一格式,所以先定义那个”统一格式”:

@dataclass
class NormalizedToolCall:
    source_format: str
    source_variant: str
    call_id: str
    dispatch_key: str
    arguments: Dict[str, Any]
    raw_envelope: Dict[str, Any]
python

这里每个字段都很关键:

  • source_format:大类来源,如 openai / claude / codex
  • source_variant:同一家 provider 内的具体变体,如 responses_api / responses_wrapper / chat_completions
  • call_id:后面回包时关联用
  • dispatch_key:真正拿来找 handler 的键
  • arguments:已经解析好的参数(一律是 dict)
  • raw_envelope:保留原始输入,便于调试

source_variant 看起来像细节,其实非常关键。因为同样是 OpenAI,Responses API 和 Chat Completions 的工具结果回包形状并不一样。只记 source_format="openai" 不够,必须再记一次具体分支。

这一步很重要,因为它决定了后面所有本地 dispatch 代码都围绕这个统一对象展开。


三、处理器注册表#

dispatch 最终要调用真实 Python 函数,所以需要一个映射表:

Handler = Callable[..., Dict[str, Any]]

TOOL_REGISTRY: Dict[str, Handler] = {
    "functions.exec_command": exec_command,
    "functions.write_stdin": write_stdin,
    "functions.apply_patch": apply_patch,
    "functions.view_image": view_image,
    "snapshot_before_edit": snapshot_before_edit,
    "restore_snapshot": restore_snapshot,
    "edit_file_with_validation": edit_file_with_validation,
}
python

理解方式:

  • Schema 里定义”工具名字”
  • Registry 决定”这个名字对应哪个 Python 函数”

也就是说,真正把协议名绑定到本地执行逻辑的地方,就是这个表

写这个表时注意两点:key 要和模型发来的工具名一致;value 必须是可调用对象,且统一返回 dict
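
下面是一个符合这份契约的假想 handler 示意(count_lines 这个工具名和实现都是为演示编的,不在本文的 registry 里):

from typing import Any, Dict

def count_lines(path: str) -> Dict[str, Any]:
    # 假想的 handler:参数名要和模型发来的 arguments 对得上,返回值必须是 dict
    with open(path, "r", encoding="utf-8") as fh:
        return {"path": path, "line_count": sum(1 for _ in fh)}

# key 用模型侧会发来的工具名,value 是本地可调用对象
TOOL_REGISTRY["functions.count_lines"] = count_lines
python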


四、辅助函数#

先写两个小函数,独立、简单、好测。

4.1 生成 call_id#

有些 envelope 没给 id,需要本地补一个:

def _generated_call_id(prefix: str = "call") -> str:
    return f"{prefix}_{uuid.uuid4().hex[:8]}"
python

4.2 解析 arguments#

不同厂商里,arguments 可能已经是 dict,也可能是 JSON 字符串:

def _parse_arguments(value: Any) -> Dict[str, Any]:
    if isinstance(value, dict):
        return value
    if isinstance(value, str):
        parsed = json.loads(value)
        if not isinstance(parsed, dict):
            raise EnvelopeDispatchError(
                "function.arguments JSON must decode to an object"
            )
        return parsed
    raise EnvelopeDispatchError(
        f"tool arguments must be object or JSON string, got {type(value).__name__}"
    )
python

把解析逻辑单独抽出来的好处:normalize_tool_call() 不会变得太臃肿,参数格式校验集中在一个地方,后续测试更容易写。
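
它的行为用几个输入就能说清(示意):

_parse_arguments({"path": "a.py"})            # dict 原样返回 -> {"path": "a.py"}
_parse_arguments('{"path": "a.py"}')          # JSON 字符串被解析 -> {"path": "a.py"}
_parse_arguments('["not", "an", "object"]')   # 解析结果不是对象 -> EnvelopeDispatchError
_parse_arguments(42)                          # 既不是 dict 也不是 str -> EnvelopeDispatchError
python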


五、核心一:normalize_tool_call#

这是整个模块最重要的第一核心函数。它的任务不是执行工具,而是:把不同来源的 envelope 变成同一种内部结构。

(交互演示:三种信封格式的统一化——原始信封 vs Normalize 结果)

按”识别来源格式”的顺序来写。每一段遵循相同模式:先判断是不是这种格式 → 再校验关键字段 → 再提取统一字段 → 最后返回 NormalizedToolCall

5.1 Codex 风格#

Codex 风格看 recipient_name 和 parameters:

if "recipient_name" in raw:
    dispatch_key = raw["recipient_name"]
    parameters = raw.get("parameters")
    if not isinstance(parameters, dict):
        raise EnvelopeDispatchError(
            "codex envelope requires an object 'parameters' field"
        )
    call_id = str(
        raw.get("call_id") or raw.get("id") or _generated_call_id("codex")
    )
    return NormalizedToolCall(
        source_format="codex",
        source_variant="codex_terminal",
        call_id=call_id,
        dispatch_key=str(dispatch_key),
        arguments=dict(parameters),
        raw_envelope=raw,
    )
python

5.2 OpenAI 家族其实有三种分支#

你这版代码里,OpenAI 不是一个分支,而是三个分支。它们语义相同,形状不同,所以 source_variant 必须被记录下来。

第一种是 Responses API item,顶层直接带 name / arguments

if raw.get("type") == "function_call" and "name" in raw and "arguments" in raw:
    return NormalizedToolCall(
        source_format="openai",
        source_variant="responses_api",
        call_id=str(raw.get("call_id") or raw.get("id") or _generated_call_id("openai")),
        dispatch_key=str(raw["name"]),
        arguments=_parse_arguments(raw["arguments"]),
        raw_envelope=raw,
    )
python

第二种是这篇文章 demo 里常用的 wrapper 形状,也就是 function.name / function.arguments

if raw.get("type") == "function_call":
    fn = raw.get("function")
    if not isinstance(fn, dict):
        raise EnvelopeDispatchError(
            "openai-style function_call envelope requires a 'function' object"
        )
    if "name" not in fn or "arguments" not in fn:
        raise EnvelopeDispatchError(
            "openai-style function_call requires function.name and function.arguments"
        )
    return NormalizedToolCall(
        source_format="openai",
        source_variant="responses_wrapper",
        call_id=str(raw.get("call_id") or raw.get("id") or _generated_call_id("openai")),
        dispatch_key=str(fn["name"]),
        arguments=_parse_arguments(fn["arguments"]),
        raw_envelope=raw,
    )
python

第三种是 Chat Completions tool call item,它的 type 是 "function",而不是 "function_call":

if raw.get("type") == "function":
    fn = raw.get("function")
    if not isinstance(fn, dict):
        raise EnvelopeDispatchError(
            "openai chat tool call requires a 'function' object"
        )
    if "name" not in fn or "arguments" not in fn:
        raise EnvelopeDispatchError(
            "openai chat tool call requires function.name and function.arguments"
        )
    return NormalizedToolCall(
        source_format="openai",
        source_variant="chat_completions",
        call_id=str(raw.get("id") or raw.get("call_id") or _generated_call_id("openai")),
        dispatch_key=str(fn["name"]),
        arguments=_parse_arguments(fn["arguments"]),
        raw_envelope=raw,
    )
python

这三段的共同点是:最终都要产出同一个 dispatch_key + arguments。差异被压缩进 source_variant,后面回包时再用。
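
可以用三条假想的 envelope 验证这一点(参数内容是随手编的)——形状不同,normalize 之后的 dispatch_key 和 arguments 完全一致:

envelopes = [
    # Responses API item
    {"type": "function_call", "name": "functions.exec_command",
     "arguments": '{"cmd": "ls"}'},
    # 本文 demo 常用的 wrapper 形状
    {"type": "function_call",
     "function": {"name": "functions.exec_command", "arguments": {"cmd": "ls"}}},
    # Chat Completions tool call item
    {"type": "function", "id": "call_1",
     "function": {"name": "functions.exec_command", "arguments": '{"cmd": "ls"}'}},
]

for env in envelopes:
    call = normalize_tool_call(env)
    print(call.source_variant, call.dispatch_key, call.arguments)
# responses_api      functions.exec_command {'cmd': 'ls'}
# responses_wrapper  functions.exec_command {'cmd': 'ls'}
# chat_completions   functions.exec_command {'cmd': 'ls'}
python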

5.3 Claude 风格#

Claude 风格用 type: "tool_use" + name + input

if raw.get("type") == "tool_use":
    if "name" not in raw or "input" not in raw:
        raise EnvelopeDispatchError(
            "claude-style tool_use requires name and input"
        )
    if not isinstance(raw["input"], dict):
        raise EnvelopeDispatchError(
            "claude-style tool_use input must be an object"
        )
    call_id = str(raw.get("id") or _generated_call_id("claude"))
    return NormalizedToolCall(
        source_format="claude",
        source_variant="messages_api",
        call_id=call_id,
        dispatch_key=str(raw["name"]),
        arguments=dict(raw["input"]),
        raw_envelope=raw,
    )
python

5.4 兜底异常#

raise EnvelopeDispatchError("unrecognized tool call envelope shape")
python

六、核心二:build_tool_result_envelope#

这个函数做的是 normalize 的反方向:

  • normalize 是”外部格式 → 内部格式”
  • build_tool_result_envelope 是”内部结果 → 外部格式”

根据 source_format 和必要时的 source_variant 分支返回不同字典:

def build_tool_result_envelope(
    normalized_call: NormalizedToolCall,
    output: Dict[str, Any],
) -> Dict[str, Any]:

    if normalized_call.source_format == "openai":
        if normalized_call.source_variant == "chat_completions":
            return {
                "role": "tool",
                "tool_call_id": normalized_call.call_id,
                "name": normalized_call.dispatch_key,
                "content": _stringify_tool_output(output),
            }
        return {
            "type": "function_call_output",
            "call_id": normalized_call.call_id,
            "name": normalized_call.dispatch_key,
            "output": output,
        }

    if normalized_call.source_format == "claude":
        return {
            "type": "tool_result",
            "tool_use_id": normalized_call.call_id,
            "name": normalized_call.dispatch_key,
            "content": output,
        }

    if normalized_call.source_format == "codex":
        return {
            "type": "tool_result",
            "call_id": normalized_call.call_id,
            "recipient_name": normalized_call.dispatch_key,
            "output": output,
        }

    raise EnvelopeDispatchError(
        f"unsupported source format: {normalized_call.source_format}"
    )
python

注意这里已经不是简单的“按 provider 选模板”,而是“按 provider + variant 选模板”:

  • OpenAI Chat Completions:工具结果是独立的 role="tool" message
  • OpenAI Responses / wrapper:工具结果是 function_call_output
  • Claude:工具结果是 tool_result
  • Codex:工具结果是 demo 自定义的 tool_result

所以 source_variant 不是装饰字段,它直接决定结果 envelope 长什么样。


七、上下文追加函数#

这个函数的目的很单纯:把工具调用记录和工具结果记录加到 context 里。

def append_tool_interaction_to_context(
    context: MutableSequence[Dict[str, Any]],
    *,
    call_envelope: Mapping[str, Any],
    tool_result_envelope: Mapping[str, Any],
) -> List[Dict[str, Any]]:
    updated = list(context)
    updated.append({
        "role": "assistant",
        "type": "tool_call",
        "content": dict(call_envelope),
    })
    updated.append({
        "role": "tool",
        "type": "tool_result",
        "content": dict(tool_result_envelope),
    })
    return updated
python

这里最核心的思想是:

  • 模型不会自动记住工具调用过程
  • 宿主运行时必须自己维护上下文
  • 下一轮要把这些记录再喂回去

本质上,这个函数就是”更新对话状态”。不过要注意,它记录的是通用审计上下文,不是 provider-native 的下一轮请求历史。真正要把工具结果按 Claude / OpenAI / Gemini 的原生格式喂回去,还需要后面的 append_tool_results_to_provider_messages()


八、总调度函数 dispatch_tool_call#

建议最后写,因为它依赖前面的所有组件。它按顺序串起整条链路:

(交互演示:dispatch_tool_call 执行流程)

def dispatch_tool_call(
    call_envelope: Mapping[str, Any],
    *,
    context: Optional[MutableSequence[Dict[str, Any]]] = None,
    registry: Optional[Dict[str, Handler]] = None,
) -> Dict[str, Any]:

    # 第一步:normalize
    normalized = normalize_tool_call(call_envelope)

    # 第二步:拿 registry
    active_registry = dict(registry or TOOL_REGISTRY)

    # 第三步:查 handler
    handler = active_registry.get(normalized.dispatch_key)
    if handler is None:
        raise EnvelopeDispatchError(
            f"no handler registered for {normalized.dispatch_key!r}"
        )

    # 第四步:执行 handler
    try:
        output = handler(**normalized.arguments)
    except TypeError as exc:
        raise EnvelopeDispatchError(
            f"handler argument mismatch for {normalized.dispatch_key}: {exc}"
        ) from exc

    # 第五步:检查返回值
    if not isinstance(output, dict):
        raise EnvelopeDispatchError(
            f"handler {normalized.dispatch_key!r} returned "
            f"{type(output).__name__}, expected dict"
        )

    # 第六步:构造结果 envelope
    tool_result_envelope = build_tool_result_envelope(normalized, output)

    # 第七步:更新上下文
    updated_context = append_tool_interaction_to_context(
        context or [],
        call_envelope=call_envelope,
        tool_result_envelope=tool_result_envelope,
    )

    # 第八步:返回完整结果
    return {
        "normalized_call": {
            "source_format": normalized.source_format,
            "source_variant": normalized.source_variant,
            "call_id": normalized.call_id,
            "dispatch_key": normalized.dispatch_key,
            "arguments": normalized.arguments,
        },
        "tool_output": output,
        "tool_result_envelope": tool_result_envelope,
        "updated_context": updated_context,
    }
python

注意这里专门抓 TypeError——最常见问题就是参数名不匹配、缺参数、多参数。返回的不只是最终结果,还包括调试很有用的中间信息。

但到这里为止,你解决的仍然只是第一层:本地工具调用怎么被规范化、执行、回包。它还没有回答“Claude 风格的整包请求如何转成 OpenAI 风格,再把 OpenAI 的整包响应转回来”。这就是下一节的 gateway runtime。

8.1 第二层:Provider Gateway Runtime#

本地 dispatch 层处理的是“一个 tool call 进来怎么办”。gateway 层处理的是“整包 provider request / response 怎么在上下游之间来回翻译”。

这一层的关键不是 NormalizedToolCall,而是显式 route:

@dataclass(frozen=True)
class AdapterRoute:
    downstream_provider: Provider
    downstream_format: ApiFormat
    upstream_provider: Provider
    upstream_format: ApiFormat
python

AdapterRoute 的意义在于:系统不再写死“Claude 一定转 OpenAI”,而是把“哪种下游格式接哪种上游格式”显式建模出来。这样 adapter registry 才有存在意义。

围绕它展开的核心接口如下:

class ProviderAdapter(Protocol):
    def name(self) -> str: ...
    def route(self) -> AdapterRoute: ...
    def transform_request(self, body: Mapping[str, Any]) -> Dict[str, Any]: ...
    def transform_response(self, body: Mapping[str, Any]) -> Dict[str, Any]: ...


class IdentityAdapter:
    def transform_request(self, body: Mapping[str, Any]) -> Dict[str, Any]:
        return dict(body)

    def transform_response(self, body: Mapping[str, Any]) -> Dict[str, Any]:
        return dict(body)
python

这就是 adapter 模式在这里的具体落点:

  • ClaudeCodeOpenAIAdapter:真正做格式改写
  • GeminiPassthroughAdapter:仍然走同一条 route / registry / adapter 链路,但 body 原样返回
  • IdentityAdapter:把“透传”也变成一个显式的一等公民,而不是散落在 if/else 里的特判

注册和查找也很直接:

REGISTERED_ADAPTERS: List[ProviderAdapter] = [
    ClaudeCodeOpenAIAdapter(),
    GeminiPassthroughAdapter(),
]

ADAPTER_REGISTRY: Dict[Tuple[Provider, ApiFormat, Provider, ApiFormat], ProviderAdapter] = {
    adapter.route().key(): adapter for adapter in REGISTERED_ADAPTERS
}
python

于是 gateway 层的主问题就从“写死选哪个 provider”变成了“按 route 选 strategy”。
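
按 route 取 adapter 的用法大致如下(示意,请求体内容是随手编的):

adapter = get_adapter(
    downstream_provider=Provider.CLAUDE,
    downstream_format=ApiFormat.ANTHROPIC_MESSAGES,
    upstream_provider=Provider.OPENAI,
    upstream_format=ApiFormat.OPENAI_CHAT_COMPLETIONS,
)

anthropic_request = {
    "system": "You are a coding agent.",
    "messages": [{"role": "user", "content": "帮我看看这个目录里有什么"}],
}
openai_request = adapter.transform_request(anthropic_request)
# 换一条没注册的 route,get_adapter 会直接抛 ProviderAdapterError,而不是悄悄透传
python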

8.2 从静态 transform 到完整 tool loop#

有了 adapter,还只是完成了“静态改写”。真正让 gateway 活起来的是下面这几个函数:

| 函数 | 职责 |
| --- | --- |
| transform_gateway_exchange() | 静态演示 request / response 双向转换 |
| extract_tool_calls_from_provider_response() | 从 provider-native response 中抽出可 dispatch 的 tool calls |
| append_tool_results_to_provider_messages() | 按 provider-native 规则把 tool result 回填进下一轮请求 |
| run_gateway_tool_loop() | 把整轮 transform -> extract -> dispatch -> append back 串成闭环 |

最关键的是 run_gateway_tool_loop(),它回答的是:不是只调一次模型,而是模型要工具、宿主执行、再把结果回给模型,直到模型停止要工具为止。

(交互演示:Gateway Tool Loop)

骨架大概就是这样:

adapter = get_adapter(
    downstream_provider=downstream_provider,
    downstream_format=downstream_format,
    upstream_provider=upstream_provider,
    upstream_format=upstream_format,
)

current_request = _clone_jsonish(dict(initial_request))

for turn_index in range(max_turns):
    upstream_request = adapter.transform_request(current_request)
    upstream_response = dict(upstream_responder(_clone_jsonish(upstream_request), turn_index))
    downstream_response = adapter.transform_response(upstream_response)

    tool_calls = extract_tool_calls_from_provider_response(
        downstream_response,
        provider=downstream_provider,
        api_format=downstream_format,
    )

    if not tool_calls:
        return downstream_response

    dispatch_results = [
        dispatch_tool_call(call, registry=registry)
        for call in tool_calls
    ]
    current_request = append_tool_results_to_provider_messages(
        request_body=current_request,
        response_body=downstream_response,
        dispatch_results=dispatch_results,
        provider=downstream_provider,
        api_format=downstream_format,
    )
python

这里最值得注意的一点是:normalize_tool_call() 没有消失,它只是被放进了更大一层的 gateway loop 里。也就是说:

  • transform_request / transform_response 处理的是整包 provider 协议
  • normalize_tool_call / build_tool_result_envelope 处理的是单个 tool call envelope

这两层长得像,但不应该混成一层。


九、设计哲学:为什么要 normalize#

normalize 的目的不只是”让下游都接收统一参数”。更准确地说,normalize 是为了把”协议差异”挡在系统边界之外。

(交互演示:边界隔离架构)

9.1 边界隔离:把外部复杂性挡在入口#

外部世界是脏的、变动的、不统一的。内部系统应该是干净的、稳定的、可控的。

如果不做 normalize,协议差异会一路渗透到整个系统:dispatch 判断一次、日志判断一次、回包判断一次、context 更新又判断一次。以后加新 provider 还要全系统改。

把不稳定、异构、供应商相关的复杂性,收口在系统入口。 这就是”边界吸收复杂度”。

9.2 先统一语义,再执行逻辑#

normalize 不是在做业务执行,它是在做语义提纯。三种信封虽然字段名不同,但语义一样——都在表达”调用哪个工具,参数是什么”。

# 这三个底层表达的都是同一件事:
{"function": {"name": "snapshot_before_edit", "arguments": {...}}}
{"name": "snapshot_before_edit", "input": {...}}
{"recipient_name": "snapshot_before_edit", "parameters": {...}}
python

不要让系统围着表示形式转,而要围着语义转。 这就是 representation → semantics 的转换。

9.3 把变化点集中起来#

Provider 格式是高变化点。今天支持 OpenAI、Claude、Codex,明天可能加 Gemini、Mistral。

不过在双层 runtime 里,要把“变化点”分成两类看:

  • 如果新增的是单个 tool-call envelope 变体,主要改 normalize_tool_call()build_tool_result_envelope()
  • 如果新增的是整包 provider route,主要新增一个 ProviderAdapter 实现,并注册到 adapter registry

也就是说,“变化点集中”并不是把所有变化都塞进一个函数,而是把每类变化收口到它所属的那一层。

| 没有 normalize | 有 normalize |
| --- | --- |
| 协议判断散落 everywhere | 协议判断集中在适配层 |
| 新增一种变体要全系统追着改 | 新增 envelope 变体改 local adapter,新增 route 改 provider adapter |
| 主流程依赖具体协议 | 主流程依赖稳定抽象 |

9.4 内部先定义自己的世界观#

NormalizedToolCall 在说:“我这个运行时不想用厂商的世界观组织代码,我要先定义自己的世界观。”

你不是”围着 OpenAI 的 schema 写代码”,而是”先定义自己的内部抽象,再把 OpenAI 映射进来”。

  • 外部协议只是输入材料
  • 内部模型才是系统真正依赖的抽象

9.5 尽早校验,尽早失败#

normalize 让错误更早暴露。如果不在 normalize 里拦住格式错误,后面执行阶段才爆炸,错误会更模糊。

  • 错误位置明确
  • 错误类型清晰
  • 后面业务逻辑可以更简单

这叫 fail fast

9.6 类比编译器前端#

不同语言写法不同,但编译器不会一路拿源码文本往后传——它会先转成 AST。后面的优化器和执行器面对的是统一中间表示(IR),而不是原始文本。

| 编译器 | 本模块 |
| --- | --- |
| 输入语言 | Provider envelope |
| AST / IR | NormalizedToolCall |
| 执行器 | Dispatcher |

先把多样输入压缩成单一中间表示,再驱动后续流程。 输入多态,内部单态。


十、镜像设计:build_tool_result_envelope 的对称哲学#

在你这版代码里,其实有两组镜像动作:

  • 本地 dispatch 层:normalize_tool_call 和 build_tool_result_envelope
  • provider gateway 层:transform_request 和 transform_response

先说第一组。build_tool_result_envelope 是 normalize 的镜像动作

  • normalize_tool_call外部 → 内部
  • build_tool_result_envelope内部 → 外部

两者放在一起,构成了“单个 tool call envelope”的完整边界设计。

10.1 为什么结果回包也不直接返回 dict#

handler 返回的只是”内部结果”,不是”协议结果”。比如 handler 可能只返回 {"ok": True, "snapshot_path": "..."}——这只是业务数据。但模型侧真正期待的,还包括:这是哪次调用的结果、对应哪个工具、字段名要符合哪个 provider 的协议。

业务结果本身差不多,但协议包装不同。 所以这里的哲学和 normalize 完全对称:

内部执行逻辑不应该关心外部协议怎么收结果。
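
举个例子(call_id 和参数都是假想的):同一份业务结果,按不同 variant 会被包成完全不同的协议形状:

output = {"ok": True, "snapshot_path": "src/app.py.bak"}

chat_call = NormalizedToolCall(
    source_format="openai", source_variant="chat_completions",
    call_id="call_1", dispatch_key="snapshot_before_edit",
    arguments={"path": "src/app.py"}, raw_envelope={},
)
responses_call = NormalizedToolCall(
    source_format="openai", source_variant="responses_wrapper",
    call_id="call_1", dispatch_key="snapshot_before_edit",
    arguments={"path": "src/app.py"}, raw_envelope={},
)

build_tool_result_envelope(chat_call, output)
# {"role": "tool", "tool_call_id": "call_1", "name": "snapshot_before_edit",
#  "content": '{"ok":true,"snapshot_path":"src/app.py.bak"}'}

build_tool_result_envelope(responses_call, output)
# {"type": "function_call_output", "call_id": "call_1",
#  "name": "snapshot_before_edit", "output": {"ok": True, "snapshot_path": "src/app.py.bak"}}
python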

10.2 双向适配模型#

外部请求 → 适配输入 → 内部执行 → 适配输出 → 外部结果
plaintext

真正稳定的只有中间那一层:NormalizedToolCall、handler(**arguments)、output: dict。输入和输出都可能因 envelope 变体而变化,但中间层最好不要变。

再往上一层,provider gateway 也有一个类似结构:

downstream request → transform_request() → upstream request

upstream response → transform_response() → downstream response

所以“镜像设计”不是只出现一次,而是在两层边界上都出现一次。

把不稳定性留在边界,把稳定性留在核心。

10.3 六边形架构 / Ports and Adapters#

虽然这个例子很小,但它很像六边形架构的思想。内部核心只关心”调用哪个能力、参数是什么、执行结果是什么”,而不关心”请求是 OpenAI 发来的还是 Claude 发来的、结果回包要叫 output 还是 content”。而 gateway 层则只关心“整包协议怎么翻译”,不直接碰本地业务 handler。

| 角色 | 对应组件 |
| --- | --- |
| 输入适配器 | normalize_tool_call、transform_request |
| 输出适配器 | build_tool_result_envelope、transform_response |
| 核心编排器 | dispatch_tool_call、run_gateway_tool_loop |
| 能力实现层 | TOOL_REGISTRY + handlers |

10.4 边界进出对称#

如果只做输入适配、不做输出适配,核心编排逻辑就会被 provider 细节污染——你可能会在 dispatch_tool_call 里写 if source_format == "openai": ...。最漂亮的做法是:入口统一一次,出口统一一次,中间主流程只处理内部对象。

10.5 维护抽象纯度#

如果 handler 必须自己返回 OpenAI/Claude/Codex 风格结果,业务 handler 就已经知道外部协议了。一旦换 provider,所有 handler 都可能要改。

业务逻辑不应该依赖传输协议。 handler 只应该表达业务事实——是否成功、生成了什么文件、输出是什么数据。至于封装和投递,那是 runtime 的事。

这和 Web 开发里一样:service 层返回业务对象,controller 层决定 HTTP status、JSON 结构。不要把 controller 的事塞进 service。

10.6 source_format 的克制设计#

内部虽然不该被 provider 细节污染,但 runtime 仍然需要知道”最后要翻译成哪种外部格式”。所以 source_format / source_variant 的设计很有意思:

  • 它不是给业务 handler 用的
  • 它是给本地 envelope 适配层用的
  • 不让整个系统到处知道 provider 是谁,但也不完全丢掉必要的回包信息

而 provider 级别的路由信息,则由 AdapterRoute 单独承载。这样“单个 tool call 的来源信息”和“整包 request/response 的上下游路由”不会混在同一个字段里。

保留必要来源信息,但不让它进入业务语义层。

10.7 协议/语义解耦#

  • 协议:字段名、消息格式、信封形状
  • 语义:调用哪个工具、参数是什么、结果是什么

normalize 和 result envelope 两层一起实现的,就是:让协议变化不影响语义层,让语义层稳定存在。

10.8 可测试性收益#

现在你可以分别测:

| 测试对象 | 验证内容 |
| --- | --- |
| normalize | 给 OpenAI/Claude/Codex 输入,看是否都变成同一个内部对象 |
| dispatch | 给合理的内部路径,看 handler 是否被正确执行 |
| result envelope | 给统一 output,看是否能正确还原为不同 provider 格式 |
| gateway adapter | 给一份下游 request / 上游 response,看 route 是否选对、transform 是否正确 |
| gateway loop | 给一个会要工具的 responder,看是否能完成 extract → dispatch → append back 的闭环 |

系统被切成了几个可独立验证的部件。这不仅是”架构优雅”,还有非常实际的收益:更好测、更好扩展、更好 debug、更好替换 provider。
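
比如 normalize 这一块,用 pytest 风格写出来大概是这样(示意):

import pytest

def test_claude_and_codex_normalize_to_same_shape():
    claude_env = {"type": "tool_use", "id": "toolu_1",
                  "name": "restore_snapshot", "input": {"path": "a.py"}}
    codex_env = {"recipient_name": "restore_snapshot",
                 "parameters": {"path": "a.py"}}
    for env in (claude_env, codex_env):
        call = normalize_tool_call(env)
        assert call.dispatch_key == "restore_snapshot"
        assert call.arguments == {"path": "a.py"}

def test_unknown_envelope_fails_fast():
    with pytest.raises(EnvelopeDispatchError):
        normalize_tool_call({"foo": "bar"})
python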


十一、生产级实现:cc-switch 的 Rust 双向转换#

上面是 Python 的实现。在真实生产环境中,cc-switch(一个跨平台的 Claude Code / Codex / Gemini CLI 配置管理工具)的代理层用 Rust 实现了和本文第二层 gateway runtime 高度对应的双向适配模式

Adapter Trait:统一供应商接口#

cc-switch 定义了一个 ProviderAdapter trait,所有供应商适配器都需要实现:

pub trait ProviderAdapter: Send + Sync {
    fn name(&self) -> &'static str;
    fn extract_base_url(&self, provider: &Provider) -> Result<String, ProxyError>;
    fn extract_auth(&self, provider: &Provider) -> Option<AuthInfo>;
    fn build_url(&self, base_url: &str, endpoint: &str) -> String;
    fn add_auth_headers(&self, request: RequestBuilder, auth: &AuthInfo) -> RequestBuilder;

    // 双向转换接口
    fn needs_transform(&self, _provider: &Provider) -> bool { false }
    fn transform_request(&self, body: Value, _provider: &Provider) -> Result<Value, ProxyError> {
        Ok(body)  // 默认透传
    }
    fn transform_response(&self, body: Value) -> Result<Value, ProxyError> {
        Ok(body)  // 默认透传
    }
}
rust

注意 transform_request 和 transform_response 就是入口适配和出口适配,默认是透传(identity),只在需要格式转换时才覆盖。

Anthropic → OpenAI:请求转换(gateway request transform 方向)#

cc-switch 在 transform.rs 中实现了 Anthropic 格式到 OpenAI 格式的请求转换。关键逻辑是工具调用的转换:

// Claude 的 tool_use → OpenAI 的 tool_calls
"tool_use" => {
    let id = block.get("id").and_then(|i| i.as_str()).unwrap_or("");
    let name = block.get("name").and_then(|n| n.as_str()).unwrap_or("");
    let input = block.get("input").cloned().unwrap_or(json!({}));
    tool_calls.push(json!({
        "id": id,
        "type": "function",
        "function": {
            "name": name,
            // 注意:OpenAI 的 arguments 必须是 JSON 字符串
            "arguments": serde_json::to_string(&input).unwrap_or_default()
        }
    }));
}
rust

这里能看到和 Python 实现完全对应的问题:Claude 的 input 是 dict,但 OpenAI 的 arguments 必须是 JSON 字符串。这个问题在本文里一方面由 gateway 层的 transform_request() 处理,另一方面也会在本地 envelope 标准化时由 _parse_arguments() 兜住。

OpenAI → Anthropic:响应转换(gateway response transform 方向)#

// OpenAI 的 tool_calls → Claude 的 tool_use
if let Some(tool_calls) = message.get("tool_calls").and_then(|t| t.as_array()) {
    for tc in tool_calls {
        let func = tc.get("function").unwrap_or(&empty_obj);
        let args_str = func.get("arguments")
            .and_then(|a| a.as_str())
            .unwrap_or("{}");
        let input: Value = serde_json::from_str(args_str).unwrap_or(json!({}));

        content.push(json!({
            "type": "tool_use",
            "id": tc.get("id").and_then(|i| i.as_str()).unwrap_or(""),
            "name": func.get("name").and_then(|n| n.as_str()).unwrap_or(""),
            "input": input  // JSON 字符串 → dict
        }));
    }
}
rust

这就是 transform_request 与 transform_response 的镜像关系在生产代码中的体现——进来时把 dict 转成 JSON 字符串,出去时把 JSON 字符串解析回 dict。

关键映射对照#

| Python 实现 | cc-switch Rust 实现 |
| --- | --- |
| transform_request() | transform_request() |
| transform_response() | transform_response() |
| ProviderAdapter / IdentityAdapter | ProviderAdapter trait + 默认透传实现 |
| AdapterRoute + adapter registry | provider adapter 选择逻辑 |
| _parse_arguments() | serde_json::from_str(args_str) |

如果一定要和第一层对应起来,那么 normalize_tool_call() 更像是“对单个 tool envelope 做 canonicalization”,而 cc-switch 这里主要处理的是“整包 HTTP body 的 provider 协议转换”。两者精神相通,但不该一一硬对齐。

设计一致性#

cc-switch 的实现验证了所有设计原则:

  1. 边界进出对称——transform_request 和 transform_response 成对出现
  2. 默认透传——只在 needs_transform() 返回 true 时才转换
  3. 适配器模式——ProviderAdapter trait 是统一接口,ClaudeAdapter 是具体实现
  4. 变化点集中——新增 provider(如 Gemini)只需实现新 adapter,不影响核心转发逻辑

十二、为什么用 Registry 做 Late Binding#

很多人第一反应是用 if/elif 分发:

# 反面示例
if dispatch_key == "snapshot_before_edit":
    output = snapshot_before_edit(**args)
elif dispatch_key == "exec_command":
    output = exec_command(**args)
elif dispatch_key == "apply_patch":
    output = apply_patch(**args)
# ...
python

这能跑,但它是脚本思维,不是 runtime 思维。

字符串→函数的间接层#

Registry 的本质是一张键到可调用对象 / 策略对象的映射表

在这份代码里,其实有两张注册表:

  • TOOL_REGISTRYdispatch_key -> handler
  • ADAPTER_REGISTRYAdapterRoute -> ProviderAdapter

先看本地工具这一张,它是最容易理解的:

TOOL_REGISTRY: Dict[str, Handler] = {
    "snapshot_before_edit": snapshot_before_edit,
    "functions.exec_command": exec_command,
    # ...
}
python

这张表把”协议里的工具名”和”真正要执行的 Python 函数”解耦了。

而 gateway 层那张 registry 的作用则是:把“上下游四元组 route”与“对应的转换策略 adapter”解耦。两张表解决的是不同层次的问题,但设计手法完全一致。

if/elif vs Registry 的关键差异#

| 对比维度 | if/elif 分发 | Registry |
| --- | --- | --- |
| 新增工具 | 改 dispatch 函数代码 | 往表里加一行 |
| 测试 | 需要 mock 整个 dispatch 函数 | 直接传入自定义 registry |
| 运行时扩展 | 不可能 | 可以动态注册/注销 |
| 协议名和函数名 | 强耦合 | 完全解耦 |
| 谁决定绑定关系 | dispatch 函数内部 | 调用方 / 配置层 |

为什么说 Registry 更像 Runtime#

if/elif 是编译时决定的——你写完代码,工具列表就固定了。Registry 是运行时决定的——调用方可以传入自定义 registry,可以动态增删工具。

这也是为什么 dispatch_tool_call 接受 registry 参数:

def dispatch_tool_call(
    call_envelope,
    *,
    registry: Optional[Mapping[str, Handler]] = None,  # 外部可注入
):
    active_registry = dict(registry or TOOL_REGISTRY)
python

测试时传一个只有一个 handler 的 registry,生产时用默认的。这种灵活性用 if/elif 做不到。
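
运行时的动态注册/注销也是同一个道理(list_directory 这个工具是假想的):

import os
from typing import Any, Dict

def list_directory(path: str = ".") -> Dict[str, Any]:
    # 假想的新工具:运行时临时挂进 registry
    return {"entries": sorted(os.listdir(path))}

TOOL_REGISTRY["functions.list_directory"] = list_directory   # 动态注册
# 用完之后也可以随时摘掉
TOOL_REGISTRY.pop("functions.list_directory", None)          # 动态注销
python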

更深一层:Late Binding 是一种依赖反转#

dispatch 不再依赖具体的 handler 实现,而是依赖 registry 接口。handler 通过注册自己来”插入”系统。这就是 IoC 容器的最小实现。


十三、为什么 Handler 统一要求返回 Dict#

如果有的 handler 返回字符串,有的返回 dict,有的返回 list,外层包装逻辑会越来越混乱。统一返回 dict 是一种契约设计

对 build_tool_result_envelope 的意义#

build_tool_result_envelope 直接把 handler 的返回值塞进 output / content 字段。如果返回类型不确定,这个函数就需要一堆类型判断。统一成 dict 后:

# 永远成立,不需要类型判断
tool_result_envelope = build_tool_result_envelope(normalized, output)
python

对调试的意义#

dict 可以直接 json.dumps,可以加字段、可以嵌套。字符串做不到这些。dict 是 JSON 世界里最自然的”业务结果容器”。

对 handler 的约束#

这也是对 handler 作者的一种指引——你的返回值应该是结构化的业务数据,不是一段文本。如果确实要返回文本,包成 {"text": "..."} 就行。

# dispatch 里的强制检查
if not isinstance(output, dict):
    raise EnvelopeDispatchError(
        f"handler {normalized.dispatch_key!r} returned "
        f"{type(output).__name__}, expected dict"
    )
python

十四、为什么 append_tool_interaction_to_context 单独拆出来#

很容易把上下文追加逻辑直接写在 dispatch_tool_call 里。但单独拆出来有三个好处:

1)职责清晰#

dispatch 负责”执行”,append 负责”记录”。这两个关注点不同——执行可能失败,但已经成功的调用仍然需要记录。

2)可独立调用#

有些场景下你可能不想走完整 dispatch 流程,但仍然需要手动往 context 里追加记录(比如重放历史记录、导入外部日志)。

3)解释了一个容易被忽略的事实#

模型不会自动记住工具调用过程。 大模型 API 是无状态的——每次调用都需要把完整对话历史传过去。宿主 runtime 必须自己维护 context,在下一轮调用前把工具调用和结果喂回去。

这个函数的存在本身就在提醒你:上下文管理是 runtime 的责任,不是模型的。


十五、dispatch_tool_call 为什么是编排器#

dispatch_tool_call 自己几乎不做任何”实质工作”。它做的是:

  1. 调用 normalize_tool_call
  2. 查 registry
  3. 调用 handler
  4. 调用 build_tool_result_envelope
  5. 调用 append_tool_interaction_to_context
  6. 组装返回值

它是一个纯粹的编排函数(orchestrator)。

为什么不把所有逻辑揉进一个函数#

如果把 normalize、dispatch、wrap、append 全写在一个函数里,你会得到一个 100+ 行的巨型函数,里面既有协议识别、又有 handler 执行、又有结果包装、又有上下文管理。

问题是:

  • 改一处影响全部——修改 normalize 逻辑可能意外影响 context 追加
  • 无法单独测试——想测 normalize 必须 mock handler,想测 handler 必须构造完整 envelope
  • 无法单独复用——只想 normalize 而不 dispatch?做不到

编排器模式的好处是:每个子函数可以独立理解、独立测试、独立复用。 编排器只负责把它们串起来。

编排器的返回值设计#

注意 dispatch 返回的不只是最终结果,而是完整的中间状态:

return {
    "normalized_call": {...},       # normalize 的输出
    "tool_output": output,          # handler 的原始输出
    "tool_result_envelope": {...},  # 包装后的结果
    "updated_context": [...],       # 更新后的上下文
}
python

这让调用方可以拿到任何一步的中间产物,方便调试、日志、审计。


十六、错误处理哲学#

这份代码里的错误处理不是随意的,有三条主线:

Fail Fast:尽早失败#

normalize 阶段就拦住格式错误——不要让一个格式不对的 envelope 一路传到 handler 执行阶段才爆炸。越早失败,错误信息越精确。

# normalize 里的早期校验
if not isinstance(fn, dict):
    raise EnvelopeDispatchError(
        "openai-style function_call envelope requires a 'function' object"
    )
python

边界校验:在系统入口做结构验证#

normalize 不仅提取字段,还校验结构。这意味着过了 normalize 之后的代码,可以安全地假设数据是合法的

这就是”在边界做验证,在内部做信任”的设计。

类型约束:用运行时检查弥补动态语言的缺陷#

Python 是动态类型语言,没有编译器帮你检查参数类型。所以代码里有多处显式类型检查:

| 检查位置 | 检查内容 | 防止什么 |
| --- | --- | --- |
| _parse_arguments | value 必须是 dict 或 str | 非法参数类型 |
| normalize_tool_call | parameters/input 必须是 dict | 上游传了非对象参数 |
| dispatch_tool_call | handler 返回值必须是 dict | handler 作者返回了非标准类型 |
| dispatch_tool_call | 捕获 TypeError | 参数名不匹配 |

这些检查在静态类型语言里是编译器的活,在 Python 里就是 runtime 的活。


十七、如果演化成生产级 Runtime#

目前的实现是一个完整但最小化的版本。如果要投入生产,还需要加这些层:

| 能力 | 为什么需要 | 实现思路 |
| --- | --- | --- |
| 日志 | 追踪每次工具调用的 input/output,排查问题 | normalize 后记录 dispatch_key + call_id,handler 执行后记录耗时和结果摘要 |
| 权限控制 | 不是所有工具都允许被调用 | 在 dispatch 里查 registry 之后、执行 handler 之前,检查 allowed_tools 白名单 |
| Schema 校验 | 确保 arguments 符合工具的参数 schema | normalize 后用 JSON Schema 校验 normalized.arguments,不合法则返回 agent-friendly 错误 |
| 重试 | handler 执行可能因网络等原因暂时失败 | 在 dispatch 里包装 retry 逻辑,注意幂等性 |
| 超时 | handler 不能无限期运行 | 用 asyncio.wait_for 或 signal.alarm 给 handler 加执行时限 |
| 幂等 | 重试时不能重复执行有副作用的操作 | handler 接受 call_id 作为幂等键,重复调用返回缓存结果 |
| 审计 | 合规需要,记录谁在什么时候调用了什么工具 | append_tool_interaction_to_context 同时写审计日志 |
| 沙箱 | 工具执行不能损害宿主系统 | handler 在 Docker/gVisor/E2B 容器中执行 |

这些能力的加入不需要改变核心架构——normalize → dispatch → wrap → append 的主链路保持不变,新增的能力以中间件的形式插入链路的各个阶段。这也是为什么把系统拆成独立函数很重要:每个函数都是一个可以插入中间件的接缝。
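
以日志为例,一个假想的中间件式包装大概长这样——不改 dispatch_tool_call 本身,只在外面套一层:

import logging
import time

logger = logging.getLogger("tool_dispatch")

def dispatch_with_logging(call_envelope, **kwargs):
    normalized = normalize_tool_call(call_envelope)   # 只为拿到 dispatch_key / call_id
    started = time.monotonic()
    try:
        result = dispatch_tool_call(call_envelope, **kwargs)
    except EnvelopeDispatchError:
        logger.exception("dispatch failed: %s (%s)",
                         normalized.dispatch_key, normalized.call_id)
        raise
    logger.info("dispatched %s (%s) in %.1f ms",
                normalized.dispatch_key, normalized.call_id,
                (time.monotonic() - started) * 1000)
    return result
python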


十八、最推荐的实现顺序#

如果你真的要从零自己敲,最自然的顺序已经不是“写完 dispatch 就结束”,而是先把第一层跑通,再往上补第二层 gateway

第 1 轮:先打通本地 dispatch 最小闭环#

  1. EnvelopeDispatchError
  2. NormalizedToolCall
  3. _parse_arguments()
  4. normalize_tool_call() 先只写一个 OpenAI 分支
  5. build_tool_result_envelope() 先只支持一个 OpenAI 结果形状
  6. dispatch_tool_call()
  7. TOOL_REGISTRY
  8. 用一个假的 handler + 一条假的 envelope 做冒烟测试(见下面的示例)
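
这一步的冒烟测试大致是下面这样(handler 和 envelope 都是假想的):

def fake_echo(message: str) -> dict:
    return {"echo": message}

fake_registry = {"fake_echo": fake_echo}
fake_envelope = {
    "type": "function_call",
    "function": {"name": "fake_echo", "arguments": '{"message": "hello"}'},
}

result = dispatch_tool_call(fake_envelope, registry=fake_registry)
assert result["tool_output"] == {"echo": "hello"}
assert result["tool_result_envelope"]["type"] == "function_call_output"
python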

第 2 轮:把本地 envelope 变体补齐#

  1. 补 Claude 分支
  2. 补 Codex 分支
  3. source_variant 加进 NormalizedToolCall
  4. 把 OpenAI 拆成 responses_api / responses_wrapper / chat_completions
  5. append_tool_interaction_to_context() 独立出来

第 3 轮:把 provider route 显式建模#

  1. Provider
  2. ApiFormat
  3. AdapterRoute
  4. ProviderAdapter
  5. IdentityAdapter

做到这一步,你才真正从“脚本式转换”进入“gateway 设计”。

第 4 轮:补 adapter registry 和静态 transform#

  1. ClaudeCodeOpenAIAdapter
  2. GeminiPassthroughAdapter
  3. REGISTERED_ADAPTERS
  4. ADAPTER_REGISTRY
  5. get_adapter()
  6. transform_gateway_exchange()

第 5 轮:把 gateway 变成真正的 tool loop#

  1. extract_tool_calls_from_provider_response()
  2. append_tool_results_to_provider_messages()
  3. run_gateway_tool_loop()
  4. 一个会在第 0 轮要工具、第 1 轮给最终答案的 upstream_responder

做到这里,整套结构就已经从“单次 dispatch demo”升级成“真正像 gateway 的 teaching runtime”了。


十九、最容易犯的 6 个错误#

1)把 provider request transform 和 tool envelope normalize 混成一层#

transform_request() 处理整包 HTTP body,normalize_tool_call() 处理单个 tool call。两者很像,但职责不同。

2)只记 source_format,不记 source_variant#

OpenAI Chat Completions 和 OpenAI Responses 的结果 envelope 根本不是一个形状。少了 source_variant,回包逻辑迟早会脏掉。

3)以为“透传”就是绕开系统#

不是。透传应该仍然经过 route、registry、adapter 选择,只是 transform_request() / transform_response() 原样返回。

4)把通用 audit context 和 provider-native history 混为一谈#

append_tool_interaction_to_context() 是记账,append_tool_results_to_provider_messages() 才是给下一轮 provider 请求喂历史。

5)只做静态 request/response transform,不把它串成 loop#

只会做 transform_gateway_exchange() 还不够。真正的 agent runtime 一定还要有 extract、dispatch、append back 这三步。

6)把 registry 写死在函数内部#

无论是 TOOL_REGISTRY 还是 ADAPTER_REGISTRY,都应该保留注入和替换空间,否则测试、扩展、灰度都很难做。


二十、为什么这份代码的分层是合理的#

职责分离#

  • normalize_tool_call:只处理单个 tool envelope 的 canonicalization
  • dispatch_tool_call:只处理本地 handler 编排
  • build_tool_result_envelope:只处理单个 tool result 的回包模板
  • ProviderAdapter:只处理整包 request / response 的协议转换
  • extract_tool_calls_from_provider_response:只负责从 provider-native response 中抽取可 dispatch 的调用
  • append_tool_results_to_provider_messages:只负责按 provider-native 规则回填历史
  • run_gateway_tool_loop:只负责把整轮闭环串起来

两张注册表,各管一层#

  • TOOL_REGISTRY:解决“这个工具名该执行哪个本地函数”
  • ADAPTER_REGISTRY:解决“这条 route 该选哪个转换策略”

这就是为什么它看起来不像一个大而全的巨型函数,而像几个彼此咬合的小模块。

标准化对象与显式路由并存#

NormalizedToolCall 负责统一“单个调用”的内部表示,AdapterRoute 负责统一“整包上下游”的路由表示。前者解决语义提纯,后者解决系统路径选择。

透传也被建模成显式能力#

IdentityAdapter / GeminiPassthroughAdapter 很关键。它们说明“什么都不改”也应该是一种显式策略,而不是随手漏掉的默认分支。


二十一、完整代码#

这篇文章对应的文件,已经不再是"只有 dispatch 的小脚本",而是一个双层 teaching runtime。下面放的是当前完整实现:

dispatch_envelope_demo.py
#!/usr/bin/env python3
"""Dispatch envelope + provider adapter demo.

This module combines two related runtime boundaries:

1. Local host runtime:
   envelope -> normalize -> dispatch -> execute -> wrap_result -> append_context
2. Provider gateway runtime (in the spirit of cc-switch):
   provider request -> transform_request -> upstream format
   upstream response -> transform_response -> downstream format

The goal is to show both layers in one place:

- the *tool dispatch* layer turns one tool-call envelope into a concrete local
  function invocation plus a matching tool result envelope
- the *provider adapter* layer rewrites whole request / response bodies between
  Anthropic-style and OpenAI-style message formats

This remains a teaching implementation, not the real Codex or cc-switch code.
"""

from __future__ import annotations

import argparse
import json
import sys
import uuid
from dataclasses import dataclass
from enum import Enum
from pathlib import Path
from typing import Any, Callable, Dict, List, Mapping, MutableSequence, Optional, Protocol, Sequence, Tuple


CURRENT_DIR = Path(__file__).resolve().parent
if str(CURRENT_DIR) not in sys.path:
    sys.path.insert(0, str(CURRENT_DIR))

# Allow the module to be imported as part of `src.*` or executed directly as a
# standalone script from the demo directory.
try:
    from src.edit_validation_workflow import (
        edit_file_with_validation,
        restore_snapshot,
        snapshot_before_edit,
    )
    from src.local_runtime import apply_patch, exec_command, view_image, write_stdin
except ModuleNotFoundError:
    from edit_validation_workflow import (
        edit_file_with_validation,
        restore_snapshot,
        snapshot_before_edit,
    )
    from local_runtime import apply_patch, exec_command, view_image, write_stdin


class EnvelopeDispatchError(Exception):
    """Raised when a tool call envelope cannot be normalized or dispatched."""


class ProviderAdapterError(Exception):
    """Raised when a provider request or response cannot be transformed."""


class Provider(str, Enum):
    OPENAI = "openai"
    CLAUDE = "claude"
    GEMINI = "gemini"
    CODEX = "codex"


class ApiFormat(str, Enum):
    OPENAI_CHAT_COMPLETIONS = "openai_chat_completions"
    OPENAI_RESPONSES = "openai_responses"
    ANTHROPIC_MESSAGES = "anthropic_messages"
    GEMINI_CONTENTS = "gemini_contents"
    CODEX_TERMINAL = "codex_terminal"


@dataclass(frozen=True)
class AdapterRoute:
    downstream_provider: Provider
    downstream_format: ApiFormat
    upstream_provider: Provider
    upstream_format: ApiFormat

    def key(self) -> Tuple[Provider, ApiFormat, Provider, ApiFormat]:
        return (
            self.downstream_provider,
            self.downstream_format,
            self.upstream_provider,
            self.upstream_format,
        )

    def to_dict(self) -> Dict[str, str]:
        return {
            "downstream_provider": self.downstream_provider.value,
            "downstream_format": self.downstream_format.value,
            "upstream_provider": self.upstream_provider.value,
            "upstream_format": self.upstream_format.value,
        }


@dataclass
class NormalizedToolCall:
    source_format: str
    source_variant: str
    call_id: str
    dispatch_key: str
    arguments: Dict[str, Any]
    raw_envelope: Dict[str, Any]


Handler = Callable[..., Dict[str, Any]]
UpstreamResponder = Callable[[Dict[str, Any], int], Mapping[str, Any]]


@dataclass
class GatewayTurn:
    turn_index: int
    upstream_request: Dict[str, Any]
    upstream_response: Dict[str, Any]
    downstream_response: Dict[str, Any]
    tool_calls: List[Dict[str, Any]]
    dispatch_results: List[Dict[str, Any]]
    assistant_text: Optional[str] = None
    next_request: Optional[Dict[str, Any]] = None


@dataclass
class TraceEvent:
    session_id: int
    kind: str
    turn_index: Optional[int]
    payload: Dict[str, Any]


TOOL_REGISTRY: Dict[str, Handler] = {
    "functions.exec_command": exec_command,
    "functions.write_stdin": write_stdin,
    "functions.apply_patch": apply_patch,
    "functions.view_image": view_image,
    "snapshot_before_edit": snapshot_before_edit,
    "restore_snapshot": restore_snapshot,
    "edit_file_with_validation": edit_file_with_validation,
}


def _generated_call_id(prefix: str = "call") -> str:
    return f"{prefix}_{uuid.uuid4().hex[:8]}"


def _json_compact(value: Any) -> str:
    return json.dumps(value, ensure_ascii=False, separators=(",", ":"))


def _parse_arguments(value: Any) -> Dict[str, Any]:
    if isinstance(value, dict):
        return dict(value)
    if isinstance(value, str):
        parsed = json.loads(value)
        if not isinstance(parsed, dict):
            raise EnvelopeDispatchError(
                "function.arguments JSON must decode to an object"
            )
        return parsed
    raise EnvelopeDispatchError(
        f"tool arguments must be object or JSON string, got {type(value).__name__}"
    )


def _stringify_tool_output(value: Any) -> str:
    if isinstance(value, str):
        return value
    return _json_compact(value)


def _clone_jsonish(value: Any) -> Any:
    return json.loads(json.dumps(value, ensure_ascii=False))


def _expect_dict(value: Any, label: str, *, exc_type: type[Exception]) -> Dict[str, Any]:
    if not isinstance(value, dict):
        raise exc_type(f"{label} must be an object")
    return dict(value)


def normalize_tool_call(envelope: Mapping[str, Any]) -> NormalizedToolCall:
    """Normalize provider-specific tool envelopes into one canonical shape."""

    raw = dict(envelope)

    # Codex terminal-style tool dispatch.
    if "recipient_name" in raw:
        parameters = raw.get("parameters")
        if not isinstance(parameters, dict):
            raise EnvelopeDispatchError(
                "codex envelope requires an object 'parameters' field"
            )
        return NormalizedToolCall(
            source_format="codex",
            source_variant="codex_terminal",
            call_id=str(raw.get("call_id") or raw.get("id") or _generated_call_id("codex")),
            dispatch_key=str(raw["recipient_name"]),
            arguments=dict(parameters),
            raw_envelope=raw,
        )

    # OpenAI Responses API item:
    # {"type": "function_call", "call_id": "...", "name": "...", "arguments": "..."}
    if raw.get("type") == "function_call" and "name" in raw and "arguments" in raw:
        return NormalizedToolCall(
            source_format="openai",
            source_variant="responses_api",
            call_id=str(raw.get("call_id") or raw.get("id") or _generated_call_id("openai")),
            dispatch_key=str(raw["name"]),
            arguments=_parse_arguments(raw["arguments"]),
            raw_envelope=raw,
        )

    # Teaching wrapper used in this repo:
    # {"type": "function_call", "function": {"name": "...", "arguments": ...}}
    if raw.get("type") == "function_call":
        fn = raw.get("function")
        if not isinstance(fn, dict):
            raise EnvelopeDispatchError(
                "openai-style function_call envelope requires a 'function' object"
            )
        if "name" not in fn or "arguments" not in fn:
            raise EnvelopeDispatchError(
                "openai-style function_call requires function.name and function.arguments"
            )
        return NormalizedToolCall(
            source_format="openai",
            source_variant="responses_wrapper",
            call_id=str(raw.get("call_id") or raw.get("id") or _generated_call_id("openai")),
            dispatch_key=str(fn["name"]),
            arguments=_parse_arguments(fn["arguments"]),
            raw_envelope=raw,
        )

    # OpenAI Chat Completions tool call item:
    # {"id": "...", "type": "function", "function": {...}}
    if raw.get("type") == "function":
        fn = raw.get("function")
        if not isinstance(fn, dict):
            raise EnvelopeDispatchError(
                "openai chat tool call requires a 'function' object"
            )
        if "name" not in fn or "arguments" not in fn:
            raise EnvelopeDispatchError(
                "openai chat tool call requires function.name and function.arguments"
            )
        return NormalizedToolCall(
            source_format="openai",
            source_variant="chat_completions",
            call_id=str(raw.get("id") or raw.get("call_id") or _generated_call_id("openai")),
            dispatch_key=str(fn["name"]),
            arguments=_parse_arguments(fn["arguments"]),
            raw_envelope=raw,
        )

    # Claude Messages API tool_use block.
    if raw.get("type") == "tool_use":
        if "name" not in raw or "input" not in raw:
            raise EnvelopeDispatchError("claude-style tool_use requires name and input")
        if not isinstance(raw["input"], dict):
            raise EnvelopeDispatchError("claude-style tool_use input must be an object")
        return NormalizedToolCall(
            source_format="claude",
            source_variant="messages_api",
            call_id=str(raw.get("id") or _generated_call_id("claude")),
            dispatch_key=str(raw["name"]),
            arguments=dict(raw["input"]),
            raw_envelope=raw,
        )

    raise EnvelopeDispatchError("unrecognized tool call envelope shape")


def build_tool_result_envelope(
    normalized_call: NormalizedToolCall,
    output: Dict[str, Any],
) -> Dict[str, Any]:
    """Wrap handler output back into the matching provider-style result shape."""

    if normalized_call.source_format == "openai":
        if normalized_call.source_variant == "chat_completions":
            return {
                "role": "tool",
                "tool_call_id": normalized_call.call_id,
                "name": normalized_call.dispatch_key,
                "content": _stringify_tool_output(output),
            }
        return {
            "type": "function_call_output",
            "call_id": normalized_call.call_id,
            "name": normalized_call.dispatch_key,
            "output": output,
        }

    if normalized_call.source_format == "claude":
        return {
            "type": "tool_result",
            "tool_use_id": normalized_call.call_id,
            "name": normalized_call.dispatch_key,
            "content": output,
        }

    if normalized_call.source_format == "codex":
        return {
            "type": "tool_result",
            "call_id": normalized_call.call_id,
            "recipient_name": normalized_call.dispatch_key,
            "output": output,
        }

    raise EnvelopeDispatchError(
        f"unsupported source format: {normalized_call.source_format}"
    )


def append_tool_interaction_to_context(
    context: MutableSequence[Dict[str, Any]],
    *,
    call_envelope: Mapping[str, Any],
    tool_result_envelope: Mapping[str, Any],
) -> List[Dict[str, Any]]:
    """Append generic audit-friendly tool-call history into context."""

    updated = list(context)
    updated.append(
        {
            "role": "assistant",
            "type": "tool_call",
            "content": dict(call_envelope),
        }
    )
    updated.append(
        {
            "role": "tool",
            "type": "tool_result",
            "content": dict(tool_result_envelope),
        }
    )
    return updated


def build_provider_native_context_entries(
    normalized_call: NormalizedToolCall,
    *,
    call_envelope: Mapping[str, Any],
    tool_result_envelope: Mapping[str, Any],
) -> List[Dict[str, Any]]:
    """Return provider-native context entries for the same tool interaction."""

    if normalized_call.source_format == "claude":
        return [
            {
                "role": "assistant",
                "content": [dict(call_envelope)],
            },
            {
                "role": "user",
                "content": [dict(tool_result_envelope)],
            },
        ]

    if normalized_call.source_format == "openai" and normalized_call.source_variant == "chat_completions":
        return [
            {
                "role": "assistant",
                "content": None,
                "tool_calls": [dict(call_envelope)],
            },
            dict(tool_result_envelope),
        ]

    return [
        {
            "role": "assistant",
            "content": [dict(call_envelope)],
        },
        {
            "role": "tool",
            "content": [dict(tool_result_envelope)],
        },
    ]


def dispatch_tool_call(
    call_envelope: Mapping[str, Any],
    *,
    context: Optional[List[Dict[str, Any]]] = None,
    registry: Optional[Mapping[str, Handler]] = None,
) -> Dict[str, Any]:
    """Dispatch one normalized tool call through the local registry."""

    normalized = normalize_tool_call(call_envelope)
    active_registry = dict(registry or TOOL_REGISTRY)

    handler = active_registry.get(normalized.dispatch_key)
    if handler is None:
        raise EnvelopeDispatchError(
            f"no handler registered for {normalized.dispatch_key!r}"
        )

    try:
        output = handler(**normalized.arguments)
    except TypeError as exc:
        raise EnvelopeDispatchError(
            f"handler argument mismatch for {normalized.dispatch_key}: {exc}"
        ) from exc
    except Exception as exc:
        raise EnvelopeDispatchError(
            f"handler {normalized.dispatch_key!r} failed: {exc}"
        ) from exc

    if not isinstance(output, dict):
        raise EnvelopeDispatchError(
            f"handler {normalized.dispatch_key!r} returned "
            f"{type(output).__name__}, expected dict"
        )

    tool_result_envelope = build_tool_result_envelope(normalized, output)
    updated_context = append_tool_interaction_to_context(
        context or [],
        call_envelope=call_envelope,
        tool_result_envelope=tool_result_envelope,
    )
    provider_native_context = build_provider_native_context_entries(
        normalized,
        call_envelope=call_envelope,
        tool_result_envelope=tool_result_envelope,
    )

    return {
        "normalized_call": {
            "source_format": normalized.source_format,
            "source_variant": normalized.source_variant,
            "call_id": normalized.call_id,
            "dispatch_key": normalized.dispatch_key,
            "arguments": normalized.arguments,
        },
        "tool_output": output,
        "tool_result_envelope": tool_result_envelope,
        "updated_context": updated_context,
        "provider_native_context": provider_native_context,
    }


class ProviderAdapter(Protocol):
    """Small teaching protocol modeled after cc-switch style adapters."""

    def name(self) -> str:
        ...

    def route(self) -> AdapterRoute:
        ...

    def transform_request(self, body: Mapping[str, Any]) -> Dict[str, Any]:
        ...

    def transform_response(self, body: Mapping[str, Any]) -> Dict[str, Any]:
        ...


class IdentityAdapter:
    """Default passthrough adapter for providers that already match upstream."""

    def __init__(self, *, adapter_name: str, adapter_route: AdapterRoute) -> None:
        self._name = adapter_name
        self._route = adapter_route

    def name(self) -> str:
        return self._name

    def route(self) -> AdapterRoute:
        return self._route

    def transform_request(self, body: Mapping[str, Any]) -> Dict[str, Any]:
        return dict(body)

    def transform_response(self, body: Mapping[str, Any]) -> Dict[str, Any]:
        return dict(body)


def _anthropic_tool_to_openai(tool: Mapping[str, Any]) -> Dict[str, Any]:
    raw = dict(tool)
    return {
        "type": "function",
        "function": {
            "name": raw["name"],
            "description": raw.get("description", ""),
            "parameters": raw.get("input_schema", {"type": "object", "properties": {}}),
        },
    }


def _anthropic_text_from_blocks(content: Sequence[Any]) -> str:
    parts: List[str] = []
    for block in content:
        if isinstance(block, dict) and block.get("type") == "text":
            text = block.get("text")
            if text:
                parts.append(str(text))
    return "\n".join(parts)


def _anthropic_message_to_openai_messages(message: Mapping[str, Any]) -> List[Dict[str, Any]]:
    raw = dict(message)
    role = str(raw.get("role", "user"))
    content = raw.get("content", "")

    if isinstance(content, str):
        return [{"role": role, "content": content}]
    if not isinstance(content, list):
        raise ProviderAdapterError("anthropic message content must be string or list")

    if role == "assistant":
        text_parts: List[str] = []
        tool_calls: List[Dict[str, Any]] = []
        for block in content:
            entry = _expect_dict(
                block,
                "anthropic assistant content block",
                exc_type=ProviderAdapterError,
            )
            if entry.get("type") == "text":
                if entry.get("text"):
                    text_parts.append(str(entry["text"]))
                continue
            if entry.get("type") == "tool_use":
                input_obj = entry.get("input")
                if not isinstance(input_obj, dict):
                    raise ProviderAdapterError("anthropic tool_use input must be an object")
                tool_calls.append(
                    {
                        "id": str(entry.get("id") or _generated_call_id("toolu")),
                        "type": "function",
                        "function": {
                            "name": str(entry["name"]),
                            "arguments": _json_compact(input_obj),
                        },
                    }
                )
                continue
            raise ProviderAdapterError(
                f"unsupported anthropic assistant block type: {entry.get('type')!r}"
            )

        assistant_message: Dict[str, Any] = {
            "role": "assistant",
            "content": "\n".join(text_parts) if text_parts else None,
        }
        if tool_calls:
            assistant_message["tool_calls"] = tool_calls
        return [assistant_message]

    if role == "user":
        openai_messages: List[Dict[str, Any]] = []
        text_parts: List[str] = []

        for block in content:
            entry = _expect_dict(
                block,
                "anthropic user content block",
                exc_type=ProviderAdapterError,
            )
            if entry.get("type") == "text":
                if entry.get("text"):
                    text_parts.append(str(entry["text"]))
                continue
            if entry.get("type") == "tool_result":
                if text_parts:
                    openai_messages.append(
                        {
                            "role": "user",
                            "content": "\n".join(text_parts),
                        }
                    )
                    text_parts = []
                openai_messages.append(
                    {
                        "role": "tool",
                        "tool_call_id": str(entry["tool_use_id"]),
                        "content": _stringify_tool_output(entry.get("content", "")),
                    }
                )
                continue
            raise ProviderAdapterError(
                f"unsupported anthropic user block type: {entry.get('type')!r}"
            )

        if text_parts:
            openai_messages.append({"role": "user", "content": "\n".join(text_parts)})
        return openai_messages

    raise ProviderAdapterError(f"unsupported anthropic message role: {role!r}")


def _responses_output_to_openai_message(output_items: Sequence[Any]) -> Dict[str, Any]:
    text_parts: List[str] = []
    tool_calls: List[Dict[str, Any]] = []

    for item in output_items:
        entry = _expect_dict(
            item,
            "openai responses output item",
            exc_type=ProviderAdapterError,
        )
        item_type = entry.get("type")

        if item_type in {"text", "output_text"}:
            text = entry.get("text")
            if text:
                text_parts.append(str(text))
            continue

        if item_type == "message":
            content = entry.get("content")
            if isinstance(content, list):
                text = _anthropic_text_from_blocks(content)
                if text:
                    text_parts.append(text)
            continue

        if item_type == "function_call":
            arguments = entry.get("arguments", "{}")
            if not isinstance(arguments, str):
                arguments = _json_compact(arguments)
            tool_calls.append(
                {
                    "id": str(entry.get("call_id") or entry.get("id") or _generated_call_id("call")),
                    "type": "function",
                    "function": {
                        "name": str(entry["name"]),
                        "arguments": arguments,
                    },
                }
            )
            continue

    message: Dict[str, Any] = {
        "role": "assistant",
        "content": "\n".join(text_parts) if text_parts else None,
    }
    if tool_calls:
        message["tool_calls"] = tool_calls
    return message


def _extract_openai_assistant_message(body: Mapping[str, Any]) -> Dict[str, Any]:
    raw = dict(body)

    if isinstance(raw.get("choices"), list) and raw["choices"]:
        choice = _expect_dict(
            raw["choices"][0],
            "openai choice",
            exc_type=ProviderAdapterError,
        )
        return _expect_dict(
            choice.get("message"),
            "openai choice.message",
            exc_type=ProviderAdapterError,
        )

    if raw.get("role") == "assistant":
        return raw

    if isinstance(raw.get("output"), list):
        return _responses_output_to_openai_message(raw["output"])

    raise ProviderAdapterError("could not extract assistant message from OpenAI body")


class ClaudeCodeOpenAIAdapter:
    """cc-switch-style adapter:

    - downstream client speaks Anthropic Messages API-like payloads
    - upstream provider speaks OpenAI chat-completions tool calling
    """

    _route = AdapterRoute(
        downstream_provider=Provider.CLAUDE,
        downstream_format=ApiFormat.ANTHROPIC_MESSAGES,
        upstream_provider=Provider.OPENAI,
        upstream_format=ApiFormat.OPENAI_CHAT_COMPLETIONS,
    )

    def name(self) -> str:
        return "claude-code-openai"

    def route(self) -> AdapterRoute:
        return self._route

    def transform_request(self, body: Mapping[str, Any]) -> Dict[str, Any]:
        raw = dict(body)
        transformed = {
            key: value
            for key, value in raw.items()
            if key not in {"system", "messages", "tools"}
        }

        openai_messages: List[Dict[str, Any]] = []

        system = raw.get("system")
        if isinstance(system, str) and system.strip():
            openai_messages.append({"role": "system", "content": system})
        elif isinstance(system, list):
            system_text = _anthropic_text_from_blocks(system)
            if system_text:
                openai_messages.append({"role": "system", "content": system_text})

        for message in raw.get("messages", []):
            openai_messages.extend(_anthropic_message_to_openai_messages(message))

        transformed["messages"] = openai_messages
        if isinstance(raw.get("tools"), list):
            transformed["tools"] = [
                _anthropic_tool_to_openai(tool) for tool in raw["tools"]
            ]

        return transformed

    def transform_response(self, body: Mapping[str, Any]) -> Dict[str, Any]:
        raw = dict(body)
        message = _extract_openai_assistant_message(raw)

        content_blocks: List[Dict[str, Any]] = []
        if message.get("content"):
            content_blocks.append(
                {
                    "type": "text",
                    "text": str(message["content"]),
                }
            )

        for tool_call in message.get("tool_calls") or []:
            entry = _expect_dict(
                tool_call,
                "openai tool call",
                exc_type=ProviderAdapterError,
            )
            fn = _expect_dict(
                entry.get("function"),
                "openai tool call function",
                exc_type=ProviderAdapterError,
            )
            content_blocks.append(
                {
                    "type": "tool_use",
                    "id": str(entry.get("id") or _generated_call_id("toolu")),
                    "name": str(fn["name"]),
                    "input": _parse_arguments(fn.get("arguments", "{}")),
                }
            )

        transformed: Dict[str, Any] = {
            "role": "assistant",
            "content": content_blocks,
            "stop_reason": "tool_use"
            if any(block["type"] == "tool_use" for block in content_blocks)
            else "end_turn",
        }

        for key in ("id", "model", "usage"):
            if key in raw:
                transformed[key] = raw[key]

        return transformed


class GeminiPassthroughAdapter(IdentityAdapter):
    def __init__(self) -> None:
        super().__init__(
            adapter_name="gemini-passthrough",
            adapter_route=AdapterRoute(
                downstream_provider=Provider.GEMINI,
                downstream_format=ApiFormat.GEMINI_CONTENTS,
                upstream_provider=Provider.GEMINI,
                upstream_format=ApiFormat.GEMINI_CONTENTS,
            ),
        )


REGISTERED_ADAPTERS: List[ProviderAdapter] = [
    ClaudeCodeOpenAIAdapter(),
    GeminiPassthroughAdapter(),
]


def build_adapter_registry(
    adapters: Sequence[ProviderAdapter],
) -> Dict[Tuple[Provider, ApiFormat, Provider, ApiFormat], ProviderAdapter]:
    return {adapter.route().key(): adapter for adapter in adapters}


ADAPTER_REGISTRY: Dict[Tuple[Provider, ApiFormat, Provider, ApiFormat], ProviderAdapter] = (
    build_adapter_registry(REGISTERED_ADAPTERS)
)


def list_registered_adapters() -> List[Dict[str, Any]]:
    return [
        {
            "name": adapter.name(),
            "route": adapter.route().to_dict(),
        }
        for adapter in REGISTERED_ADAPTERS
    ]


def get_adapter(
    *,
    downstream_provider: Provider,
    downstream_format: ApiFormat,
    upstream_provider: Provider,
    upstream_format: ApiFormat,
) -> ProviderAdapter:
    route = AdapterRoute(
        downstream_provider=downstream_provider,
        downstream_format=downstream_format,
        upstream_provider=upstream_provider,
        upstream_format=upstream_format,
    )
    adapter = ADAPTER_REGISTRY.get(route.key())
    if adapter is None:
        raise ProviderAdapterError(
            "no adapter registered for "
            f"{downstream_provider.value}/{downstream_format.value} -> "
            f"{upstream_provider.value}/{upstream_format.value}"
        )
    return adapter


def transform_gateway_exchange(
    *,
    downstream_provider: Provider,
    downstream_format: ApiFormat,
    upstream_provider: Provider,
    upstream_format: ApiFormat,
    request_body: Mapping[str, Any],
    response_body: Mapping[str, Any],
) -> Dict[str, Any]:
    adapter = get_adapter(
        downstream_provider=downstream_provider,
        downstream_format=downstream_format,
        upstream_provider=upstream_provider,
        upstream_format=upstream_format,
    )
    return {
        "adapter": adapter.name(),
        "route": adapter.route().to_dict(),
        "input_request": dict(request_body),
        "transformed_request": adapter.transform_request(request_body),
        "input_response": dict(response_body),
        "transformed_response": adapter.transform_response(response_body),
    }


def extract_tool_calls_from_provider_response(
    response_body: Mapping[str, Any],
    *,
    provider: Provider,
    api_format: ApiFormat,
) -> List[Dict[str, Any]]:
    """Extract provider-native tool call envelopes from one assistant response."""

    raw = dict(response_body)

    if provider == Provider.CLAUDE and api_format == ApiFormat.ANTHROPIC_MESSAGES:
        if raw.get("role") != "assistant":
            raise ProviderAdapterError("anthropic response must be an assistant message")
        content = raw.get("content", [])
        if not isinstance(content, list):
            raise ProviderAdapterError("anthropic assistant content must be a list")
        return [
            dict(block)
            for block in content
            if isinstance(block, dict) and block.get("type") == "tool_use"
        ]

    if provider == Provider.OPENAI and api_format == ApiFormat.OPENAI_CHAT_COMPLETIONS:
        message = _extract_openai_assistant_message(raw)
        return [
            dict(tool_call)
            for tool_call in message.get("tool_calls") or []
            if isinstance(tool_call, dict)
        ]

    if provider == Provider.OPENAI and api_format == ApiFormat.OPENAI_RESPONSES:
        if isinstance(raw.get("output"), list):
            return [
                dict(item)
                for item in raw["output"]
                if isinstance(item, dict) and item.get("type") == "function_call"
            ]
        if raw.get("type") == "function_call":
            return [raw]
        return []

    if provider == Provider.GEMINI and api_format == ApiFormat.GEMINI_CONTENTS:
        candidates = raw.get("candidates")
        if not isinstance(candidates, list) or not candidates:
            return []
        first_candidate = _expect_dict(
            candidates[0],
            "gemini candidate",
            exc_type=ProviderAdapterError,
        )
        content = _expect_dict(
            first_candidate.get("content", {}),
            "gemini candidate content",
            exc_type=ProviderAdapterError,
        )
        parts = content.get("parts", [])
        if not isinstance(parts, list):
            raise ProviderAdapterError("gemini content.parts must be a list")

        extracted: List[Dict[str, Any]] = []
        for part in parts:
            if not isinstance(part, dict):
                continue
            function_call = part.get("functionCall")
            if not isinstance(function_call, dict):
                continue
            extracted.append(
                {
                    "type": "function_call",
                    "call_id": str(
                        function_call.get("id")
                        or function_call.get("call_id")
                        or _generated_call_id("gemini")
                    ),
                    "name": str(function_call["name"]),
                    "arguments": function_call.get("args", {}),
                }
            )
        return extracted

    raise ProviderAdapterError(
        f"tool extraction not implemented for {provider.value}/{api_format.value}"
    )


def extract_text_from_provider_response(
    response_body: Mapping[str, Any],
    *,
    provider: Provider,
    api_format: ApiFormat,
) -> Optional[str]:
    """Extract the assistant's human-readable text from a provider-native response."""

    raw = dict(response_body)

    if provider == Provider.CLAUDE and api_format == ApiFormat.ANTHROPIC_MESSAGES:
        if raw.get("role") != "assistant":
            raise ProviderAdapterError("anthropic response must be an assistant message")
        content = raw.get("content", [])
        if not isinstance(content, list):
            raise ProviderAdapterError("anthropic assistant content must be a list")
        return _anthropic_text_from_blocks(content)

    if provider == Provider.OPENAI and api_format == ApiFormat.OPENAI_CHAT_COMPLETIONS:
        message = _extract_openai_assistant_message(raw)
        content = message.get("content")
        if content is None:
            return None
        return str(content)

    if provider == Provider.OPENAI and api_format == ApiFormat.OPENAI_RESPONSES:
        message = _extract_openai_assistant_message(raw)
        content = message.get("content")
        if content is None:
            return None
        return str(content)

    if provider == Provider.GEMINI and api_format == ApiFormat.GEMINI_CONTENTS:
        candidates = raw.get("candidates")
        if not isinstance(candidates, list) or not candidates:
            return None
        first_candidate = _expect_dict(
            candidates[0],
            "gemini candidate",
            exc_type=ProviderAdapterError,
        )
        content = _expect_dict(
            first_candidate.get("content", {}),
            "gemini candidate content",
            exc_type=ProviderAdapterError,
        )
        parts = content.get("parts", [])
        if not isinstance(parts, list):
            raise ProviderAdapterError("gemini content.parts must be a list")
        texts = [
            str(part["text"])
            for part in parts
            if isinstance(part, dict) and part.get("text")
        ]
        return "\n".join(texts) if texts else None

    raise ProviderAdapterError(
        f"text extraction not implemented for {provider.value}/{api_format.value}"
    )


def append_tool_results_to_provider_messages(
    *,
    request_body: Mapping[str, Any],
    response_body: Mapping[str, Any],
    dispatch_results: Sequence[Mapping[str, Any]],
    provider: Provider,
    api_format: ApiFormat,
) -> Dict[str, Any]:
    """Append assistant tool-call output plus tool results into next-turn history."""

    updated_request = _clone_jsonish(dict(request_body))

    if provider == Provider.CLAUDE and api_format == ApiFormat.ANTHROPIC_MESSAGES:
        messages = list(updated_request.get("messages", []))
        messages.append(_clone_jsonish(dict(response_body)))

        tool_result_blocks = [
            _clone_jsonish(dict(result["tool_result_envelope"]))
            for result in dispatch_results
        ]
        if tool_result_blocks:
            messages.append({"role": "user", "content": tool_result_blocks})
        updated_request["messages"] = messages
        return updated_request

    if provider == Provider.OPENAI and api_format == ApiFormat.OPENAI_CHAT_COMPLETIONS:
        messages = list(updated_request.get("messages", []))
        messages.append(_clone_jsonish(_extract_openai_assistant_message(response_body)))
        messages.extend(
            _clone_jsonish(dict(result["tool_result_envelope"]))
            for result in dispatch_results
        )
        updated_request["messages"] = messages
        return updated_request

    if provider == Provider.OPENAI and api_format == ApiFormat.OPENAI_RESPONSES:
        inputs = list(updated_request.get("input", []))
        if raw_output := dict(response_body).get("output"):
            if isinstance(raw_output, list):
                inputs.extend(_clone_jsonish(raw_output))
        for result in dispatch_results:
            inputs.append(_clone_jsonish(dict(result["tool_result_envelope"])))
        updated_request["input"] = inputs
        return updated_request

    if provider == Provider.GEMINI and api_format == ApiFormat.GEMINI_CONTENTS:
        contents = list(updated_request.get("contents", []))

        candidates = dict(response_body).get("candidates")
        if isinstance(candidates, list) and candidates:
            first_candidate = _expect_dict(
                candidates[0],
                "gemini candidate",
                exc_type=ProviderAdapterError,
            )
            if isinstance(first_candidate.get("content"), dict):
                contents.append(_clone_jsonish(first_candidate["content"]))

        response_parts = []
        for result in dispatch_results:
            response_parts.append(
                {
                    "functionResponse": {
                        "name": result["normalized_call"]["dispatch_key"],
                        "response": result["tool_output"],
                    }
                }
            )
        if response_parts:
            contents.append({"role": "user", "parts": response_parts})

        updated_request["contents"] = contents
        return updated_request

    raise ProviderAdapterError(
        f"tool-result append not implemented for {provider.value}/{api_format.value}"
    )


def _gateway_turn_to_dict(turn: GatewayTurn) -> Dict[str, Any]:
    return {
        "turn_index": turn.turn_index,
        "upstream_request": turn.upstream_request,
        "upstream_response": turn.upstream_response,
        "downstream_response": turn.downstream_response,
        "tool_calls": turn.tool_calls,
        "dispatch_results": turn.dispatch_results,
        "assistant_text": turn.assistant_text,
        "next_request": turn.next_request,
    }


def _trace_event_to_dict(event: TraceEvent) -> Dict[str, Any]:
    return {
        "session_id": event.session_id,
        "kind": event.kind,
        "turn_index": event.turn_index,
        "payload": event.payload,
    }


@dataclass
class GatewaySession:
    adapter: ProviderAdapter
    downstream_provider: Provider
    downstream_format: ApiFormat
    upstream_provider: Provider
    upstream_format: ApiFormat
    current_request: Dict[str, Any]
    registry: Mapping[str, Handler]
    audit_context: List[Dict[str, Any]]
    turns: List[GatewayTurn]
    completed: bool = False
    stop_reason: Optional[str] = None
    final_response: Optional[Dict[str, Any]] = None

    def step(self, upstream_responder: UpstreamResponder) -> GatewayTurn:
        if self.completed:
            raise ProviderAdapterError("gateway session already completed")

        turn_index = len(self.turns)
        upstream_request = self.adapter.transform_request(self.current_request)
        upstream_response = dict(
            upstream_responder(_clone_jsonish(upstream_request), turn_index)
        )
        downstream_response = self.adapter.transform_response(upstream_response)
        tool_calls = extract_tool_calls_from_provider_response(
            downstream_response,
            provider=self.downstream_provider,
            api_format=self.downstream_format,
        )
        assistant_text = extract_text_from_provider_response(
            downstream_response,
            provider=self.downstream_provider,
            api_format=self.downstream_format,
        )

        dispatch_results: List[Dict[str, Any]] = []
        next_request: Optional[Dict[str, Any]] = None

        if not tool_calls:
            self.completed = True
            self.stop_reason = "assistant_response"
            self.final_response = _clone_jsonish(downstream_response)
        else:
            for call in tool_calls:
                result = dispatch_tool_call(
                    call,
                    context=self.audit_context,
                    registry=self.registry,
                )
                dispatch_results.append(result)
                self.audit_context = list(result["updated_context"])

            next_request = append_tool_results_to_provider_messages(
                request_body=self.current_request,
                response_body=downstream_response,
                dispatch_results=dispatch_results,
                provider=self.downstream_provider,
                api_format=self.downstream_format,
            )
            self.current_request = _clone_jsonish(next_request)

        turn = GatewayTurn(
            turn_index=turn_index,
            upstream_request=_clone_jsonish(upstream_request),
            upstream_response=_clone_jsonish(upstream_response),
            downstream_response=_clone_jsonish(downstream_response),
            tool_calls=_clone_jsonish(tool_calls),
            dispatch_results=_clone_jsonish(dispatch_results),
            assistant_text=assistant_text,
            next_request=_clone_jsonish(next_request) if next_request is not None else None,
        )
        self.turns.append(turn)
        return turn

    def run(self, upstream_responder: UpstreamResponder, *, max_turns: int = 8) -> Dict[str, Any]:
        while len(self.turns) < max_turns and not self.completed:
            self.step(upstream_responder)

        if not self.completed:
            self.stop_reason = "max_turns_exceeded"

        return self.to_dict()

    def to_dict(self) -> Dict[str, Any]:
        return {
            "completed": self.completed,
            "stop_reason": self.stop_reason,
            "adapter": self.adapter.name(),
            "route": self.adapter.route().to_dict(),
            "turns": [_gateway_turn_to_dict(turn) for turn in self.turns],
            "final_request": self.current_request,
            "final_response": self.final_response,
            "runtime_audit_context": self.audit_context,
        }


def create_gateway_session(
    *,
    initial_request: Mapping[str, Any],
    downstream_provider: Provider,
    downstream_format: ApiFormat,
    upstream_provider: Provider,
    upstream_format: ApiFormat,
    registry: Optional[Mapping[str, Handler]] = None,
) -> GatewaySession:
    adapter = get_adapter(
        downstream_provider=downstream_provider,
        downstream_format=downstream_format,
        upstream_provider=upstream_provider,
        upstream_format=upstream_format,
    )
    return GatewaySession(
        adapter=adapter,
        downstream_provider=downstream_provider,
        downstream_format=downstream_format,
        upstream_provider=upstream_provider,
        upstream_format=upstream_format,
        current_request=_clone_jsonish(dict(initial_request)),
        registry=dict(registry or TOOL_REGISTRY),
        audit_context=[],
        turns=[],
    )


class GatewayRuntime:
    """Higher-level teaching runtime that manages gateway sessions and traces."""

    def __init__(
        self,
        *,
        adapters: Optional[Sequence[ProviderAdapter]] = None,
        registry: Optional[Mapping[str, Handler]] = None,
        upstream_responder: Optional[UpstreamResponder] = None,
    ) -> None:
        self.adapters = list(adapters or REGISTERED_ADAPTERS)
        self.adapter_registry = build_adapter_registry(self.adapters)
        self.registry = dict(registry or TOOL_REGISTRY)
        self.upstream_responder = upstream_responder or _demo_upstream_responder
        self._sessions: Dict[int, GatewaySession] = {}
        self._traces: Dict[int, List[TraceEvent]] = {}
        self._next_session_id = 1

    def list_routes(self) -> List[Dict[str, Any]]:
        return [
            {
                "name": adapter.name(),
                "route": adapter.route().to_dict(),
            }
            for adapter in self.adapters
        ]

    def get_adapter(
        self,
        *,
        downstream_provider: Provider,
        downstream_format: ApiFormat,
        upstream_provider: Provider,
        upstream_format: ApiFormat,
    ) -> ProviderAdapter:
        route = AdapterRoute(
            downstream_provider=downstream_provider,
            downstream_format=downstream_format,
            upstream_provider=upstream_provider,
            upstream_format=upstream_format,
        )
        adapter = self.adapter_registry.get(route.key())
        if adapter is None:
            raise ProviderAdapterError(
                "no adapter registered for "
                f"{downstream_provider.value}/{downstream_format.value} -> "
                f"{upstream_provider.value}/{upstream_format.value}"
            )
        return adapter

    def create_session(
        self,
        *,
        initial_request: Mapping[str, Any],
        downstream_provider: Provider,
        downstream_format: ApiFormat,
        upstream_provider: Provider,
        upstream_format: ApiFormat,
    ) -> int:
        adapter = self.get_adapter(
            downstream_provider=downstream_provider,
            downstream_format=downstream_format,
            upstream_provider=upstream_provider,
            upstream_format=upstream_format,
        )
        session_id = self._next_session_id
        self._next_session_id += 1
        session = GatewaySession(
            adapter=adapter,
            downstream_provider=downstream_provider,
            downstream_format=downstream_format,
            upstream_provider=upstream_provider,
            upstream_format=upstream_format,
            current_request=_clone_jsonish(dict(initial_request)),
            registry=self.registry,
            audit_context=[],
            turns=[],
        )
        self._sessions[session_id] = session
        self._traces[session_id] = [
            TraceEvent(
                session_id=session_id,
                kind="session_created",
                turn_index=None,
                payload={
                    "adapter": adapter.name(),
                    "route": adapter.route().to_dict(),
                    "initial_request": _clone_jsonish(dict(initial_request)),
                },
            )
        ]
        return session_id

    def get_session(self, session_id: int) -> GatewaySession:
        session = self._sessions.get(session_id)
        if session is None:
            raise ProviderAdapterError(f"unknown gateway session: {session_id}")
        return session

    def get_trace(self, session_id: int) -> List[Dict[str, Any]]:
        if session_id not in self._traces:
            raise ProviderAdapterError(f"unknown gateway session: {session_id}")
        return [_trace_event_to_dict(event) for event in self._traces[session_id]]

    def _record_trace(
        self,
        session_id: int,
        *,
        kind: str,
        turn_index: Optional[int],
        payload: Mapping[str, Any],
    ) -> None:
        self._traces[session_id].append(
            TraceEvent(
                session_id=session_id,
                kind=kind,
                turn_index=turn_index,
                payload=_clone_jsonish(dict(payload)),
            )
        )

    def step_session(self, session_id: int) -> Dict[str, Any]:
        session = self.get_session(session_id)
        turn_index = len(session.turns)
        self._record_trace(
            session_id,
            kind="turn_started",
            turn_index=turn_index,
            payload={"request": _clone_jsonish(session.current_request)},
        )
        try:
            turn = session.step(self.upstream_responder)
        except Exception as exc:
            self._record_trace(
                session_id,
                kind="turn_failed",
                turn_index=turn_index,
                payload={
                    "error_type": type(exc).__name__,
                    "error": str(exc),
                },
            )
            raise
        self._record_trace(
            session_id,
            kind="turn_completed",
            turn_index=turn.turn_index,
            payload={
                "tool_call_count": len(turn.tool_calls),
                "assistant_text": turn.assistant_text,
                "completed": session.completed,
                "stop_reason": session.stop_reason,
            },
        )
        return {
            "session_id": session_id,
            "turn": _gateway_turn_to_dict(turn),
            "completed": session.completed,
            "stop_reason": session.stop_reason,
        }

    def run_session(self, session_id: int, *, max_turns: int = 8) -> Dict[str, Any]:
        session = self.get_session(session_id)
        starting_turns = len(session.turns)
        while len(session.turns) - starting_turns < max_turns and not session.completed:
            self.step_session(session_id)
        last_kind = self._traces[session_id][-1].kind if self._traces[session_id] else None
        if not session.completed and len(session.turns) - starting_turns >= max_turns:
            session.stop_reason = "max_turns_exceeded"
            if last_kind != "session_stopped":
                self._record_trace(
                    session_id,
                    kind="session_stopped",
                    turn_index=None,
                    payload={"stop_reason": session.stop_reason},
                )
        elif session.completed:
            if last_kind != "session_completed":
                self._record_trace(
                    session_id,
                    kind="session_completed",
                    turn_index=None,
                    payload={"stop_reason": session.stop_reason},
                )

        return {
            "session_id": session_id,
            "session": session.to_dict(),
            "trace": self.get_trace(session_id),
        }


def run_gateway_tool_loop(
    *,
    initial_request: Mapping[str, Any],
    downstream_provider: Provider,
    downstream_format: ApiFormat,
    upstream_provider: Provider,
    upstream_format: ApiFormat,
    upstream_responder: UpstreamResponder,
    registry: Optional[Mapping[str, Handler]] = None,
    max_turns: int = 8,
) -> Dict[str, Any]:
    """Run a full gateway loop until the upstream model stops requesting tools."""

    session = create_gateway_session(
        initial_request=initial_request,
        downstream_provider=downstream_provider,
        downstream_format=downstream_format,
        upstream_provider=upstream_provider,
        upstream_format=upstream_format,
        registry=registry,
    )
    return session.run(upstream_responder, max_turns=max_turns)


def _demo_context() -> List[Dict[str, Any]]:
    return [
        {"role": "system", "content": "You are a coding agent."},
        {"role": "user", "content": "Create a snapshot before editing demo.py."},
    ]


def _demo_call() -> Dict[str, Any]:
    return {
        "id": "call_demo_snapshot",
        "type": "function_call",
        "function": {
            "name": "snapshot_before_edit",
            "arguments": {
                "file_path": str((Path.cwd() / "README.md").resolve()),
                "snapshot_root": str((Path.cwd() / ".snapshots-demo").resolve()),
                "workspace_root": str(Path.cwd().resolve()),
            },
        },
    }


def _demo_claude_request() -> Dict[str, Any]:
    return {
        "model": "claude-sonnet-demo",
        "system": "You are a careful coding assistant.",
        "messages": [
            {"role": "user", "content": "Find TODO comments under src."},
            {
                "role": "assistant",
                "content": [
                    {"type": "text", "text": "I will inspect the repo with ripgrep."},
                    {
                        "type": "tool_use",
                        "id": "toolu_demo_exec",
                        "name": "functions.exec_command",
                        "input": {
                            "cmd": "rg -n TODO src",
                            "workdir": "/workspace/demo",
                            "yield_time_ms": 250,
                        },
                    },
                ],
            },
            {
                "role": "user",
                "content": [
                    {
                        "type": "tool_result",
                        "tool_use_id": "toolu_demo_exec",
                        "content": "src/app.py:10:# TODO refactor parser",
                    }
                ],
            },
        ],
        "tools": [
            {
                "name": "functions.exec_command",
                "description": "Execute a shell command in the workspace.",
                "input_schema": {
                    "type": "object",
                    "properties": {
                        "cmd": {"type": "string"},
                        "workdir": {"type": "string"},
                        "yield_time_ms": {"type": "integer"},
                    },
                    "required": ["cmd"],
                },
            }
        ],
    }


def _demo_openai_response() -> Dict[str, Any]:
    return {
        "id": "chatcmpl_demo_001",
        "model": "gpt-4.1-demo",
        "choices": [
            {
                "finish_reason": "tool_calls",
                "message": {
                    "role": "assistant",
                    "content": "I found one TODO and will surface it.",
                    "tool_calls": [
                        {
                            "id": "call_demo_exec",
                            "type": "function",
                            "function": {
                                "name": "functions.exec_command",
                                "arguments": _json_compact(
                                    {
                                        "cmd": "printf 'src/app.py:10:# TODO refactor parser\\n'",
                                        "workdir": str(Path.cwd().resolve()),
                                        "yield_time_ms": 250,
                                    }
                                ),
                            },
                        }
                    ],
                },
            }
        ],
        "usage": {
            "prompt_tokens": 120,
            "completion_tokens": 18,
            "total_tokens": 138,
        },
    }


def _demo_claude_loop_request() -> Dict[str, Any]:
    return {
        "model": "claude-sonnet-demo",
        "system": "You are a careful coding assistant.",
        "messages": [
            {"role": "user", "content": "Find TODO comments under src."},
        ],
        "tools": [
            {
                "name": "functions.exec_command",
                "description": "Execute a shell command in the workspace.",
                "input_schema": {
                    "type": "object",
                    "properties": {
                        "cmd": {"type": "string"},
                        "workdir": {"type": "string"},
                        "yield_time_ms": {"type": "integer"},
                    },
                    "required": ["cmd"],
                },
            }
        ],
    }


def _demo_gemini_request() -> Dict[str, Any]:
    return {
        "model": "gemini-2.5-flash-demo",
        "contents": [
            {
                "role": "user",
                "parts": [
                    {
                        "text": "Summarize the repository structure and suggest a first search command."
                    }
                ],
            }
        ],
        "tools": [
            {
                "functionDeclarations": [
                    {
                        "name": "functions.exec_command",
                        "description": "Execute a shell command in the current workspace.",
                        "parameters": {
                            "type": "object",
                            "properties": {
                                "cmd": {"type": "string"},
                            },
                            "required": ["cmd"],
                        },
                    }
                ]
            }
        ],
    }


def _demo_gemini_response() -> Dict[str, Any]:
    return {
        "candidates": [
            {
                "content": {
                    "role": "model",
                    "parts": [
                        {
                            "text": "Start with `rg --files src` to inspect the repository quickly."
                        }
                    ],
                },
                "finishReason": "STOP",
            }
        ],
        "usageMetadata": {
            "promptTokenCount": 41,
            "candidatesTokenCount": 17,
            "totalTokenCount": 58,
        },
    }


def _demo_upstream_responder(
    transformed_request: Dict[str, Any],
    turn_index: int,
) -> Mapping[str, Any]:
    if turn_index == 0:
        return _demo_openai_response()

    messages = transformed_request.get("messages", [])
    tool_summary = "No tool output received."
    for message in messages:
        if isinstance(message, dict) and message.get("role") == "tool":
            tool_summary = str(message.get("content"))

    return {
        "id": "chatcmpl_demo_002",
        "model": "gpt-4.1-demo",
        "choices": [
            {
                "finish_reason": "stop",
                "message": {
                    "role": "assistant",
                    "content": f"Based on the tool result, here is the answer:\n{tool_summary}",
                },
            }
        ],
        "usage": {
            "prompt_tokens": 182,
            "completion_tokens": 28,
            "total_tokens": 210,
        },
    }


def run_dispatch_demo() -> Dict[str, Any]:
    return dispatch_tool_call(_demo_call(), context=_demo_context())


def run_cc_switch_adapter_demo() -> Dict[str, Any]:
    return transform_gateway_exchange(
        downstream_provider=Provider.CLAUDE,
        downstream_format=ApiFormat.ANTHROPIC_MESSAGES,
        upstream_provider=Provider.OPENAI,
        upstream_format=ApiFormat.OPENAI_CHAT_COMPLETIONS,
        request_body=_demo_claude_request(),
        response_body=_demo_openai_response(),
    )


def run_gemini_passthrough_demo() -> Dict[str, Any]:
    return transform_gateway_exchange(
        downstream_provider=Provider.GEMINI,
        downstream_format=ApiFormat.GEMINI_CONTENTS,
        upstream_provider=Provider.GEMINI,
        upstream_format=ApiFormat.GEMINI_CONTENTS,
        request_body=_demo_gemini_request(),
        response_body=_demo_gemini_response(),
    )


def run_registry_demo() -> Dict[str, Any]:
    selected = get_adapter(
        downstream_provider=Provider.CLAUDE,
        downstream_format=ApiFormat.ANTHROPIC_MESSAGES,
        upstream_provider=Provider.OPENAI,
        upstream_format=ApiFormat.OPENAI_CHAT_COMPLETIONS,
    )
    return {
        "registered_adapters": list_registered_adapters(),
        "selected_adapter": {
            "name": selected.name(),
            "route": selected.route().to_dict(),
        },
    }


def run_gateway_loop_demo() -> Dict[str, Any]:
    return run_gateway_tool_loop(
        initial_request=_demo_claude_loop_request(),
        downstream_provider=Provider.CLAUDE,
        downstream_format=ApiFormat.ANTHROPIC_MESSAGES,
        upstream_provider=Provider.OPENAI,
        upstream_format=ApiFormat.OPENAI_CHAT_COMPLETIONS,
        upstream_responder=_demo_upstream_responder,
    )


def run_gateway_session_demo() -> Dict[str, Any]:
    session = create_gateway_session(
        initial_request=_demo_claude_loop_request(),
        downstream_provider=Provider.CLAUDE,
        downstream_format=ApiFormat.ANTHROPIC_MESSAGES,
        upstream_provider=Provider.OPENAI,
        upstream_format=ApiFormat.OPENAI_CHAT_COMPLETIONS,
    )
    session.run(_demo_upstream_responder, max_turns=8)
    return session.to_dict()


def run_gateway_runtime_demo() -> Dict[str, Any]:
    runtime = GatewayRuntime(upstream_responder=_demo_upstream_responder)
    session_id = runtime.create_session(
        initial_request=_demo_claude_loop_request(),
        downstream_provider=Provider.CLAUDE,
        downstream_format=ApiFormat.ANTHROPIC_MESSAGES,
        upstream_provider=Provider.OPENAI,
        upstream_format=ApiFormat.OPENAI_CHAT_COMPLETIONS,
    )
    run_result = runtime.run_session(session_id, max_turns=8)
    return {
        "routes": runtime.list_routes(),
        **run_result,
    }


def run_all_demos() -> Dict[str, Any]:
    return {
        "dispatch_demo": run_dispatch_demo(),
        "registry_demo": run_registry_demo(),
        "cc_switch_adapter_demo": run_cc_switch_adapter_demo(),
        "gemini_passthrough_demo": run_gemini_passthrough_demo(),
        "gateway_loop_demo": run_gateway_loop_demo(),
        "gateway_session_demo": run_gateway_session_demo(),
        "gateway_runtime_demo": run_gateway_runtime_demo(),
    }


def main(argv: Optional[Sequence[str]] = None) -> int:
    parser = argparse.ArgumentParser(
        description="Teaching demo for tool envelope dispatch and cc-switch-style provider transforms."
    )
    parser.add_argument(
        "demo",
        nargs="?",
        choices=["dispatch", "adapter", "registry", "gemini", "loop", "session", "runtime", "all"],
        default="all",
        help="Which demo payload to print.",
    )
    args = parser.parse_args(argv)

    if args.demo == "dispatch":
        payload = run_dispatch_demo()
    elif args.demo == "adapter":
        payload = run_cc_switch_adapter_demo()
    elif args.demo == "registry":
        payload = run_registry_demo()
    elif args.demo == "gemini":
        payload = run_gemini_passthrough_demo()
    elif args.demo == "loop":
        payload = run_gateway_loop_demo()
    elif args.demo == "session":
        payload = run_gateway_session_demo()
    elif args.demo == "runtime":
        payload = run_gateway_runtime_demo()
    else:
        payload = run_all_demos()

    print(json.dumps(payload, ensure_ascii=False, indent=2))
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
python

If you want the most essential reading order for this code, I suggest going through it like this:

  1. Start with NormalizedToolCall and normalize_tool_call() to see how the first layer flattens the different envelope shapes into one
  2. Then read dispatch_tool_call() to see how the local handlers are orchestrated end to end
  3. Then read ProviderAdapter, AdapterRoute, and get_adapter() to see how the second layer picks a transformer by route
  4. Finally read run_gateway_tool_loop(), which upgrades the "static transform" into a runtime that drives multi-turn tool loops (a minimal driver sketch follows below)

By this point it becomes clear: what this demo really wants to convey is not the field-level details of any single provider, but how to compress heterogeneous protocols into a clean boundary layer.
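To make that concrete, here is a minimal driver sketch. It assumes the code above is saved as a module named tool_dispatch_demo (the module name is my assumption); it simply wires the demo request and the canned responder into run_gateway_tool_loop.

# Minimal sketch: drive the gateway tool loop with the demo fixtures.
# Assumption: the code above lives in tool_dispatch_demo.py (hypothetical name).
from tool_dispatch_demo import (
    ApiFormat,
    Provider,
    run_gateway_tool_loop,
    _demo_claude_loop_request,
    _demo_upstream_responder,
)

result = run_gateway_tool_loop(
    initial_request=_demo_claude_loop_request(),        # downstream speaks Anthropic Messages
    downstream_provider=Provider.CLAUDE,
    downstream_format=ApiFormat.ANTHROPIC_MESSAGES,
    upstream_provider=Provider.OPENAI,                   # upstream speaks chat-completions
    upstream_format=ApiFormat.OPENAI_CHAT_COMPLETIONS,
    upstream_responder=_demo_upstream_responder,         # stands in for a real upstream HTTP call
    max_turns=4,
)

# With the demo responder, turn 0 should carry one exec_command tool call
# and turn 1 should be a plain-text answer, so the loop stops on its own.
print(result["stop_reason"])
print([len(turn["tool_calls"]) for turn in result["turns"]])
python

The point of the sketch is that the caller only chooses a route and supplies a responder; everything else (transform, extract, dispatch, append) happens inside the loop.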


22. Summary#

To put it in the plainest possible terms:

The front of house takes orders in many different dialects. The gateway first translates the whole ticket, then pulls the individual tool calls out of it, rewrites them in the house's internal terms, and hands them to the kitchen. The kitchen should never need to care whether the order came from Claude or from OpenAI; it only needs the dish name, the spec, and the table number. That is the core of the two-layer runtime.

Compressed into a set of design principles (a small route-registration sketch follows the table):

| Principle | Where it shows up |
| --- | --- |
| Polymorphic input, monomorphic core | The outside may speak many formats; the inside recognizes only one |
| Absorb complexity at the boundary | Tool-envelope differences converge in normalize_tool_call; whole-payload provider differences converge in the adapter layer |
| Organize around semantics, not representation | Extract the unified semantics of "which tool is called, with what arguments" |
| Concentrate the points of change | A new envelope variant touches normalize / build_result; a new route touches the adapter registry |
| Fail early | Format errors are caught at the normalize stage |
| Abstract without losing fidelity | NormalizedToolCall keeps raw_envelope |
| Symmetric in and out at each boundary | The local layer has normalize/build_result; the gateway layer has transform_request/transform_response |
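Here is what "concentrate the points of change" looks like in practice: a sketch, again assuming the code above is importable as tool_dispatch_demo, that adds one more passthrough route. The OpenAI Responses passthrough shown here is hypothetical and not one of the registered adapters.

# Sketch: adding a new route only touches the adapter registry.
# Assumptions: module name tool_dispatch_demo and the passthrough route are illustrative.
from tool_dispatch_demo import (
    AdapterRoute,
    ApiFormat,
    IdentityAdapter,
    Provider,
    REGISTERED_ADAPTERS,
    build_adapter_registry,
)

responses_passthrough = IdentityAdapter(
    adapter_name="openai-responses-passthrough",
    adapter_route=AdapterRoute(
        downstream_provider=Provider.OPENAI,
        downstream_format=ApiFormat.OPENAI_RESPONSES,
        upstream_provider=Provider.OPENAI,
        upstream_format=ApiFormat.OPENAI_RESPONSES,
    ),
)

# normalize / dispatch / build_result stay untouched; only the registry grows.
registry = build_adapter_registry([*REGISTERED_ADAPTERS, responses_passthrough])
print(sorted(adapter.name() for adapter in registry.values()))
python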

The true worldview of this runtime is not OpenAI, not Claude, and not Codex. Its true worldview is: a single call has one standard internal representation, a whole request travels one explicit route, and a provider is just a dialect spoken at the boundary.

Absorb heterogeneity at the system boundary; maintain a single source of truth inside the system.

