KaibanJS Team as an OpenClaw Agent via the OpenResponses API

Discover how to expose any KaibanJS multi-agent team as an OpenClaw agent backend using the OpenResponses API specification — delivering the full power of KaibanJS orchestration directly to WhatsApp, Telegram, Discord, and every messaging channel OpenClaw supports.

KaibanJS Team integrated with OpenClaw messaging gateway via the OpenResponses API specification

OpenClaw: The Messaging Gateway for AI Developers

OpenClaw is a powerful messaging gateway that connects AI agents to the channels where users already live — WhatsApp, Telegram, Discord, and more — through a single, unified interface. Rather than building a separate bot for each platform, developers configure OpenClaw once and let it handle the channel-specific complexity: webhooks, auth, message formatting, and rate limits.

OpenClaw natively speaks the OpenResponses API specification — an open standard modeled after the OpenAI Responses API — making it trivial to swap the AI backend behind any agent. This is where KaibanJS changes the game: instead of a single LLM call, you can plug in a full multi-agent pipeline that researches, writes, reviews, and delivers structured outputs — all transparently, from the user's perspective on any channel.

For KaibanJS developers, OpenClaw removes the last barrier between a workflow and real-world users. Your teams no longer live only in scripts or web UIs — they become first-class conversational agents on the platforms your users trust every day.

The Challenge: Bringing KaibanJS Workflows to Messaging Channels

Without this integration, deploying a KaibanJS team to WhatsApp or Telegram requires solving hard, unrelated problems before any real AI work begins.

Per-Platform Boilerplate

WhatsApp, Telegram, and Discord each have their own SDK, webhook format, and auth model. Maintaining separate integrations for every channel is a significant engineering burden.

Single LLM Ceiling

Standard AI gateways route messages to a single model call. KaibanJS multi-agent pipelines — with research, writing, and review stages — have no standard way to plug in as a backend.

Workflow Reuse Problem

KaibanJS teams built for one context — a web app or a script — cannot be easily reused across channels without custom adapters, duplicating effort and splitting maintenance.

Streaming Complexity

Implementing Server-Sent Events, managing partial token delivery, and handling streaming error states correctly across different clients is non-trivial to build from scratch.

The Solution: KaibanJS as an OpenResponses Backend

The OpenClaw–KaibanJS adapter is a lightweight Express.js server that implements POST /v1/responses following the OpenResponses specification. OpenClaw treats it as a custom model provider — routing user messages from any channel directly into a KaibanJS team.start() call and returning the result as a properly formatted response.
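To make the adapter's surface concrete, here is a dependency-free sketch using Node's built-in `http` module in place of Express, so it runs without any npm installs. It exposes the two routes described above: `POST /v1/responses` (delegating to a pluggable `runTeam` callback standing in for the KaibanJS `team.start()` call) and `GET /health`. The `createAdapter` name, error shape, and response field names are illustrative assumptions to check against the actual adapter and the OpenResponses spec.

```typescript
import http from "node:http";

type RunTeam = (userMessage: string) => Promise<string>;

export function createAdapter(secret: string, runTeam: RunTeam): http.Server {
  return http.createServer(async (req, res) => {
    // Health probe for monitoring / smoke tests.
    if (req.method === "GET" && req.url === "/health") {
      res.writeHead(200, { "content-type": "application/json" });
      return res.end(JSON.stringify({ status: "ok" }));
    }
    if (req.method === "POST" && req.url === "/v1/responses") {
      // Bearer-token check against the shared secret.
      if (req.headers.authorization !== `Bearer ${secret}`) {
        res.writeHead(401, { "content-type": "application/json" });
        return res.end(JSON.stringify({ error: { message: "unauthorized" } }));
      }
      let raw = "";
      for await (const chunk of req) raw += chunk;
      try {
        const body = JSON.parse(raw);
        // Hand the user's text to the multi-agent workflow.
        const text = await runTeam(typeof body.input === "string" ? body.input : "");
        res.writeHead(200, { "content-type": "application/json" });
        return res.end(
          JSON.stringify({
            object: "response",
            status: "completed",
            output: [
              {
                type: "message",
                role: "assistant",
                content: [{ type: "output_text", text }],
              },
            ],
          })
        );
      } catch {
        res.writeHead(500, { "content-type": "application/json" });
        return res.end(JSON.stringify({ error: { message: "workflow failed" } }));
      }
    }
    res.writeHead(404).end();
  });
}
```

The real adapter uses Express, but the request/response contract is the same: one authenticated POST route in, one OpenResponses-shaped JSON object out.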

OpenResponses Compliance

The adapter fully implements the OpenResponses spec: request normalization, structured JSON responses, SSE streaming events, and OpenClaw-compatible error codes — zero custom protocol work needed.

Drop-in Team Replacement

The default Content Creation Team (ResearchBot → WriterBot → ReviewBot) lives in src/team/index.ts. Swap it with any KaibanJS Team factory to instantly deploy a different workflow to OpenClaw.

Streaming SSE Support

When OpenClaw requests stream: true, the adapter emits the full OpenResponses SSE event sequence — from response.created through response.completed — enabling real-time token delivery.
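The SSE framing itself is simple text: each event is an `event:` line plus a `data:` line carrying a JSON payload, terminated by a blank line. The sketch below shows that framing for the event names mentioned above; the event sequence is trimmed to three for illustration, and the full OpenResponses sequence includes more event types than shown here.

```typescript
// Format one Server-Sent Event frame: event name, JSON data, blank-line terminator.
function sseEvent(event: string, data: object): string {
  return `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`;
}

// An abbreviated event sequence from creation to completion.
function streamSequence(finalText: string): string[] {
  return [
    sseEvent("response.created", { type: "response.created" }),
    sseEvent("response.output_text.delta", { delta: finalText }),
    sseEvent("response.completed", { type: "response.completed" }),
  ];
}
```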

Secure Authentication

The adapter validates every incoming request with a Bearer token via KAIBAN_OPENRESPONSES_SECRET. The same secret is configured as the apiKey in OpenClaw's model provider entry.
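The token check reduces to a small pure function. This is an illustrative sketch, not the adapter's actual code: it accepts only an `Authorization` header of the exact form `Bearer <secret>`.

```typescript
// Return true only when the header carries the exact shared secret.
function checkAuth(authHeader: string | undefined, secret: string): boolean {
  if (!authHeader || !authHeader.startsWith("Bearer ")) return false;
  const token = authHeader.slice("Bearer ".length).trim();
  return token.length > 0 && token === secret;
}
```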

Integration Architecture

End-to-end message flow from a user on any messaging channel to a KaibanJS multi-agent team and back

1. User Sends a Message

A user sends a message on WhatsApp, Telegram, or Discord. OpenClaw receives it through its platform-specific webhook and normalizes it into an OpenResponses request.

2. OpenClaw Calls the Adapter

OpenClaw resolves the configured model kaiban-adapter/kaiban and calls POST /v1/responses on the KaibanJS adapter with an Authorization: Bearer header.

3. Adapter Extracts & Routes

The adapter normalizes the request body (handling OpenClaw's wrapper format), extracts the user's text from the input field, and creates a fresh KaibanJS Team instance with the message as the topic input.
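This normalization step can be sketched as two pure functions. The shapes below are assumptions: OpenClaw may wrap the OpenResponses payload in a `{ body: ... }` envelope, and `input` may be either a plain string or an array of role/content messages. The real `normalizeBody` and `extractUserMessage` in the adapter may differ in detail.

```typescript
type Message = {
  role: string;
  content: string | { type: string; text?: string }[];
};

// Unwrap a possible { body: {...} } envelope; pass plain payloads through.
function normalizeBody(raw: any): any {
  return raw && typeof raw === "object" && "body" in raw ? raw.body : raw;
}

// Accept both a plain string and a structured message array,
// returning the text of the most recent user message.
function extractUserMessage(input: string | Message[]): string {
  if (typeof input === "string") return input;
  const lastUser = [...input].reverse().find((m) => m.role === "user");
  if (!lastUser) return "";
  if (typeof lastUser.content === "string") return lastUser.content;
  return lastUser.content.map((p) => p.text ?? "").join("");
}
```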

4. KaibanJS Team Executes

The full multi-agent pipeline runs: ResearchBot gathers information, WriterBot produces the content, ReviewBot validates quality. Any KaibanJS team can be used in place of this default.

5. Response Delivered to User

The adapter formats the team result into a valid OpenResponses JSON object (or SSE stream), returns it to OpenClaw, and the reply reaches the user on their original messaging channel.
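Shaping the team result into a response object might look like the sketch below. Field names follow the OpenAI Responses API shape (`id`, `object`, `status`, and an `output` array carrying `output_text` content); treat the exact schema as an assumption to verify against the OpenResponses specification rather than the adapter's literal output.

```typescript
// Wrap plain text from the team into an OpenResponses-style JSON object.
function toResponsesJson(text: string, model: string) {
  return {
    id: `resp_${Date.now().toString(36)}`,
    object: "response",
    model,
    status: "completed",
    output: [
      {
        type: "message",
        role: "assistant",
        content: [{ type: "output_text", text }],
      },
    ],
  };
}
```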

Key Components

  • Express.js Adapter Server: Listens on POST /v1/responses with Bearer token auth; health check on GET /health
  • Request Normalizer: normalizeBody unwraps OpenClaw's body wrapper; extractUserMessage handles both plain strings and structured message arrays
  • KaibanJS Team Factory: Default Content Creation Team in src/team/index.ts — swappable with any other KaibanJS team definition
  • SSE Streaming: Full OpenResponses event sequence (response.created through response.completed) via src/sse.ts
  • Error Handling: Distinguishes BLOCKED (422) from ERRORED (500); extracts real error messages from KaibanJS workflowLogs
  • TypeScript + ES Modules: Built with strict TypeScript, run via tsx for zero-build development
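The error-handling component above maps KaibanJS workflow outcomes to HTTP status codes. A minimal sketch of that mapping, as described in this document: BLOCKED workflows become 422 (the request was understood but could not be fulfilled) and ERRORED workflows become 500. The function name is illustrative.

```typescript
// Map a KaibanJS workflow status to the HTTP status the adapter returns.
function statusToHttp(workflowStatus: string): number {
  switch (workflowStatus) {
    case "FINISHED":
      return 200; // successful run, result returned as a response
    case "BLOCKED":
      return 422; // workflow could not proceed with the given input
    case "ERRORED":
      return 500; // internal failure inside the pipeline
    default:
      return 500; // unknown terminal states are treated as server errors
  }
}
```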

Default Team & How to Replace It

The playground ships with a ready-to-run Content Creation Team. Because the team definition is isolated in a single file, any KaibanJS workflow can be dropped in without touching the adapter logic.

Default: Content Creation Team

A sequential 3-agent pipeline that turns any user topic into a researched, written, and reviewed piece of content:

  1. ResearchBot — Research Specialist: gathers and analyzes information about the given topic using the configured LLM
  2. WriterBot — Content Writer: transforms research findings into polished, engaging content
  3. ReviewBot — Quality Reviewer: validates the output against quality standards and refines the final result

The user's message is injected as { topic: userMessage } into the team inputs. Powered by OPENAI_API_KEY.
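The call pattern around the team can be sketched without importing KaibanJS itself. In the sketch below, `TeamLike` and `runTopic` are illustrative stand-ins: the user's message becomes the team's `topic` input, `start()` resolves with a workflow result, and only a FINISHED run produces a reply.

```typescript
// Minimal structural stand-in for a KaibanJS Team, for illustration only.
interface TeamLike {
  start(inputs: Record<string, string>): Promise<{ status: string; result: unknown }>;
}

// Run the workflow with the user's message as the topic and return its text result.
async function runTopic(team: TeamLike, userMessage: string): Promise<string> {
  const output = await team.start({ topic: userMessage });
  if (output.status !== "FINISHED") {
    throw new Error(`Workflow ended with status ${output.status}`);
  }
  return typeof output.result === "string"
    ? output.result
    : JSON.stringify(output.result);
}
```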

Bring Your Own KaibanJS Team

The adapter is intentionally decoupled from the team definition. To use a different workflow:

  1. Open src/team/index.ts and replace the exported createTeam factory with your own KaibanJS Team definition
  2. Ensure your team accepts the relevant input(s) injected from the user message. Update the extractUserMessage mapping in src/adapter.ts if you need to reshape the input
  3. Add any additional environment variables your team requires to .env and restart the adapter; no other changes are needed

Any KaibanJS team works here: airline reaccommodation, revenue management, RAG pipelines, or any custom workflow you have already built.

Demo: KaibanJS Team Answering via OpenClaw

Watch how a KaibanJS multi-agent team powers an OpenClaw agent, delivering orchestrated responses to a messaging channel through the OpenResponses API adapter

Demo: KaibanJS Content Creation Team via OpenClaw

See the full flow: a message sent on a messaging channel travels through OpenClaw, hits the KaibanJS OpenResponses adapter, triggers the multi-agent pipeline, and returns a structured response — all without any per-channel custom code.

View Source Code on GitHub

OpenClaw Configuration Reference

Registering KaibanJS as a custom model provider in OpenClaw requires a single config block in ~/.openclaw/openclaw.json

Model Provider Registration

Add the following block under models.providers in your OpenClaw config. Use models.mode: "merge" to keep existing providers intact:

{
  models: {
    mode: "merge",
    providers: {
      "kaiban-adapter": {
        baseUrl: "http://localhost:3100/v1",
        apiKey: "${KAIBAN_OPENRESPONSES_SECRET}",
        api: "openai-responses",
        models: [{
          id: "kaiban",
          name: "KaibanJS Team",
          reasoning: false,
          input: ["text"],
          cost: { input: 0, output: 0 },
          contextWindow: 128000,
          maxTokens: 32000
        }]
      }
    }
  }
}

Important: baseUrl must include /v1 — OpenClaw automatically appends /responses.
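A tiny sketch makes the gotcha above concrete: since OpenClaw appends /responses to baseUrl, only a baseUrl ending in /v1 resolves to the adapter's actual route. The helper name is hypothetical, not OpenClaw's internal code.

```typescript
// Mimic OpenClaw's endpoint resolution: append "/responses" to the baseUrl.
function resolveEndpoint(baseUrl: string): string {
  return baseUrl.replace(/\/+$/, "") + "/responses";
}
```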

Agent Configuration

Reference the provider using the providerId/modelId format. Set a generous timeoutSeconds since multi-agent pipelines take longer than a single LLM call:

{
  agents: {
    list: [{
      id: "kaiban-team",
      default: true,
      model: "kaiban-adapter/kaiban"
    }],
    defaults: {
      model: {
        primary: "kaiban-adapter/kaiban"
      },
      timeoutSeconds: 600
    }
  }
}

Gotcha: Do NOT add provider, endpoint, or auth inside agents.list[] — those cause "Unrecognized keys" errors. The model backend is configured exclusively through models.providers.

Getting Started

Run the KaibanJS OpenClaw adapter locally in four steps and start delivering multi-agent responses on any messaging channel

1. Clone and Install

git clone https://github.com/kaiban-ai/KaibanJS.git
cd KaibanJS/playground/openclaw-openresponses
npm install

2. Configure Environment

cp .env.example .env
# Then set (generate the secret with: openssl rand -base64 32):
PORT=3100
KAIBAN_OPENRESPONSES_SECRET=$(openssl rand -base64 32)
OPENAI_API_KEY=sk-...

3. Register in OpenClaw

Add the kaiban-adapter model provider to your ~/.openclaw/openclaw.json as shown in the configuration reference above. Set timeoutSeconds: 600 in agent defaults.

4. Start and Test

npm run dev

Send a message on any OpenClaw-connected channel. The adapter starts on port 3100 by default. Verify it is running with GET /health.

Why OpenClaw Is a Strategic Integration for KaibanJS

The OpenClaw integration expands where KaibanJS workflows can live — and who can benefit from them

Reach Billions of Users

WhatsApp has over 2 billion monthly active users. OpenClaw makes every KaibanJS team immediately accessible on the world's most popular messaging platforms — with no extra channel code.

Orchestration as a First-Class Backend

The OpenResponses spec was designed for single models. This adapter proves that a full multi-agent workflow can satisfy the same contract — elevating KaibanJS to a drop-in replacement for any OpenAI-compatible backend.

Protocol Interoperability

Alongside A2A and MCP integrations, OpenResponses is another open standard that KaibanJS now speaks fluently — cementing its position in the emerging ecosystem of interoperable AI tools.

Zero Lock-in Architecture

The swappable team file means developers can evolve their KaibanJS workflow independently of the OpenClaw configuration. Change your pipeline; the gateway adapter stays the same.

Production-Ready Pattern

Bearer token auth, structured error handling (BLOCKED vs ERRORED), health endpoint, and SSE streaming make this adapter a production-grade template — not just a proof of concept.

Growing Ecosystem Signal

Each integration — A2A, MCP, OpenResponses — signals to the developer community that KaibanJS is a serious, protocol-first framework ready for enterprise and production deployments.

Ready to Deploy Your KaibanJS Team to Every Messaging Channel?

The OpenClaw adapter pattern works with any KaibanJS team. Build your multi-agent workflow once and deliver it to WhatsApp, Telegram, Discord, and beyond — through the open OpenResponses specification.

GitHub Stars

We’re almost there! 🌟 Help us hit 100 stars!

Star KaibanJS - Only 100 to go! ⭐