# LangBot

LangBot is open-source AI application delivery infrastructure for instant messaging platforms. It connects AI applications, agents, and workflows built with Dify, Coze, n8n, OpenAI-compatible models, Claude, DeepSeek, Gemini, Qwen, Ollama, and custom tools to real users on Discord, Slack, Telegram, WeChat, WeCom, QQ, Lark, DingTalk, LINE, KOOK, and more.

## What LangBot is for

- Deploy AI bots to multiple messaging platforms from a single installation.
- Connect no-code and low-code AI builders such as Dify, Coze, n8n, FastGPT, and model-provider APIs to production chat channels.
- Extend bot behavior with a process-isolated Python plugin SDK.
- Use LangBot Cloud for managed hosting or self-host LangBot with Docker.
- Build enterprise-grade bot middleware with access control, observability, knowledge base integrations, and plugin extensibility.

## Developer and agent resources

- Developer guide: https://langbot.app/developers
- Agent summary: https://langbot.app/llms.txt
- Full agent context: https://langbot.app/llms-full.txt
- OpenAPI overview: https://langbot.app/openapi.json
- A2A agent card: https://langbot.app/.well-known/agent-card.json
- MCP discovery: https://langbot.app/.well-known/mcp
- MCP server card: https://langbot.app/.well-known/mcp/server-card.json
- API catalog: https://langbot.app/.well-known/api-catalog
- Documentation: https://docs.langbot.app
- GitHub: https://github.com/langbot-app/LangBot
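As a sketch of how an agent might locate the discovery documents listed above: the helper below only builds URLs from the well-known paths named in this list; the helper itself (names, structure) is illustrative, and fetching/parsing is left to the caller's HTTP client.

```python
# Sketch: building discovery URLs from the well-known paths listed above.
# Only the paths come from this document; the helper itself is illustrative.
from urllib.parse import urljoin

WELL_KNOWN_PATHS = {
    "agent_card": "/.well-known/agent-card.json",        # A2A agent card
    "mcp": "/.well-known/mcp",                           # MCP discovery
    "mcp_server_card": "/.well-known/mcp/server-card.json",
    "api_catalog": "/.well-known/api-catalog",
}


def well_known_url(base: str, key: str) -> str:
    """Resolve a well-known discovery path against a LangBot base URL."""
    return urljoin(base, WELL_KNOWN_PATHS[key])
```

For example, `well_known_url("https://langbot.app", "agent_card")` resolves to `https://langbot.app/.well-known/agent-card.json`.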

## Auth, limits, and error handling

LangBot Space uses browser login, OAuth/device authorization, and bearer/API-key access where enabled; self-hosted LangBot deployments are operator-controlled. Agents should prefer JSON APIs where available, expect structured JSON error responses, back off on HTTP 429, and respect `Retry-After` and rate-limit headers when present. Streaming model output is supported when the configured runner, provider, and messaging adapter all support it; integrations should also tolerate non-streaming final responses.
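A minimal sketch of the client behavior described above, using only the Python standard library. The bearer-token usage and the HTTP 429 / `Retry-After` handling follow the guidance in this section; the endpoint URL, token, and retry parameters are hypothetical, not part of a documented LangBot API.

```python
# Hedged sketch: JSON GET with bearer auth that backs off on HTTP 429.
# Endpoint, token, and retry limits are illustrative assumptions.
import json
import time
import urllib.error
import urllib.request
from typing import Optional


def backoff_delay(attempt: int, retry_after: Optional[str],
                  base: float = 1.0, cap: float = 60.0) -> float:
    """Honor a numeric Retry-After when present; else exponential backoff."""
    if retry_after is not None:
        try:
            return min(float(retry_after), cap)
        except ValueError:
            pass  # Retry-After may be an HTTP-date; fall back to backoff
    return min(base * (2 ** attempt), cap)


def get_json(url: str, token: str, max_retries: int = 5) -> dict:
    """GET a JSON resource, retrying on 429 and raising on other errors."""
    for attempt in range(max_retries):
        req = urllib.request.Request(
            url, headers={"Authorization": f"Bearer {token}"}
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)  # expect a structured JSON body
        except urllib.error.HTTPError as err:
            if err.code == 429 and attempt < max_retries - 1:
                time.sleep(backoff_delay(attempt, err.headers.get("Retry-After")))
                continue
            raise  # surface structured JSON errors to the caller
    raise RuntimeError("retries exhausted")
```

The same `backoff_delay` helper can be reused for any rate-limited endpoint: it prefers the server-provided hint and only falls back to capped exponential backoff when the header is absent or non-numeric.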
