What Is MCP?
The Model Context Protocol (MCP) is an open standard for connecting AI models to external tools and data sources. Instead of building custom API integrations for every AI platform, MCP provides a single protocol that any compliant client can use.
For brand governance, this changes everything.
The Integration Problem
Before MCP, connecting brand governance to AI workflows meant:
- Building custom middleware for each AI provider
- Maintaining separate integration code for Claude, GPT-4, Gemini, and every new model
- Hoping that each integration correctly enforced all your rules
- Having no standard way to audit decisions across providers
How Torobari Uses MCP
The Torobari MCP server exposes three core tools that any MCP-compatible AI agent can call:
torobari_check_output
Submit any AI-generated text for enforcement evaluation. Returns a PASS, BLOCK, or ESCALATE decision with full rule-level detail.
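A minimal sketch of how a caller might handle the result. The field names beyond the PASS/BLOCK/ESCALATE decision (such as `ruleResults`) are illustrative assumptions, not the documented response schema:

```python
# Hypothetical shape of a torobari_check_output result.
# Only the PASS/BLOCK/ESCALATE decision values come from the docs;
# the other fields are assumed for illustration.
decision = {
    "decision": "BLOCK",
    "primaryReason": "tone_violation",            # assumed value
    "ruleResults": [                              # assumed per-rule detail
        {"ruleId": "tone-01", "status": "FAIL"},
        {"ruleId": "claims-02", "status": "PASS"},
    ],
}

def should_publish(decision: dict) -> bool:
    """Only an explicit PASS allows the content through."""
    return decision["decision"] == "PASS"
```

Treating anything other than an explicit PASS as non-publishable keeps the caller aligned with the fail-closed posture described below.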
torobari_get_brand_rules
Fetch your active enforcement ruleset. AI agents can use this to understand your brand constraints before generating content, reducing the number of blocked outputs.
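One way an agent can use the fetched ruleset is to fold it into its generation prompt before producing content. The rule structure below is an assumption for illustration, not the documented schema:

```python
# Illustrative: turn a fetched ruleset into prompt guidance so the
# model generates compliant content on the first attempt.
# The rule fields (id, description) are assumed, not documented.
rules = [
    {"id": "tone-01", "description": "Use sentence case in headlines."},
    {"id": "claims-02", "description": "Never promise specific ROI figures."},
]

def rules_to_prompt(rules: list[dict]) -> str:
    """Render brand rules as a constraint preamble for generation."""
    lines = [f"- {r['description']} (rule {r['id']})" for r in rules]
    return "Follow these brand rules:\n" + "\n".join(lines)
```

Generating against the rules up front does not replace the enforcement check, but it cuts down on blocked outputs and retries.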
torobari_get_decision
Look up any past enforcement decision by ID. This is the audit trail - every decision is retrievable with full context.
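The lookup pattern can be sketched locally like this; in practice the `torobari_get_decision` tool performs the retrieval server-side, and the record shape here is assumed:

```python
# Sketch of audit retrieval against a local cache of decision records.
# Record contents are illustrative; the real tool returns full context.
audit_log = {
    "dec_001": {"decision": "BLOCK", "primaryReason": "tone_violation"},
}

def get_decision(decision_id: str) -> dict:
    """Fetch a past enforcement decision by ID; missing IDs are an error."""
    record = audit_log.get(decision_id)
    if record is None:
        raise KeyError(f"no decision with id {decision_id!r}")
    return record
```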
Fail-Closed Over MCP
Our MCP server maintains the same fail-closed guarantee as our direct API. If the connection to Torobari is interrupted, the server returns a BLOCK decision with primaryReason: "system_error". An AI agent that cannot verify brand compliance should not produce content.
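The fail-closed pattern can be sketched as a thin wrapper: any transport or server error becomes a BLOCK rather than a silent pass. Here `check_fn` stands in for the real `torobari_check_output` call:

```python
# Minimal sketch of fail-closed enforcement: if the compliance check
# cannot complete, the content is blocked, never passed through.
def check_with_fail_closed(check_fn, text: str) -> dict:
    try:
        return check_fn(text)
    except Exception:
        # Connection interrupted: refuse to pass unverified content.
        return {"decision": "BLOCK", "primaryReason": "system_error"}
```

The key design choice is that the error path produces the same decision shape as the happy path, so downstream code handles an outage exactly like any other BLOCK.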
Transport Options
The Torobari MCP server supports both STDIO (for local development) and Streamable HTTP (for production deployment). This means you can test brand governance locally and deploy it as a hosted service with the same codebase.
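For local STDIO use, a typical MCP client configuration looks something like the fragment below. The server name, launch command, and env variable name are assumptions for illustration; consult your MCP client's documentation for the exact file location and schema:

```json
{
  "mcpServers": {
    "torobari": {
      "command": "npx",
      "args": ["torobari-mcp"],
      "env": { "TOROBARI_TOKEN": "<your MCP token>" }
    }
  }
}
```

A hosted Streamable HTTP deployment would instead be registered by URL, with the same token supplied as a credential.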
Getting Started
- Create an MCP token in Torobari Integrations
- Configure your MCP client with the token
- Start making torobari_check_output calls from your AI workflows
Every output checked. Every decision logged. Every audit trail complete.