How Enforcement Works
The Torobari enforcement engine is the core of our platform. Every piece of AI-generated content passes through it before reaching production. Here is how it works.
Rule Types
The engine supports multiple rule types, each targeting a specific dimension of brand compliance:
- Forbidden phrases - Block specific words or phrases that should never appear in brand communications
- Slang detection - Flag informal language that does not match your brand voice
- Emoji control - Enforce whether emojis are appropriate for your brand
- Length limits - Ensure outputs stay within channel-appropriate bounds
- ALL CAPS detection - Prevent shouting in professional communications
- URL shortener blocking - Keep links trustworthy and transparent
- Hashtag control - Manage social media formatting consistency
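The rule types above can be pictured as a declarative ruleset. This is an illustrative sketch only; the field names and structure are assumptions, not the actual Torobari rule schema:

```python
# Hypothetical ruleset covering each rule type listed above.
# Field names ("type", "phrases", "max_chars", ...) are illustrative.
RULESET = [
    {"type": "forbidden_phrase", "phrases": ["guaranteed returns", "risk-free"]},
    {"type": "slang", "terms": ["gonna", "wanna", "lol"]},
    {"type": "emoji", "allowed": False},
    {"type": "length_limit", "max_chars": 280},
    {"type": "all_caps", "max_ratio": 0.3},        # max share of upper-case letters
    {"type": "url_shortener", "domains": ["bit.ly", "tinyurl.com"]},
    {"type": "hashtag", "max_count": 2},
]
```

Each entry targets one compliance dimension, so rules can be added, scoped, or removed independently.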
The Decision Model
Every evaluation produces one of three decisions:
| Decision | Meaning |
|----------|---------|
| PASS | Content meets all active rules. Safe to publish. |
| BLOCK | Content violates an explicit rule. Must be revised before publication. |
| ESCALATE | Content violates a soft rule. Needs human review. |
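One way to model this three-way outcome is an enum plus a precedence rule for combining per-rule results. This is a sketch under the assumption that a hard violation outranks a soft one, which is implied by the table but not spelled out as an algorithm:

```python
from enum import Enum

class Decision(Enum):
    PASS = "PASS"
    BLOCK = "BLOCK"
    ESCALATE = "ESCALATE"

def combine(results: list[Decision]) -> Decision:
    # Assumed precedence: any hard violation blocks the output outright;
    # otherwise any soft violation escalates to human review; else pass.
    if any(r is Decision.BLOCK for r in results):
        return Decision.BLOCK
    if any(r is Decision.ESCALATE for r in results):
        return Decision.ESCALATE
    return Decision.PASS
```

With this precedence, a single BLOCK from any rule dominates the final decision, no matter how many rules pass.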
Deterministic by Design
Given the same rules and the same input, the engine always produces the same output. This is not negotiable. Determinism means:
- No randomness in evaluation
- No model-dependent interpretation
- No time-of-day variance
- Complete reproducibility for audits
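In code terms, determinism means evaluation is a pure function of the content and the ruleset. A minimal sketch (using a hypothetical forbidden-phrase check, not the real engine):

```python
def evaluate(text: str, ruleset: list[dict]) -> str:
    # The decision depends only on (text, ruleset): no randomness,
    # no clock reads, no model inference, no external calls.
    # Re-running the same evaluation always reproduces the same result.
    for rule in ruleset:
        if rule["type"] == "forbidden_phrase" and any(
            phrase in text.lower() for phrase in rule["phrases"]
        ):
            return "BLOCK"
    return "PASS"
```

Because nothing outside the arguments influences the result, an auditor can replay any historical decision and get the identical answer.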
Fail-Closed Guarantee
If something goes wrong - a rule config is malformed, the engine encounters an unexpected input, or a service dependency is unavailable - the default decision is always BLOCK.
We would rather block a valid output than pass an invalid one.
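The fail-closed guarantee can be expressed as a wrapper that catches any failure and returns BLOCK. A sketch, assuming a pluggable `evaluate` function as above:

```python
from typing import Callable

def evaluate_fail_closed(
    text: str,
    ruleset: list[dict],
    evaluate: Callable[[str, list[dict]], str],
) -> str:
    # Fail closed: a malformed rule, an unexpected input, or any other
    # error during evaluation yields BLOCK rather than letting
    # unverified content through.
    try:
        return evaluate(text, ruleset)
    except Exception:
        return "BLOCK"
```

A malformed rule that raises mid-evaluation therefore produces the same outcome as an explicit violation: the content does not ship.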
Channel-Aware Evaluation
Rules can be scoped to specific channels. A disclaimer that is required on ads might not be required on internal emails. The engine evaluates each output in the context of its destination channel: social, ads, email, web, or print.
Ruleset Hashing
Every evaluation includes a rulesetHash - a deterministic hash of the active rules at evaluation time. This lets you verify that a decision was made against the exact ruleset you expect, even months later.
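A deterministic ruleset hash requires a canonical serialization, so that semantically identical rulesets always hash to the same value. A minimal sketch using SHA-256 over sorted-key JSON (the real engine's serialization and hash function are not specified here):

```python
import hashlib
import json

def ruleset_hash(ruleset: list[dict]) -> str:
    # Canonical form: sorted keys and fixed separators, so key order
    # and whitespace cannot change the hash. Only a change to the
    # rules themselves produces a different digest.
    canonical = json.dumps(ruleset, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Storing this digest with each decision lets anyone later confirm which exact ruleset was in force, without trusting a mutable rules database.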
What This Means for Teams
Teams no longer need to trust that AI outputs are on-brand. They can verify it, every time, with machine-readable evidence.