Agent Control Plane

control.traingle.ai — Control Plane v1


Control how AI agents behave at runtime before they answer.

Use this workspace to compare agent behavior, apply policies, and review responses in a controlled environment.

Sign in to use the workspace, or bring your own API key to continue.

Mode

RUN MODE — Compare Product vs Execution agents

Try a scenario

Start by selecting a scenario
Run the system to compare behavior
Switch policies to explore differences

Master Intent

Alignment Mode

How agents work together
Both agents respond to the same intent independently; neither depends on the other's output.

View

Scenario Guidance

Recommended Policy
Select a scenario to see guidance.
Recommendation
Guidance only. Your selected policies remain manual.

Create Custom Policy


Generated Policy

Policy Name
Not generated
Behavior Summary
Not generated
When to Use
Not generated
Tradeoff
Not generated
Recommended Use
Not generated

Review Policy

Generate a policy to review it before applying.
Sign in to run agents or use your own API key

Planner Agent (Product Agent)

Defines direction
IDLE
Current Policy
HIP_01 — Balanced Assistant
Policy Type
Standard
Short Summary
Will usually answer if the request appears safe and understandable.

Executor Agent (Execution Agent)

Produces output
IDLE
Current Policy
HIP_01 — Balanced Assistant
Policy Type
Standard
Short Summary
Will usually answer if the request appears safe and understandable.

System Summary

Alignment

PARTIAL

Conflict

UNKNOWN

Summary

Run both agents to compute the system summary.

Why results differ

Run both agents to compare policy outcomes.

What this means

Interpretation will appear after both agents complete.

Why these policies

Policy intent explanation will appear after both agents complete.

Session Summary

Runs
0

Why this matters

This system reduces incorrect or risky responses by requiring evidence and escalating when needed.
A system that controls how AI agents behave — so they give safer, evidence-based answers.

Control how AI behaves — before it answers

Run scenarios, apply rules, and see how AI decisions change in real time.
What’s Broken
AI answers confidently — even when it shouldn’t.
It can make decisions without enough information, leading to inconsistent or risky outcomes.
What This System Does
This system controls how AI responds at runtime.
It can require evidence, prevent unsupported answers, or escalate when information is missing.
Differentiators
No training. No prompt engineering. No fine-tuning.
Control happens at runtime, directly on how AI behaves.
Value
  • Reduces risky responses
  • Improves decision consistency
  • Makes AI behavior predictable
Used for
Customer support
Financial decisions
Compliance review
Research and validation
See how behavior changes across real scenarios below.

Developer Guide

This system controls how AI agents behave at runtime to produce safer, evidence-based answers.
Call it via HTTP or use the built-in adapter to run scenarios and receive structured outputs.
How to call (HTTP)
{
  "hip_profile_id": "HIP_01",
  "prompt": "Help a user resolve an issue with a delayed order.",
  "targets": [{"provider":"openai","model":"gpt-4o-mini"}]
}
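A minimal Python sketch of sending that request with the standard library. The endpoint path (`/v1/run`) is an assumption for illustration; the actual route and authentication scheme are not documented here.

```python
import json
import urllib.request

# Hypothetical endpoint path; substitute the real route for your deployment.
ENDPOINT = "https://control.traingle.ai/v1/run"

payload = {
    "hip_profile_id": "HIP_01",
    "prompt": "Help a user resolve an issue with a delayed order.",
    "targets": [{"provider": "openai", "model": "gpt-4o-mini"}],
}

req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to send; requires sign-in or an API key
```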
Input format
{
  "intent": "...",
  "agents": [
    {"role":"...","policy":"...","provider":"..."}
  ],
  "alignment_mode": "..."
}
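An illustrative payload following the input schema above, mirroring the workspace's Planner/Executor pairing. The `role`, `policy`, and `alignment_mode` values are assumptions filled in for the example, not documented constants.

```python
# Sketch of a two-agent input payload; field values are illustrative.
intent_request = {
    "intent": "Help a user resolve an issue with a delayed order.",
    "agents": [
        {"role": "planner", "policy": "HIP_01", "provider": "openai"},
        {"role": "executor", "policy": "HIP_01", "provider": "openai"},
    ],
    "alignment_mode": "independent",
}

# Minimal shape check before sending.
assert {"intent", "agents", "alignment_mode"} <= intent_request.keys()
for agent in intent_request["agents"]:
    assert {"role", "policy", "provider"} <= agent.keys()
```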
Output structure
{
  "agents":[{"agent_name":"...","decision":"...","output":"...","policy":"...","status":"..."}],
  "summary":{"alignment":"...","conflict":"...","explanation":"..."},
  "signals":["..."]
}
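A sketch of reading the output structure above. The field values below are invented placeholders standing in for a real response, used only to show how the documented fields nest.

```python
# Placeholder response shaped like the documented output structure.
response = {
    "agents": [
        {"agent_name": "Planner Agent", "decision": "answer",
         "output": "...", "policy": "HIP_01", "status": "complete"},
        {"agent_name": "Executor Agent", "decision": "answer",
         "output": "...", "policy": "HIP_01", "status": "complete"},
    ],
    "summary": {"alignment": "FULL", "conflict": "NONE",
                "explanation": "Both agents agreed."},
    "signals": ["evidence_present"],
}

# Per-agent decisions and the system-level alignment verdict.
decisions = {a["agent_name"]: a["decision"] for a in response["agents"]}
aligned = response["summary"]["alignment"] == "FULL"
```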
Examples
Customer Support
Request:
{ "intent":"Help a user resolve an issue with a delayed order.", "template":"customer_support_system" }

Response:
{ "agents":[...], "summary":{...}, "signals":[...] }
Financial Decision
Request:
{ "intent":"Advise whether to approve a refund.", "template":"financial_decision_system" }

Response:
{ "agents":[...], "summary":{...}, "signals":[...] }
Uncertain Case
Request:
{ "intent":"User asks about a charge with limited info.", "template":"uncertain_case_system" }

Response:
{ "agents":[...], "summary":{...}, "signals":[...] }
Templates
customer_support — resolve support issues safely
refund_review — structured refund decisions
uncertain — handle ambiguity and escalate
evidence — require proof before answering
conflict — resolve policy conflicts
escalation_handling — safe escalation across agents
multi_step_review — parallel multi-agent review
validation_approval — validate then approve
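The template names above can be used to build requests like those in the Examples section. The helper below is hypothetical, not part of the product's API; it just validates a template name against the documented list before constructing a request body.

```python
# Documented template names mapped to their descriptions (from the list above).
TEMPLATES = {
    "customer_support": "resolve support issues safely",
    "refund_review": "structured refund decisions",
    "uncertain": "handle ambiguity and escalate",
    "evidence": "require proof before answering",
    "conflict": "resolve policy conflicts",
    "escalation_handling": "safe escalation across agents",
    "multi_step_review": "parallel multi-agent review",
    "validation_approval": "validate then approve",
}

def build_request(intent: str, template: str) -> dict:
    """Hypothetical helper: reject unknown templates, return a request body."""
    if template not in TEMPLATES:
        raise ValueError(f"unknown template: {template}")
    return {"intent": intent, "template": template}

req = build_request("Advise whether to approve a refund.", "refund_review")
```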
Output fields
decision → what the system chose
summary → overall outcome
signals → key behavior indicators