# Identity OS
Runtime behavioral control for AI agents.¶
Your agent follows instructions — until it doesn't. Identity OS enforces behavior at the execution layer, not the prompt.
One API call per turn. Works with LangGraph, CrewAI, and OpenAI Agents SDK.
## Who is this for?
### Customer Service Agents
Your support bot needs to stay helpful, never escalate rudely, never share internal data. But after 50 turns the system prompt fades and the agent starts improvising.
With Identity OS: The bot has a persistent behavioral identity. Risky actions like `share_internal_data` or `escalate_rudely` are on the `forbidden_actions` list — enforced at runtime, not by prompt. Even after 300 turns, the bot stays in character.
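A runtime guard on a forbidden-actions list can be this small. A minimal sketch, assuming the action names from the example above; the `enforce` helper is illustrative, not the Identity OS SDK API:

```python
# Illustrative runtime gate for a forbidden_actions list. Action names
# come from the example above; the guard itself is a hypothetical sketch.
FORBIDDEN_ACTIONS = {"share_internal_data", "escalate_rudely"}

def enforce(action: str) -> str:
    """Block forbidden actions before they ever reach the LLM."""
    if action in FORBIDDEN_ACTIONS:
        return "blocked"
    return "allowed"

print(enforce("answer_billing_question"))  # allowed
print(enforce("share_internal_data"))      # blocked
```

Because the check runs in code rather than in the prompt, it cannot fade or be talked around, no matter how long the conversation gets.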
### Autonomous Research Agents
Your agent runs overnight — browsing, summarizing, deciding next steps. You wake up and it's gone completely off-track, spending tokens on irrelevant rabbit holes.
With Identity OS: Drift detection catches when the agent's behavior shifts from its established pattern. Small context-appropriate shifts are allowed. Large deviations trigger auto-rollback to the last stable state. You wake up to an agent that stayed on mission.
### Multi-Agent Workflows
You have 5 agents working together. One gets stuck in a retry loop, starts failing, and its erratic behavior cascades to the others.
With Identity OS: The stress model detects the failure pattern. Constraints automatically tighten — the stressed agent switches to conservative mode, stops making risky decisions. When the issue resolves, constraints loosen back. No cascading failures.
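Tightening can be pictured as a mapping from stress level to a shrinking action set. The four levels (LOW/MED/HIGH/OVER) come from the docs; the per-level action sets below are hypothetical examples:

```python
# Sketch of stress-driven constraint tightening. The stress levels are
# from the docs; the action sets per level are illustrative assumptions.
ALLOWED_BY_STRESS = {
    "LOW":  {"read", "write", "retry", "take_risky_decision"},
    "MED":  {"read", "write", "retry"},
    "HIGH": {"read", "retry"},
    "OVER": {"read"},  # conservative mode: safe actions only
}

def allowed_actions(stress_level: str) -> set:
    """Constraints tighten as stress rises and loosen again on recovery."""
    return ALLOWED_BY_STRESS[stress_level]

# A stressed agent loses risky actions automatically...
print("take_risky_decision" in allowed_actions("OVER"))  # False
# ...and regains them once stress drops back to LOW.
print("take_risky_decision" in allowed_actions("LOW"))   # True
```

Because the sets are strictly nested, a stressed agent can never do more than a calm one — which is what stops erratic behavior from cascading.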
### Compliance-Sensitive Deployments
Your enterprise client needs to prove that the AI agent never took unauthorized actions. "We told it not to in the system prompt" doesn't pass an audit.
With Identity OS: Every action produces an auditable ExecutionContract. You can prove exactly which actions were allowed, which were blocked, and why — traceable to the behavioral state at that moment. Deterministic, not probabilistic.
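An audit trail like that can be as simple as one JSON line per turn. A minimal sketch — the field names mirror the contract example later in this page, but the `record_turn` logger itself is hypothetical, not an SDK function:

```python
# Illustrative audit trail: one JSON line per turn, capturing the action,
# the decision, and the behavioral state it was evaluated against.
import json
import io

def record_turn(log, turn, action, allowed, stress_level):
    log.write(json.dumps({
        "turn": turn,
        "action": action,
        "allowed": allowed,
        "stress_level": stress_level,
    }) + "\n")

log = io.StringIO()
record_turn(log, 1, "clarify", True, "LOW")
record_turn(log, 2, "identity_override", False, "HIGH")

# The trail answers the auditor's question deterministically:
for line in log.getvalue().splitlines():
    entry = json.loads(line)
    print(entry["action"], "->", "allowed" if entry["allowed"] else "blocked")
```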
## Validated Across 2,000+ Adversarial Turns
### Use Case Results
| Scenario | Turns | Key Result |
|---|---|---|
| Customer Service Bot | 170 | 100/100 jailbreak attacks blocked, 0 false positives on 50 normal queries |
| Research Agent | 300 | Drift detected at turn 114; 57 D1 events caught across 5 behavioral phases |
| Pipeline Agent | 200 | 3 stress/recovery cycles; reached OVER, recovered autonomously every time |
| Benchmark Suite | 1,200 | 4 × 300 adversarial scenarios: steady-state, gradual shift, stress spikes, chaotic |
### Customer Service Bot — Jailbreak Prevention (170 turns)
| Metric | Result |
|---|---|
| Normal queries handled | 50/50, zero false positives |
| Jailbreak attacks blocked | 100/100 |
| Stress escalation | LOW → MED → HIGH → OVER (automatic) |
| Post-attack recovery | Agent operational again within 20 calm turns |
### Research Agent — Drift Detection (300 turns)
| Metric | Result |
|---|---|
| Stable analytical phase | 100 turns, stability index 0.93 |
| First drift detected | Turn 114 (D1 — context drift) |
| Total drift events caught | 57 D1 events across gradual shift, sudden spike, and chaotic phases |
| False D3 (corrupt) alerts | 0 |
### Pipeline Agent — Stress & Recovery (200 turns)
| Metric | Result |
|---|---|
| Stress/recovery cycles | 3 complete cycles |
| Reached OVER (max stress) | Yes — with hard_locks activated |
| Autonomous recovery | Yes — every time, no manual intervention |
| Energy range | 0.00 (depleted) → 1.00 (full), recovered each cycle |
| Stress transitions | 10 across all 4 levels (LOW/MED/HIGH/OVER) |
## Performance
| Metric | Result |
|---|---|
| Throughput | 594 turns/sec |
| Latency per turn | ~1.7ms |
| Automated test coverage | 432+ tests passing |
| Framework adapters tested | LangGraph, CrewAI, OpenAI Agents SDK |
## How It Works
### 1. Your agent acts → sends an observation

```python
result = client.engine.process(
    instance_id=instance.id,
    mode_target=Mode.EXPLORATION,
    signal_strength=0.7
)
```
### 2. Identity OS returns a contract

```json
{
  "allowed_actions": ["explore", "clarify", "suggest"],
  "forbidden_actions": ["emotional_manipulation", "identity_override"],
  "decision_style": { "tempo": "measured", "risk": "moderate" },
  "stress_level": "LOW",
  "energy_level": 0.50
}
```
### 3. Your agent follows the contract
Forbidden actions never reach the LLM. Allowed actions adapt to current state. Behavior stays consistent — automatically.
## Quick Start

```python
from identity_os_sdk import IdentityOS, Mode

client = IdentityOS(api_key="idos_sk_xxx")

# Create an agent identity
instance = client.instances.create(name="Aria")

# Feed observations, get contracts
result = client.engine.process(
    instance_id=instance.id,
    mode_target=Mode.EXPLORATION,
    signal_strength=0.7
)

# Use the contract to guard behavior
contract = result.contract
if "explore" in contract.allowed_actions:
    # Safe to proceed
    pass
```
Full guide: Quickstart
## Integrations
Drop-in adapters for LangGraph, CrewAI, and the OpenAI Agents SDK.
## Pricing
| Tier | Price | Cycles/month | Instances |
|---|---|---|---|
| Free | $0 | 10K | 3 |
| Indie | $29/mo | 100K | 50 |
| Pro | $99/mo | 500K | Unlimited |
1 cycle = 1 API call to `/process`. Try it free first →
## Get Started
- 5-Minute Tutorial — From `pip install` to your first contract
- Try It Live — Run it locally, no signup needed
- Before & After — See exactly what changes in your code (2-3 lines)
- Core Concepts — Modes, stress, drift, contracts
- API Reference — All endpoints
- Python SDK · TypeScript SDK