Framework Integrations

Identity OS is designed to integrate seamlessly with popular agent frameworks. The API is framework-agnostic, but we provide specific integration guides for:

  • LangGraph — LangChain's state-based agent framework
  • CrewAI — Multi-agent orchestration framework
  • OpenAI Agents SDK — Direct OpenAI agent API

All integrations follow the same pattern:

Your Framework
    ↓ [Behavioral Observation Detection]
Identity OS API
    ↓ [ExecutionContract]
Your Framework (constrained behavior)

Integration Pattern

Every integration has three steps:

1. Initialize

Create an instance for your agent:

client = IdentityOS(api_key="idos_sk_xxx")
agent_instance = client.instances.create(name="MyAgent")

2. Observe

Feed behavioral observations to Identity OS whenever your agent takes actions:

result = client.engine.process(
    instance_id=agent_instance.id,
    mode_target=infer_behavioral_mode(agent_action),
    signal_strength=confidence_in_mode,
    context=agent_metadata,
    # Optional: caller-controlled timestamp avoids throttle
    # in high-frequency remote scenarios
    # timestamp=datetime.utcnow(),
)

Caller-controlled timestamps

The /process endpoint accepts an optional timestamp parameter. When provided, the engine uses your timestamp instead of server wall-clock time. This prevents the minimum-cycle-interval throttle from silently dropping rapid consecutive calls in remote or API integration scenarios. The built-in LangGraph and CrewAI adapters handle this automatically.
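
For example, a high-frequency adapter might stamp observations itself before sending them. A minimal sketch, assuming observations are plain dicts and a fixed spacing (neither is an SDK API):

```python
from datetime import datetime, timedelta, timezone

def stamp_observations(observations, start=None, spacing_ms=50):
    """Attach evenly spaced caller-controlled timestamps so rapid
    consecutive /process calls are not dropped by the server-side
    minimum-cycle-interval throttle."""
    start = start or datetime.now(timezone.utc)
    return [
        {**obs, "timestamp": start + timedelta(milliseconds=i * spacing_ms)}
        for i, obs in enumerate(observations)
    ]
```

Each stamped value can then be passed as the timestamp argument to the process call shown above.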

3. Constrain

Read the ExecutionContract and use it to guard your agent's behavior:

contract = result.contract

if action in contract.allowed_actions:
    agent.execute(action, contract.decision_style)
else:
    # Choose safe alternative
    agent.execute(fallback_action(contract.allowed_actions))
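
The `fallback_action` helper above is not part of the SDK; a minimal sketch might prefer low-risk actions first (the preference names here are illustrative):

```python
def fallback_action(allowed_actions, preference=("observe", "wait", "report")):
    """Pick a safe alternative from the contract's allowed actions.

    Hypothetical helper: tries a caller-defined preference order of
    low-risk actions first, then falls back to any permitted action."""
    for action in preference:
        if action in allowed_actions:
            return action
    # Nothing preferred is allowed; take any permitted action
    return next(iter(allowed_actions))
```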

Framework Comparison

Framework     State Type    Trigger Points         Complexity    Overhead
LangGraph     Graph nodes   Each node transition   Low           ~2 ms per call
CrewAI        Agent tasks   Task completion        Medium        ~5 ms per call
OpenAI SDK    Tool calls    Tool selection         Medium        ~3 ms per call


Common Patterns

Pattern 1: Guarding Tool Use

# In your framework's tool handler
def execute_tool(tool_name, *args):
    result = identity_os.engine.process(
        instance_id=instance_id,
        mode_target=Mode.ASSERTION,  # the agent is executing a plan
        signal_strength=0.8,
    )

    contract = result.contract

    if tool_name in contract.allowed_actions:
        # Safe to execute
        return tool_impl(tool_name, *args)
    else:
        # Tool use forbidden, suggest alternative
        return "I'm unable to use that tool right now. Let me try a different approach."

Pattern 2: Adapting Decision Speed

# In your planning/decision node
contract = identity_os.engine.get_contract(instance_id)

if contract.decision_style["tempo"] == "decisive":
    # Move fast: take the top-ranked option
    return best_option(options)[0]
elif contract.decision_style["tempo"] == "exploratory":
    # Evaluate all options thoroughly
    return evaluate_all(options)
else:
    # Balanced approach
    return pick_best(options)

Pattern 3: Stress-Aware Task Queuing

# In your task selection logic
contract = identity_os.engine.get_contract(instance_id)

if contract.stress_state in ["HIGH", "OVER"]:
    # Only essential tasks
    return filter_essential_tasks(task_queue)
elif contract.energy_level < 0.3:
    # Reduced load
    return task_queue[:len(task_queue)//2]
else:
    # Normal capacity
    return task_queue

Pattern 4: Behavioral Monitoring & Alerting

# Periodic health check
snapshot = identity_os.engine.get_snapshot(instance_id)

alerts = []

if snapshot.stress_state == "OVER":
    alerts.append("Agent is critically stressed, recommend human intervention")

if snapshot.energy_level < 0.1:
    alerts.append("Agent energy depleted, pausing operations")

if snapshot.stability_index < 0.5:
    alerts.append("Agent behavioral stability degrading, possible drift")

for alert in alerts:
    send_notification(alert)

Choosing a Framework

Use LangGraph if:

  • ✅ You have a defined workflow graph (nodes, edges, branching)
  • ✅ You want explicit state management at each step
  • ✅ You need checkpointing and recovery
  • ✅ Your agent has a clear decision tree

Example: Multi-step research workflow (Plan → Search → Evaluate → Synthesize)

Use CrewAI if:

  • ✅ You have multiple specialized agents (researcher, analyst, writer)
  • ✅ You want agent-to-agent collaboration
  • ✅ You need task-based orchestration
  • ✅ Agents have distinct roles

Example: Marketing team (content researcher, copywriter, designer, analyzer)

Use OpenAI Agents SDK if:

  • ✅ You want minimal boilerplate
  • ✅ You're already invested in OpenAI ecosystem
  • ✅ You want simple tool-calling patterns
  • ✅ You prefer convention over configuration

Example: Simple Q&A agent with a few tools


Observable Behavioral Signals

When integrating, map your framework's actions to behavioral modes:

Framework Event           Mode               Strength
Agent asks question       PERCEPTION         0.6–0.8
Agent tries new tool      EXPLORATION        0.6–0.9
Agent follows procedure   ORDER              0.5–0.8
Agent executes plan       ASSERTION          0.6–0.9
Agent asks for help       CONNECTION         0.7–0.95
Agent refuses to act      IDENTITY           0.7–0.95
Agent fails repeatedly    STRESS_RESPONSE    0.5–0.95
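
The `infer_behavioral_mode` helper used in step 2 could be a direct lookup over this table. A sketch, where the event names and default strengths are illustrative choices within the table's ranges, not part of the API:

```python
# Illustrative event -> (mode, default strength) mapping from the table.
EVENT_MODE_MAP = {
    "question_asked":     ("PERCEPTION", 0.7),
    "new_tool_tried":     ("EXPLORATION", 0.75),
    "procedure_followed": ("ORDER", 0.65),
    "plan_executed":      ("ASSERTION", 0.75),
    "help_requested":     ("CONNECTION", 0.8),
    "action_refused":     ("IDENTITY", 0.85),
    "repeated_failure":   ("STRESS_RESPONSE", 0.7),
}

def infer_behavioral_mode(event):
    """Map a framework event to a behavioral mode. Unknown events fall
    back to PERCEPTION at low strength so the engine still gets a signal."""
    return EVENT_MODE_MAP.get(event, ("PERCEPTION", 0.5))[0]

def default_signal_strength(event):
    """Companion helper returning a mid-range strength for the event."""
    return EVENT_MODE_MAP.get(event, ("PERCEPTION", 0.5))[1]
```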

API Rate Limits in Context

Each integration has different call frequency:

Framework     Calls per Turn            Calls per 1,000 Turns
LangGraph     1–3 (node transitions)    1K–3K
CrewAI        1–5 (task completions)    1K–5K
OpenAI SDK    1–10 (tool calls)         1K–10K

Optimization: Batch multiple observations with /process/batch to reduce API calls by 70%.
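
One way to batch is a small client-side buffer that flushes pending observations in a single request. In this sketch, `send_batch` is a caller-supplied function wrapping the /process/batch endpoint; its name and signature are assumptions, not part of the SDK:

```python
class ObservationBuffer:
    """Collect observations and flush them in one /process/batch call.

    `send_batch` is a caller-supplied function that performs the actual
    batch API request (hypothetical; not an SDK method)."""

    def __init__(self, send_batch, max_size=10):
        self.send_batch = send_batch
        self.max_size = max_size
        self._pending = []

    def observe(self, observation):
        """Queue one observation; flush automatically when full."""
        self._pending.append(observation)
        if len(self._pending) >= self.max_size:
            self.flush()

    def flush(self):
        """Send all queued observations in a single request."""
        if self._pending:
            self.send_batch(self._pending)
            self._pending = []
```

With `max_size=10`, ten observations become one request instead of ten, which is where the call-volume reduction comes from.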


Error Handling in Integrations

All frameworks should handle:

try:
    result = client.engine.process(instance_id, ...)
except HTTPException as e:
    if e.status_code == 429:
        # Rate limited: honor the server-suggested delay, then retry
        await asyncio.sleep(e.retry_after)
        return await retry()  # your retry wrapper
    elif e.status_code == 401:
        # Auth failed: check your API key
        logger.error("Invalid API key")
        raise
    elif e.status_code >= 500:
        # Server error: degrade gracefully to default behavior
        logger.warning("Identity OS unavailable, using default behavior")
        return agent.default_behavior()
    else:
        raise
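
For transient failures, a small jittered exponential-backoff helper keeps the handler above tidy. A generic sketch, assuming the raised exception exposes a `status_code` attribute:

```python
import random
import time

def with_backoff(call, retries=3, base_delay=0.5, retriable=(429, 503)):
    """Retry `call` with jittered exponential backoff on retriable
    HTTP status codes; re-raise anything else immediately."""
    for attempt in range(retries):
        try:
            return call()
        except Exception as e:
            status = getattr(e, "status_code", None)
            if status not in retriable or attempt == retries - 1:
                raise
            # Exponential delay (0.5s, 1s, 2s, ...) plus up to 10% jitter
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random() * 0.1))
```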

Telemetry & Monitoring

All integrations should log:

import logging

logger = logging.getLogger("identity_os")

result = client.engine.process(instance_id, ...)

logger.info(
    "agent_behavior",
    extra={
        "instance_id": instance_id,
        "stress_state": result.contract.stress_state,
        "energy_level": result.contract.energy_level,
        "dominant_modes": result.contract.dominant_modes,
        "allowed_actions": result.contract.allowed_actions
    }
)

Next Steps

  1. Choose your framework: LangGraph | CrewAI | OpenAI Agents SDK
  2. Follow the integration guide for your chosen framework
  3. Test with the Quickstart scenario
  4. Deploy with monitoring (use telemetry patterns above)

Support