# OpenAI Agents SDK
Identity OS integrates with the OpenAI Agents SDK via input/output guardrails, identity-aware tool wrapping, and shared context.
## Quick Start

```python
from identity_os_sdk import IdentityOS
from identity_os.integrations.openai_agents import (
    create_identity_agent,
    create_identity_tool,
    IdentityOSContext,
)

# Initialize
client = IdentityOS(api_key="idos_sk_xxx")
instance = client.instances.create(name="MyAgent")

# Create an agent with Identity OS guardrails
agent = create_identity_agent(
    name="Research Assistant",
    instructions="You are a helpful research assistant.",
    tools=[my_search_tool, my_write_tool],
    identity_client=client,
    instance_id=instance.id,
)
```
## What Gets Enforced
| Hook | What it does |
|---|---|
| Input guardrail | Before execution: checks if the inferred action is forbidden, blocks with tripwire if so |
| Output guardrail | After execution: validates output doesn't violate contract |
| Tool wrapper | Before each tool call: verifies the action type is allowed |
| Context injection | Makes the current contract available to all tools and guardrails |
| Stress parity | Three or more consecutive blocked actions trigger a stress signal, matching the LangGraph/CrewAI adapters |
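The stress-parity rule in the last row can be sketched as a simple counter. This is a minimal illustration of the behavior described above, not the adapter's actual internals; `StressTracker` is a hypothetical name.

```python
# Minimal sketch of the stress-parity rule: 3+ consecutive blocked
# actions fire a stress signal. StressTracker is a hypothetical name.
class StressTracker:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold          # consecutive blocks before a signal
        self.consecutive_blocks = 0

    def record(self, blocked: bool) -> bool:
        """Record one tool-call outcome; return True if a stress signal fires."""
        if blocked:
            self.consecutive_blocks += 1
        else:
            self.consecutive_blocks = 0     # any allowed action resets the streak
        return self.consecutive_blocks >= self.threshold

tracker = StressTracker()
tracker.record(blocked=True)
tracker.record(blocked=True)
signal = tracker.record(blocked=True)       # third consecutive block fires
```

Note that any allowed action resets the streak, so only an unbroken run of blocks raises the signal.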
## Wrapping Individual Tools

```python
# Wrap a tool with contract enforcement
safe_search = create_identity_tool(
    tool_fn=my_search_tool,
    identity_client=client,
    instance_id=instance.id,
    action="explore",  # Optional: explicit action type for reliable matching
)
```
If you don't specify `action=`, the adapter infers the action from the tool's name and docstring.
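To make the inference behavior concrete, here is a sketch of keyword-based matching over a tool's name and docstring. The keyword table and `infer_action` are assumptions for illustration; the adapter's real heuristic may differ.

```python
# Illustrative sketch of name/docstring-based action inference.
# ACTION_KEYWORDS and infer_action are hypothetical, not the adapter's real logic.
ACTION_KEYWORDS = {
    "explore": ("search", "find", "browse", "lookup"),
    "modify": ("write", "update", "delete", "create"),
}

def infer_action(tool_fn):
    # Combine the tool's name and docstring into one searchable string
    text = f"{tool_fn.__name__} {tool_fn.__doc__ or ''}".lower()
    for action, keywords in ACTION_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return action
    return None  # no match: caller should pass action= explicitly

def my_search_tool(query: str):
    """Search the web for a query."""

inferred = infer_action(my_search_tool)  # matches "search" -> "explore"
```

Because heuristics like this can misfire on ambiguous names, explicit `action=` remains the reliable option for critical tools.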
## Accessing the Contract in Tools

```python
from identity_os.integrations.openai_agents import IdentityOSContext

# Inside a tool function
def my_tool(ctx, input):
    identity_ctx = IdentityOSContext.from_run_context(ctx)

    if identity_ctx.is_allowed("explore"):
        # Safe to proceed
        ...

    style = identity_ctx.get_decision_style()
    # Adapt behavior based on contract
```
## Dynamic Instructions

The adapter automatically appends the current contract summary to your agent's instructions before each run:

```
[Identity OS Contract]
Stress: LOW | Energy: 0.85
Allowed: explore, clarify, suggest
Forbidden: emotional_manipulation, identity_override
Style: measured tempo, moderate risk
```

This gives the LLM awareness of its constraints without requiring prompt engineering.
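The appended summary can be reproduced with a small formatting helper. The sketch below assumes a plain-dict contract shape; `contract_summary` and the field names are illustrative, not the adapter's API.

```python
# Sketch of rendering the contract summary shown above and appending it
# to the agent's instructions. The contract dict shape is an assumption.
def contract_summary(contract: dict) -> str:
    return "\n".join([
        "[Identity OS Contract]",
        f"Stress: {contract['stress']} | Energy: {contract['energy']:.2f}",
        f"Allowed: {', '.join(contract['allowed'])}",
        f"Forbidden: {', '.join(contract['forbidden'])}",
        f"Style: {contract['style']}",
    ])

contract = {
    "stress": "LOW",
    "energy": 0.85,
    "allowed": ["explore", "clarify", "suggest"],
    "forbidden": ["emotional_manipulation", "identity_override"],
    "style": "measured tempo, moderate risk",
}

instructions = "You are a helpful research assistant."
full_instructions = instructions + "\n\n" + contract_summary(contract)
```

Because the summary is rebuilt before each run, the LLM always sees the contract's current state rather than a stale snapshot.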
## Best Practices

- Use explicit `action=` on critical tools: inference is heuristic, explicit is reliable.
- Don't override the guardrails; they're the enforcement layer.
- Monitor stress level: if your agent hits HIGH/OVER often, your task design may be too aggressive.
- Use `IdentityOSContext` to adapt tool behavior based on contract state.