
Project Integration

Model

ContextUnity separates shared services (the platform) from projects (domain applications).

  • Shared services (Brain, Router, Worker, Commerce) provide the common AI runtime
  • Projects live in ~/Projects/cu-projects/ outside the monorepo
  • Projects integrate via manifest + gRPC — never import service internals
  • Each project declares cu.core as an optional ai extra — the project works without AI too
  • A single contextunity.project.yaml manifest describes the entire integration contract

Quick Start — Manifest-Driven Integration

1. Create the Manifest

Every project starts with contextunity.project.yaml at the project root:

apiVersion: "contextunity/v1alpha1"
kind: "ContextUnityProject"
project:
  id: "my-project"       # Unique ID (namespace in Router)
  name: "My Project"     # Human-readable
  tenant: "my-project"   # Tenant ID for data isolation
services:
  router: { enabled: true }   # Required: LLM orchestration
  brain: { enabled: true }    # Traces, episodic memory, knowledge search
  shield: { enabled: true }   # Secret management, compliance
  zero: { enabled: true }     # PII anonymization
router:
  graph:
    id: "my-project"
    template: "sql_analytics"   # Built-in: sql_analytics, gardener, dispatcher, rag_retrieval
    nodes:
      - name: "planner"
        type: "llm"
        model: "openai/gpt-5-mini"
        model_secret_ref: "MY_API_KEY"   # Env var with API key
        prompt_ref: "src/app/prompts.py::SYSTEM_PROMPT"
        pii_masking: true
      - name: "tool_execution"
        type: "tool"
        tool_binding: "my_sql_tool"
  tools:
    - name: "my_sql_tool"
      type: "sql"
      execution: "federated"   # Runs in YOUR process, not Router
policy:
  ai_model_policy:
    default_ai_model: "openai/gpt-5-mini"
  tracing_enabled: true

Full manifest reference: packages/core/src/cu.core/manifest/examples/contextunity.project.yaml
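To make the required fields concrete, here is a minimal sanity check over a loaded manifest dict. This is an illustration only: the required keys mirror the example manifest above, but the helper name (validate_manifest) and its return shape are assumptions, not the SDK's actual loader API.

```python
# Illustrative manifest sanity check — NOT the real SDK loader.
# Required keys follow the example manifest above; everything else is assumed.

def validate_manifest(manifest: dict) -> list[str]:
    """Return a list of human-readable problems (empty list = looks valid)."""
    problems = []
    if manifest.get("apiVersion") != "contextunity/v1alpha1":
        problems.append("apiVersion must be 'contextunity/v1alpha1'")
    if manifest.get("kind") != "ContextUnityProject":
        problems.append("kind must be 'ContextUnityProject'")
    project = manifest.get("project") or {}
    for key in ("id", "name", "tenant"):
        if not project.get(key):
            problems.append(f"project.{key} is required")
    return problems
```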

2. Bootstrap in One Call

# Django apps.py — single entry point
from django.apps import AppConfig

class ChatConfig(AppConfig):
    name = "chat"

    def ready(self):
        import chat.tools  # Loads @federated_tool definitions
        from contextunity.core.sdk.bootstrap import bootstrap_django
        bootstrap_django()

This single call:

  1. Auto-discovers and reads the manifest YAML
  2. Resolves prompt_ref directly from referenced Python modules
  3. Resolves model_secret_ref from env
  4. Compiles per-node data into a flat bundle via ArtifactGenerator
  5. Syncs API keys to Shield (if enabled)
  6. Sends RegisterManifest gRPC to Router
  7. Starts BiDi stream executor in a background daemon thread
  8. Handles reconnection with exponential backoff
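Step 8's reconnection behavior can be sketched as a plain backoff loop. The base delay, cap, and jitter scheme below are illustrative assumptions, not the SDK's actual constants:

```python
import random

# Illustrative exponential backoff with jitter — the base delay, cap,
# and jitter scheme are assumptions, not the SDK's actual values.
def backoff_delays(base: float = 0.5, cap: float = 30.0, attempts: int = 6):
    """Yield one sleep duration per failed reconnect attempt."""
    for attempt in range(attempts):
        delay = min(cap, base * (2 ** attempt))
        # Full jitter spreads reconnects so many clients don't stampede.
        yield random.uniform(0, delay)
```

Adding jitter matters here because every project process reconnects to the same Router; without it, all clients would retry in lockstep after an outage.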

3. Define Federated Tools

The import chat.tools line ensures that Python evaluates your tool modules, triggering the decorators before the SDK collects them.

chat/tools.py

from contextunity.core.sdk.tools import federated_tool
from contextunity.core.sdk import FederatedToolCallContext

@federated_tool("patient_sql_tool")
async def execute_patient_query(query: str, ctx: FederatedToolCallContext) -> dict:
    """All tools execute within your project process.

    The SDK automatically connects them to the Router via gRPC BiDi stream.
    """
    assert ctx.caller_tenant
    # Your database logic here...
    return {"status": "ok", "rows": []}

4. Set Environment Variables

# .env (project-level)
CU_ROUTER_GRPC_URL=localhost:50050
MY_API_KEY=sk-...
CU_PROJECT_SECRET=my-hmac-secret

ProjectBootstrapConfig.from_env() reads standard env vars automatically.

Standard Service Ports

| Service | Port  | Protocol | Env Variable       |
| ------- | ----- | -------- | ------------------ |
| Router  | 50050 | gRPC     | CU_ROUTER_GRPC_URL |
| Brain   | 50051 | gRPC     | CU_BRAIN_GRPC_URL  |
| Worker  | 50052 | gRPC     | CU_WORKER_GRPC_URL |
| Shield  | 50054 | gRPC     | CU_SHIELD_GRPC_URL |
| Zero    | 50055 | gRPC     | CU_ZERO_GRPC_URL   |

Integration Patterns

Pattern A: Manifest-Driven Federated Tools (Canonical)

The manifest-driven pattern is the canonical integration:

  1. The manifest declares what the project needs (graph, tools, models, services)
  2. Tools are defined with the @federated_tool decorator
  3. SDK bootstrap handles everything automatically (bootstrap_django or bootstrap_standalone)

from contextunity.core.sdk.tools import federated_tool
from contextunity.core.sdk import FederatedToolCallContext

@federated_tool("my_sql_tool")
async def execute_safe_query(sql: str, ctx: FederatedToolCallContext) -> dict:
    """Project-side tool execution — runs in YOUR process."""
    assert ctx.caller_tenant
    return run_query(sql)

Pattern B: SDK Router Client (Runtime Requests)

For per-request calls to the Router (chat dispatch, agent execution), never manually open gRPC channels or mint tokens. Use the SDK clients (SyncRouterClient or AsyncRouterClient) which automatically handle metadata, token minting, and tracing headers:

from contextunity.core.sdk.clients.router import SyncRouterClient

def call_ai(messages: list, user_id: str):
    # The client automatically mints a short-lived ContextToken,
    # manages the gRPC channel, and resolves platform settings.
    with SyncRouterClient() as client:
        result, metrics = client.execute_agent(
            graph_name="my-project",
            payload={"messages": messages},
            metadata={"user_id": user_id},
        )
    return result

Pattern C: Brain SDK (Knowledge Store)

from contextunity.core import BrainClient

brain = BrainClient(host="localhost:50051")

# Inside an async function:
# Search
results = await brain.search(tenant_id="my-project", query_text="...", limit=10)

# Upsert
item_id = await brain.upsert(tenant_id="my-project", content="...", source_type="document")

API Key Secret Flow

API keys declared via model_secret_ref in manifest nodes are handled automatically:

| Shield Status          | Flow                                                                | Security                |
| ---------------------- | ------------------------------------------------------------------- | ----------------------- |
| Shield ON (production) | SDK → Shield PutSecret → Router gets keys from Shield at LLM call time | ✅ Keys never in bundle |
| Shield OFF (dev)       | SDK includes keys inline in bundle → Router stores in memory        | ⚠️ Warning emitted      |
| Single-tenant          | Router uses its own env keys → no model_secret_ref needed           | ✅ Simplest setup       |
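The Shield ON/OFF branch above can be sketched as a small routing function. The function and field names here are hypothetical (shield_put_secret stands in for the real Shield PutSecret RPC); only the on/off behavior comes from the flow described above:

```python
import warnings

# Hypothetical sketch of the secret-routing decision above.
# `shield_put_secret` stands in for the real Shield PutSecret RPC;
# the bundle keys are illustrative, not the actual bundle schema.
def route_api_key(secret_name: str, secret_value: str, shield_enabled: bool,
                  bundle: dict, shield_put_secret=None) -> dict:
    if shield_enabled:
        # Production: key goes to Shield; the bundle only carries a reference.
        shield_put_secret(secret_name, secret_value)
        bundle["model_secret_ref"] = secret_name
    else:
        # Dev fallback: key travels inline in the bundle, with a warning.
        warnings.warn("Shield disabled: API key included inline in bundle")
        bundle["inline_secrets"] = {secret_name: secret_value}
    return bundle
```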

Token Permissions Reference

| Operation              | Required Permissions                     | Notes                                     |
| ---------------------- | ---------------------------------------- | ----------------------------------------- |
| Register manifest      | tools:register:{tenant}                  | SDK handles this automatically            |
| Open BiDi stream       | stream:executor                          | For federated tool execution              |
| Dispatch (AI execution)| router:execute, tool:{name}, brain:read  | Specific tool permission (not wildcard!)  |
| Brain search           | brain:read                               |                                           |
| Brain write            | brain:read, brain:write                  |                                           |

Security model: Always-on ContextToken-based auth. HMAC for open-source, Ed25519/Shield for enterprise. tool:* wildcard works but logs a warning — use specific tool:{name} in production.
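The wildcard-versus-specific rule can be illustrated with a small set-based check. This is a hypothetical helper, not the Router's actual authorizer:

```python
import warnings

# Hypothetical permission check illustrating the wildcard rule above —
# not the Router's actual authorization code.
def tool_allowed(token_permissions: set[str], tool_name: str) -> bool:
    if f"tool:{tool_name}" in token_permissions:
        return True
    if "tool:*" in token_permissions:
        # Works, but the platform logs a warning; prefer tool:{name} in prod.
        warnings.warn("tool:* wildcard used; grant specific tool permissions")
        return True
    return False
```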

Project Structure Convention

cu-projects/<name>/
├── contextunity.project.yaml # ← Manifest (source of truth)
├── src/ # Application code
│ ├── <project>/ # Django/FastAPI project
│ ├── chat/ # AI chat app (apps.py, views.py, prompts.py)
│ └── manage.py
├── AGENTS.md # Agent guidelines
├── .agent/skills/ # Project-specific skills
├── pyproject.toml # Dependencies (cu.core as optional ai extra)
├── example.env # Template (safe to commit)
├── .env # Active config (gitignored)
├── mise.toml # Task runner (serve, migrate, lint, test)
├── docker-compose.yml # Local services (PostgreSQL, Redis)
└── deploy/ # Production deployment configs

Common Mistakes

| Mistake                                        | Fix                                                                  |
| ---------------------------------------------- | -------------------------------------------------------------------- |
| Model/prompt config in settings.py             | All model/prompt/tool config belongs in contextunity.project.yaml    |
| project_id as a constructor param              | SDK reads project_id from the manifest's project.id automatically    |
| Scattered os.getenv() for service URLs         | Use ProjectBootstrapConfig.from_env() — reads all canonical env vars |
| API keys only in Router's .env                 | Project API keys go in the project's .env — synced to Shield at startup |
| rlm_api_key in gRPC payload                    | Never pass keys in payloads. Router resolves from Shield → env fallback |
| dict(response.payload)                         | Use MessageToDict(response.payload)                                  |
| from contextunity.core.sdk import BrainClient  | Use from contextunity.core import BrainClient                        |
| timeout=60 for dispatch                        | Use 120–180 s for multi-iteration ReAct agents                       |
| tool:* wildcard in production                  | Use specific tool:execute_my_sql — wildcard logs a warning           |
| from contextunity.router import ... in project | Projects MUST NOT import router internals — use gRPC + manifest      |
| cu.core as main dep                            | Use the optional ai extra — project works without AI                 |
| grpc.insecure_channel() directly               | Use create_channel_sync() from cu.core.grpc_utils — TLS-aware        |

Special Deployments

  • PII-sensitive projects use two-database architecture: clean DB (LLM-forbidden) + anonymized DB (LLM-accessible)
  • Multi-tenant projects use tenant_id for data isolation in shared services
  • Project-specific Temporal workflows register via Worker agent registry
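The two-database pattern for PII-sensitive projects can be sketched as a simple routing guard. The "clean"/"anonymized" aliases and the PermissionError are illustrative assumptions, not a prescribed implementation:

```python
# Illustrative routing guard for the two-database pattern above.
# The "clean"/"anonymized" aliases and PermissionError are assumptions.
def db_for_caller(is_llm_caller: bool, target: str) -> str:
    """Return the database alias a caller may touch."""
    if is_llm_caller and target == "clean":
        # LLM-driven code paths must never see the clean (PII) database.
        raise PermissionError("LLM callers may only query the anonymized DB")
    return target

def db_for_llm() -> str:
    # Any LLM-accessible tool resolves to the anonymized replica.
    return db_for_caller(is_llm_caller=True, target="anonymized")
```

Enforcing the split at a single choke point like this keeps the "LLM-forbidden" guarantee auditable, instead of relying on every tool remembering which connection to use.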