# Project Integration

## Model
ContextUnity separates shared services (the platform) from projects (domain applications).
- Shared services (Brain, Router, Worker, Commerce) provide the common AI runtime
- Projects live in `~/Projects/cu-projects/`, outside the monorepo
- Projects integrate via manifest + gRPC — never import service internals
- Each project has an `ai` optional dependency on `cu.core` — works without AI too
- A single `contextunity.project.yaml` manifest describes the entire integration contract
## Quick Start — Manifest-Driven Integration

### 1. Create the Manifest

Every project starts with `contextunity.project.yaml` at the project root:
```yaml
apiVersion: "contextunity/v1alpha1"
kind: "ContextUnityProject"

project:
  id: "my-project"       # Unique ID (namespace in Router)
  name: "My Project"     # Human-readable
  tenant: "my-project"   # Tenant ID for data isolation

services:
  router: { enabled: true }   # Required: LLM orchestration
  brain: { enabled: true }    # Traces, episodic memory, knowledge search
  shield: { enabled: true }   # Secret management, compliance
  zero: { enabled: true }     # PII anonymization

router:
  graph:
    id: "my-project"
    template: "sql_analytics"   # Built-in: sql_analytics, gardener, dispatcher, rag_retrieval
    nodes:
      - name: "planner"
        type: "llm"
        model: "openai/gpt-5-mini"
        model_secret_ref: "MY_API_KEY"   # Env var with API key
        prompt_ref: "src/app/prompts.py::SYSTEM_PROMPT"
        pii_masking: true
      - name: "tool_execution"
        type: "tool"
        tool_binding: "my_sql_tool"

tools:
  - name: "my_sql_tool"
    type: "sql"
    execution: "federated"   # Runs in YOUR process, not Router

policy:
  ai_model_policy:
    default_ai_model: "openai/gpt-5-mini"
  tracing_enabled: true
```

Full manifest reference: `packages/core/src/cu.core/manifest/examples/contextunity.project.yaml`
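Before bootstrap, it can be useful to sanity-check a manifest locally. A minimal sketch, assuming PyYAML is available — the SDK performs its own, fuller validation, and `load_manifest` here is a hypothetical helper, not part of the SDK:

```python
import yaml  # PyYAML

# Top-level keys every contextunity.project.yaml must carry.
REQUIRED_TOP_LEVEL = {"apiVersion", "kind", "project"}


def load_manifest(path: str) -> dict:
    """Load a contextunity.project.yaml and check the minimal required keys."""
    with open(path) as f:
        manifest = yaml.safe_load(f)
    missing = REQUIRED_TOP_LEVEL - manifest.keys()
    if missing:
        raise ValueError(f"manifest missing keys: {sorted(missing)}")
    if manifest["kind"] != "ContextUnityProject":
        raise ValueError(f"unexpected kind: {manifest['kind']}")
    return manifest
```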
### 2. Bootstrap in One Call

```python
# Django apps.py — single entry point
from django.apps import AppConfig


class ChatConfig(AppConfig):
    name = "chat"

    def ready(self):
        import chat.tools  # Loads @federated_tool definitions

        from contextunity.core.sdk.bootstrap import bootstrap_django
        bootstrap_django()
```

This single call:
- Auto-discovers and reads the manifest YAML
- Resolves `prompt_ref` directly from referenced Python modules
- Resolves `model_secret_ref` from env
- Compiles per-node data into a flat bundle via `ArtifactGenerator`
- Syncs API keys to Shield (if enabled)
- Sends `RegisterManifest` gRPC to Router
- Starts the BiDi stream executor in a background daemon thread
- Handles reconnection with exponential backoff
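The reconnection behavior in the last step can be sketched as a capped exponential backoff. This is illustrative only — the SDK's actual base, factor, and cap may differ:

```python
import time
from typing import Callable, Iterator


def backoff_delays(base: float = 1.0, factor: float = 2.0,
                   cap: float = 30.0, retries: int = 6) -> Iterator[float]:
    """Yield capped exponential backoff delays, e.g. 1, 2, 4, 8, 16, 30."""
    delay = base
    for _ in range(retries):
        yield min(delay, cap)
        delay *= factor


def reconnect_with_backoff(connect: Callable[[], bool], **kwargs) -> bool:
    """Call connect() until it succeeds, sleeping between attempts."""
    for delay in backoff_delays(**kwargs):
        if connect():
            return True
        time.sleep(delay)
    return False
```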
### 3. Define Federated Tools

The `import chat.tools` line ensures that Python evaluates your tool modules, triggering the decorators before the SDK collects them.
```python
from contextunity.core.sdk.tools import federated_tool
from contextunity.core.sdk import FederatedToolCallContext


@federated_tool("patient_sql_tool")
async def execute_patient_query(query: str, ctx: FederatedToolCallContext) -> dict:
    """All tools execute within your project process.

    The SDK automatically connects them to the Router via gRPC BiDi stream.
    """
    assert ctx.caller_tenant
    # Your database logic here...
    return {"status": "ok", "rows": []}
```

### 4. Set Environment Variables
```bash
# .env (project-level)
CU_ROUTER_GRPC_URL=localhost:50050
MY_API_KEY=sk-...
CU_PROJECT_SECRET=my-hmac-secret
```

`ProjectBootstrapConfig.from_env()` reads standard env vars automatically.
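The shape of that config object can be sketched as a small dataclass. This is an illustrative stand-in, not the real `ProjectBootstrapConfig` — only the env var names come from this document; the field names are assumptions:

```python
import os
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class EnvConfigSketch:
    """Hypothetical stand-in for ProjectBootstrapConfig — field names are
    assumptions; only the env var names are taken from this document."""
    router_grpc_url: str = field(
        default_factory=lambda: os.getenv("CU_ROUTER_GRPC_URL", "localhost:50050"))
    project_secret: Optional[str] = field(
        default_factory=lambda: os.getenv("CU_PROJECT_SECRET"))

    @classmethod
    def from_env(cls) -> "EnvConfigSketch":
        # Mirrors the from_env() pattern: everything resolved from env vars.
        return cls()
```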
## Standard Service Ports
| Service | Port | Protocol | Env Variable |
|---|---|---|---|
| Router | 50050 | gRPC | CU_ROUTER_GRPC_URL |
| Brain | 50051 | gRPC | CU_BRAIN_GRPC_URL |
| Worker | 50052 | gRPC | CU_WORKER_GRPC_URL |
| Shield | 50054 | gRPC | CU_SHIELD_GRPC_URL |
| Zero | 50055 | gRPC | CU_ZERO_GRPC_URL |
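The table above can be encoded as a small lookup. A hedged helper (the function and dict names here are illustrative, not SDK API) that resolves each service endpoint from its env var, falling back to the standard port:

```python
import os

# Standard ports from the table above; env vars override the defaults.
SERVICE_DEFAULTS = {
    "router": ("CU_ROUTER_GRPC_URL", "localhost:50050"),
    "brain": ("CU_BRAIN_GRPC_URL", "localhost:50051"),
    "worker": ("CU_WORKER_GRPC_URL", "localhost:50052"),
    "shield": ("CU_SHIELD_GRPC_URL", "localhost:50054"),
    "zero": ("CU_ZERO_GRPC_URL", "localhost:50055"),
}


def resolve_endpoint(service: str) -> str:
    """Return the gRPC endpoint for a service: env var if set, else default."""
    env_var, default = SERVICE_DEFAULTS[service]
    return os.environ.get(env_var, default)
```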
## Integration Patterns

### Pattern A: Manifest + SDK Bootstrap (Recommended)
The manifest-driven pattern is the canonical integration:
- The manifest declares what the project needs (graph, tools, models, services)
- Tools are defined via the `@federated_tool` decorator
- The SDK bootstrap handles everything automatically (`bootstrap_django` or `bootstrap_standalone`)
```python
from contextunity.core.sdk.tools import federated_tool
from contextunity.core.sdk import FederatedToolCallContext


@federated_tool("my_sql_tool")
async def execute_safe_query(sql: str, ctx: FederatedToolCallContext) -> dict:
    """Project-side tool execution — runs in YOUR process."""
    assert ctx.caller_tenant
    return run_query(sql)
```

### Pattern B: SDK Router Client (Runtime Requests)
For per-request calls to the Router (chat dispatch, agent execution), never manually open gRPC channels or mint tokens. Use the SDK clients (`SyncRouterClient` or `AsyncRouterClient`), which automatically handle metadata, token minting, and tracing headers:
```python
from contextunity.core.sdk.clients.router import SyncRouterClient


def call_ai(messages: list, user_id: str):
    # The client automatically mints a short-lived ContextToken,
    # manages the gRPC channel, and resolves platform settings.
    with SyncRouterClient() as client:
        result, metrics = client.execute_agent(
            graph_name="my-project",
            payload={"messages": messages},
            metadata={"user_id": user_id},
        )
        return result
```

### Pattern C: Brain SDK (Knowledge Store)
```python
from contextunity.core import BrainClient

brain = BrainClient(host="localhost:50051")

# Search
results = await brain.search(tenant_id="my-project", query_text="...", limit=10)

# Upsert
item_id = await brain.upsert(tenant_id="my-project", content="...", source_type="document")
```

## API Key Secret Flow
API keys declared via `model_secret_ref` in manifest nodes are handled automatically:
| Shield Status | Flow | Security |
|---|---|---|
| Shield ON (production) | SDK → Shield PutSecret → Router gets keys from Shield at LLM call time | ✅ Keys never in bundle |
| Shield OFF (dev) | SDK includes keys inline in bundle → Router stores in memory | ⚠️ Warning emitted |
| Single-tenant | Router uses its own env keys → no model_secret_ref needed | ✅ Simplest setup |
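The three rows above reduce to a simple decision. A hypothetical sketch of that logic — the function and its return values are illustrative only, not SDK or Router code:

```python
def api_key_source(shield_enabled: bool, single_tenant: bool) -> str:
    """Pick where the Router gets model API keys, per the table above."""
    if single_tenant:
        return "router-env"      # Router uses its own env keys
    if shield_enabled:
        return "shield"          # Keys synced to Shield; never in the bundle
    return "inline-bundle"       # Dev only: keys travel inline, warning emitted
```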
## Token Permissions Reference
| Operation | Required Permissions | Notes |
|---|---|---|
| Register manifest | tools:register:{tenant} | SDK handles this automatically |
| Open BiDi stream | stream:executor | For federated tool execution |
| Dispatch (AI execution) | router:execute, tool:{name}, brain:read | Specific tool permission (not wildcard!) |
| Brain search | brain:read | |
| Brain write | brain:read, brain:write | |
Security model: always-on ContextToken-based auth — HMAC for open-source, Ed25519/Shield for enterprise.

`tool:*` wildcard works but logs a warning — use a specific `tool:{name}` in production.
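As an illustration of the dispatch row above, the permission set for an AI execution call could be assembled like this (a sketch only — actual token minting is handled by the SDK, and this helper is hypothetical):

```python
def dispatch_permissions(tool_names: list) -> list:
    """Build the permission list for a dispatch call: router:execute,
    brain:read, and one specific tool:{name} per tool — never tool:*."""
    perms = ["router:execute", "brain:read"]
    perms += [f"tool:{name}" for name in tool_names]
    # The wildcard works but logs a warning in production, so refuse it here.
    assert "tool:*" not in perms
    return perms
```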
## Project Structure Convention
```
cu-projects/<name>/
├── contextunity.project.yaml   # ← Manifest (source of truth)
├── src/                        # Application code
│   ├── <project>/              # Django/FastAPI project
│   ├── chat/                   # AI chat app (apps.py, views.py, prompts.py)
│   └── manage.py
├── AGENTS.md                   # Agent guidelines
├── .agent/skills/              # Project-specific skills
├── pyproject.toml              # Dependencies (cu.core as optional ai extra)
├── example.env                 # Template (safe to commit)
├── .env                        # Active config (gitignored)
├── mise.toml                   # Task runner (serve, migrate, lint, test)
├── docker-compose.yml          # Local services (PostgreSQL, Redis)
└── deploy/                     # Production deployment configs
```

## Common Mistakes
| Mistake | Fix |
|---|---|
| Model/prompt config in `settings.py` | All model/prompt/tool config belongs in `contextunity.project.yaml` |
| `project_id` as a constructor param | SDK reads `project_id` from the manifest's `project.id` automatically |
| Scattered `os.getenv()` for service URLs | Use `ProjectBootstrapConfig.from_env()` — reads all canonical env vars |
| API keys only in Router's `.env` | Project API keys go in the project's `.env` — synced to Shield at startup |
| `rlm_api_key` in gRPC payload | Never pass keys in payloads. Router resolves from Shield → env fallback |
| `dict(response.payload)` | Use `MessageToDict(response.payload)` |
| `from contextunity.core.sdk import BrainClient` | Use `from contextunity.core import BrainClient` |
| `timeout=60` for dispatch | Use 120–180 s for multi-iteration ReAct agents |
| `tool:*` wildcard in production | Use a specific `tool:execute_my_sql` — the wildcard logs a warning |
| `from contextunity.router import ...` in a project | Projects MUST NOT import Router internals — use gRPC + manifest |
| `cu.core` as a main dep | Use the optional `ai` extra — the project works without AI |
| `grpc.insecure_channel()` directly | Use `create_channel_sync()` from `cu.core.grpc_utils` — TLS-aware |
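The `dict(response.payload)` pitfall arises because protobuf `Struct` payloads are not plain mappings; `google.protobuf.json_format.MessageToDict` converts them recursively. A minimal sketch using a standalone `Struct`, not an actual Router response:

```python
from google.protobuf.struct_pb2 import Struct
from google.protobuf.json_format import MessageToDict

# A Struct payload of the kind carried inside gRPC responses.
payload = Struct()
payload.update({"status": "ok", "rows": [{"id": 1}]})

# MessageToDict converts nested Struct/ListValue fields into plain
# Python dicts and lists; dict(payload) would not.
data = MessageToDict(payload)
```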
## Special Deployments
- PII-sensitive projects use two-database architecture: clean DB (LLM-forbidden) + anonymized DB (LLM-accessible)
- Multi-tenant projects use `tenant_id` for data isolation in shared services
- Project-specific Temporal workflows register via the Worker agent registry
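The two-database pattern in the first bullet can be sketched as a guard at connection-selection time. The DSNs and helper below are illustrative assumptions, not project config — real projects would load them from settings or `.env`:

```python
# Hypothetical DSNs — names and values are illustrative only.
CLEAN_DSN = "postgresql://localhost/clean"       # LLM-forbidden: raw PII
ANONYMIZED_DSN = "postgresql://localhost/anon"   # LLM-accessible: PII stripped


def select_dsn(caller_is_llm_path: bool) -> str:
    """Route LLM-facing code to the anonymized DB; everything else may use
    the clean DB. The clean DSN must never be reachable from an LLM path."""
    return ANONYMIZED_DSN if caller_is_llm_path else CLEAN_DSN
```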