# ContextSpatial
ContextSpatial will extend the ContextUnity ecosystem into the physical world, bridging AI agents with voice, location, vision, and augmented reality.
## Planned Capabilities

### Voice & Telephony
- LiveKit SFU — Real-time audio streaming via WebRTC
- Native Voice LLMs — Direct bindings to the OpenAI Realtime API and Gemini Live bidirectional streaming, bypassing the traditional STT → LLM → TTS stack
- Semantic Interrupts — Voice agents maintain active conversational state with ContextRouter, firing tool operations mid-conversation
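Since the ContextRouter integration is still at the planning stage, the semantic-interrupt idea can only be sketched. The snippet below is an illustrative, self-contained stand-in: it scans streaming transcript chunks for spoken cues and yields tool operations while the utterance is still in progress. All names (`Interrupt`, `TRIGGERS`, the tool identifiers) are hypothetical, not part of any shipped API.

```python
# Hypothetical sketch: a voice agent scans the live transcript stream and
# fires tool operations mid-conversation without closing the turn.
# The cue-to-tool mapping and tool names are illustrative only.
from dataclasses import dataclass
from typing import Iterator

@dataclass
class Interrupt:
    tool: str        # tool operation to fire
    trigger: str     # transcript chunk that triggered it

# Illustrative mapping of spoken cues to tool operations.
TRIGGERS = {
    "look that up": "brain.search",
    "where am i": "geo.locate",
}

def scan_stream(chunks: Iterator[str]) -> Iterator[Interrupt]:
    """Yield tool interrupts as transcript chunks arrive; the
    conversation keeps flowing around them."""
    for chunk in chunks:
        lowered = chunk.lower()
        for cue, tool in TRIGGERS.items():
            if cue in lowered:
                yield Interrupt(tool=tool, trigger=chunk)

# Usage: the interrupt fires on the middle chunk, mid-utterance.
live = ["So as I was saying,", "could you look that up for me,", "thanks"]
fired = list(scan_stream(iter(live)))
```

In a real deployment the trigger step would be a semantic classifier on the voice LLM's output rather than keyword matching; the shape of the loop (stream in, interrupts out, conversation uninterrupted) is the point.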
### Geospatial Intelligence
- PostGIS Spatial Indexing — Proximity search, geofencing, and polygon operations
- Location-Aware Search — Geographic dimensions added to Brain semantic search
- Map Enrichment — Geocoding, reverse geocoding, and POI resolution
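The first two primitives above can be illustrated in pure Python, with no PostGIS dependency: great-circle proximity search (roughly what PostGIS `ST_DWithin` provides) and ray-casting point-in-polygon geofencing (roughly `ST_Contains`). This is a sketch for intuition only; production use would delegate both to PostGIS spatial indexes. The coordinates in the usage notes are arbitrary examples.

```python
# Pure-Python sketches of two planned geospatial primitives.
import math

def haversine_km(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))  # mean Earth radius in km

def within(points, center, radius_km):
    """Proximity search: all points within radius_km of center."""
    return [p for p in points if haversine_km(p, center) <= radius_km]

def in_polygon(pt, polygon):
    """Geofence test: ray casting over (lat, lon) polygon vertices."""
    lat, lon = pt
    inside = False
    for (lat1, lon1), (lat2, lon2) in zip(polygon, polygon[1:] + polygon[:1]):
        # Count edge crossings of a ray cast from the point.
        if (lon1 > lon) != (lon2 > lon):
            if lat < (lat2 - lat1) * (lon - lon1) / (lon2 - lon1) + lat1:
                inside = not inside
    return inside
```

For example, `within([(52.39, 13.06)], (52.52, 13.405), 50)` keeps the point (the two locations are under 30 km apart), while a 10 km radius filters it out; `in_polygon((0.5, 0.5), [(0, 0), (0, 1), (1, 1), (1, 0)])` reports the point inside the unit-square geofence.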
### Vision & Recognition
- Visual Search — Image-based product and entity recognition
- Document Processing — OCR and document understanding pipelines
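A document-understanding pipeline is, structurally, a chain of stages: OCR, text cleanup, then field extraction. The sketch below shows that composition with a stubbed OCR stage, since the actual engine is not yet chosen; a real pipeline would call an OCR engine such as Tesseract there. The stage names, the fake invoice text, and the extracted field are all illustrative.

```python
# Hedged sketch of a staged document pipeline: OCR -> cleanup -> extraction.
import re
from typing import Callable

Stage = Callable[[str], str]

def pipeline(*stages: Stage) -> Stage:
    """Compose stages left to right into a single document processor."""
    def run(doc: str) -> str:
        for stage in stages:
            doc = stage(doc)
        return doc
    return run

def ocr_stub(image_path: str) -> str:
    # Stand-in for a real OCR call; returns fake recognized text.
    return f"INVOICE no. 1234\ntotal:  42.50 EUR  (from {image_path})"

def normalize(text: str) -> str:
    # Collapse runs of spaces/tabs so downstream regexes stay simple.
    return re.sub(r"[ \t]+", " ", text)

def extract_total(text: str) -> str:
    # Pull one structured field out of the recognized text.
    m = re.search(r"total: ([\d.]+)", text)
    return m.group(1) if m else ""

process = pipeline(ocr_stub, normalize, extract_total)
```

With the stub in place, `process("scan.png")` yields the extracted total `"42.50"`; swapping `ocr_stub` for a real OCR call leaves the rest of the chain unchanged, which is the design point of the staged composition.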
### AR/VR
- Spatial Computing — Augmented reality overlays powered by ContextBrain knowledge