Enrichment — AI Content Generation

Enrichment uses LLMs to generate and enhance product content — descriptions, attributes, and wiki information. It complements Gardener (normalization) and Matcher (product linking) in the Commerce AI pipeline.

Description Generation

Generate professional, casual, or SEO-optimized product descriptions in any language.

Taxonomy Sync

Classify products into the site’s category hierarchy via Brain knowledge graph.

Wiki Enrichment

Research and populate brand pages, technology descriptions, and catalog entries.

Multi-language

Ukrainian-first (canonical) with LLM-powered translation to any target language.

Per-Task Model Configuration

Enrichment tasks use dedicated LLM models, configured per project in contextunity.project.yaml:

contextunity.project.yaml
graphs:
  nodes:
    - id: enricher
      model: "openai:gpt-5-mini"  # Description generation
    - id: wiki_researcher
      model: "openai:gpt-5-mini"  # Brand/technology content
    - id: gardener
      model: "openai:gpt-5-mini"  # Normalization (Gardener)
    - id: matcher
      model: "mercury-2"          # Product linking (Matcher)

Description Generation

Generate product descriptions via contextunity.router’s LLM pipeline:

# Via MCP tool
result = await enrich_product_description(
    product_id=42,
    language="uk",         # Ukrainian (canonical)
    style="professional",  # professional | casual | seo
)

Input context (automatically gathered):

  • Product title, categories, attributes
  • Existing description (if any)
  • Brand information

Output: 2-3 paragraphs highlighting key features, technical specs, and benefits.

Styles

Style         Use Case
professional  Product pages, catalogs — factual, structured
casual        Blog posts, social media — conversational tone
seo           Search optimization — keyword-rich, scannable

Language Support

The canonical language is Ukrainian (set in project config). Descriptions can be generated in any language — the system uses the project’s get_primary_language() to determine the default.

To generate translations after the canonical description exists:

  1. Generate canonical description in Ukrainian
  2. Use update_product() to save
  3. Generate translated versions for other languages (en, de, etc.)
  4. Save to Product.translations JSON field
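The steps above can be sketched as follows. `merge_translation` is a hypothetical helper shown only to illustrate one plausible shape of the `Product.translations` JSON field; the actual schema may differ.

```python
import json

def merge_translation(translations_json, lang, description):
    """Merge a translated description into the translations JSON field.
    Assumes translations are stored as a JSON object keyed by language code."""
    translations = json.loads(translations_json) if translations_json else {}
    translations[lang] = {"description": description}
    return json.dumps(translations, ensure_ascii=False)

# After the canonical Ukrainian description is saved via update_product(),
# translated versions are merged per target language:
stored = merge_translation(None, "en", "Lightweight waterproof jacket.")
stored = merge_translation(stored, "de", "Leichte wasserdichte Jacke.")
```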

Taxonomy Classification

Taxonomy sync pushes the site’s category tree to contextunity.brain for knowledge graph operations:

Terminal window
# Sync category taxonomy to Brain
python manage.py taxonomy_sync --domain category
# Sync color taxonomy
python manage.py taxonomy_sync --domain color

The taxonomy tree is also available via MCP:

tree = await get_taxonomy_tree(max_depth=3)
# Returns flat list of category nodes with depth and product counts
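Because the tool returns a flat list rather than a nested structure, rendering an indented tree is a single pass over the nodes. The `name`, `depth`, and `product_count` keys below are assumptions about the node shape, not a documented schema.

```python
def render_tree(nodes: list[dict]) -> str:
    """Render a flat list of taxonomy nodes as an indented text tree.
    Assumes each node carries "name", "depth", and "product_count" keys."""
    lines = []
    for node in nodes:
        indent = "  " * node["depth"]
        lines.append(f'{indent}{node["name"]} ({node["product_count"]})')
    return "\n".join(lines)

# Hypothetical nodes in the shape described above:
nodes = [
    {"name": "Clothing", "depth": 0, "product_count": 120},
    {"name": "Jackets", "depth": 1, "product_count": 45},
    {"name": "Footwear", "depth": 0, "product_count": 80},
]
print(render_tree(nodes))
```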

Enrichment Queue

Products needing enrichment are tracked by status:

Status     Meaning
raw        Newly harvested, no enrichment yet
enriching  Currently being processed
enriched   All enrichment tasks complete
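A minimal sketch of the lifecycle these statuses imply. The `enriching -> raw` edge (retry after a failed run) is an assumption on my part, not documented behaviour.

```python
# Legal status transitions implied by the table above.
ALLOWED = {
    "raw": {"enriching"},
    "enriching": {"enriched", "raw"},  # back to raw = assumed retry path
    "enriched": set(),
}

def transition(current: str, target: str) -> str:
    """Move a product to a new enrichment status, rejecting illegal jumps."""
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target
```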

Monitoring

# List products pending enrichment
queue = await list_enrichment_queue(dealer="gorgany", limit=25)
# Check enrichment status for a specific product
status = await get_enrichment_status(dealer_product_id=12345)
# Returns: normalized fields, enrichment metadata, trace_id

Wiki Enrichment

Wiki enrichment populates structured content for brands and technologies:

Brands

  • Company description and history
  • Product lines and specializations
  • Technology partnerships

Technologies

  • Technical description (Gore-Tex, Vibram, PrimaLoft, etc.)
  • Benefits and applications
  • Related products

Wiki content is managed through Wagtail CMS snippets (wiki.Brand, wiki.Technology) and enriched via the wiki_researcher node defined in the project manifest.
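As an illustration of the structured content involved, here is the brand field list above as a plain dataclass. The real wiki.Brand snippet is a Wagtail/Django model; the class and field names here are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class BrandWikiDraft:
    """Illustrative shape of enriched brand content; not the actual
    wiki.Brand snippet schema."""
    name: str
    description: str = ""
    product_lines: list[str] = field(default_factory=list)
    technology_partnerships: list[str] = field(default_factory=list)

draft = BrandWikiDraft(
    name="ExampleBrand",  # hypothetical brand
    description="Company description and history go here.",
    product_lines=["jackets", "footwear"],
    technology_partnerships=["Gore-Tex"],
)
```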

MCP Tools

Tool                        Tags                Description
enrich_product_description  enrichment, mutate  Generate product description via LLM
get_enrichment_status       enrichment, query   Check enrichment status for a product
list_enrichment_queue       enrichment, query   List products pending enrichment
get_taxonomy_tree           enrichment, query   Get category tree with product counts

Architecture

Commerce → Router → LLM Provider
│          │
│          ├── enricher node (via manifest) → description generation
│          └── wiki_researcher node (via manifest) → brand/tech research
├── enrich_product_description() MCP tool
├── taxonomy_sync management command
└── enrichment queue monitoring

Configuration

Commerce connects to the Router using auto-discovery via Redis, while the actual LLM models are configured in the Project Manifest.

Terminal window
# Commerce .env
REDIS_URL=redis://localhost:6379 # Enables auto-discovery of contextunity.router
REDIS_SECRET_KEY=... # Required if Redis encryption is enabled
# CU_ROUTER_GRPC_URL=localhost:50050 # Optional: hardcode router endpoint (bypasses discovery)

The specific models used for enrichment tasks (enricher, wiki_researcher) are mapped in the contextunity.project.yaml manifest under the graphs.nodes definitions.

File Locations

# Commerce — enrichment tools
contextunity-commerce/src/
├── mcp/tools/enrichment.py          # MCP enrichment tools
├── harvester/management/commands/
│   └── taxonomy_sync.py             # Taxonomy sync to Brain
└── wiki/models.py                   # Brand, Technology snippets
# Router — enrichment graphs (planned)
cu/router/cortex/graphs/commerce/
├── lexicon/ # Lexicon enrichment graph
└── ontology/ # Ontology enrichment graph