AI meal planning platform powered by LangGraph StateGraph for meal generation, RAG pipeline with Qdrant for semantic recipe retrieval, and Langfuse for full LLM observability. Structured output validation ensures dietary constraints are enforced across non-deterministic LLM outputs.
Key Features
- LangGraph StateGraph with generation, validation, and diversity enforcement nodes
- RAG pipeline with Qdrant vector DB for semantic recipe retrieval
- Structured output validation ensuring dietary constraints across non-deterministic LLM outputs
- Langfuse integration for token/cost tracking, trace visualization, and prompt versioning
- USDA nutritional data validation for macro targets
- Arize Phoenix for LLM evaluation and debugging
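The structured-output validation mentioned above can be sketched in plain Python. The schema, diet names, and exclusion lists below are illustrative stand-ins, not the project's actual models:

```python
# Illustrative sketch (not the project's real schema): checking an
# LLM-generated meal against dietary constraints before accepting it.
from dataclasses import dataclass

@dataclass
class Meal:
    name: str
    calories: int
    ingredients: list

# Hypothetical exclusion lists; a real system would use a richer taxonomy.
DIET_EXCLUSIONS = {
    "vegan": {"chicken", "beef", "egg", "milk"},
    "keto": {"rice", "bread", "sugar"},
}

def validate_meal(meal: Meal, diet: str) -> list:
    """Return a list of constraint violations (empty list means valid)."""
    errors = []
    if meal.calories <= 0:
        errors.append("calories must be positive")
    banned = DIET_EXCLUSIONS.get(diet, set())
    for ing in meal.ingredients:
        if ing.lower() in banned:
            errors.append(f"'{ing}' violates {diet} constraints")
    return errors

meal = Meal("Tofu stir-fry", 520, ["tofu", "rice", "broccoli"])
print(validate_meal(meal, "vegan"))  # []
print(validate_meal(meal, "keto"))   # ["'rice' violates keto constraints"]
```

Returning a violation list rather than a boolean lets the pipeline feed the errors back to the LLM for a corrected retry.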
Tech Stack
AI: LangGraph, Qdrant, Langfuse, Arize Phoenix
Backend: Python (FastAPI), NestJS
Frontend: Next.js
Infrastructure
Challenges & Solutions
Reliable AI Meal Generation
LLM outputs are non-deterministic: generated meals could contain invalid nutritional data, duplicate recipes, or violate dietary constraints.
LangGraph StateGraph with dedicated nodes for generation, validation, and diversity enforcement. Each node validates structured output against schemas before passing to the next stage.
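That generate → validate → enforce-diversity flow can be sketched with plain functions and a dict state. In the real project these are LangGraph StateGraph nodes joined by conditional edges; the node bodies, state keys, and candidate meals below are stand-ins:

```python
# Minimal stdlib sketch of the node flow; not the project's actual code.
# Each "node" takes the shared state dict, annotates it, and returns it.

# Deterministic stand-in for the LLM: first attempt repeats a recipe,
# second attempt is diverse, so the retry loop has something to do.
CANDIDATES = [
    ["oatmeal", "oatmeal", "salad"],  # attempt 1: duplicate recipe
    ["oatmeal", "soup", "salad"],     # attempt 2: all distinct
]

def generate(state):
    state["meals"] = CANDIDATES[min(state["attempt"], len(CANDIDATES) - 1)]
    state["attempt"] += 1
    return state

def validate(state):
    # Stand-in for schema validation of the structured output.
    state["valid"] = all(isinstance(m, str) and m for m in state["meals"])
    return state

def enforce_diversity(state):
    # Reject plans that repeat a recipe.
    state["diverse"] = len(set(state["meals"])) == len(state["meals"])
    return state

def run_pipeline(state, max_retries=3):
    for _ in range(max_retries):
        state = enforce_diversity(validate(generate(state)))
        if state["valid"] and state["diverse"]:
            return state
        # In LangGraph this is a conditional edge routing back to generation.
    return state

result = run_pipeline({"attempt": 0})
print(result["meals"])  # ['oatmeal', 'soup', 'salad'] after one retry
```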
Recipe Retrieval Quality
Simple keyword search returned irrelevant recipes. Users with specific dietary needs (keto, vegan, allergen-free) got poor matches.
RAG pipeline with Qdrant vector DB for semantic recipe retrieval. USDA nutritional data validates macro targets. Retrieval quality improved significantly over keyword-based search.
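The retrieval idea can be illustrated with a toy cosine-similarity search. The vectors and recipe names below are hand-made stand-ins for the embeddings the real pipeline stores in Qdrant:

```python
# Toy semantic retrieval: rank recipes by cosine similarity between
# embedding vectors. Real embeddings are high-dimensional and produced
# by an embedding model; these 3-d vectors are illustrative only.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

RECIPES = {
    "keto chicken bowl":  [0.9, 0.1, 0.0],
    "vegan lentil curry": [0.1, 0.9, 0.2],
    "chocolate cake":     [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k recipe names most similar to the query embedding."""
    ranked = sorted(RECIPES, key=lambda n: cosine(query_vec, RECIPES[n]),
                    reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # ['keto chicken bowl']
```

Unlike keyword matching, a query embedding for "low-carb dinner" lands near keto recipes even when the words never co-occur, which is why the semantic approach handles constrained diets better.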
Real-Time Streaming Across Services
Meal generation takes 10-30s in the AI pipeline. Users need immediate feedback, but the response crosses three services (Python → NestJS → Next.js).
SSE streaming pipeline where Python FastAPI streams tokens to NestJS, which proxies them to the Next.js frontend. Users see meals being generated in real-time.
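The wire format each hop forwards can be sketched as plain Server-Sent Events frames. The token list stands in for the LLM stream, and the `[DONE]` terminator is an assumption for illustration, not necessarily the project's protocol:

```python
# Sketch of the SSE frames the FastAPI service emits and NestJS proxies
# unchanged to the Next.js frontend. Each frame is "data: <payload>\n\n";
# the blank line terminates the event per the SSE format.
def sse_frames(tokens):
    """Yield one SSE frame per token, then an assumed [DONE] sentinel."""
    for tok in tokens:
        yield f"data: {tok}\n\n"
    yield "data: [DONE]\n\n"

frames = list(sse_frames(["Grilled", " salmon"]))
print(frames[0])  # data: Grilled
```

Because the frames are plain text over a kept-open HTTP response, NestJS can proxy them byte-for-byte without buffering, and the browser's `EventSource` API parses them natively.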