Full-stack AI meal planning platform built as a pnpm monorepo. NestJS handles business logic while a separate Python FastAPI service runs the AI brain — LangGraph StateGraph for meal generation, RAG pipeline with Qdrant for recipe retrieval, Langfuse for full LLM observability.
## Key Features
- LangGraph StateGraph with generation, validation, and diversity enforcement nodes
- RAG pipeline with Qdrant vector DB and USDA nutritional validation
- Real-time SSE streaming from Python → NestJS → Next.js
- Langfuse integration for token/cost tracking, trace visualization, and prompt versioning
- SEO-optimized with SSG/ISR for 95+ Lighthouse performance score
- Stripe integration for subscription billing
## Tech Stack

### Backend

### AI

### Frontend

### Infrastructure
## Challenges & Solutions
### Reliable AI Meal Generation
**Problem:** LLM outputs are non-deterministic: generated meals could contain invalid nutritional data, duplicate recipes, or violate dietary constraints.

**Solution:** A LangGraph StateGraph with dedicated nodes for generation, validation, and diversity enforcement. Each node validates its structured output against a schema before handing off to the next stage.
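The node pattern can be sketched in plain Python (illustrative only, not the project's actual LangGraph code; the meal data, node names, and schema check here are hypothetical stand-ins):

```python
from dataclasses import dataclass, field

@dataclass
class MealState:
    meals: list = field(default_factory=list)
    errors: list = field(default_factory=list)

def generate(state: MealState) -> MealState:
    # Stand-in for the LLM call; real output would come from the model.
    state.meals = [
        {"name": "Lentil Bowl", "calories": 520},
        {"name": "Lentil Bowl", "calories": 520},    # duplicate on purpose
        {"name": "Tofu Stir-Fry", "calories": -10},  # invalid macros
    ]
    return state

def validate(state: MealState) -> MealState:
    # Schema-style gate: keep only meals with plausible nutrition data.
    valid = []
    for meal in state.meals:
        if isinstance(meal.get("calories"), int) and meal["calories"] > 0:
            valid.append(meal)
        else:
            state.errors.append(f"invalid nutrition: {meal.get('name')}")
    state.meals = valid
    return state

def enforce_diversity(state: MealState) -> MealState:
    # Drop duplicate recipes by name, keeping the first occurrence.
    seen, unique = set(), []
    for meal in state.meals:
        if meal["name"] not in seen:
            seen.add(meal["name"])
            unique.append(meal)
    state.meals = unique
    return state

# Nodes run in sequence, like edges in a StateGraph.
state = MealState()
for node in (generate, validate, enforce_diversity):
    state = node(state)

print([m["name"] for m in state.meals])  # ['Lentil Bowl']
```

Because each node only reads and writes the shared state, a failed validation can route back to the generation node instead of surfacing bad output to the user.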
### Real-Time Streaming Across Services
**Problem:** Meal generation takes 10-30 seconds in the AI pipeline. Users need immediate feedback, but the response crosses three service boundaries (Python → NestJS → Next.js).

**Solution:** An SSE streaming pipeline: the Python FastAPI service streams tokens to NestJS, which proxies them to the Next.js frontend, so users watch meals being generated in real time.
### Recipe Retrieval Quality
**Problem:** Simple keyword search returned irrelevant recipes; users with specific dietary needs (keto, vegan, allergen-free) got poor matches.

**Solution:** A RAG pipeline with the Qdrant vector DB for semantic recipe retrieval, with USDA nutritional data validating macro targets. Retrieval quality improved markedly over keyword search.
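The core retrieval idea, filter by dietary metadata first and then rank by vector similarity, can be sketched with toy hand-made vectors (illustrative; the project uses Qdrant with real embeddings and payload filters, and these recipe names and vectors are invented):

```python
import math

RECIPES = [
    {"name": "Keto Chicken Salad", "tags": {"keto"},  "vec": [0.9, 0.1, 0.0]},
    {"name": "Vegan Chili",        "tags": {"vegan"}, "vec": [0.1, 0.9, 0.2]},
    {"name": "Keto Beef Bowl",     "tags": {"keto"},  "vec": [0.8, 0.2, 0.1]},
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec, diet=None, top_k=2):
    # Hard-filter by dietary tag first (Qdrant expresses this as a
    # payload filter), then rank the survivors by similarity.
    pool = [r for r in RECIPES if diet is None or diet in r["tags"]]
    pool.sort(key=lambda r: cosine(query_vec, r["vec"]), reverse=True)
    return [r["name"] for r in pool[:top_k]]

print(search([1.0, 0.0, 0.0], diet="keto"))
# ['Keto Chicken Salad', 'Keto Beef Bowl']
```

Filtering before ranking is what keeps allergen-free or vegan queries safe: a semantically close but non-compliant recipe can never outrank a compliant one.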