I don't add AI
to your product.
I design products
around AI.
Most engineering teams treat LLMs as a layer on top of existing architecture. That's the wrong starting point. I design systems where the model is the operating core — with real tool access, structured reasoning, and autonomous action built in from day one.
Let's Talk →
Systems that think. Software that acts.
The gap isn't capability.
It's architecture.
Foundation models are powerful enough to replace entire workflow layers — but only if your system is designed to let them.
What this looks like
in practice
End-to-end from first commit to production — AI-native from the ground up.
Reasoning systems, not chatbots
Products where an LLM has genuine tool access — to your databases, APIs, and external services — and uses it autonomously. Users state intent. The system acts.
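The core of that pattern is a loop: the model either calls a tool or answers. A minimal sketch, with all names (`call_model`, the tool set) as illustrative assumptions rather than any specific product's code:

```python
import json

# Illustrative tool registry — in a real system these hit your databases and APIs.
TOOLS = {
    "query_orders": lambda customer_id: [{"id": 1, "status": "shipped"}],
}

def call_model(messages):
    # Placeholder for a real LLM call. Assumed to return either
    # {"tool": name, "args": {...}} or {"answer": text}.
    raise NotImplementedError

def run_agent(user_intent, call_model=call_model, max_steps=5):
    """Loop until the model produces an answer or the step budget runs out."""
    messages = [{"role": "user", "content": user_intent}]
    for _ in range(max_steps):
        step = call_model(messages)
        if "answer" in step:
            return step["answer"]
        # Execute the requested tool and feed the result back to the model.
        result = TOOLS[step["tool"]](**step["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "Stopped: step limit reached."
```

The step budget matters: an autonomous loop needs a hard ceiling so a confused model can't spin indefinitely against your APIs.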
Serverless, built for models
Streaming, WebSocket-based interfaces. Async background agents. Multi-model architectures that use the right model for the right task. Infrastructure that doesn't bottleneck your AI.
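"Right model for the right task" can be as simple as a routing table keyed by task type. A sketch under assumed names — the model tiers and task labels here are illustrative, not a recommendation:

```python
# Route each task class to a model tier by cost, latency, and capability.
# Model names and task labels are illustrative assumptions.
ROUTES = {
    "extract": "small-fast-model",   # structured extraction: cheap, low latency
    "chat":    "mid-tier-model",     # interactive turns: streaming-friendly
    "plan":    "frontier-model",     # multi-step reasoning: highest capability
}

def pick_model(task: str) -> str:
    """Fall back to the interactive tier for unrecognized task types."""
    return ROUTES.get(task, ROUTES["chat"])
```

The point of making routing explicit is that it becomes a tunable surface: when a cheaper model gets good enough for a task class, you change one line, not an architecture.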
Architecture that doesn't age badly
Model selection, RAG design, agentic loop architecture, tool schema design. I work with engineering leads early — the decisions made in week two are the ones you live with in year two.
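Tool schema design is a good example of a week-two decision. A sketch of one such schema, shaped like the function-calling formats common across LLM APIs — the tool, its fields, and the validator are hypothetical:

```python
# A hypothetical tool definition in the JSON-Schema style used by
# common function-calling APIs. Field names are illustrative.
SEARCH_FLIGHTS_TOOL = {
    "name": "search_flights",
    "description": "Find flights between two airports on a date.",
    "parameters": {
        "type": "object",
        "properties": {
            "origin": {"type": "string", "description": "IATA code, e.g. SFO"},
            "destination": {"type": "string", "description": "IATA code"},
            "date": {"type": "string", "format": "date"},
        },
        "required": ["origin", "destination", "date"],
    },
}

def missing_args(tool, args):
    """Return required arguments the model failed to supply, so the
    system can re-prompt instead of executing a malformed call."""
    return [k for k in tool["parameters"]["required"] if k not in args]
```

Descriptions in the schema are prompt surface area: the model reads them to decide when and how to call the tool, which is why they deserve the same care as user-facing copy.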
Built,
not theorized.
Everything here comes from systems I've designed and shipped — streaming agentic interfaces, async background reasoning pipelines, multi-model architectures with structured output extraction, real-time tool-calling over live data.
I don't consult on AI from the outside. The stack I recommend is the stack I build with.
An AI-native travel research app. Multi-turn agentic conversations with live web search, streaming responses, async preference extraction, and structured plan generation.
Building something that actually needs to think?
If you're an engineering leader who's done with demo-ware and wants LLMs doing real work inside your product — let's get into it.
Work With Me →