Principles I keep coming back to
Dream big, stay grounded
My whole career has been 0-to-1, in startups and inside big companies. People have always told me I'm a dreamer with my feet on the ground — and that balance is why it works. Dreaming lets me see what could exist; staying grounded in user insight and what can actually ship is what makes the vision real. Neither half works alone.
Think in systems
If I've done something twice, it should be automated; if something broke, the system broke (not just the bug); if a pipeline gets used, it should get better with each use. I ship features — and I ship the system that makes the next feature easier.
Let users lead
Building has become cheap and fast — the real differentiation now is the person who knows their users best, and the insights no one else has decoded. Decoding what people actually mean (not just what they say) is the puzzle I love most, and the killer insight is what unlocks everything downstream.
Assume the model will fail
I build every pipeline with evals from day one, cross-model review, human gates on creative outputs, and failure-mode tests per output type, so drift gets caught before users ever see it. This is the part of the craft I think most teams still underinvest in.
AI as partner, not replacement
AI works best for me as a thought partner: something that helps me think better, not something that thinks for me. The judgment about what to build, what to kill, and what to notice that nobody else flagged is the work I want to keep closest, because it's the part that doesn't delegate.
Product
"Vibe coding has a ceiling. I built the bridge to Claude Code."
A VS Code extension for people who've hit the ceiling on Lovable, Replit, Bolt.
I was vibe coding using Replit, burning $300–400/month on credits, watching my engineer friends gently tell me my code wasn't shippable. I wanted to move to VS Code, but the jump was too big. I realized I wasn't the only one stuck in that gap, so I built the bridge.
- Port your project. 10 minutes, start to finish: GitHub, local install, Supabase, Vercel, and the integrations you need.
- One-click controls. VS Code that feels like a playground: preview, save, and launch buttons right where you expect them.
- Teach what you actually need. 20 lessons that close the vibe-coder-to-real-dev gap: GitHub, error handling, code quality, god files, and the rest.
- How to ship production-ready code through vibe coding. Proving it on my own code was the whole point.
- How to iterate positioning from real user signal. The Reddit pipeline (next card) helped me home in on the true benefits.
- How to ship evals from day 1. Not something to bolt on after a feature ships.
Built with: TypeScript · VS Code Extension API · Next.js · Supabase · Tailwind · Vercel · Claude Code
→ letsvibecheck.ai
Process
A product development pipeline I built to ship production-ready code as a vibe coder.
The question I kept getting stuck on: how do I move from "this works on my machine" to actual shipping? The answer wasn't reinventing the wheel — it was applying normal product dev rigor (PRD → plan → build → review → ship) and gluing the steps together with opinionated AI agents, peer review at every stage, and an Airtable memory layer that ties the whole thing to my roadmap.
- PRD. I describe what I want; an agent drafts the PRD; we iterate on the key decisions; cross-model peer review before sign-off.
- Plan. A detailed implementation plan with phases, checkpoints, and a testing checklist; technical decisions surfaced in plain language; cross-model peer review.
- Mockup. HTML mockups generated from the PRD before implementation begins: visual alignment catches gaps specs miss.
- Build. Claude Code does the heavy lifting; I stay in the loop on judgment.
- Verify. Reconciles output against the PRD and the plan: catches scope drift; checks spec compliance, accessibility, brand.
- Code review. 8-lens deep review (security, performance, edge cases, accessibility, complexity, etc.) before shipping.
- Ship. Final gate: readiness checks, rollback plan, changelog, help center docs, README/CLAUDE.md updates, merge.
- Every PRD, plan, and review writes back to Airtable.
- Features auto-generate from session notes; status syncs across the pipeline (Verify → "Built", Ship → "Live").
- The pipeline isn't a series of one-shots: it's wired to a persistent roadmap I can search, re-prioritize, and learn across features.
- Peer reviews: added when I noticed I was missing technical issues on first pass.
- Mockups: added when specs alone weren't enough to catch UX misunderstandings.
- Verify: added when I noticed implementation drifting from PRDs without anyone catching it.
- Airtable sync: added when I kept losing track of great future feature ideas.
- Overnight autonomous runs: pipeline can build and review while I sleep.
Built with: Claude Code · 8 custom skills · Gemini peer review · Airtable (memory layer)
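The status sync above can be sketched as a small state map. This is a minimal illustration, not the pipeline's actual code: the Airtable write is replaced by an in-memory Map, and every name besides the Verify → "Built" and Ship → "Live" transitions from the card is an assumption.

```typescript
// Sketch of the stage-to-roadmap-status sync. The Airtable memory layer is
// stubbed with a Map; only the gating stages change a feature's status.

type Stage = "prd" | "plan" | "mockup" | "build" | "verify" | "review" | "ship";

interface FeatureRecord {
  name: string;
  status: string;
}

// Stages that move roadmap status when they complete (from the card above);
// the default "In progress" status is an illustrative assumption.
const statusForStage: Partial<Record<Stage, string>> = {
  verify: "Built",
  ship: "Live",
};

// Stand-in for the Airtable memory layer.
const roadmap = new Map<string, FeatureRecord>();

function completeStage(feature: string, stage: Stage): FeatureRecord {
  const record = roadmap.get(feature) ?? { name: feature, status: "In progress" };
  const next = statusForStage[stage];
  if (next) record.status = next; // non-gating stages leave status untouched
  roadmap.set(feature, record);
  return record;
}

completeStage("one-click preview", "build");  // status stays "In progress"
completeStage("one-click preview", "verify"); // status becomes "Built"
completeStage("one-click preview", "ship");   // status becomes "Live"
```

The point of keeping the map this small is that each pipeline stage stays ignorant of the roadmap schema; only the sync layer knows which stages are gates.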
Process
A pipeline that surfaces 30–50 vibe coders a day who have the problem I'm solving.
I wanted to know if Vibecheck was solving a real problem, and I wanted a real list of users I could actually reach. Surveys would have filtered them through my assumptions; I wanted the raw signal. So I built a scraper and synthesis loop across vibe-coder subreddits that surfaces people who are stuck right now.
- Scans for stuck signals. 30–50 posts/day across vibe-coder subreddits, filtered for credit burn, refactor walls, and platform-switching threads.
- Clusters themes weekly. Rising ones get flagged so I notice patterns before they're obvious.
- Compounds into the roadmap. Insights write back to my discovery doc so they accumulate across features instead of evaporating.
- How to validate a thesis with real user signal. The pipeline confirmed the Vibecheck problem before I wrote a line of copy, and the signal kept shaping the roadmap once I shipped.
- How to build an end-to-end data pipeline. My first, which pulled me through scraping without API keys, rate-limit handling, local embeddings + clustering, and structured output.
- How to use continuous user validation as a feedback loop. Running this shifted my positioning twice and keeps me from sideways-drifting on features.
Built with: TypeScript · Custom Reddit scraper (redditless) · Claude Agent SDK · Local embeddings (@xenova/transformers) · K-means clustering · Airtable · Postgres
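The weekly clustering step can be sketched with a bare k-means pass over embedding vectors. In the real pipeline the vectors come from a local model via @xenova/transformers; this standalone version with inline toy vectors is illustrative, not the pipeline's code.

```typescript
// Minimal k-means sketch over pre-computed embedding vectors. The toy
// vectors stand in for sentence embeddings of scraped Reddit posts.

type Vector = number[];

function distance(a: Vector, b: Vector): number {
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

function mean(vectors: Vector[]): Vector {
  const dim = vectors[0].length;
  const out = new Array(dim).fill(0);
  for (const v of vectors) for (let i = 0; i < dim; i++) out[i] += v[i];
  return out.map((x) => x / vectors.length);
}

function kMeans(points: Vector[], k: number, iterations = 20): number[] {
  // Seed centroids from the first k points (fine for a sketch).
  let centroids = points.slice(0, k);
  let assignments: number[] = [];
  for (let iter = 0; iter < iterations; iter++) {
    // Assign each point to its nearest centroid.
    assignments = points.map((p) => {
      let best = 0;
      for (let c = 1; c < k; c++) {
        if (distance(p, centroids[c]) < distance(p, centroids[best])) best = c;
      }
      return best;
    });
    // Recompute each centroid as the mean of its members.
    centroids = centroids.map((old, c) => {
      const members = points.filter((_, i) => assignments[i] === c);
      return members.length ? mean(members) : old;
    });
  }
  return assignments;
}

// Two obvious theme groups, e.g. credit-burn posts vs. refactor-wall posts.
const embeddings: Vector[] = [
  [1, 0], [0.9, 0.1], [0.95, 0.05],
  [0, 1], [0.1, 0.9], [0.05, 0.95],
];
const labels = kMeans(embeddings, 2);
```

Cluster sizes per week are what get compared to flag rising themes; the clustering itself stays this simple.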
Product
A content automation system for my fashion Instagram — now also a public sales site.
I'm a fashion content creator, and tracking sales across hundreds of brands was eating my time. So I built a system that scans brand emails, organizes the good sales, and streamlines posting to Instagram. I ended up opening it up as a public site too, which earns affiliate income as a side effect.
- Inbox → organized. Brand emails scanned automatically: the good sales surface to the top, ready to use.
- IG-ready. Post-setup that makes sharing a sale a single action, not a copy-paste marathon across tabs.
- Public-facing site. Same system, exposed as a site where readers can shop: affiliate links make it a small passive revenue stream.
- How to choose between RAG, direct LLM extraction, and other methods. The email scanner forced me to reason about each, because the wrong call scales the errors.
- How to ship in slices, not full visions. WSS is where I learned to break features down instead of packing everything in, and where my PRD + code-review process first took shape.
- How to run first-round testing autonomously with AI. Faster feedback loops and fewer regressions before anything hits human review.
Built with: TypeScript · React · Prisma · Anthropic + OpenAI + Gemini · Langfuse · Gmail API · Railway
→ wellspentstyle.com
Process
An agent that pulls the killer insights from my interviews — and grades my interviewing.
One of my rules: if I'm doing something twice, I automate it. I was running interviews and manually doing all the work — pulling killer insights, logging all the data, drafting thank-you emails. So I built an agent that does it in one pass, with the Mom Test built into the coaching layer.
- Killer insights & pain points. The takeaways worth acting on, and the actual pain (what they have, not what they say they want).
- Contradictions. Stated vs. revealed beliefs. Almost always the highest-signal output, and the easiest thing to miss as a human.
- ICP fit scoring. Stops me from acting on insights from users who aren't actually my audience: the most common research mistake.
- Interview quality coach. Mom Test best practices applied to my own interviews: talk-time ratio, question types, missed opportunities. Graded so I can actually improve.
- Thank-you email. Personalized follow-up, ready to send.
- How to architect parallel agent pipelines. Five specialized analyses running concurrently for a single artifact.
- How to route models by task. Haiku for simple matching, Sonnet for analysis-heavy steps; the split saves real money at real interview volume.
- How to design for the interviewer, not just the insights. The coaching dimension compounds across every interview after.
Built with: TypeScript · Claude Agent SDK · Sonnet + Haiku (parallel + cost-optimized) · Zod schemas · Postgres · Airtable
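The parallel-analysis pattern with per-task model routing can be sketched like this. The model calls are stubbed; the function names and the routing table are illustrative assumptions, not the agent's actual API.

```typescript
// Sketch: five specialized analyses run concurrently over one transcript,
// each routed to a model tier by how analysis-heavy the task is.

type ModelTier = "small" | "large";

// Cheap matching goes to the small tier, deep analysis to the large tier
// (in the real pipeline: Haiku vs. Sonnet).
const routing: Record<string, ModelTier> = {
  insights: "large",
  contradictions: "large",
  icpFit: "small",
  coaching: "large",
  thankYou: "small",
};

// Stub: a real implementation would call the model API for the chosen tier.
async function runAnalysis(task: string, transcript: string): Promise<string> {
  const tier = routing[task];
  return `${task} (${tier} model): analyzed ${transcript.length} chars`;
}

async function analyzeInterview(transcript: string): Promise<Record<string, string>> {
  const tasks = Object.keys(routing);
  // All five analyses fan out against the same artifact and resolve together.
  const results = await Promise.all(tasks.map((t) => runAnalysis(t, transcript)));
  return Object.fromEntries(tasks.map((t, i) => [t, results[i]]));
}

analyzeInterview("I said I wanted dashboards, but I never open the one I have.")
  .then((report) => console.log(report));
```

Because the analyses are independent reads of the same transcript, `Promise.all` is all the orchestration needed; the routing table is the only place cost decisions live.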
Process
A 15-step content pipeline integrated into our Airtable system that compressed new event creation from days to an hour.
Breakout was selling events faster than we could create new ones. Each new event took 2–3 days of manual content work — script, run-of-show, slide deck, prep emails — so the catalog couldn't keep up with demand. I built a pipeline directly into our Airtable system that compressed the whole production loop to about an hour.
- 15 surfaces, one input. Run-of-show, script, tech notes, slide deck, site copy, promo image, prep and follow-up emails: all generated from the event concept.
- Integrated, not bolted on. Built directly into our Airtable system, so the team could see and edit every step (no separate tool, nothing to migrate).
- Human gates everywhere. Three options at every creative output; a human picks the best.
- How to decompose creative work into small leaps. The model couldn't go from concept to script in one step, but concept → run-of-show → script worked because each leap was small enough. Same pattern I use across every pipeline today.
- How to build pipelines that change what a business can do, not just how fast. 90% time savings was real, but the bigger win was new formats and faster reaction to people's needs that the old way couldn't support. That's a big part of why we got acquired.
- How to design pipelines around what people enjoy doing. Getting rid of tedious manual production was as much of a win as the speed. The team actually enjoyed using it.
Built with: Airtable · OpenAI GPT-4 · OpenAI image model · Google Docs · Airtable Automations · Custom Airtable scripts (2023 stack)
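The small-leaps decomposition can be sketched as a chain of narrow generation steps, each feeding the next. The generation call is stubbed and the prompts are illustrative; the point is the shape, concept → run-of-show → script rather than one concept-to-script jump.

```typescript
// Sketch of the "small leaps" pattern: chained narrow generation steps.

interface Step {
  name: string;
  prompt: (input: string) => string;
}

// Two leaps from the card above; real chains would add slides, emails, etc.
const chain: Step[] = [
  { name: "run-of-show", prompt: (c) => `Turn this event concept into a run-of-show:\n${c}` },
  { name: "script", prompt: (ros) => `Write a host script following this run-of-show:\n${ros}` },
];

// Stub: a real pipeline would send the prompt to a model and return its text.
function generateStep(prompt: string): string {
  return `[generated from: ${prompt.slice(0, 30)}...]`;
}

function runChain(concept: string): Record<string, string> {
  const outputs: Record<string, string> = {};
  let current = concept;
  for (const step of chain) {
    current = generateStep(step.prompt(current)); // each leap builds on the last output
    outputs[step.name] = current;
  }
  return outputs;
}

const outputs = runChain("80s trivia night for a remote team");
// outputs["run-of-show"] feeds outputs["script"]; no single concept-to-script jump.
```

Human gates slot in naturally between leaps: each intermediate output is a reviewable artifact, which a single concept-to-script call never gives you.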
Product
A custom phone case paired with a live wallpaper that matched the design.
All our data said phone cases mattered a lot for whether people thought a Nexus or Pixel was cool — but volumes were too small to convince accessory makers to build many. I came up with Live Case to fix it: a custom case paired with a matching live wallpaper that used Google products like Maps and Photos. I built the team, pitched it through Google, and shipped it as a real business.
- Custom case for Nexus and Pixel. 33 SKUs across 5 markets, each one designed by the user from their own Google content.
- Customized through Google Photos and Maps. Pull a photo or a map view, drop it on a case, get it shipped to your door.
- Matching live wallpaper + NFC button. The case's design followed users back to the device: the wallpaper paired automatically, the NFC tap launched contextual actions.
- How hardware demands a different kind of discipline. Once a case shipped, it shipped. I still build with that upfront-work mindset today, including how I think about evals on AI products.
- How to coordinate across many surfaces and teams to ship one experience. Case, customization platform, wallpaper engine, NFC, and supply chain were all separate systems that had to work as one product.
- How to turn user data into product strategy. The case insight was the wedge; the rest was figuring out how to ship it without the inventory risk that had blocked everyone else.
Built with: Custom hardware design · NFC integration · Android Live Wallpaper · Customization web platform · Google Photos + Maps APIs · Supply chain partnerships
Teaching
An AI training and development consultancy with 100+ custom GPTs and workflow automations across 30+ companies.
Running Spark Club alongside Breakout was the fastest way I found to learn AI. Teaching forced me to be precise about things I'd been doing on instinct — and I watched 90% of participants apply what we built within a week.
- 100+ custom GPTs and workflow automations. Built collaboratively with teams on their actual problems, not generic templates from AI Twitter.
- Training programs at 30+ companies. 90% of participants applied skills immediately.
- Modern education methods, not corporate training. Growth mindset, play, and hands-on experimentation: designed for real humans encountering AI for the first time.
- How to teach AI in a way that compounds. Prompt building blocks (context, reasoning, rules, examples), chain-of-thought decomposition, and multi-model cross-review are the patterns I now use across every pipeline I build.
- How to make AI adoption stick. Let the naysayers be heard. They're usually right about a specific risk. Acknowledge it, address it, and they become informed allies instead of blockers.
- How far behind most people actually are. You can't lead AI adoption until you understand the gap between AI Twitter and the rest of the world. Meet people where they are, not where you wish they were.
Built with: ChatGPT (custom GPTs) · Claude · Multi-model cross-review · Custom workflow automations