Dane Pamuspusan
Dossier № 05 · Startup MVP · Vibecoded v0.3 · 2026

Crumb.

Snap your fridge. Get three recipes you can actually cook tonight.

Role: Solo founder + builder
Status: Beta · 30 testers
Duration: 3 weekends
Stack: Next.js · Claude · Supabase

Australians throw out roughly $2,500 of food per household every year — and the most common reason isn’t laziness, it’s decision fatigue. You open the fridge at 6:47pm, see half a zucchini, leftover rice, a sad bunch of coriander, and order Uber Eats. Crumb is a one-photo answer to that moment.

How it works

  • Step 1 — Snap. Open the camera, photograph what’s in your fridge or pantry. No tagging, no typing.
  • Step 2 — Detect. A vision model identifies ingredients and rough quantities, with a quick “looks right?” review screen.
  • Step 3 — Cook. Three recipes ranked by how few extra ingredients you’d need (often zero), each tagged with estimated time, difficulty, and a “use it before it goes off” urgency flag.
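
The ranking in step 3 is deliberately simple. Here's a minimal sketch of the heuristic in TypeScript; the types, field names, and the two-day urgency threshold are stand-ins for illustration, not the production schema:

```ts
// Hypothetical types; the real schema lives in Supabase/Postgres.
interface DetectedIngredient {
  name: string;
  daysUntilExpiry: number; // rough estimate confirmed on the review screen
}

interface Recipe {
  id: string;
  title: string;
  ingredients: string[];
  minutes: number;
  difficulty: "easy" | "medium" | "hard";
}

interface RankedRecipe {
  recipe: Recipe;
  missingCount: number; // extra ingredients you'd have to buy
  urgent: boolean;      // uses something about to go off
}

// Rank by fewest missing ingredients; break ties in favour of recipes
// that use something flagged as close to expiry.
function rankRecipes(
  fridge: DetectedIngredient[],
  recipes: Recipe[],
  topN = 3,
): RankedRecipe[] {
  const have = new Set(fridge.map((i) => i.name.toLowerCase()));
  const urgentNames = new Set(
    fridge
      .filter((i) => i.daysUntilExpiry <= 2) // threshold assumed for this sketch
      .map((i) => i.name.toLowerCase()),
  );

  return recipes
    .map((recipe) => {
      const needed = recipe.ingredients.map((n) => n.toLowerCase());
      const missingCount = needed.filter((n) => !have.has(n)).length;
      const urgent = needed.some((n) => urgentNames.has(n));
      return { recipe, missingCount, urgent };
    })
    .sort(
      (a, b) =>
        a.missingCount - b.missingCount || Number(b.urgent) - Number(a.urgent),
    )
    .slice(0, topN);
}
```

A recipe with zero missing ingredients and an expiring item wins outright, which is exactly the "cook this tonight" behaviour I wanted.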

Why “vibecoded”?

Crumb is my first solo product MVP, built end-to-end in three weekends. I didn’t write a PRD, didn’t draw wireframes — I described the experience to Claude, prototyped, used the app myself for dinner, hated half of it, and iterated. It’s a deliberate experiment in how much one data-minded person can ship now that AI handles the boilerplate.

The discipline I kept from the analytics side: every screen earns its place with data. I instrumented funnels from day one (snap → review → recipe-select → start-cooking) and used drop-offs to decide what to fix next, not vibes.
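
The instrumentation itself is just a thin wrapper around posthog-js. The event names below mirror the funnel steps but are illustrative rather than the exact production schema:

```ts
import posthog from "posthog-js";

// Key and host are placeholders; PostHog reads events from posthog.capture().
posthog.init("<project-api-key>", { api_host: "https://us.i.posthog.com" });

export function trackSnap(photoId: string) {
  posthog.capture("snap", { photoId });
}

export function trackReview(photoId: string, ingredientCount: number, edited: boolean) {
  posthog.capture("review", { photoId, ingredientCount, edited });
}

export function trackRecipeSelect(photoId: string, recipeId: string, missingCount: number) {
  posthog.capture("recipe_select", { photoId, recipeId, missingCount });
}

export function trackStartCooking(recipeId: string) {
  posthog.capture("start_cooking", { recipeId });
}
```

Four events, one funnel chart in PostHog, and every "should I build this?" argument turns into "where are people dropping off?"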

The stack

Next.js 15 · TypeScript · Tailwind CSS · Claude Sonnet 4.6 · Supabase (Auth + Postgres) · Vercel · PostHog

What I built myself vs with AI

  • Me: the product idea, the data model, the funnel instrumentation, the recipe-ranking heuristic, every UX decision.
  • AI: 80% of the React/TS scaffolding, the Tailwind polish, and the first-pass prompt engineering for the vision pipeline.
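
For context on that vision pipeline, here's a trimmed-down sketch of the detection call using the Anthropic TypeScript SDK. The model id and the prompt are stand-ins; the real prompt took several passes to get reliable quantities:

```ts
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Model id assumed from the stack list above.
const MODEL = "claude-sonnet-4-6";

// Ask the vision model for ingredients + rough quantities as JSON,
// which feeds the "looks right?" review screen.
export async function detectIngredients(jpegBase64: string) {
  const response = await client.messages.create({
    model: MODEL,
    max_tokens: 1024,
    messages: [
      {
        role: "user",
        content: [
          {
            type: "image",
            source: { type: "base64", media_type: "image/jpeg", data: jpegBase64 },
          },
          {
            type: "text",
            text:
              "List every food ingredient visible in this photo with a rough quantity. " +
              'Reply as JSON: [{"name": string, "quantity": string}]. JSON only.',
          },
        ],
      },
    ],
  });

  const block = response.content[0];
  return block.type === "text" ? JSON.parse(block.text) : [];
}
```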

The interesting realisation: AI was excellent at writing code and terrible at knowing what to write. Product judgement — what to cut, what’s worth a third pass, when “good enough” is genuinely good enough — stayed firmly with me.

Early results

  • 62% of users who snap a photo end up selecting a recipe (target was 40%).
  • 28% of selectors hit “start cooking” — the next funnel step I’m focused on.
  • Most-requested feature from the first 30 testers: dietary filters (vegetarian, halal, gluten-free) — already in v0.3.

What’s next

Two real questions before this stops being a side project. One: can the recipe-ranking model learn a household’s actual taste from a few thumbs-ups, not just match ingredients? Two: is the right wedge audience uni students (cheap, time-poor) or young families (waste-conscious, planning-poor)? The data from the next 200 testers will decide.