AI & Product Design · Vibe Coding · AI Tools · UX Design

Vibe Coding Is Not Product Design

you can now generate a full UI by describing it in words. that's incredible. it's also the fastest way to build something nobody wants to use.

Julien Hosri


Creative Managing Partner

April 2, 2026 · 6 min read
[image: a screen showing an AI chat prompt generating a UI]

a new way to build products

"vibe coding" is the term that emerged in early 2025 for prompt-driven development. you describe what you want. an AI generates it. you iterate by describing changes. the code appears. you ship.

the workflow is seductive. it feels like the future. and for certain use cases (internal tools, prototypes, personal projects), it is genuinely effective.

but for products that need to work for real users? vibe coding has a fundamental problem. it optimizes for the builder's intent, not the user's need.

the builder's lens vs. the user's lens

when you prompt an AI to "build a dashboard with user analytics," you get a dashboard. it has charts. it has metrics. it looks professional.

but the prompt didn't ask:

  • which metrics actually matter for this specific user?
  • what decisions will the user make based on this data?
  • what should happen when the data is empty? when it's loading? when there's an error?
  • should this user even see a dashboard, or would a simple summary be more useful?

these aren't technical questions. they're product questions. and they determine whether the dashboard is useful or just decorative.

vibe coding skips product questions by default. the AI gives you what you asked for, not what the user needs. and those are rarely the same thing.

the AI builds exactly what you describe. the problem is that what you describe is filtered through your assumptions about the user, and those assumptions are usually wrong.

the consistency problem

real products have consistency. the same patterns appear across every screen. the same spacing, the same interaction logic, the same component behavior. this consistency is what makes a product feel professional and learnable.

vibe-coded products almost never have this. each prompt generates something new. the button style changes between screens. the navigation behaves differently in different sections. the spacing is arbitrary because it was never defined as a system.

design systems exist for a reason. they encode decisions about how components look and behave so that every screen is consistent. when you skip the design system and generate each screen independently, you get a collection of screens that happen to live in the same app. not a product.
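to make this concrete, here's a minimal sketch of what a design system encodes in code. the names (`tokens`, `buttonStyle`, the specific values) are illustrative, not from any real codebase — the point is that spacing, color, and component styling are decided once and reused on every screen, instead of being re-invented per prompt:

```typescript
// decisions encoded once: a fixed spacing scale, a named palette,
// a single radius for controls. every screen pulls from here.
const tokens = {
  spacing: { sm: 8, md: 16, lg: 24 },              // px
  color: { primary: "#2f5cff", danger: "#d9304f" },
  radius: { control: 6 },
} as const;

type ButtonVariant = "primary" | "danger";

// every button in the product is rendered from the same definition,
// so its look can't drift between screens the way per-prompt output does.
function buttonStyle(variant: ButtonVariant): Record<string, string | number> {
  return {
    padding: `${tokens.spacing.sm}px ${tokens.spacing.md}px`,
    borderRadius: tokens.radius.control,
    background: tokens.color[variant],
    color: "#fff",
  };
}
```

generating each screen independently skips this layer entirely, which is exactly where the inconsistency comes from.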

the testing gap

the most dangerous thing about vibe coding is how it feels. it feels productive. you can generate 10 screens in a day. you can see your product taking shape in real time. the dopamine hit is real.

but generation is not validation. the question isn't "can I build this fast?" the question is "should I build this at all?"

traditional product design includes testing at every stage. wireframes are tested before high-fidelity design begins. prototypes are tested before development begins. the product is tested before launch. each round of testing catches problems early, when they're cheap to fix.

vibe coding has no testing stage. it goes from idea to implementation with nothing in between. and the problems that testing would have caught only surface after launch, when they're expensive to fix.

where vibe coding works

let's be fair. vibe coding is genuinely useful in specific contexts:

internal tools: if the user is your own team, you understand their needs directly. the testing gap is smaller because you can watch them use it and iterate in real time.

throwaway prototypes: if you need to demonstrate an idea quickly and don't plan to ship it, vibe coding is the fastest way to make something tangible. just don't confuse the prototype with the product.

component generation: if you have a well-defined design system and need to generate individual components that match the spec, AI tools are excellent. the design decisions are already made. the AI is just implementing them.

personal projects: if you're the only user, your assumptions about the user are correct by definition.

where vibe coding fails

anything with multiple user types. the moment you have two different people using the same product for different reasons, prompt-driven development breaks down. you can't hold both perspectives in a single prompt.

anything with complex flows. login, onboarding, error handling, empty states, permissions, notifications. real products have dozens of states that need to be designed. prompts generate the happy path. they miss everything else.
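the states a prompt tends to skip can be made explicit in code. a minimal sketch (the names `RemoteData` and `render` are illustrative): modelling loading, error, and empty as first-class states forces every screen to decide what it shows in each one, rather than only when the data arrives:

```typescript
// a discriminated union: the happy path ("ready") is just one of four states.
type RemoteData<T> =
  | { kind: "loading" }
  | { kind: "error"; message: string }
  | { kind: "empty" }
  | { kind: "ready"; data: T };

// the compiler requires every case to be handled, so the
// non-happy paths can't be quietly forgotten.
function render(state: RemoteData<string[]>): string {
  switch (state.kind) {
    case "loading": return "spinner";
    case "error":   return `error: ${state.message}`;
    case "empty":   return "empty state with guidance";
    case "ready":   return state.data.join(", ");
  }
}
```

a prompt that says "show the user's items" produces only the `ready` branch; the other three are product decisions someone still has to make.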

anything where retention matters. first impressions are easy. getting users to come back requires deep understanding of their goals and habits. no prompt can encode that.

the synthesis

AI tools are part of the future. nobody disputes that. the question is where in the process they belong.

they belong in the build phase. after the user research. after the flow mapping. after the design system. after the testing. when the decisions are made and validated, AI tools can generate the implementation faster than any human.

they don't belong in the definition phase. the work of understanding users, mapping flows, and making product decisions requires human judgment. it requires empathy. it requires the willingness to be wrong and the discipline to test before committing.

build fast. but define carefully. the order matters.

work with maxiphy

want us to apply this to your product? let's talk.

we offer free 30-minute discovery calls. no pitch, no commitment. just an honest conversation about your product.