Validation

Test the real thing, not a picture of it.

What Is Validation?

Validation is the process of testing your solution against reality. Not "do I like it?" or "does my team approve?" but "does this actually solve the problem for the people who have it?"

You built a prototype. You think it works. Now you find out. Validation is the moment of truth — the experiment that tells you whether to keep building, pivot, or start over.

This is uncomfortable. Your idea might be wrong. Your assumptions might be flawed. The thing you spent time building might not work. That discomfort is the point. Finding out now, with a prototype, is infinitely cheaper than finding out after you have built the whole product.

Validation is not about proving you are right. It is about finding out what is true. The most valuable outcome is learning what you got wrong — because now you can fix it.

Usability Testing

The most powerful validation method is also the simplest: watch someone use your thing. Not explain it. Not demo it. Watch them, silently, as they try to accomplish a task.

Sit next to someone (or share your screen remotely). Give them a task: "You found a recipe you like. Save it to your collection." Then be quiet. Do not help. Do not explain. Watch what they do.

Where do they hesitate? What do they click first? Do they find the right path? When they get stuck, what do they try? Every hesitation is a design flaw. Every wrong click is a labeling problem. Every moment of confusion is an insight worth more than a hundred opinions.

Five users is enough. After five usability tests, you will have found roughly 85% of the usability problems. This research-backed number comes from the Nielsen Norman Group, and it has held up across decades of testing.
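The 85% figure traces back to Nielsen and Landauer's model of problem discovery: if each test user independently reveals a fraction λ of the problems (about 31% in their published data), then after n users you expect to have found 1 − (1 − λ)^n of them. A quick sketch of that curve; the 0.31 rate is their average, and your product's rate may differ:

```python
# Nielsen & Landauer's problem-discovery model.
# lam is the fraction of usability problems one test user
# reveals; 0.31 is the average from their published data.
def problems_found(n_users, lam=0.31):
    """Expected share of problems found after n_users tests."""
    return 1 - (1 - lam) ** n_users

for n in range(1, 8):
    print(f"{n} users: {problems_found(n):.0%}")  # 5 users: 84%
```

Notice the diminishing returns: the fifth user adds far less than the first, which is why running several small rounds of five beats one big round of twenty.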

The Task-Based Approach

Never ask "what do you think?" Ask "can you do this?" Opinions are unreliable. Performance is measurable.

Write three to five tasks before the test. Each task should correspond to a core job your product serves. "Add this recipe to your collection." "Find a recipe you saved last week." "Remove a recipe you no longer want."

For each task, observe: Did they complete it? How long did it take? Did they need help? Did they take the expected path? What surprised them?
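One lightweight way to keep those observations comparable across users is to record the same fields for every task attempt. A sketch; the field names and structure here are illustrative, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class TaskObservation:
    """One user's attempt at one task. Field names are illustrative."""
    task: str
    completed: bool
    seconds: float
    needed_help: bool
    expected_path: bool
    notes: list[str] = field(default_factory=list)

def completion_rate(observations):
    """Share of attempts completed without regard to time or help."""
    return sum(o.completed for o in observations) / len(observations)

obs = TaskObservation(
    task="Add this recipe to your collection",
    completed=True,
    seconds=42.0,
    needed_help=False,
    expected_path=False,
    notes=["Tried the heart icon before finding 'Save'"],
)
```

A paper form or spreadsheet with the same columns works just as well; the point is that every attempt answers the same five questions.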

After all the tasks are done, you can ask for opinions. "What was the hardest part?" "What would you change?" "Would you use this?" At this point, their opinions are grounded in actual experience, not hypothetical imagination.

Metrics That Matter

For a prototype, you do not need analytics dashboards. You need answers to three questions:

Can they use it? Task completion rate. If three out of five users cannot save a recipe without help, the core interaction is broken.

Do they want it? After using it, would they use it again? Are they asking when it will be ready? Or are they politely saying "that is nice" and never thinking about it again?

What is broken? A ranked list of problems, ordered by severity. Severity = frequency times impact. A problem that every user hit and that blocked them from completing a task is more severe than a cosmetic issue one person noticed.
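The severity rule above can be applied mechanically once each problem has a frequency (how many of your five users hit it) and an impact score. The 1–3 impact scale below is an assumed convention for illustration, not part of the lesson:

```python
# Rank observed problems by severity = frequency × impact.
# Assumed impact scale: 3 = blocked the task,
# 2 = caused a detour or delay, 1 = cosmetic annoyance.
problems = [
    {"problem": "Could not find the Save button", "frequency": 5, "impact": 3},
    {"problem": "Search hidden behind a menu",    "frequency": 3, "impact": 2},
    {"problem": "Off-brand icon color",           "frequency": 1, "impact": 1},
]

for p in problems:
    p["severity"] = p["frequency"] * p["impact"]

ranked = sorted(problems, key=lambda p: p["severity"], reverse=True)
for p in ranked:
    print(f"{p['severity']:>2}  {p['problem']}")
```

The exact scale matters less than using the same one for every problem, so the ranking reflects evidence rather than whichever issue you remember most vividly.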

Types of Feedback

Not all feedback is equal. Learn to categorize it:

Behavior feedback is what people do. They clicked the wrong button, they could not find the search, they gave up on step three. This is the most reliable signal.

Preference feedback is what people want. "I wish it had dark mode." "Can you add categories?" These are feature requests. They are useful input but not commands — you decide whether they align with the product vision.

Opinion feedback is what people think. "It looks nice." "I do not like the color." Opinions are the least reliable signal. They are influenced by mood, context, and desire to be polite.

Weight your decisions toward behavior, inform them with preferences, and be cautious with opinions.

Pivot, Persevere, or Iterate

After validation, you have three options:

Persevere: The core concept works. Users completed the tasks, wanted to use it again, and the problems were minor. Keep building. Move to shipping.

Iterate: The concept has potential but the execution needs work. Users wanted the product but struggled with specific interactions. Go back to prototyping, fix the issues, and validate again.

Pivot: The core concept does not work. Users could use it but did not want to. The job you identified is not important enough, or your approach to solving it misses the mark. Go back to ideation or even research.

The hardest decision is the pivot. You have invested time and emotion. But pivoting with a prototype is cheap. Pivoting with a launched product is expensive. This is exactly why you validate before you ship.

Exercise

Test with Three People

Take your prototype from the previous lesson. Write three tasks that a user should be able to accomplish. Find three people (friends, family, colleagues — anyone who is not you). Give them the tasks one at a time and watch them work. Do not help unless they are truly stuck. After all tasks, ask: "What was the hardest part?" and "Would you use this again?" Write down every problem you observed, ranked by severity. You now have a validated, prioritized list of what to fix next.