Your team wants to add “just one more section” to the homepage. Marketing needs their latest campaign front and centre. Customer service is drowning in support tickets about information people can’t find (even though it’s already on three different pages of your website), and your bounce rate keeps climbing (is that good or bad?!).
Nobody can agree on what actually matters, and your job is to “fix the website”.
Sound familiar?
This is the problem top tasks analysis solves.
What top tasks analysis delivers
Top tasks is often described as “prioritisation research.” That’s true, but it’s so much more!
Done right, a top tasks analysis:
- gives you permission to say no, because your no is backed by data that stakeholders can’t argue with.
- focuses your budget on what drives results, instead of servicing ego projects and internal politics.
- validates your information architecture, so it aligns with how people really think, as opposed to how your organisation is structured.
When everyone’s convinced their department’s content deserves prime real estate on the site, a top tasks analysis provides unified direction. And you get quantifiable metrics (through task completion rates) that you can measure and improve over time.
A real life story, in case you’re not convinced
Liverpool City Council ran a top tasks survey in 2009¹ and discovered they were publishing the most content about the things citizens cared about least. The more important something was to residents, the less attention it got. That inverse relationship was almost perfect.
They deleted 80% of their website. User satisfaction went up.
Imagine that. Top tasks doesn’t just tell you what matters. It gives you the evidence to confidently remove what doesn’t.
When to use top tasks analysis
Top tasks analysis is a significant investment of time and effort, so it shouldn’t be chosen lightly. It’s a good idea when:
- Your website has become a dumping ground where every department adds content but nobody removes anything
- Navigation has become an archaeological dig through organisational history
- Stakeholders can’t agree on priorities because everyone thinks their content deserves a spot on the homepage
- Meetings go in circles and internal politics trumps user needs
- You’re redesigning and need evidence for difficult decisions, because “I think” doesn’t cut it when you’re proposing to bury someone’s pet project
- Analytics show traffic but no completion: people arrive, but they’re not converting, completing forms, or finding what they need
- You have a hunch that you’re solving the wrong problems
- Your information architecture makes sense to nobody except the people who built it. Organised by department? By format? By whatever seemed logical to the highest-paid person in the room?
Here’s a telling example: Microsoft’s Excel team noticed huge search volume for “remove conditional formatting”.² So they created a comprehensive help page. It got massive traffic and terrible satisfaction scores. After months of research, they discovered the real task wasn’t removing conditional formatting. That was a symptom. The real task was learning to use conditional formatting properly in the first place. People were searching for the fix because they’d made a mess of it.
Top tasks forces you to find the true task/problem, not just the symptom.
The two approaches: Discovery vs validation
There are two ways to run top tasks research. They serve different purposes, and you can use both, depending on what you’re trying to achieve.
1. When you’re in discovery
This approach leans toward exploration rather than validation. It’s often favoured by content teams and marketers, particularly on content-heavy sites.
How it works
You trigger an exit-intent survey on your website. Ask an open question like “What were you looking for today?”, “What brought you to the site?”, or “Did you find what you needed?”. You might show a short list of suggested tasks alongside a free-text field, but the emphasis is on discovery: you’re letting users describe their intent in their own words.
| Pros | Cons |
|---|---|
| Surfaces niche needs you might have missed | Requires manual, time-consuming analysis |
| Captures the actual language users use | Data is “messy” and harder to quantify |
| Highlights content gaps and mismatches | Low-effort responses (e.g., “help”) are common |
Use this approach when you’re early in research and need insight more than validation, you suspect users are arriving with needs you haven’t considered, you’re shaping a content strategy or discovering gaps, or you don’t yet know enough to create a meaningful shortlist.
2. When you need validation and prioritisation
This method is championed by Gerry McGovern. It’s rigorous, time-consuming, and surprisingly effective.
How it works
You present users with 50-80 tasks in random order. Ask them to select their top 5. Not rank them, just select. This forces real trade-offs. You can’t have everything. The constraint is intentional. Most users care deeply about a small number of things. Everything else is noise. By forcing users to make trade-offs, you uncover what truly matters to them.
Curate your list
Begin by scouring analytics for top searches and pages, reviewing support tickets and sales conversations, auditing competitor and peer websites, and interviewing stakeholders across departments. Find everything! You should end up with a list of 200-400 potential tasks.
Next, shortlist ruthlessly, over 5-8 collaborative sessions, by:
- Stripping verbs, so “Find a job” becomes “Jobs”.
- Removing formats such as newsletters, videos, and reports.
- Removing channels like Twitter and Facebook.
- Removing organisational units like HR, Marketing, and Sales.
- Deleting dumping grounds like Resources, Quick Links, and FAQ.
- Forgetting about audience segments like “Women’s health” versus “Men’s health”.
Keep each task under 65 characters and aim to trim your list to 50-80 final tasks.
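Most of the shortlisting rules need human judgment, but the 65-character rule is easy to automate as a quick sanity check. A minimal sketch (the candidate tasks below are made-up examples, not real survey data):

```python
# Flag shortlist candidates that break the 65-character rule.
# These candidate tasks are hypothetical examples.
candidates = [
    "Jobs",
    "Pay and benefits",
    "Understand the complete history of the organisation and every one of its departments",
]

too_long = [t for t in candidates if len(t) > 65]
for task in too_long:
    print(f"Too long ({len(task)} chars): {task}")
```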
The shortlisting process is where the real work happens. You’re forcing your organisation to have conversations it’s probably never had before. What is a task, really? Is “Events” a task, or is it a format? Is “About Us” something users want to do, or something you want to tell them? When stakeholders say “customer stories” is essential, are users actually coming to read stories, or are they looking for proof of credibility? These sessions are intense. They’re also some of the most valuable strategic work you’ll do.
Survey target users
You’ll need 100-400 responses for a balanced outcome. Present one list, randomly ordered, and ask each respondent to select their top five. After around 30 responses, the top tasks start emerging.
Analyse the results
Identify which 5-8 tasks capture the first 25% of votes, look for patterns across segments, and document the drop-off from top tasks to tiny tasks (which is usually quite dramatic).
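To make the 25% cutoff concrete, here’s a minimal sketch of that analysis step in Python. The task names and vote counts are invented for illustration; substitute your own survey results:

```python
# Identify the top tasks: the handful of tasks that together capture
# the first 25% of all votes. All vote counts below are hypothetical.
votes = {
    "Find a job": 60,
    "Learn about training and development": 48,
    "Discover pay and benefits": 40,
    "Read policies and procedures": 34,
    "Get IT support": 30,
}
# A long tail of tiny tasks, each with a handful of votes (also made up).
votes.update({f"tiny task {i}": 8 for i in range(1, 74)})

total = sum(votes.values())
cumulative = 0
top_tasks = []
for task, count in sorted(votes.items(), key=lambda kv: -kv[1]):
    if cumulative >= total * 0.25:
        break  # we've captured the first quarter of all votes
    cumulative += count
    top_tasks.append(task)

print(f"{len(top_tasks)} tasks capture the first 25% of {total} votes")
```

With these numbers, five tasks out of 78 capture a quarter of the vote, mirroring the dramatic drop-off from top tasks to tiny tasks you’d expect in real data.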
| Pros | Cons |
|---|---|
| Clear, quantifiable prioritisation showing the tasks that matter most | Dependent on the initial list quality |
| Reduces internal debate with hard data | Limited discovery of new tasks |
| High stakeholder buy-in, less opinion | Significant time investment |
What are you left with?
When you’re done, you know exactly which tasks matter most. A clear, quantifiable prioritisation based on what matters most to the people you’re trying to help. You can design navigation around these tasks, allocate budget and resources based on them, and measure task completion rates over time.
Just as importantly, you have ammunition. When someone wants to add “just one more section” to the homepage, you can point to the data and show it serves a task that got 0.3% of the vote.
Use this approach when you’re restructuring an existing site or product, you need stakeholder buy-in for difficult decisions, you want to validate your information architecture before committing to it, or you’re measuring success by task completion rates.
Most effective projects use both approaches at different stages.
Starting with content-led discovery, you’ll get to understand how users think and what language they use, before moving to UX-led validation to prioritise and make decisions. In short, discovery tells you what’s possible. Validation tells you what matters most.
How top tasks can be used to validate your information architecture
Top tasks is one of the most powerful information architecture validation tools a digital experience professional has.
Most information architecture work happens in a vacuum. You run card sorts. You do tree testing. You create taxonomies based on mental models research. All good practices. But if the category labels in your tree test or closed card sort are even a little bit wrong, you won’t be any the wiser, because you’re starting from a place that doesn’t match your users’ mental model.
Top tasks validates information architecture by answering the fundamental question: does your structure support what people actually want to do?
Let’s say your top tasks are:
- Find a job (28% of the vote)
- Learn about training and development (19%)
- Discover pay and benefits (15%)
- Read policies and procedures (12%)
- Get IT support (10%)
If your main menu is organised by department (HR, IT, Finance, Operations), you’ve got work to do. Users don’t think “I need to find something from HR.” They think “I need to understand parental leave policy.”
Top tasks gives you the hierarchy your information architecture should reflect. Primary navigation should surface your top 5-8 tasks. Secondary navigation can handle the next tier. Everything else lives in search, footer links, or gets deleted.
When you validate your information architecture with top tasks data, card sorting results make more sense because you can weight the importance of categories. Tree testing becomes more meaningful because you test paths to top tasks, not just any tasks. Navigation decisions become defensible because you can say “This task got 2% of the vote, it doesn’t deserve primary navigation.”
I’ve seen organisations spend months debating whether “Resources” should be in the main nav. Top tasks ends that conversation in one survey. If “Resources” isn’t a task users vote for, it doesn’t belong in primary navigation. Drama solved.
Note: Tiny tasks aren’t top tasks
Remember Liverpool City Council? They published more about tiny tasks and less about top tasks. This happens everywhere.
Your content team writes about what’s new, what’s interesting, what they’re proud of. Marketing pushes campaigns. Departments want visibility.
None of this correlates with user needs.
Top tasks shows you this disconnect immediately. You can create a simple matrix with task importance from top tasks survey on the X-axis and content volume or navigation prominence on the Y-axis. Plot your current site. I guarantee you’ll see the inverse relationship. The stuff users care about most gets the least attention. The stuff they don’t care about dominates.
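That inverse relationship can even be quantified before you plot it. A minimal sketch, assuming hypothetical vote shares and page counts (a Spearman rank correlation of -1 means the most important tasks get the least content):

```python
# Quantify the importance-vs-attention disconnect with a Spearman
# rank correlation. All task names, vote shares, and page counts
# below are hypothetical.
tasks = {
    "Find a job":       {"vote_share": 0.28, "pages": 12},
    "Training":         {"vote_share": 0.19, "pages": 25},
    "Pay and benefits": {"vote_share": 0.15, "pages": 40},
    "Policies":         {"vote_share": 0.12, "pages": 60},
    "News archive":     {"vote_share": 0.02, "pages": 180},
}

def rank(values):
    """Rank of each value, 0 = largest."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks

votes = [t["vote_share"] for t in tasks.values()]
pages = [t["pages"] for t in tasks.values()]
vote_ranks, page_ranks = rank(votes), rank(pages)

# Spearman's rho: -1 is a perfect inverse relationship.
n = len(votes)
d_sq = sum((a - b) ** 2 for a, b in zip(vote_ranks, page_ranks))
rho = 1 - (6 * d_sq) / (n * (n ** 2 - 1))
print(f"Spearman rho: {rho:.2f}")  # -1.00 for this made-up data
```

Plotting the same pairs on an importance-versus-prominence matrix turns that number into something stakeholders can see at a glance.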
That visualisation alone is worth the price of admission. It ends arguments.
A top tasks analysis isn’t quick work, but here’s why it’s worth the effort.
Stakeholder management gets easier
Everyone thinks their content is critical. The legal team believes “Terms & Conditions” deserves primary navigation. Marketing is convinced users are desperate for the company story. HR wants employee profiles prominently featured.
Top tasks forces these conversations into the open, which means you’ll spend more time managing internal politics than running the actual research. But that’s actually the point. These shortlisting sessions create clarity where there was confusion. Set clear ground rules from the start: we’re identifying user tasks, not defending departmental interests. Once everyone’s working from the same data, those endless circular debates about what deserves homepage placement simply stop. The data settles arguments that would otherwise drag on for months.
Alignment on priorities
Convincing people that “News” isn’t a task is surprisingly difficult. It’s a format. What do users want from news? Updates on service changes? Industry trends? Background on decisions? You’ll have versions of this conversation dozens of times. “Events” isn’t a task. “Resources” isn’t a task. “About Us” definitely isn’t a task.
People confuse organisational structure with user needs. They confuse content types with user goals. A task describes what someone wants to achieve, not how you structure your content. Once everyone gets this, something shifts. Decisions on content hierarchy get easier. The team develops a shared language and a shared understanding of what matters. You’re not just getting data, you’re building consensus.
Stronger appetite for implementation
Having the data is one thing. Getting buy-in to act on it is another. You’ll deliver results showing someone’s pet project got 0.4% of the vote. They’ll argue it’s still important. They’ll say users don’t know what they need. They’ll claim the survey missed their key audience.
This is where executive sponsorship matters. You need someone with authority to say, “The data is clear. We’re focusing on top tasks.” With that backing, top tasks becomes the end of the politics, not just more research. It gives everyone permission to focus on what matters without fear of offending departments or stakeholders.
And once you implement changes based on top tasks, the results speak for themselves. When task completion rates go up, when bounce rates go down, when customer satisfaction improves, the value becomes obvious. You’re not arguing about opinions anymore. You’re measuring outcomes.
Tips if you’re running this internally
You’re not just running research. You’re creating alignment where there was fragmentation. You’re building consensus through evidence. You’re providing cover for difficult decisions. You’re breaking political stalemates with neutral data.
The value isn’t just in the survey mechanics. It’s in getting everyone to agree on what matters.
Position the work as collaborative discovery, not departmental evaluation. Make it clear from the start that this is about user needs, not internal politics. Get executive sponsorship early. Without someone backing you when results threaten pet projects, this becomes research that nobody acts on.
Be prepared for resistance. Top tasks threatens tiny tasks. It makes clear that much of what organisations produce isn’t what users need. Some people won’t want to hear that. Your job is to help them hear it anyway.
The gift of consensus and clarity
Top tasks analysis isn’t about creating a perfect list. It’s about giving your organisation clarity and having the confidence to say no.
The best websites aren’t comprehensive. They’re focused. They do a few things extraordinarily well rather than many things poorly. They reflect user priorities, not organisational charts. They make the common case easy and the edge case possible.
That focus doesn’t come from adding more. It comes from having the evidence and the courage to remove what distracts from what actually matters.
When you can look at your website and honestly say “These are the six things our users need most, and we’ve designed around them,” you’ve done something most organisations never achieve.
¹ ² Smashing Magazine

