How to Prioritize Strategic Initiatives — When Everything Is a Priority
You’re juggling a mountain of great ideas.
You’re overwhelmed, caught between wanting to show progress and fearing wasted effort on the wrong initiative. That chaos—constant pivots, competing demands, and fuzzy goals—means you don’t just need a to-do list; you need a simple, defensible way to decide which big idea to tackle next.
This article shares that system: the VIPS Framework, an easy, evidence-based tool to help you and your team prioritize strategic initiatives. It’s not about guessing. It’s a clear, logical process to spot the projects with the biggest wins and lowest risk.
💡
Key Takeaways
Prioritize Problems, Not Solutions: The first and most critical step is to rank the underlying user problems you could solve, not the potential solutions.
Use Leading Indicators: Don't try to guess the final outcome. Instead, score potential problems on observable, evidence-based factors that reliably predict future success.
The VIPS Framework: The best problems to solve are those that score highly across complementary pillars: Value (Is this problem worth solving?), Influence (Will solving it be supported?), Pathway (Can we solve it cost-effectively?), and Strength (What is the strength of our evidence for these scores?).
Adopt an Efficient Process: First, prep your data and process. Next, scan your top problems for Value and Influence. Finally, tie-break the finalists using Pathway and Strength, so you focus your deep-dive efforts on only the most promising candidates.
The Challenge of the Fuzzy "North Star" When Prioritizing Strategic Initiatives
In our model, we focus on one north star: helping the organization achieve a clear Leadership Win. This is about making real progress toward its mission-critical goals and cementing its position as an industry leader.
We begin with a “Functional Win”—solving a tough, unmet need that helps the organization do its core work better. That win is the strongest sign a Leadership Win is possible, because projects that deliver real impact and strong support naturally boost leadership standing.
Because these wins can feel vague, especially in fast-paced organizations still defining them, our VIPS framework targets leading indicators—the tangible, observable signals available now—to turn fuzzy goals into a clear, evidence-based path.
As you advance your initiative, the exact nature of these wins will become clearer through alignment with peers and executives.
💡
Think of the criteria as a living framework.
As we work together and learn, we’ll keep checking: Are these the right signals? What should we update for next time? That way, the framework just gets smarter and more useful with every project.
The VIPS Framework: How to Prioritize Strategic Initiatives
To find the user problems that will deliver a "win," we will score each one against four complementary pillars, which we'll call the VIPS Framework. This ensures we choose a problem that is not just interesting, but the foundation of a successful project.
Value: Is this problem truly a valuable one worth solving?
Influence: Will major influencers support or champion it — or will it be sabotaged?
Pathway: How quick and easy is the path to a tangible win by solving this problem?
Strength: How strong is the evidence supporting our scores?
Wait, What Are We Scoring? Problems Before Solutions
Many excellent product frameworks, like the Kano Model or UTAUT2, are designed to evaluate and prioritize potential solutions. For example, the Kano Model helps you categorize features (like a "dark mode") as Must-Haves or Delighters. Similarly, UTAUT2 helps you predict if users will adopt a specific new technology you show them.
These are critical tools, but they are often used too early. The riskiest decision a team can make is to fall in love with a solution before they have fallen in love with the problem.
💡
Focusing on the problem first might feel slow, but it’s what unlocks lasting momentum—for three big reasons:
Risk Reduction. If you skip this step, you risk building a shaky foundation that can’t be fixed later (most projects never get a do-over).
Creativity. Zeroing in on the core issue and its root causes lets you come up with more creative, affordable solutions—instead of just running with the first idea that pops up.
Trust. When you show you truly get your customer’s challenges from the start, you earn their trust right away—and become a partner, not just another vendor with a canned fix.
A Simple Process to Prioritize Strategic Initiatives
To make this process as efficient as possible, we will break it into a few steps:
Prep your list and process
Scan for core Value and Influence
Tie-break on feasibility (Pathway and Strength)
This process is designed to help us quickly narrow down our options and focus our deep-dive efforts only on the most promising candidates.
Step 1: Prep Your List & Process
The quality of our decision is only as good as the quality of our inputs. We get quality when we have a manageable list of true user problems. Otherwise, this exercise will be overwhelming and we won’t do it.
Narrow Your List. Gather a manageable list of real user problems (around 10-20) focused on a specific audience, like an organization or function.
What to do if this isn’t possible
Have your top sponsor — or the teammate with the deepest internal context — scan the full list and use their judgment to pick the most important subset.
Convert to Problems (if applicable). Turn those into true problem statements that reflect user frustrations—not solutions. A problem statement involves a user and a specific frustration or pain they are feeling in their current workflow. A problem statement is not simply a lack of a solution.
Examples of problem statements
Less Productive:
“Project managers need to put their task list in their calendar.”
This statement is disguised as a problem but is actually a solution. It's already decided that "integrating a task list with a calendar" is the answer. It doesn't explore the user's actual frustration, which could potentially be solved in many other, possibly better, ways.
More Productive:
“As a project manager, I get frustrated when I have to manually duplicate tasks from my project plan into my calendar to block out time, which is time-consuming and leads to errors if I forget to update both.”
This good example is powerful because it doesn't presume a solution. The team could now brainstorm multiple potential solutions: a one-click sync button, an intelligent assistant that suggests calendar blocks, a completely new type of integrated view, etc. It opens up creative possibilities by focusing on the real user problem.
Assign an “Empowered Scorer.” This is someone who drafts initial scores with notes, invites feedback, adjusts scores based on input, and finalizes rankings. They document everything for transparency and future reference.
What about a live workshop?
A live workshop is a common way to build consensus, and it can be great for open-ended brainstorming. However, for the specific task of scoring, it has two major drawbacks:
It favors opinions over evidence. In a live meeting, the loudest or most senior voice can often dominate, and decisions can be made based on pressure rather than a thoughtful review of the data.
It is highly inefficient. It requires blocking many hours of synchronous time from your entire team, making it a slow and expensive way to work through a detailed rubric.
For these reasons, we strongly recommend the async "Empowered Scorer" model. It ensures feedback is thoughtful, evidence-based, and documented, leading to a faster, smarter, and more defensible decision.
Step 2: The Core Scan (Value & Influence)
We will first score our manageable list of problems on their Value and Influence. This quickly tells us which problems are both important to solve and have the support needed to succeed.
1) Value (Cost of Not Solving the Problem)
The Core Question: Is this a problem people desperately need solved?
Rubric
✔️
To make scoring easier, we use a simple checklist.
For each "Yes," the problem gets +1.
Each problem can score up to 5 total points (one point per criterion).
Evidence Checklist (+1 / +0 definitions)
High Impact (Cost): Does this problem have a clear, high cost in terms of wasted time, money, or resources?
+1: Yes, there is a clear, large, and quantifiable dollar, time, or mission/program cost.
+0: The cost is minimal or hard to define.
High Impact (Breadth): Does this problem affect a large number of people in their workflow?
+1: Yes, it affects a whole department or multiple teams.
+0: It only affects a small group or is an edge case.
High Unmet Need: Have users explicitly stated this is a top-three pain point in their workflow, one for which they have no good existing solution?
+1: Yes, we have direct quotes from our audience stating this is a top 3 pain point.
+0: It's an issue, but not a top-of-mind one for users.
Active Workarounds: Is the team already trying to solve this problem with inefficient, manual workarounds they’re dissatisfied with?
+1: Yes, they're constantly building and reworking their own spreadsheets, checklists, or other "duct tape" solutions to cope.
+0: No, the current workaround (or lack thereof) is stable and sufficient.
Burning Platform: Is there a critical, near-term event or deadline that makes solving this problem urgent now?
+1: Yes, there is a clear external driver.
+0: No, it’s internally driven without any major consequences if missed.
Examples
High-Scoring Example ("Manually compiling weekly compliance reports"): Scores a 5/5.
It's a high-cost problem for the entire compliance team (Breadth), the Chief Compliance Officer has called it a top priority (Unmet Need), they use complex, error-prone spreadsheets (Active Workarounds), and a new regulatory audit is scheduled (Burning Platform).
Low-Scoring Example ("Updating internal team contact information"): Scores a 1/5.
While it affects many people (Breadth), the cost of the problem is low, no one has requested a better solution, and there is no urgent deadline.
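The "+1 per Yes" checklist mechanics above can be sketched in a few lines of Python. This is purely illustrative: the criterion keys and the answers for the two worked examples are hypothetical labels for the rubric rows, not part of any real tool.

```python
# Hypothetical sketch of the "+1 per Yes" checklist scoring described above.
# Criterion names are illustrative shorthand for the five Value rubric rows.

VALUE_CRITERIA = [
    "high_impact_cost",
    "high_impact_breadth",
    "high_unmet_need",
    "active_workarounds",
    "burning_platform",
]

def score_checklist(answers: dict[str, bool], criteria: list[str]) -> int:
    """Each 'Yes' earns +1; a problem can score up to len(criteria) points."""
    return sum(1 for criterion in criteria if answers.get(criterion, False))

# The high-scoring compliance-reports example from the text: all five apply.
compliance_reports = {criterion: True for criterion in VALUE_CRITERIA}
print(score_checklist(compliance_reports, VALUE_CRITERIA))  # 5

# The low-scoring contact-info example: only breadth applies.
contact_info = {"high_impact_breadth": True}
print(score_checklist(contact_info, VALUE_CRITERIA))  # 1
```

Because a missing answer defaults to "No" (+0), the scorer only has to record the criteria where the evidence clearly supports a "Yes."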
2) Influence (Political Support & Uniqueness)
The Core Question: Will solving this be supported, or will it be sabotaged?
Rubric
Evidence Checklist (+1 / +0 definitions)
Active Champion: Is there any influential leader or individual contributor who is actively and passionately asking for this solution?
+1: Yes, there is an internal leader actively advocating to solve this.
+0: There is no clear, influential champion.
Executive-Level Sponsor: Is the primary champion a senior executive whose support carries significant weight?
+1: Yes, the champion has significant decision-making authority or influence (e.g., senior VP or executive level).
+0: The champion is a manager or individual contributor.
Strategic Alignment: Does this use case directly support a currently funded, high-priority public initiative?
+1: Yes, it clearly addresses a persistent challenge for a top-three and active strategic priority.
+0: The alignment is weak or non-existent.
Unique Approach: Does solving it offer a unique or innovative approach that enhances our leadership position?
+1: Yes, it offers a unique angle that no one else, or very few others, is pursuing.
+0: No, this approach is a standard "keeping the lights on" task.
Low Political Risk: Does this use case avoid threatening the turf, budget, or pride of other powerful stakeholders, and is it free of hidden political dependencies?
+1: Yes, it's a "win-win" or an internal efficiency tool, and it is politically self-contained.
+0: No, it will likely create political conflict or requires another team’s buy-in.
Examples
High-Scoring Example ("Manually compiling weekly compliance reports"): Scores a 5/5.
The CCO is an active, senior champion, it aligns with the "audit readiness" goal, and it's an internal tool with low political risk.
Low-Scoring Example ("Updating internal team contact information"): Scores a 1/5.
While strategically aligned, it has no single champion and actively threatens the turf of every department, making it politically toxic.
Step 3: The Feasibility Tie-Breaker
For the handful of top-scoring problems from the prior step, we will then use Execution Feasibility (Pathway) and our Confidence Score as the final tie-breakers to select our #1 priority.
3) Pathway (Execution Feasibility)
The Core Question: How quickly and easily can we get a tangible win by solving this problem?
Rubric
Evidence Checklist (+1 / +0 definitions)
Strengths Alignment: Does our team (and/or chosen technology stack) have a proven track record of solving this type of problem?
+1: Yes, this is directly in our wheelhouse or is a strong fit for the capabilities of our core technology.
+0: No, this is new territory for us or is unlikely to be a problem our technology can easily solve.
Low Workflow Complexity: Is this a simple problem or use case, involving minimal data or team complexity?
+1: Yes, it is a simple, well-established workflow that requires little data sharing or cross-team alignment.
+0: No, it requires novel or complex workflows.
Easy Data Access: Can we get easy access to the high-quality data needed for this use case?
+1: Yes, the data is available and accessible.
+0: No, getting the data will be a major hurdle.
Easy People Access: Can we get easy access to the target users and stakeholders needed for feedback?
+1: Yes, they are available and willing to engage.
+0: No, they are hard to reach or have no time.
Low Technical Complexity: After a high-level check with the tech team, is the problem free of major technical dependencies, complexities, or risky assumptions?
+1: Yes, it is technically self-contained and simple, relying on established tech and minimal implementation effort.
+0: No, another major project must be completed first, or it requires novel or complex technical work.
Examples
High-Scoring Example ("Manually compiling weekly compliance reports"): Scores a 4/5.
It aligns with the team's strengths, is technically simple, and people are accessible. The only risk is a potential minor delay in data access from one system.
Low-Scoring Example ("Updating internal team contact information"): Scores a 1/5.
It's a new area for the team, requires complex real-time data feeds they don't have access to, and has a major prerequisite of a brand new tool the team has never used and needs to get budget for.
4) Strength (Confidence in Your Scores)
The Core Question: How much of this is fact, and how much is a guess? This is a lens through which we view the other scores. A high-scoring but low-evidence idea is a risky bet.
Rubric
Evidence Checklist (+1 / +0 definitions)
1. Initial Hypothesis (No Data): Does the idea exist as at least a gut feeling or an internal-only opinion?
+1: Yes, it's an initial hypothesis. Every clearly articulated idea earns this baseline point.
+0: N/A
2. Secondary Data: Is the idea supported by research or the experiences of other organizations?
+1: Yes, we have market research or case studies.
+0: No, we have no external supporting data.
3. Verbal Interest: Have we spoken directly to our target users and confirmed this is a top pain point?
+1: Yes, we have direct quotes from recent interviews from our target customer.
+0: No, we are assuming this is a pain point.
4. Initial Testing: Have we seen users take a small but meaningful action that indicates they want this?
+1: Yes, we have an artifact from an initial test (e.g., prototype feedback).
+0: No, we only have their words, not their actions.
5. Repeated Action: Have we seen a large number of users repeatedly take the desired action?
+1: Yes, we have quantitative data showing repeated, desired behavior (e.g., pilot usage).
+0: No, we have only seen one-time or isolated actions.
Examples
Low-Scoring Example ("Streamlining the Current Grant Writing Process"): Scores a 1/5.
This is a pure "gut feeling" idea with no secondary research, user interviews, or testing to support it. It is a high-risk, unvalidated guess.
Medium-Scoring Example ("Manually compiling weekly compliance reports"): Scores a 3/5.
We have a gut feeling it's a good idea (+1), we know other companies have done this (+1), and we have direct quotes from our compliance team saying they desperately need it (+1). However, we have not yet shown them a prototype.
High-Scoring Example (after validation via pre-selling or other approaches): Scores a 5/5.
Suppose you later take the compliance-reports idea and validate it, gathering higher-quality evidence against the rubric above. You now have the initial idea (+1), market data (+1), direct user requests (+1), positive prototype feedback (+1), and data showing high daily use of the current dashboard (+1). Confidence in this idea is extremely high.
💡
If a promising idea scores low because of guesses, don't discard it completely.
Instead, add it to your backlog. Use the rubric as your research plan to generate better evidence to support it.
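Putting the whole process together, the core scan (Step 2) followed by the feasibility tie-breaker (Step 3) can be sketched as a short sort-and-filter routine. This is a minimal, hypothetical sketch: the `Problem` record, the backlog entries, and the shortlist size are all illustrative assumptions, not a prescribed implementation.

```python
# Illustrative sketch of the two-step process: scan on Value + Influence,
# then tie-break the shortlist on Pathway and Strength.
# The Problem record and all backlog entries/scores are hypothetical.

from dataclasses import dataclass

@dataclass
class Problem:
    name: str
    value: int      # 0-5 from the Value rubric
    influence: int  # 0-5 from the Influence rubric
    pathway: int    # 0-5 from the Pathway rubric
    strength: int   # 0-5 from the Strength (evidence) rubric

def prioritize(problems: list[Problem], shortlist_size: int = 3) -> list[Problem]:
    # Step 2: core scan -- rank by Value + Influence, keep only the top few.
    core = sorted(problems, key=lambda p: p.value + p.influence, reverse=True)
    shortlist = core[:shortlist_size]
    # Step 3: among the finalists, break ties on Pathway, then Strength.
    return sorted(
        shortlist,
        key=lambda p: (p.value + p.influence, p.pathway, p.strength),
        reverse=True,
    )

backlog = [
    Problem("Compliance reports", value=5, influence=5, pathway=4, strength=3),
    Problem("Contact info",       value=1, influence=1, pathway=1, strength=2),
    Problem("Grant writing",      value=4, influence=4, pathway=2, strength=1),
]
ranked = prioritize(backlog)
print([p.name for p in ranked])
# ['Compliance reports', 'Grant writing', 'Contact info']
```

Note that Pathway and Strength only enter the sort key after Value + Influence, mirroring the text: feasibility and evidence decide between otherwise comparable finalists rather than driving the ranking outright.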
When Not to Use This Framework for Prioritizing Strategic Initiatives
This framework is a powerful tool for making complex, high-stakes decisions. However, it is overkill for some situations. Using a simpler method is the right strategy when:
You Have Perfect Data: In the rare case that you have a perfect, quantitative model (like a pure ROI calculation) that all stakeholders already trust, that simpler model may be sufficient.
The Path is Obvious: If there is a true "burning platform" crisis (e.g., a system outage), the priority is clear. The time for strategic prioritization is after the immediate fire is out.
The Decision is Low-Risk: If a decision is easily reversible or the cost of failure is very low, a simple "good enough" choice is better than a lengthy analysis.
FAQ on How to Prioritize Strategic Initiatives
What is a prioritization framework?
A prioritization framework is a structured tool that helps teams make objective, consistent, and defensible decisions about what to work on next. It typically involves scoring potential initiatives against a set of pre-defined criteria to determine which ones will deliver the most value.
What are the most common prioritization methods?
Common methods include the Eisenhower Matrix (Urgent vs. Important), RICE (Reach, Impact, Confidence, Effort), ICE (Impact, Confidence, Ease), and Value vs. Effort. The VIPS Framework presented here is a strategic-level model designed to be more nuanced than these simpler scoring systems.
How do you balance competing priorities at work?
The key is to use an external, objective system that all stakeholders agree upon. The process of collaboratively scoring initiatives using a shared framework like the VIPS model is the best way to build consensus and make everyone feel that the final decision was fair and logical, even if their own project was not selected.