Scaling agile organisations: stop using MoSCoW to prioritise initiatives

Anyone with product development experience knows there is always more work than there are people. So, how do you know what to work on first?

Jean Henson

Sometimes, the answer is easy, especially if the organisation or group is small and the work is single-threaded: everything comes from one source, so there are no competing projects or requests from colleagues and, more importantly, no conflicting asks of the development team. Most organisations and groups are less fortunate and need a way to identify what is most important to work on first.

There are several prioritisation methods, including RICE, Kano, Walking Skeleton, Eisenhower Matrix, Value vs. Complexity/Effort, MoSCoW, WSJF, and more. Each uses different criteria to help product and project managers identify the most critical work. As organisations scale, the prioritisation method is often not re-evaluated, and the organisation soon finds the technique no longer supports its work.

Many organisations at the beginning of their journey choose simple prioritisation methods like Kano, ABCDE, the Eisenhower Matrix, Value vs. Complexity/Effort, Walking Skeleton, or MoSCoW. Due to its simplicity, the MoSCoW method is one of the most widely used.

The MoSCoW prioritisation method (or MoSCoW analysis) is used frequently by teams. It was created by Dai Clegg in 1994, while working at Oracle, to help the team prioritise tasks. MoSCoW is an acronym for four prioritisation categories: Must have, Should have, Could have, and Won't have.

There are rules for each category that guide whether a requirement can be assigned to it. While this method is simple, it is geared toward waterfall-based organisations, small products, and small teams.
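To make the categories concrete, here is a minimal sketch of MoSCoW bucketing in Python; the backlog items and their category assignments are hypothetical, purely for illustration:

```python
from collections import defaultdict

# Hypothetical backlog items, each already assigned a MoSCoW category.
backlog = [
    ("User login", "Must"),
    ("Password reset", "Should"),
    ("Dark mode", "Could"),
    ("Legacy export", "Won't"),
]

def moscow_buckets(items):
    """Group backlog items into the four MoSCoW categories, in priority order."""
    order = ["Must", "Should", "Could", "Won't"]
    buckets = defaultdict(list)
    for name, category in items:
        buckets[category].append(name)
    return {category: buckets[category] for category in order}

print(moscow_buckets(backlog))
```

Note that everything MoSCoW produces is four ordered buckets: there is no score to rank items within a bucket, which is one of the limitations discussed below.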

Why do we say to stop using this method with agile? As an agile organisation grows and becomes more complex, MoSCoW cannot scale along with it.

As you grow and become more agile, your needs in a prioritisation method change as well. Often, this means prioritising much larger and more complex projects, and this is where MoSCoW and methods like it fall short. Some of these shortcomings include the following:

Lack of big-picture thinking and organisational strategy

This methodology does not consider the organisation's priorities and business goals. Instead, it is better suited to identifying the criticality of specific requirements or features.

Lack of a data-driven scoring mechanism

The absence of data-driven decision-making often pushes teams into working on someone's "pet" project rather than the most important thing the organisation needs. MoSCoW may have rules for each category; however, the lines between those categories can blur quickly, making it difficult to know what is truly most important to work on first.

No support for complicated, shared backlogs

As organisations scale their agile teams, more and more teams end up contributing to a single strategic initiative through a shared backlog. Unfortunately, MoSCoW cannot weigh the various factors that would help organisations prioritise time-sensitive work.

Agile prioritisation methods

What do we recommend? For an agile organisation, especially one that is scaling, choosing a method requires thoughtful consideration. It is important to identify and understand the needs of the organisation. Some methods are much better for prioritising work on new products, while others are better for mature products. How important is it to the organisation that decisions are data-driven? Do you need to balance value against technology?

If the organisation already uses a prioritisation method, when did it last deliver success, and does it still support the way of working? Validating the prioritisation method is much like revisiting processes as organisations grow, to ensure they still keep pace with changes in the organisation.

Choosing or validating a prioritisation method begins with understanding:

  • What should prioritisation achieve? 
  • What is the goal of prioritisation for the organisation? For example, should prioritisation put the must-have items at or near the top of the backlog, interspersed with or entirely ahead of the should-haves?
  • What are the use cases? Teams, ARTs, and Portfolios have different needs. 
  • Where is work for the teams or teams of teams coming from? 
  • Is work coming from more than one business owner or unit?
  • How mature is the product? Is it brand new, growing, or is it mature?
  • Are teams responsible for work contributing to the "big picture"?
  • Are teams struggling to complete work because they keep getting diverted by HiPPO (Highest Paid Person's Opinion) requests, even though the data suggests otherwise?
  • Is the most important work derailing because decisions are made by emotions or feelings rather than being data-driven?
  • What is the tolerance for complexity of the teams, ARTs, or Portfolio, and of the organisation overall? Some prioritisation methods are more complex than others; some organisations don't mind the complexity, while others struggle with it.
  • What data is available that can help with the prioritisation of work? 

Consider the answers to the questions above when researching various prioritisation methods, and narrow them down to the ones that align with your organisation. Some methods, such as MoSCoW, Kano, and the Eisenhower Matrix, are geared toward task management, while others (see the Data/Value/Tech-driven methods below) do a better job of supporting product portfolio management. This is why it is essential to know what you want the result of prioritisation to be. The best way to determine which method works best for the organisation is to experiment: pick one or two methods and try them out. The organisation may need several prioritisation methods to achieve the desired result.

Which model will work best?

Selecting a prioritisation method is highly personal from an organisational perspective. What works well for one organisation may not work well for others. If there is a low threshold for complexity, this needs consideration when deciding on a methodology. There is much information on the internet about all the methods below. Adaptavist can help your organisation choose the process that will work best for you!  

Data/Value/Tech-Driven Methods

A handful of methods are data-, value-, and technology-driven, meaning they provide higher consistency and predictability when used. When scaling, it's not an either/or: you need a method that combines these kinds of information (data/value/tech) to make decisions. No method is perfect, and even data-driven ones can be manipulated depending on the data used. As part of the prioritisation process, I recommend including a review to validate the data, especially if scoring is done individually.

WSJF (Weighted Shortest Job First) model

This method is popular in medium to large organisations that want a data-driven approach that supports prioritising work for both emerging and mature products while also considering the value provided to the customer and the effort of the team. It uses the cost of delay to understand the economic benefit of the work. The method is not simple: it uses a modified Fibonacci sequence as the scale to rate each item, starting with 1 (the smallest).

The method scores each item on the following:

  • User-Business Value – the value of the work to the customer or the business.
  • Time Criticality – the urgency of the work. Are there market rhythms to consider, or perhaps a regulatory or compliance deadline?
  • Risk Reduction – does the work reduce risk to the business?
  • Job Size – the size of the work relative to the other items under consideration.
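In SAFe's formulation, the first three scores are summed into a cost of delay, which is divided by job size; the item with the highest score is worked first. A minimal sketch in Python, with hypothetical items and illustrative modified-Fibonacci ratings:

```python
# Hypothetical backlog items with illustrative modified-Fibonacci
# ratings (1, 2, 3, 5, 8, 13, 20); none of these come from a real backlog.
items = {
    "Compliance report": {"value": 8, "time_criticality": 13,
                          "risk_reduction": 5, "job_size": 5},
    "UI refresh":        {"value": 5, "time_criticality": 2,
                          "risk_reduction": 1, "job_size": 8},
}

def wsjf(scores):
    """WSJF = cost of delay / job size, where cost of delay is the sum of
    user-business value, time criticality, and risk reduction."""
    cost_of_delay = (scores["value"] + scores["time_criticality"]
                     + scores["risk_reduction"])
    return cost_of_delay / scores["job_size"]

# The highest WSJF score is worked first.
ranked = sorted(items, key=lambda name: wsjf(items[name]), reverse=True)
print(ranked)  # ['Compliance report', 'UI refresh']
```

Here the compliance work wins because its cost of delay (8 + 13 + 5 = 26) dwarfs that of the UI refresh (8), even before the smaller job size is taken into account.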

The Weighted Scoring Model

This method is flexible because organisations can decide what matters most to them when prioritising work. As the name implies, the Weighted Scoring Model weighs each chosen criterion against the others. The key to this method is that once the organisation agrees on the criteria and their associated weights, they are only updated when necessary (for example, the organisation must now comply with regulations that did not previously apply and needs to add that as a criterion). This provides consistency over time in how the organisation prioritises.
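Once the criteria and weights are agreed, the mechanics are straightforward: multiply each score by its weight and sum. The criteria, weights, and scores below are hypothetical, for illustration only:

```python
# Agreed criteria and weights (hypothetical; weights sum to 1.0).
weights = {"customer_value": 0.4, "revenue": 0.3, "strategic_fit": 0.2, "risk": 0.1}

# Each initiative is scored 1-10 per criterion (illustrative numbers).
initiatives = {
    "Initiative A": {"customer_value": 8, "revenue": 6, "strategic_fit": 9, "risk": 4},
    "Initiative B": {"customer_value": 5, "revenue": 9, "strategic_fit": 4, "risk": 7},
}

def weighted_score(scores, weights):
    """Multiply each criterion score by its agreed weight and sum the results."""
    return sum(scores[criterion] * weight for criterion, weight in weights.items())

# Rank initiatives by weighted score, highest first.
for name in sorted(initiatives,
                   key=lambda n: weighted_score(initiatives[n], weights),
                   reverse=True):
    print(name, round(weighted_score(initiatives[name], weights), 1))
```

Because customer value carries the largest weight here, Initiative A (7.2) outranks Initiative B (6.2) despite B's higher revenue score.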

The RICE (Reach, Impact, Confidence, Effort) model

This prioritisation method focuses on potential impact. RICE uses a scoring model built on the criteria in its name. ICE is a similar method; the only difference is that it excludes "Reach", thus eliminating the reach variable. This model requires the organisation to have access to data such as how many users/customers will use the MVP or feature, what its impact will be, the confidence in those estimates, and the effort (in person-months) to deliver.
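The RICE score itself is a single formula: reach times impact times confidence, divided by effort. A sketch with hypothetical estimates:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach x Impact x Confidence) / Effort.

    reach: users/customers affected per period; impact: a scale such as
    0.25 (minimal) to 3 (massive); confidence: 0-1; effort: person-months.
    """
    return (reach * impact * confidence) / effort

# Hypothetical feature: 2,000 users per quarter, high impact (2),
# 80% confidence in the estimates, 4 person-months of effort.
score = rice_score(reach=2000, impact=2, confidence=0.8, effort=4)
print(score)  # 800.0
```

Features are then ranked by score, so a high-reach, low-effort feature naturally rises above a speculative, expensive one.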

How can we help?

As your organisation grows and scales, it is necessary to review your processes regularly to ensure you haven't outgrown them. Reviewing your chosen prioritisation method is also essential when beginning an agile transformation. When any organisation grows from Essential SAFe to Portfolio or Full SAFe, not only will new processes need to be added, but existing ones will need to be reviewed and adjusted.

Our experienced agile experts at Adaptavist can help you review and adapt your processes, and support you by providing a roadmap to reach your desired agile state.

Contact us today to discuss your specific requirements.
