
Creation and absorption are performed through Plan-Do-Check-Act cadences (see the Lean Development Life Cycle chapter), adapted to the evolutionary cycle used and the specific context of the project. 

In this chapter we present concepts and practices applicable to the creation and absorption process in all lean projects.

Visual Management is the practice of using visual tools to manage work. It improves the efficiency and effectiveness of project management through better communication, collaboration and decision-making.

Typical visual tools are card wall, visual schedule, visual project timeline, task/kanban/Scrum board, workflow chart, project dashboard, A3 report, cumulative flow diagram and other workflow analytics visuals, burn down or burn up chart, velocity chart, value stream map, resource calendar, etc. These tools provide readily available and visible information about the work, including workflow, dependencies, priorities, project status, team performance, trends, quality information, issues and impediments.

The proper use of visual tools has several benefits:

  • Fosters communication, sharing and collaboration.
  • Reduces the information overload. Visual aids synthesize, compress and focus information. They simplify complex concepts and facilitate the interpretation and understanding of data. The human brain processes visual information much faster than text, and adding graphic elements to a text helps it to be perceived better and more quickly.
  • Helps build transparency and trust within the integrated project team.
  • Improves decision-making. An A3 report, for instance, can distill months of work into critical, condensed information, with visuals that can be interpreted at a glance. Such a report can be presented and consumed, and a decision made based on its contents – all in just a few minutes.
  • Eliminates the need for reporting. Visual project status tools, updated in real time, abolish the need for periodic status reporting and significantly speed up decision-making.
  • Improves planning, coordination and control by visualizing status, work-in-progress, dependencies, priorities, and workflow blockers and bottlenecks.
  • Creates a stimulating and fun work environment and positively influences team behavior and attitudes.
  • Increases productivity, reduces errors and defects, and helps keep the focus on workflow.

Decentralized project management is the practice of delegating non-strategic management functions to lower levels in the project organization, down to the level of individual team members. It doesn’t replace centralized management, but complements it.

There are several reasons for decentralizing project management:

  1. It fits well with the lean culture and fosters harmonization of interests, collaboration, personal initiative and shared responsibility.
  2. Enables effective communication, fast feedback and quick response to problems and opportunities where they appear and by the people closest to them. This improves decision-making, throughput, quality and speed.  
  3. Creates multiple mini project managers, eliminates micro-management, and allows the leaders to focus on the strategic aspects of the project.
  4. Utilizes the initiative and creativity of team members.  
  5. Makes everyone empowered and responsible for project success.

We are grateful to Don Reinertsen for formulating and explaining the principles of decentralization. He addresses the issues of balancing centralization and decentralization, maintaining alignment, and the factors and human aspects of decentralization. We summarize these principles below, but encourage readers to learn more from his book. [1]

  • Problems and opportunities that are perishable are best dealt with quickly, and this requires decentralized control. Problems that have significant economies of scale, or are big and infrequent, are best serviced with centralized resources.
  • When there is sufficient information, use triage to choose a centralized or decentralized approach to a problem. When information is lacking, attempt to resolve the problem with decentralized resources, but if it isn’t resolved within a reasonable time, escalate it for centralized resolution.
  • To improve efficiency of centralized resources, use them for normal, daily work as decentralized resources and quickly mobilize them, following a pre-prepared plan, to engage with a big problem.
  • Base the choice to use inefficient decentralized resources on the economic outcome. The faster response time may outweigh the inefficient resource use.
  • Achieving alignment without centralized control is a challenge. Overall alignment creates more value than local optimizations.
  • Use project goal (“why”) and minimum constraints (“what” and “how”) to provide direction and maintain coordination.
  • Define clear roles and boundaries for responsibilities, to improve communication and avoid gaps and overlap.
  • Focus the entire team on a main effort that will drive project success and subordinate other activities. When conditions change, be able to shift the main effort easily and swiftly.
  • The focus can be changed faster when there is a small team of skilled people, simpler and optimized feature set, resource reserve, and flexible product architecture. To respond rapidly to uncertainty, team members should coordinate their local initiatives through continuous peer-to-peer communication.
  • To enable absorption of variations locally, a portion of resource reserves should be decentralized at various levels within the project.
  • To rapidly reduce uncertainty, focus early efforts on high technical and market risks.
  • Provide the people who are allowed to make decentralized decisions with the information they need.
  • To speed up decision-making, ensure that most decisions are decentralized and involve fewer people and fewer management layers.
  • Whatever drives economics should be used to measure performance and to provide incentives. If response times drive economics, then measurements and incentives for the team should be aligned to response time, not to efficiency.
  • The demand for internal resources may create conflicts between projects that require elevating decisions to a higher level (centralized management). As with external resources, price internal resources and use an internal market to balance demand and supply across all projects and to enable decentralized decision-making.
  • Initiative and rapid response by team members are crucial; a timely local decision is far better than a superior decision implemented late. Initiative should be encouraged, and team members need to be given the chance to practice it.
  • Real-time, face-to-face, voice communication enables fast feedback.
  • Decentralized control requires trust in the team. Trust arises from the ability to anticipate the behavior of other team members, which is built through shared experience. Thus, trust can be built by maintaining continuity in the teams and by working in small batches which increases the number of learning cycles.

The design of project products (deliverables) is crucial for project success, because once a deliverable is designed, the biggest part of its benefits and costs are fixed.

By the time a product design is completed, 80% of the product’s life cycle cost has already been determined, while 60% of these costs are committed by the concept development phase [2]. To a large extent, the design also determines the benefits of the product.

The traditional sequential design process requires that each subsequent phase be carried out only after the completion of the previous one. The work is done in large batches, and all the information is transferred in a single batch to the next stage. This accumulates variability and creates queues that interrupt the workflow.

Here is how Don Reinertsen describes the implications of the phase gate process: [3]
“The work product being transferred from one phase to another is 100 percent of the work product of the previous phase. This is maximum theoretical batch size and will result in maximum theoretical cycle time. It also leads to maximum possible delays in interphase feedback.”

In sequential design, team members work in isolation and there is an information, communication, and often a physical wall between them. The work product is handed over by throwing it over the wall, and when something needs to be fixed, it’s thrown back over it.

There are several drawbacks to this approach:

  • Design errors are detected late and are costly to correct. The options selected in the previous phases limit the choices in the subsequent phases. 
  • The sequence of phases and the frequent need for rework extends the cycle time.
  • Stakeholder feedback is infrequent and untimely.
  • Early coordination of various aspects of design is limited and there is a lack of focus on important aspects of the product's life cycle, such as customer value, manufacturability, serviceability, operability, and social and environmental sustainability.

The result is a wasteful process and limited control over the product’s life cycle benefits and costs.

Centralized design seeks to improve the process by having team members work simultaneously on different parts of the design. Their work is coordinated by a central authority (e.g., design integration manager), who also serves as a communication hub.

Team collaboration and stakeholder involvement are limited. The work is performed internally within the team, and the work product is handed over to the downstream process in a waterfall fashion.

Centralized design improves horizontal coordination, shortens cycle time and reduces the need for rework, but leaves the fundamental problems of sequential design unresolved.

Concurrent design takes its name from the simultaneous (concurrent) performance of tasks, but it’s much more than that. It involves several important practices:

  • Integrated design teams. The work is performed by integrated multifunctional teams that have joint responsibility for the outcome. All specialties, the customer, suppliers and other stakeholders work together from the earliest design stages. The wall is “removed” and handovers are proactively avoided.
  • Life cycle considerations. All product life cycle aspects are considered from the early design steps to maximize life cycle benefits and minimize life cycle costs.
  • Parallelization of phases and tasks. Design phases overlap and whenever possible, design tasks are performed simultaneously, which reduces cycle time and the time to market. Supported by collaborative teamwork (“removing the wall”), the parallel approach improves coordination, accelerates feedback, and reduces errors. When information from other design phases is lacking, it’s temporarily replaced by assumptions.
  • Evolutionary process. Concurrent design is of an evolutionary nature. The process is iterative, with small work batches, fast feedback and adjustment. The design evolves in successive iterations.

By overcoming the disadvantages of sequential design, concurrent design has the potential to shorten the time to market, improve quality, increase customer value, and reduce waste.

Each design alternative represents a specific point within a multidimensional design space, which is the totality of all design alternatives, defined and constrained by a set of design parameters.

The conventional point-based design (whether sequential or concurrent) develops one preferred alternative, which is improved or modified in successive iterations until a satisfactory solution is reached. The changes move the location of the alternative to a new preferred point in the design space, which provides justification for calling this approach "point to point". [4]

Figure 10.1: Point-Based Design


Typically, the preferred alternative is selected upon completion of the concept development phase. As 60% of the life cycle costs have been determined by the end of this stage, the scope for further significant improvements is limited.

When an alternative is selected at the end of the conceptual phase, this is accompanied by narrowing the set of design parameters. The design space is reduced and subsequent refinements are made within narrow limits. Fixing the design parameters early results in very high costs if changes have to be made later in the design process. If the selected alternative proves unsuitable, it’s replaced with another one, creating a new constricted design space.

Figure 10.2: Point-Based Design: Macro View


In theory, this approach can always lead to an optimal solution given enough iterations, but at a high cost for the unsuccessful iterations. In practice, the design team often settles for the sub-optimal solution achieved by the time budget and time run out.

The point-based design approach is suitable for low-risk, rapid incremental improvements of existing products, or when the design space is necessarily very restricted, but not for radical breakthroughs and typically not for new product development.

Once a base design is selected, it’s refined through successive iterations until it fully meets objectives.

Set-based design (set-based engineering) is an approach in which multiple design alternatives are explored rather than a single one.

The process begins with the setting of broad design parameters, based on which multiple alternatives are defined, explored and progressively developed in increasing detail. This is a process of creating knowledge and reducing design-related uncertainty. The gained knowledge is used to eliminate infeasible and inferior alternatives, while the design parameters are narrowed. The process continues until there is only one alternative left, which is refined to complete the design. The probability of getting an optimal design outcome is greater than with a single option (point-based) design.

This approach avoids early commitment to a single option and delays the decision to choose an alternative until the last responsible moment, when the team has sufficient information and knowledge about the design. To reduce costs and increase process reliability, extensive prototyping and testing are used, as well as reusable knowledge (e.g., documented lessons learned from previous design experience).

Complex products that integrate subsystems and components require greater design flexibility. Instead of fixing the constraints for individual subsystems and components, each sub-team designs a set of options within broad parameters. Each specialty explores multiple alternatives and analyzes the tradeoffs from their own perspective. All design sets are narrowed down in parallel. The team looks for intersections of the sets and gradually converges them into a single solution, which is then optimized.

The parallel set-narrowing process is illustrated in the following figure, based on a sketch by Toyota’s manager of body engineering: [5]

Figure 10.3: Parallel Set-Narrowing Process


The approach that blends concurrent and set-based design into Set-Based Concurrent Engineering (SBCE) is known as the Second Toyota Paradox. Toyota's design process seems inefficient: they delay design decisions, give suppliers partial information (communicate “ambiguously”), and explore a large number of prototypes. Yet they not only design better cars, but do it faster and cheaper, because well-informed decisions offset the additional cost. [6]

The SBCE framework developed by Sobek, Ward and Liker is the most widely used in practice. It comprises three general principles and nine implementation principles, based on Toyota's best practices. [7]

Figure 10.4: Principles of Set-Based Concurrent Engineering


The choice between point-based and set-based design is a matter of economic optimization. Exploring each additional design option reduces the risk (and cost) of failure and adds development cost – so there is a need to consider the trade-off between risk reduction and cost. The optimal number of parallel design alternatives (N) occurs when the incremental benefit of the Nth alternative equals its incremental cost, and this number can equal one. [8]

To put it more precisely: if design option N+1 is the first to add negative incremental net value and all subsequent options also contribute negatively, then the optimal number of design options is N. This is the number that maximizes the net value of exploring alternatives.
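
To make this trade-off concrete, here is a minimal sketch in Python of finding the number of alternatives that maximizes the cumulative net value. The diminishing benefits and flat costs are purely hypothetical figures, not taken from the text:

# Choosing the optimal number of parallel design alternatives N.
# Benefit and cost figures below are hypothetical.

def optimal_alternatives(incremental_benefits, incremental_costs):
    """Return the N that maximizes the cumulative net value of exploration."""
    best_n, best_net, net = 0, float("-inf"), 0.0
    for n, (b, c) in enumerate(zip(incremental_benefits, incremental_costs), start=1):
        net += b - c                    # net value added by the n-th alternative
        if net > best_net:
            best_net, best_n = net, n
    return best_n, best_net

benefits = [100, 60, 35, 20, 10]        # risk-reduction value per extra option
costs = [30, 30, 30, 30, 30]            # development cost per extra option
print(optimal_alternatives(benefits, costs))   # -> (3, 105.0)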

Built-in quality is a practice of getting the right quality “the first time”.

Inspection is a reactive approach. It doesn’t change quality but just registers it after the fact. Thus, the proactive approach to quality management requires building in quality from the start, as it cannot be added on if it’s not already there.

Figure 10.5: Proactive vs. Reactive Quality Control


Here, we use the term “built-in quality” to describe the overall approach of lean project management used to achieve quality.

Quality is the totality of features and characteristics of project deliverables that generate value for the customer. The objectives of quality management, which is a subset of value generation management, are:

  1. To satisfy the customer
  2. To maximize the contribution of quality-induced life cycle benefits and costs to the project’s net value

These two goals should be in harmony, but this isn’t always the case. For instance, while a customer can be satisfied with the functionality of a software product regardless of its technical quality (which is invisible to them), poor technical quality can reduce the customer’s life cycle benefit and increase their life cycle cost.

That’s why, when assessing quality, we must consider both external characteristics (usefulness) and internal characteristics such as manufacturability, maintainability, complexity/simplicity, flexibility, reliability, changeability, extendibility, and reusability.

Thus, we can define good quality as the set of external and internal features and characteristics that satisfies the customer and maximizes the contribution to the project’s net value. Poor quality is a set of features and characteristics that either fails to satisfy the customer or doesn’t maximize that contribution, i.e., one that generates absolute or relative waste.

The lean project management approach to quality aims to close the five gaps in quality management:

  1. Between customer satisfaction and economic efficiency
  2. Between customer’s actual and perceived needs
  3. Between customer’s actual needs and the needs perceived by the project team
  4. Between the customer needs as perceived by the project team and the actual design
  5. Between the design and the actual deliverables

The foundation of built-in quality is the lean culture and its two pillars are economic efficiency and proactive quality management, while flow management enables these pillars.

Lean culture is a prerequisite for effective achievement of the two objectives of quality management, through its people-centered mindset and the focus on value creation, waste elimination, systems thinking, and continuous improvement.

The economic efficiency principle requires that all decisions regarding quality account for their long-term benefits and costs, with the goal of maximizing the contribution of quality to the project’s net value.

Flow management reduces waste and enables proactive quality management by the use of pull, small batches of work, work-in-progress limits, fast feedback, workflow visualization, and queue control.

Figure 10.6: The Lean Project Management Approach to Quality


Proactive quality management involves two approaches: Quality by Design and Lean Quality Control.

The Quality by Design approach builds quality in by:

  • Defining customer value. The customer and the project team must work together to achieve a complete understanding of the customer's actual needs.
  • Collaboratively designing project deliverables that will generate value for the customer. The team performs lean concurrent design to explore options and analyze trade-offs, while focusing on life cycle aspects, including suitability for creation (an equivalent of manufacturability). They use set-based concurrent design and extensive experimentation, simulation, prototyping and testing, as appropriate. The stakeholders negotiate and agree on the requirements that best satisfy the quality objectives. The final design yields the quality requirements to be achieved.
  • Designing a deliverable creation process that is most likely to achieve the quality requirements, including quality generating components: people capabilities, work methods and workflow steps, tools and equipment, and materials. Thus, the team defines a work standard that, if respected, is likely to deliver first time quality.

Once the customer value, the design (and the requirements), and the work standard are established, the team performs the workflow with lean quality control. Both Quality by Design and Lean Quality Control are powered by checking, learning, acting and adapting, to achieve continuous improvement.

Lean Quality Control involves three practices:

  • Front Loaded Testing
  • Test-First Development
  • Managing Quality at the Source

Front Loaded Testing is a proactive approach to quality control, which involves multiple-level testing with many small and fast tests as early as possible in the creation process. In a hierarchy of, for example, unit, component, sub-system and system tests, the focus is placed on testing the lower-level items at the expense of the costly and slow higher-level tests.

This approach prevents defects from entering the system and avoids expensive and time-consuming rework.

Test-First Development is an approach in which tests are created before the product or any of its components. The team then aims to make the tests pass by creating components that fulfill the test requirements. Thus, creation is proactively informed and guided by the tests. The test-first approach encourages creating and testing in small batches, and a simple design that is barely sufficient to pass the test.

The test-first practice must promote efficiency. Tests should be minimized to check only the desired characteristics, and the created components should be minimized in scope to just pass the test.

Behavior-Driven Development is an evolution of the original test-first practice. It creates a shared understanding of requirements between stakeholders around scenarios of component behavior from a user’s perspective. The desired behavior must be met and serves as a test for the created component. 

Special languages can describe the desired behavior in a way that is both understandable to stakeholders and provides a structure that makes the description an executable specification.

For instance, the Gherkin language uses structuring keywords at the start of each line of the specification, followed by descriptive non-technical text. The descriptions of steps start with Given, When, Then, And, or But. [9]

A behavior-driven specification of steps looks like this:

Given I am making a smoothie
And I have added almond milk
And I have added strawberries
And I have added yogurt
When I check my recipe
Then I need to add nothing
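
To illustrate how such a specification becomes executable, here is a sketch of step definitions using Python's behave library – one possible tool, not prescribed by the text; the function names and the required-ingredient check are illustrative assumptions:

# Illustrative step definitions for the smoothie specification above,
# assuming the Python "behave" library. In behave, the specification
# lives in a .feature file under Feature:/Scenario: headers, and
# "And" steps inherit the keyword of the preceding step.
from behave import given, when, then

@given("I am making a smoothie")
def start_recipe(context):
    context.ingredients = set()

@given("I have added {ingredient}")
def add_ingredient(context, ingredient):
    context.ingredients.add(ingredient)

@when("I check my recipe")
def check_recipe(context):
    required = {"almond milk", "strawberries", "yogurt"}  # hypothetical recipe
    context.missing = required - context.ingredients

@then("I need to add nothing")
def nothing_missing(context):
    assert not context.missing, f"Still missing: {context.missing}"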

Quality at the source requires checking the quality at every workflow step, and detecting and solving nonconformities at the source. It works as follows:

  1. Define quality requirements
  2. Define work standard to meet the requirements
  3. Detect and visualize nonconformities as early as possible
  4. Resolve nonconformities as soon as possible
  5. Identify and eliminate the root cause to prevent a recurrence of the problem

Quality at the source is achieved by:

  • Standardized work – documented people capabilities, work methods, workflow steps, equipment, tools and materials that will reduce process variability and will produce a controlled output.
  • Workflow visibility and signaling about problems so that the team can quickly engage with them.
  • Performer and process customer checks. Quality should be an individual responsibility of every team member and shared responsibility of the team. The performers should check their own work before a work item moves to the downstream step. No issues should be passed on. The process customers should check the inputs from the upstream processes. These checks will reduce the need for testing, inspection and rework.
  • Stop and fix it mentality. As soon as a quality problem that cannot be fixed quickly is identified, the team stops working and directs all efforts to solving the problem. As the project incurs costs while the work is stopped, the team should be motivated to eliminate the root cause as quickly as possible. The focus is on the system of early problem detection and reporting, and quick problem solving, rather than on the individual mistakes and accountability of team members. Stop and fix it is a culture where problems are not hidden and don’t wait on the side to be solved. They get fixed quickly and their impact on the workflow is minimized.

At the most fundamental level, a team is a group of people who depend on each other to do a common job, but lean projects need specific lean teamwork practices.

Lean projects directly serve value streams and, although they are temporary, they often recur. Thus, value streams create around themselves a dynamic network of temporary value streams (projects). The network of projects becomes part of the value stream, and lean project teams can be viewed as subsets of the value stream team.

Effective value stream teams break down organizational silos. They are cross-functional, dedicated and long-lived and are constantly developing their ability to create value. Therefore, as a part of the value stream team, effective project teams should also be cross-functional, dedicated to a single project, long-lived and constantly improving.

When the customer is external, the project would benefit from stable performer-and-customer value stream teams, which need to be integrated into a single team. This is a model where work is brought to people rather than people being brought to work. 

Lean teams are cross-functional. They have all the skills, competencies and diversity needed to perform the work efficiently and effectively, and create value.

It’s preferable for team members to have T-shaped skills – deep expertise in at least one domain and broad expertise in other domains. While they excel in their core tasks, they should be able to collaborate with experts from other domains and perform a broader range of tasks effectively.

“Working together always works. It always works. Everybody has to be on the team. They have to be interdependent with one another.”
- Alan Mulally, CEO of Ford

Traditionally, the project team involves the people who carry out the activities of transforming inputs into outputs and the people who manage the project. All others who may be affected or may affect the project, including the customer and the users, are considered stakeholders outside the project team. This model disrupts communication, collaboration and interaction between stakeholders.

For instance, when users are seen as external to the product development team, developers may not have direct but only indirect contact with them through the product owner. This is a recipe for huge ineffectiveness.

Lean project teams are integrated, meaning that they integrate all stakeholders, including customers, users, partners and suppliers. The communication between the people who create project assets and the customer and users is direct and intensive. Team building, development, and performance improvement activities are performed by the integrated team.

Stakeholders should be appropriately involved in regular team activities and decision-making. Those stakeholders who for any reason don’t take part in specific team activities should be represented on the team by their avatars (stakeholder avatars).

The integrated team is especially important for ensuring that project deliverables are absorbed by the customer value stream. When the team is integrated, the absorption is done not only within the project but also by the project team, instead of being done outside the project and by an external team.

Decentralized project management needs self-organizing teams that have the autonomy to decide for themselves what to do and how, to realize the reason (why) for the project.

"Individuals and Interactions over Processes and Tools"
- Agile Manifesto

People work best when they can choose what processes and tools to use, rather than using the ones that management has chosen or, worse, imposed, believing them to be the best. It’s more important to have the right people who interact effectively, united by shared objectives; the way they work should result from their interaction, not be a precondition for it.

In a self-organizing team, the tasks are not assigned by the management, but by the team members pulling the work and deciding internally who does what, when and how. They don’t work in isolation, but have an overview of the work of the entire team and frequently communicate and coordinate with others.

A well-functioning self-organizing team creates team consciousness and intelligence that is greater than the sum of the individuals. Team intelligence can best adapt, experiment, solve problems, learn and evolve.

Allowing and encouraging the team to self-organize is a powerful way to get them involved and take ownership and responsibility for their work.

“Without involvement, there is no commitment. Mark it down, asterisk it, circle it, underline it. No involvement, no commitment.”
- Stephen R. Covey, “The 7 Habits of Highly Effective People”

With decentralization, the role of management changes. While focusing on strategic decisions, collaborative and servant leaders do not command and control but create a supportive environment and empower the team to take responsibility for their day-to-day work.

Leaders provide the team with training, coaching and mentoring to gain self-organizing skills and with resources to perform the work. They create an environment of psychological safety, free of fear and uncertainty. Mistakes are tolerated and seen as a means of learning and improvement.

Team members are not judged on their individual performance, as that leads to information hiding and creates uncertainty and competition. Instead, leaders encourage teamwork and evaluate and recognize overall team performance.

Instead of rigid control that impedes creativity and spontaneity, the key is for management to employ subtle control through: [10]
  • Carefully selecting team members to achieve a balance and personality fit
  • Creating open work environment
  • Encouraging communication with the customers to better understand their needs
  • Evaluating and rewarding based on team performance
  • Anticipating and tolerating mistakes, but expecting their early detection and quick resolution
  • Encouraging self-organizing behavior by project partners

"A lack of transparency results in distrust and a deep sense of insecurity."
- Dalai Lama

While visibility makes information easily accessible and easy to use, transparency makes the necessary information available.

Transparency means that stakeholders communicate openly with each other and are explicit about their needs, expectations, concerns, problems, mistakes, progress, etc. They share the relevant information at their disposal.

Transparency provides information about the actions and decisions of stakeholders, and the motives behind them, and thus creates predictability. This helps build trust.

When information is shared openly and regularly, the implicit is turned into explicit and the assumptions into facts. Errors, problems and opportunities are detected earlier and feedback is faster and more effective. Expectations are clearer and more likely to be met.

Creating an environment of psychological safety without fear and judgment is the most important condition for achieving transparency. Everyone should be able to express their opinions and concerns freely and be open about their desires, problems, mistakes and failures. Team members should be trained, mentored, and encouraged to be transparent.

Transparency tools like information radiators and daily stand-up meetings make it work in practice.

Flow is the movement of work items through the creation and absorption process in a steady, continuous stream. (These are the properties of the ideal workflow.)

Flow management seeks to minimize the following negative economic consequences. They should be considered in the context of the overall economic framework, which addresses the cumulative impact of all project variables: [11]
  • Formation of process queues of waiting work, which lead to accumulations of too much work-in-progress (WIP). This increases variability, risk, and cycle time and lowers efficiency, quality, and team motivation.
  • The ineffectiveness of trying to control timelines directly instead of controlling queue size, which provides control over timelines.
  • The economic consequences of high levels of capacity utilization (efficiency) which affect the cost of time.
  • The negative economic impact of variability.
  • The effects of large batch sizes of work: increased uncertainty and cycle time and reduced speed of feedback.

The goal of workflow management is to contribute to the maximization of the project’s net life-cycle benefit.

The three major ways to influence the flow are:

  • Controlling WIP
  • Managing queues
  • Controlling batch sizes

The amount of WIP affects the team’s throughput and the cycle times.

The Lean Portfolio Management chapter showed the application of Little’s law to the project portfolio system. It can also be applied to a project and to the individual process stages within a project.

Little’s law reveals an important relationship between the WIP, the throughput and the cycle time:

Average WIP = Throughput * Average Cycle Time

where

Cycle Time is the time it takes a work item to pass through a work process. This is the time a work item is work-in-process/work-in-progress.

Work-in-Progress (WIP) is the inventory of work (measured in work items or amount of work) in a work process.

Throughput is the average output of a process per unit time (throughput is always an average value, so we don't need to use "average throughput").

Note that Little’s Law "cannot always be treated as if any pair of variables selected from throughput, WIP and cycle time can be independently altered to set the third variable to a desired target" [12].

The following should be kept in mind about Little’s Law:

  1. The law can be used for optimization but isn't itself an optimization model.
  2. When two of the variables are known, the third can be found, but the law cannot be used to predict how a change in one of the variables will affect another variable.
  3. A change in throughput (or in average cycle time) will cause the ratio of average WIP to average cycle time (or to throughput, respectively) to change proportionally.
  4. A change in average WIP will result in a change in average cycle time and/or throughput. When both are affected, the change can be in opposite directions.
  5. The relationships between the three variables are not linear. The one between throughput and average WIP is curvilinear (see Figure 10.7 below).

The law can be applied directly to projects that comprise independent work items which run end-to-end through the same workflow, such as user stories that represent incremental values for the customer.

The overall cycle time can be assessed for a project with a certain level of total WIP (including that part which is in queue) using historical information about the team’s throughput in similar projects with the same workflow process. The total WIP includes all work items in the project.

The relationship is:

Total Cycle Time = Total WIP/Throughput

The forecast completion time for a project with 192 items and a throughput of 48 items per week is 4 weeks.

Suppose historical data about the throughput are not yet available, but the initial data show that the average cycle time for the team is 0.5 weeks per item. There are 16 people on the team and the optimal level of WIP has been assessed at 1.5 items per person, including the items in queue. Thus, the average WIP is 24 items (16*1.5).

Now the team’s throughput can be forecast as:

Throughput = Average WIP/Average Cycle Time = 24/0.5 = 48 items per week

But let's say there is a need to complete the project in 3 weeks. The total number of work items remains the same, and it is necessary to increase the team’s throughput to 64 items per week. How many people are needed on the team?

Using the formula above, 32 items of WIP (64*0.5) are needed on average, and according to the policy of 1.5 items of WIP per person, about 21 people are needed on the team (32/1.5 ≈ 21.3). The assumption is that the enlarged team will have the same average cycle time.

Note that the examples above will also be valid if the work is measured in story points or ideal hours.
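
The calculations in these examples take only a few lines of code. Here is a minimal sketch in Python, using the numbers from the text:

# Little's law: Average WIP = Throughput * Average Cycle Time
total_work = 192              # work items in the project
avg_cycle_time = 0.5          # weeks per item
team_size = 16
wip_per_person = 1.5          # WIP policy, items per person (including queued items)

avg_wip = team_size * wip_per_person                  # 24 items
throughput = avg_wip / avg_cycle_time                 # 48 items per week
print(total_work / throughput)                        # total cycle time: 4.0 weeks

# Team size needed to finish in 3 weeks:
required_throughput = total_work / 3                  # 64 items per week
required_wip = required_throughput * avg_cycle_time   # 32 items
print(required_wip / wip_per_person)                  # 21.3 -> about 21 people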

These are simplified examples. To improve the quality of estimates, statistical forecasting can be used.

The table below shows a project with 20 work items that took 26 working days to complete. All work items go through the same workflow (e.g., Preparation, Development and Acceptance) and they can be anything – from software features and user stories to prefabricated houses, handmade jewelry and 3D printed models. The throughput can be easily calculated by dividing the number of work items by the number of days: 20/26 = 0.77 items per day.

But perhaps it is necessary to check whether Little's law applies to this project. To do this, the average cycle time (2.80 days) and the average WIP (2.15 items) must be calculated. Dividing 2.15 by 2.80 results in a throughput of 0.77 – just as calculated above.

This proof provides peace of mind. However, is it possible to complete any other similar project with 20 work items in 26 days? Not really. In fact, the statistical probability of completing another project in 26 days or fewer is about 50%.

The timing of a future project cannot be calculated, but we can get a probability distribution that will improve the quality of our forecast.

For this purpose, historical data from a similar project with the same context are needed: a stable team and work process, and consistent work items policy (type, definition, approximate size and complexity). Let's assume that this is so in the current case.

In this example, the work items are of similar but not identical size and complexity. The cycle time varies from 1 to 5 days, and the WIP – from 1 to 4 items.

Estimation Example

Of interest is the data pattern. It reflects variations in cycle time, WIP, and team productivity. This pattern also registers the consequences of the obstacles and problems encountered during the project. Errors, defects, reiterations and rework may have shaped the data, too.

Monte Carlo simulation is a tool that can decipher the pattern of historical data and calculate the probabilities of occurrence of the possible outcomes for future projects. It uses a computational algorithm fed by data from a repeated random sampling (random sampling – imagine playing roulette that generates random numbers at a casino in Monte Carlo).

The data in the example are not enough to win the roulette game, but they suffice to forecast the delivery time.

So, let’s go back to the project game and spin the roulette wheel 1000 times (repeat the random data sampling 1000 times), and voila! Here's what we get:

Monte Carlo Probability

The probability density function (PDF) defines the probability for a discrete variable. The probability of completing a project with 20 work items in 13 or 31 days is 1.5% and 27%, respectively.

The cumulative distribution function (CDF) shows the probability that the project will be completed in a certain number of days or less. For instance, the probability of completing a project with 20 work items in up to 27 days is 55% – this is the sum of the discrete probabilities for 27 days and each individual case with a smaller number of days. The corresponding probability for 32 days is 85%.
Monte Carlo Probability Chart

If we feel comfortable enough with an 85% certainty, we can assume that we will complete the next 20-item project in no more than 32 days.

With the Monte Carlo simulation, we can forecast the time to complete a project with any number of work items. There is an 85% probability that we will complete a project with 33 or 55 items in no more than 54 or 91 days, respectively.

As a shortcut to quickly forecast time with 85% certainty, in this specific case a throughput of 0.6 (check 20/32, 33/54 and 55/91) can be used. In fact, what the Monte Carlo algorithm extracts from the historical data is the probability distribution of the throughput.
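
A Monte Carlo forecast of this kind is only a short script. The sketch below uses a hypothetical daily throughput history as a stand-in for the project data discussed above (not reproduced here); the values are chosen to match 20 items completed in 26 days:

import random

# Hypothetical daily throughput (items finished per working day): 26 days, 20 items.
history = [0, 1, 0, 1, 1, 2, 0, 1, 1, 1, 0, 2, 1,
           0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1]

def completion_days(backlog, history, runs=1000):
    """Sample daily throughput from history until the backlog is done; repeat."""
    outcomes = []
    for _ in range(runs):
        remaining, days = backlog, 0
        while remaining > 0:
            remaining -= random.choice(history)   # random sampling with replacement
            days += 1
        outcomes.append(days)
    return sorted(outcomes)

outcomes = completion_days(backlog=20, history=history)
p85 = outcomes[int(0.85 * len(outcomes)) - 1]     # 85th percentile (CDF at 85%)
print(f"85% of simulated runs finish within {p85} days")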

We already have a time estimate. What about the cost?

Suppose a team of three works full-time on their projects (including the project for which there are historical data). They cost $1,000 a day. In addition, the indirect costs are $100 a day and the average material cost for each work item is $50.

The estimated cost for a project with 33 items will be $61,050 with 85% certainty: 54 working days × ($1,000 + $100) per day, plus 33 items × $50, equals $59,400 + $1,650 = $61,050. Here, the costs are almost entirely a function of the working days, so the probability distribution of the estimated costs will mirror that of the estimated time.

Playing roulette with historical data gives much more reliable estimates than playing poker – planning poker or other similar estimating techniques.

However, how can Little’s law be applied to assess the total cycle time of a project that doesn’t consist of independent work items running end-to-end through the workflow? This is the case with many types of projects. Here, work items cannot be used to measure the work and the throughput; other units of work, such as hours, need to be used to measure the effort.

A project with a total effort (total WIP) of 1200 hours and a throughput of 80 hours per week will have a total cycle time of 15 weeks.

This approach will only work as an approximation, and only if there is historical information about the same team and similar projects with (ideally) identical workflow profiles – in terms of the number and type of process stages, the distribution of effort between these stages and between the tasks on and off the critical path, etc.

Alternatively, the formula can be applied to assess the cycle time of each process stage, and then determine the total project cycle time while focusing on the critical path activities that affect the cycle time.

Contrary to popular belief, minimizing WIP doesn’t minimize process and project cycle times, as WIP has the important function of buffering variability. Instead of just restricting WIP, we actually need to find the optimal level and set an upper and a lower limit.

The WIP level determines three zones of system performance: [13]

  • High WIP level – overload zone. Excessive WIP increases cycle times but does not improve the throughput. WIP has to be reduced to improve system performance.
  • Low WIP level – starvation zone. When WIP is reduced to a minimum, process cycle times are the shortest possible. But there is no buffer in the system and it’s exposed to the negative effect of variability. Throughput can drop dramatically due to process starvation, which affects the overall project cycle time.  
  • Optimal WIP level – optimal zone. This is the WIP level that ensures best system performance with maximum throughput and minimal cycle time.

Figure 10.7: WIP Limits


The optimal WIP at all process stages helps to achieve a smooth workflow and to reduce unevenness (Mura) which in turn reduces waste.

"At low WIP levels (below the Critical WIP), increasing WIP greatly increases throughput while cycle times are changed little. But at high WIP levels, the reverse is true: cycle times increase linearly with increasing WIP while throughput is almost not affected." [14]

- Mark L. Spearman and H.J. James Choo

Excessive WIP not only affects cycle time, quality and the feedback speed, but also reduces productivity and throughput because of multitasking.

When we focus on a task, we gather relevant contextual information, which is necessary for its effective performance. Initially, this information is stored in our short-term memory, but when we focus long enough, it moves into our long-term memory and we develop context awareness about the task.

However, when we shift our focus to another task before finishing the previous one, we switch the context. We empty our short-term memory to free capacity for the new context, and when we switch back to the previous task, we have to recover the lost information. Each switch or interruption is associated with a cognitive loss and wastes time and energy. Multitasking lowers the quality of work and provokes stress.

The task-switching penalty includes: [15]

  • Wasted time for physically performing the switch
  • Rework of untimely aborted work
  • Time to restore the context
  • Frustration cost
  • Loss of team binding effect

Therefore, the optimal WIP level is the one that best balances the cost-of-time savings (shorter cycle time) and the benefit of better productivity (minimized multitasking) against the cost of capacity underutilization (resulting from variability and starvation).

WIP limits are a pull-based tool for matching the work with a team’s capacity. The constrained WIP helps to focus a team’s collaborative efforts, not just on doing work but on fast completion of specific tasks.

If we set only an upper WIP limit, it would mean that we are satisfied with any lower level, even zero. Of course, zero WIP means zero throughput and zero capacity utilization. Therefore, we need to set both the maximum and minimum WIP limits to frame its optimal level.

Placing a WIP constraint at each process stage limits the total WIP in the system. This can be done using a kanban system. When the maximum limit for a certain stage is reached, it stops taking work from the upstream process, and the restraining signal gradually propagates upstream. Once the process stage frees up capacity to pull new work, the demand signal propagates upstream and the smooth workflow is resumed.

Intermediate WIP buffers help to synchronize the flow between processes that use batches of work of different sizes.
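
A minimal sketch of such a pull policy, with hypothetical stage names and limits:

# Kanban pull policy with minimum and maximum WIP limits per stage.
stages = {  # stage: (min_wip, max_wip, current_wip)
    "Preparation": (1, 4, 4),
    "Development": (2, 5, 2),
    "Acceptance": (1, 3, 0),
}

def can_pull(stage):
    """A stage may pull new work only while below its maximum WIP limit."""
    lo, hi, wip = stages[stage]
    return wip < hi

def starving(stage):
    """Dropping below the minimum WIP limit signals upstream to replenish."""
    lo, hi, wip = stages[stage]
    return wip < lo

for stage in stages:
    print(f"{stage}: can pull: {can_pull(stage)}, starving: {starving(stage)}")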

Figure 10.8: Kanban Board with Minimum and Maximum WIP Limit


The original Little’s Law deals with queueing systems which consist of discrete objects that enter a system at some rate, spend some time in queues and in service and after being serviced, leave the system.

The Law says that “under steady state conditions, the average number of items in a queuing system equals the average rate at which items arrive, multiplied by the average time that an item spends in the system.”

Little’s Law will hold if two assumptions or conditions are satisfied: [16]

  • Boundary condition: a finite time window to start and end with an empty system.
  • Conservation of customers – all customers that enter the system will be serviced and will exit the system. No customers are lost and the number of arrivals equals the number of departures.

When the law is stated in terms of the average output (as with an operating or project system), rather than the arrival rate, and the system isn’t empty at the beginning and end of the time window, then it also applies, at least as an approximation, if the following conditions are met: [17]

  • Conservation of flow – all work items that enter the system will be processed and will exit the system. No work items are lost, and the number of arrivals equals the number of departures.
  • The size of the WIP is roughly the same at the beginning and end of the time interval. There must be neither significant growth nor decline in the WIP's size.
  • The average age or latency of the WIP is stable. When the WIP never drops to zero, the jobs shouldn’t be getting older or younger. Aging work items accumulate passive WIP that doesn’t leave the system, causing the law to overstate the actual time in system for the items that do leave it.

The above conditions have practical implications. On the one hand, they are needed so that we can have reliable flow metrics. On the other hand, when the WIP size is optimal, they facilitate smooth flow and sustainable throughput. Therefore, these conditions should inform the process policies.

Little’s Law conditions reinforce the economic framework. For instance, there is another reason to actively control aging work items, which are not only a symptom of problems and waste but also distort the flow metrics.

Our TTSS addition to Little's Law states that the Total Time Spent [by Items] in a System (TTSS) during a specific period is equal to the average number of items in the system (L) over that period multiplied by the duration (P) of the period.

Mathematically, it can be expressed as:

TTSS = L * P

where:

  • TTSS is the total time spent by items in the system,
  • L is the average number of items in the system, and
  • P is the duration of the period.

Interestingly, this metric doesn't directly depend on the quantity of items or the time they spend in progress (or within the system); rather, it relies on the average number of items present throughout the period.

TTSS is a measure of system load or utilization. Here are examples of how it can be calculated:

  • For a period of 65 days, the average number of items in a system is 33. Therefore, the total time the items spend in the system over that period (TTSS) is 33 x 65 = 2145 days.
  • An organization has an average of 22.7 projects in progress. If we assume that there are no canceled or paused projects and that all projects start and finish within a year, what would be the sum of the durations of all the completed projects for that year? TTSS = 22.7 x 1 year = 22.7 years.

The cumulative projects-in-progress time within a given period (TTSS) is determined by the average number of projects in progress (WIP), not directly by the number of projects and their durations.

A queue is a sequence of entities (in project management – work items or jobs) awaiting their turn to be serviced.

A queueing system consists of jobs, the job arrival process, queue or waiting line, servers and service process, and the departure process.

Projects are queueing systems whose queues can accumulate a WIP inventory and affect cycle time. Therefore, it’s important to understand the fundamental properties of queuing systems, the factors influencing queues, and how they can be managed.

Figure 10.9: Queueing System


If a job arrives in the system every 20 minutes, and the service time for each job is exactly 20 minutes, no queue will form and the queue time will be zero. The cycle time (queue time plus service time) will be equal to the service time. Server utilization will be 100%.

Suppose four jobs (A, B, C and D) are expected to arrive at 13:00, 13:20, 13:40 and 14:00, but job B is 10 minutes late and arrives at 13:30. The server will be idle for 10 minutes and its utilization will fall below 100%. There is no way to compensate for this, and utilization for a certain period can only go down. Job C will arrive on time, but will have to wait in the queue for 10 minutes while the server is busy.

If job D arrives 10 minutes earlier (at 13:50), this will restore the average inter-arrival time (20 minutes), but since job C will be completed at 14:10, D will be finished at 14:30. If the next job arrives on time (at 14:20) it will be in a queue for 10 minutes.

In another case, let's assume that the four jobs arrive on time, but the service time varies – it’s 5 minutes less for job A and 5 minutes more for B. After completing job A, the server will remain idle for 5 minutes and its utilization will decrease, and after job B is serviced, a queue of 5 minutes will be formed.

The examples above show that variations in inter-arrival and service times can lead to queuing and reduced capacity utilization.
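
These scenarios can be replayed with a minimal single-server script – a sketch using the times from the example, with job B arriving late at 13:30 and job D early at 13:50; minutes are counted from 13:00:

# Replay of jobs A-D through a single server with 20-minute service times.
jobs = [("A", 0), ("B", 30), ("C", 40), ("D", 50)]   # (job, arrival minute)
service_time = 20

server_free, idle = 0, 0
for job, arrival in jobs:
    if arrival > server_free:
        idle += arrival - server_free   # server sits idle waiting for late work
    start = max(arrival, server_free)
    print(f"Job {job}: waits {start - arrival} min, "
          f"finishes at minute {start + service_time}")
    server_free = start + service_time
print(f"Server idle time: {idle} min")   # 10 minutes, as in the example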

Queueing systems can be described by a three-factor A/S/c notation where A characterizes the arrival process – the distribution of inter-arrival times, S describes the service process – the distribution of service times, and c indicates the number of servers.

Queueing models can be very complicated. The basic model that can describe a project queueing system is the M/M/1 queue – Markov arrival process/Markov service process/Single server. In this model, the inter-arrival and service times are exponentially distributed and the jobs are serviced by a single server.

In an M/M/1 queue, jobs arrive one at a time and inter-arrival and service times are exponential random variables. The average inter-arrival time is known, but both the arrival and service process are stochastic (random) and memoryless. The distribution of inter-arrival and service times does not depend on the history of each process, but only on its present state.

The project team, which has shared responsibility for the work, can be considered a single server. The server may also be a piece of infrastructure or a machine, such as a 3D printer or a test server.

Variability causes queues, and queue sizes increase with the capacity utilization of the server. The M/M/1 model can be used to describe the relationships between the average service time, arrival rate (jobs per unit time), capacity utilization, and waiting time: [18]

(1) Cycle time = Waiting time in queue + Service time

(2) System utilization or Utilization of the server (the fraction of time the server is busy) = Arrival rate/Service rate

or

(3) System utilization = Arrival rate * Average service time

(4) Service rate = 1/Average service time

(5) Probability of an empty system = 1−System utilization

(6) Average waiting time in queue = (System utilization/Probability of an empty system) * Average service time

For instance, if the arrival rate is 4 jobs per day and the service rate is 5 jobs per day, the utilization is 0.8 (80%). The service rate is the average number of jobs that can be served at 100% utilization. The server is idle when it’s available to perform the service but the job queue is empty. Simply stopping the work will not reduce utilization, because the capacity is not available during that time.

When the service time is stable, the average cycle time depends only on the average waiting time (1), which in turn depends only on the system utilization (6). Therefore, it’s important to find out more about the relationship between system utilization (the team’s capacity utilization) and the queues. Let's look at an example.

Suppose that the average time to service a job is one day. What will be the average waiting time in queue (AWT)?

Let's start with 20% utilization. We can expect that the arriving jobs will wait on average 0.25 days in queue (see formula 6 above).

AWT (20% utilization) = (0.2/0.8)*1 = 0.25*1 = 0.25 days

Let's now increase the utilization:

AWT (40% utilization) = (0.4/0.6)*1 = 0.67 days

AWT (50% utilization) = (0.5/0.5)*1 = 1 day

AWT (60% utilization) = (0.6/0.4)*1 = 1.5 days

AWT (67% utilization) = (0.67/0.33)*1 = 2 days

AWT (75% utilization) = (0.75/0.25)*1 = 3 days

AWT (80% utilization) = (0.8/0.2)*1 = 4 days

AWT (90% utilization) = (0.9/0.1)*1 = 9 days

AWT (95% utilization) = (0.95/0.05)*1 = 19 days

AWT (99% utilization) = (0.99/0.01)*1 = 99 days

AWT (100% utilization) = (1/0)*1 = ∞ (the queue grows without limit)
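
The whole table follows from formula (6). The sketch below recomputes it and, as a cross-check, approximates the same quantity by simulating an M/M/1 queue with exponential inter-arrival and service times (a minimal illustration, not a full queueing model):

import random

def awt(utilization, service_time=1.0):
    """Formula (6): average waiting time in queue for an M/M/1 system."""
    if utilization >= 1.0:
        return float("inf")              # the queue grows without bound
    return utilization / (1 - utilization) * service_time

for u in (0.2, 0.5, 0.8, 0.9, 0.95):
    print(f"{u:.0%} utilization -> AWT = {awt(u):.2f} days")

def simulated_awt(utilization, jobs=200_000, service_time=1.0):
    """Single-server FIFO simulation with exponential arrivals and service."""
    arrival_rate = utilization / service_time
    t = server_free = total_wait = 0.0
    for _ in range(jobs):
        t += random.expovariate(arrival_rate)          # next arrival
        start = max(t, server_free)
        total_wait += start - t                        # time spent in queue
        server_free = start + random.expovariate(1 / service_time)
    return total_wait / jobs

print(f"Simulated AWT at 80%: {simulated_awt(0.8):.2f} days (theory: {awt(0.8):.2f})")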

The project work is variable, which interrupts the workflow. The work items are of varying complexity and size. Team productivity and completion times vary. Rework and reiterations cause even greater interruptions.

When there is slack in the system, the effect of these interruptions may not be dramatic, but as the slack decreases, the system clogs.

As capacity utilization increases, the waiting time gets disproportionately longer. At 50% utilization, the waiting time is equal to the service time, but at 75% and 80%, the waiting time is three and four times longer than the service time.

Comparing cycle time and service time: from formulas (1) and (6), cycle time = service time / (1 − utilization), so at 50% utilization the cycle time averages twice the service time, and at 90% utilization it averages ten times the service time.

The queue grows dramatically when utilization exceeds 90% and becomes infinitely long at 100%.

Therefore, matching team capacity and job demand requires planning for an appropriate capacity utilization, not 100% utilization. There must be slack in the system to absorb variations.

The slack – the time when team members are not busy – is necessary for another important reason. It improves team effectiveness. Slack provides flexibility, reduces stress, improves quality, increases security and provides resources for change, innovation and learning. [19]

The optimum utilization should be based on the trade-off between the cost of queues, the cost of capacity (efficiency loss) and the effectiveness gains.
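To make this trade-off concrete, here is a toy cost model in Python. The cost coefficients are illustrative assumptions of ours, not figures from the framework: capacity cost grows as utilization falls (more paid-for idle capacity), while delay cost grows with the M/M/1 waiting time.

```python
# A toy model of the utilization trade-off; coefficients are assumptions.

def total_cost(u: float) -> float:
    capacity_cost = 10.0 / u          # serving the same demand at utilization u
    delay_cost = u / (1 - u)          # proportional to the M/M/1 waiting time
    return capacity_cost + delay_cost

levels = [i / 100 for i in range(50, 100)]
best = min(levels, key=total_cost)
print(f"cheapest utilization under these assumptions: {best:.0%}")  # ~76%
```

Under these assumed costs the minimum lands at roughly 76% utilization; with a different cost ratio the optimum shifts, which is precisely the point of the trade-off.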

Figure 10.10: Capacity Utilization and Queuing

It is important to remember that capacity utilization is the ratio of the job arrival rate to the service rate (with the service rate calculated at 100% utilization). To reduce utilization and the queue size, it is necessary to decrease the arrival rate, increase the service rate (team capacity), or both.
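A quick numeric comparison of the two levers, again using formula (6) and assumed rates:

```python
# Two levers on the queue, starting from 4 arrivals/day against a
# capacity of 5 jobs/day (assumed numbers).

def awt(arrival_rate: float, service_rate: float) -> float:
    u = arrival_rate / service_rate
    return u / (1 - u) / service_rate   # formula (6), service time = 1/rate per (4)

print(f"baseline:        {awt(4, 5):.2f} days")  # 0.80
print(f"throttle demand: {awt(3, 5):.2f} days")  # 0.30
print(f"add capacity:    {awt(4, 6):.2f} days")  # 0.33
```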

The M/M/1 queue assumes that work items arrive one at a time. When multiple items arrive at the same instant, this is known as bulk (or batch) arrival. This is the situation in traditional project management, where jobs arrive in large batches that can be considered aggregations of individual work items.

Bulk arrivals further increase variability and the waiting time.

The project workflow usually involves a series of queues. To manage queues, it's important to visualize them. We can hardly manage what we cannot see and imagine. Visibility drives action for improvement.

In projects with physical deliverables, most queues and WIP are physical inventories that accumulate and signal problems. However, in many projects, including all knowledge work projects, queues and WIP become digital information which is stored on electronic media.

The kanban board is a great way to visualize the workflow, queues and WIP. The process steps are mapped onto the board and work items that should flow through the process are visually represented by kanban cards.

In the example below, the clustering of cards in the Preparation: Done column is a queue, and it indicates a problem with processing the work at the downstream Development stage. An empty column, by contrast, shows that the work is blocked upstream and that the downstream part of the process may soon be starved.

Figure 10.11: Workflow Visualization

Within the project, the WIP (the work in the creation and absorption system) includes both the work being actively worked on and the work waiting in a queue. The same applies to the WIP of the individual process stages. Technically, WIP that exceeds the optimal level should be considered work in queue.

When the WIP isn't limited, invisible queues form. Even when queues have been moved into separate process sub-stages, all other work items in progress are formally being worked on. In fact, when the work in progress is large, some of these items are being serviced and the rest are waiting in an invisible queue.

Working on all the items at the same time only eliminates the queue artificially – the multitasking makes things worse. Constraining the WIP makes the queues visible.

Another popular tool for visualizing workflow and queues is the Cumulative Flow Diagram (CFD).

Figure 10.12: Basic Cumulative Flow Diagram

The CFD is a tool for visualizing the workflow. It presents project progress, the stability and trends of the flow, and signals problems.

Figure 10.12 shows a basic CFD for a single-stage workflow. The vertical axis shows the cumulative amount of work that has been started or completed at any given time. The horizontal axis is a time scale. The Arrivals and Departures lines represent the amount of work that has been started and completed, respectively. The work between the two lines has been started but not yet completed, i.e., it's work in progress.

The vertical distance between the two lines shows the work in progress at the corresponding moment, and the horizontal distance indicates the time elapsed to move a unit of work from the beginning to the end of the process, i.e., the cycle time.

The slope of the lines shows the rate at which work arrives (we take on new work) and leaves the process (we finish the work started). We can easily check whether the arrival rate is approximately equal to the departure rate (more or less parallel lines), which would mean that the WIP and cycle time remain stable. The sections of the Departures line show the throughput for a given period.

In the example above, the arrival rate is greater than the completion rate. We start more work than we finish, so the WIP and the cycle time are constantly increasing while the throughput is decreasing.
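The quantities read off a CFD can also be computed directly from the raw cumulative counts. The sketch below uses made-up daily data; the cycle-time figure is a rough Little's-law estimate, only approximate here because the flow is not stable.

```python
# Made-up cumulative counts for seven days of a single-stage workflow.
arrivals   = [3, 6, 10, 14, 17, 21, 25]   # work started (cumulative)
departures = [1, 3,  5,  8, 10, 12, 14]   # work completed (cumulative)

wip = [a - d for a, d in zip(arrivals, departures)]   # vertical distance
throughput = departures[-1] / len(departures)         # avg completions per day
avg_cycle_time = (sum(wip) / len(wip)) / throughput   # Little's law (rough here,
                                                      # since the flow isn't stable)
print(f"WIP by day: {wip}")   # widening band: we start more than we finish
print(f"throughput ~ {throughput:.1f}/day, cycle time ~ {avg_cycle_time:.1f} days")
```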

Figure 10.13: Ideal Cumulative Flow Diagram (1)

Figures 10.13 and 10.14 present a cumulative flow diagram of an ideal workflow, which has the following properties:

  • The cycle time is short, constant and predictable
  • The WIP in each process stage is constant and optimal
  • The throughput is constant and predictable
  • All work arrival and departure rates are constant and equal
  • The process stages are perfectly synchronized – the inter-stage transitions are performed with a common and constant rhythm
  • There is no unevenness (Mura) and the workflow demonstrates perfect stability
  • The forecasts are reliable

Figure 10.14: Ideal Cumulative Flow Diagram (2)

Of course, such a flow isn’t only ideal but also idealistic. Real CFDs look very different, yet the ideal image helps us to see anomalies that signal obstacles, blockages and bottlenecks, increasing WIP and cycle time, productivity problems, quality issues and rework, changes in priorities, scope creep, etc.

The diagram does not reveal the root cause of the problems, but calls for in-depth analysis.

Figure 10.15: Cumulative Flow Diagram

Figure 10.15 shows several of the many possible problem situations that need deeper analysis:

  • Scope change (A)
  • Bottleneck (B) – the Acceptance sub-process takes on new work but does not complete any
  • Work is blocked or is put on hold – for example, because of a change in priorities (C)
  • WIP and cycle time increase, and the forecast completion date slips (D)

The queue size is a leading indicator of cycle time and of the cost of time, so knowing it is important for decision-making. The queue size can be measured as the amount of work in the project that isn't yet done, and the average flow rate can be used to estimate the expected time to complete. For example, 40 remaining work items at an average flow rate of 5 items per week suggest about 8 weeks of remaining work.

The queue time can also be measured by tracking a work item and measuring how long it takes to go through a queue.

To measure the waiting time across a sequence of queues in a multistage process, we have to record when an item arrives at and departs from the process, as well as the time it spends in service. The queue time is the difference between the total time elapsed from the beginning to the end of the process (the cycle time) and the time the item is actually being worked on.
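In code, the bookkeeping might look like this; the timestamps, the hours-in-service figure and the flow rate are hypothetical.

```python
from datetime import datetime

# Hypothetical tracking record for one work item in a multistage process.
entered = datetime(2022, 10, 3, 9, 0)    # arrived at the process
left    = datetime(2022, 10, 10, 17, 0)  # departed the process
hours_in_service = 14.0                  # time actually spent working on it

cycle_time_h = (left - entered).total_seconds() / 3600
queue_time_h = cycle_time_h - hours_in_service
print(f"cycle time {cycle_time_h:.0f} h, of which {queue_time_h:.0f} h in queues")

# Expected time to complete, from the queue size and the average flow rate.
remaining_items, flow_rate = 40, 5   # items; items per week (assumed)
print(f"expected time to complete: {remaining_items / flow_rate:.0f} weeks")
```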

Queues can be controlled by dealing with variability, demand, and supply.

Dealing with variability

Variability is inherent in project work, and it’s a natural product of knowledge creation and innovation. Therefore, trying to eliminate the variability of requirements, size and duration of tasks, team productivity, reiterations, etc., is dangerous and can undermine effectiveness.

There are two options for dealing with variability:

  • Provide slack in the project through capacity, WIP and time buffers
  • Reduce uncertainty by creating knowledge through a series of experiments (hypothesis testing)

Buffers and experiments have a cost, which must be traded off against the benefit of reducing the queue size.

Controlling demand and supply

WIP constraints limit the number of projects in progress and the amount of work in the creation and absorption system of a project. Local WIP limits can be used to minimize the total queue cost by shifting the queue location.

However, there may still be long queues in the portfolio and project backlogs and demand may need to be constrained by:

  • Preventing arrivals into the queueing system – blocking new projects and new requirements
  • Ejecting items from the queueing system: removing projects by raising the ROI threshold, purging low-priority project requirements, and looking for aging work items and deciding whether they are still valuable or should be purged

Supply can be influenced by providing additional temporary capacity. Examples of supply-oriented responses to emerging queues are: [20]

  • Applying additional resources to the queue – even small extra efforts can significantly reduce the queue size
  • Directing resources working part-time on the project to high-variability priority tasks on the critical path
  • Using high-powered experts who are intentionally not fully loaded and are able to respond quickly to an emerging queue
  • Using generalizing specialists (T-shaped resources) who can handle a wide range of tasks and can be quickly reassigned where needed
  • Cross-training team members at adjacent processes so they can help each other

A batch is the amount of work that passes from one process stage to the next downstream stage. The size of the batches has a huge impact on the flow.

Sequential development produces the largest batch sizes and the longest project cycle times. One development phase must be completed before the next can begin, and the entire work product is transferred as one batch from one phase to another. [21]

Large batches require more time to complete and, until then, the downstream work is put on hold. Cycle time is extended and the delivery of value is delayed. Trust is broken. Feedback opportunities are infrequent. Errors and problems are discovered later, when rework is costly. Learning and reducing uncertainty takes more time, and the risk of failure increases.

The work process lacks flexibility, and the response to problems and opportunities is slow. Complexity increases, which requires bigger integration effort and makes it difficult to identify the root causes of problems.

Large batches of work require funding in large batches, so the lean principle of incremental funding cannot be applied.

Finally, as we discussed above, large batches are bulk arrivals into a queueing system and they increase variability and the queue size.

Figure 10.16: Large Batches

Given these major shortcomings of large batches, smaller batches can significantly improve project performance.

How small should the small batches be? The optimal batch size is a tradeoff between the cost of holding an item in the batch and the transaction cost of sending the batch to the next process (e.g., the cost of a software deployment).
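The U-curve behind this tradeoff can be sketched numerically. The model below is a simple, economic-order-quantity-style approximation of ours with assumed costs, not the framework's prescribed method.

```python
# Illustrative batch-size U-curve: the fixed transaction cost is spread over
# more items as the batch grows, while the holding cost per item rises
# because items wait longer before the batch ships. Numbers are assumptions.

TRANSACTION_COST = 200.0   # cost of sending one batch downstream
HOLDING_COST = 2.0         # cost of holding one item for one batch "slot"

def cost_per_item(batch_size: int) -> float:
    return TRANSACTION_COST / batch_size + HOLDING_COST * (batch_size - 1) / 2

best = min(range(1, 101), key=cost_per_item)
print(f"cheapest batch size under these assumptions: {best}")
# The analytic optimum of this model, sqrt(2 * transaction / holding), is ~14.
```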

Actually, small batches are the practical tool for achieving incremental creation and absorption.

Large batches are synonymous with a massive amount of work fatally falling over a series of steep drops – a waterfall too steep to be climbed back in the opposite direction.

If large batches are a problem, can't we break them down into separate work items and get small batches? Unfortunately, the magic will not always work.

Figure 10.17: Large Batches and Queues

Let's look at Figure 10.17. Completed Phase 1 items pile up in a queue in the Phase 1: Done column and do not proceed to the next step of the process. A completed Phase 1 item cannot, by itself, release a Phase 2 work item.

The Phase 1 work can get meaningful feedback from the stakeholders only after we fully complete it. Only when all Phase 1 items are finished can we hold a phase review and move the entire work product to Phase 2.

Hence, in order for a work item or a pool of work items to be a separate batch, two conditions must be met:

1) It must be able to receive meaningful stakeholder feedback

2) It must exit the project or pass from one process step to another and release subsequent work

Individual batches can form larger batches. Projects themselves can be seen as batches, whose size is reduced by using Minimum Viable Projects.

The capacity to work in small batches depends on our ability to create a workflow that allows quick feedback and fast movement of work items downstream. In stark contrast, sequential development naturally requires working in large batches, while effective concurrent development needs smaller batches.

Figure 10.18: Vertical vs. Horizontal Slices of Work

To reduce batch sizes, we can:

  1. Use concurrent development and work in vertical slices of functionality.
  2. Apply the principle of minimum viability not only to projects but also to work items.
  3. Use a flexible system structure that allows independent changes in project deliverables.
  4. Split the work items into the smallest units of customer value. This also applies to units of value for process customers (the downstream stage as a customer).
  5. Create a workflow that allows frequent and fast feedback.
  6. Minimize handoffs and the amount of work being transferred between team members.
  7. Integrate and absorb the work products regularly and often.

The rocks in the water are a well-known Lean metaphor. Rocks are hidden under the water. When the water level is high, only a small part of them appears above it, but when the level drops, we begin to see more and bigger rocks and other obstacles.

The water level represents the WIP inventory, the size of the queues, the batch size and the length of the PDCA cadences. When these are large, they hide many problems and weaknesses. When we start to constrain them, the problems, weaknesses and bottlenecks become visible and we can address them.

To improve project performance, empirically limit the WIP and the batch size, control the queues, and set more frequent PDCA cadences. Then deal with the issues that surface, adjust and repeat.


_______________

[1] Reinertsen, Donald G. (2009). The Principles of Product Development Flow: Second Generation Lean Product Development. Celeritas Publishing

[2] Anderson, David M. (2014). Design for Manufacturability: How to Use Concurrent Engineering to Rapidly Develop Low-Cost, High-Quality Products for Lean Production. CRC Press

[3] Reinertsen, Donald G. (2009). The Principles of Product Development Flow: Second Generation Lean Product Development. Celeritas Publishing

[4] Ward, Allen, Jeffrey K. Liker, John J. Christiano and Durward Sobek II (Spring 1995). "The Second Toyota Paradox: How Delaying Decisions Can Make Better Cars Faster". Sloan Management Review

[5] Ibid.

[6] Ibid.

[7] Sobek II, Durward K., Allen C. Ward and Jeffrey K. Liker (Winter 1999). "Toyota's Principles of Set-Based Concurrent Engineering". Sloan Management Review

[8] Reinertsen, Donald G. (2009). The Principles of Product Development Flow: Second Generation Lean Product Development. Celeritas Publishing

[9] https://cucumber.io/docs/gherkin/reference (last accessed 2022-10-20)

[10] Takeuchi, Hirotaka and Ikujiro Nonaka (January 1986). "The New New Product Development Game". Harvard Business Review: hbr.org/1986/01/the-new-new-product-development-game (last accessed 2022-10-20)

[11] Reinertsen, Donald G. (2009). The Principles of Product Development Flow: Second Generation Lean Product Development. Celeritas Publishing

[12] Spearman, Mark L. and H.J. James Choo (2018). "Unintended Consequences of Using Work-In-Process to Increase Throughput". Project Production Institute: projectproduction.org/journal/unintended-consequences-of-using-wip-to-increase-throughput (last accessed 2022-10-20)

[13] Pound, Ed. Factory Physics Inc.: https://factoryphysics.com/flow-benchmarking (last accessed 2022-10-20)

[14] Spearman, Mark L. and H.J. James Choo (2018). "Unintended Consequences of Using Work-In-Process to Increase Throughput". Project Production Institute: projectproduction.org/journal/unintended-consequences-of-using-wip-to-increase-throughput (last accessed 2022-10-20)

[15] DeMarco, Tom (2001). Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency. Broadway Books

[16] Little, J. D. C. and S. C. Graves (2008). "Little's Law". In: D. Chhajed and T.J. Lowe (eds.), Building Intuition: Insights from Basic Operations Management Models and Principles. Springer Science+Business Media

[17] Ibid.

[18] Hopp, Wallace J. (2008). "Single Server Queueing Models". In: D. Chhajed and T.J. Lowe (eds.), Building Intuition: Insights from Basic Operations Management Models and Principles. Springer Science+Business Media

[19] DeMarco, Tom (2001). Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency. Broadway Books

[20] Reinertsen, Donald G. (2009). The Principles of Product Development Flow: Second Generation Lean Product Development. Celeritas Publishing

[21] Ibid.


{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
>