Chapter 16. Operations Perspective

Effective business operations use capital efficiently to deliver what the business sells. In other words, we have to discover just the right amount of resources needed to deliver the best possible customer service. In this chapter, we use data to avoid late deliveries and slow responses to our customers' needs without bankrupting our business.

We will start this chapter by examining company-wide indicators, and then we will work our way through the data to discover opportunities to improve our internal operations. We will expand on historical analysis by using statistics to predict future supplier behavior. We will also add accuracy to our predictive analysis with the help of an integrated planning tool that we use to confirm future demand.

In this chapter, we will review the following topics while we create our QlikView operations perspective:

  • Operations data model (accumulating snapshot)
  • On-time in-full analysis
  • Predicting supplier lead times
  • Supplier and on-time delivery correlation analysis
  • Planning in QlikView with KliqPlan

Operations data model

Operations involve multiple discrete events that are represented as documents in the ERP system. For example, our customer selling cycle includes a sales quotation, a sales order, a customer delivery, a sales return, a sales invoice, and a sales credit memo. Our supplier purchasing cycle includes a purchase order, a delivery, a return, a purchase invoice, and a purchase credit memo.

Although we can create a transactional fact table that allows us to analyze each discrete event, we are more interested in analyzing the relationships between the events than in the events themselves. For example, we want to know how much time it took to deliver a product after receiving its originating purchase order. It would also be insightful to compare the quantity that we delivered with the quantity of the originating purchase order. We would have to work across multiple rows in a transactional fact table to discover this information, and just as in a row-based database, working across rows in QlikView is challenging.

Therefore, we create a table where one instance of the whole operational process is stored in only one row. Each instance is an evolving event that is updated every time it reaches a milestone in the process. For example, in the beginning a row may only contain a sales order. When this sales order becomes a sales delivery, we update this row with data from the sales delivery. We refer to this fact table as an accumulating snapshot.

We measure the following evolving events in our data model:

  • Customer selling cycle
  • Supplier purchase cycle

We keep the accumulating snapshot table simple by modeling every cycle as a one-way, linear process. In our ERP system, we can begin this process with any discrete event. For example, we may start the selling cycle with a sales quotation, or we could skip this step and start with a sales order. In the script, we assume that a document created from nothing is a new cycle and concatenate a new row to the accumulating snapshot.

When we generate a document from another document, we assume that we are adding on to an existing cycle. We link this new document's data with an existing row that contains the base document's data. For example, if we generate a sales invoice from a customer delivery, we will insert this invoice into the same row as the base delivery document.
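
The following is a minimal sketch of this load logic. The source table names (SalesOrders and SalesDeliveries), their field names, and the shared key OrderID are hypothetical placeholders; the sketch only illustrates the pattern of concatenating new cycles and joining follow-on documents onto the same row:

// Hypothetical example: a new cycle starts as a new row in the snapshot.
Operational_Process:
Load
    OrderID       as _KEY_ProcessID,
    'Sales'       as [Process Type],
    DocDate       as [Order Document Date],
    Quantity      as [Order Quantity]
Resident SalesOrders;

// A delivery generated from an order extends the existing row, so we
// join it on the shared process key instead of concatenating a new row.
Left Join (Operational_Process)
Load
    OrderID       as _KEY_ProcessID,
    DocDate       as [Delivery Document Date],
    Quantity      as [Delivery Quantity]
Resident SalesDeliveries;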

Each event contains multiple dates and amounts that correspond to different discrete events in the process. For example, one row can potentially contain the sales order date, the delivery date, and the sales invoice date. As such, accumulating snapshots tend to have more columns and fewer rows than a transactional fact table.
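
Because these dates sit on the same row, a measure such as the order-to-delivery lead time becomes a simple row-level calculation. As a sketch, assuming the row carries the [Order Document Date] and [Delivery Document Date] fields, a chart expression could be:

Avg([Delivery Document Date] - [Order Document Date])

Because QlikView stores dates as numeric values, the subtraction returns the number of days between the two events.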

Tip

Although we focus on the number of rows in a fact table when we estimate the size of a QlikView application, the number of columns in a fact table is also important to consider. A large number of columns (>50) in a fact table may cause slow response times.

As sales and purchasing cycles have a similar sequence of events, we use the same columns for each cycle. For example, we use a column called [Order Quantity] to hold the quantity of both sales and purchase orders. We differentiate between each order using a field called [Process Type] that holds a text value that is either Sales or Purchasing. Even with this table optimization, the fact table contains almost one hundred columns.
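
In chart expressions, we can then isolate one cycle with set analysis on this field. For example, a sketch of a sales-only order quantity measure could look like the following:

Sum({<[Process Type] = {'Sales'}>} [Order Quantity])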

[Figure: the operations data model]

Most of this data model's dimensions are similar to the ones from previous perspectives. The difference is in how it handles dates. Each date has its own master calendar, and all the dates also share a common master calendar. We need a bridge table to link this common master calendar to the fact table so that it can handle the multiple date fields that exist in each row. Before we look deeper into this subject in the next section, let's review the 7Ws of our operations data model:

Dimensions

  • Who (Customer): This is a dimension that we first saw in a previous perspective.
  • Who (Sales Person): This is a dimension that we first saw in a previous perspective.
  • Who (Supplier): This is a dimension that we first saw in a previous perspective.
  • What (Item): This is a dimension that we first saw in a previous perspective.
  • What ([Quote Line Status], [Order Line Status], [Delivery Line Status], [Invoice Line Status]): These fields are how we keep track of both finished and pending cycles in the accumulating snapshot. The last step of a pending cycle has an O as its line status; otherwise, the line status is C.
  • When (Month, Year): This is the common set of calendar fields where we store the dates of multiple discrete events.
  • When ([Quote Due Date], [Order Due Date], [Delivery Due Date], [Invoice Due Date]): This is where we store event-specific due dates.
  • How ([Quote First Step], [Order First Step], [Delivery First Step], [Invoice First Step]): These fields indicate with a Yes or No value whether the corresponding document was the first step of the operations cycle.
  • Where (Warehouse): This is a dimension that we first saw in Chapter 15, Working Capital Perspective.

Metrics

  • How Many ([Quote Quantity], [Order Quantity], [Delivery Quantity], [Invoice Quantity]): These are the metrics that we measured in our transactional fact table, and they are also present in the accumulating snapshot. However, all of these metrics are on one row, and we begin the name of each metric with the name of the discrete event that it measures.

Handling multiple date fields

When we have multiple date fields in one row, we can't just fit every date into one calendar. Besides handling all the date fields in one common calendar, we also create a master calendar for each important date. The important dates in this case are the dates on which we create a new document in a cycle. We use these dates and the same calendar subroutine that we used in previous perspectives to create a separate calendar for each date field. We then use the calendars that correspond to particular documents in the analysis that we perform in this perspective:

call Qvc.CalendarFromField('_KEY_Quote_Date','QuoteCalendar','Quote'); 
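
Assuming the fact table carries similar key fields for the other documents (for example, _KEY_Order_Date, _KEY_Delivery_Date, and _KEY_Invoice_Date), the remaining calendars can be created with the same subroutine:

// Hypothetical field names; adjust them to match the keys in the fact table.
call Qvc.CalendarFromField('_KEY_Order_Date','OrderCalendar','Order');
call Qvc.CalendarFromField('_KEY_Delivery_Date','DeliveryCalendar','Delivery');
call Qvc.CalendarFromField('_KEY_Invoice_Date','InvoiceCalendar','Invoice');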

We also create a common calendar that helps the user navigate through the data without having to first think about which calendar to filter. This calendar behaves as shown in the following figure. When we select the process cycle identified by _KEY_ProcessID 1399, we notice that it contains multiple dates ranging from May 15, 2012, to June 9, 2012. As a result, the common calendar's Year and Month filters show both May and June 2012 as possible values:

[Figure: the common calendar's Year and Month values for process cycle 1399]

We make this behavior possible by creating a bridge, or link, table between the fact table and the CommonCalendar dimension. This link table is called DateLink, and it stores the dates of each cycle's discrete events in individual rows:

[Figure: the DateLink table linking the fact table to the CommonCalendar dimension]

The following code is an example of the script to include the dates of the first two discrete events in the DateLink table. The other dates can also be added in the same way:

DateLink:
Load
    _KEY_ProcessID,
    [Quote Document Date] as _KEY_Date,
    'Quote' as [Document Type]
Resident Operational_Process
Where not isnull([Quote Document Date]);

Concatenate (DateLink)
Load
    _KEY_ProcessID,
    [Order Document Date] as _KEY_Date,
    'Order' as [Document Type]
Resident Operational_Process
Where not isnull([Order Document Date]);
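
With the DateLink table in place, the common calendar can be generated from its _KEY_Date field. The following is a sketch that uses the same subroutine with an empty field prefix, so the resulting calendar fields are simply named Year, Month, and so on:

// Sketch: build the shared CommonCalendar from the bridge table's date key.
call Qvc.CalendarFromField('_KEY_Date','CommonCalendar','');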

Now that we've reviewed the operations data model and how we handle multiple date fields, let's start to analyze our operational cycles.
