Six Sigma Minitab Training: How to Understand Variability with Run Charts and Histograms

A run chart is simply a running record of continuous data (a measurement) over a specific time frame, as shown in Figure 1. In essence, process performance is assessed in terms of how much it varies over that time frame. The data are usually plotted as a line graph, which lets us identify and study changes over time. The way the plotted points are distributed can then be compared against the target for the measurement in question. Run charts are also easy to create and interpret, so they deliver a great deal of value for little effort. To interpret the data accurately, however, you must collect enough of it: typically about 50 data points.
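The mechanics are simple enough to sketch by hand. Minitab draws run charts automatically; the plain-Python sketch below (with invented measurement data, purely for illustration) computes the two things a run chart is read against: the median centerline and the number of "runs" of consecutive points on one side of it. Feeding `measurements` to any plotting tool with the centerline overlaid reproduces the picture.

```python
import statistics

# ~50 illustrative measurements (invented data): one reading per week,
# with a deliberate downward shift in the last five points.
measurements = [10.2, 9.8, 10.5, 10.1, 9.9] * 9 + [8.1, 8.0, 7.9, 8.2, 8.0]

# The centerline of a run chart is conventionally the median, not the
# mean, so a few extreme values do not drag it around.
centerline = statistics.median(measurements)

# Classify each point as above (+1) or below (-1) the centerline,
# skipping points that sit exactly on it; a "run" is a maximal streak
# of points on the same side.
sides = [1 if m > centerline else -1 for m in measurements if m != centerline]
runs = 1 + sum(1 for a, b in zip(sides, sides[1:]) if a != b)

print(f"centerline (median) = {centerline}")
print(f"runs about the median = {runs}")
# Unusually few runs suggests clustering or a shift in the process;
# unusually many suggests oscillation. Either pattern is a cue to look
# for what changed, which is exactly how a run chart is interpreted.
```

This is only a sketch of the arithmetic behind the chart, not a substitute for plotting the points and reading the pattern visually.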

Run charts are sometimes referred to as time series charts or trend charts. When looking for trends over time, it is important to identify the points where the data change for a specific reason. A special cause is a one-off situation that does not normally occur, such as the failure of an important piece of equipment in a production process. When evaluating run charts, the focus should be on common cause (general) variation. Some variation in a process is normal; however, one goal of a traditional Six Sigma team is to reduce this general variation so that variability around the target becomes smaller and smaller. Creating and evaluating a run chart over time will help you determine whether the variation in your process is growing or shrinking; the plotted data will not tell you what is causing the variation around the target, but it will guide you in making changes to the process. For example, we could create a run chart to monitor the BRM's timely submission of adverse events to the FDA.

If the variation in our regulatory adherence over time is fairly small but adherence drops significantly in a given month, we can then evaluate what happened that month to determine why it dropped.

Perhaps we know that in that month, due to an asbestos problem in an office building, a regional case processing center had to close unexpectedly and its staff had to be evacuated immediately, so the data could not be evaluated and processed. Irremediable situations like this one do occur; this is an example of special cause variation. What we can remedy and improve is the normal, expected variation that affects processes and their outcomes. The focus should be on monitoring variation around a specific target; for example, if the data points are dropping over time and generally falling below the target, it would be prudent to determine what is causing the normal variation to slip and correct it.
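A sustained slip below the target is easy to detect programmatically. The sketch below uses invented monthly adherence rates and an assumed target; it flags the longest streak of consecutive below-target months. The streak threshold of 4 is an assumption for illustration (published run chart rules vary in the exact count they treat as a signal rather than noise).

```python
# Monthly on-time submission rates (%) against a target. Both the
# rates and the target are invented for illustration.
TARGET = 95.0
monthly_rates = [96.1, 95.8, 96.4, 95.2, 94.7, 94.1, 93.6, 93.0]

# A single dip below target may be noise; a streak of consecutive
# below-target months suggests the process itself is slipping.
longest_streak = streak = 0
for rate in monthly_rates:
    streak = streak + 1 if rate < TARGET else 0
    longest_streak = max(longest_streak, streak)

# Assumed rule of thumb: 4 or more consecutive below-target points
# warrant investigation of the process.
signal = longest_streak >= 4
print(f"longest below-target streak = {longest_streak}, investigate = {signal}")
```

In practice you would still inspect the flagged months individually, since a special cause (like the facility closure above) calls for a different response than a gradual drift in common cause variation.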