Gotta Go Lean Blog

Calibration

Calibration is the process of comparing the measuring or output capabilities of a piece of equipment to a known standard. That comparison enables one of two basic actions to ensure the equipment can be used effectively. You can apply a correction factor to the instrument, much like “Kentucky Windage.” Or you can adjust the machine itself to remove the gap between the reference standard and the item being calibrated. In the common vernacular, Read more…
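The two options above can be sketched in a few lines of code. The reference value and instrument reading here are invented for illustration.

```python
# Minimal sketch of the two calibration options: apply a correction
# factor ("Kentucky Windage"), or adjust the instrument itself.

reference_value = 100.0     # known standard (e.g., a certified gauge block)
instrument_reading = 98.5   # what the instrument reports against that standard

# Option 1: leave the instrument alone and apply a correction
# factor to every reading it produces.
correction = reference_value - instrument_reading   # +1.5
corrected_reading = instrument_reading + correction

# Option 2: adjust the instrument so the gap goes to zero; after
# adjustment, readings match the standard directly with no correction.
print(corrected_reading)  # 100.0
```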

C-Level Executive

C-Level executives are the top individuals in an organization’s hierarchical structure. The most common are the CEO (Chief Executive Officer), CFO (Chief Financial Officer), and COO (Chief Operating Officer). There are also frequently C-level executives in charge of marketing or information technology. Some organizations will even go further and assign a chief to things such as diversity or risk. C-level executives can make or break a Lean effort. They have tremendous power and a great deal of Read more…

[Image: An example of a KPI board with countermeasures]

Baseline Metrics / Baseline Measures

A baseline metric is a snapshot of the state of a process or operation prior to making improvements. In effect, it is the “before” measures of a process. It is important to know baseline metrics prior to making any changes. Without a clear understanding of your initial capabilities, you’ll be hard-pressed to determine whether or not your changes actually made any improvement. Taking baseline measurements also gives the added benefit of scrutiny. In many cases, Read more…

10-Foot, 3-Second Rule

The “10-Foot, 3-Second” rule is a rule of thumb regarding visual controls. From 10 feet away, you should be able to assess the status of an operation within 3 seconds. This does not mean that a novice needs to be able to do it, but someone who is familiar with the visual controls should be able to interpret the situation quickly and in passing. That means that visuals need to be big, and they need Read more…


Availability

Availability is exactly what it sounds like. It is a state of readiness to perform a task or operation. The term can be applied to a person, process, or piece of equipment. OEE and Availability The term availability is a component of OEE or overall equipment effectiveness. This is a common measure of how suited a piece of equipment is to a production process. It is calculated as a function of availability x performance x Read more…
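The OEE calculation mentioned above is a straightforward product of three factors. A minimal sketch, with component values invented for illustration:

```python
# Sketch of the OEE (overall equipment effectiveness) calculation:
# the product of availability, performance, and quality.

def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness = availability x performance x quality."""
    return availability * performance * quality

# Example: the machine is up 90% of planned time, runs at 95% of its
# ideal rate, and 99% of its output is good parts.
score = oee(availability=0.90, performance=0.95, quality=0.99)
print(score)
```

Note how quickly the score erodes: three individually respectable factors still combine to an OEE under 85%.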

Attribution Theory

Attribution theory is the study of the psychology behind how people attribute causes to the way they behave and the resulting outcomes. There are two types of attribution. Internal: With internal attribution, the cause of the behavior is believed to be a function of the characteristics and personality traits of a person. External: External attribution, also known as situational attribution, assigns the cause to environmental factors related to a situation that the person is put Read more…

Artisan Processes

An artisan process is one that relies on the skills of workers rather than on strong processes. In the past, artisans were held in extremely high regard. This was primarily due, though, to the lack of a reasonable alternative for obtaining high-quality goods. Modern production methods and the sophistication of products, coupled with the ever-increasing expectation of quality, have driven a shift away from artisans and toward process-driven production. Production that relies on the skill and Read more…

Analytic Hierarchy Process (AHP)

The Analytic Hierarchy Process, or AHP, is a decision-making tool developed in the 1970s by Thomas L. Saaty. Its key characteristics are that it breaks big decisions into smaller ones and relies on direct, one-on-one comparisons to make judgments. Essentially, the analytic hierarchy process breaks criteria into progressively smaller criteria, and then compares the criteria at each level in a head-to-head manner. This takes complex decisions and breaks them down into a series Read more…
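The head-to-head comparison step can be sketched with a small pairwise-comparison matrix. The criteria names and judgment values below are hypothetical, and the priority vector is computed with the common column-normalization approximation rather than a full eigenvector calculation.

```python
# Sketch of AHP pairwise comparisons for three hypothetical criteria.

criteria = ["cost", "quality", "speed"]

# Saaty-scale judgments: matrix[i][j] says how much more important
# criterion i is than criterion j (reciprocals below the diagonal).
matrix = [
    [1.0, 3.0, 5.0],   # cost vs (cost, quality, speed)
    [1/3, 1.0, 2.0],   # quality
    [1/5, 1/2, 1.0],   # speed
]

n = len(matrix)
col_sums = [sum(row[j] for row in matrix) for j in range(n)]

# Normalize each column so it sums to 1, then average across each
# row to approximate the priority weight of each criterion.
weights = [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
           for i in range(n)]

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
```

With these judgments, "cost" dominates with a weight of roughly 0.65, and the three weights sum to 1.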

Analysis of Variance (ANOVA)

Analysis of variance, also known as ANOVA, is a sophisticated statistical modeling technique that looks at variation within and between two or more groups. This is a useful tool in Six Sigma because being able to recognize variation in results lets you zero in on the process that created that variation. The statistics behind the tool also help you isolate the general fluctuation (randomness) of a process from special cause variation. Sometimes differences in output Read more…
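The core of one-way ANOVA is a ratio: variation between group means divided by ordinary variation within groups. A hand-rolled sketch with invented data (in practice you would use a statistics package):

```python
# One-way ANOVA F statistic, computed by hand as an illustration.

groups = [
    [1.0, 2.0, 3.0],     # e.g., output measurements from machine A
    [2.0, 3.0, 4.0],     # machine B
    [10.0, 11.0, 12.0],  # machine C -- visibly different from A and B
]

all_values = [x for g in groups for x in g]
grand_mean = sum(all_values) / len(all_values)

# Between-group sum of squares: how far each group mean sits from the grand mean.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)

# Within-group sum of squares: ordinary scatter inside each group.
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

df_between = len(groups) - 1
df_within = len(all_values) - len(groups)

f_stat = (ss_between / df_between) / (ss_within / df_within)
print(f_stat)  # a large F means group means differ more than chance explains
```

Here the F statistic comes out at 73: the between-machine differences swamp the within-machine randomness, pointing at machine C as a special cause.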

Alternative Hypothesis

The alternative hypothesis is the assumption that there is a statistically significant difference between two sets of data. This is essentially the opposite of the null hypothesis. The alternative hypothesis is accepted if the null hypothesis is rejected. What this might mean in practice is that there is a statistical difference between the number of defects actually found in a sample and the number that would be expected if there were no additional factors influencing Read more…

Null Hypothesis

“Null hypothesis” is a statistical term for the assumption that there is no statistical difference between observations. For example, the null hypothesis would say that any differences between a sample and a population are due only to random chance. Statistical testing then determines whether the null hypothesis can be rejected. In practice, this might mean that a sample of products is assumed to be consistent Read more…
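The defect-count example from the two entries above can be sketched as a one-sided z-test on a sample proportion. All of the rates and sample sizes here are hypothetical.

```python
# Deciding between the null and alternative hypothesis with a
# one-sided z-test on a defect rate (normal approximation).
from math import sqrt
from statistics import NormalDist

p0 = 0.05        # null hypothesis: the process defect rate is 5%
n = 500          # sample size
defects = 40     # defects actually observed in the sample
p_hat = defects / n  # observed defect rate: 8%

# Standard error under the null, then the z statistic.
se = sqrt(p0 * (1 - p0) / n)
z = (p_hat - p0) / se

# One-sided p-value: probability of seeing this many defects (or more)
# purely by chance if the null hypothesis were true.
p_value = 1 - NormalDist().cdf(z)

alpha = 0.05
reject_null = p_value < alpha  # True -> accept the alternative hypothesis
print(z, p_value, reject_null)
```

With these numbers the p-value is well under 0.05, so the null hypothesis is rejected and the alternative (something beyond chance is driving the defects) is accepted.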

Beta Risk

Beta risk, statistically speaking, is the risk of accepting a null hypothesis when it is actually false. In other words, beta risk is a false negative, in which a product is declared free of defects when it actually has them. Beta risk is also known as a Type II error, or consumer’s risk. See Also: Consumer’s Risk
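Beta risk can be computed for a specific test once you posit what the true (but unknown) state actually is. A sketch using the same hypothetical defect-rate test, with a true rate the test is trying to catch:

```python
# Estimating beta risk (Type II error probability) for a one-sided
# defect-rate test. All rates and sample sizes are hypothetical.
from math import sqrt
from statistics import NormalDist

p0 = 0.05       # null hypothesis: 5% defect rate
p_true = 0.08   # the actual defect rate -- the null is really false
n = 500
alpha = 0.05

# Critical sample proportion: the test rejects the null only when the
# observed proportion lands above this threshold.
z_crit = NormalDist().inv_cdf(1 - alpha)   # about 1.645
se0 = sqrt(p0 * (1 - p0) / n)
p_critical = p0 + z_crit * se0

# Beta risk: the chance the observed proportion falls BELOW the
# threshold even though the true rate is 8% -- a false negative.
se_true = sqrt(p_true * (1 - p_true) / n)
beta = NormalDist().cdf((p_critical - p_true) / se_true)
print(beta)
```

With these numbers beta comes out near 12%: roughly one sample in eight from this genuinely defective process would still pass the test, which is exactly the consumer's risk.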