Has statistical process control become obsolete in electronics manufacturing?


Statistical process control (SPC) has long been an important technique for companies looking to ensure high product quality. In modern electronics manufacturing, however, the complexities involved cannot easily be mapped to SPC's fundamental assumption of process stability. This makes traditional SPC worthless as a high-level approach to quality management, particularly in light of the ever-increasing amount of data being collected. An approach aligned with the Lean Six Sigma philosophy, with a wider scope than SPC, is better at identifying and prioritising relevant improvement initiatives.

By Vidar Gronas, sales director, skyWATS

SPC was introduced in the 1920s and designed around the manufacturing techniques of that era. One of its purposes was to detect undesired behaviour early enough to allow quick intervention and improvement. Its limitation is that it was also built on the information technology of that era, a landscape completely different from today's. Beyond IT, product complexity and capabilities were different back then; the measurements taken in manufacturing operations in those days have little in common with today's. On top of this complexity, globalised markets have driven up manufacturing volumes, so the amount of output data produced today is unimaginable by the standards of the 1920s.

Fundamental limitations
SPC still appears to hold an important position among electronics original equipment manufacturers (OEMs). It is found in continuous manufacturing processes, used to calculate control limits and attempt to detect out-of-control process parameters. In theory, such control limits help visualise when things are turning from good to bad. But a fundamental assumption of SPC is that special cause variations have been removed or accounted for, so that the process is stable and only common cause variation remains; the control chart then flags new special causes, which are exactly what you need to worry about when parameters start to drift.


An electronic product today can contain hundreds of components. It will undergo many design modifications due to factors such as component obsolescence. It will be tested at various stages of the assembly process and go through multiple firmware revisions. Its software versions will be tested by different test operators, its behaviour under varying environmental conditions will be gauged, and so on.

A dynamic manufacturing process
An example of this is Aidon, a manufacturer of smart metering products. According to the firm’s head of production, Petri Ounila, an average production batch…

  • Contains 10,000 units
  • Has units containing over 350 electronic components each
  • Experiences more than 35 component changes throughout this build process

This amounts to a ‘new’ product or process roughly every 280 units. In addition, there are changes to the test process, fixtures, test programs, instrumentation and more. The result is an estimated average of a ‘new process’ every 10 units or less. Put another way, that is 1,000 different processes in the manufacture of a single batch.
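
Working the figures through makes the churn concrete. A quick sketch of the arithmetic (the component-change division gives about 286 units, which the text above rounds to roughly 280):

```python
# Rough arithmetic based on the Aidon batch figures quoted above.
batch_size = 10_000            # units in an average batch
component_changes = 35         # component changes during the build
new_process_interval = 10      # estimated 'new process' every 10th unit

units_per_component_change = batch_size / component_changes
distinct_processes = batch_size / new_process_interval

print(f"A component change roughly every {units_per_component_change:.0f} units")
print(f"About {distinct_processes:.0f} distinct 'processes' in a single batch")
```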

How would you even begin to eliminate special cause variations and establish a stable process here? And should you even try?

Even if you managed, how would you go about implementing the alarm system? A set of decision rules developed at the Western Electric Company back in 1956 is known as the Western Electric rules, or WECO rules. They specify patterns of observations, defined by how far points fall from the centre line in units of standard deviation, whose violation justifies investigation. One problematic feature of the WECO rules is that, taken together, they will trigger a false alarm on average once every 91.75 measurements, even when nothing in the process has actually changed.
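
For illustration, here is a minimal sketch of the first (and simplest) WECO rule, a single point more than three standard deviations from the centre line; the other rules follow the same pattern of counting points within standard-deviation zones. The data and limits below are made up, not taken from any real process.

```python
import statistics

def weco_rule_1(values, centre, sigma):
    """Flag any point more than 3 standard deviations from the centre line
    (the first and simplest of the Western Electric rules)."""
    return [i for i, x in enumerate(values) if abs(x - centre) > 3 * sigma]

# Illustrative measurement series with one obvious outlier.
measurements = [10.1, 9.8, 10.0, 10.2, 9.9, 13.5, 10.1, 9.7]
centre = statistics.mean(measurements[:5])   # baseline taken from 'stable' data
sigma = statistics.stdev(measurements[:5])

print("Rule 1 violations at indices:", weco_rule_1(measurements, centre, sigma))
```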

False alarms everywhere!
Let’s say you have an annual production output of 10,000 units. Each gets tested through five different processes, and each process includes an average of 25 measurements. Combining these, you will get around 62 false alarms per day on average, assuming 220 working days per year.
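
The arithmetic behind that figure is simple enough to check. A minimal sketch using the numbers above and the roughly 91.75-measurement in-control average run length of the combined WECO rules:

```python
# Quick sanity check of the false-alarm estimate quoted above.
annual_units = 10_000
processes_per_unit = 5
measurements_per_process = 25
working_days = 220
weco_average_run_length = 91.75   # in-control ARL for the combined WE rules

measurements_per_day = (annual_units * processes_per_unit
                        * measurements_per_process) / working_days
false_alarms_per_day = measurements_per_day / weco_average_run_length

print(f"{measurements_per_day:.0f} measurements/day "
      f"-> roughly {false_alarms_per_day:.0f} false alarms/day")
```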

Let’s repeat that: assuming that, against all odds and reason, you were able to remove the special cause variations, you would still be receiving 62 false alarms every day. People receiving 62 emails per day from a single source are likely to block them, leaving potentially important notifications unacknowledged and without follow-up. SPC-savvy users will probably argue that there are ways to reduce this with newer and improved analytical methods. You might be subjected to suggestions like the following:

“There are Nelson Rules, we have AIAG, or you should definitely use Juran Rules.”
“You need to identify auto-correlation structures to reduce the number of false alarms!”
“What about this ‘ground-breaking, state-of-the-art’ chart developed in the early 2000s, have you given it a go yet?”

But even if we managed to reduce the number of false alarms to five per day, would that amount to a strategic alarm system? Once actual process dynamics are added to the mix, can SPC produce a system that manufacturing managers can rely on, one that keeps their concerns and ulcers at bay?

Enter KPIs
What most key performance indicator (KPI) schemes do is assume a limited set of important parameters to monitor, then carefully track these by plotting them in control charts, X-mR charts or whatever else is used to try and separate the wheat from the chaff. These KPIs are, in reality, very often captured and analysed downstream in the manufacturing process, after multiple units have been combined into a system.
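
For reference, the X-mR (individuals and moving range) chart mentioned above reduces to a handful of standard constants. A minimal sketch of the control-limit calculation, with made-up measurements:

```python
def xmr_limits(values):
    """Control limits for an individuals (X) and moving range (mR) chart,
    using the standard constants 2.66 and 3.267 for subgroups of size 2."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    x_bar = sum(values) / len(values)
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return {
        "x_centre": x_bar,
        "x_ucl": x_bar + 2.66 * mr_bar,
        "x_lcl": x_bar - 2.66 * mr_bar,
        "mr_ucl": 3.267 * mr_bar,
    }

# Illustrative measurements only.
print(xmr_limits([5.1, 5.3, 4.9, 5.2, 5.0, 5.4, 5.1]))
```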

An obvious consequence of this is that problems are not detected where they happen or as they happen. The origin could easily lie in one of the components or processes upstream, manufactured a month ago in a batch that by now has reached 50,000 units. A cost-failure relationship known as the 10x Rule says that for each step in the manufacturing process that a failure is allowed to survive, the cost of fixing it increases by a factor of 10. A failure found at the system level can mean that technicians need to pick the product apart, which in itself introduces opportunities for new defects. Should the failure be allowed to reach the field, the cost implications can be catastrophic. There are recent examples of firms pushed into bankruptcy by the prospect of massive recalls; Takata filed for bankruptcy after a recall of airbag components that may eventually exceed 100 million units.
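
As a rough illustration of the 10x Rule, assuming a one-dollar cost of catching a defect at the component stage (the stage names below are examples, not a standard):

```python
# Illustration of the 10x Rule: cost multiplies by ten for each stage a defect survives.
stages = ["component", "board", "sub-assembly", "system", "field"]
base_cost = 1.0  # assumed cost (in dollars) of catching the defect at the first stage

for i, stage in enumerate(stages):
    print(f"{stage:>12}: ${base_cost * 10 ** i:,.0f}")
```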

Figure: Real-time dashboards and drill-down capabilities allow you to quickly identify the contributors to poor performance. Here, it is apparent that Product B has a single failure contributing to around 50 per cent of the waste. There is no guarantee that Step 4 is included in the KPIs monitored by an SPC system, but it is critical that the trend is brought to your attention.

One of the big inherent flaws of SPC, measured against modern approaches such as Lean Six Sigma, is that it makes assumptions about where problems will come from. This is an obvious consequence of assuming stability in what are, in reality, highly dynamic factors, as discussed earlier. Trending and tracking a limited set of KPIs only amplifies this flaw, and in turn kicks off improvement initiatives that are likely to miss your most pressing or most cost-effective issues.

A modern approach
All this is accounted for in modern methods for quality management. In electronics manufacturing, this starts with an honest recognition and monitoring of your true First Pass Yield (FPY). Here, ‘true’ means that any kind of failure must be accounted for, even if it came from something as simple as the test operator forgetting to plug in a cable. Every test after the first represents waste: resources the company could have spent better elsewhere. True FPY is perhaps your single most important KPI; yet most OEMs have no real idea what theirs is.
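
As a sketch of what ‘true’ FPY means in data terms, assuming test records carry a serial number, a test step and a pass/fail flag (the field and step names below are illustrative):

```python
from collections import defaultdict

def true_first_pass_yield(test_records):
    """True FPY: the share of units that pass every test step on the first
    attempt, counting any failure at all -- even a forgotten cable."""
    first_attempt_clean = {}
    attempts_seen = defaultdict(set)
    for rec in test_records:  # records assumed ordered by time
        unit, step, passed = rec["serial"], rec["step"], rec["passed"]
        if step in attempts_seen[unit]:
            continue  # a retest; the first attempt already decided the outcome
        attempts_seen[unit].add(step)
        if not passed:
            first_attempt_clean[unit] = False
        else:
            first_attempt_clean.setdefault(unit, True)
    units = len(first_attempt_clean)
    return sum(first_attempt_clean.values()) / units if units else 0.0

records = [
    {"serial": "A1", "step": "ICT", "passed": True},
    {"serial": "A1", "step": "FCT", "passed": False},  # first-attempt failure
    {"serial": "A1", "step": "FCT", "passed": True},   # retest (waste)
    {"serial": "A2", "step": "ICT", "passed": True},
    {"serial": "A2", "step": "FCT", "passed": True},
]
print(f"True FPY: {true_first_pass_yield(records):.0%}")   # 50%
```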

Live dashboards
By knowing your FPY, you can break this KPI down in parallel across different products, product families, factories, stations, fixtures, operators and test operations. Having this data available in real time as dashboards gives you a powerful ‘Captain’s view’. It lets you quickly drill down to the real origin of poor performance and make interventions based on economic reasoning. Distributing this insight as live dashboards to all involved stakeholders also enhances accountability for quality. A good rule of thumb for a dashboard is that unless the information is brought to you, it won’t be acted on. We simply don’t have time to go looking for trouble.
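
On the data side, such a breakdown is essentially a grouping exercise. A minimal sketch, assuming per-unit first-pass results are already available and using illustrative column names:

```python
import pandas as pd

# Illustrative per-unit first-pass results; column names are assumptions.
units = pd.DataFrame({
    "product":    ["A", "A", "B", "B", "B", "A"],
    "station":    ["ST1", "ST2", "ST1", "ST1", "ST2", "ST1"],
    "operator":   ["op1", "op2", "op1", "op2", "op2", "op1"],
    "first_pass": [True, True, False, True, False, True],
})

# FPY broken down across each dimension in turn: the raw material for a dashboard.
for dimension in ["product", "station", "operator"]:
    fpy = units.groupby(dimension)["first_pass"].mean().mul(100).round(1)
    print(f"\nFPY by {dimension} (%):\n{fpy}")
```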

As a next step, it is arguably critical that you are able to quickly drill down to a Pareto view of your most frequently occurring failures, across any of these dimensions. At this point, tools from SPC may well become relevant for learning more of the details. But you know that you are now applying them to something of high relevance, not to an educated guess. You suddenly find yourself in a situation where you can prioritise initiatives based on a realistic cost-benefit ratio.
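
A Pareto view over failure causes is just as simple in principle; a minimal sketch with made-up failure records:

```python
from collections import Counter

# Illustrative failure records: one entry per failed test, labelled by cause.
failures = ["solder bridge", "missing cable", "solder bridge", "firmware timeout",
            "solder bridge", "missing cable", "connector damage"]

counts = Counter(failures).most_common()
total = sum(n for _, n in counts)

cumulative = 0
for cause, n in counts:
    cumulative += n
    print(f"{cause:<18} {n:>3}  ({100 * cumulative / total:5.1f}% cumulative)")
```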

Repair data
The presence of repair data in your system is also critical; it cannot be locked away exclusively in an MES system or an external repair tool. Repair data supplies context that improves root-cause analysis, and it brings other benefits too. From a human resources point of view, it can tell you whether products are being blindly retested until normal process variation happens to bring a measurement within the pass/fail limits, or whether the product is in fact taken out of the standard manufacturing line and repaired, as intended. Have no illusions: it is not rare to see products retested more than 10 times within the same hour.
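
Spotting blind retesting is, again, largely a data question. A minimal sketch that flags any serial number tested more than a given number of times within an hour (timestamps, threshold and data are illustrative):

```python
from collections import defaultdict
from datetime import datetime, timedelta

def blind_retest_suspects(test_events, max_per_hour=10):
    """Return serial numbers tested more than `max_per_hour` times
    within any rolling one-hour window."""
    by_unit = defaultdict(list)
    for serial, timestamp in test_events:
        by_unit[serial].append(timestamp)
    suspects = set()
    for serial, times in by_unit.items():
        times.sort()
        for i, start in enumerate(times):
            window = [t for t in times[i:] if t - start <= timedelta(hours=1)]
            if len(window) > max_per_hour:
                suspects.add(serial)
                break
    return suspects

# Illustrative events: unit "X9" is retested 12 times in under an hour.
base = datetime(2024, 1, 1, 8, 0)
events = [("X9", base + timedelta(minutes=4 * i)) for i in range(12)]
events += [("Y1", base), ("Y1", base + timedelta(hours=3))]
print(blind_retest_suspects(events))   # {'X9'}
```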

In short, quality-influencing actions come from informed decisions. Unless you have a data management approach that is able to give you the full picture, across multiple operational dimensions, you can never optimise product and process quality, or your company’s profits.
You can’t fix what you don’t measure.

