Quality Maintenance (Hinshitsu Hozen): A TPM Practitioner Guide to Building a Prevention-Based Quality System

By Allan Ung | Founder & Principal Consultant, Operational Excellence Consulting

Published: 10 May 2026


An industrial operator wearing safety glasses and a grey work jacket stands before a high-precision laser cutting machine in a clean, modern factory. He holds a handheld digital control unit to monitor active processing conditions while a spark indicates the machine is in operation. To his left, a control station with multiple monitors displays process data, illustrating the integration of technical expertise and data-driven condition management.
Establishing and Sustaining the Standard: A shopfloor professional monitors critical process parameters in real-time, ensuring that equipment conditions align with the rigorous standards defined in the Quality Maintenance (QM) Matrix. This proactive management of 4M conditions — Machine, Material, Method, and Men/Women — is the foundation of a zero-defect manufacturing environment.

Allan Ung is the Founder and Principal Consultant of Operational Excellence Consulting (OEC), a Singapore-based management consultancy established in 2009. With over 30 years of experience leading operational excellence and quality transformation across manufacturing, technology, and global operations — including senior roles at IBM, Microsoft, and Underwriters Laboratories (UL) across Asia-Pacific — Allan brings deep shopfloor and strategic expertise to every engagement. He holds the following qualifications and recognitions: Certified Management Consultant (CMC, Japan), Certified Lean Six Sigma Black Belt, JIPM-certified TPM Instructor, TWI Master Trainer, and former National Examiner for the Singapore Business Excellence Award. Allan has designed and facilitated TPM implementations and operational excellence programmes for organisations across semiconductor, automotive, industrial manufacturing, logistics, and public sectors. His clients include Temic Automotive (Continental), Analog Devices, Amkor Technology, STATS ChipPAC, Panasonic, Micron, Lam Research, Infineon Technologies, Dorma, and Tokyo Electron, as well as Singapore government ministries and statutory boards.

The Inspection Trap: When Detection Masquerades as Quality Assurance

The quality assurance function at most manufacturing plants has a name that is technically accurate and operationally misleading. "Quality assurance" implies something preventive — that quality has been built into the process, that the conditions for consistent output are established and maintained, and that defects are stopped before they occur. In practice, what most QA functions actually do is quality detection. They inspect finished goods, reject non-conforming parts, investigate defect events after the fact, and manage the remediation cost of products that have already been made incorrectly. This is not assurance. It is expensive, reactive failure management dressed in the language of control.

I have worked with manufacturers across Asia-Pacific for more than thirty years, and the pattern is almost universal. A plant will have a quality management system, a quality policy, defect rate KPIs on the management review dashboard, and a team of quality engineers who are genuinely skilled and dedicated. What it will not have — in the vast majority of cases — is a systematic, maintained understanding of which specific equipment conditions are causally responsible for which specific defect types, and a structured programme to ensure that those conditions remain within the ranges that guarantee good product. The quality function knows what defects are occurring. It rarely knows, with engineering precision, why the equipment is producing them and what measurable conditions need to be controlled to stop it.

This gap — between knowing the defect exists and understanding the equipment conditions that generate it — is the territory that Quality Maintenance (Hinshitsu Hozen) occupies. It is also the reason that many plants plateau on quality improvement despite active quality programmes: they are very good at detecting defects, investigating incidents, and taking corrective actions. They are not building the condition-based, prevention-oriented quality infrastructure that would make most of those incidents unnecessary.

This article is a practitioner guide to that infrastructure. It is addressed to manufacturers who want to understand what Quality Maintenance actually is within the TPM framework, how it differs from the conventional quality management approach, and what it genuinely takes to move from a detection-based quality system to a prevention-based one. The previous article in this cluster covered Focused Improvement (Kobetsu Kaizen) in depth — and the relationship between FI and Quality Maintenance is one of the most important structural interdependencies in TPM. I will return to it explicitly.

What Quality Maintenance (Hinshitsu Hozen) Actually Is Within the TPM Framework

Quality Maintenance is a condition-based, equipment-centred quality assurance methodology. Its operating premise is that defects are caused by equipment conditions — and that controlling the conditions prevents the defects. This sounds straightforward. The organisational implications are significant and disruptive.

In the conventional quality management model, quality is primarily the responsibility of the quality function. Production operates the equipment. Quality checks what it produces. Engineering investigates significant defect events and recommends changes. The three functions work in their respective lanes, connected through escalation and review processes but not integrated around a shared, maintained model of which equipment conditions cause which quality outcomes.

In Quality Maintenance, this division is explicitly rejected. JIPM positions QM as a mandatory TPM pillar — not because quality is less important than reliability or productivity, but because genuine quality assurance cannot be achieved by a quality function operating downstream of the production process. It requires production, maintenance, quality engineering, and process engineering to build and sustain a shared, living understanding of the causal relationship between equipment precision, process conditions, and product quality characteristics. That understanding is what the QA matrix and QM matrix formalise — and maintaining those matrices as living management tools, rather than as workshop outputs that are filed and forgotten, is the organisational discipline that separates a genuine Quality Maintenance programme from a quality improvement exercise that happened once.

The JIPM definition of Quality Maintenance makes the logic explicit: it means figuring out the equipment conditions in which defects will not occur, setting those conditions up as standards, monitoring and measuring actual equipment conditions over time, and confirming that actual conditions are within the standards. The sequence matters. First, establish the conditions. Then, check and measure those conditions periodically. Then, prevent defects by keeping conditions within standard range. Then, predict the possibility of defects by monitoring trends in measured values before they breach the standard. And then — critically — take preventive action before the condition drifts far enough to generate a defect. This is condition control, not defect detection. It operates upstream of the production of bad product rather than downstream of it.
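The monitor-trend-act sequence described above can be sketched in a few lines of code: take periodic readings of a condition parameter, fit a trend, and flag preventive action before the standard limit is breached. This is an illustrative sketch only; the parameter name, readings, and limit are hypothetical, not JIPM-prescribed figures.

```python
# Sketch of QM condition control: trend the readings of a monitored
# condition and predict time-to-breach BEFORE the limit is crossed.
# All names and values below are hypothetical illustrations.

def fit_trend(times, values):
    """Ordinary least-squares slope and intercept for value vs. time."""
    n = len(times)
    mean_t = sum(times) / n
    mean_v = sum(values) / n
    slope = sum((t - mean_t) * (v - mean_v) for t, v in zip(times, values)) \
            / sum((t - mean_t) ** 2 for t in times)
    return slope, mean_v - slope * mean_t

def hours_to_breach(times, values, upper_limit):
    """Predicted hours until the trended condition crosses its upper limit.
    Returns None if the condition is not drifting toward the limit."""
    slope, intercept = fit_trend(times, values)
    if slope <= 0:
        return None  # stable or improving: no predicted breach
    current = slope * times[-1] + intercept
    return (upper_limit - current) / slope

# Hypothetical example: a temperature condition read hourly,
# with a standard upper limit of 60 degrees C.
t = [0, 1, 2, 3, 4]
temp = [55.0, 55.6, 56.1, 56.9, 57.4]
eta = hours_to_breach(t, temp, upper_limit=60.0)
if eta is not None and eta < 8:
    print(f"Preventive action needed: limit predicted in {eta:.1f} h")
```

The essential point is the output: not "a defect occurred" and not even "the condition is out of standard," but "the condition will reach its limit in roughly four hours unless acted on now."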

The contrast with the conventional approach is fundamental. A conventional quality system sets zero defects as an aspiration that may be acknowledged as unachievable in practice, focuses inspection and action at the point of product completion, and is reactive by design. A Quality Maintenance system sets zero defects as a specific engineering target derived from eliminating the causal equipment conditions that produce defects, focuses control at the equipment conditions rather than at the product, and is preventive by design. Both systems will tell you when defect rates are rising. Only one of them tells you, before the defect rate rises, that the equipment condition responsible for a given defect type is drifting toward its threshold — and that preventive action is needed now.

The prerequisite for this preventive capability is the analytical work that most organisations have never done: tracing each recurring defect type to the specific, measurable equipment conditions that produce it, verifying that causal relationship with rigour, and setting condition standards that can be monitored at a frequency sensitive enough to detect drift before it generates bad product. This analytical work is demanding. It requires quality engineers and maintenance technicians to collaborate at a depth they rarely achieve in conventional quality management arrangements. It requires data on both defect occurrence and equipment condition, integrated in a single analytical frame. And it requires the organisational will to pursue root causes rather than accepting incremental defect reduction through tighter inspection as an adequate response.

Zero Defects as an Engineering Target

Zero defects in TPM's Quality Maintenance framework is not a motivational slogan. It is a specific, derivable target that becomes achievable when every equipment condition causally linked to every significant defect type is identified, characterised, and controlled within a verified standard range. The logic is direct: if defects are caused by equipment conditions, and those conditions are maintained within ranges that have been verified to produce zero defects, then zero defects is the expected output of the system, not an aspiration.

This requires a precise understanding of the relationship between chronic and sporadic defects. Sporadic defects — sudden, significant departures from the normal defect level — are typically caused by a specific event: a component failure, a material substitution, an operating error. They attract immediate attention, are usually traceable to a discrete cause, and are addressed through restoration and corrective action. Chronic defects are more dangerous, and far more expensive. They persist at a low-level baseline, are accepted as "normal," and resist conventional corrective action because their causes are multiple, interrelated, and poorly understood. The OEC framework is explicit on this point: chronic defects require innovative measures to reduce the loss level to its fundamental minimum, not merely recovery measures to restore it to the previous baseline. And they will not be eliminated by experience, intuition, or incremental tightening of inspection standards. They require analytical rigour — specifically, the kind of physical analysis that traces each defect phenomenon to the equipment conditions that generate it and identifies what those conditions need to look like to make the defect impossible.

Time-series chart showing percentage of loss on the vertical axis and time on the horizontal axis. A horizontal dashed line marks the extreme value baseline. A tall narrow spike labelled "sporadic loss" rises sharply above the baseline and returns, with an annotation indicating that recovery measures are required to reduce it to the original level. A broader, persistent elevation labelled "chronic loss" sits above the extreme value line, with an annotation indicating that innovative measures are required to reduce it to the extreme state. Three numbered points mark the progression from chronic baseline through sporadic event and back.
Sporadic Defects & Chronic Defects: Sporadic losses respond to recovery measures that restore performance to its previous level. Chronic losses — the persistent low-level baseline that quality maintenance targets — require innovative analytical measures to reduce the loss level to its fundamental minimum. Zero defects is achievable only when chronic defect causes are eliminated, not merely managed. Source: Adapted from "Course on Total Productive Maintenance" by JIPM, 1992, via Operational Excellence Consulting.
Two-by-two matrix with "Causes" on the horizontal axis (Known on the left, Unknown on the right) and "Countermeasures" on the vertical axis (Known at the top, Unknown at the bottom). The top-left quadrant (known causes, known countermeasures) instructs: Apply countermeasures. The top-right quadrant (unknown causes, known countermeasures) instructs: Establish and perform countermeasures. The bottom-left quadrant (known causes, unknown countermeasures) instructs: Perform countermeasures and check results. The bottom-right quadrant (unknown causes, unknown countermeasures) instructs: Apply cause and effect analysis; Apply P-M analysis. Quadrant numbers 1, 2, and 3 reference the preceding chronic defect chart.
Steps of Countermeasures to Eliminate Chronic Defects: When causes are unknown, conventional corrective action cannot be applied. The countermeasures matrix makes explicit what most quality systems leave implicit: chronic defects with unknown causes require cause-and-effect analysis and — for complex, interrelated causal systems — P-M analysis. This is the analytical gateway through which Quality Maintenance passes that conventional quality management does not. Source: Adapted from "Course on Total Productive Maintenance" by JIPM, 1992, via Operational Excellence Consulting.

Quality loss — the Q component of Overall Equipment Effectiveness (OEE) — is the most direct metric connecting equipment conditions to quality outcomes. Within the 16 major losses framework, defect and rework loss (Loss 7) and yield loss (Loss 14) account for the quality dimension of OEE, along with measurement and adjustment loss (Loss 13) — the overhead cost of inspection and correction that itself grows in proportion to the degree to which quality is managed through detection rather than prevention. A plant whose quality system is primarily detection-based will consistently underperform on OEE quality rate, and it will also carry a hidden cost in measurement and adjustment loss that represents the organisational effort to manage quality problems that a prevention-based system would not produce. The relationship between QM and OEE Benchmarking is therefore direct: understanding where your OEE quality rate stands relative to world-class benchmarks tells you how large the gap is between your current quality system's performance and what condition-based quality control could deliver.
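The arithmetic connecting quality loss to OEE is simple enough to state directly. A minimal sketch using the standard OEE definitions; the production figures are hypothetical:

```python
# Quality rate as the Q component of OEE. Formulas are the standard
# OEE definitions; the unit counts below are hypothetical.

def quality_rate(units_processed, units_defective):
    """Fraction of processed units that are good.
    Defective here includes reworked units, per the defect and rework loss."""
    return (units_processed - units_defective) / units_processed

def oee(availability, performance, quality):
    """Overall Equipment Effectiveness = A x P x Q."""
    return availability * performance * quality

q = quality_rate(units_processed=10_000, units_defective=150)
print(f"Quality rate: {q:.1%}")            # 98.5%
print(f"OEE: {oee(0.90, 0.95, q):.1%}")    # 0.90 * 0.95 * 0.985 = 84.2%
```

A plant that excludes Q from its OEE calculation, as in the benchmarking study discussed later in this article, is implicitly treating `quality_rate` as fixed at 1.0 rather than as a controllable process variable.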

The 4M Framework: Understanding the Causal Logic of Defect Generation

Quality Maintenance is grounded in the 4M framework — the analytical structure that organises all possible sources of quality defects into four categories: Machine, Material, Method, and Men/Women. Understanding the 4M framework at the level QM requires is not the same as being familiar with the Ishikawa diagram. The 4M framework in QM is not a brainstorming tool for possible defect causes. It is a systematic, conditions-based specification of what every relevant element of the production system must be in order to guarantee good product.

Diagram showing four production input boxes labelled Material, Machine, Method, and Men/Women arranged on the left, all connecting through arrows to a central "Products" output. Two defect source labels appear between the inputs and the product: "Causal abnormalities in the equipment and processing methods" on the machine/method/material side, and "Occurrence of human mistakes" on the men/women side. Two control mechanism labels appear on the right: "Quality Maintenance" aligned with the equipment and process abnormality path, and "Mistake-proof" aligned with the human mistake path. A header reads "Guarantee for 100% Good Parts."
Guaranteeing 100% good product requires controlling both sides of the defect equation simultaneously: Quality Maintenance addresses causal abnormalities in equipment and processing methods across the Machine, Material, and Method dimensions; mistake-proofing addresses the occurrence of human mistakes in the Men/Women dimension. Neither alone is sufficient. Source: Adapted from "Course on Total Productive Maintenance" by JIPM, 1992, via Operational Excellence Consulting.

The Machine category is the most technically demanding. It encompasses the processing equipment itself (cleanliness, lubrication, fastener tightness, absence of loose parts), the jigs and tools (clean and undamaged fixings, unworn reference surfaces, freedom from chipping or wear), and the measuring instruments that are used to verify product quality (clean measurement elements, smooth mechanisms, proper calibration). The consequences of failing to maintain each of these conditions are specific and traceable: inadequate lubrication degrades equipment movement and leads to processing defects; loose fasteners create play in the machinery, generating dimensional variation; worn jigs cause processing defects through inconsistent workpiece positioning; miscalibrated instruments produce measurement errors that either allow defective product to pass or reject good product unnecessarily. None of these are abstract relationships. Each describes a physical mechanism that can be verified through observation and measurement.

The Material category addresses two distinct dimensions: the intrinsic quality of incoming materials (composition, dimensions, surface condition, hardness uniformity) and the quality of output from preceding process steps, which functions as the input material for the step under consideration. Both must meet specification for the downstream process to produce conforming output. A process that is otherwise perfectly controlled will produce defects if its input material is out of specification — which is why the QM approach requires that material conditions be verified as part of the conditions framework, not addressed separately by procurement or supplier management.

The Method category covers process conditions (rotation speeds, feed rates, temperatures, pressures, flow rates), working methods (workpiece positioning, setup procedure, work sequence), and measurement techniques (measurement methods, application pressures, and adherence to measurement standards for measuring devices). Method conditions are frequently the least well-controlled of the four categories, because they involve human execution of procedures that may be imprecisely documented or inconsistently followed. The QM approach requires that method conditions be specified with the same precision as equipment conditions — specific parameter values, specific ranges, specific monitoring frequency.

The Men/Women category addresses something beyond skill: it addresses the motivational quality the OEC framework describes as morale — the genuine desire to produce good quality, the keenness to identify and eliminate minor flaws, and the inclination to report anything that appears slightly different from normal rather than normalising it and continuing. This is the human prerequisite for quality maintenance: operators who understand the connection between the conditions they maintain and the quality of the product they produce, and who treat minor anomalies as signals rather than noise.

The 4M framework matters to Quality Maintenance because it structures the investigation. When a team is tracing a chronic defect type to its root causes, the question is not "what might be causing this?" — a question that invites speculation. The question is "which 4M condition, in which process step, is the physical mechanism generating this defect?" — a question that structures the investigation toward conditions that can be identified, measured, and controlled.

The QA Matrix: Mapping Defects to the Processes That Generate Them

The QA matrix (Quality Assurance matrix) is the first of the two primary QM tools and the entry point for any systematic quality investigation. It is a structured table that maps quality defect modes to the individual process steps where they originate — making visible, in a single analytical frame, which processes are generating which defects, whether those processes are operating with known condition gaps, and where the most significant quality problems are concentrated in the production flow.

QA matrix table for a semiconductor CMP process. Columns represent five process steps: Slurry Preparation, Carrier/Head Setup, CMP Polishing, Post-CMP Clean, and Metrology. Rows represent five defect modes: Surface scratch (Rank A), Thickness non-uniformity (Rank A), Edge chipping (Rank B), Particle contamination (Rank A), and Surface roughness (Rank B). Cells are marked with filled double circle for process that definitively generates the defect, single circle for contributing process, triangle for forecast defect source, and dash for no relationship. Surface scratch is generated in Carrier/Head Setup and CMP Polishing. Thickness non-uniformity is generated in Slurry Preparation and CMP Polishing. Particle contamination is generated in Slurry Preparation and Post-CMP Clean. A legend and OEC copyright footer appear below the matrix.
The QA matrix maps each defect mode against the process steps that generate it, distinguishing between steps that definitively produce the defect (◎), steps that contribute to it as a factor (○), and steps forecast to generate it under certain conditions (△). Defect modes are ranked by severity — A for major, B for moderate — to focus investigative effort where quality risk is highest. In this CMP example, surface scratch and particle contamination are confirmed as originating in two distinct process steps each, directing the QM analysis to those specific equipment conditions. Source: Operational Excellence Consulting. Illustrative example.

Preparing a QA matrix requires working through the quality standards for the product in question and identifying all quality characteristics and defect modes that affect conformance. From there, a block flow diagram is constructed covering every process step — from major processes through to auxiliary processes — and defect occurrence data is surveyed and confirmed for each step. The team then classifies quality characteristics by the severity of their defect modes, prioritising those where defects cause major losses or functional failure over those where defects can be remedied. The resulting matrix shows, for each combination of process step and defect mode, whether the process is one that definitively generates that defect, contributes to it as a factor in defects occurring elsewhere, or is expected to generate it based on the physical analysis.
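The QA matrix structure described above can be represented as a simple mapping from defect modes to the process steps that generate them. This sketch uses the CMP example from the figure; the cell values are illustrative, and the text symbols "◎" (definitive generator), "○" (contributing factor), and "△" (forecast source) follow the article's own convention.

```python
# A minimal data-structure sketch of a QA matrix (illustrative CMP example).
# "◎" = definitively generates the defect, "○" = contributing factor,
# "△" = forecast source under certain conditions.

QA_MATRIX = {
    # defect mode: {process step: relationship}
    "Surface scratch":          {"Carrier/Head Setup": "◎", "CMP Polishing": "◎"},
    "Thickness non-uniformity": {"Slurry Preparation": "◎", "CMP Polishing": "◎"},
    "Particle contamination":   {"Slurry Preparation": "◎", "Post-CMP Clean": "◎"},
}

# Severity rank per defect mode: A = major, B = moderate.
SEVERITY = {"Surface scratch": "A",
            "Thickness non-uniformity": "A",
            "Particle contamination": "A"}

def definitive_generators(defect_mode):
    """Process steps confirmed as definitively generating the defect (◎)."""
    return [step for step, rel in QA_MATRIX.get(defect_mode, {}).items()
            if rel == "◎"]

def priority_defects(rank="A"):
    """Defect modes at the given severity rank, to focus the QM analysis."""
    return [d for d, r in SEVERITY.items() if r == rank]

print(definitive_generators("Surface scratch"))
```

The query `definitive_generators` mirrors how the matrix is actually used: for each priority defect mode, it returns the confirmed generating steps, which become the scope of the 4M condition investigation.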

The key points of QA matrix construction that the OEC framework emphasises are worth stating plainly, because they are where most attempts at QA matrix development lose rigour. First, classification of defect severity must be done by the quality assurance function — not by production, and not by the improvement team alone — because ranking defect modes requires the cross-process perspective and customer impact knowledge that the QA division holds. Second, all managers responsible must participate in the relativity analysis, contributing their process-specific knowledge to the mapping of individual processes to defect modes. A QA matrix constructed by a single function, or by a team that has completed the exercise in a workshop without going back to verify the relationships against actual defect data and shopfloor observation, will have gaps and inaccuracies that undermine every downstream analysis that depends on it.

The QA matrix is not a standalone tool. Its output — the set of process steps confirmed as defect generators, and the defect types associated with each — is the input that drives the next stage of the QM analysis: the investigation of the specific 4M conditions at each process step that are causally responsible for each confirmed defect type.

The QM Matrix: From Process Analysis to Condition Standards

If the QA matrix tells you where defects are being built into the product, the QM matrix (Quality Maintenance matrix) tells you what equipment conditions at each process step must be controlled to prevent them. It is the more analytically demanding of the two tools, and it is the one that most frequently becomes a static document rather than a living management instrument.

QM matrix table for the CMP Polishing process step in semiconductor wafer manufacturing. Four equipment parts are listed in the left column: Polish Head, Polishing Pad, Slurry Feed System, and Pad Conditioner. For each part, two management items are specified with their inspection method, standard value, inspection interval, and responsible party. Management items include down pressure (3.5 ± 0.3 psi, pressure gauge, per lot, Operator), rotation speed (85 ± 5 rpm, tachometer, per lot, Operator), pad thickness (≥ 1.8 mm, micrometer, daily, Maintenance), pad temperature (55 ± 5°C, IR sensor, per lot, Operator), slurry flow rate (200 ± 10 ml/min, flow meter, per lot, Operator), slurry pH (10.5 ± 0.3, pH meter, per batch, QA Engineer), conditioner force (25 ± 2 N, load cell, daily, Maintenance), and sweep speed (30 ± 3 rpm, tachometer, per lot, Operator). Four quality characteristic columns on the right show the impact rating of each condition: filled double circle for high impact, single circle for medium impact, triangle for low impact, and dash for no direct relationship. A legend and OEC copyright footer appear below the matrix.
The QM matrix translates the QA matrix findings into actionable condition standards for the CMP Polishing step — the process step confirmed as the primary defect generator for three of the five defect modes. For each equipment part, the matrix specifies the management item, the measurement method, the acceptable parameter range, the inspection frequency, and the responsible party, alongside the impact rating of each condition on each quality characteristic. Standard values are specific and measurable: down pressure at 3.5 ± 0.3 psi, slurry pH at 10.5 ± 0.3, pad thickness no less than 1.8 mm. These are the conditions that, if maintained within their stated ranges, prevent the associated defects from occurring. Source: Operational Excellence Consulting. Illustrative example.

The QM matrix is constructed by mapping the management items identified through P-M analysis (discussed below) against the quality characteristics from the QA matrix. For each management item, the team specifies the inspection method, the management threshold — the specific acceptable range for the condition parameter — the inspection cycle, and the responsible inspector. The degree of impact of each management item on each quality characteristic is rated: high impact, medium impact, or low impact. Items rated as high impact are the priority monitoring obligations of the quality condition control system; their drift toward the standard limit must be detected with enough lead time to take preventive action before the limit is breached.
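A QM matrix row bundles the management item with its inspection method, threshold, cycle, owner, and impact ratings, as described above. A minimal sketch using two rows from the illustrative CMP figure, with "high"/"medium"/"low" standing in for the ◎/○/△ impact symbols:

```python
# Sketch of QM matrix rows (illustrative CMP example). Each row carries
# the management item, inspection method, standard range, inspection
# interval, responsible party, and per-characteristic impact ratings.

QM_MATRIX = [
    {"part": "Polish Head", "item": "down pressure",
     "std": (3.2, 3.8), "unit": "psi",            # 3.5 +/- 0.3 psi
     "method": "pressure gauge", "interval": "per lot", "owner": "Operator",
     "impact": {"Surface scratch": "high",
                "Thickness non-uniformity": "high"}},
    {"part": "Slurry Feed System", "item": "slurry pH",
     "std": (10.2, 10.8), "unit": "pH",           # 10.5 +/- 0.3
     "method": "pH meter", "interval": "per batch", "owner": "QA Engineer",
     "impact": {"Particle contamination": "high",
                "Surface roughness": "medium"}},
]

def high_impact_items(quality_characteristic):
    """Management items rated high-impact on the given characteristic —
    the priority monitoring obligations of the condition control system."""
    return [row["item"] for row in QM_MATRIX
            if row["impact"].get(quality_characteristic) == "high"]

print(high_impact_items("Surface scratch"))   # ['down pressure']
```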

The practical consequence of operating without a Quality Maintenance system is visible in a benchmarking study I conducted in 2014 with three of the semiconductor manufacturers in this cluster's client roster — Analog Devices General Trias, STATS ChipPAC, and Amkor Technology Philippines. The study examined OEE performance and measurement systems across all three organisations. One of its findings was that all three companies excluded the Quality Rate component from their OEE calculation, on the grounds that quality losses were deemed to be outside their control. This was not negligence. These were well-run operations with sophisticated OEE analytics infrastructure — ADGT with automated systems across availability, performance, and quality tracking; STATS with a purpose-built analytics platform capable of drilling to hourly resolution. The exclusion was a rational response to an organisational reality: without a systematic method for connecting equipment conditions to quality outcomes, the quality rate sits outside the equipment management system. It is a product characteristic, not a controllable process variable. Quality Maintenance is precisely the methodology that changes that status — by establishing the condition-based understanding that makes quality loss as manageable as availability loss or performance loss. The same benchmarking report recommended P-M Analysis as a tool for improving OEE at all three sites. P-M Analysis is the analytical engine of Quality Maintenance. The recommendation and the methodology belong together.

What separates a well-constructed QM matrix from a superficial one is the specificity and verifiability of its condition standards. This is the most common failure mode in QM matrix development, and it deserves candid treatment. Vague condition standards — "keep clean," "check alignment," "inspect regularly" — are useless for QM purposes. They cannot be objectively verified, cannot be trended, and cannot generate a meaningful signal that a condition is drifting toward its defect-generating threshold. A condition standard has operational value only when it specifies a concrete parameter (pump feed pressure, shaft seal clearance, stuffing box water injection volume), a specific measurement method (which instrument, at which location, under which operating conditions), a specific acceptable range with defined upper and lower limits, and a specific inspection frequency calibrated to how quickly that condition can deteriorate to its threshold value under normal operating conditions.
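The difference between a vague and a verifiable condition standard is that the latter can be checked objectively against a reading. A minimal sketch; the instrument tag and the pump parameters are hypothetical:

```python
# Sketch of a verifiable condition standard: concrete parameter, named
# instrument and location, defined limits, and a calibrated check cycle.
# The instrument tag and values are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class ConditionStandard:
    parameter: str            # concrete parameter, e.g. pump feed pressure
    instrument: str           # which instrument, at which location
    lower: float              # defined lower limit
    upper: float              # defined upper limit
    check_every_hours: float  # frequency calibrated to deterioration rate

    def conforms(self, reading: float) -> bool:
        """Objective, trendable pass/fail — unlike 'check alignment'."""
        return self.lower <= reading <= self.upper

feed_pressure = ConditionStandard(
    parameter="pump feed pressure",
    instrument="gauge PG-01 at pump discharge",   # hypothetical tag
    lower=0.8, upper=1.2, check_every_hours=4.0,
)
print(feed_pressure.conforms(1.05))   # True
print(feed_pressure.conforms(1.35))   # False — condition gap, act now
```

A standard like "keep clean" admits no equivalent of `conforms()`; a standard expressed this way can be verified, trended, and escalated.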

The QM matrix also identifies Q Components — those specific equipment components where deviation from the specified standard will directly and reliably produce a quality defect. Q components are the highest-priority monitoring items in the QM system, and they should be physically identified on the equipment itself — not just recorded in the matrix — with visual indicators showing the standard value and the current reading. A Q component tag on a pressure gauge at the feed pump, showing a standard value of 1.0 ± 0.2 kg/cm² with the current reading visible at a glance, is the physical embodiment of the QM principle: condition control, not product inspection.

The monitoring gap — the failure mode that sits between identifying the right conditions to monitor and actually detecting drift before it produces a defect — is where many QM programmes that have reached the matrix construction stage fail in practice. Identifying that pump feed pressure is a Q component that must be maintained within a specified range is necessary but insufficient. The monitoring interval must be short enough, relative to the rate at which that condition can deteriorate, that the trend toward the threshold is detectable and actionable before the threshold is breached. If a condition can drift from its optimal value to its defect-generating threshold in four hours, monitoring it once per shift is not adequate condition control — it is an inspection system with a high probability of discovering the condition at or beyond its threshold rather than approaching it. Calibrating monitoring frequency to deterioration rate is one of the most technically demanding aspects of QM matrix finalisation, and it is one that requires both engineering knowledge of the equipment and operational knowledge of how the equipment behaves under real production conditions.
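The calibration logic in the paragraph above reduces to a simple calculation: the interval between checks must be a fraction of the fastest time the condition can drift from its optimal value to its threshold. The rule of thumb encoded below (at least three readings inside the drift window) is an illustrative assumption, not a JIPM-prescribed formula:

```python
# Back-of-envelope calibration of monitoring frequency to deterioration
# rate. The "three checks inside the drift window" heuristic is an
# illustrative assumption, not a prescribed standard.

def max_monitoring_interval(margin, worst_case_drift_rate, checks_in_window=3):
    """Longest acceptable interval (hours) between condition checks.

    margin: distance from optimal value to defect-generating threshold
    worst_case_drift_rate: fastest observed deterioration, units per hour
    checks_in_window: minimum readings wanted inside the drift window
    """
    hours_to_threshold = margin / worst_case_drift_rate
    return hours_to_threshold / checks_in_window

# The four-hour example from the text: margin of 0.2 units, worst-case
# drift of 0.05 units/hour -> a 4-hour window to threshold.
interval = max_monitoring_interval(margin=0.2, worst_case_drift_rate=0.05)
print(f"Check at least every {interval:.1f} h")   # 1.3 h, not once per shift
```

Run against the article's example, the calculation makes the failure concrete: with a four-hour drift window, a once-per-shift check samples the condition roughly twice per window at best, which is why it behaves as late-stage inspection rather than condition control.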

The static document failure mode is the other challenge that any honest assessment of QM practice must address. QM matrices constructed in workshops — even well-facilitated workshops with appropriate cross-functional participation — frequently become static documents because the organisational discipline to maintain them as living management tools does not exist. Three months after a QM matrix workshop, the matrix may be filed, the condition monitoring routines may have been quietly absorbed into or displaced by other inspection activities, and the production and maintenance functions may have returned to their normal operating pattern with no visible change in how quality conditions are managed. This happens not because the participants were uncommitted, but because no governance mechanism was established to ensure that the matrix is reviewed and updated as conditions change, that monitoring results are actually being trended and acted upon, and that the connection between matrix content and daily inspection practice is maintained. Governance is not a secondary consideration in Quality Maintenance. It is a precondition for the matrices to be anything other than documentation.


The Eight-Step Quality Maintenance Methodology


Quality Maintenance follows an eight-step structured methodology that maps onto the Plan-Do-Check-Act cycle. It is worth noting that JIPM-licensed implementations across different industries have presented this same methodology in seven or ten steps depending on how certain stages are grouped — the seven-step version, for example, combines the condition establishment and elimination steps into a single improvement arc and treats monitoring and improvement of checking methods as a separate maintenance arc — but the analytical sequence, the tools required at each stage, and the outputs expected are consistent across all versions. The eight-step structure used here is the most analytically granular and the most directly mapped to the JIPM self-assessment criteria.


Flowchart showing the eight steps of Quality Maintenance arranged vertically with a PDCA cycle overlay. Steps 1 through 3 (Verify Existing Situation, Investigate Processes where Defects Occur, Identify and Analyse 4M Conditions) are in the Plan phase. Step 4 (Plan Action to Correct Deficiencies) bridges Plan and Do. Steps 5 through 6 (Establish Conditions for Good Products, Eliminate Flaws in 4M Conditions) are in the Do phase. Step 7 (Consolidate Checking Methods) is in the Check phase. Step 8 (Determine Standard Values and Revise Standards) is in the Act phase. Each step includes a summary of key activities on the left and right panels.
The eight-step Quality Maintenance methodology follows the PDCA cycle — from verifying the existing situation and constructing the QA matrix (Steps 1–2), through 4M condition analysis and deficiency correction (Steps 3–4), to condition standard establishment, finalisation, consolidation of checking methods, and creation of the QM matrix with revised standards (Steps 5–8). Source: Operational Excellence Consulting.

The first four steps constitute the analytical investigation — the rigorous front-end work that determines whether everything that follows will be grounded in verified causal understanding or merely in well-intentioned assumption. Step 1 establishes the quality baseline. The team verifies quality standards and quality characteristics, creates a flow diagram of the individual processes involved in building quality into the product, and stratifies defect phenomena across the full range of relevant factors: by machine, by time of occurrence, by material lot, by operating condition, and by the people and work methods involved. This stratification is not a formality. It is the analysis that reveals whether a defect occurs consistently across all conditions — suggesting a process design issue — or concentrates under specific circumstances — suggesting a condition gap that varies across equipment instances or operating states. The distinction determines the entire direction of the investigation that follows.
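The stratification logic is simple to express. The sketch below uses invented defect-log records and stdlib tools only; the field layout is an assumption, not a prescribed format.

```python
from collections import Counter

# Hypothetical defect-log entries: (defect_type, machine, shift, material_lot)
defect_log = [
    ("scratch", "M-01", "night", "LOT-A"),
    ("scratch", "M-01", "night", "LOT-B"),
    ("scratch", "M-02", "day",   "LOT-A"),
    ("burr",    "M-01", "day",   "LOT-A"),
    ("scratch", "M-01", "night", "LOT-A"),
]

# Stratify one defect phenomenon across each factor in turn
scratches = [d for d in defect_log if d[0] == "scratch"]
by_machine = Counter(d[1] for d in scratches)
by_shift = Counter(d[2] for d in scratches)

print(by_machine)  # scratches concentrate on M-01 ...
print(by_shift)    # ... and on the night shift: a condition gap, not a design issue
```

A defect that spread evenly across machines, shifts, and lots would point the investigation the other way, toward process design.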


Step 2 constructs the QA Matrix to map investigative focus. The matrix identifies the exact process steps where each defect mode originates, making visible in a single analytical frame which processes are generating which defects, and ensuring that the investigation proceeds with the technical rigour the JIPM framework requires rather than with the intuition-led cause-attribution that characterises most conventional defect investigations.
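Structurally, a QA matrix is a mapping from defect modes to the process steps that originate them, readable in either direction. A miniature sketch with invented defect and process names:

```python
from collections import defaultdict

# A QA matrix in miniature: defect mode -> originating process steps.
# All names are illustrative.
qa_matrix = {
    "dimensional variation": {"turning", "final grinding"},
    "surface scratch":       {"handling", "final grinding"},
    "burr":                  {"deburring"},
}

# Invert it: which defect modes does each process step generate?
by_process = defaultdict(set)
for defect, steps in qa_matrix.items():
    for step in steps:
        by_process[step].add(defect)

print(sorted(by_process["final grinding"]))  # ['dimensional variation', 'surface scratch']
```

The inverted view is what focuses Step 3: a process step appearing against several defect modes is a priority for 4M condition analysis.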


Step 3 is where the analytical rigour of Quality Maintenance is most visibly tested. Using P-M Analysis as the primary tool, the team documents the physical phenomenon behind each defect type and traces it through its generative mechanism — whether rooted in mechanics, fluid dynamics, material behaviour, or process chemistry — to the 4M conditions that produce it. What matters here is the quality of the physical analysis: teams that trace the defect phenomenon to its generative mechanism through first principles identify root conditions that their countermeasures can reliably address. Teams that conduct the analysis at the level of brainstormed possibilities rather than physical investigation identify conditions that are plausible but not necessarily causal, and they find that defects persist or resurface after countermeasures have been implemented. The investigation results at this stage include actual measured values from the shopfloor survey, compared against the standard value for each condition, making every identified deviation a confirmed finding rather than a hypothesis.


Step 4 translates confirmed findings into action. A deficiencies chart is developed to document every gap found in the 4M conditions, and countermeasures are planned to address each one — either restoring equipment to its basic condition where deterioration is the cause, or improving equipment that cannot meet the required condition standard in its current state. Separating the planning of action from its execution is not bureaucratic caution; it is the stage gate that prevents teams from implementing the first plausible fix before the full causal picture has been established.


Implementation and optimisation occupy Steps 5 and 6. Step 5 addresses situations where the conditions for building in quality are still unclear after the initial investigation — where the causal relationship between a processing condition and a quality characteristic has not yet been established with sufficient confidence to set a standard. The team returns to processing principles and equipment mechanisms, establishes defect causes with greater certainty, and optimises settings and setup procedures. Design of experiments may be used here to confirm the quantitative relationship between a condition parameter and a quality outcome before a standard is set. Step 6 exposes and eliminates any remaining flaws in the 4M conditions and finalises the specific condition set that consistently achieves good product. Together, these two steps close the gap between a theoretical understanding of which conditions matter and a verified, shopfloor-confirmed set of conditions that the monitoring system can now be built around.


Step 7 — consolidating checking methods — is technically important and frequently underinvested, because it is the step that determines whether the QM monitoring system remains operationally sustainable or becomes an ever-growing burden that eventually collapses under its own weight. All checkpoints are classified into three categories: static precision conditions, which are dimensional and positional relationships that do not change under operating loads; dynamic precision conditions, which reflect accuracy under operating conditions and are typically monitored through vibration measurement; and processing conditions, which are operating parameters such as temperature, pressure, speed, and flow rate. The consolidation principle is to standardise checks at the highest-order condition possible — the condition that best predicts the behaviour of subordinate conditions — rather than monitoring every contributing component separately. Vibration measurement is particularly valuable for dynamic precision consolidation because it integrates the effects of multiple underlying conditions (bearing wear, shaft imbalance, structural looseness) into a single measurable signal. The goal is to reduce the total number of monitoring activities required while maintaining or improving the sensitivity and reliability of the condition monitoring system. Improvements to make checks quicker, simpler, and executable by operators without specialist equipment are implemented at this stage.


Step 8 institutionalises the gains. Standard values are determined for all consolidated checks, and the QM Matrix is created as the formal record of those standards — specifying for each management item the inspection method, the acceptable parameter range, the inspection frequency, and the responsible inspector. Material, checking, and work standards are revised to incorporate the established condition controls. Q components are identified and physically marked on the equipment. The trend monitoring and preventive action system that makes the QM programme ongoing is established. And the education dimension that the JIPM framework makes explicit is addressed: operators must understand not just what to check, but why — the physical mechanism connecting the condition they are monitoring to the defect it will produce if it drifts beyond its threshold, and the preventive action required when the trend indicates the standard is being approached. This understanding is what transforms a checking routine from a compliance activity into genuine condition management.
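One way to picture a QM matrix row is as a small record carrying exactly the fields Step 8 specifies. The field names below are illustrative, not a prescribed JIPM schema, and the spindle-runout entry is an invented example.

```python
from dataclasses import dataclass

@dataclass
class QMMatrixItem:
    """One row of a QM matrix. Field names are illustrative only."""
    management_item: str    # the Q component condition being controlled
    inspection_method: str  # how the condition is measured
    lower_limit: float      # acceptable parameter range
    upper_limit: float
    frequency: str          # e.g. "daily", "per shift"
    inspector: str          # responsible function or role

    def in_range(self, measured: float) -> bool:
        """Is a measured value within the acceptable parameter range?"""
        return self.lower_limit <= measured <= self.upper_limit

item = QMMatrixItem("Spindle runout (mm)", "dial indicator",
                    0.0, 0.02, "daily", "AM operator")
print(item.in_range(0.015))  # True
```

The point of the structure is that every row answers the same four questions: what is checked, how, how often, and by whom. A row that cannot answer all four is not yet an operable standard.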


The Analytical Toolkit for Quality Maintenance

The QM toolkit contains a specific set of tools that work together in the analytical sequence, and understanding when each is appropriate — and where each fails when applied poorly — is part of what it means to practise Quality Maintenance rather than to describe it.

P-M Analysis is the primary analytical engine of Quality Maintenance investigation. The two letters represent both "Phenomenon and Physical" (the starting point for analysis) and "Mechanism and 4Ms" (the causal framework through which that phenomenon is investigated). The essence of P-M analysis is systematic completeness: every physical mechanism through which the phenomenon could occur is identified; every 4M condition related to each mechanism is enumerated; every condition is investigated against its standard value; every deviation is treated as a potential causal factor. This level of thoroughness is what allows P-M analysis to succeed where conventional root cause tools have failed — by preventing the investigation from stopping at a convenient explanation rather than continuing to the actual cause. Applied poorly, P-M analysis produces comprehensive lists of possible factors without the physical analysis that determines which of them are actually causally connected to the defect phenomenon. Teams that skip or abbreviate the physical analysis step — the precise, physical-principles-based explanation of how the phenomenon is generated — tend to generate a factor list that is broad but not analytically grounded, and their investigations frequently fail to identify the true root conditions.

5 Why analysis plays a complementary role in Quality Maintenance, particularly for investigating the management and organisational dimensions of why a condition gap exists rather than the physical mechanism of how the defect is generated. Why is the condition below its standard? Because the inspection was not performed. Why was the inspection not performed? Because the inspection standard was not incorporated into the AM checklist. Why was it not incorporated? Because the QM matrix was completed but its condition items were never translated into inspection tasks for the operations department. This chain of organisational causation is not what P-M analysis is designed to trace — but it is often the reason that technically correct condition standards are not being maintained in practice.

Cause-and-effect (fishbone) analysis provides the initial structuring tool for defect investigation — organising the team's understanding of potential 4M causes into a visual framework before the more rigorous P-M analysis and 5 Why investigation proceed. Its value is in breadth rather than depth: it ensures that all 4M categories are considered, that the team does not prematurely converge on the most obvious causal factor, and that the subsequent investigation covers the full causal landscape rather than the subset that first came to mind.

Statistical Process Control (SPC) functions as the primary condition monitoring instrument within an operating QM system. Once condition standards have been established and verified, SPC charts applied to the critical condition parameters provide the trend monitoring capability that makes preventive action possible: the chart shows not just whether the current measurement is within the standard range, but whether the parameter is trending toward its limit — allowing intervention before the threshold is breached. The process capability indices Cp and Cpk serve a dual role in QM. As diagnostic inputs, they reveal whether the current process — before QM conditions are established — has sufficient inherent capability to meet the quality standard. A Cpk below 1.0 for a quality characteristic means the process will produce defects even under normal operation, which identifies it as a priority for QM investigation. As verification outputs, Cp and Cpk confirm that the condition standards established through the eight-step methodology have produced a process that is genuinely capable of consistently meeting the quality standard — not merely one that is meeting it on average with significant variation on either side.
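The Cp and Cpk calculations themselves are short. The sketch below uses an overall sample standard deviation rather than a subgroup-based estimate, so it is illustrative rather than production-grade SPC; the bore-diameter readings and specification limits are invented.

```python
import statistics

def cp_cpk(samples, lsl, usl):
    """Capability indices from a sample. Cp compares the specification width
    to six sigma; Cpk penalises an off-centre mean. Illustrative only:
    production SPC would typically estimate sigma from rational subgroups."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical bore-diameter readings against a 9.7-10.3 mm specification
cp, cpk = cp_cpk([9.9, 10.0, 10.1, 10.0, 10.0], lsl=9.7, usl=10.3)
print(round(cp, 2), round(cpk, 2))  # 1.41 1.41 -- centred, capable process
```

When Cp is healthy but Cpk is not, the process is capable but off-centre, which is usually a setting problem rather than a variation problem, and that distinction changes which 4M conditions the QM investigation targets.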

FMEA (Failure Mode and Effects Analysis) supports QM in two specific contexts. During Step 5 of the eight-step methodology, FMEA is used to assess the risk profile of identified equipment conditions — ranking them by their severity of impact on quality, probability of occurrence, and detectability with current monitoring methods. This produces a prioritisation of condition monitoring effort: the conditions with the highest risk priority numbers are the Q components that demand the most rigorous and frequent monitoring. FMEA is also valuable during condition standard validation, assessing whether the proposed condition controls — if maintained within their specified ranges — actually reduce the risk profile to an acceptable level, or whether additional controls are needed.
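The RPN ranking that drives this prioritisation is a single multiplication per condition. The entries and 1-to-10 scores below are invented for illustration; real rankings come from the cross-functional team's severity, occurrence, and detectability assessments.

```python
# Hypothetical condition entries: (name, severity, occurrence, detectability),
# each scored on a 1-10 scale by the QM team.
conditions = [
    ("Pump feed pressure",    8, 5, 3),
    ("Spindle runout",        9, 4, 6),
    ("Coolant concentration", 5, 6, 4),
]

# RPN = S x O x D; the highest-RPN conditions are the Q components that
# demand the most rigorous and frequent monitoring.
ranked = sorted(conditions, key=lambda c: c[1] * c[2] * c[3], reverse=True)
for name, s, o, d in ranked:
    print(f"{name}: RPN = {s * o * d}")
```

Note how a hard-to-detect condition (detectability 6) can outrank a more frequent but easily caught one; that is precisely the behaviour that makes RPN useful for allocating monitoring effort.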

The QM Team: Cross-Functional Participation and Why It Matters

Quality Maintenance is the TPM pillar most frequently treated as a quality department responsibility rather than a cross-functional equipment management activity. This treatment produces an organisational arrangement in which quality engineers construct and own the QA and QM matrices, production operators are recipients of the resulting monitoring requirements rather than contributors to their development, and the maintenance function is consulted periodically but not integrated as a core participant in the ongoing condition management system. The result, in most cases, is matrices that are technically incomplete, monitoring routines that are disconnected from daily inspection practice, and a quality management system that looks like Quality Maintenance but operates like a more structured version of the conventional detection-based approach.

The reason QM requires genuine cross-functional integration is not procedural — it is analytical. Constructing an accurate, causally grounded QM matrix requires two kinds of knowledge that reside in different parts of the organisation and are rarely combined at the depth QM demands. Quality engineering holds the defect data — the accumulated record of what defects occur, where, under what conditions, and with what frequency. Maintenance holds the equipment condition knowledge — the understanding of how each equipment component behaves over time, what its failure modes are, how its deterioration manifests in measurable parameters, and what precision it is actually capable of delivering in its current state. Neither function, working alone, can construct a causally valid QM matrix. The quality engineer who does not understand how bearing wear manifests in spindle runout, and how spindle runout produces dimensional variation in the finished part, cannot trace a dimensional defect to its equipment condition root cause with confidence. The maintenance technician who does not understand which quality characteristic is affected by spindle runout, under which product specifications the effect becomes a defect, and what the customer impact is when the specification is breached cannot assess the quality significance of a condition deviation they observe during inspection.

The relationship between QM monitoring activities and the daily inspection routines of Autonomous Maintenance teams is the organisational interface that makes QM operational on a day-to-day basis. AM teams are the primary executors of the condition monitoring that the QM matrix specifies. The AM Step 4 general inspection activities explicitly incorporate equipment-competent operators trained in the mechanisms, functions, and precision requirements of their equipment — which is precisely the competence required to monitor quality-critical conditions effectively. When the QM matrix is completed and its condition standards are finalised, those standards are incorporated into AM inspection checklists, making Q components part of the daily inspection routine rather than a periodic check conducted by quality engineering. The AM team becomes the first line of condition monitoring for quality-critical parameters; quality engineering becomes the analytical function that interprets trends and initiates action when conditions approach their thresholds.

The organisational dynamics that determine whether a QM programme produces genuine condition control or impressive matrices that are never operationalised are fundamentally the same dynamics described in the Focused Improvement article: management visibility, resource allocation, and governance. QM pillar leadership — typically a quality engineering manager working alongside the TPM Promotion Office — must have sufficient authority to convene cross-functional teams, to require that QM conditions be incorporated into AM and PM standards, and to escalate when monitoring results indicate that a Q component is approaching its threshold. Without this authority, the QM programme will consistently lose its cross-functional character as each function reverts to its primary operational priorities.

QM and the Other TPM Pillars: Concrete Interdependencies

Quality Maintenance does not produce results in isolation from the other TPM pillars. Its interdependencies are directional and specific — and understanding them in concrete terms is the difference between a QM programme that contributes to the plant's quality performance and one that coexists with other TPM activities without meaningfully influencing them.

The relationship between Quality Maintenance and Focused Improvement is the most operationally active of the pillar interdependencies. The QA/QM matrix functions as the FI programme's quality loss backlog: it identifies the defect types and process steps where condition control is weakest, translating that identification into specific FI project themes. When the QM matrix reveals that a particular process step is generating chronic surface defects because a condition affecting surface finish precision is inadequately controlled, that finding is an FI theme. The FI project team investigates the specific equipment conditions causing the control gap, develops countermeasures through the eight-step Kobetsu Kaizen methodology, implements and verifies them, and then feeds the confirmed results back into the QM matrix as a revised condition standard. Over successive FI-QM cycles, the plant progressively eliminates the quality loss themes that the matrix identifies as highest priority — closing the gap between its current quality rate and the zero-defect target one verified condition standard at a time. The FI team's P-M analysis capability is also directly applicable to QM investigation: when a QM team is struggling to trace a chronic defect to its physical root condition, escalating to an FI P-M analysis project brings the appropriate analytical depth to the investigation.

The relationship between Quality Maintenance and Autonomous Maintenance flows in both directions. AM creates the foundational equipment condition on which QM analysis depends — the OEC framework is explicit that forced deterioration must be eliminated through AM before QM condition standards can be reliably established, precisely because deteriorated equipment cannot be accurately characterised. A machine that is in a state of accelerated deterioration due to contamination, inadequate lubrication, and loose fasteners cannot be analysed to identify its quality-critical conditions — because the conditions you observe are not its conditions; they are its conditions plus the effects of neglect. AM Steps 1 through 3 restore the baseline from which quality condition analysis can proceed. In the other direction, QM feeds quality-critical condition items back into AM inspection checklists — ensuring that the operators who are closest to the equipment daily are checking the conditions that matter most for product quality, and that their inspection activity has a specific, explained connection to the quality outcomes it is protecting.

The relationship between Quality Maintenance and Planned Maintenance is centred on the integration of quality-critical conditions into the maintenance strategy. Q components identified through QM analysis are incorporated into PM inspection routes and maintenance schedules, with intervals calibrated to the deterioration rate of each quality-critical parameter rather than to generic component lifecycle estimates. When PM condition monitoring detects that a Q component is deteriorating toward its quality threshold, the maintenance system triggers restoration before the threshold is breached — embodying the preventive principle at the core of Quality Maintenance. Conversely, PM records of condition measurements over time provide the trend data that QM trend analysis requires: they are the empirical basis for understanding how quickly each quality-critical condition deteriorates under normal operating loads, which is the information that determines whether monitoring intervals are adequate for preventive action.

Sustaining the QM Programme: The Organisational Discipline That Separates a Document from a System

The most important thing I can say about sustaining a Quality Maintenance programme is also the most uncomfortable: the majority of plants that have constructed a QA/QM matrix are not using it as a live management tool. They have conducted the workshop, produced the matrices, incorporated the immediate findings into operating procedures, and moved on. The matrices exist. They are accessible. They are not being updated, they are not driving monitoring activity, and the connection between their content and the daily management of equipment conditions has eroded. The result is a plant that can produce a QA/QM matrix when assessed but cannot demonstrate that its condition monitoring system is actively preventing quality defects that it would otherwise produce.

This failure mode is not unique to Quality Maintenance — the Focused Improvement article addressed the equivalent failure mode for FI project standardisation — but it is particularly damaging in QM because the entire value of the QM approach depends on the matrices being living documents that are updated as equipment changes, as new defect types emerge, and as FI projects verify and improve condition standards. A static QM matrix is not merely incomplete — it is actively misleading, because it implies a level of condition control that the plant is not actually exercising.

The management systems required to prevent this degradation are specific. The QM matrix must have an explicit update cycle — at minimum an annual review, and a triggered review whenever a new defect type emerges, whenever a piece of equipment is modified, or whenever an FI project produces a verified condition standard that supersedes the existing matrix entry. The update review must be cross-functional: it is not adequate for quality engineering to review the matrix in isolation. The monitoring results from AM inspection routines must be aggregated and trended by the QM pillar team at intervals short enough to identify drift before it generates defects. When Q component measurements are trending toward their threshold values, an escalation process must exist that moves from operator reporting to technical investigation to preventive maintenance action without losing time to organisational ambiguity about who is responsible.
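Trend-toward-threshold detection can be as simple as a least-squares extrapolation over recent readings. The sketch below assumes linear drift and uses invented data; the function name and the stdlib-only fit are illustrative, not a prescribed method.

```python
def hours_to_threshold(times_hr, readings, threshold):
    """Fit a least-squares line through recent condition readings and
    extrapolate when the trend crosses the threshold. Returns the hours
    remaining after the last reading, or None if the parameter is not
    drifting toward the limit. Assumes roughly linear drift."""
    n = len(times_hr)
    t_bar = sum(times_hr) / n
    y_bar = sum(readings) / n
    slope = (sum((t - t_bar) * (y - y_bar) for t, y in zip(times_hr, readings))
             / sum((t - t_bar) ** 2 for t in times_hr))
    if slope <= 0:
        return None  # stable or improving: no escalation needed
    intercept = y_bar - slope * t_bar
    return (threshold - intercept) / slope - times_hr[-1]

# Four per-shift readings drifting toward an upper limit of 10.6
remaining = hours_to_threshold([0, 8, 16, 24], [10.0, 10.1, 10.2, 10.3], 10.6)
print(remaining)  # 24.0 -- roughly a day to act before the threshold is breached
```

The escalation rule then becomes concrete: when the projected time to threshold falls below the lead time needed to plan and execute restoration, the preventive maintenance action is triggered.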

The JIPM self-assessment criteria for the Quality Maintenance pillar provide the clearest available map of what a progression from nominal compliance to genuine quality maintenance excellence looks like. At the entry level of the JIPM assessment, assessors are looking for evidence that defect occurrence data is being accumulated and analysed — that defects are categorised by phenomenon, that countermeasures are being implemented, and that the organisation understands the relationship between failure modes and production processes. A plant that cannot demonstrate this categorisation and countermeasure activity has not yet begun Quality Maintenance in any meaningful sense, regardless of whether it has a quality management system in place. At the intermediate level, the JIPM criteria require a functioning QA matrix that clarifies the relationship between failure modes and production processes, a QM matrix that is being used to maintain the conditions for good products, and Q component configurations that allow operators to confirm critical conditions daily. The progression from this level to the advanced level is marked by the integration of Q component management into daily AM inspection routines, the demonstrated reduction of defect rates attributable to the condition control system, and — at the highest level — activities that exceed the Q component management standard, including online condition monitoring of critical parameters, SPC applied to quality-critical process variables, and process capability analysis confirming that the Cpk for critical quality characteristics is above the level at which the process can reliably maintain zero defects.

Most plants seeking the JIPM TPM Excellence Award will find, during their self-assessment, that they are at the transition between the first and second levels of the QM maturity scale. They are accumulating defect data, categorising it, and taking corrective actions — but the QA matrix has not been constructed or is incomplete, or the QM matrix exists but the condition monitoring system it specifies is not fully operational, or the Q components are identified in the matrix but not physically marked on the equipment and not incorporated into AM inspection checklists. Closing these gaps is not intellectually difficult. It requires sustained attention, cross-functional collaboration, and — most importantly — a QM pillar leader who treats the matrices as management tools to be actively maintained rather than documentation deliverables to be produced once.

The financial logic for building and sustaining a genuine QM programme is straightforward. In-process defect rates, cost of quality (scrap, rework, inspection, test, warranty), and the share of management bandwidth consumed by quality problem investigation and customer complaint handling are all expressions of the same underlying gap: the gap between what the production system consistently produces and what the quality standard requires. Quality Maintenance, implemented with rigour and sustained with appropriate governance, closes that gap not by increasing inspection intensity — which adds cost without improving the process — but by eliminating the equipment conditions that generate defects in the first place. The plants I have worked with that have implemented QM seriously — Analog Devices, Infineon Technologies, and STATS ChipPAC, among others — have not merely reduced their defect rates. They have changed the character of their quality management activity: fewer firefighting events, more structured condition monitoring, and a visible, documented connection between the equipment conditions they maintain and the quality outcomes they achieve.

From Detection to Prevention: The Discipline That Makes the Difference

Every plant that manages quality through end-of-line inspection has the same structure: a production process that generates defects at some rate, followed by an inspection process that catches some of them, followed by a rework or scrap process that handles the ones that are caught, followed by a warranty or complaint process that handles the ones that are not. The cost of this structure is not just the direct scrap and rework. It is the management overhead of investigation and corrective action, the customer relationship damage from escapes, the engineering time spent on incident reviews rather than improvement projects, and the organisational attention habitually directed toward the consequences of defects rather than toward their causes.

Moving from this detection-based structure to a prevention-based one requires exactly what Quality Maintenance provides: a systematic, analytical understanding of the causal relationship between equipment conditions and quality outcomes, formalised in living management tools, integrated into daily inspection practice through Autonomous Maintenance, connected to equipment improvement through Focused Improvement, and sustained by maintenance strategy through Planned Maintenance. The transition is not quick, and it is not comfortable. It requires quality engineers and maintenance engineers to work together at a depth they rarely do. It requires production operators to take ownership of quality-critical equipment conditions in addition to their production responsibilities. It requires managers to invest in the analytical work of building and maintaining the QM matrix rather than accepting defect rates as features of the landscape to be managed reactively.

The discipline that makes the difference between a plant that has done QM work and a plant that has a QM programme is the same discipline that distinguishes a genuine Focused Improvement system from a collection of improvement events: connection, continuity, and governance. The QA and QM matrices must be connected to daily inspection routines. The monitoring results must feed continuously into the trend analysis that triggers preventive action. The FI-QM cycle must progressively eliminate quality loss themes rather than documenting them. And the management review process must treat QM programme health — measured by both activity indicators and quality results — as a standing agenda item rather than an occasional report.

As Kiichiro Toyoda observed, every defect is a treasure if the company can uncover its cause and work to prevent it across the organisation. Quality Maintenance is the structured discipline for uncovering those causes, setting them up as conditions to be controlled rather than defects to be detected, and building the organisational capability to sustain that control over time. That is the work. Begin it deliberately.


About the Author



Allan Ung, Founder & Principal Consultant, Operational Excellence Consulting (Singapore)

Allan Ung is the Founder and Principal Consultant of Operational Excellence Consulting, a Singapore-based management training and consulting firm established in 2009. With over 30 years of experience leading operational excellence and quality transformation in manufacturing-intensive environments, Allan's expertise spans Lean Thinking, Total Quality Management (TQM), TPM, TWI, ISO systems, and structured problem solving.


He is a Certified Management Consultant (CMC, Japan), Lean Six Sigma Black Belt, JIPM-certified TPM Instructor (Japan Institute of Plant Maintenance), TWI Master Trainer, ISO 9001 Lead Auditor, and former Singapore Quality Award National Assessor.


During his tenure with Singapore's National Productivity Board (now Enterprise Singapore), Allan pioneered Cost of Quality and Total Quality Process initiatives that enabled companies to reduce quality costs by up to 50 percent. In senior regional and global roles at IBM, Microsoft, and Underwriters Laboratories, he led Lean deployment, quality system strengthening, and cross-border operational transformation.


Allan has facilitated TPM, OEE and Lean programmes for organisations including Temic Automotive (Continental), Analog Devices, Amkor Technology, STATS ChipPAC, Infineon Technologies, Panasonic, Micron, Lam Research, Tokyo Electron, Dorma, and NEC. He holds a Bachelor of Engineering (Mechanical Engineering) from the National University of Singapore and completed advanced consultancy training in Japan as a Colombo Plan scholar.


His philosophy: "Manufacturing excellence is achieved through disciplined systems, capable leadership, and sustained execution on the shopfloor."


His practitioner-led toolkits have been used by managers and organisations across Asia, Europe, and North America to build Design Thinking and Lean capability and drive organisational improvement.


For enquiries about Quality Maintenance, TPM, or operational excellence consulting, visit www.oeconsulting.com.sg or contact us directly through the OEC website.


Related Articles in the TPM Practitioner Guide Series


  • OEC TPM Maturity Diagnostic: A Practitioner Guide — Bridges implementation gaps with a four-level maturity model based directly on JIPM award checklists, translated into practical descriptors that make the assessment entirely actionable for practitioners.


Build TPM Capability in Your Organisation


At Operational Excellence Consulting, I deliver customised TPM and OEE workshops and implementation programmes for manufacturing organisations across Singapore and the Asia-Pacific region, from foundational two-day workshops to multi-year TPM implementation support, facilitated by a JIPM-certified TPM Instructor.


👉 Explore our TPM training courses and practitioner-led resources:


Operational Excellence Consulting offers a full catalogue of facilitation-ready training presentations and practitioner toolkits covering Lean, Design Thinking, and Operational Excellence. These resources are developed from real workshops and transformation projects, helping leaders and teams embed proven frameworks, strengthen capability, and achieve sustainable improvement.


👉 Explore the full library at: www.oeconsulting.com.sg/training-presentations




© Operational Excellence Consulting. All rights reserved.
