
OEC TPM Maturity Diagnostic: A Practitioner's Guide to Benchmarking Your TPM Programme Across All Eight Pillars


By Allan Ung | Founder & Principal Consultant, Operational Excellence Consulting (OEC)

Published: 16 May 2026


[Figure: Cross-functional peer audit team using the OEC TPM Maturity Diagnostic on the shop floor]
Operational Excellence in Action: A cross-functional peer audit team utilizes the OEC TPM Maturity Diagnostic tool directly on the shop floor to bridge the gap between daily operations and JIPM Excellence standards. By assessing real-world equipment conditions and operator competencies across all eight pillars, the team moves away from subjective "gut-feel" metrics and transitions toward a data-driven, standardized maturity roadmap.

Allan Ung is the Founder and Principal Consultant of Operational Excellence Consulting (OEC), a Singapore-based management consultancy established in 2009. With over 30 years of experience leading operational excellence and quality transformation across manufacturing, technology, and global operations — including senior roles at IBM, Microsoft, and Underwriters Laboratories (UL) across Asia-Pacific — Allan brings deep shopfloor and strategic expertise to every engagement. He holds the following qualifications and recognitions: Certified Management Consultant (CMC, Japan), Certified Lean Six Sigma Black Belt, JIPM-certified TPM Instructor, TWI Master Trainer, and former National Examiner for the Singapore Business Excellence Award. Allan has designed and facilitated TPM implementations and operational excellence programmes for organisations across semiconductor, automotive, industrial manufacturing, logistics, and public sectors. His clients include Temic Automotive (Continental), Analog Devices, Amkor Technology, STATS ChipPAC, Panasonic, Micron, Lam Research, Infineon Technologies, Dorma, and Tokyo Electron, as well as Singapore government ministries and statutory boards.


There is a question that surfaces in every TPM programme I have worked with, usually around the twelve-month mark, sometimes earlier if the initial enthusiasm has been particularly strong. The team has been working through the AM steps. The FI pipeline has projects running. The activity boards look busy. Someone in management asks how the programme is going, and the TPM coordinator gives a positive answer, because by the visible metrics available — number of projects, step completion rates, OPL count — things appear to be progressing.


And then the organisation decides to pursue a JIPM TPM Excellence Award, and the pre-assessment visit happens, and the gap between the programme as it appeared internally and the programme as it actually is becomes visible in a way that is deeply uncomfortable for everyone involved.


I have been present at both ends of that experience — as a TPM coach preparing client programmes for formal recognition, and as someone who has had to facilitate the difficult conversation that follows a pre-assessment that revealed the gap between self-perception and reality. What I have observed consistently, across semiconductor fabrication plants in Malaysia, automotive component manufacturers in the Philippines, industrial equipment producers in Germany, and precision manufacturers in Singapore, is that the problem is almost never a lack of effort. It is a lack of a structured diagnostic framework capable of revealing what is actually happening across all eight pillars simultaneously, at a level of specificity that makes improvement priorities obvious rather than arguable.


That is why I developed the OEC TPM Maturity Diagnostic.


This article is a practitioner's guide to the tool — what it is, how it was built, how to use it, what the scores reveal, and how to act on them. If you are looking for a guide to navigating the JIPM TPM Excellence Award application process itself — the award hierarchy, the Self-Checklist scoring criteria, what formal assessment visits actually evaluate — I have addressed those questions separately in TPM Self-Assessment and the TPM Excellence Award: A Practitioner's Guide. The two articles are designed as complementary reading: the JIPM self-assessment guide tells you what the destination looks like and how to apply; this article gives you the diagnostic instrument to understand where you are starting from, and how far you have to go.


The Problem That Existing TPM Diagnostic Tools Do Not Solve


Before describing what the OEC TPM Maturity Diagnostic is, it is worth being precise about what it addresses — because the problem it was designed to solve is specific, and understanding the specificity matters for understanding how to use it.


The JIPM Self-Checklist — the official self-assessment instrument for the Award for TPM Excellence — is an excellent tool for a particular purpose: determining whether a plant is eligible to apply for formal JIPM recognition, and identifying which required criteria need to be strengthened before application. It covers the right categories, asks the right questions, and provides a scoring scale (0–5 per item) that is well-grounded in observable activity levels. I recommend it to every client organisation that is seriously considering the award pathway, and I have discussed it in detail in the companion article referenced above.


But it has a structural characteristic that limits its usefulness as a cross-pillar diagnostic instrument for the majority of TPM programmes, particularly those in their first two or three years of implementation. The Self-Checklist is award-entry-oriented — it is calibrated to describe the boundary between "not yet eligible" and "eligible to apply." The granularity it provides at the lower end of the maturity spectrum is limited, because that end of the spectrum is not where its primary purpose lies. A plant scoring 1.5 average on the Self-Checklist and a plant scoring 2.3 average are both well below the eligibility threshold of 2.5, but they are in very different situations that require very different interventions, and the Self-Checklist does not illuminate that difference with precision.


There is also a pillar consistency problem. Autonomous Maintenance benefits from the JIPM seven-step framework, which gives AM a built-in progression model that assessors and practitioners can reference with shared understanding. Step 3 means something specific. Step 4 means something specific. When someone says their AM programme is at Step 3 on 85% of target equipment, a calibrated practitioner knows what that implies about cleaning standards, lubrication, operator training, and visual management. No equivalent step framework exists in the same explicit form for the other seven pillars. This creates an inherent inconsistency in how different pillars are understood and assessed internally: AM benefits from structural clarity that FI, PM, QM, ET, SHE, EM, and OI simply do not have in the same codified form.


The OEC TPM Maturity Diagnostic was built to address both of these gaps. It provides a four-level maturity model — Foundational, Developing, Defined, World-Class — across five dimensions for each of the eight pillars, giving practitioners a consistent vocabulary and a consistent benchmark structure across the entire programme simultaneously. It draws its content directly from Checklist C of the JIPM TPM Excellence Award and the JIPM Self-Checklist, translated into practitioner-level descriptors that make the assessment operational — not just scoreable, but actionable.


The Diagnostic Model: Architecture and Design Choices


The OEC TPM Maturity Diagnostic covers eight pillars: Focused Improvement (FI), Autonomous Maintenance (AM), Planned Maintenance (PM), Quality Maintenance (QM), Education and Training (ET), Safety, Health and Environment (SHE), Early Management (EM), and Office/Administrative Improvement (OI).


For each pillar, the diagnostic defines five dimensions — the most important sub-domains of that pillar's activity. Each dimension is assessed across four maturity levels. The structure gives a maximum score of 20 per pillar (five dimensions, each scored from 1 to 4) and a maximum total score of 160 across all eight pillars.


[Figure: Architectural overview of the OEC TPM Maturity Diagnostic model]
The Blueprint for Plant-Wide Calibration: An architectural overview of the OEC TPM Maturity Diagnostic model. By applying a standardized 5×4 evaluation matrix uniformly across all eight pillars, the framework gives operations leaders an aggregated, consistent line-of-sight into their operational realities and sharply reduces the subjectivity of plant-level benchmarking. Source: Operational Excellence Consulting.

The choice of four levels rather than the JIPM's 0–5 scale is deliberate. The 0–5 scale is excellent for calibrated experts scoring against well-understood criteria, and for that purpose it is superior. But four levels — each with a full, descriptive paragraph explaining what the level looks like in practice — are better suited to the cross-functional calibration exercise that a plant-level diagnostic workshop requires. A pillar leader, a plant manager, and a front-line supervisor can all read a level-3 descriptor and discuss whether it accurately characterises their situation. A numerical score of 3.5 on a 0–5 scale does not facilitate that conversation in the same way.


The four levels carry specific meaning that I designed to create genuine distinctions rather than merely gradations:


Level 1 — Foundational describes a state in which basic reactive practices exist but systematic infrastructure does not. Data is not consistently collected or structured. Standards either do not exist or exist on paper without being followed. Management review of pillar metrics does not occur at a cadence that makes the metrics actionable. A programme at Level 1 in a dimension is not failing — it is at an early, honest stage that requires specific infrastructure investments before analytical sophistication makes any sense.


Level 2 — Developing describes a state in which a framework exists and is being applied, but key structural gaps remain. The methodology is present but not yet disciplined. Results are visible in some areas but not consistently across the programme. The pillar is progressing but would not survive close external scrutiny without revealing significant inconsistencies.


Level 3 — Defined describes a state that I associate with genuine readiness for JIPM assessment consideration. A Defined programme has a consistent system operating across its scope. Data management is structured. Standards are documented and followed. Management reviews are regular and use pillar metrics as genuine inputs to decision-making. Horizontal deployment occurs — not just as a good intention but as a tracked process. A programme scoring primarily at Level 3 across most dimensions is not yet world-class, but it has built the infrastructure that makes world-class achievable.


Level 4 — World-Class describes a self-sustaining system that is contributing to management goals beyond the immediate pillar, generating knowledge that other plants or organisations can learn from, and producing measurable results that compound over time. Level 4 represents the upper tier of what JIPM assessors look for in the Special Award and above categories, and is genuinely rare in the programmes I have assessed.


The score interpretation is structured in three bands. A pillar score of 5–12 out of 20 places the programme in the Foundational band — significant infrastructure gaps exist, and the priority is building data systems, management standards, and governance before investing in analytical sophistication. A score of 13–16 places the programme in the Intermediate band — the core methodology exists, but specific dimensions need targeted investment to reach the Defined level. A score of 17–20 indicates a Mature programme — and the question at this level is not whether the system is established, but whether it is genuinely driving compounding results, or has begun to calcify into a compliance activity.
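

To make the arithmetic concrete, the sketch below shows, in Python, how five dimension scores roll up into a pillar score and how that score maps to the three interpretation bands. It is a minimal illustration only; the function names and sample profiles are assumptions, and the diagnostic itself is a facilitated worksheet rather than software.

```python
# Minimal sketch of the scoring arithmetic described above.
# Names and sample profiles are illustrative, not part of the OEC tool itself.

PILLARS = ["FI", "AM", "PM", "QM", "ET", "SHE", "EM", "OI"]

def pillar_score(dimension_scores):
    """Sum five dimension scores, each on the 1-4 maturity scale (maximum 20)."""
    assert len(dimension_scores) == 5, "each pillar has exactly five dimensions"
    assert all(1 <= s <= 4 for s in dimension_scores), "levels run from 1 to 4"
    return sum(dimension_scores)

def interpretation_band(score):
    """Map a pillar score (5-20) to the three-band interpretation."""
    if score <= 12:
        return "Foundational"  # build data systems, standards and governance first
    if score <= 16:
        return "Intermediate"  # core methodology exists; target specific dimensions
    return "Mature"            # ask whether results compound or activity has calcified

# Example: the FI profile 2-3-2-1-2 discussed later in this article.
fi = pillar_score([2, 3, 2, 1, 2])
print(fi, interpretation_band(fi))  # -> 10 Foundational

# Total programme score across all eight pillars (maximum 160).
profile = {pillar: [2, 2, 2, 2, 2] for pillar in PILLARS}
print(sum(pillar_score(scores) for scores in profile.values()))  # -> 80
```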


What the Five Dimensions Cover, Pillar by Pillar


I want to walk through the five dimensions of each pillar with enough specificity to be useful, because the value of a diagnostic framework is not in its structure — it is in whether the questions it asks are the right questions. The selection of five dimensions per pillar involved deliberate choices about what is most important to evaluate, and those choices are worth explaining.


Focused Improvement (FI)


The five FI dimensions are Loss Visibility and Quantification, Project Structure and Methodology, Analytical Tool Proficiency, Horizontal Deployment, and Programme Governance and Pipeline Depth.


[Figure: Sample slide of the OEC Focused Improvement Maturity Diagnostic matrix]
Anatomy of a Pillar Evaluation: A sample view of the OEC Focused Improvement (FI) Maturity Diagnostic matrix slide. Note the granular progression from a reactive, manual baseline (Level 1) to an advanced, financially synchronized, and automated loss-cost matrix ecosystem (Level 4). This rigorous definition across five distinct dimensions gives teams clear milestones to bridge the gap toward official JIPM recognition. Source: Operational Excellence Consulting.

Loss Visibility and Quantification is the foundational dimension because it is impossible to drive meaningful FI without understanding the structure of losses. The dimension asks not just whether OEE is tracked, but whether the loss-cost matrix is being used to convert equipment and operational losses into financial impact — and whether that financial translation is driving the selection of improvement themes. In my experience assessing plants across Asia-Pacific, this is the single most commonly underdeveloped dimension in FI programmes: teams are tracking OEE and running Kaizen events, but the project selection is not grounded in a rigorous loss analysis. The result is that effort is invested in visible problems rather than high-impact ones.


Horizontal Deployment is the dimension that most reliably distinguishes a mature FI programme from an active one. Many programmes produce good Kaizen results at the project level but treat each project as standalone. The systematic question — does this countermeasure apply to similar equipment? — is asked at project closure only when a governance process explicitly requires it. At Level 3, horizontal deployment is a tracked activity with explicit ownership, not an aspiration. At Level 4, the countermeasures that emerge from FI projects are actively feeding the MP information database and influencing how new equipment is specified.


Autonomous Maintenance (AM)


The five AM dimensions are Equipment Condition and Cleaning Standards, Operator Inspection and Abnormality Detection, AM Step Progression and Standard-Setting, AM–FI–PM Integration, and Management Systems and Governance.


The AM–FI–PM Integration dimension is one that I added specifically because it is the most commonly neglected structural element in otherwise active AM programmes. It is not unusual to find a plant where the AM circle is progressing through the steps, the FI team is running improvement projects, and the PM group is managing the maintenance calendar — and none of these three activities are formally connected to each other. Abnormalities identified through AM cleaning do not get routed to the FI pipeline. Recurring findings do not feed the PM schedule. The three pillars operate in parallel rather than as a reinforcing system.


At Level 3 on this dimension, there is a formal routing process for abnormalities — a document, a handoff protocol, a tracking mechanism that converts an AM finding into an FI candidate or a PM update. At Level 4, the system is self-reinforcing: AM standards are updated when FI closes a project that reveals a new normal condition, and PM schedules are adjusted based on what AM inspection routes are actually finding.


The Management Systems and Governance dimension reveals something that activity-board-rich programmes frequently obscure: the difference between AM as a management system and AM as a shop-floor activity. Activity boards, step completions, and OPL counts measure activity. The governance dimension asks whether AM metrics — tag closure rates, abnormality detection frequency, step audit scores — are reviewed at a cadence that makes them management inputs rather than administrative records.


Planned Maintenance (PM)


The five PM dimensions are Failure Data Management and Analysis, Maintenance Method Selection (TBM/CBM), Maintenance Efficiency and Spare Parts Management, AM–PM Role Clarity and Integration, and Maintenance Skills and Programme Governance.


Maintenance Method Selection is the dimension where I most frequently observe a gap between what organisations believe they are doing and what is actually happening at the equipment level. The progression from breakdown maintenance to time-based maintenance (TBM) to condition-based maintenance (CBM) is well understood in principle, but the implementation of that progression requires a criticality assessment — a formal ranking of equipment by the consequence of failure — and most plants that believe they have a planned maintenance system have not completed a rigorous criticality assessment across their full equipment population. Without criticality assessment, TBM intervals are set by intuition or convention rather than by failure data, and the investment in predictive tools is often applied to equipment where the consequence of failure does not justify the cost.


Quality Maintenance (QM)


The five QM dimensions are Defect Visibility and Data Collection, Root Cause Analysis and Recurrence Prevention, QM Matrix and Condition Standards, 4M Condition Control and Quality Assurance, and Programme Governance and Zero-Defect Targets.


The QM Matrix and Condition Standards dimension is the one that most clearly differentiates quality maintenance as a TPM pillar from quality control as a QA function. The QM matrix — mapping failure modes to process steps, and then to the equipment conditions (machine parameters, material specifications, method standards, and operator skills) that must be controlled to prevent those failure modes — is the central organising tool of QM. Its absence is the most reliable indicator of a QM programme that is in name only. Its presence at Level 3 means not just that the matrix exists and was created historically, but that it is a live document updated with every defect event and referenced daily in production management.


Education and Training (ET)


The five ET dimensions are Training Needs Assessment and Design, Skills Visibility and Competency Assessment, Maintenance Skill Training Infrastructure, TPM Qualification and Certification, and Training Governance and Results Integration.


The Training Governance and Results Integration dimension is one that most ET programmes score poorly on, because it requires closing a loop that is genuinely difficult to close: the loop between training investment and TPM activity performance. At Level 4, the TPM programme can demonstrate that training ROI is measurable — that improvements in operator competency scores correlate with improvements in abnormality detection rates, and that technician certification rates correlate with maintenance system performance indicators. This is rare, but it is what a mature ET pillar looks like.


The Maintenance Skill Training Infrastructure dimension is where the Maintenance Dojo concept is evaluated. Organisations that have established a dedicated space for practical maintenance skill training — a Dojo equipped with equipment mock-ups, lubrication training stations, and diagnostic practice tools — are investing in something that has compounding value over time: not just in the skills of current employees, but in the organisation's ability to onboard and develop new technical staff systematically.


Safety, Health and Environment (SHE)


The five SHE dimensions are Legal Compliance and Hazard Recognition, Accident and Near-Miss Management, Safety Measures and Risk Elimination, Environmental Management and Conservation, and Safety Culture and Programme Governance.

The Accident and Near-Miss Management dimension is the one I use most frequently as an early diagnostic indicator of safety culture maturity. Near-miss reporting frequency is a leading safety indicator, not a lagging one — and in my experience, the plants with the best safety records are not the plants with the lowest historical incident rates, but the plants with the highest near-miss reporting rates relative to their workforce size. A high near-miss reporting rate indicates a culture in which people believe that reporting is safe, useful, and valued. A zero near-miss reporting rate in a plant of significant complexity is not an indicator of safety excellence; it is an indicator that the near-miss reporting system is not trusted.


Early Management (EM)


The five EM dimensions are MP Information Collection and Utilisation, Equipment Development Management System, Initial Phase Control and Vertical Start-up, Design for Maintainability and Operability, and Programme Governance and Cross-functional Coordination.


The Initial Phase Control dimension is one that many plants discover they lack the data to score honestly, because they have never measured vertical start-up performance. At Level 1, new equipment is commissioned reactively — problems are addressed as they emerge during ramp-up, and no one has defined what "good" looks like for start-up time or start-up loss. At Level 3, vertical start-up targets are defined before commissioning, measured systematically, and the results feed back into the MP information system that drives future equipment specifications.


Office / Administrative Improvement (OI)


The five OI dimensions are Loss Recognition and Waste Identification, Work Improvement Activities, Administrative Skills and Multi-skilling, Production Support and Supply Chain Effectiveness, and Programme Governance and Efficiency Targets.


OI is the pillar that many plants treat as aspirational rather than operational, and it is the one where the gap between declared activity and actual activity is typically widest. The Production Support and Supply Chain Effectiveness dimension is where this gap is most consequential — because administrative inefficiency (slow procurement, poor scheduling accuracy, information latency) has direct impact on production performance that never appears in equipment-based OEE calculations. A Level 3 programme in this dimension measures administrative support effectiveness through production KPIs — inventory turns, schedule adherence, procurement lead time — rather than treating administration and production as separate performance domains.


What the Modal Score Reveals: An Honest Benchmark


Over the years in which I have been facilitating TPM diagnostic assessments across plants in semiconductor, automotive, and industrial manufacturing, a pattern has emerged in the scores that I think is worth sharing directly, because it contradicts the self-assessment scores that most organisations are sitting on before an OEC diagnostic.


The modal score per pillar, for plants in their first three years of TPM implementation, is consistently between 8 and 10 out of 20. That places the typical programme solidly in the Foundational band on the OEC diagnostic — well below the Intermediate threshold of 13, and significantly below what I observe JIPM assessors expecting from a programme making a credible case for the Award for TPM Excellence.


This does not mean these programmes are failing. It means they are exactly where a programme at 12–24 months of implementation should be, if it has been building honestly: active, progressing, with real results in some areas, but with significant infrastructure gaps that will need to be closed before formal assessment is realistic.


What the modal score conceals is the pillar variation. Almost no programme scores uniformly across all eight pillars. The typical pattern is that AM scores relatively higher than average, because it has the seven-step framework to provide structural clarity. FI scores vary enormously depending on whether loss-cost analysis has been developed; it is the pillar where I most frequently see an 8-point spread between highest and lowest-scoring plants in similar industries. QM and EM are the pillars where I most consistently see Level 1 scores, because they are the ones where the analytical infrastructure required for Level 2 is rarely in place in the early years.


The diagnostic score per dimension is more actionable than the pillar total, because it tells you not just that a pillar needs attention, but which specific infrastructure elements need to be built next. A plant scoring 2-3-2-1-2 across the five FI dimensions knows immediately that Horizontal Deployment and Programme Governance are the priorities — not analytical tool proficiency, which is already developing. A plant scoring 1-2-1-1-2 knows it needs to start with Loss Visibility before anything else will make sense.
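

Read programmatically, that prioritisation is little more than grouping dimensions by their level, with ties resolved by the team's judgement about which dimensions are the foundations the others depend on. A minimal sketch, using the FI dimension names from earlier in this article:

```python
# Illustrative only: surface a pillar's lowest-scoring dimensions first.
FI_DIMENSIONS = [
    "Loss Visibility and Quantification",
    "Project Structure and Methodology",
    "Analytical Tool Proficiency",
    "Horizontal Deployment",
    "Programme Governance and Pipeline Depth",
]

def dimensions_by_level(names, scores):
    """Group dimension names by maturity level, lowest level first.

    Ties within a level are left to the workshop's judgement about
    which dimensions are foundational for the others.
    """
    grouped = {}
    for name, score in zip(names, scores):
        grouped.setdefault(score, []).append(name)
    return sorted(grouped.items())

for level, names in dimensions_by_level(FI_DIMENSIONS, [2, 3, 2, 1, 2]):
    print(f"Level {level}: {', '.join(names)}")
# Level 1: Horizontal Deployment
# Level 2: Loss Visibility and Quantification, Analytical Tool Proficiency,
#          Programme Governance and Pipeline Depth
# Level 3: Project Structure and Methodology
```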


The OEC TPM Maturity Diagnostic and the JIPM Self-Checklist: Two Instruments, Two Purposes


I want to be precise about the relationship between this diagnostic and the JIPM Self-Checklist, because they serve different purposes and the distinction matters for how you use them.


The JIPM Self-Checklist is the official instrument for determining whether your programme meets the minimum conditions for a JIPM award application. It is calibrated to the award eligibility boundary. Its two pass criteria — all Required items scored at 3 or above, and an average across all items of at least 2.5 — define the minimum readiness threshold for a formal application. Using the Self-Checklist, a plant can determine whether it is ready to apply, and identify which specific required items need to reach the threshold before application makes practical sense.
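

For readers who find it easier to see the two pass criteria combined, here is a minimal sketch of that eligibility logic. The item names and scores are hypothetical, and the actual Self-Checklist, as JIPM publishes it, remains the instrument you must use.

```python
# Illustrative check of the two JIPM Self-Checklist pass criteria described above.
# Item names and scores are hypothetical examples only.

def eligible_to_apply(items):
    """items: list of (name, score_0_to_5, is_required) tuples."""
    required_ok = all(score >= 3 for _, score, required in items if required)
    average_ok = sum(score for _, score, _ in items) / len(items) >= 2.5
    return required_ok and average_ok

sample = [
    ("Policy and objectives", 3, True),
    ("AM step progression",   2, True),   # a Required item below 3 blocks the application
    ("FI results",            4, False),
    ("Training system",       3, False),
]
print(eligible_to_apply(sample))  # -> False: average is 3.0, but one Required item scores 2
```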


The OEC TPM Maturity Diagnostic operates at a different level of the same question. It is not award-entry-oriented; it is programme-development-oriented. It is designed to reveal where your programme stands across all eight pillars simultaneously, at a level of diagnostic specificity that makes improvement priorities clear and allows resource allocation decisions to be made deliberately rather than intuitively. Where the JIPM Self-Checklist tells you whether you are at the starting gate, the OEC diagnostic tells you which track to run on and what is slowing you down.


The two instruments are most valuable when used in sequence. I recommend the OEC diagnostic as the first step — run it twelve to eighteen months before you intend to apply for a JIPM award, and use the results to design the improvement programme that will close your largest gaps. Then run the JIPM Self-Checklist six months before your intended application date, to confirm that the required items have reached their minimum thresholds and to identify any last-mile gaps in the award eligibility criteria.


Used this way, the two instruments are not in competition — they are complementary. The OEC diagnostic tells you where to invest; the JIPM Self-Checklist tells you whether you are ready to be evaluated.


How to Run the Diagnostic: Protocol and Facilitation


The OEC TPM Maturity Diagnostic is designed to be used in a facilitated workshop setting, not as a solo desk exercise. This is not a bureaucratic preference — it is an empirical observation about how self-assessment scores drift when the social dynamics of group evaluation are not structured deliberately.


The workshop should involve the TPM pillar leaders, the plant manager, and ideally one or two front-line representatives from the shopfloor. Five to eight people is the optimal group size for productive calibration. The session should be allocated a full half-day — approximately four hours — for all eight pillars, assuming participants have read the diagnostic descriptors in advance.


The protocol I use to counteract score inflation is straightforward but important:


First, pillar leaders do not score their own pillar. They score a peer pillar and have their own pillar scored by a peer. This breaks the defensive dynamic in which a pillar leader's stake in a high score for their domain competes with accurate assessment.


Second, scores are assigned individually and written down before any group discussion begins. The range of scores across evaluators for each dimension becomes the starting point for calibration, not the end point. Disagreements of two or more levels on any dimension are not resolved by averaging — they are treated as evidence that the dimension needs to be understood more clearly (a minimal sketch of this flagging rule follows the protocol below).


Third, significant disagreements are settled empirically rather than argumentatively. When evaluators disagree on whether AM's Equipment Condition dimension is at Level 2 or Level 3, the right response is not to debate the question in the conference room — it is to walk to the equipment and look. The shopfloor does not lie, and the best diagnostic workshops I have facilitated have involved multiple shopfloor walks that produced genuine recalibration of scores because what was visible on the floor did not match the description that the pillar leader had offered in the room.


Fourth, the facilitator's role is not to produce high scores but to produce accurate ones. This requires explicitly naming the dynamic at the beginning of the session — saying, aloud, that the purpose of this exercise is to find gaps, and that a low score on a dimension is useful information that the programme needs, not a reflection on the people responsible for it. I usually frame this by sharing the modal score observation: most programmes of this age score between 8 and 10 per pillar, and scoring in that range is not a problem — it is an honest starting point.
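

Here is that sketch of the flagging rule from the second step. It assumes individual scores have already been written down per evaluator before discussion; the dimension names are taken from the AM pillar above, and the evaluator scores are hypothetical.

```python
# Illustrative: flag dimensions where individually recorded scores diverge by
# two or more levels. Averaging is deliberately avoided; the flag means "go and look".

def dimensions_needing_calibration(individual_scores, min_spread=2):
    """individual_scores: dict of dimension name -> list of evaluator scores (1-4)."""
    flagged = {}
    for dimension, scores in individual_scores.items():
        if max(scores) - min(scores) >= min_spread:
            flagged[dimension] = scores
    return flagged

workshop = {
    "Equipment Condition and Cleaning Standards":    [2, 3, 2],
    "Operator Inspection and Abnormality Detection": [1, 3, 2],  # spread of 2 -> walk the floor
    "AM Step Progression and Standard-Setting":      [3, 3, 3],
}
print(dimensions_needing_calibration(workshop))
# -> {'Operator Inspection and Abnormality Detection': [1, 3, 2]}
```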


Reading the Results: From Scores to Priorities


After the workshop, the score profile across all eight pillars produces a radar chart that reveals the shape of your programme's development. Programmes with a relatively uniform score profile — where all pillars cluster in the same band — are typically constrained by a shared infrastructure weakness: inadequate data collection systems, a governance structure that reviews metrics too infrequently, or a management system that has not yet made TPM a genuine operational priority. The intervention for these programmes is not pillar-specific; it is structural.


Programmes with high variance across pillars — where some pillars score in the Intermediate band while others score in the Foundational band — are typically constrained by resource allocation: one or two pillars have received disproportionate attention, while others have been left to develop on their own. These programmes usually have a strong AM programme (because it is the most visible and the most directly connected to the original TPM launch activities) and a weak OI or EM programme (because these pillars are the furthest from the shopfloor activities that dominate the TPM team's attention). The intervention is rebalancing: redirecting coaching attention, facilitating cross-pillar learning, and setting explicit development goals for the underinvested pillars.
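

As a rough illustration of those two readings, the sketch below classifies an eight-pillar profile by its spread. The three-point spread threshold is an assumption made for the sketch, not a calibrated cut-off from the diagnostic.

```python
# Illustrative reading of the eight-pillar profile shape.
# The spread threshold of 3 points is an assumption for this sketch only.

def profile_reading(pillar_scores, uniform_spread=3):
    """pillar_scores: dict mapping pillar code -> pillar score out of 20."""
    scores = list(pillar_scores.values())
    spread = max(scores) - min(scores)
    if spread <= uniform_spread:
        return "Uniform profile: look for a shared structural constraint (data, governance, priority)."
    return "High-variance profile: look at resource allocation and rebalance the underinvested pillars."

example = {"FI": 11, "AM": 14, "PM": 10, "QM": 7, "ET": 9, "SHE": 12, "EM": 6, "OI": 7}
print(profile_reading(example))  # spread of 8 -> the high-variance reading
```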


Within each pillar, the dimension profile is the primary action-planning input. The lowest-scoring dimension within a pillar is almost always the right place to start — not because it is the most visible weakness, but because the diagnostic is designed so that the lower dimensions in the maturity model are the infrastructure foundations on which the higher dimensions depend. A plant that scores Level 1 on Loss Visibility and Quantification in the FI pillar and Level 3 on Project Structure will find that its Level 3 project structure is operating without an adequate loss-analysis foundation, which means its project selection is likely to be suboptimal regardless of how well the projects are executed.


The Action Planning Framework


The diagnostic output feeds directly into the Action Plan section of the OEC TPM Maturity Diagnostic tool, which structures improvement commitments across pillar, dimension, improvement action, target, owner, timeline, and status.


I want to offer some specific guidance on how to make action planning from a maturity diagnostic meaningful rather than formulaic, because the most common failure mode at this stage is the production of a long action plan that lists every gap identified in the diagnostic and assigns everything to a two-month completion window. Plans of this kind are not action plans — they are wishful thinking documents.


The discipline I apply is to identify no more than two priority dimensions per pillar for the first improvement cycle, and to require that each action item have a verifiable output — not an activity, but a deliverable that can be inspected. "Implement the QM matrix" is not a useful action item; it is a project. "Complete the QM matrix for Product Family A's top three failure modes, with process-step linkages reviewed by the QM pillar leader by [date]" is a useful action item. The difference is specificity, verifiability, and ownership.
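

To show what that level of specificity looks like as a record, here is a minimal sketch of an action-plan entry mirroring the columns named above. The field names and the completion date are illustrative assumptions; the actual tool is a worksheet, not code.

```python
# Illustrative structure for an action-plan entry, mirroring the columns named above.
# Field names and the example date are assumptions for the sketch.
from dataclasses import dataclass
from datetime import date

@dataclass
class ActionItem:
    pillar: str               # e.g. "QM"
    dimension: str            # e.g. "QM Matrix and Condition Standards"
    improvement_action: str   # a verifiable deliverable, not an activity
    target: str               # what "done" looks like, stated so it can be inspected
    owner: str
    timeline: date
    status: str = "Open"

item = ActionItem(
    pillar="QM",
    dimension="QM Matrix and Condition Standards",
    improvement_action="Complete the QM matrix for Product Family A's top three failure modes",
    target="Process-step linkages reviewed and signed off by the QM pillar leader",
    owner="QM pillar leader",
    timeline=date(2026, 11, 30),  # hypothetical date
)
print(item)
```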


The action plan should be reviewed monthly in the TPM steering meeting, with score updates fed back into the diagnostic at six-month intervals. The diagnostic is not a one-time exercise; it is a biannual health check that reveals whether the improvement programme is producing genuine maturity gains or merely adding activity.


What This Diagnostic Does Not Replace


I want to be direct about two things the OEC TPM Maturity Diagnostic does not replace, because being clear about limitations is part of using any tool honestly.


First, it does not replace shopfloor observation. A diagnostic workshop produces scores based on the collective knowledge of the participants. That knowledge is necessarily filtered by the blind spots and optimistic interpretations that any group brings to self-evaluation. The scores should always be cross-checked against direct shopfloor observation — ideally by someone who did not participate in the scoring session, and who is looking specifically at the dimensions where participants disagreed most strongly.


Second, it does not replace the JIPM Self-Checklist for award application purposes. The JIPM Self-Checklist is the official required instrument for determining award eligibility, and assessors use it as a reference in their evaluation. If your programme is targeting the Award for TPM Excellence, you need to be fluent in the Self-Checklist criteria, not just in the OEC diagnostic dimensions. The two instruments are complementary, not interchangeable, and both should be in the hands of every TPM programme director who is serious about formal recognition.


The Connection to OEC's TPM Consulting Practice


This diagnostic tool was developed directly from OEC's experience facilitating TPM implementations and award preparation programmes across Asia-Pacific. The dimensions were not derived from academic literature or theoretical frameworks — they were derived from the specific, repeated observations of what separates a TPM programme that achieves breakthrough results from one that remains perpetually active but underdeveloped.


The semiconductor plants in Malaysia and Singapore where I observed the highest AM step progression rates but the weakest PM integration were the data source for the AM–FI–PM Integration dimension. The automotive manufacturers in the Philippines and Thailand where FI projects produced excellent local results that were never horizontally deployed were the data source for the FI Horizontal Deployment dimension. The industrial equipment manufacturers in Germany and Singapore where the Training pillar was rich in activity but disconnected from skill measurement and TPM performance outcomes were the data source for the ET Training Governance dimension.


Each dimension in the diagnostic encodes a pattern of programme development that I have observed enough times, across enough contexts, to be confident that it is not idiosyncratic — it is structural. The gaps it reveals are the gaps that JIPM assessors have historically identified in the same programmes, and closing them is the practical work of becoming award-ready.


For further reading on specific pillar implementation, I have written practitioner guides for Autonomous Maintenance, Focused Improvement, and the overall TPM framework — each of which provides the depth of implementation guidance that a diagnostic framework can identify the need for, but cannot itself provide.


Conclusion: Seeing Your Programme Clearly


The most valuable thing a diagnostic instrument can do for a TPM programme is restore the capacity for accurate self-perception that organisational familiarity and social dynamics tend to erode over time. Programmes that have been active for twelve months or longer develop a shared narrative about their own progress — a narrative that is almost always more positive than the underlying evidence supports, because the people who build the narrative are the same people whose effort it describes.


The OEC TPM Maturity Diagnostic is designed to interrupt that narrative with something more useful: a structured, evidence-grounded, cross-pillar picture of where the programme actually stands, at a level of specificity that makes the next improvement step obvious rather than arguable.


It will not tell you that your programme is excellent when it is not. It will not spare you the discomfort of discovering that the pillar you thought was your strongest is in fact your most structurally underdeveloped. It will not confirm your existing conclusions. What it will do — used honestly, in a properly facilitated workshop, by people who are genuinely committed to finding the truth about their programme — is give you the diagnostic clarity that makes real improvement possible.


That is, ultimately, what any good diagnostic should do.


About the Author



Allan Ung, Founder & Principal Consultant, Operational Excellence Consulting (Singapore)

Allan Ung is the Founder and Principal Consultant of Operational Excellence Consulting, a Singapore-based management training and consulting firm established in 2009. With over 30 years of experience leading operational excellence and quality transformation in manufacturing-intensive environments, Allan's expertise spans Lean Thinking, Total Quality Management (TQM), TPM, TWI, ISO systems, and structured problem solving.


He is a Certified Management Consultant (CMC, Japan), Lean Six Sigma Black Belt, JIPM-certified TPM Instructor (Japan Institute of Plant Maintenance), TWI Master Trainer, ISO 9001 Lead Auditor, and former Singapore Quality Award National Assessor.


During his tenure with Singapore's National Productivity Board (now Enterprise Singapore), Allan pioneered Cost of Quality and Total Quality Process initiatives that enabled companies to reduce quality costs by up to 50 percent. In senior regional and global roles at IBM, Microsoft, and Underwriters Laboratories, he led Lean deployment, quality system strengthening, and cross-border operational transformation.


Allan has facilitated TPM, OEE and Lean programmes for organisations including Temic Automotive (Continental), Analog Devices, Amkor Technology, STATS ChipPAC, Infineon Technologies, Panasonic, Micron, Lam Research, Tokyo Electron, Dorma, and NEC. He holds a Bachelor of Engineering (Mechanical Engineering) from the National University of Singapore and completed advanced consultancy training in Japan as a Colombo Plan scholar.


His philosophy: "Manufacturing excellence is achieved through disciplined systems, capable leadership, and sustained execution on the shopfloor."


His practitioner-led toolkits have been utilised by managers and organisations across Asia, Europe, and North America to build Design Thinking and Lean capability and drive organisational improvement.


For enquiries about our facilitated diagnostic workshop, or to discuss TPM programme development and award preparation support, visit www.oeconsulting.com.sg or contact us directly through the OEC website.


Related Articles in the TPM Practitioner Guide Series


This article is part of a practitioner guide series on Total Productive Maintenance. Companion guides in the series, covering Autonomous Maintenance, Focused Improvement, the overall TPM framework, and TPM Self-Assessment and the TPM Excellence Award, are referenced throughout the article above.


Build TPM Capability in Your Organisation


At Operational Excellence Consulting, I deliver customised TPM workshops and implementation programmes for manufacturing organisations across Singapore and the Asia-Pacific region — from foundational two-day workshops to multi-year TPM implementation support, facilitated by a JIPM-certified TPM Instructor.


👉 Explore our TPM training courses and practitioner-led resources:


Operational Excellence Consulting offers a full catalogue of facilitation-ready training presentations and practitioner toolkits covering Lean, Design Thinking, and Operational Excellence. These resources are developed from real workshops and transformation projects, helping leaders and teams embed proven frameworks, strengthen capability, and achieve sustainable improvement.


👉 Explore the full library at: www.oeconsulting.com.sg/training-presentations




© Operational Excellence Consulting (OEC). All rights reserved.

