
OEE Benchmarking: A Practitioner's Guide to Measuring, Comparing, and Improving Equipment Performance


Updated: May 10

By Allan Ung | Founder & Principal Consultant, Operational Excellence Consulting

Published: 02 May 2026


Allan Ung with participants from the semiconductor back-end manufacturing partner organisations during a site visit. Site visits were a critical component of the Xerox-model benchmarking process, enabling structured knowledge exchange and peer learning across organisations.

Allan Ung is the Founder and Principal Consultant of Operational Excellence Consulting, a Singapore-based firm established in 2009. With over 30 years of experience leading operational excellence and quality transformation across manufacturing, technology, and global operations — including senior roles at IBM, Microsoft, and Underwriters Laboratories — Allan brings deep shopfloor expertise to every learning room he enters. A Certified Management Consultant (CMC, Japan), Lean Six Sigma Black Belt, TPM Instructor, TWI Master Trainer, and former Singapore Business Excellence Award National Assessor, he has facilitated TPM, OEE and Lean programmes for organisations including Temic Automotive (Continental), Analog Devices, Amkor Technology, STATS ChipPAC, Panasonic, Micron, Lam Research, Infineon Technologies, Dorma, and Tokyo Electron.

The Question Every Manufacturing Leader Asks — and Rarely Answers Well


"How does our OEE compare to our competitors?"


It is one of the most natural questions in manufacturing management, and one of the most difficult to answer rigorously. Every organisation wants to know whether its equipment performance is genuinely competitive — whether the numbers it tracks internally are meaningful when placed alongside those of peer organisations running similar assets.


The problem is that most attempts to benchmark OEE are shallow. Companies exchange headline figures — "our OEE is 65%," "ours is 72%" — without examining the assumptions underneath those numbers. One organisation includes changeover time in its availability loss calculation. Another deducts standby time when no product is waiting. A third measures performance rate using a theoretical ideal cycle time derived from design specifications, while a fourth uses an empirically derived one from time studies. The headline figures are incomparable. And yet organisations routinely make strategic decisions — investments in new equipment, staffing ratios, maintenance budgets — on the basis of comparisons that are, in reality, meaningless.


There is a deeper problem underneath this one. OEE is widely misused even before any benchmarking attempt is made. It is deployed as a performance KPI when it is designed as a diagnostic improvement tool. It is averaged across an entire plant when it is designed to measure a single machine. It is compared across sites running different products under different operating conditions, producing numbers that look comparable but are not. And it is chased as a target — "we need to hit 85%" — in ways that actively harm operational performance.


I have been a JIPM-certified TPM Instructor for over two decades. In that time, I have designed and facilitated multiple structured OEE benchmarking studies across Asia-Pacific, working with semiconductor manufacturers, precision engineering firms, and industrial manufacturers. The most important lesson I have taken from that work is not about OEE scores. It is about the conditions under which OEE scores can be validly compared — and the structured process required to make those comparisons genuinely useful.


This article is a practitioner's guide to using and benchmarking OEE properly: understanding what OEE actually measures, recognising its most common misuses, designing a benchmarking study rigorously, aligning definitions honestly, reading results correctly, and — most importantly — translating findings into improvement actions that move the organisation forward.


What OEE Actually Measures — and What It Does Not


Before tackling benchmarking, it is worth being precise about what OEE is and what it is designed to do — because a significant portion of the errors that make OEE benchmarking misleading originate in a misunderstanding of the metric itself.


Overall Equipment Effectiveness (OEE) is the measurement used in TPM (Total Productive Maintenance) to indicate how effectively machines are running. It is calculated from three factors: Availability, Performance, and Quality. Each factor addresses a different dimension of how close a manufacturing process is to perfect production.


Availability compares potential operating time to the time in which the machine is actually producing. It captures downtime losses — equipment breakdowns, the largest of the Six Big Losses — as well as set-up and adjustment losses caused by changeovers, die exchanges, and start-up adjustments.


Performance (sometimes called Operational Efficiency) compares actual output to what the machine should be producing in the same time. It captures speed losses — reduced operating speed because the equipment cannot be run at its original or theoretical rate — and minor stoppages, the pervasive and underestimated losses caused by machine halting, jamming, misfeeds, and blocked sensors that typically cannot be recorded automatically without suitable instrumentation.


Quality compares good output to total output, capturing process defects and rework as well as start-up yield losses — the material and time wasted during commencement of production, changeover, and adjustment.


The formula is: OEE = Availability × Performance × Quality


The result is expressed as a percentage, representing the proportion of planned production time that is genuinely productive — making good product, at the right speed, without interruption.
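

To make the arithmetic concrete, here is a minimal Python sketch of the calculation. All of the figures are hypothetical, chosen only to show how the three factors are derived from raw shift data and then multiplied together.

```python
# Minimal OEE calculation sketch. All figures are hypothetical.

def oee(planned_time_min: float, downtime_min: float,
        ideal_cycle_time_min: float, total_count: int,
        good_count: int) -> dict:
    """Compute Availability, Performance, Quality and OEE for one machine."""
    run_time = planned_time_min - downtime_min
    availability = run_time / planned_time_min
    performance = (ideal_cycle_time_min * total_count) / run_time
    quality = good_count / total_count
    return {
        "availability": availability,
        "performance": performance,
        "quality": quality,
        "oee": availability * performance * quality,
    }

# One 8-hour shift (480 min), 47 min of downtime, an ideal cycle time
# of 1.0 min per part, 400 parts produced, of which 392 were good.
for name, value in oee(480, 47, 1.0, 400, 392).items():
    print(f"{name}: {value:.1%}")
# availability 90.2%, performance 92.4%, quality 98.0%, oee 81.7%
```

Note how the multiplication punishes compound losses: each factor in this example is above 90%, yet the overall score lands below 82%.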


What OEE does not measure is equally important to understand. OEE does not measure an entire value stream — for a value stream, metrics such as MTTR (Mean Time to Repair) and MTBF (Mean Time Between Failures) are more meaningful. OEE cannot exceed 100%. OEE is not a measure of efficiency in the broader sense; it is a measure of equipment effectiveness relative to planned production time. And OEE, by design, is an improvement diagnostic — a tool for directing improvement effort at the right losses in the right sequence — not a standalone performance KPI.


This last point is the one most commonly missed, and it matters enormously in a benchmarking context, as the next section explains.


The Most Common Misuses of OEE — and Why They Undermine Benchmarking


Understanding OEE's misuses is prerequisite knowledge for anyone designing or interpreting a benchmarking study, because many of the errors that make OEE comparisons meaningless are rooted in these misuses.


Misuse 1: Using OEE as a primary business KPI rather than a diagnostic tool.


OEE was designed to diagnose improvement opportunities — to surface where losses are occurring and direct improvement effort productively. When it is elevated to a primary business KPI, the incentive structure changes in ways that can actively damage performance. Teams that are measured and rewarded primarily on OEE scores may overproduce to inflate the metric — running equipment beyond rated parameters, building inventory that no customer has ordered, or overstaffing processes to drive up throughput numbers — all of which improve the OEE calculation while worsening actual business outcomes. The organisation's customers do not care about internal OEE scores; they care about On-Time In-Full (OTIF) delivery and reliability. Treating OEE as the primary target often shifts focus away from those customer-facing outcomes.


Misuse 2: Comparing OEE across different machines, plants, or products without adjustment.


Comparing OEE results across different equipment types, product families, or operating contexts — without explicitly accounting for the differences — is one of the most common and most damaging errors in OEE management. A machine running a single product at high volume will naturally report higher OEE than one running twenty products at low volume with frequent changeovers, even if the operators and maintenance team managing the second machine are doing outstanding work. The numbers are not comparable. Drawing conclusions from such a comparison is worse than having no comparison at all, because it creates false confidence in decisions that are grounded in misleading data.


Misuse 3: Averaging OEE across an area, shift, or plant.


Averaging OEE across a group of machines or a production area hides the specific losses and specific assets that are dragging down performance. An area average of 62% tells you nothing about which machine at 43% is the bottleneck suppressing throughput, or which shift's performance pattern differs from the others. The value of OEE comes precisely from its granularity — its ability to pinpoint where losses are occurring. Averaging removes that granularity and replaces it with a number that feels informative but is not.
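

The arithmetic behind this point is worth seeing once. In the hypothetical sketch below, four machine-level scores produce the 62% area average mentioned above while the 43% bottleneck disappears from view:

```python
# Hypothetical machine-level OEE scores for one production area.
machine_oee = {"M1": 0.78, "M2": 0.71, "M3": 0.43, "M4": 0.56}

# The area average looks respectable...
area_average = sum(machine_oee.values()) / len(machine_oee)
print(f"Area average OEE: {area_average:.0%}")  # 62%

# ...but only the machine-level view exposes the bottleneck.
bottleneck = min(machine_oee, key=machine_oee.get)
print(f"Weakest machine: {bottleneck} at {machine_oee[bottleneck]:.0%}")  # M3 at 43%
```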


Misuse 4: Manipulating the OEE calculation.


When OEE becomes a primary performance target, the temptation to manipulate the calculation increases. Common manipulation tactics include: logging actual downtime as planned maintenance or engineering time so that it does not count against Availability; using an unrealistically slow ideal cycle time as the performance rate denominator, so that actual output always appears close to theoretical maximum; or selectively measuring only the best-performing machines or shifts. Each of these tactics improves the reported number while making the underlying performance worse — because it prevents the organisation from seeing and acting on the real losses that OEE is designed to surface.


Misuse 5: Chasing the 85% "World Class" benchmark.


The 85% OEE figure has been repeated so frequently, in so many contexts, that many manufacturing managers treat it as a universal performance standard. It was never designed to be universal — it originated in a specific context of discrete part manufacturing on high-volume, low-mix product lines, and has no inherent applicability to operations with different product and process characteristics. The more damaging problem is the message the target sends: improvement teams that have been working toward 85% as an endpoint tend to disengage once they cross it, treating it as a destination rather than a waypoint. Understanding why OEE is where it is — and what specific losses remain — is consistently more valuable than achieving any particular number.


All five of these misuses generate noise in OEE benchmarking. An organisation that manipulates its OEE calculation, averages across all its machines, and uses a non-standard definition of standby time will produce a headline OEE figure that is meaningless when compared against a peer's rigorously calculated single-machine result. Rigorous benchmarking requires identifying and correcting for all of these sources of incomparability before any comparison is made.


What OEE Benchmarking Is — and What It Is Not


With the misuses in mind, OEE benchmarking can be precisely defined: it is the structured process of comparing OEE measurement approaches, calculation methodologies, and performance outcomes across two or more organisations — with the explicit goal of identifying performance gaps, understanding their root causes, and learning from better-performing peers.


This definition has three important implications.


First, benchmarking is a structured process, not an informal exchange of numbers. A meaningful benchmarking study requires a defined methodology, a carefully designed questionnaire, site visits, and a disciplined analysis framework. Without structure, what passes for benchmarking is usually just sharing — and sharing without context is unreliable.


Second, OEE benchmarking is concerned with measurement approaches, not just scores. How organisations measure OEE is at least as important as what they measure. Two organisations can report identical OEE scores using fundamentally different formulas and different definitions of time states, availability, and performance rate. Before comparing results, you must compare methodologies.


Third, and most critically, OEE benchmarking is oriented toward learning and improvement, not competitive positioning or ranking. The most valuable output of a well-run benchmarking study is not a league table. It is a set of insights — about your own practices, about what better-performing peers are doing differently, and about where improvement effort will yield the highest return.


Why OEE Benchmarking Matters in a TPM Context


OEE is the central metric of any serious TPM implementation. It is the quantitative expression of the Six Big Losses — the framework JIPM developed to categorise every form of equipment performance loss into actionable categories. Understanding the relationship between the Six Big Losses and the three OEE components is foundational to making any benchmarking comparison meaningful.


The Six Big Losses map onto the OEE components as follows. Availability losses encompass equipment breakdowns and set-up and adjustment losses. Performance losses encompass minor stoppages and idling, and reduced speed losses. Quality losses encompass process defects and rework, and start-up yield losses.
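

Teams that maintain their own loss-tracking systems sometimes encode this mapping directly. The sketch below is one illustrative way to do so in Python, with category names taken from the paragraph above:

```python
# The Six Big Losses mapped to the three OEE components,
# expressed as a simple lookup structure (illustrative only).
SIX_BIG_LOSSES = {
    "Availability": ["Equipment breakdowns", "Set-up and adjustment"],
    "Performance": ["Minor stoppages and idling", "Reduced speed"],
    "Quality": ["Process defects and rework", "Start-up yield losses"],
}

def component_for(loss: str) -> str:
    """Return the OEE component a given loss category erodes."""
    for component, losses in SIX_BIG_LOSSES.items():
        if loss in losses:
            return component
    raise KeyError(loss)

print(component_for("Reduced speed"))  # Performance
```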


The cascading impact of minor stoppages deserves particular emphasis because it is consistently underestimated. When equipment stalls, jams, or idles due to a temporary problem, the effects extend well beyond the individual machine: the effectiveness of the affected equipment drops immediately, other linked machines are idled, product quality defects tend to increase, and idle machines represent energy loss without output. Yet minor stoppages are rarely taken seriously. The size of the loss is not obvious when each individual stoppage is brief. Teams treat the symptom — clearing the jam — rather than the cause. And without proper instrumentation, the cumulative time lost to minor stoppages is invisible in aggregate OEE data.


In this context, OEE benchmarking serves two functions that internal measurement alone cannot. First, it provides external calibration — a reality check on whether your internal OEE targets are appropriately stretching or whether you are celebrating performance that peers consider unremarkable. Second, it provides best practice intelligence — specific insights into how peer organisations are managing the losses you are struggling with, so that you can learn from their experience rather than reinventing solutions.


Both functions are valuable. But they require the kind of rigorous, structured benchmarking process described in the following sections.


Designing the Study: The Xerox Benchmarking Model


The most robust framework for manufacturing benchmarking studies is the Xerox Benchmarking Model — a ten-step process developed by Xerox Corporation in the 1980s, when the company undertook one of the most comprehensive manufacturing benchmarking programmes ever conducted. The model organises the benchmarking process into four phases.


Phase 1: Planning


The planning phase establishes the foundations for the entire study. It involves three steps: identifying precisely what will be benchmarked (in the case of OEE, this includes both the measurement methodology and the performance outcomes); identifying comparative organisations — partners who operate similar equipment, in similar market segments, and who are willing to share data in a structured, confidential format; and determining the data collection method.


For an OEE benchmarking study, the data collection method typically combines a structured questionnaire with structured site visits to each partner's facility. The questionnaire design is critical and underappreciated. The questions must be specific enough to surface genuine methodological differences, but not so narrow that they miss contextual factors. Questions about lot size, product mix, changeover frequency, equipment age, and the classification of standby or no-WIP time are as important as questions about the OEE formula itself.


Phase 2: Analysis

The analysis phase covers determining the current performance gap and projecting future performance levels. In OEE benchmarking, this means comparing both measurement approaches and results across all partners, accounting for the contextual factors that make direct comparison difficult, and identifying where gaps exist — and why.


This is where most amateur benchmarking studies fail. Organisations compare headline OEE figures without accounting for the factors that make those figures non-comparable. A rigorous analysis explicitly tests whether apparent gaps in OEE scores reflect genuine performance differences or simply differences in how time, loss categories, or performance standards are defined and measured.


Phase 3: Integration

The integration phase covers communication of findings and establishment of functional goals. This means presenting findings to key stakeholders — managers and operators, not just engineers and analysts — in ways that are clear, credible, and actionable. It means translating analytical findings into specific, realistic OEE improvement targets that the organisation commits to pursuing.


Phase 4: Action

The action phase is where the benchmarking investment is realised. It covers developing action plans, implementing specific improvement initiatives, monitoring progress, and recalibrating benchmarks periodically as performance and competitive context evolve. A benchmarking study that ends with a report and no action plan has added no value. The test of any benchmarking study is what changes as a result.


Aligning OEE Definitions Before Comparing Results


The single most important analytical step in any OEE benchmarking study is to establish whether the participating organisations are measuring OEE in compatible ways. This sounds straightforward. It rarely is.


In practice, I have encountered two primary OEE standards in semiconductor and precision manufacturing: the JIPM standard and the SEMI E10 standard. Both provide frameworks for defining equipment time states and calculating OEE components. They are broadly compatible — but the terminology differs, and the way individual organisations translate these standards into their own measurement systems introduces further variation.


Several specific sources of definitional variation must be examined in any rigorous study.


Standby and No-WIP Time. One of the most significant differences between OEE measurement approaches is the treatment of time during which equipment is operational but has no product waiting to be processed. Some organisations include this standby time in their total time denominator, which reduces their reported OEE. Others deduct it — arguing that OEE should measure performance during the periods equipment is actually in use. Neither approach is wrong, but comparing results across organisations that use different approaches, without adjustment, produces misleading conclusions. In one study I facilitated, the same organisation's OEE appeared as 41% under one method and 61% under the other — a 20-percentage-point difference arising entirely from this definitional choice.
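

The scale of that effect is easy to reproduce. The sketch below uses hypothetical time-state figures, chosen to echo the 41%/61% example rather than taken from the study itself, and computes OEE under both treatments of standby time:

```python
# Hypothetical monthly time states for one machine (hours).
total_hr = 720.0       # scheduled time in the period
downtime_hr = 130.0    # all availability losses
standby_hr = 236.0     # equipment up, but no product waiting (no WIP)
producing_hr = total_hr - downtime_hr - standby_hr   # 354 h producing

perf_x_quality = 0.84  # assumed combined performance and quality rate

def oee(denominator_hr: float) -> float:
    """OEE under a given choice of time denominator."""
    return (producing_hr / denominator_hr) * perf_x_quality

print(f"standby included:  {oee(total_hr):.0%}")               # ~41%
print(f"standby excluded:  {oee(total_hr - standby_hr):.0%}")  # ~61%
```

Same machine, same month, same losses: the 20-point gap is produced entirely by the denominator.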


Quality Rate Inclusion. Some organisations — particularly in back-end semiconductor test operations — exclude Quality Rate from their OEE calculation on the grounds that test yields are determined by upstream fabrication processes and are not within the test operation's control. This is a defensible position, but it must be explicitly acknowledged when comparing results with organisations that include Quality Rate.


The Definition of Ideal Cycle Time. Performance Rate measures how closely actual throughput approaches the theoretical maximum. But organisations differ in how they establish the theoretical maximum — the ideal cycle time. Some use a pure measurement of the fastest sustainable rate (Pure UPH). Some extract it from a product data management system. Others establish it through time studies. These differences produce different denominators for the performance rate calculation, which affects the resulting OEE score.
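

The sensitivity of the score to this choice is simple to demonstrate. In the hypothetical sketch below, the same run time and the same actual output produce two different Performance Rates purely because the assumed ideal cycle time differs:

```python
# Identical actual performance, two choices of ideal cycle time.
run_time_min = 420.0   # producing time in the period
actual_output = 900    # units produced

for label, ideal_cycle_sec in [("design-spec ideal", 24.0),
                               ("time-study ideal", 26.5)]:
    performance = (ideal_cycle_sec / 60.0) * actual_output / run_time_min
    print(f"{label:>18}: {ideal_cycle_sec}s cycle -> {performance:.1%}")
# design-spec ideal: 85.7%, time-study ideal: 94.6%
```

This is also the mechanism behind Misuse 4: a slower ideal cycle time inflates the Performance Rate without any change on the machine.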


Changeover Classification. Whether changeover time is classified as a scheduled downtime loss counted inside the Availability calculation, or as a planned maintenance or external loss potentially excluded from it, significantly affects reported Availability and thus overall OEE. The principled position — and the one most consistent with JIPM and SEMI E10 standards — is to include breaks and changeovers in Planned Production Time. If time could theoretically be used for value-added production to meet customer demand, it should be included in the OEE calculation. Excluding it hides the true capacity cost of changeovers and creates an artificial floor that prevents the organisation from seeing and addressing one of its most significant losses.


Method of Measuring Speed Losses. Organisations differ in how they capture minor stoppages and speed losses — some use automated machine monitoring systems that log every stoppage automatically; others rely on operator-recorded data. Operator-recorded minor stoppages are systematically under-reported because operators attend to individual stoppages without perceiving or recording their cumulative scale. A benchmark comparison that places an organisation using automated monitoring against one using manual recording will appear to show higher losses in the automated organisation — when in reality the automated organisation simply has better visibility of losses that exist equally in both.


In a study I facilitated across three semiconductor back-end manufacturing organisations, the definition-mapping process revealed that despite surface-level differences in terminology and formula structure, all three partners' OEE definitions and calculation methods were fundamentally compatible with both JIPM and SEMI E10 standards. That finding was itself a meaningful result — it established that the subsequent comparison of OEE scores was methodologically sound.


Comparing OEE Results: Reading the Numbers Correctly


Once you have established methodological comparability, you can compare OEE results — but with appropriate caution. OEE scores, even across organisations using compatible measurement approaches, can reflect very different underlying operational realities.


The factors that most commonly create incomparability in OEE results, even after methodological alignment, are:


Lot Size and Processing Time. Smaller lots increase changeover frequency, which increases the proportion of time lost to changeovers and reduces Availability. In low-volume, high-mix manufacturing environments — which are common in semiconductor test operations — small lot sizes can be the single largest driver of OEE reduction. When comparing OEE across organisations with different lot size profiles, this effect must be isolated.


Product Mix Complexity. Equipment running complex products — those requiring long test programmes, multiple insertion steps, or frequent parameter changes — will generally report lower OEE than equipment running simple, standardised products. Apparent OEE gaps may therefore reflect product mix differences rather than operational performance differences.


Changeover Frequency. An organisation running fewer product types on the same equipment will naturally report higher OEE, all other things being equal. The appropriate comparison is not raw OEE but OEE adjusted for the changeover burden each organisation's product mix imposes.


Equipment Age. Older equipment is generally expected to be less reliable. In practice, I have found this relationship to be less predictable than it appears. In one benchmarking study, the organisation with the youngest equipment (averaging two years for its newest platform) showed OEE levels broadly on par with organisations whose equivalent equipment averaged seven to eight years. The newer equipment had received more intensive improvement focus; the older equipment had been subject to sustained autonomous maintenance and planned maintenance programmes that kept it performing well. Equipment age is a factor, but it is not a determinant of OEE performance.


Bottleneck Status. The OEE of bottleneck equipment has much greater operational significance than the OEE of non-bottleneck assets. Improvement investments that raise the OEE of a bottleneck translate directly into throughput gains. The same investments applied to non-bottleneck equipment may produce no throughput benefit at all. When comparing OEE results across organisations, it is important to understand whether the equipment being compared serves a comparable function — including whether it is a bottleneck — in each organisation's operation.


In the study I facilitated, the group average OEE across all partners (measured on a consistent basis) was approximately 50% for a defined set of semiconductor test platforms over a five-month period. Once lot size variation was taken into account, the OEE levels were deemed broadly comparable across organisations. The lesson was precisely the one this section describes: apparent differences in headline OEE figures were largely attributable to contextual factors rather than to genuine differences in operational effectiveness.


Stacked bar chart: Comparison of OEE Results, Jan–May 2014. Each partner's bar shows the full time-state breakdown (Productive Time, Standby Time, Non-Productive Time, Engineering Time, Scheduled Downtime, Unscheduled Downtime), with an OEE line at 55% for Partner A, 55% for Partner B, 41% for Partner C, and 61% for Partner C with standby time excluded.
Comparison of OEE results across three anonymised semiconductor back-end manufacturing organisations, January–May 2014, measured on equivalent test platforms. Partner A and Partner B each recorded 55% OEE. Partner C recorded 41% when standby (no-WIP) time was included in the calculation — and 61% when standby time was excluded. The 20-percentage-point difference for Partner C arises entirely from one definitional choice, with no change in actual operational performance. The three-partner group average using a consistent methodology was approximately 50%. Source: OEE Performance and Measurement System Benchmarking Study, Operational Excellence Consulting, 2014.

Five Factors That Drive OEE Differences Across Sites


Across the benchmarking studies I have facilitated, five factors consistently emerge as the most significant drivers of OEE differences across sites. None of these factors appears in the OEE formula. All of them matter more than the formula.


Factor 1: Lot Optimisation


The impact of lot size on OEE in high-mix manufacturing environments is consistently underestimated. In environments where changeover time is non-trivial, doubling the average lot size can reduce changeover frequency by half — potentially adding several percentage points to OEE through the Availability component alone. Lot merging strategies, where operationally feasible, represent one of the highest-return, lowest-capital OEE improvement levers available.
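

The mechanics of the lever can be sketched in a few lines. Every figure below is an assumption (a weekly demand, a fixed changeover duration, two lot-size policies), intended only to show how lot size propagates into the Availability component:

```python
# Hypothetical weekly figures for one machine.
planned_time_min = 10_080.0    # 7 x 24 h of planned production
weekly_demand_units = 4_000
changeover_min = 30.0          # time lost per changeover
other_downtime_min = 600.0     # breakdowns and other losses

def availability(lot_size: int) -> float:
    changeovers = weekly_demand_units / lot_size
    lost_min = changeovers * changeover_min + other_downtime_min
    return (planned_time_min - lost_min) / planned_time_min

for lot in (100, 200):
    print(f"lot size {lot}: availability {availability(lot):.1%}")
# lot size 100: 82.1%  ->  lot size 200: 88.1%
```

Doubling the lot size halves the changeover count and recovers roughly six availability points in this example, with no capital spend.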


Factor 2: OEE Analytics Capability


Organisations that can see their OEE data in real time — drilled down to individual machine, product, shift, and loss category — are significantly better positioned to act on it than organisations that review aggregated weekly reports. The gap between reactive and proactive analytics capability is stark and practically important.


Manual data capture is reactive by nature: it produces historical information with slow reaction times and a snapshot view of losses at the moment of review. Because it depends on human recording, it is also inherently inaccurate, reflecting perceived losses rather than the true losses on equipment. Automated data capture, by contrast, enables rapid reaction, continuous visualisation, and real-time data that reflects actual losses — including the minor stoppages and speed variations that manual recording systematically under-captures.


The practical implication for benchmarking: when comparing OEE results between an organisation using automated monitoring and one using manual recording, the comparison is not simply between two performance levels. It is between an organisation with genuine visibility into its losses and one that may be flying partially blind. The organisation with better analytics will almost always appear to have higher losses initially — because it can see losses the other cannot.


Factor 3: Maintenance Maturity


The organisations with the strongest OEE performance are not those with the largest maintenance budgets. They are the ones that have implemented the systematic TPM maintenance cycle: Autonomous Maintenance carried out by trained operators who understand their machines, Planned Maintenance based on actual failure mode data, and the systematic use of MTTR and MTBF trend data to identify recurring failure modes and drive recurrence prevention. Equipment maintenance maturity is directly observable during site visits — in the cleanliness and organisation of equipment, in operators' ability to describe normal versus abnormal conditions, and in the quality of maintenance records.
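

For teams building that trend data, the sketch below derives MTTR and MTBF from a simple breakdown log. Conventions for MTBF vary; this version measures operating time from the end of one repair to the start of the next failure, and all timestamps are illustrative:

```python
from datetime import datetime

# (failure_start, repair_complete) pairs for one machine.
events = [
    (datetime(2014, 1, 3, 8, 15), datetime(2014, 1, 3, 10, 0)),
    (datetime(2014, 1, 9, 14, 30), datetime(2014, 1, 9, 15, 10)),
    (datetime(2014, 1, 20, 2, 45), datetime(2014, 1, 20, 6, 5)),
]

# MTTR: average duration of the repairs themselves.
repair_hours = [(end - start).total_seconds() / 3600 for start, end in events]
mttr = sum(repair_hours) / len(repair_hours)

# MTBF: average operating time between the end of one repair
# and the start of the next failure.
uptimes = [(events[i + 1][0] - events[i][1]).total_seconds() / 3600
           for i in range(len(events) - 1)]
mtbf = sum(uptimes) / len(uptimes)

print(f"MTTR: {mttr:.1f} h, MTBF: {mtbf:.0f} h")  # MTTR: 1.9 h, MTBF: 200 h
```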


Factor 4: Operator Engagement and Awareness


In all three organisations I benchmarked in one study, managers, engineers, and supervisors understood why the OEE number was where it was. But operators — the people whose daily actions have the most direct impact on OEE — had limited awareness of what OEE meant or how their behaviour affected it. This is a significant and consistently underaddressed gap.


A fundamental principle in OEE management that the benchmarking findings reinforced: the OEE chart cannot promote improvement if it does not get back to the shop floor. OEE information must be shared, made visible, and explained in terms that connect daily operator actions — cleaning, lubrication, autonomous inspection — to the performance numbers operators see. Operators who receive timely, shift-by-shift OEE feedback and understand the connection between their daily practices and the numbers they see perform at a measurably higher level than those who simply execute tasks without context.


One practice observed during a site visit that exemplifies this principle: end-of-shift performance summaries posted at each workstation, combined with hourly productivity monitoring in the final two hours of each shift to prevent the productivity decline that commonly occurs as workers mentally transition out of the shift. This simple practice — visible, immediate, specific feedback — had a measurable impact on shift-end performance without any capital investment.


Factor 5: Improvement Methodology Discipline


The organisations that achieve sustained OEE improvement share one characteristic: they apply structured improvement methodologies consistently, rather than relying on ad hoc problem-solving. The specific methodology matters less than the discipline with which it is applied. In the benchmarking study, organisations used TQM, Lean Six Sigma, and Root Cause Analysis frameworks — all different, all effective when applied consistently. What distinguished the organisations with stronger improvement trajectories was not their methodology but their discipline: improvement projects that reached root cause, not just symptom; changeover improvements that were standardised and sustained, not just demonstrated; and feedback loops that connected improvement results to the next cycle of analysis.


OEE Analytics: From Reactive to Proactive


One of the most revealing dimensions of any OEE benchmarking study is the comparison of how organisations collect, process, and act on OEE data. The range of practice is wide, and the gap between levels of maturity is larger than most organisations appreciate.


At the advanced end of the spectrum are organisations with fully integrated, automated OEE analytics that capture data at machine level in real time, process it automatically, and present it through dashboards that can be interrogated at multiple levels of granularity — by machine, product, shift, loss category, and individual lot. These systems enable fast, precise improvement. When a minor stoppage pattern is visible in real-time data, engineers can investigate immediately rather than waiting for a weekly report. The organisation is operating proactively.


At the other end are organisations that track OEE through manual data entry by assigned staff, reporting on a shift or daily basis, with analysis conducted on spreadsheets. These organisations are not failing — they may have sound OEE processes — but they are working harder for their insights, and they are necessarily limited in how quickly and precisely they can respond to OEE signals. The organisation is operating reactively.


Between these extremes, a common intermediate position is an organisation whose core OEE data systems — one tracking availability losses, one tracking performance rate, one tracking quality — operate independently, without integration. Analytics are partially automated but require manual consolidation before meaningful analysis can occur.


Three practical guidance points for OEE analytics development:


First, prioritise automating data capture over automating analysis. Manual data entry is the most significant source of OEE data error and under-reporting. The priority should be machine-level automated data capture that eliminates the human recording step. Analysis tools can follow.


Second, avoid analysis paralysis. Abundant OEE data is an asset only if it drives action. If an analytics system produces more information than the organisation can act on, the additional complexity is generating cost, not value. The relevant test of any OEE analytics system is not "how much data can it capture?" but "does it make it easy for improvement teams to identify what to fix and take action?"


Third, design analytics outputs for each audience. Operators need shift-level, machine-level feedback in immediately understandable formats — an Andon-style display showing current status, or a simple shift-end summary. Engineers need loss category drill-downs and trend data. Managers need plant-level trend lines and improvement project status. The same underlying data, presented for each audience's decision-making context, serves all three. A single consolidated report that tries to serve everyone typically serves no one well.


The 85% "World Class" Myth: A Practitioner's Perspective


No discussion of OEE benchmarking is complete without addressing the most widely repeated and least useful figure in manufacturing management: the 85% "World Class OEE" standard.


The 85% benchmark — and its component targets of 90% Availability, 95% Performance, and 99% Quality — appears in the original TPM literature and has been cited so frequently that many manufacturing managers treat it as a universal performance standard.


It is not, and it was never intended to be. The 85% figure originated in a specific context: discrete part manufacturing, running dedicated equipment on high-volume, low-mix product lines. In that context, it is a meaningful and stretching target. Applied to semiconductor test operations running low-volume, high-mix portfolios, or to process manufacturing environments with fundamentally different loss profiles, it is arbitrary.


There are two ways in which the 85% standard actively damages OEE management. The first is when it is applied to contexts where it is not meaningful, generating a gap that does not reflect genuine operational underperformance and that misdirects improvement investment. The second is when it is achieved — because improvement teams that have been working toward 85% as an endpoint frequently disengage once they cross that threshold. The number has been reached; the project is complete. But OEE improvement has no ceiling. The relevant question is always: what are the remaining losses, what is causing them, and what is the most cost-effective way to reduce them?


In one benchmarking study, organisations with OEE levels in the 55–65% range for high-mix test operations were performing well relative to the actual constraints of their business — small lot sizes, frequent changeovers, complex product mixes. Applying an 85% target to those operations without adjustment would have created a performance gap that was operationally meaningless, and potentially directed improvement investment toward areas — such as capital-intensive equipment upgrades — where it would produce far less return than operational and scheduling improvements on the existing equipment.


The practical guidance: set OEE targets in the context of your specific operating environment — your product mix, lot size profile, equipment age, and market requirements. Benchmark against peer organisations facing comparable constraints, not against a universal standard that was never designed for your context.


From Benchmarking to Improvement: The Six Approaches That Consistently Deliver Results


A benchmarking study that produces a report is a research project. A benchmarking study that changes what an organisation does is an improvement initiative. The difference lies in the quality of the action planning that follows the analysis.


The improvement strategies that consistently emerge from well-run OEE benchmarking studies can be organised into six approaches. The improvement goals for each of the Six Big Losses that underpin these approaches are: breakdowns reduced to zero; set-up and adjustment time minimised to under ten minutes with zero adjustments; reduced speed losses eliminated so that equipment matches or exceeds design specifications; minor stoppages reduced to zero; defects and rework expressed in parts per million (ppm) approaching zero; and start-up losses minimised through optimised changeover and adjustment procedures. These are stretching targets, not immediate expectations — but having quantified goals for each loss category ensures that improvement effort is directed at the right problems in the right priority order.


Approach 1: Structured Root Cause Analysis (5 Why Analysis)


The OEE benchmarking analysis identifies where losses are occurring and at what relative magnitude. The improvement response must address root causes, not symptoms. 5 Why analysis — applied by repeatedly asking "why" until the underlying condition that allowed the problem to occur is identified — is the most accessible and widely applicable root cause tool available to improvement teams. It is particularly effective for equipment reliability problems where the failure chain is discoverable through systematic questioning. The intended outcome of a 5 Why exercise is the root cause of the defined problem, not the solution itself; the solution comes in the next stage of problem-solving.


Approach 2: Autonomous Maintenance


The most effective single intervention for sustaining OEE improvement is a well-implemented Autonomous Maintenance programme. The seven steps of AM build progressively: initial cleaning that serves as inspection; countermeasures against contamination sources and difficult-to-access areas; establishment of cleaning and lubrication standards; general inspection of the equipment system; autonomous inspection using operator-developed standards; standardisation through visual workplace management; and finally, full autonomous equipment management as part of normal operations.


The key discipline that makes AM effective is that operators develop four specific equipment-related skills through this progression: detecting abnormalities, correcting and restoring abnormalities, setting optimal equipment conditions, and maintaining those optimal conditions independently. An operator who has cleaned every surface of a machine, tightened every fastener, and checked every lubrication point develops an intuitive sense of that machine's normal condition. Abnormalities become immediately detectable — and detectable before they become failures.


Approach 3: Focused Improvement (Kobetsu Kaizen)


Focused Improvement — Kobetsu Kaizen in Japanese — is the TPM pillar that targets the elimination of specific, prioritised losses through cross-functional project team activity. Where Autonomous Maintenance is a daily discipline carried out by production workers, Focused Improvement is a periodic, intensive project that goes beyond maintaining basic operating conditions to directly improving equipment performance, freeing processes from chronic losses and the effects of design weaknesses.


Focused Improvement teams work through the Six Big Losses systematically using a structured PDCA approach, combining tools such as Pareto analysis to prioritise the highest-impact losses and fishbone and Why-Why (5 Why) analysis to identify root causes. The team composition — drawing from operators, technicians, engineers, and managers — ensures that both operational knowledge and technical expertise are applied to each problem.


Approach 4: Quick Changeover (SMED)


For high-mix operations where changeover frequency is a significant OEE driver, structured changeover reduction using the Single Minute Exchange of Die (SMED) methodology can deliver substantial Availability improvements. Changeover times can typically be reduced to single-digit minutes through systematic application of the three SMED stages: first, separating internal and external changeover tasks — sorting out the operations that can be prepared and completed while the machine is still running, which alone can reduce changeover time by 30 to 50 percent; second, converting remaining internal setup activities to external ones through preparation of operating conditions in advance and standardisation of functions that currently require adjustment; and third, streamlining all remaining setup time through parallel operations, quick-release clamping, and numerical positioning settings that eliminate trial-and-error adjustments. In the long term, SMED enables smaller lot sizes and improved responsiveness to customer demand in addition to its direct OEE impact.
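

As a worked illustration only, the sketch below applies the three stages to a hypothetical 60-minute changeover. The stage-by-stage reductions are assumptions within the ranges cited above, not results from the benchmarking study:

```python
# SMED staging applied to a hypothetical 60-minute changeover.
baseline_min = 60.0

# Stage 1: separate internal from external tasks
# (the 30-50% reduction range cited above; 40% assumed here).
after_stage1 = baseline_min * (1 - 0.40)   # 36.0 min

# Stage 2: convert remaining internal setup to external preparation
# (12 min assumed convertible).
after_stage2 = after_stage1 - 12.0         # 24.0 min

# Stage 3: streamline the remainder with parallel tasks and
# quick-release clamping (60% reduction assumed).
after_stage3 = after_stage2 * (1 - 0.60)   # 9.6 min

print(f"{baseline_min:.0f} -> {after_stage1:.0f} -> "
      f"{after_stage2:.0f} -> {after_stage3:.1f} minutes")
# 60 -> 36 -> 24 -> 9.6 minutes: into single-digit territory
```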


Approach 5: Mistake-Proofing (Poka-Yoke)


Where quality losses are significant, mistake-proofing — designing processes and tooling to prevent errors at source rather than detecting them downstream — is more effective than inspection. Poka-Yoke techniques operate at two levels: prevention, which makes mistakes physically impossible; and detection, which ensures that when mistakes do occur they are caught immediately before becoming defects. In OEE terms, quality-driven improvement is frequently overlooked in favour of availability and performance rate improvements, but in environments where yield losses are material it represents a significant opportunity.


Approach 6: P-M Analysis for Chronic Losses


For complex, chronic losses that resist resolution by 5 Why analysis and standard root cause approaches — recurring failures with multiple interrelated causes that vary with each occurrence — P-M Analysis provides a more rigorous framework. P-M Analysis was developed specifically to overcome the weaknesses of traditional methods in dealing with complex chronic problems.


The "P" stands for both "phenomenon" (the abnormal event to be controlled) and "physical" (the perspective from which the phenomenon is examined). The "M" refers to "mechanism" and to the 4Ms — Machine, Men (operator actions), Method, and Material — which provide the framework of causal factors to be investigated. The essence of P-M Analysis is to examine every physical detail so that no causal factor is missed.


The eight steps of P-M Analysis are: clarify the phenomenon precisely; conduct a physical analysis of how the phenomenon is generated; identify constituent conditions of the physical analysis; study the 4Ms for all possible causal factors; establish optimal conditions and standard values for each factor; survey causal factors to identify abnormalities against those standards; determine which abnormalities need to be addressed based on their relationship to the phenomenon; and propose and implement improvements. Although it demands more time and expertise than 5 Why analysis, P-M Analysis has reduced chronic losses to zero in many manufacturing environments where standard approaches had repeatedly failed.


What Benchmarking Reveals About Human Factors — and Why They Deserve Their Own Study


One of the most consistent findings from structured OEE benchmarking studies is that human factors — operator discipline, shift handover practices, supervisory behaviours, training quality, and workplace culture — are as significant as technical factors in determining OEE outcomes. Yet human factors are systematically underexamined in most OEE improvement programmes, precisely because they are less visible and harder to measure than equipment parameters.


A recommendation I make consistently to organisations completing a first round of OEE benchmarking: plan a follow-on study specifically focused on human factors. Topics that deserve structured benchmarking attention include: how operators are trained and recertified on equipment, how shift handovers are conducted and what information is transferred, how supervisors coach operators on OEE-related behaviours, how improvement ideas from the production floor are captured and acted upon, and how recognition and reward practices reinforce high-OEE behaviours.


The Training Within Industry Job Instruction (JI) module is directly applicable to OEE improvement. Inconsistent training by supervisors is one of the most common causes of inconsistent OEE performance across shifts — the same equipment performing significantly differently on different shifts because operators have learned slightly different practices from different supervisors. TWI JI provides supervisors with a structured, four-step method for teaching operating and maintenance tasks consistently, eliminating the variation that inconsistent training introduces into OEE performance.


Knowledge Management: The Multiplier Effect


A benchmarking study that identifies best practices and shares them with the teams directly involved has created value. A benchmarking study whose findings are systematically captured, structured into transferable knowledge assets, and deployed across all relevant platforms, sites, and teams has created exponentially more value.


The practical implications are threefold. Best practices identified through benchmarking should be structured into standard work documents, one-point lessons, and training materials that make the knowledge transferable to any operator or technician on any platform — not kept within the team that participated in the study. Improvement approaches that prove effective on the benchmark equipment platforms should be systematically evaluated for applicability to other platforms and production areas. And the benchmarking process itself — the questionnaire, the site visit protocol, the analysis framework — should be documented and institutionalised so that future studies can be conducted more efficiently, building on the methodological investment already made.


Conclusion: What a Rigorous Benchmarking Study Actually Delivers


The most valuable output of a rigorous OEE benchmarking study is not a comparison of OEE scores. It is a set of structured, validated insights that the participating organisation could not have generated through internal analysis alone.

From my experience facilitating these studies, the most consistently valuable insights fall into four categories.


Methodological validation — confirmation that your OEE measurement approach is compatible with recognised standards and with peer organisations' approaches. The discipline of mapping your own system against JIPM or SEMI E10 standards, and against peer organisations' systems, surfaces idiosyncrasies and hidden definitional choices before they become entrenched and before they distort future benchmarking comparisons.


Performance calibration — an honest assessment of whether your OEE performance, adjusted for the contextual factors described in this article, is broadly on par with peers, ahead of them, or behind them. This is the external reality check that internal analysis cannot provide.


Practice benchmarking — specific intelligence about how peer organisations manage the losses you are struggling with. The value here is not copying practices wholesale, but understanding the principles underlying better performance and adapting them intelligently to your own context.


Improvement prioritisation — clarity about which improvement initiatives will deliver the greatest OEE impact relative to the resources required. Benchmarking that identifies lot optimisation as the highest-return lever for a high-mix operation should redirect investment away from capital-intensive technical improvements toward operational and scheduling disciplines that deliver comparable results at a fraction of the cost.


The semiconductor back-end manufacturing study I facilitated confirmed all four of these output types. The participating organisations left with validated OEE measurement approaches, calibrated performance positions, specific best practice intelligence from peer site visits, and a prioritised improvement roadmap grounded in both their own operational data and their peers' experience.


OEE improvement without benchmarking is navigating without a map. You may be moving in the right direction. You may not. Benchmarking tells you where you are, shows you where others have reached, and helps you choose the most effective path forward.


About the Author



Allan Ung, Founder & Principal Consultant, Operational Excellence Consulting (Singapore)

Allan Ung is the Founder and Principal Consultant of Operational Excellence Consulting, a Singapore-based management training and consulting firm established in 2009. With over 30 years of experience leading operational excellence and quality transformation in manufacturing-intensive environments, Allan's expertise spans Lean Thinking, Total Quality Management (TQM), TPM, TWI, ISO systems, and structured problem solving.


He is a Certified Management Consultant (CMC, Japan), Lean Six Sigma Black Belt, JIPM-certified TPM Instructor (Japan Institute of Plant Maintenance), TWI Master Trainer, ISO 9001 Lead Auditor, and former Singapore Quality Award National Assessor.


During his tenure with Singapore's National Productivity Board (now Enterprise Singapore), Allan pioneered Cost of Quality and Total Quality Process initiatives that enabled companies to reduce quality costs by up to 50 percent. In senior regional and global roles at IBM, Microsoft, and Underwriters Laboratories, he led Lean deployment, quality system strengthening, and cross-border operational transformation.


Allan has facilitated TPM, OEE and Lean programmes for organisations including Temic Automotive (Continental), Analog Devices, Amkor Technology, STATS ChipPAC, Infineon Technologies, Panasonic, Micron, Lam Research, Tokyo Electron, Dorma, and NEC. He holds a Bachelor of Engineering (Mechanical Engineering) from the National University of Singapore and completed advanced consultancy training in Japan as a Colombo Plan scholar.


His philosophy: "Manufacturing excellence is achieved through disciplined systems, capable leadership, and sustained execution on the shopfloor."


His practitioner-led toolkits have been utilised by managers and organisations across Asia, Europe, and North America to build Design Thinking and Lean capability and drive organisational improvement.


For enquiries about TPM implementation, OEE benchmarking or operational excellence consulting, visit www.oeconsulting.com.sg or contact us directly through the OEC website.


Related Articles in the TPM Practitioner Guide Series


This article is part of OEC's TPM Practitioner Guide series. For a complete understanding of the TPM management system and its components, see the related articles below:




  • Autonomous Maintenance: A Practitioner's Guide — The seven-step AM development pathway, the AM-PM partnership that makes both pillars work, and the practitioner discipline required to build genuine operator ownership of equipment condition.






Build TPM Capability in Your Organisation


At Operational Excellence Consulting, I deliver customised TPM and OEE workshops and implementation programmes for manufacturing organisations across Singapore and the Asia-Pacific region — from foundational two-day workshops to multi-year TPM implementation support, facilitated by a JIPM-certified TPM Instructor.


👉 Explore our TPM training courses and practitioner-led resources:


Operational Excellence Consulting offers a full catalogue of facilitation-ready training presentations and practitioner toolkits covering Lean, Design Thinking, and Operational Excellence. These resources are developed from real workshops and transformation projects, helping leaders and teams embed proven frameworks, strengthen capability, and achieve sustainable improvement.


👉 Explore the full library at: www.oeconsulting.com.sg/training-presentations




© Operational Excellence Consulting. All rights reserved.
