Overall Equipment Effectiveness (OEE): A Practitioner's Guide to Measuring, Understanding, and Improving Equipment Performance
By Allan Ung | Founder & Principal Consultant, Operational Excellence Consulting (OEC)
Published: 06 May 2026

Allan Ung is the Founder and Principal Consultant of Operational Excellence Consulting (OEC), a Singapore-based management consultancy established in 2009. With over 30 years of experience leading operational excellence and quality transformation across manufacturing, technology, and global operations — including senior roles at IBM, Microsoft, and Underwriters Laboratories (UL) across Asia-Pacific — Allan brings deep shopfloor and strategic expertise to every engagement. He holds the following qualifications and recognitions: Certified Management Consultant (CMC, Japan), Certified Lean Six Sigma Black Belt, JIPM-certified TPM Instructor, TWI Master Trainer, and former National Examiner for the Singapore Business Excellence Award. Allan has designed and facilitated TPM implementations and operational excellence programmes for organisations across semiconductor, automotive, industrial manufacturing, logistics, and public sectors. His clients include Temic Automotive (Continental), Analog Devices, Amkor Technology, STATS ChipPAC, Panasonic, Micron, Lam Research, Infineon Technologies, Dorma, and Tokyo Electron, as well as Singapore government ministries and statutory boards.
The Metric That Tells You Everything — and the Way Most Plants Use It to Learn Nothing
Ask a plant manager what their OEE is and most will give you a number quickly and confidently. Ask them what is driving that number, and the conversation becomes considerably harder. Ask them what has changed as a result of tracking it, and harder still. In my experience with semiconductor fabrication facilities, automotive assembly operations, precision engineering plants, and industrial manufacturers across Asia-Pacific, this pattern is almost universal. OEE is measured. OEE is reported. OEE is discussed in monthly performance reviews. But in far too many plants, OEE is not actually used.
That gap — between tracking OEE and deploying it as a genuine improvement instrument — is what this guide is designed to address. Overall Equipment Effectiveness is simultaneously the most powerful equipment performance metric in manufacturing and the most routinely misapplied one. The misapplication takes predictable forms: OEE scores that are inflated by definitional choices, plant-level aggregations that hide the losses they are supposed to surface, world-class benchmarks cited in contexts where they have no relevance, and improvement programmes that chase OEE as a target rather than using OEE data to eliminate the losses that actually matter.
None of this should be taken as a critique of OEE itself. The metric, when properly defined, rigorously calculated, and honestly deployed, is a uniquely revealing instrument. It connects equipment operation to business outcomes in a way that no other single measure can. It surfaces the hidden capacity that is bleeding out of a production system every shift. And when it is embedded in the daily discipline of the shopfloor rather than confined to a monthly report, it becomes the primary feedback loop for a Total Productive Maintenance programme's improvement work. The purpose of this guide is to help practitioners get OEE to that point — and to be honest about the organisational and methodological traps that prevent most from getting there.
What OEE Actually Measures — and What It Does Not
OEE is a ratio: the comparison between a machine's actual output and what it could theoretically have produced in the same time if it had run perfectly. That framing contains the most important thing to understand about the metric before any calculation is attempted: OEE's value depends entirely on how "theoretically perfect production" is defined. Change the definition of the theoretical maximum, and you change every OEE score the metric produces. This is not a footnote — it is the central fact that separates rigorous OEE practice from OEE theatre.

The metric decomposes that overall ratio into three components: Availability, Performance, and Quality. Each component captures a different category of loss, and each maps directly to two of the Six Big Losses identified by the Japan Institute of Plant Maintenance (JIPM) — the framework that remains the definitive structure for equipment loss analysis.
Availability measures the proportion of planned production time during which the equipment was actually running. It captures downtime losses: the time the machine was available to run but was not running, for reasons including breakdowns, tooling failures, and unplanned maintenance (which together constitute Loss 1 in the Six Big Losses framework), and setup and adjustment time between production runs or product changeovers (Loss 2). Availability focuses entirely on whether the machine was running or stopped — it says nothing about how fast it was running or whether it was producing good parts when it did.
Performance compares the actual output rate during the time the machine was running against the ideal output rate at which it was designed to operate. It captures speed losses: minor stoppages and idling events — brief halts that are often not formally recorded but can collectively account for enormous amounts of lost production in automated processes (Loss 3) — and reduced speed running, which occurs when a machine is operated below its ideal cycle time due to quality concerns, operator caution, material variation, or equipment condition (Loss 4). Performance is the component that most often surprises practitioners, because speed losses are frequently invisible to the naked eye and absent from shift logs. A machine that runs continuously without a formal breakdown can still have Performance below 70% if its cycle time has drifted from the ideal.
Quality measures the proportion of total output that was produced to specification without rework. It captures quality losses: defects and rework produced during steady-state operation (Loss 5), and startup and yield losses associated with the beginning of a production run, after a changeover, or following a breakdown restart — the period during which the process is settling and producing off-specification product (Loss 6).
OEE = Availability × Performance × Quality.
The multiplication matters as much as the components themselves. Because the three factors are multiplied rather than averaged, the effect of any one below-par component is amplified by the others. A machine running at 85% Availability, 90% Performance, and 95% Quality produces an OEE of 72.7% — a figure that looks reasonable when each individual component seems acceptable. A machine at 90% Availability, 95% Performance, and 99% Quality produces 84.6%. The multiplication structure means that excellence in two components cannot compensate for significant weakness in a third, and it means that genuinely world-class performance requires sustained excellence across all three dimensions simultaneously. That is a much harder target than the component-level numbers suggest.
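For readers who prefer to see the arithmetic in code, here is a minimal Python sketch of the multiplication effect, using the two component sets quoted above (the function is illustrative, not part of any standard library):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """OEE is the product of the three components, not their average."""
    return availability * performance * quality

# The two component sets discussed above:
print(f"{oee(0.85, 0.90, 0.95):.1%}")  # 72.7% -- each component looks acceptable
print(f"{oee(0.90, 0.95, 0.99):.1%}")  # 84.6% -- excellence across all three

# The arithmetic mean of the first set would read 90.0%, which is exactly
# why averaging the components, rather than multiplying them, misleads.
```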
What OEE does not measure is equally important to understand, and I will return to this point in detail at the end of this guide. For now: OEE does not capture energy consumption, raw material yield losses that occur independently of equipment performance, scheduling inefficiency, labour productivity, or any losses that occur outside the defined planned production period. A plant can have high OEE and still be deeply inefficient in ways that OEE cannot see. Knowing what the metric does not capture is as important for good decision-making as knowing what it does.
How to Calculate OEE Correctly — and Where the Calculation Goes Wrong
The OEE formula is straightforward. Getting it right in practice is not, and the errors that produce inflated scores are consistent enough across plants that they are worth examining systematically.
Availability = Operating Time ÷ Planned Production Time
Planned Production Time is the starting point for the entire calculation. It is defined as total shift time less any time that has been deliberately excluded before production begins — specifically, planned shutdowns such as public holidays, scheduled preventive maintenance windows, and time when no production is planned. Everything else, in principle, belongs inside Planned Production Time.
The first and most consequential definitional error occurs here. Many plants exclude short breaks, meal breaks, planned maintenance, and cleaning time from Planned Production Time, which removes these periods from the OEE calculation entirely. The intent is usually to focus OEE on "pure" production time. The effect is to make OEE look better by eliminating periods during which the machine is not running and simply not counting them as losses. If a machine has a thirty-minute scheduled maintenance window every shift and that window is excluded from Planned Production Time, the OEE calculation never asks whether that maintenance time could be reduced or whether its productivity could be improved. The loss disappears from view.
The JIPM position — and the one I apply in my own consulting practice — is that Planned Production Time should include short breaks, meal breaks, planned changeovers, and planned maintenance, except for major scheduled shutdowns. If time can theoretically be used for value-creating production, it belongs inside the calculation. This approach exposes all losses and prevents the definitional narrowing that produces flattering but misleading scores. Where planned shutdowns must be excluded (genuine non-production periods), they should be formally designated and consistently applied — not adjusted shift-to-shift to make the numbers look better.
Operating Time is Planned Production Time minus Downtime. Downtime, in turn, must include all unplanned stoppages and setup and adjustment time. The practical difficulty is that setup and adjustment time is often under-recorded in shift logs: operators characterise the adjustment period after a changeover as part of the changeover itself, or log it informally without capturing the full duration. In my experience auditing OEE systems, this is one of the most common sources of Availability inflation — not deliberate gaming, but inconsistent recording practice that systematically under-counts the adjustment losses that follow every changeover.
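Both the definitional choice and its effect on the score are easy to demonstrate numerically. The following Python sketch contrasts the two treatments of a scheduled maintenance window; the shift figures are assumed for illustration and are not drawn from the worked example later in this guide:

```python
shift_minutes = 480      # 8-hour shift
planned_maint = 30       # scheduled maintenance window each shift
unplanned_downtime = 45  # breakdowns plus setup and adjustment, from the log

# Narrowed definition: the maintenance window is excluded from Planned
# Production Time, so it never appears as a loss.
ppt_narrow = shift_minutes - planned_maint                     # 450 min
avail_narrow = (ppt_narrow - unplanned_downtime) / ppt_narrow  # 90.0%

# JIPM-aligned definition: the window stays inside Planned Production Time
# and is counted as a loss, keeping the question "could this window be
# reduced?" visible.
ppt_jipm = shift_minutes                                       # 480 min
avail_jipm = (ppt_jipm - unplanned_downtime - planned_maint) / ppt_jipm  # 84.4%

print(f"narrowed: {avail_narrow:.1%}   JIPM: {avail_jipm:.1%}")
```

Same shift, same machine, and a 5.6-point difference in Availability produced entirely by the definition.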
Performance = (Ideal Cycle Time × Total Pieces Produced) ÷ Operating Time
Or equivalently: (Total Pieces ÷ Operating Time) ÷ Ideal Run Rate.
The Ideal Cycle Time — the fastest time in which the machine can theoretically produce one piece under optimal conditions — is the foundational reference point for Performance. It should reflect the machine's design specification or its demonstrated peak rate under controlled conditions, not the average rate observed during normal production, not the rate at which the machine is typically run, and not the rate that appears in the production schedule.
This distinction is where Performance inflation most commonly occurs. If the Ideal Cycle Time is set to match the current average production rate rather than the true equipment capability, Performance will always be close to 100% regardless of how much speed loss the machine is actually experiencing. I have encountered plants where what was called the "ideal" cycle time had been progressively relaxed over years, each adjustment rationalised by a specific quality or equipment concern, until the "ideal" was actually the average degraded rate — and Performance read 98% on a machine that was running at perhaps 70% of its actual design speed. The OEE looked excellent. The loss was invisible.
Establishing and defending the correct Ideal Cycle Time requires access to the original equipment specification, a willingness to confront the gap between designed and actual performance, and sometimes a targeted engineering investigation to understand what is preventing the machine from achieving its design speed. It is difficult and sometimes politically uncomfortable work. It is also essential.
Quality = Good Pieces ÷ Total Pieces
Good Pieces are those produced to specification without requiring rework or scrapping. Total Pieces is the full count including defects and rework. The critical point here is that rework counts against Quality even if the product is eventually brought to specification — the time spent reworking is captured as a Quality loss because it represents production capacity consumed without producing a conforming first-pass output.
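A short sketch makes the rework rule concrete. The counts below are hypothetical, and the function simply encodes the first-pass definition given above:

```python
def quality_rate(total_pieces: int, scrapped: int, reworked: int) -> float:
    """First-pass Quality: reworked pieces count as losses even if they
    are eventually brought to specification."""
    good_first_pass = total_pieces - scrapped - reworked
    return good_first_pass / total_pieces

# Hypothetical shift: 10,000 pieces, 120 scrapped, 250 reworked.
print(f"{quality_rate(10_000, 120, 250):.1%}")   # 96.3%
# Ignoring rework would report (10_000 - 120) / 10_000 = 98.8% -- a
# flattering overstatement of first-pass performance.
```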
A Worked Numerical Example
Consider a shift of 480 minutes (8 hours), with two 15-minute breaks and one 30-minute meal break. Following the JIPM approach, breaks are included in Planned Production Time, so the machine has 480 minutes of planned production time; in this example the line is relief-staffed and runs through breaks, so break time does not appear as downtime (if the machine stopped during breaks, that time would be logged as a loss). Downtime logged during the shift is 47 minutes (including a breakdown and a changeover adjustment). Operating Time is therefore 433 minutes.
The machine's Ideal Cycle Time is 1 second per piece (60 pieces per minute). During the shift, 19,271 total pieces were produced, of which 423 were defective.
Availability = 433 ÷ 480 = 90.2%
Performance = (19,271 ÷ 433) ÷ 60 = 44.5 pieces/minute ÷ 60 pieces/minute = 74.2%
Quality = (19,271 − 423) ÷ 19,271 = 18,848 ÷ 19,271 = 97.8%
OEE = 0.902 × 0.742 × 0.978 = 65.4%

Contrast this with what happens if the Ideal Cycle Time has been relaxed to 1.25 seconds per piece (48 pieces per minute): Performance becomes (19,271 ÷ 433) ÷ 48 = 92.7%, and OEE rises to 81.8% — an apparent 16-point improvement that reflects only a definitional change, not any improvement in actual machine performance. This is not hypothetical. It is the mechanism by which well-intentioned OEE programmes gradually drift into self-congratulation.
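The whole calculation is easily scripted, which also makes the definitional sensitivity easy to demonstrate and audit. This Python sketch reproduces the worked example above, including the effect of the relaxed Ideal Cycle Time:

```python
# Figures from the worked example above.
planned_production_time = 480            # minutes, breaks included
downtime = 47                            # breakdown plus changeover adjustment
total_pieces = 19_271
defective_pieces = 423
ideal_rate = 60                          # pieces/minute (1-second ideal cycle)

operating_time = planned_production_time - downtime           # 433 min
availability = operating_time / planned_production_time       # 90.2%
performance = (total_pieces / operating_time) / ideal_rate    # 74.2%
quality = (total_pieces - defective_pieces) / total_pieces    # 97.8%
print(f"OEE = {availability * performance * quality:.1%}")    # 65.4%

# The same shift with the "relaxed" ideal cycle time of 1.25 s (48 pcs/min):
performance_relaxed = (total_pieces / operating_time) / 48    # 92.7%
print(f"OEE = {availability * performance_relaxed * quality:.1%}")  # 81.8%
# A 16-point OEE "gain" from a definitional change alone; nothing on the
# machine improved.
```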
The OEE Score Trap — Why a Single Number Is Nearly Always Misleading
In a genuinely rigorous OEE deployment, the overall OEE score is the least important number the system produces. What matters is the decomposition of that score into its components and, through the components, into the specific losses that are driving it. An overall OEE of 72% tells you almost nothing actionable. An understanding that Availability is 90%, Performance is 85%, and Quality is 94% tells you more, but still not enough. Knowing that the primary Performance loss is a pattern of minor stoppages on the transfer mechanism, clustered in the first hour of each shift and peaking on Monday mornings after the weekend shutdown — that is the information that drives improvement.
The 85% world-class OEE benchmark deserves particular scrutiny. The figure originates from Seiichi Nakajima's original JIPM TPM work, in which he proposed component-level world-class targets of 90% Availability, 95% Performance, and 99% Quality for discrete manufacturing — values whose product is approximately 85%. The benchmark has since been repeated so frequently and in so many contexts that it has acquired an authority its origins do not justify. It was never a universal standard. It reflects a specific era of manufacturing technology and a specific set of industry conditions. As I note in the OEE Benchmarking guide in this cluster, the appropriate OEE benchmark depends on the process type, the product mix, the machine design, and the operating context — and a number that represents genuine world-class performance in a high-volume discrete manufacturing operation may represent a mediocre result for a highly automated continuous process, or an unrealistically ambitious target for a complex, low-volume job-shop environment.

More damaging than the wrong benchmark, however, is the habit of treating any OEE number as a performance target rather than a diagnostic output. The moment OEE becomes a target that managers are accountable for hitting, the incentive structure of the organisation shifts — away from honest loss measurement and toward score optimisation. This is not a hypothetical risk. It is the standard outcome when OEE is deployed with the wrong intent. Planned downtime gets reclassified. The Ideal Cycle Time gets relaxed. The measurement window gets chosen to favour better-performing periods. The overall score rises. The actual equipment losses remain, unaddressed, because the organisational pressure to make the number look good has overwhelmed the technical purpose of making the losses visible.
Comparing OEE scores across machines, production lines, plants, or industries compounds the problem. A meaningful OEE comparison requires that the definitions of Planned Production Time, Ideal Cycle Time, downtime categories, and quality measurement are identical across the units being compared. In practice, they almost never are. Organisations that benchmark their OEE against industry averages or peer-company figures are, with very few exceptions, comparing numbers that have been calculated under materially different assumptions — a comparison that produces neither insight nor meaningful action. If OEE benchmarking is to serve a useful purpose, the methodological alignment must come before the comparison, not after.
The right relationship with an OEE score is sceptical and questioning: not "is our OEE good enough?" but "what is our OEE telling us about where our losses are concentrated, and which of those losses can we most effectively address?"
OEE as a Loss Analysis Tool — From Measurement to Improvement Action
The Six Big Losses framework is the bridge between an OEE score and an improvement programme. Each of the six losses maps to an OEE component, has a specific set of root causes, and responds to specific improvement approaches. Using OEE data without this mapping is like reading a blood test result without understanding what the individual markers measure — the number may be interesting, but it does not direct action.

Breakdown losses and setup and adjustment losses, which drive Availability, are addressed primarily through the Planned Maintenance and Focused Improvement pillars of TPM. Breakdowns that recur around specific components respond to component-level preventive maintenance schedules, condition-based monitoring, and, for genuinely chronic failures, the rigorous physical analysis methodology known as P-M Analysis. Setup and adjustment losses respond to SMED — the Single-Minute Exchange of Die methodology — which converts internal setup activities (those that can only be done while the machine is stopped) to external ones (those that can be done while the machine is running), and streamlines whatever internal setup remains.
Minor stoppages and speed losses, which drive Performance, are the most challenging category to address precisely because they are the least visible. Minor stoppages — events that clear themselves quickly or that operators resolve with a brief manual intervention — are frequently not recorded at all in conventional shift logs. The cumulative impact can be enormous: a machine that experiences three minor stoppages per hour, each lasting two minutes, loses six minutes of operating time per hour to events that never appear in any formal downtime record. Addressing minor stoppages requires a period of intensive observation — operators deliberately monitoring and recording every stoppage event regardless of duration — followed by Pareto analysis to identify the most frequent occurrence patterns and systematic investigation of their root causes, which are most often contamination, alignment issues, or component wear rather than the symptom that triggers the stoppage.
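A simple Pareto tally is often all the analysis the observation data needs. The log below is entirely hypothetical; the point is to rank causes by total minutes lost, since frequency alone can understate a rarer but longer stoppage:

```python
from collections import Counter

# Hypothetical one-week observation log: every stoppage recorded
# regardless of duration, tagged with the observed cause. (cause, minutes)
stoppage_log = [
    ("sensor misread / contamination", 2.0),
    ("part jam at transfer", 1.5),
    ("sensor misread / contamination", 2.5),
    ("guide rail misalignment", 3.0),
    ("sensor misread / contamination", 1.0),
    ("part jam at transfer", 2.0),
    ("vacuum loss", 4.0),
    ("sensor misread / contamination", 2.0),
]

# Accumulate minutes lost per cause, then rank largest first.
minutes_by_cause = Counter()
for cause, minutes in stoppage_log:
    minutes_by_cause[cause] += minutes

for cause, minutes in minutes_by_cause.most_common():
    print(f"{minutes:5.1f} min  {cause}")
```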
Speed losses, meanwhile, require a disciplined return to first principles: what is the designed operating speed of this machine, and what specifically prevents it from running at that speed? In many plants I have worked with, the current operating speed has been reduced incrementally over time in response to quality problems, equipment wear, or operator caution, and nobody can clearly articulate the original reason or whether the limitation still applies. Restoring speed requires both engineering work to address the underlying condition issues and, often, a structured programme of equipment restoration to return the machine to its designed capability — which connects directly to the Autonomous Maintenance pillar's work on restoring basic equipment conditions.
Quality losses, which drive the Quality component, connect to the Quality Maintenance pillar — the systematic identification and control of the equipment and process conditions that determine product quality. Quality maintenance begins with understanding the relationship between machine condition and product quality characteristics: which parameters, when they drift from their optimal values, predictably produce defects? Once that relationship is mapped, quality maintenance establishes condition standards and monitoring protocols that prevent equipment-driven quality variation before it produces defective output.
The distinction between chronic and sporadic losses is critical for interpreting OEE data. Sporadic losses — sudden, dramatic deviations from normal operating conditions, like a major breakdown — are visible, disruptive, and usually receive immediate attention. Chronic losses — the background level of minor stoppages, the consistent 5% speed deficit from ideal, the steady low-level defect rate that has been "normal" for years — are insidious precisely because their consistency makes them invisible. They are part of the furniture. In a plant that has measured OEE for several months, the sporadic losses show up as OEE spikes; the chronic losses define the floor from which the spikes depart. Focused Improvement's highest-value work is almost always on the chronic losses, not the sporadic ones — because chronic losses, sustained over months and years, typically represent far more total lost production than the dramatic events that dominate shift debrief discussions.
Deploying OEE on the Shopfloor — The Discipline That Makes Data Honest
A well-designed OEE data collection system is not primarily a technical problem. It is an organisational and cultural one. The accuracy and completeness of OEE data depends on whether the people collecting it — predominantly operators and line supervisors — trust that the data will be used to improve their work rather than to evaluate their performance. Where that trust is absent, data quality deteriorates predictably: downtime events are under-recorded, minor stoppages go uncounted, and the numbers that flow into the system reflect what operators believe management wants to see rather than what is actually happening on the shopfloor.
This is worth being direct about. In the vast majority of plants where I have been called in to audit OEE systems, the data quality problem is not a training or technology problem. It is a consequence of deploying OEE as a performance metric before the organisation has built the trust and the clarity of purpose that honest data collection requires. When operators believe that a high number will be praised and a low number will trigger scrutiny or blame, the incentive to record every stoppage and every quality loss honestly is replaced by the incentive to present the best defensible number. The OEE system continues to run, data continues to be collected, and reports continue to be produced — but the information content of those reports is systematically degraded.
The discipline of honest OEE deployment begins with how the metric is introduced and framed. OEE should be positioned explicitly and consistently as a loss measurement tool — a means of making visible where improvement is needed, not a report card on operator or machine performance. Improvement in OEE should be celebrated not as an end in itself but as a consequence of specific losses that have been eliminated through identifiable actions. This framing matters enormously for how frontline teams engage with the data.
It also matters who collects the data. In plants where OEE measurement is handled by an engineering team or management function, detached from the operators who run the machines, the data tends to be retrospective, incomplete, and disconnected from the improvement response. In plants where operators are directly involved in recording production data, logging downtime events, and participating in the calculation, something different happens: operators develop an intimate understanding of where their machine's losses are concentrated, they see directly the connection between their equipment's condition and its performance, and they have a genuine stake in the improvement activities that follow.
The shift cadence of OEE review is a practical reflection of this principle. A monthly OEE report is almost useless for driving improvement — by the time the data is reviewed, the specific events and conditions that drove the numbers are two to four weeks in the past, the people who observed them have moved on, and no meaningful action is possible. Daily OEE review at the shopfloor level — supported by a simple visual display updated each shift, reviewed in a brief team discussion at shift start or shift end — creates the feedback loop that makes OEE data actionable. The questions are immediate: what drove yesterday's OEE? What was the biggest loss? What will we do about it today? This daily discipline, sustained over time, is what separates an OEE system that drives improvement from one that generates reports.
Visual management is the infrastructure that makes daily OEE review possible. An OEE board — typically located adjacent to the equipment it covers, updated by the operator at the end of each shift, and reviewed by the area team at the start of each day — creates a shared, visible, and current picture of equipment performance. The board should show not just the overall OEE and its three components but the specific losses by category: how many minutes of breakdown, how many changeovers and their durations, how many minor stoppages and speed loss events, how many defects and startup losses. This level of detail makes the loss pattern visible and supports the Pareto-driven prioritisation that directs improvement work to the highest-impact opportunities.
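To make the board's loss breakdown comparable across categories, it helps to express every loss in equivalent lost minutes. The sketch below does this for the worked example from earlier in this guide; the split of the 47 downtime minutes between breakdown and changeover is assumed for illustration:

```python
ideal_rate = 60          # pieces/minute, from the worked example
total_pieces = 19_271
defective_pieces = 423
operating_time = 433     # minutes

losses_minutes = {
    "breakdown": 29,                                       # assumed split
    "setup & adjustment": 18,                              # assumed split
    "speed & minor stoppages": operating_time - total_pieces / ideal_rate,
    "defects & startup": defective_pieces / ideal_rate,
}

print("Biggest losses first:")
for category, minutes in sorted(losses_minutes.items(),
                                key=lambda kv: kv[1], reverse=True):
    print(f"  {minutes:6.1f} min  {category}")
```

On this shift the speed losses (about 112 minutes-equivalent) dwarf everything else, which is exactly the Pareto signal that should direct the next improvement action.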
OEE and Equipment Strategy — How the Data Should Inform Maintenance Decisions
When OEE data is collected consistently and decomposed into its component losses, it becomes a powerful instrument for equipment strategy — the decisions about how maintenance resources are allocated, which improvement pillars are most relevant for a given machine, and how to prioritise improvement investment across a mixed equipment base.
The pattern of Availability losses carries the most direct implications for maintenance strategy. A machine with frequent, short breakdowns distributed across many component types is telling a different story from a machine with infrequent but extended breakdowns concentrated in a single subsystem. The former often indicates a basic conditions problem — accumulated contamination, inadequate lubrication, loose fixings — the kind of deterioration that Autonomous Maintenance is designed to prevent through operator-led cleaning, inspection, and basic maintenance. The latter is more likely to call for engineering-led root cause analysis and component-specific preventive or predictive maintenance intervals.
Changeover and setup losses in the Availability component connect most directly to SMED and the Planned Maintenance pillar's role in coordinating planned downtime efficiently. A machine with significant setup losses that have not been subjected to SMED analysis is losing planned production time to a source that can, in most cases, be substantially reduced. The connection between setup loss data and SMED deployment is direct and actionable: the OEE data tells you how much you are losing; SMED provides the methodology to reduce it.
Performance losses — particularly minor stoppages — are the signature indicator of Autonomous Maintenance maturity. In a plant where AM is working, operators have developed the inspection and detection skills to catch the early signs of contamination, wear, and abnormal conditions that cause minor stoppages before those conditions produce events. The minor stoppage rate on a machine whose operators have completed the early steps of AM and are running a functioning daily cleaning and inspection regime should be systematically lower than the rate on a comparable machine without that coverage. Tracking minor stoppage rates across equipment with and without AM coverage creates a practical measure of AM effectiveness and, in my experience, one of the most compelling internal arguments for sustained investment in the AM programme.
Quality losses in the OEE data should connect directly to the Quality Maintenance pillar. Equipment-driven quality defects are the symptom; the underlying cause is a process condition — a parameter, a component, a surface condition — that has drifted outside its optimal range. Quality Maintenance's role is to identify and control the equipment conditions that determine product quality outcomes, so that quality is built in through maintained equipment conditions rather than inspected out through downstream defect detection. The OEE Quality component tracks the result; Quality Maintenance addresses the cause.
For plants with a mixed equipment base — which is to say, most plants — OEE data provides the quantitative foundation for improvement prioritisation. The question of which machine should receive Focused Improvement attention first, where AM resources should be concentrated, and where investment in predictive maintenance technology would produce the highest return is, in principle, answerable through systematic OEE loss analysis. The machine with the highest product of loss rate and business impact — measured in terms of lost output, rework cost, or constraint-criticality — is the right starting point. This is a more reliable basis for prioritisation than intuition, seniority politics, or the last machine to have a dramatic breakdown.
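A minimal prioritisation sketch, with an entirely hypothetical equipment base; the impact weights stand in for whatever measure of business impact the plant uses (lost-output value, rework cost, constraint-criticality):

```python
# (machine, loss rate = 1 - OEE, business impact weight). All figures assumed.
machines = [
    ("moulding press 3", 1 - 0.62, 1.0),   # highest loss, non-constraint
    ("CNC cell 1",       1 - 0.78, 2.5),   # moderate loss, line constraint
    ("packaging line 2", 1 - 0.81, 0.8),
]

# Rank by loss rate x business impact, as described above.
for name, loss, impact in sorted(machines, key=lambda m: m[1] * m[2],
                                 reverse=True):
    print(f"{loss * impact:5.2f}  {name}")
```

Note the result: the constraint machine with moderate losses outranks the machine with the worst OEE, which is precisely why raw OEE scores and intuition make poor prioritisation tools.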
The Limits of OEE — What the Metric Cannot See
A practitioner who understands OEE thoroughly also understands its limits, and deploys it accordingly rather than treating it as a complete picture of equipment or process performance.
OEE measures what happens during planned production time. It has nothing to say about how much planned production time there is relative to total calendar time — a distinction captured by metrics like Overall Equipment Utilisation (OEU) or Overall Asset Effectiveness (OAE), which incorporate the scheduling efficiency of the equipment base. A machine that runs at 85% OEE during its planned production windows but is only scheduled for one shift out of three is performing differently from a machine at 85% OEE running three shifts — OEE cannot see this difference.
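The one-shift-of-three case is worth quantifying. The sketch below computes a simple calendar-based effectiveness figure; the exact definitions of OEU and OAE vary between organisations, so treat this as illustrative rather than canonical:

```python
calendar_minutes = 3 * 480   # a three-shift day
planned_minutes = 1 * 480    # only one shift actually scheduled
oee = 0.85                   # excellent OEE within the scheduled window

utilisation = planned_minutes / calendar_minutes       # 33.3% of calendar time
calendar_effectiveness = oee * utilisation             # ~28.3%
print(f"OEE {oee:.0%}, calendar effectiveness {calendar_effectiveness:.1%}")
```

An 85% OEE machine scheduled for one shift delivers barely 28% of its calendar capacity; OEE alone can never reveal that.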
OEE measures process conformance — whether output meets specification — but not the underlying process capability that determines how robustly specifications are met. A process running at 97% Quality Rate might be operating comfortably within its capability limits, or it might be barely maintaining conformance through vigilant operator intervention. OEE cannot distinguish these situations; process capability metrics (Cpk and its variants) are needed for that analysis.
OEE does not capture raw material yield losses that occur independently of equipment performance — cases where materials are consumed or wasted in ways that do not directly reduce machine output rate or quality but nonetheless represent a significant cost. Nor does it capture energy consumption, which in energy-intensive processes like semiconductor fabrication, chemical processing, or metalworking can be as significant a performance variable as equipment uptime. These losses require separate measurement frameworks.
Labour productivity — the efficiency with which operators, technicians, and maintenance staff are deployed — is outside OEE's scope entirely. A machine that produces excellent OEE numbers while consuming excessive maintenance labour hours, or whose OEE improvement has been achieved by adding operators to manually manage around equipment problems, represents a different economic situation from one where the same OEE is achieved with standard staffing levels. Total Equipment Effectiveness (TEE) and related expanded metrics attempt to incorporate some of these dimensions, but they introduce their own definitional complexities and are less widely validated.
The practical implication is that OEE should be deployed as part of a balanced measurement framework, not as a standalone proxy for overall operational performance. It is an excellent diagnostic tool for equipment loss analysis. It is a poor substitute for a comprehensive production performance measurement system. The plants that get the most from OEE are those that are clear-eyed about both its power and its boundaries — that use it rigorously to drive loss elimination in the equipment domain, while maintaining complementary measures for the dimensions of performance it cannot see.
From Measurement to Practice — The Discipline OEE Actually Requires
OEE is not hard to calculate. It is hard to calculate honestly, consistently, and with enough rigour in the underlying definitions to produce numbers that are genuinely comparable over time and genuinely useful as improvement inputs. And it is harder still to deploy in the organisational conditions — the daily review discipline, the honest data culture, the connection between loss data and improvement action — that make it a living instrument rather than a monthly reporting exercise.
Across three decades of working with manufacturers in the region, I have found that the plants where OEE has delivered sustained improvement share a common set of characteristics. They measure OEE at the machine level rather than averaging it across lines or plants. They have defined their OEE parameters — Planned Production Time, Ideal Cycle Time, quality counting methodology — with precision, documented those definitions, and applied them consistently. They have invested in making data collection straightforward for operators, and they have built a daily review discipline that connects yesterday's loss data to today's improvement response. They treat their OEE score as a diagnostic output rather than a performance target, and they resist the organisational pressure to optimise the number at the expense of the information it contains.
Most importantly, they have recognised that OEE improvement is a consequence, not a goal. The goal is the elimination of specific, identifiable losses — breakdowns, speed deficits, minor stoppages, quality failures, excessive changeover times — through the systematic application of TPM's improvement pillars. When those losses are eliminated, OEE rises. When OEE is chased directly, losses tend to hide.
That distinction — between chasing OEE and eliminating losses — is the most important thing a plant manager or TPM programme director can internalise. Every element of good OEE practice flows from it: honest data collection, rigorous calculation, loss-focused analysis, pillar-connected improvement actions, and the daily discipline that keeps equipment performance data visible and actionable at the shopfloor level. OEE, deployed with that intent and that discipline, is one of the most powerful improvement tools available to a manufacturing organisation. Deployed as a reporting metric, it is a significant investment in producing numbers that tell a flattering story while the losses accumulate, unaddressed, behind the scenes.
The question is not whether your OEE is good enough. The question is what your OEE data is telling you about where your losses are — and what you are doing about them.
About the Author

Allan Ung is the Founder and Principal Consultant of Operational Excellence Consulting, a Singapore-based management training and consulting firm established in 2009. With over 30 years of experience leading operational excellence and quality transformation in manufacturing-intensive environments, Allan's expertise spans Lean Thinking, Total Quality Management (TQM), TPM, TWI, ISO systems, and structured problem solving.
He is a Certified Management Consultant (CMC, Japan), Lean Six Sigma Black Belt, JIPM-certified TPM Instructor (Japan Institute of Plant Maintenance), TWI Master Trainer, ISO 9001 Lead Auditor, and former Singapore Quality Award National Assessor.
During his tenure with Singapore's National Productivity Board (now Enterprise Singapore), Allan pioneered Cost of Quality and Total Quality Process initiatives that enabled companies to reduce quality costs by up to 50 percent. In senior regional and global roles at IBM, Microsoft, and Underwriters Laboratories, he led Lean deployment, quality system strengthening, and cross-border operational transformation.
Allan has facilitated TPM, OEE and Lean programmes for organisations including Temic Automotive (Continental), Analog Devices, Amkor Technology, STATS ChipPAC, Infineon Technologies, Panasonic, Micron, Lam Research, Tokyo Electron, Dorma, and NEC. He holds a Bachelor of Engineering (Mechanical Engineering) from the National University of Singapore and completed advanced consultancy training in Japan as a Colombo Plan scholar.
His philosophy: "Manufacturing excellence is achieved through disciplined systems, capable leadership, and sustained execution on the shopfloor."
His practitioner-led toolkits have been used by managers and organisations across Asia, Europe, and North America to build Design Thinking and Lean capability and drive organisational improvement.
Related Articles in the TPM Practitioner Guide Series
This article is part of the OEC TPM Practitioner Guide Series, a structured cluster of practitioner-level articles covering Total Productive Maintenance in depth.
Total Productive Maintenance (TPM): A Practitioner's Guide — The hub article covering all eight pillars of TPM, the philosophy behind zero breakdowns, zero defects, and zero accidents, and what it takes to build a sustainable TPM culture.
Published Spoke Articles:
OEE Benchmarking: A Practitioner's Guide — How to measure, compare, and improve Overall Equipment Effectiveness using structured benchmarking methods drawn from three decades of plant-level experience.
Autonomous Maintenance: A Practitioner's Guide — The seven-step AM development pathway, the AM-PM partnership that makes both pillars work, and the practitioner discipline required to build genuine operator ownership of equipment condition.
Planned Maintenance: A Practitioner's Guide — How to design and implement a planned maintenance system that complements Autonomous Maintenance and progressively eliminates unplanned downtime.
Quality Maintenance (Hinshitsu Hozen): A Practitioner's Guide — The eight-step methodology for achieving zero defects by establishing and maintaining the precise 4M conditions required to prevent defect generation at the source.
Focused Improvement (Kobetsu Kaizen): A Practitioner's Guide — The methodology for targeting, analysing, and eliminating the specific equipment and process losses that hold OEE back.
TPM Self-Assessment and the TPM Excellence Award: A Practitioner's Guide — How to conduct a JIPM-aligned TPM self-assessment, what the award criteria reveal about PM and AM maturity, and how to use self-assessment findings to drive a structured improvement roadmap.
Build TPM Capability in Your Organisation
At Operational Excellence Consulting, I deliver customised TPM and OEE workshops and implementation programmes for manufacturing organisations across Singapore and the Asia-Pacific region — from foundational two-day workshops to multi-year TPM implementation support, facilitated by a JIPM-certified TPM Instructor.
👉 To explore our TPM training courses and practitioner-led resources, or for enquiries about TPM implementation, OEE benchmarking or operational excellence consulting, visit www.oeconsulting.com.sg or contact us directly through the OEC website.
Operational Excellence Consulting offers a full catalogue of facilitation-ready training presentations and practitioner toolkits covering Lean, Design Thinking, and Operational Excellence. These resources are developed from real workshops and transformation projects, helping leaders and teams embed proven frameworks, strengthen capability, and achieve sustainable improvement.
👉 Explore the full library at: www.oeconsulting.com.sg/training-presentations
© Operational Excellence Consulting. All rights reserved.
