Focused Improvement (Kobetsu Kaizen): A TPM Practitioner Guide to Building an Improvement System That Actually Delivers
- May 8
By Allan Ung | Founder & Principal Consultant, Operational Excellence Consulting
Updated: 13 May 2026

Allan Ung is the Founder and Principal Consultant of Operational Excellence Consulting (OEC), a Singapore-based management consultancy established in 2009. With over 30 years of experience leading operational excellence and quality transformation across manufacturing, technology, and global operations — including senior roles at IBM, Microsoft, and Underwriters Laboratories (UL) across Asia-Pacific — Allan brings deep shopfloor and strategic expertise to every engagement. He holds the following qualifications and recognitions: Certified Management Consultant (CMC, Japan), Certified Lean Six Sigma Black Belt, JIPM-certified TPM Instructor, TWI Master Trainer, and former National Examiner for the Singapore Business Excellence Award. Allan has designed and facilitated TPM implementations and operational excellence programmes for organisations across semiconductor, automotive, industrial manufacturing, logistics, and public sectors. His clients include Temic Automotive (Continental), Analog Devices, Amkor Technology, STATS ChipPAC, Panasonic, Micron, Lam Research, Infineon Technologies, Dorma, and Tokyo Electron, as well as Singapore government ministries and statutory boards.
What Most Plants Call Kaizen — and Why It Is Not Focused Improvement (Kobetsu Kaizen)
Every manufacturing plant I have worked with runs improvement activities. They hold kaizen events. They form teams, post before-and-after photos on activity boards, celebrate results at the end of a project week, and move on. Some of these events produce real and lasting gains. Many produce gains that erode quietly over the following months as standards are not upheld, the root cause is not fully addressed, and the same problem resurfaces in a slightly different form. Almost none of these activities are connected to each other by a common loss register, a structured project pipeline, or a systematic mechanism for spreading the solution to similar equipment elsewhere in the plant.
This is the uncomfortable truth about kaizen in most manufacturing environments: most of it is not a system. It is a collection of events — valuable in isolation, but entirely insufficient as a strategy for sustained OEE improvement. The distinction between an organisation that runs improvement events and one that operates a Focused Improvement programme is the central insight of this article, and it is the point the architects of Total Productive Maintenance at JIPM were most deliberate about when they elevated Focused Improvement to one of the mandatory pillars of the TPM framework.
Kobetsu Kaizen — the Japanese term that translates literally as "individual" or "focused" improvement — is not simply kaizen with a more structured methodology. It is improvement activity anchored in a comprehensive loss accounting framework, driven by quantified loss data, executed through a structured eight-step project methodology, and sustained through standardisation and horizontal deployment. When a plant has a genuine Focused Improvement programme, the improvement team is not choosing its next project based on what the production manager flagged in last week's review meeting. It is choosing based on a loss-cost matrix that tells it, in financial terms, exactly which loss category is consuming the most value in the operation — and it is managing a pipeline of three to five active projects simultaneously, each registered with the TPM Promotion Office, each working through a defined methodology, and each feeding its results back into the plant's maintenance and operating standards when it concludes.
Most plants are not there. This guide is about understanding what a genuine Focused Improvement system looks like, how to build one, and what it actually takes to sustain it.
Focused Improvement Within the TPM Framework
To understand what Focused Improvement is, it helps to understand why JIPM made it a standalone pillar rather than treating improvement as a background activity embedded in Autonomous Maintenance or Planned Maintenance. The reasoning is strategic, not structural.
In TPM's foundational design, every manufacturing organisation needs two complementary capabilities: the ability to maintain standards and the ability to improve them. Autonomous Maintenance addresses the first — by developing operators who own the basic condition of their equipment, who can detect abnormalities early, and who prevent the accelerated deterioration that comes from dirt, contamination, loose fasteners, and inadequate lubrication. Planned Maintenance addresses the reliability dimension — ensuring that equipment receives the right maintenance at the right time based on failure mode analysis and condition monitoring. But neither pillar, however well implemented, produces systematic improvement in the loss landscape. They maintain and protect what exists. They do not systematically challenge the loss levels that have been accepted as normal.
That challenge is what Focused Improvement is for. As the OEC slide deck frames it, Autonomous Maintenance upholds the gains of improvement over the long term, while Focused Improvement is the engine that generates those gains in the first place. The two pillars are deeply interdependent — and the OEC framework makes the sequence explicit: Focused Improvement should be implemented after organisations have attained a basic level with Autonomous and Planned Maintenance, precisely because you need a stable foundation of basic equipment condition before you can accurately identify which chronic losses remain and pursue them with analytical rigour.
The word "focused" is doing important work here. It refers not merely to the intensity of the improvement effort — though FI projects do require dedicated team time and analytical depth — but to the deliberate focus on specific, prioritised losses rather than a diffuse response to whatever problems happen to be visible on a given day. The word "improvement" distinguishes the activity from maintenance: FI goes beyond restoring equipment to its designed condition and directly improves equipment performance, giving teams tools to free processes from chronic losses and the effects of design weaknesses. This is why the JIPM framework positions FI as a cross-functional management activity — not a departmental initiative — because eliminating chronic losses at the level of sophistication that Focused Improvement demands requires the combined expertise of maintenance, production, engineering, and quality functions working together under a structured project methodology.
The 16 Major Losses: The Foundation for Rational Theme Selection
Focused Improvement is anchored in the 16 major losses — the comprehensive framework that TPM uses to account for all the ways in which a manufacturing operation falls short of its theoretical maximum output. Understanding the 16 losses is not optional background knowledge for an FI practitioner. It is the prerequisite for every rational decision about where to focus improvement effort.
The 16 losses are organised across three categories. The first category covers equipment losses — the losses that directly reduce OEE: breakdown loss, set-up and adjustment loss, cutting blade replacement loss, start-up loss, minor stoppage and idling loss, speed reduction loss, defect and rework loss, and shutdown loss. Each of these maps to one of the three OEE factors: breakdowns, set-up and adjustment, and blade replacement reduce Availability; minor stoppages and speed reduction reduce Performance; defects, rework, and start-up losses reduce Quality. The second category covers manpower losses — management loss (waiting time from management failures to provide materials, instructions, or resources), motion loss (unnecessary operator movement from poor layout), line organisation loss (insufficient operator coverage), logistics loss (inefficient material delivery), and measurement and adjustment loss (frequent inspection and correction necessitated by process instability). The third category covers resource losses — yield loss, energy loss, and die/tool/jig losses — which represent consumption of materials and resources beyond what sound processes would require.

The reason this framework matters so much to Focused Improvement is that it provides the universal language for loss quantification. Without the 16 losses framework, an improvement team has no common basis for comparing the relative magnitude of different problems. Is the recurring defect rate in Final Grinding more costly than the speed reduction loss on the transfer line? Is the management loss from waiting for spare parts more significant than the start-up loss on Line 3? These questions cannot be answered by observation or intuition alone. They require a loss accounting system that converts each category of loss into time and cost terms, creating a loss-cost matrix that enables Pareto analysis of the improvement landscape.
The OEE metric provides the primary instrument for this analysis on the equipment side. OEE — the product of Availability, Performance, and Quality rates — disaggregates the total equipment loss into its constituent components, making it possible to identify not just that a machine is underperforming, but precisely which category of loss is responsible for the largest share of the gap between actual and ideal output. A machine running at 72% OEE tells you very little. A machine running at 72% OEE because its Performance rate is 76% — driven primarily by speed reduction loss — while its Availability and Quality rates are both above 90% tells you exactly where to focus the FI analysis.
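The arithmetic behind that diagnosis is simple enough to sketch directly. The factor rates below are hypothetical, chosen only to reproduce the 72% example above:

```python
# Hypothetical OEE decomposition. All factor rates are invented
# to mirror the 72% example discussed in the text.

def oee(availability: float, performance: float, quality: float) -> float:
    """OEE is the product of the three factor rates."""
    return availability * performance * quality

availability = 0.95   # uptime / planned production time
performance = 0.76    # actual output rate / ideal rate
quality = 0.9975      # good units / total units processed

print(f"OEE = {oee(availability, performance, quality):.1%}")
# With Availability and Quality both above 90%, the 76% Performance
# rate isolates speed-related losses as the place to focus FI analysis.
```

The point of the decomposition is exactly what the paragraph above argues: the headline OEE number says little, while the factor that drags it down says where to look.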
I am frequently asked, in the early stages of a TPM implementation, how to select FI themes when the data infrastructure to do it rigorously does not yet exist. It is a legitimate question, because the honest reality is that when most plants launch a Focused Improvement programme, they do not have a functioning OEE system, a completed loss register, or stratified breakdown records sorted by frequency and duration. They have some production data, some maintenance records of variable quality, and a set of observations from the production floor that reflect what is most visible rather than what is most costly.
The practical answer is to begin with whatever data is available — even imperfect data points to the high-loss areas more reliably than unaided intuition — while building the data infrastructure in parallel. Start with the bottleneck process. Measure failures, defects, and losses from the data that exists. Construct baselines from direct observation where records are absent. Use those baselines to set initial improvement targets. The first FI projects will be imperfect in their grounding, and that is acceptable. What matters is that the team develops the habit of data-driven decision-making and builds the analytical infrastructure through the act of using it. By the second and third project cycle, the loss accounting framework will be substantially more robust — and the theme selection correspondingly more rigorous.
The Eight-Step Project Structure: What Distinguishes a Kobetsu Kaizen Project
A Focused Improvement project follows a structured eight-step methodology that maps onto the Plan-Do-Check-Act cycle. Understanding this structure in full is important, because what distinguishes a Kobetsu Kaizen project from a maintenance fix or a quick kaizen event is precisely the rigour with which each step is executed — and the discipline of not skipping the analytical steps when production pressure is high.

The first step — setting the improvement topic — is where the project is formally constituted. The team selects a topic from the loss landscape, registers it with the TPM Promotion Office, forms a project team, establishes the team's rules of engagement, assigns responsibilities for each loss category within the theme, and creates a project schedule that is reviewed by senior management. This last requirement — management review at project initiation — is not bureaucratic formality. It serves two essential functions: it ensures that the team has the authority and resources it will need to implement solutions, and it creates the management accountability that prevents the project from being quietly shelved when production pressure intensifies.
The second step — understanding the situation — requires the team to identify the relevant line, process, or equipment, quantify the failures, defects, and losses through direct measurement and confirmed historical data, and establish baselines that will anchor the before-and-after comparison when results are verified. The OEC framework is specific about the importance of zero-loss thinking in target setting at this stage: goals that merely aspire to incremental improvement are insufficient. Where TPM sets targets for equipment losses, the aspiration is explicit — breakdowns should be zero, minor stoppages should be zero, defect rates should be expressed in parts per million. The gap between the current state and these targets defines the improvement opportunity, and the team must be able to explain, in financial terms, what closing that gap is worth.
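A minimal sketch of that financial framing, using invented loss hours and an assumed cost rate, might look like this:

```python
# Hypothetical gap valuation for Step 2. All hours, targets, and the
# cost rate are invented for illustration.

losses = {
    # loss category: (current hours lost per month, zero-loss target hours)
    "breakdowns":      (42.0, 0.0),
    "minor stoppages": (18.5, 0.0),
    "defects/rework":  (11.0, 0.5),  # target expressed at near-zero (ppm level)
}
cost_per_hour = 1_200  # assumed contribution margin lost per hour

for name, (current, target) in losses.items():
    gap_value = (current - target) * cost_per_hour
    print(f"{name}: closing the gap is worth ${gap_value:,.0f}/month")
```

The zero-loss targets make the gap, and hence the value of closing it, explicit rather than aspirational.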
The third step — exposing and eliminating abnormalities — is where FI diverges most clearly from a conventional improvement approach. Before beginning root cause analysis, the team conducts a thorough inspection of the equipment, cleaning it to expose hidden problems, restoring deterioration and correcting minor flaws, and establishing the basic equipment condition with the help of visual controls. This step borrows directly from Autonomous Maintenance Step 1, and the connection is intentional: you cannot reliably identify the root causes of chronic losses on equipment that is in a state of accelerated deterioration. Many apparent root causes of chronic problems dissolve once basic equipment condition is restored — which is exactly why the FI framework insists on this restoration before committing analytical resources to cause investigation.
Steps four and five are the analytical heart of the project. Step 4 stratifies and analyses the possible causes of the targeted losses, applying the team's toolkit of analytical methods — cause-and-effect analysis, five-why analysis, P-M analysis, FMEA, and others — to verify root causes and determine where countermeasure effort will be most effective. Step 5 develops the improvement proposals: generating alternatives, comparing their cost-effectiveness, conducting a design FMEA on the proposed solution to identify potential risks, securing management approval, and preparing the implementation plan with sufficient precision that every team member knows exactly what they are responsible for and when.
Steps six through eight execute, verify, and lock in the gains. Implementation proceeds according to plan, with trial runs before full deployment. Results are evaluated against baselines using consistent measurement methods, with Pareto or bar chart comparisons making the before-and-after story visible. When targets are achieved, Step 8 — consolidating gains — prepares the operating and maintenance standards that encode the improvement into the plant's standard work, trains operators on the new standard, and — critically — plans the replication of the proven countermeasure to similar equipment elsewhere in the plant. When targets are not achieved, the team returns to the appropriate earlier step rather than accepting a partial result and moving on.
The eight-step sequence sounds straightforward. In practice, the steps that most frequently receive inadequate attention are Steps 3 and 4 (restoration and root cause analysis), precisely because they require the most time and technical rigour, and because the pressure to "get to the solution" is almost always present. A team that skips the thorough abnormality inspection and goes straight to root cause analysis is working on a dirty baseline. A team that conducts a superficial five-why exercise and accepts the first technically defensible answer is not finding root causes — it is finding convenient stopping points. Both shortcuts produce countermeasures that address symptoms rather than causes, and the same problems return.
Theme Selection and the Loss Prioritisation Challenge
The most consequential decisions in a Focused Improvement programme are made before the first project begins: what to work on, in what order, and with what team depth. The quality of theme selection determines whether the FI programme generates meaningful financial returns or accumulates project reports that do not add up to a visible improvement in the operation's overall performance.
Good theme selection begins with Pareto analysis of the loss landscape. Across the 16 major losses, the financial impact of each category is calculated and ranked, and the team focuses its first project on the highest-cost loss that is within the team's authority and capability to address. Two elements of that sentence deserve emphasis. "Highest-cost" is not the same as "most visible" or "most frequently discussed" — the loss that dominates the daily conversation in the production meeting is often not the loss consuming the most value. And "within the team's authority and capability to address" is not a reason to settle for easier problems; it is a recognition that a project whose root causes require capital investment decisions, supplier development activity, or organisational changes outside the team's scope is likely to stall, and a stalled first project does enormous damage to the credibility of the entire FI programme.
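The selection logic can be sketched in a few lines. The loss register below is entirely hypothetical, including the authority flags:

```python
# Hypothetical loss-cost matrix and theme selection sketch.
# Loss names, monthly costs, and the authority flags are invented.

loss_register = [
    # (loss category, monthly cost, within team's authority to address?)
    ("speed reduction, transfer line",  61_000, True),
    ("breakdowns, press 4",             48_000, True),
    ("start-up loss, line 3",           35_000, False),  # needs capex approval
    ("changeover loss, packing",        22_000, True),
]

# Pareto order: highest cost first...
ranked = sorted(loss_register, key=lambda r: r[1], reverse=True)

# ...then take the highest-cost theme the team can address without escalation.
theme = next(r for r in ranked if r[2])
print(f"First FI theme: {theme[0]} (${theme[1]:,}/month)")
```

Note that the filter comes after the ranking, not instead of it: the team still sees the full Pareto picture, so a high-cost theme that requires escalation is flagged for management rather than silently dropped.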
In practice, this principle holds. In the 2014 OEE benchmarking study conducted by OEC across three semiconductor manufacturers, the equipment platform that had received the most concentrated improvement attention — ADGT's UltraFlex, the identified bottleneck — consistently outperformed the platforms where improvement effort had been more diffuse. The OEE advantage was not primarily a function of equipment age or design; it was a function of where the improvement energy had been focused.

Common mistakes in theme selection are worth naming plainly. The first is choosing the most visible problem rather than the most costly one. A machine that breaks down dramatically and loudly will always attract more management attention than a chronic speed loss of eight to ten percent that is quiet, consistent, and embedded in the operation's accepted normal. The dramatic breakdown will get a project team; the speed loss will get mentioned in OEE reports and tolerated. The speed loss is almost certainly more expensive, and a mature FI programme addresses it first.
The second mistake is selecting themes whose root causes are beyond the team's practical authority to resolve. This typically happens when theme selection is driven top-down by management direction without adequate assessment of what the team can actually achieve within the project timeframe and without capital expenditure approval. The team executes the eight steps with dedication, identifies a root cause that requires a significant design change to the equipment, presents the finding to management — and then waits. The project stalls. The team loses confidence. The FI programme loses momentum. Avoiding this pattern requires a realistic scoping conversation at Step 1 that distinguishes between what the team can do and what requires escalation, and either scopes the project accordingly or secures the necessary commitments before the project begins.
The third mistake — and perhaps the most common in plants that are genuinely committed to TPM — is running too many projects simultaneously with insufficient team depth on each. The aspiration to sustain three to five active projects at any given time is correct in principle. The organisational challenge is that each active project requires a team with genuine expertise, adequate time allocation, and management support. A plant that launches seven projects because it has seven identified themes, and then distributes its limited technical talent and maintenance capacity across all of them, produces seven projects that move slowly, analyse superficially, and close without fully achieving their targets. The discipline to prioritise — to deliberately constrain the number of active projects to what the organisation can support with depth — is counterintuitive for teams that are energised by the improvement agenda, but it is essential to generating results that are real rather than merely reported.
The Analytical Toolkit: Tools, When to Use Them, and What Goes Wrong
The analytical sophistication of Focused Improvement is what distinguishes it from ordinary problem-solving activity, and the OEC FI framework contains a clear set of tools that FI teams are expected to deploy. Understanding when each tool is appropriate, what inputs it requires, and where it fails when applied poorly is part of what it means to be an FI practitioner rather than a kaizen event facilitator.
Five-Why (Why-Why) Analysis is the most widely used root cause tool in FI projects, and also the most frequently misused. The principle is elegant: by repeatedly asking why a phenomenon occurs, you peel back the layers of symptom and proximate cause to reach the underlying mechanism that, if addressed, prevents recurrence. The OEC example illustrates the method concretely: a machine stops (the phenomenon); the circuit overload tripped (first why); the shaft wore down and seized (second why); metal cutting chips penetrated the area (third why); chips passed through the lubrication system (fourth why); there was no strainer on the inlet pipe from the tank (fifth why — the root cause). The countermeasure is obvious once the root cause is identified: install the strainer. The problem does not recur.
What goes wrong in practice is that the five-why chain stops prematurely, at a point that is technically accurate but does not identify an actionable root cause. A team investigating a machine breakdown stops at "the bearing failed due to inadequate lubrication." That is true. But it is not the root cause — it is the mechanism. Why was lubrication inadequate? Why did the lubrication failure go undetected? Why was there no inspection standard that would have caught it? A skilled facilitator who has read the JIPM training materials and worked through actual FI projects knows what a genuine root cause looks and feels like: it is an answer that points to a gap in a standard, a design weakness, an organisational process failure, or a specific technical condition that the team has the authority and capability to address. An answer that points to "operator error" without asking why the error was possible is not a root cause — it is a stopping point that happens to be organisationally convenient.
P-M Analysis (P for the phenomenon and its physical analysis, M for the mechanism and the machine, man, and material conditions behind it) is the advanced tool reserved for chronic losses that five-why analysis has failed to resolve. Where five-why is sequential and linear, P-M analysis is systematic and comprehensive: it begins by defining the phenomenon in precise physical terms, then analyses every physical mechanism through which that phenomenon could occur, and then investigates every condition related to the 4M framework — machine, man, material, method — that might cause or contribute to each mechanism. The OEC slide deck is explicit that P-M analysis requires more time, resources, and expertise than five-why, and that FI teams typically reserve it for complex or costly problems where the conventional root cause tools have not yielded adequate insight.
The value of P-M analysis is in its completeness: by systematically examining every possible factor rather than following intuition to the most likely cause, it prevents the investigation from overlooking the actual root cause because it was unexpected. The risk of P-M analysis is in its complexity: teams that attempt it without adequate facilitation frequently produce comprehensive lists of factors without the analytical depth to verify which of them are actually contributing to the phenomenon. The tool produces value in proportion to the rigour of the physical analysis in Step 1 — defining the phenomenon precisely enough that the subsequent factor analysis is genuinely informative.
FMEA (Failure Mode and Effects Analysis) plays two distinct roles in the FI methodology. In Step 4, it provides a disciplined framework for prioritising among the root causes the team has identified — scoring each failure mode by severity, occurrence, and detection to generate a Risk Priority Number that guides where improvement effort should be concentrated. A team that has identified six contributing causes of a chronic defect problem cannot address all of them simultaneously; the FMEA helps it sequence the effort toward the highest-risk causes first. In Step 5, a design FMEA is applied to the proposed countermeasure itself — asking what could go wrong with the proposed solution, whether it might introduce new failure modes, and what monitoring is needed to verify that it performs as intended. This pre-implementation risk assessment is one of the things that distinguishes an FI project from a quick fix: the team does not implement a solution and then discover its failure modes through operational experience. It identifies them before implementation and designs the verification protocol accordingly.
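The RPN arithmetic in Step 4 is easy to illustrate. The six contributing causes and their 1–10 scores below are invented for illustration:

```python
# Hypothetical RPN prioritisation of six contributing causes (Step 4).
# Severity, occurrence, and detection scores (1-10 scales) are invented.

causes = {
    "contaminated coolant":       (7, 8, 4),
    "fixture misalignment":       (8, 5, 3),
    "worn locating pin":          (6, 6, 6),
    "operator loading variation": (5, 7, 5),
    "tool wear drift":            (7, 4, 7),
    "material hardness spread":   (8, 3, 8),
}

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number = severity x occurrence x detection."""
    return severity * occurrence * detection

# Sequence countermeasure effort toward the highest-risk causes first.
for cause, scores in sorted(causes.items(), key=lambda c: rpn(*c[1]), reverse=True):
    print(f"RPN {rpn(*scores):3d}  {cause}")
```

A worthwhile detail: a cause with moderate severity but poor detection (a high detection score) can outrank a dramatic but easily caught one, which is exactly the kind of counterintuitive sequencing the FMEA exists to surface.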
Cause-and-effect (fishbone) diagrams and the broader QC toolkit — Pareto charts, scatter diagrams, control charts, histograms, check sheets, and flow charts — are the supporting analytical infrastructure for FI projects. These tools organise the team's understanding of the problem, identify patterns in loss data, and visualise the before-and-after comparison that demonstrates results. The OEC framework also references IE (Industrial Engineering) tools and VE (Value Engineering) tools for specific loss categories, particularly for motion losses, logistics losses, and yield losses where the analytical approach differs from equipment reliability investigation.
What unites all of these tools is the principle from which they flow: as Peter Senge observed, when teams fail to grasp the systemic source of problems, they are left pushing on symptoms rather than eliminating underlying causes. Every analytical tool in the FI toolkit is a mechanism for preventing that failure — for ensuring that the countermeasure addresses the actual cause, not the most visible manifestation of it.
The FI Team: Structure, Roles, and the Organisational Dynamics That Determine Success
A Focused Improvement project team is a cross-functional group of four to ten people whose composition reflects the technical demands of the specific loss theme under investigation. The team includes a formally designated leader, a management sponsor, members with knowledge of the equipment or process in question, and — in a mature TPM organisation — representation from both the maintenance function and the production operators who work with the equipment daily.
The OEC framework identifies three leading players in any FI project. The department manager or supervisor is ultimately accountable for the department's contribution to the plant's TPM objectives and provides the management authority that the team needs to implement solutions that cross departmental boundaries. The TPM point person — a department-level coordinator who links the team's activities to the TPM Promotion Office and other functional areas — convenes team leaders, tracks activities, mediates between functions, and ensures that findings and results are communicated upward and across the organisation. The team or project leader organises and guides the team's week-to-week activities, maintains momentum through the project schedule, and bears primary responsibility for the technical quality of the analysis.
The allocation of improvement topics across different team types is explicit in the OEC framework, and it is worth understanding the logic. Department and area managers tackle the most difficult individual improvement topics — right-first-time rates, productivity, start-up losses, speed losses — where the analysis requires strategic authority and cross-functional coordination. Supervisors address moderately difficult topics — failures, quality defects, minor stops, changeover losses — that can be resolved within a defined area with adequate technical support. Technical staff take on themes requiring specialised knowledge — achieving right-first-time startup on complex equipment, extending cutting tool life, eliminating start-up losses that require process engineering insight. Special project teams address large-scope improvements — changing process sequences, layouts, or processing methods across an entire production line — that are beyond any single department's capability. Autonomous maintenance teams address the simpler themes — breakdowns, quality defects, minor stops, and changeovers that do not require advanced analytical tools.
This tiered structure reflects an important insight about FI team composition: the right team for an improvement project is determined by the difficulty and scope of the theme, not by organisational hierarchy or availability. A breakdown loss that has resisted repeated maintenance interventions over eighteen months is not an AM team topic; it is a technical staff or special project team topic. Assigning it to the wrong level produces an analysis that cannot reach the root cause, a team that loses confidence when its countermeasures fail to hold, and a loss that continues to consume value while appearing to have received attention.
The relationship between Kobetsu Kaizen teams and the AM small-group activity structure is frequently misunderstood in plants that are implementing both pillars simultaneously. AM teams — composed of production operators working within their own equipment area — are the eyes and ears of the FI programme. They are often the first to observe and report the early signs of abnormal conditions that, if left unaddressed, become chronic losses. When AM teams encounter problems that exceed their capability or authority to resolve — equipment failures requiring specialised analysis, defect mechanisms requiring process engineering input, losses whose root causes involve equipment design weaknesses — those problems become candidates for escalation to a Focused Improvement project team. This handoff from AM observation to FI investigation is one of the most valuable interfaces in a well-functioning TPM system, and it is also one of the most frequently neglected when the two pillars are implemented without adequate coordination.
The TPM Team Guide makes a point that I have seen validated in every engagement where FI teams have produced sustained results: operators play a critical role, because no one understands the machine better than the person who uses it every day. The practical implication is that FI project teams that include operators in their analysis — not as passive observers, but as active contributors of equipment knowledge — consistently develop more accurate root cause diagnoses and more operationally realistic countermeasures than teams composed entirely of engineers and maintenance specialists who visit the equipment periodically. This is not a soft principle about team engagement. It is a technical argument about where the most relevant process knowledge actually resides.
FI and the Other TPM Pillars: Concrete Interdependencies
Focused Improvement does not operate in isolation from the other TPM pillars, and understanding its interdependencies in concrete, directional terms is essential to building a TPM programme that generates compounding returns rather than a set of parallel activities that add up to less than the sum of their parts.
The relationship between Focused Improvement and Planned Maintenance is bidirectional and substantive. In the direction from PM to FI: the failure data and MTBF/MTTR analysis that a mature Planned Maintenance programme generates are among the richest inputs available for FI theme selection. When PM data reveals that a specific failure mode is occurring at intervals shorter than the planned maintenance interval — that bearings on a particular machine type are failing at 1,200 hours despite a 2,000-hour replacement schedule — that gap is an FI theme. The project team investigates the specific conditions causing early failure (contamination, misalignment, overloading, substandard replacement parts), develops a countermeasure, and implements it. The countermeasure then feeds back into PM strategy: the maintenance interval changes, the inspection standard is updated, and the failure mode that justified the FI project is removed from the PM workload because the underlying cause has been eliminated. This is the PM-FI feedback loop, and a plant that is operating it effectively will progressively reduce its PM task count as FI projects eliminate the failure modes that PM tasks were managing.
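The PM-to-FI trigger described above can be expressed as a simple check: compute observed MTBF per failure mode from maintenance records and flag any mode whose MTBF falls below its planned maintenance interval. The sketch below is illustrative only — the data shape and field names are assumptions, not the schema of any real CMMS — and uses the article's bearing example (observed failures around 1,200 hours against a 2,000-hour replacement schedule):

```python
# Flag failure modes whose observed MTBF falls below the planned
# maintenance interval -- candidates for a Focused Improvement theme.
# Data shape and field names are illustrative, not from a real CMMS.

def observed_mtbf(run_hours_between_failures):
    """Mean time between failures from a list of run-hour intervals."""
    return sum(run_hours_between_failures) / len(run_hours_between_failures)

def fi_candidates(failure_records, safety_margin=1.0):
    """Return failure modes failing earlier than their planned interval.

    failure_records maps a failure-mode name to a dict with
    'intervals' (run hours between successive failures) and
    'planned_interval' (scheduled replacement/maintenance hours).
    """
    candidates = []
    for mode, rec in failure_records.items():
        mtbf = observed_mtbf(rec["intervals"])
        if mtbf < rec["planned_interval"] * safety_margin:
            candidates.append((mode, round(mtbf), rec["planned_interval"]))
    return sorted(candidates, key=lambda c: c[1])  # worst first

records = {
    "bearing_wear": {"intervals": [1150, 1300, 1180, 1170], "planned_interval": 2000},
    "belt_slip":    {"intervals": [2600, 2400, 2900],       "planned_interval": 2500},
}
for mode, mtbf, planned in fi_candidates(records):
    print(f"{mode}: MTBF {mtbf} h < planned {planned} h -> FI theme candidate")
```

Once the FI project eliminates the underlying cause, the mode drops out of this report — which is exactly the PM-FI feedback loop the paragraph describes.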
In the direction from FI to PM: FI projects frequently identify that equipment is being maintained based on time intervals rather than the actual failure mechanisms of specific components. A project investigating chronic minor stoppages on an automated line might discover that a conveyor sensor is drifting in humidity-sensitive conditions — a condition-based trigger that a fixed-interval maintenance schedule cannot reliably address. The FI countermeasure might include installing a diagnostic instrument to monitor sensor performance and trigger maintenance based on actual condition. That capability then migrates into the PM programme as a condition-based maintenance route, improving the precision of maintenance intervention for that equipment type across the plant.
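A condition-based trigger of the kind described above — servicing on observed sensor drift rather than on a fixed calendar interval — can be sketched in a few lines. The nominal value, tolerance band, and run length below are illustrative assumptions, not parameters from any particular instrument:

```python
# Condition-based trigger sketch: raise a maintenance request when a
# monitored sensor reading drifts outside its control band, instead of
# waiting for a fixed-interval service. All thresholds are illustrative.

def drift_trigger(readings, nominal, band, consecutive=3):
    """Return the index of the reading that completes `consecutive`
    successive out-of-band readings, or None if the sensor stays in band."""
    run = 0
    for i, value in enumerate(readings):
        if abs(value - nominal) > band:
            run += 1
            if run >= consecutive:
                return i
        else:
            run = 0  # an in-band reading resets the drift count
    return None

readings = [5.01, 5.02, 4.99, 5.08, 5.11, 5.12, 5.15]
idx = drift_trigger(readings, nominal=5.0, band=0.05)
print(f"trigger at reading index {idx}" if idx is not None else "in band")
```

Requiring several consecutive out-of-band readings, rather than reacting to a single excursion, is what keeps a trigger like this from generating the false alarms that fixed thresholds on noisy signals tend to produce.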
The relationship between Focused Improvement and Autonomous Maintenance is equally important. AM creates the foundation of basic equipment condition on which FI analysis depends — and as I noted earlier, Step 3 of the FI methodology (Expose and Eliminate Abnormalities) explicitly borrows from AM Step 1 to restore this condition before root cause investigation begins. Beyond this structural dependency, the AM-FI collaboration produces equipment improvements that operators can actually sustain. When an FI project team develops a countermeasure — a modified guarding arrangement that makes contamination control easier, a lubrication point relocation that makes daily lubrication faster and more reliable, a sensor reorientation that eliminates false triggering — it works with the AM team to incorporate the change into the autonomous maintenance standard and train operators on the new inspection and cleaning routine. Without this AM integration, FI countermeasures that depend on sustained equipment care are vulnerable to deterioration over time.
The relationship between Focused Improvement and Quality Maintenance is mediated by the QA/QM matrix — the analytical tool that maps equipment conditions to quality characteristics, identifying which equipment parameters have the most significant influence on product quality outcomes. When the QM matrix identifies that a specific machining parameter is the primary driver of a chronic scrap problem, that identification is an FI theme. The FI project investigates why that parameter is drifting, what equipment conditions are causing the drift, and what countermeasure will stabilise it. The results — a tighter process window, a modified machine setting, an improved inspection standard — feed back into the QM matrix as a verified control point. Over successive FI-QM cycles, the plant progressively eliminates the quality loss themes that the matrix has identified as highest priority, moving toward the zero-defect aspiration that is the JIPM framework's quality target.
Sustaining the FI Programme: The Organisational Discipline That Most Plants Underestimate
A single well-executed Focused Improvement project is not an FI programme. It is evidence that the methodology works. The organisational discipline required to sustain a pipeline of three to five active projects, continuously replenished as projects close, with adequate team depth and management attention on each, and with the results of each project standardised and replicated before the team moves to the next theme — that discipline is what separates a plant that has done some FI work from one that has built an improvement system.
The management systems required to sustain this discipline are specific. Project registration with the TPM Promotion Office is not merely administrative record-keeping; it is the visibility mechanism that allows the promotion office to track project status, identify teams that are stalling, allocate facilitation support where it is needed, and report FI activity and results to senior management in a format that maintains leadership engagement and accountability. Without this registration and tracking function, FI projects become invisible to management until they close — and projects that encounter obstacles without management visibility tend not to close.
Results verification and standardisation are two distinct activities that are frequently conflated or, more commonly, both neglected in the rush to start the next project. Results verification means collecting post-implementation data using the same measurement methodology as the pre-implementation baseline, over a sufficient period to confirm that the improvement has held rather than merely appearing to hold in the initial days after implementation. Standardisation means encoding the countermeasure in formal operating and maintenance standards — standard operating procedures, inspection checklists, lubrication standards, and where appropriate, equipment design modifications — that survive team turnover, shift changes, and the passage of time. A countermeasure that is not standardised is an improvement on borrowed time.
Horizontal deployment is perhaps the most underinvested element of FI programme management, and it is the element with the highest ratio of value to effort. When an FI project team eliminates a chronic failure mode on Machine A, the question that should be asked within days of confirming results is: which other machines in the plant share the design features, operating conditions, or maintenance history that made Machine A vulnerable to this failure mode? If the answer is "machines B, C, and F," then replicating the countermeasure to those machines — adapting the solution to any minor differences in configuration — requires a fraction of the effort the original project consumed, because the root cause is already known and the countermeasure is already proven. Most plants do not do this systematically. They celebrate the Machine A result, close the project, move to the next theme, and allow machines B, C, and F to continue experiencing the same loss that Machine A no longer has. The cumulative OEE impact of this neglect, compounded across multiple FI projects over multiple years, is substantial.
A benchmarking study I facilitated in 2014 among three semiconductor manufacturers — Analog Devices General Trias, STATS ChipPAC, and Amkor Technology Philippines — illustrated this gap with unusual clarity. All three organisations had invested in OEE measurement systems of varying sophistication, and all three had access to sufficient loss data to identify where improvement effort should be concentrated. The study's conclusion was not that more data was needed. It was that the data already available was not being converted into structured improvement action on the bottleneck equipment. The specific approaches recommended to close that gap were five-why analysis, Focused Equipment and Process Improvement, and P-M analysis — the analytical core of a Focused Improvement programme. The study also found a meaningful divergence in how the three organisations handled the results of their equipment analysis: ADGT and STATS used MTBF and MTTR trend data to reduce failure recurrences and feed improvement activities; Amkor did not yet have this practice in place. The report recommended that proven improvement practices identified from the study be replicated systematically across similar equipment to create what it called a "Multiplier Effect" for the entire production operation — the same mechanism this article describes as horizontal deployment. The language differed; the organisational discipline required was identical.
The role of the TPM Promotion Office in maintaining FI momentum is often underestimated in the early stages of a TPM implementation, when the programme is energised by early successes and teams are self-motivating. The TPM Promotion Office becomes critical in the middle stages of implementation — typically twelve to twenty-four months in — when the most visible problems have been addressed, the remaining themes are more complex and require deeper analysis, and the natural enthusiasm of the initial launch has normalised into routine. At this point, the promotion office's responsibilities include identifying new FI themes from the evolving loss register, facilitating the horizontal deployment of proven countermeasures, recognising and sharing success stories across the plant, and ensuring that senior management receives regular reporting that keeps FI results visible as a measure of TPM programme health.
Measuring FI programme health requires both activity indicators and results indicators. On the activity side, the number of active projects, the number of projects completed per quarter, the percentage of projects achieving their targets, and the number of improvement themes in the pipeline are all meaningful signals of whether the programme has sustainable momentum. On the results side, the financial value of OEE improvement attributable to FI projects, the reduction in chronic failure rates across the equipment population, and the trend in the loss-cost matrix across each of the 16 loss categories tell the story of whether the improvement system is actually moving the needle on the operation's performance.
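The theme-selection logic these results indicators support — ranking loss categories by financial impact and concentrating effort on the few that dominate the total — amounts to a Pareto cut over the loss-cost register. The category names and cost figures below are illustrative placeholders, not data from any plant:

```python
# Rank loss categories by annualised cost and select FI themes from the
# few that account for the bulk of total loss (a simple Pareto cut).
# Category names and cost figures are illustrative placeholders.

def pareto_themes(loss_costs, cutoff=0.8):
    """Return the top loss categories covering `cutoff` of total cost."""
    total = sum(loss_costs.values())
    ranked = sorted(loss_costs.items(), key=lambda kv: kv[1], reverse=True)
    themes, cumulative = [], 0.0
    for category, cost in ranked:
        themes.append(category)
        cumulative += cost
        if cumulative / total >= cutoff:
            break
    return themes

register = {
    "equipment_failure": 410_000,
    "minor_stoppages":   260_000,
    "setup_adjustment":  150_000,
    "speed_loss":         90_000,
    "startup_defects":    45_000,
    "rework":             30_000,
}
print(pareto_themes(register))
```

With these example figures, three of the six categories cover over 80 percent of the annualised loss — which is why a pipeline of three to five active projects, drawn from the top of this ranking, captures most of the available value.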
The JIPM self-assessment criteria for the Focused Improvement pillar set the bar for what genuinely excellent FI performance looks like, and most plants find the bar higher than they expected. At the entry level, JIPM assessors are looking for evidence that the organisation recognises and can quantify its major losses — that a loss register exists, that OEE data is being used to identify priorities, and that improvement activities are being connected to loss data rather than driven by intuition. This is the foundation, and many plants cannot demonstrate it credibly. At the intermediate level, JIPM is looking for an active project pipeline, structured use of the eight-step methodology, demonstrated results that are sustained through standardisation, and horizontal deployment of proven countermeasures. At the advanced level — the level that characterises world-class FI performance — the assessors are looking for evidence that the most sophisticated analytical tools, including P-M analysis and design FMEA, are being routinely applied; that the improvement targets are zero-loss rather than incremental; and that multiple examples of zero-loss achievement exist across the asset base, including examples of Karakuri Kaizen — mechanically self-sustaining improvements that eliminate the need for human intervention to maintain improved conditions. Very few plants in my experience are operating at this level. Most are at the entry-to-intermediate transition, working to move from ad hoc improvement activity to a structured project methodology. The JIPM criteria are useful not as a judgement, but as a map — showing clearly where the programme currently stands and what genuine excellence actually requires.
The OEC Focused Improvement Maturity Diagnostic
After thirty years of designing and auditing TPM programmes across semiconductor, automotive, and industrial manufacturing environments, I have found that the most useful question a plant manager can ask about their Focused Improvement programme is not "are we doing FI?" but "at what level are we actually operating?" The answer is rarely what the project board suggests.
The following diagnostic, developed through OEC's TPM consulting practice, is designed to give a plant's TPM steering committee an honest assessment of its FI maturity across the five dimensions that most reliably predict whether an FI programme will generate sustained OEE improvement or plateau after the first few projects. Each dimension has four levels. Level 1 is where most plants start; Level 4 is what JIPM's TPM Excellence Award assessors expect to see in a world-class submission.
Score your plant honestly against each dimension using the matrix below. A total score of 12 or below indicates that foundational programme infrastructure needs to be built before analytical sophistication will deliver returns. A score of 13 to 16 indicates an intermediate programme with defined, addressable gaps. A score of 17 or above indicates a mature programme — the question at this level is whether horizontal deployment and programme governance are compounding individual project results or allowing them to remain local.

Loss Visibility and Quantification. At Level 1, the plant knows OEE at the line or plant level but cannot disaggregate losses across the 16 major loss categories — improvement themes are chosen based on what is visible, not what is measured and costed. At Level 2, OEE is tracked by machine and by loss type, the primary equipment losses can be separated and ranked, and a rough loss register exists. At Level 3, a loss-cost matrix converts all major loss categories into financial impact terms, Pareto analysis drives theme selection, and the loss register is updated monthly. At Level 4, the loss register is a living management document reviewed in the TPM steering committee, loss-cost trends across all 16 categories directly populate the FI project pipeline, and zero-loss targets are set for the highest-cost categories.
Project Structure and Methodology. At Level 1, improvement activities happen when production pressure allows — there is no standard project structure, and teams disband without a defined methodology or results verification step. At Level 2, an eight-step structure exists on paper and projects follow the first three steps, but the analytical steps are compressed or skipped under time pressure. At Level 3, the full eight-step methodology is consistently applied, Steps 4 and 5 receive adequate time and team depth, and results are verified against baselines. At Level 4, the methodology is second nature, FMEA is routinely applied to proposed countermeasures before implementation, P-M analysis is used for chronic losses that five-why has not resolved, and zero-loss is the standard project target.
Analytical Tool Proficiency. At Level 1, teams use informal problem-solving, and why-why analysis rarely goes beyond two or three levels. At Level 2, five-why is regularly applied but facilitator skill varies, and the causal chain sometimes stops at an organisationally convenient answer. At Level 3, five-why is applied with disciplined facilitation, FMEA is used to prioritise root causes, and the team can recognise when P-M analysis is needed. At Level 4, the full analytical toolkit — five-why, P-M analysis, FMEA, process capability analysis, and SPC where relevant — is applied systematically, and multiple completed P-M analyses exist in the project record.
Horizontal Deployment. At Level 1, each project is a standalone event with no process for asking whether the countermeasure applies elsewhere. At Level 2, horizontal deployment is discussed at closure but rarely implemented — countermeasures are shared informally but not tracked. At Level 3, a formal deployment process exists: at closure the team identifies all equipment sharing relevant characteristics, and deployment is tracked by the TPM Promotion Office. At Level 4, horizontal deployment is treated as the primary mechanism for compounding FI returns, and the ratio of directly implemented versus replicated solutions is tracked as a programme health indicator.
Programme Governance and Pipeline Depth. At Level 1, there is no formal programme — no project register, no pipeline, and no management reporting. At Level 2, a project register exists and results are reviewed periodically, but the pipeline is thin and has no structured replenishment process. At Level 3, three to five active projects are maintained simultaneously with defined team assignments, the pipeline is reviewed monthly, and results are standardised into procedures at project closure. At Level 4, the FI pipeline is a strategic management tool directly linked to the loss register, replenished as projects close, and prioritised against OEE improvement targets — with results reported to plant leadership as a financial performance metric.
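The scoring rule described above — five dimensions, each scored 1 to 4, with bands at 12 and 16 — can be made mechanical so a steering committee records the same result every review cycle. This sketch follows the article's dimensions and thresholds; the dictionary keys and band wording are my own shorthand:

```python
# Sum the five dimension scores (each 1-4) and map the total to the
# maturity bands defined in the diagnostic. Dimension keys and band
# wording are shorthand for the article's descriptions.

DIMENSIONS = (
    "loss_visibility",
    "project_structure",
    "analytical_tools",
    "horizontal_deployment",
    "programme_governance",
)

def fi_maturity(scores):
    """Return (total, band) for a dict mapping dimension -> level (1-4)."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    if any(not 1 <= scores[d] <= 4 for d in DIMENSIONS):
        raise ValueError("each dimension must be scored 1-4")
    total = sum(scores[d] for d in DIMENSIONS)
    if total <= 12:
        band = "foundational: build programme infrastructure first"
    elif total <= 16:
        band = "intermediate: defined, addressable gaps"
    else:
        band = "mature: compound results via deployment and governance"
    return total, band

# The modal plant profile cited in the article (total around 8-10):
modal = {
    "loss_visibility": 2,
    "project_structure": 2,
    "analytical_tools": 1,
    "horizontal_deployment": 1,
    "programme_governance": 2,
}
print(fi_maturity(modal))
```

Scoring the modal profile cited below yields a total of 8 — squarely in the foundational band, consistent with the article's observation that most plants sit well below the intermediate threshold.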
In the semiconductor and automotive plants I have assessed against these dimensions, the modal profile is Level 2 on Loss Visibility, Level 2 on Project Structure, Level 1–2 on Analytical Tool Proficiency, Level 1 on Horizontal Deployment, and Level 2 on Programme Governance — a total score of 8 to 10. That is well below the intermediate threshold, and significantly below what a JIPM assessor would expect to see in a plant targeting TPM Excellence Award recognition. The diagnostic is not a discouragement. It is a map. Knowing precisely where the programme stands on each dimension tells a steering committee exactly where the next investment of time and facilitation effort will produce the highest return.
The Difference Between an Improvement Event and an Improvement System
I want to return to the central distinction with which this article opened, because it is the most important thing an organisation can understand about Focused Improvement — and the most common thing that is misunderstood.
An improvement event is a bounded activity. It has a start date, an end date, a team, a topic, and (if it is well run) a result. It produces value within its scope. It does not, by itself, produce an improvement system. An improvement system is what you have when the events are connected — when each project draws its theme from a common loss register, when the results of each project are standardised and replicated, when the project pipeline is continuously replenished based on the evolving loss landscape, and when the cumulative effect of individual projects is visible as sustained OEE improvement across the equipment population.
This distinction has a direct implication for how Focused Improvement should be resourced, managed, and evaluated. If an organisation thinks of FI as a series of improvement events, it will resource it accordingly — dedicating team time to individual projects, measuring the results of those projects, and treating the programme as healthy as long as projects are completing and results are being reported. If an organisation thinks of FI as an improvement system, it will also ask: Is the loss register being updated as losses are eliminated? Are proven countermeasures being replicated? Is the project pipeline deep enough to sustain the programme through periods of operational pressure? Is the horizontal deployment mechanism actually working? Are the standards established at Step 8 being audited to confirm they are holding?
In thirty years of working with manufacturers across Asia-Pacific, I have seen both. The organisations that have built genuine improvement systems — Analog Devices, Micron, Infineon Technologies, among the clients I have worked with — did not achieve world-class OEE through a series of impressive individual projects. They achieved it by treating improvement as an ongoing organisational discipline — as Masaaki Imai observed, where there is no standard, there can be no improvement. Their FI programmes were not the most energetic or the most frequent. They were the most disciplined — in loss prioritisation, in analytical rigour, in standardisation, and in the patience to deploy results systematically rather than celebrate them locally and move on.
That discipline is available to any organisation that chooses to build it. This guide is a starting point.
About the Author

Allan Ung is the Founder and Principal Consultant of Operational Excellence Consulting, a Singapore-based management training and consulting firm established in 2009. With over 30 years of experience leading operational excellence and quality transformation in manufacturing-intensive environments, Allan's expertise spans Lean Thinking, Total Quality Management (TQM), TPM, TWI, ISO systems, and structured problem solving.
He is a Certified Management Consultant (CMC, Japan), Lean Six Sigma Black Belt, JIPM-certified TPM Instructor (Japan Institute of Plant Maintenance), TWI Master Trainer, ISO 9001 Lead Auditor, and former Singapore Quality Award National Assessor.
During his tenure with Singapore's National Productivity Board (now Enterprise Singapore), Allan pioneered Cost of Quality and Total Quality Process initiatives that enabled companies to reduce quality costs by up to 50 percent. In senior regional and global roles at IBM, Microsoft, and Underwriters Laboratories, he led Lean deployment, quality system strengthening, and cross-border operational transformation.
Allan has facilitated TPM, OEE and Lean programmes for organisations including Temic Automotive (Continental), Analog Devices, Amkor Technology, STATS ChipPAC, Infineon Technologies, Panasonic, Micron, Lam Research, Tokyo Electron, Dorma, and NEC. He holds a Bachelor of Engineering (Mechanical Engineering) from the National University of Singapore and completed advanced consultancy training in Japan as a Colombo Plan scholar.
His philosophy: "Manufacturing excellence is achieved through disciplined systems, capable leadership, and sustained execution on the shopfloor."
His practitioner-led toolkits have been utilised by managers and organisations across Asia, Europe, and North America to build Design Thinking and Lean capability and drive organisational improvement.
For enquiries about Focused Improvement, TPM, or operational excellence consulting, visit www.oeconsulting.com.sg or contact us directly through the OEC website.
Related Articles in the TPM Practitioner Guide Series
Total Productive Maintenance (TPM): A Practitioner's Guide — The hub article covering the full TPM framework, all eight pillars, and the TPM implementation roadmap.
Overall Equipment Effectiveness (OEE): A Practitioner's Guide — How to calculate, interpret, and use OEE as the primary metric driving Focused Improvement theme selection.
OEE Benchmarking: A Practitioner's Guide — Setting realistic OEE targets and understanding world-class benchmarks across industries.
Autonomous Maintenance: A Practitioner's Guide — The seven-step AM methodology and its foundational relationship to Focused Improvement.
Planned Maintenance: A Practitioner's Guide — Building a PM strategy that feeds FI theme selection and absorbs FI-generated countermeasures into the maintenance system.
Quality Maintenance (Hinshitsu Hozen): A Practitioner's Guide — The eight-step methodology for achieving zero defects by establishing and maintaining the precise 4M conditions required to prevent defect generation at the source.
TPM Self-Assessment and the TPM Excellence Award: A Practitioner's Guide — How the JIPM assessment framework evaluates FI pillar maturity and what world-class FI performance actually looks like.
OEC TPM Maturity Diagnostic: A Practitioner's Guide — Bridges implementation gaps with a four-level model based on JIPM checklists, offering practical descriptors that make the assessment directly actionable.
Build TPM Capability in Your Organisation
At Operational Excellence Consulting, I deliver customised TPM and OEE workshops and implementation programmes for manufacturing organisations across Singapore and the Asia-Pacific region — from foundational two-day workshops to multi-year TPM implementation support, facilitated by a JIPM-certified TPM Instructor.
👉 Explore our TPM training courses and practitioner-led resources:
Operational Excellence Consulting offers a full catalogue of facilitation-ready training presentations and practitioner toolkits covering Lean, Design Thinking, and Operational Excellence. These resources are developed from real workshops and transformation projects, helping leaders and teams embed proven frameworks, strengthen capability, and achieve sustainable improvement.
👉 Explore the full library at: www.oeconsulting.com.sg/training-presentations
© Operational Excellence Consulting. All rights reserved.