The Clean Core Illusion in IFS Cloud
Consultants promising a «vanilla» IFS Cloud implementation are either lying or don’t understand your business. A system completely devoid of modifications is a business prosthesis, not a competitive engine. By the time you hit your first mandatory service update, that promise of simplicity will turn into a financial nightmare.
The issue isn’t the existence of extensions; it is the lack of architectural control. IFS Cloud 25R2 and beyond demand a clean separation between your unique business logic and the core system. If your CRIMs are welded directly into the standard code, every update becomes a high-risk surgery instead of a routine maintenance task.
The High Cost of Standard-Only Blindness
Following «standard» blindly kills the very agility you paid for. You chose a cloud ERP to be faster, not to be constrained by a generic template that fits your competitors just as well as it fits you. When you ignore the need for proper extensions, you aren’t saving money — you are deferring debt.
- Update Paralysis: You skip critical updates because you fear breaking the system.
- Shadow IT: Your team starts using Excel to fill the gaps the «standard» system missed.
- Architectural Rot: Logic is scattered across random events rather than structured workflows.
Control the Lifecycle, Not the Code
Stop fighting the system’s nature. Use Workflows and projections where they actually generate profit. Your goal is not a pristine, untouched system; it is a resilient one. You need an environment that can be updated over a weekend, not a six-month migration project every year.
An ERP implementation is a surgical operation on a living organization. A poorly written workflow acts like an arterial blockage. Either you master the architecture, or the technical debt will eventually dictate your budget.
Clean Core doesn’t mean «no extensions.» It means «no legacy baggage.» It is the difference between a system that grows with you and one that holds you back.
Stop Paying for Mediocrity
If your current implementation feels like a straitjacket, your architecture is broken. We build extensions that survive updates and workflows that actually work. Don’t let a «vanilla» promise destroy your ROI.
Request an Architecture Audit
Verify if your current IFS Cloud setup is ready for the next release cycle before the update window closes.
Data migration in IFS Cloud represents a high-stakes clinical intervention for your corporate DNA. Treating this process as a mere technical «lift and shift» guarantees the digital preservation of existing operational inefficiencies.
The Myth of Seamless Transition
IT Directors gravitate toward the «Big Bang» approach to minimize downtime, yet they ignore the compounding debt of historical data errors. In an IFS Cloud environment, your CRIMs footprint expands exponentially when dirty data meets new cloud logic. This is the primary reason projects stall during the User Acceptance Testing (UAT) phase.
A business reset demands that you audit every row of your legacy database against future-state requirements. If a data point fails to support a current business process, it belongs in a read-only archive, not your live production environment.
Architecture Over Convenience: The Clean Core
The «Clean Core» strategy is the only way to survive the rapid update cadence of IFS Cloud. When you migrate «as-is,» you bake old limitations into a system designed for agility. This creates a technical anchor that prevents you from adopting new features like AI-driven scheduling or advanced demand forecasting.
The Data Governance Barrier
ERP Project Managers underestimate the political friction involved in data ownership. Department heads cling to redundant spreadsheets, fearing that a system reset will expose process gaps. Overcoming this requires a top-down mandate: the system of record must be the single source of truth, or the migration has failed.
FAQ: Navigating the Migration Minefield
Is migrating all historical data necessary?
No. Carrying more than two years of transactional history into IFS Cloud usually degrades performance and complicates the «Evergreen» update cycle. Use an external data warehouse for long-term reporting.
What is the biggest risk in a data reset?
Underestimating the mapping complexity between legacy sites and new Global Extension requirements. Mapping errors during the «Transform» phase lead to financial reconciliation nightmares post-go-live.
When should data cleansing begin?
Cleansing must start six months before the technical migration begins. Waiting for the «Load» phase is a recipe for project paralysis.
The Final Verdict
The technical act of moving bits from a local server to the cloud is trivial. The strategic act of redefining how your company interacts with its own information is where the value lies. Reject the comfort of the familiar. Build a foundation that supports growth rather than one that merely houses your past mistakes.
Lifecycle Alert (TL;DR)
Critical Update: Crystal Reports is returning temporarily to bridge the gap for upgrades, but its final removal date is now officially set.
- 🔴 Reinstatement: Crystal will return in IFS Cloud 25R2 via a Service Update (SU).
- 🔴 Last Call: It remains available in 26R1 to facilitate upgrades from 25R1.
- 🔴 Hard Exit: Full removal occurs in 26R2; no support exists beyond 26R1.
- 🔴 Action: Transition to IFS Cloud Native Reporting (Operational Reporting) is mandatory.
Crystal Reports in IFS Cloud: The Final Roadmap
For many years, Crystal Reports served as a cornerstone for complex layouts and external document generation within the IFS ecosystem. As businesses move toward IFS Cloud, the strategy has shifted toward a more integrated, web-native reporting architecture.
IFS has recently confirmed the revised lifecycle for Crystal Reports integration. This update is vital for customers planning their upgrade path beyond 25R1, as it provides a temporary "grace period" before the technology is retired permanently.
Confirmed Release Schedule
| IFS Cloud Release | Status of Crystal Reports |
|---|---|
| 25R2 (Service Update) | Reinstated. Crystal Reports returns via a Service Update to support current business logic. |
| 26R1 | Available. Allows customers to upgrade and maintain reporting continuity during the transition. |
| 26R2 | Fully Removed. Not available in this or any future releases. |
Planning Your Migration
The reinstatement in 25R2 and 26R1 is not an invitation to stay on Crystal Reports long-term. It is a strategic window provided by IFS to allow organizations to migrate their reporting logic to IFS Cloud Native Reporting.
Once support for 26R1 concludes, the Crystal Reports integration will officially become "End of Life." To avoid operational disruptions, companies must begin re-mapping their document layouts today.
Read our previous technical guide on Crystal Reports Integration.
Frequently Asked Questions
Q: Can I keep my Crystal Reports after 26R2?
A: No. Starting with 26R2, the integration will be entirely removed from the IFS Cloud codebase. Reports will no longer be accessible or executable within the system.
Q: Why is IFS reinstating it in 25R2?
A: To provide a bridge for customers who need more time to upgrade beyond 25R1 without losing their critical operational reporting during the process.
Q: What should I use instead of Crystal Reports?
A: The recommended path is IFS Cloud's native Operational Reporting (Report Designer) and web-based lobbies for data visualization.
January 2026 | IFS-ERP.Consulting Strategy Team
Most ERP programmes celebrate go-live like it’s the finish line. The steering committee pops the champagne, the System Integrator closes their tickets, and the project is marked «Green.» In reality, you haven’t finished the job. You have just handed over the risk.
Over the last few weeks, we’ve spoken to organisations that have invested $10m – $100m+ in ERP platforms — and are now quietly battling a different problem:
The Silent Failure
Their people don’t want to use the system. Not because the technology failed, but because adoption was treated as an event, not a strategy.
For IT Directors and CIOs, this is critical. The system is technically stable. The uptime is 99.9%. But business value is flatlining.
What Actually Went Wrong?
- Governance owned delivery, not outcomes. PMOs measure «On Time, On Budget,» but often dissolve before «Value Realization» begins.
- Adoption was assumed, not designed. Organizations assume that because IFS Cloud is modern, users will naturally flock to it. They won’t.
- Training focused on «how», not «why». Users are taught which buttons to click, but not why the data matters to the downstream supply chain.
- Post-go-live support vanished. The expert consultants left just as the reality of the new process hit the shop floor.
Turning the Ship Around: 4 Strategic Pivots
If your ERP is live but value isn’t, the work isn’t finished — it’s just starting. Here is how we turn ERP programmes around.
Reframe the Narrative
Shift from «System Implementation» to Enterprise Behaviour Change. When you frame it as a technical upgrade, you get technical engagement. When you frame it as an operational shift, you get executive buy-in.
Redesign Governance
In an Evergreen environment like IFS Cloud (R1/R2 releases), governance must be perpetual. Assign a Product Owner whose KPI is User Adoption, not just system stability.
Treat Adoption as Risk
Low adoption is a commercial risk, not an HR issue. Track metrics like «Percentage of POs created via automation» and report them to the Board alongside uptime.
Capability After Go-Live
Shift your training budget. Reserve 40% for months 2 – 6. Learning sticks when users are facing real scenarios, not during UAT in a sandbox.
Long-term success isn’t technical. It’s cultural.
If you are looking at your post-go-live landscape and seeing frustration instead of flow, it is time to stop patching the software and start patching the strategy.
Curious how other IFS customers are tackling post-go-live adoption? Let’s compare notes.
Schedule a Strategy Call
Introduction
In the fast-paced world of enterprise resource planning, IFS Cloud stands out for its agility and continuous innovation. However, architects tasked with implementing and maintaining the system face a significant challenge: balancing the need for customization with the imperative of governance. The Evergreen model of IFS Cloud, with its bi-annual updates, forces architects to confront this dilemma head-on. Customizations that once remained untouched for years now face scrutiny every six months, turning technical debt from a theoretical concern into an immediate risk.
The Evergreen Reality: A Double-Edged Sword
IFS Cloud’s Evergreen model is designed to keep organizations at the cutting edge, delivering new features and improvements twice a year. While this ensures businesses can leverage the latest advancements, it also means that every customization — no matter how small — must be re-evaluated with each update. What was once a «set and forget» modification in older versions like IFS Applications 9 or 10 now requires ongoing attention. The consequence of weak governance isn’t just inefficiency; it’s the potential for complete paralysis during upgrades.
The Tiered Governance Model: A Structured Approach
To navigate this challenge, architects must move beyond the traditional «Standard vs. Custom» binary. The Tiered Governance Model provides a framework for managing customizations based on their risk and impact. This model categorizes modifications into three tiers, each with its own governance rules and scrutiny levels:
- Tier 1: Low-Code/No-Code Configurations – Changes made through Page Designer, Custom Fields, Events, Lobbies, and Automation Workflows. While these modifications are generally «upgrade-safe,» they introduce significant data risks if left unchecked. Every custom attribute must undergo a Data Impact Assessment to determine ownership, regulatory requirements, and inclusion in the Data Warehouse.
- Tier 2: Extend on the Outside – Extensions built using REST APIs in platforms like Boomi, Azure, or Mendix. These integrations offer high flexibility with low core risk but require governance to ensure API Contract Stability. Only public and supported APIs should be used to avoid disruptions.
- Tier 3: Extend on the Inside – Modifications to Projections, Logical Units (LUs), or underlying business logic. These high-risk changes should be a last resort and must be developed within the IFS Lifecycle Experience (LE) with mandatory automated test scripts.
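To keep Tier 2 integrations on the safe side of the API contract, call only published OData projections. Below is a minimal sketch of building such a request URL; the projection and entity names, and the `/main/ifsapplications/projection/v1/` path pattern, are illustrative assumptions that you should verify against your own environment's API documentation.

```python
def odata_query_url(base_url, projection, entity, select=None, top=None):
    """Build an OData query URL for a published IFS Cloud projection.

    The path pattern, projection name, and entity set below are
    placeholders (Tier 2 rule: use only public, supported APIs).
    """
    url = f"{base_url.rstrip('/')}/main/ifsapplications/projection/v1/{projection}.svc/{entity}"
    params = []
    if select:
        params.append("$select=" + ",".join(select))  # limit payload to needed fields
    if top is not None:
        params.append(f"$top={top}")  # cap result size for paging
    return url + ("?" + "&".join(params) if params else "")

# Hypothetical projection/entity names for illustration only:
url = odata_query_url(
    "https://ifs.example.com", "CustomerOrderHandling", "CustomerOrderSet",
    select=["OrderNo"], top=5,
)
```

Keeping URL construction in one helper also gives governance a single choke point: a code review or static check can verify that only whitelisted projections appear here.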
Operationalizing Governance: From Theory to Practice
Governance isn’t effective if it’s not enforced. To operationalize the Tiered Governance Model, architects must embed governance into every stage of the development lifecycle:
- Configuration Change Control Board (CCB) – Evaluates the method of implementation for customizations, ensuring every change aligns with architectural best practices.
- Lifecycle as the Enforcer – IFS Build Place pipelines should act as the «governance sheriff,» blocking merges that lack associated documentation or fail static code analysis. If governance artifacts are missing, the build fails — no exceptions.
- Expiration Date Strategy – Every customization should have a review date. Aggressive pruning of outdated customizations is essential to prevent technical debt from accumulating.
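The pipeline gate and expiration-date rules above can be sketched as a single pre-merge check. The artifact names and the change-record shape here are assumptions for illustration, not an IFS Build Place schema:

```python
from datetime import date

# Hypothetical governance artifacts every customization must carry:
REQUIRED_ARTIFACTS = {"design_doc", "data_impact_assessment", "owner"}

def governance_gate(change, today=None):
    """Return a list of blocking reasons for a proposed customization.

    `change` is a dict describing the merge candidate. An empty list
    means the merge may proceed; any entry should fail the build.
    """
    today = today or date.today()
    failures = []
    missing = REQUIRED_ARTIFACTS - set(change.get("artifacts", []))
    if missing:
        failures.append(f"missing artifacts: {sorted(missing)}")
    review = change.get("review_date")
    if review is None:
        failures.append("no expiration/review date set")
    elif review < today:
        failures.append(f"review date {review} has passed; re-approve or retire")
    return failures
```

Wiring this into the pipeline as a required step enforces the «no exceptions» rule automatically: a change without its Data Impact Assessment or with an expired review date never reaches the main branch.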
The Architect’s Bottom Line: Stewardship Over Speed
The role of an architect in IFS Cloud is not just to enable innovation but to ensure that innovation is sustainable. This requires a shift in mindset: from seeing governance as a barrier to recognizing it as the foundation of a resilient system. By defining ownership, lifecycle, and validation rules before a single line of code is written, architects can transform IFS Cloud from a fragile house of cards into a robust, evolving platform.
Action Plan: What You Can Do Today
- Audit your customizations and document ownership, purpose, and review dates for each.
- Enforce Data Impact Assessments for all custom attributes.
- Lock down APIs to public, supported interfaces only.
- Automate governance by configuring pipelines to block non-compliant merges.
- Schedule regular reviews and retire obsolete customizations.
- Train your team on governance responsibilities to ensure everyone understands the rules.
Conclusion
The next IFS Cloud update is always around the corner. By implementing a Tiered Governance Model and embedding governance into the development lifecycle, architects can ensure their customizations are ready for the future. Governance is not about saying «no» but about saying «do it right.»
Frequently Asked Questions
What is the Evergreen model in IFS Cloud?
The Evergreen model in IFS Cloud refers to the bi-annual updates that deliver new features and improvements. This model ensures organizations stay current with the latest advancements but requires ongoing evaluation of customizations to avoid technical debt and upgrade issues.
What is the Tiered Governance Model?
The Tiered Governance Model is a structured approach to managing customizations in IFS Cloud. It categorizes modifications into three tiers based on risk and impact: Tier 1 (Low-Code/No-Code Configurations), Tier 2 (Extend on the Outside), and Tier 3 (Extend on the Inside). Each tier has specific governance rules to ensure sustainability and compliance.
Why is governance important in IFS Cloud customizations?
Governance is crucial in IFS Cloud customizations to prevent technical debt, ensure compliance, and maintain system integrity. Without proper governance, customizations can become liabilities, hindering upgrades and increasing risks.
What is a Data Impact Assessment?
A Data Impact Assessment is a mandatory evaluation for all custom attributes in IFS Cloud. It determines data ownership, regulatory requirements, and whether the data should be included in the Data Warehouse. This assessment ensures that customizations are documented and compliant.
How can architects enforce governance in IFS Cloud?
Architects can enforce governance by embedding it into the development lifecycle. This includes using a Configuration Change Control Board (CCB) to evaluate implementation methods, automating governance through pipelines, and scheduling regular reviews to retire obsolete customizations.
What is the role of the Configuration Change Control Board (CCB)?
The CCB evaluates the method of implementation for customizations, ensuring they align with architectural best practices. It approves or rejects changes based on their potential risks and compliance with governance rules.
What is the Expiration Date Strategy?
The Expiration Date Strategy involves assigning review dates to all customizations. This ensures that workarounds or temporary solutions are retired when they become obsolete, preventing the accumulation of technical debt.
How can organizations prepare for IFS Cloud updates?
Organizations can prepare for IFS Cloud updates by auditing customizations, enforcing Data Impact Assessments, locking down APIs to public and supported interfaces, automating governance through pipelines, and training teams on governance responsibilities.
Core Concept: Consolidate Demand, Decentralize Delivery
Centralized purchasing separates the transactional flow (ordering) from the physical flow (delivery):
- Transactional Flow: Local purchase requisitions are consolidated into a single Purchase Order (PO) by a central purchaser. This PO is issued to the supplier as a unified order.
- Physical Flow: The supplier delivers goods directly to the demand site, eliminating internal inventory transactions and reducing logistical complexity.
Key Benefit: No internal inventory transactions are required between the demand site and the ordering site, as receipt and arrival registration occur at the demand site.
Strategic Configuration and Prerequisites
To implement centralized purchasing effectively, enforce the following data consistency rules:
1. Purchase Part Standardization
All sites must use identical:
- Part Numbers: Ensure the same part number is used across all locations.
- Unit of Measure (UoM): Standardize the purchase UoM and conversion factors to inventory UoM.
- Catalog Alignment: The central purchasing site’s catalog must include all parts from demand sites.
Failure to standardize: Leads to order errors, delayed deliveries, and increased operational costs.
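A pre-implementation audit of this standardization can be automated. The sketch below flags parts that are missing at a site or whose UoM and conversion factors diverge; the field names are illustrative placeholders, not IFS column names:

```python
def audit_part_consistency(site_parts):
    """Flag purchase parts whose basic data diverges between sites.

    `site_parts` maps site -> {part_no: {"uom": ..., "conv_factor": ...}}.
    Returns a list of (part_no, issue) pairs; empty means consistent.
    """
    issues = []
    all_parts = set().union(*(parts.keys() for parts in site_parts.values()))
    for part_no in sorted(all_parts):
        records = {site: parts[part_no]
                   for site, parts in site_parts.items() if part_no in parts}
        if len(records) < len(site_parts):
            missing = sorted(set(site_parts) - set(records))
            issues.append((part_no, f"missing at sites {missing}"))
        # All sites must agree on UoM and conversion factor:
        if len({(r["uom"], r["conv_factor"]) for r in records.values()}) > 1:
            issues.append((part_no, "UoM or conversion factor differs between sites"))
    return issues
```

Running a check like this against extracted basic data before go-live surfaces exactly the mismatches that would otherwise appear as order errors and delayed deliveries in production.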
2. Site Basic Data Setup
Configure site-level rules to define interactions between central and local entities:
- Validity Periods: Define time intervals for default purchasing sites.
- Pricing Logic: Choose whether prices are fetched from the Purchasing Site (PO Header) or Demand Site (PO Line).
- Strategic Note: Using «Demand Site» pricing simplifies part administration by limiting basic data setup to the demand site.
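The two pricing options above reduce to a simple resolution rule. This sketch makes the choice explicit; the string values are illustrative, not IFS configuration codes:

```python
def pricing_source_site(po_header_site, po_line_demand_site, pricing_logic):
    """Resolve which site supplies price data for a centralized PO line.

    'PURCHASING_SITE' takes prices from the central site on the PO header;
    'DEMAND_SITE' takes them from the requisitioning site on the PO line.
    """
    if pricing_logic == "PURCHASING_SITE":
        return po_header_site
    if pricing_logic == "DEMAND_SITE":
        return po_line_demand_site
    raise ValueError(f"unknown pricing logic: {pricing_logic!r}")
```

With «Demand Site» pricing, only the demand site needs price basic data maintained, which is why that option simplifies part administration.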
Operational Workflow
From Requisition to Order
The transition from local requisition to central order can be automated or manual:
- Automatic Detection: The «Central Order» option is enabled automatically if the demand site has valid centralized basic data.
- Manual Selection: Buyers can manually select the central order option and specify the site and part pricing method.
- Consolidation: Buyers can add lines to existing POs, converting normal POs to centralized POs.
Receipt and Arrival
Receipt and arrival registration are handled entirely by demand sites:
- No central action is required for part arrivals.
- Inventory transactions are recorded locally as usual.
- For direct customer deliveries, the end customer’s address is saved on the PO line, and the process is managed by the demand site.
Risk Mitigation and Data Integration
Data Consistency Risks
Inconsistent part data across sites can disrupt centralized purchasing. Mitigate risks by:
- Conducting a pre-implementation audit of part numbers, UoM, and catalogs.
- Using Data Mesh and OData projections for real-time data synchronization.
Testing Scenarios
Before go-live, test the following scenarios:
- Multi-site orders to a single supplier.
- Direct deliveries to end customers.
- Manual override of automated central order detection.
Implementation Checklist
Use this checklist to ensure a successful centralized purchasing implementation:
- Audit part numbers, UoM, and catalogs for consistency across all sites.
- Configure validity periods and pricing logic for each site.
- Test automated and manual central order processes.
- Simulate direct deliveries to end customers.
- Integrate Data Mesh for real-time data synchronization (if applicable).
- Train procurement teams on new workflows and error handling.
Key Performance Indicators (KPIs)
Measure the success of your centralized purchasing implementation with these KPIs:
- Reduction in the number of purchase orders by X%.
- Decrease in order processing time by Y days.
- Cost savings from bulk purchasing and improved supplier terms.
- Reduction in order errors and delivery delays.
Centralized Purchasing FAQ
How does the system determine the price for a centralized order?
A centralized order retrieves price-related information from either the Purchasing Site (PO Header) or the Demand Site (PO Line). This depends entirely on your specific configuration preferences for pricing logic.
Are internal inventory transactions required between the central site and local sites?
No. One of the key benefits of this model is that receipt and arrival registration occur directly at the demand site, effectively eliminating the need for complex internal inventory transactions.
What happens if centralized basic data is missing during requisition conversion?
The system is designed for safety; the «Central Order» option will not enable automatically if data is missing. However, buyers can intervene by manually selecting the option and specifying the necessary part pricing and site details.
Must part numbers be identical across all sites?
Yes, this is a strict prerequisite. For seamless processing, parts must share the same Part Number, Unit of Measure (UoM), and conversion factors across all participating sites.
How can Data Mesh improve centralized purchasing?
Data Mesh architecture facilitates real-time data synchronization across decentralized locations. This ensures consistency (e.g., matching part numbers) and significantly reduces errors inherent in multi-site procurement strategies.
