IFS Cloud Data Mesh Phase 2: Confirming Sharing Agreements & Data Contracts

TL;DR: Executive Summary

The Crux of Phase 2: Moving from abstract data concepts to a working Data Mesh requires contractually binding the relationship between Data Producers (Domains) and Data Consumers. This step is "Confirming Sharing Agreements."

The Action: In the Prototype Phase, organizations must formalize "Data Contracts" that specify the Schema (Structure), Service Level Agreements (Freshness/Uptime), and Semantics (Meaning) of the data products being exposed via IFS Cloud Projections or APIs.

The Outcome: By rigorously confirming these agreements during the prototype stage, you prevent "Integration Drift," reduce downstream reporting failures by 60%, and establish the trust necessary to scale from a pilot project to a full enterprise-wide Data Mesh.

What Problem Does This Article Solve?

In traditional ERP implementations, data integrations are often built on implied trust: "I hope the Manufacturing team doesn't change the column name in the database." This leads to fragile systems where a minor update in IFS Cloud breaks critical Power BI dashboards or external logistics feeds.

This article solves the problem of Data Fragility and Ambiguity. It provides a detailed blueprint for creating, negotiating, and validating "Sharing Agreements" (Data Contracts) during the prototyping phase. It guides implementation teams on how to transition from "tribal knowledge" about data to explicit, machine-readable guarantees, ensuring that the Data Mesh remains resilient even as the underlying IFS Cloud platform evolves through its semi-annual release cycles.

The Transition from Concept to Contract

Phase 2 of an IFS Cloud Data Mesh implementation - the Prototype Phase - is the moment of truth. In Phases 0 and 1, the organization defined the vision, established the governance committee, and identified the business domains. Now the rubber meets the road. We are no longer talking about "Manufacturing Data" in the abstract; we are building a specific Data Product (e.g., `ShopOrderPerformance_v1`) and exposing it to a consumer (e.g., the Corporate Finance Planning System).

The critical success factor in this phase is not just the technical code that moves the data; it is the Sharing Agreement that governs it. In the Data Mesh paradigm, data is treated as a product. Just as a physical product comes with a warranty and a specification sheet, a Data Product must come with a Sharing Agreement. This agreement explicitly defines what the consumer can expect and what the producer is obligated to deliver.

Confirming these agreements during the Prototype phase is vital because it establishes the template for the entire enterprise. If the prototype agreements are loose, vague, or technically unenforceable, the entire mesh will eventually collapse under the weight of broken dependencies. This guide explores the depth of these agreements within the specific context of the IFS Cloud architecture.

The Anatomy of an IFS Cloud Sharing Agreement

A Sharing Agreement in an IFS Cloud context is more than a PDF document stored in a SharePoint folder; it is often a combination of documentation and code-enforced policies (via API Gateways or IFS Projection configurations). To be effective, the agreement must cover four non-negotiable pillars.

1. Structural Schema (The Shape)

The agreement must rigidly define the data structure. In IFS Cloud, this relates to the Entity and Projection definitions.

  • Field Definitions: Explicitly stating that `Order_No` is a String(12), not an Integer.
  • Nullability: Guaranteeing which fields will never be null. This is critical for consumers like AI models that crash on null values.
  • Versioning: Committing to a versioning strategy (e.g., "We will expose this via `/v1/ShopOrder`. Breaking changes will move to `/v2/`").
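
To make this concrete, the structural pillar can be captured as a machine-readable schema that both sides test their code against. The sketch below (Python, using the jsonschema library) describes a hypothetical `ShopOrderPerformance_v1` record; the field names and types are illustrative assumptions, not the actual IFS entity definition.

```python
# Minimal sketch: the structural pillar of a Sharing Agreement expressed as a
# JSON Schema, plus a check of one sample record. Product and field names are
# illustrative assumptions, not the real IFS entity definition.
from jsonschema import validate, ValidationError

SHOP_ORDER_PERFORMANCE_V1 = {
    "type": "object",
    "properties": {
        "Order_No": {"type": "string", "maxLength": 12},                 # String(12), never an integer
        "Site": {"type": "string"},
        "Qty_Completed": {"type": "number"},
        "Finish_Date": {"type": ["string", "null"], "format": "date"},   # explicitly allowed to be null
    },
    "required": ["Order_No", "Site", "Qty_Completed"],                   # fields guaranteed to be present
    "additionalProperties": True,   # new fields may appear; removing one is a breaking change
}

sample = {"Order_No": "SO-1024", "Site": "SE-01", "Qty_Completed": 40, "Finish_Date": None}

try:
    validate(instance=sample, schema=SHOP_ORDER_PERFORMANCE_V1)
    print("Record conforms to ShopOrderPerformance_v1")
except ValidationError as err:
    print(f"Contract violation: {err.message}")
```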

2. Service Level Objectives (SLOs)

Data has a temporal dimension. The agreement must confirm the "Freshness" and "Availability" of the data product.

  • Latency: "Data will be available in the Data Mart 15 minutes after the transaction occurs in IFS Cloud."
  • Uptime: "The API Projection will be available 99.9% of the time during business hours."
  • Retention: "This data product contains a rolling 24-month history. Older data is archived."
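
These SLOs are only useful if they can be measured. A minimal freshness check might look like the following sketch, which assumes the product exposes a last-refresh timestamp in ISO 8601 format (a hypothetical field, named here for illustration) and tests it against the 15-minute budget above.

```python
# Minimal sketch: measuring the "Freshness" SLO. The 15-minute budget mirrors the
# clause above; the last-refresh timestamp (ISO 8601) is a hypothetical field.
from datetime import datetime, timedelta, timezone

FRESHNESS_BUDGET = timedelta(minutes=15)   # agreed maximum latency

def is_fresh(last_refresh_iso: str) -> bool:
    """Return True if the data product was refreshed within the agreed budget."""
    last_refresh = datetime.fromisoformat(last_refresh_iso)
    return datetime.now(timezone.utc) - last_refresh <= FRESHNESS_BUDGET

# Example: a refresh 10 minutes ago still meets the SLO; 20 minutes ago would breach it.
ten_minutes_ago = (datetime.now(timezone.utc) - timedelta(minutes=10)).isoformat()
print(is_fresh(ten_minutes_ago))   # True
```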

3. Semantic Definitions (The Meaning)

Structure is useless without meaning. The agreement must resolve ambiguity using the Business Glossary established in Phase 1.

  • Calculations: How is `NetMargin` calculated? Does it include overhead allocations?
  • Status Logic: What does a status of `Released` actually mean in the Shop Floor Workbench vs. the Planning module?
  • Master Data References: Confirming that `SiteID` references the corporate standard list of sites.
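
Where a calculation such as `NetMargin` is part of the product, it helps to pin the agreed definition down in executable form rather than prose alone. The formula in the sketch below is purely an assumed example; whether overhead allocations belong in it is exactly the question the agreement must settle.

```python
# Minimal sketch: an agreed semantic definition captured as code instead of prose.
# The formula is an assumption for illustration; whether overhead allocations are
# included is exactly what the Sharing Agreement has to state.
def net_margin(revenue: float, cogs: float, overhead_allocation: float) -> float:
    """NetMargin as defined in the agreement: revenue minus COGS minus allocated overhead."""
    return revenue - cogs - overhead_allocation

print(net_margin(revenue=1200.0, cogs=800.0, overhead_allocation=150.0))   # 250.0
```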

4. Security & Governance Policy

The agreement must define who can access the product and how that access is controlled via IFS Permission Sets.

  • Access Control: "Access requires the `MFG_ANALYST` Permission Set in IFS Cloud."
  • PII Handling: "Employee names are obfuscated in this view to comply with GDPR."
  • Usage Constraints: "This API is rate-limited to 1000 calls per hour to prevent system performance degradation."
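
Server-side enforcement of the usage constraint usually sits in an API gateway, but a disciplined consumer can also honour it on their side. The sketch below implements a simple client-side throttle for the 1000-calls-per-hour clause; the limit value is taken from the example agreement above.

```python
# Minimal sketch: a consumer-side throttle honouring the 1000-calls-per-hour clause.
# Server-side enforcement (API gateway limits) is separate; this only keeps a
# well-behaved consumer inside the agreed budget.
import time
from collections import deque

CALLS_PER_HOUR = 1000
WINDOW_SECONDS = 3600
_call_log: deque[float] = deque()   # timestamps of recent calls

def throttle() -> None:
    """Block until another call is allowed under the agreed rate limit."""
    now = time.monotonic()
    while _call_log and now - _call_log[0] > WINDOW_SECONDS:
        _call_log.popleft()                                   # drop calls outside the rolling window
    if len(_call_log) >= CALLS_PER_HOUR:
        time.sleep(WINDOW_SECONDS - (now - _call_log[0]))     # wait until the oldest call ages out
    _call_log.append(time.monotonic())
```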

Phase 2 Specifics: The Prototype Crucible

Why is "Confirming" these agreements emphasizing the Prototype phase? Because theory is perfect, but reality is messy. In Phase 2, we select one pilot Domain (usually a high-value, high-complexity domain like Manufacturing or Supply Chain) and one pilot Consumer. We essentially lock them in a room (metaphorically) and force them to negotiate a contract that works in the real world.

The "Mock Consumer" Validation

A key activity in this phase is the "Mock Consumer" test. Before the full integration is built, the Domain Team (Producers) publishes the draft Sharing Agreement (often as a Swagger/OpenAPI definition file generated from IFS Cloud). The Consumer Team then attempts to write code or build a report based strictly on that definition, without looking at the underlying database.

If the Consumer has to ask "Hey, what does column X mean?" or "Why is this field returning a null?", the Sharing Agreement has failed. The Prototype phase allows us to fail fast. It reveals the gaps in documentation and the hidden assumptions that developers make. Confirming the agreement means iterating on this cycle until the Consumer can successfully consume the data product using only the agreement as their guide.
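
In practice, the Mock Consumer test can be automated: the consumer validates a canned sample against the published contract schema and nothing else. The file names and payload in this sketch are hypothetical; the point is that the underlying database is never consulted.

```python
# Minimal sketch: automating the "Mock Consumer" test. The consumer validates a
# canned sample against the published contract schema only; the database is never
# consulted. File names and the sample payload are hypothetical.
import json
from jsonschema import Draft7Validator

with open("shop_order_performance_v1.schema.json") as f:   # published with the Sharing Agreement
    contract_schema = json.load(f)

with open("mock_response.json") as f:                      # sample the producer supplies
    mock_response = json.load(f)

errors = list(Draft7Validator(contract_schema).iter_errors(mock_response))
if errors:
    # Any error means the agreement is incomplete or wrong: iterate before building the real integration.
    for err in errors:
        print(f"Agreement gap at {list(err.path)}: {err.message}")
else:
    print("The Mock Consumer can work from the agreement alone.")
```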

Technical Implementation in IFS Cloud

How do we technically "codify" these agreements within the IFS Cloud ecosystem? We move away from direct SQL access (which bypasses business logic and security) and utilize the native capabilities of the platform.

In IFS Cloud, the primary mechanism for a Data Contract is the Projection. The Projection exposes entities and functions via REST APIs. The "Sharing Agreement" is technically represented by the OpenAPI Specification (OAS) of that Projection.

Validation Step: The Domain Owner uses the IFS API Explorer to generate the specification. The Consumer "signs" the agreement by successfully authenticating (via OAuth2) and retrieving data that matches the schema. If the Domain Owner changes the Projection (e.g., renames an attribute), the API versioning policy defined in the agreement dictates whether a new URL endpoint is required, protecting the consumer from breaking changes.
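
A minimal consumer-side "handshake" might look like the sketch below: obtain a token via the OAuth2 client-credentials grant, then call the projection endpoint named in the agreement. All URLs, the realm, the projection name, and the credentials are placeholders; take the exact paths and authentication settings from your IFS Cloud environment's API documentation.

```python
# Minimal sketch: the consumer-side handshake. Obtain a token via the OAuth2
# client-credentials grant, then read from the projection endpoint named in the
# agreement. Every URL, realm, projection name and credential below is a
# placeholder; take the real values from your environment's API documentation.
import requests

TOKEN_URL = "https://ifs.example.com/auth/realms/<realm>/protocol/openid-connect/token"
DATA_URL = "https://ifs.example.com/main/ifsapplications/projection/v1/ShopOrderPerformance.svc/Orders"

token = requests.post(
    TOKEN_URL,
    data={"grant_type": "client_credentials"},
    auth=("<client_id>", "<client_secret>"),
    timeout=30,
).json()["access_token"]

resp = requests.get(
    DATA_URL,
    headers={"Authorization": f"Bearer {token}"},
    params={"$top": 10},            # OData paging as exposed by the projection
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["value"][0])      # the first record should match the agreed schema
```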

The IFS Data Migration Manager (DMM) isn't just for moving legacy data; it's a powerful tool for validating data quality rules within the mesh.

Validation Step: Before data is published as a "Certified Data Product," it can pass through validation checks in DMM or via IFS Business Rules. The Sharing Agreement might specify: "The `ProjectID` field must match a valid project in the Project Management module." We configure these checks within the system logic. If data fails this check, it is flagged as "Non-Conforming," violating the agreement's quality clause.
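
Expressed outside the platform, the quality clause is a simple referential check, as in the sketch below. In a real implementation the rule lives in DMM validation or an IFS Business Rule; the project register and rows here are invented for illustration.

```python
# Minimal sketch: the quality clause as a standalone referential check. In practice
# this rule would live in DMM validation or an IFS Business Rule; the project
# register and rows below are invented for illustration.
valid_projects = {"PRJ-100", "PRJ-200", "PRJ-300"}           # corporate project register

rows = [
    {"Order_No": "SO-1024", "ProjectID": "PRJ-100"},
    {"Order_No": "SO-1025", "ProjectID": "PRJ-999"},         # references an unknown project
]

non_conforming = [r for r in rows if r["ProjectID"] not in valid_projects]
for row in non_conforming:
    # Flagged rows violate the agreement's quality clause and are withheld from the certified product.
    print(f"Non-Conforming: {row['Order_No']} references unknown project {row['ProjectID']}")
```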

For internal consumers (users within IFS Cloud), the Data Product often takes the form of an Information Source used by Business Reporter or Lobbies.

Validation Step: The Sharing Agreement here focuses on Performance and Access. "This Lobby Element will load within 2 seconds." To confirm this, the prototype phase involves load testing the underlying SQL view or Information Source to ensure that complex joins do not degrade system performance for other users, honoring the "Usage Constraints" of the agreement.
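
Confirming the performance clause can be as simple as sampling response times and checking the tail latency, as in this sketch. The endpoint, token, and sample size are placeholders, and a production load test would also run at realistic concurrency.

```python
# Minimal sketch: sampling response times for the underlying endpoint and checking
# the tail against the 2-second clause. URL, token and sample size are placeholders;
# a real test would also run at realistic concurrency.
import time
import requests

URL = "https://ifs.example.com/main/ifsapplications/projection/v1/ShopOrderPerformance.svc/Orders"
HEADERS = {"Authorization": "Bearer <token>"}
SAMPLES = 50

timings = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    requests.get(URL, headers=HEADERS, params={"$top": 100}, timeout=10)
    timings.append(time.perf_counter() - start)

p95 = sorted(timings)[int(0.95 * SAMPLES) - 1]   # approximate 95th percentile
print(f"p95 = {p95:.2f}s -> {'meets' if p95 <= 2.0 else 'breaches'} the 2-second clause")
```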

The Negotiation Process: Breaking Silos

Confirming an agreement is a human process as much as a technical one. It involves negotiation between the Domain Owner (who knows the data) and the Consumer (who needs the data). In many organizations, these two groups rarely speak the same language. The Prototype Phase forces this dialogue.

Common Friction Points & Resolutions

  • Data Freshness
    The Producer's stance: "Real-time extraction hurts my transactional performance. I can give you a nightly dump."
    The resolution (the agreement): Near-real-time delivery via IFS Connect / Event streams for critical data, and batch extraction for historical analysis.
  • Data Quality
    The Producer's stance: "I can't guarantee there are no nulls in the `Description` field; users leave it blank."
    The resolution (the agreement): A "Default Value" transformation (e.g., replacing NULL with "N/A") is applied before publication, so the consumer's script doesn't break (see the sketch after the note below).
  • History
    The Producer's stance: "I only keep the current year active in the main table."
    The resolution (the agreement): A Data Lake storage tier where the Domain exports history for the Consumer's long-term trend analysis.

Note: The Governance Committee (established in Phase 0) acts as the arbitrator when these negotiations reach an impasse.
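
The "Default Value" resolution from the Data Quality point above is trivial to codify, which is precisely why it belongs in the agreement rather than in each consumer's script. The field name in this sketch is illustrative.

```python
# Minimal sketch: the "Default Value" transformation agreed in the Data Quality item.
# Blank or missing descriptions become "N/A" before publication, so consumer scripts
# never see NULL. The field name is illustrative.
def apply_defaults(record: dict) -> dict:
    cleaned = dict(record)
    if not cleaned.get("Description"):           # covers both None and the empty string
        cleaned["Description"] = "N/A"
    return cleaned

print(apply_defaults({"Order_No": "SO-1024", "Description": None}))
# {'Order_No': 'SO-1024', 'Description': 'N/A'}
```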

Governance & Compliance

"Trust, but verify."

Ensuring Compliance in Agreements

A Sharing Agreement is treated as binding within the organization. During the prototype confirmation, the Compliance Officer must sign off on the data product. This is particularly crucial for organizations subject to GDPR, ITAR, or SOX.

The PII Challenge: If the prototype data product includes Human Resources data, the agreement must explicitly state how sensitive fields are handled. Are they masked? Are they encrypted? Confirming the agreement involves demonstrating to the Security Architect that the IFS Cloud Permission Sets are correctly configured so that a Consumer with "Read Only" access cannot see salary data, even via the API.
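
Confirming this is a negative test: call the product with a token that carries only the read-only Permission Set and assert that no sensitive fields come back. The endpoint, token, and field names below are placeholders for illustration.

```python
# Minimal sketch: a negative test for the PII clause. Call the product with a token
# that carries only the read-only Permission Set and assert that no sensitive fields
# come back. Endpoint, token and field names are placeholders for illustration.
import requests

URL = "https://ifs.example.com/main/ifsapplications/projection/v1/EmployeeAssignment.svc/Assignments"
SENSITIVE_FIELDS = {"Salary", "BankAccount"}     # fields the agreement says must never be exposed

resp = requests.get(
    URL,
    headers={"Authorization": "Bearer <read_only_consumer_token>"},
    params={"$top": 5},
    timeout=30,
)
resp.raise_for_status()

leaked = {f for record in resp.json().get("value", []) for f in record if f in SENSITIVE_FIELDS}
assert not leaked, f"Sharing Agreement violated: sensitive fields exposed: {leaked}"
print("Read-only consumer sees no sensitive fields.")
```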

The Audit Trail: The agreement must also define the logging requirements. "Every access to this API must be logged." During the prototype confirmation, we verify that IFS History Logging or API Gateway logs are capturing the necessary metadata to satisfy a future external audit.

From Prototype to Production: The Final Handshake

Once the Schema is validated, the SLOs are tested, and the Security is audited, the Sharing Agreement is "Confirmed." What does this mean operationally?

It means the Data Product is added to the Enterprise Data Catalog. It moves from a "Lab" status to a "Production" status. The Domain Team is now on the hook for supporting it. If the API goes down at 2 AM, the Domain Team (or their designated support arm) is alerted, not central IT.

Confirming the agreement in Phase 2 creates a template. Organizations typically find that the first agreement takes 4 weeks to negotiate. The second takes 2 weeks. By the time they reach Phase 3 (Scaling), confirming a sharing agreement becomes a standardized, rapid workflow, enabling the exponential growth of the Data Mesh.

Key Takeaway: You cannot scale what you cannot define. Confirming Sharing Agreements is the act of defining your data business.

Frequently Asked Questions

This is a "Breaking Change." The Sharing Agreement dictates the protocol for this. Typically, the Domain Owner must maintain the old version of the API (v1) while publishing the new structure as (v2). They must provide a "Deprecation Notice" to all Consumers (usually 3-6 months) to allow them time to migrate to v2. The Data Mesh governance prevents the Owner from simply overwriting v1 and breaking downstream consumers.

While specialized "Data Catalog" or "Data Contract" software exists (like Collibra or Alation), for the Prototype Phase, simple tools work. A version-controlled repository (like Git) containing the OpenAPI specifications (YAML/JSON) and a Markdown document describing the SLAs is sufficient. The key is version control and accessibility, not necessarily buying expensive new software immediately.

How do IFS Cloud's semi-annual releases affect existing Sharing Agreements?

IFS Cloud releases updates twice a year (e.g., 23R1, 23R2). These updates can modify the underlying Core Projections. The Sharing Agreement places the burden on the Domain Owner to regression test their Data Products against the release candidates. They must ensure that the "Public" interface defined in the agreement remains stable, even if they have to adjust the internal mapping to accommodate the IFS platform update.

Who needs to sign a Sharing Agreement?

At minimum, the Domain Owner (Producer) and the Lead Consumer. For critical data sets (Master Data, Financials), the Data Governance Lead and Security Architect should also act as signatories to ensure enterprise standards are met.

Can we skip Sharing Agreements for data consumed within the same domain?

Technically yes, but the Data Mesh value comes from inter-domain sharing. Internal data usage usually doesn't require the formal rigidity of a Sharing Agreement because the producer and consumer are on the same team. These agreements are designed for the boundaries between teams, where communication gaps usually cause failure.
