Data Governance Explained: Framework, Benefits, and Best Practices

What Problem Does This Article Solve?

The "Fragile Pipeline" Dilemma.
In traditional ERP implementations, data extraction is often built on implicit trust and "tribal knowledge." Developers query SQL views or extract Excel dumps without a formal contract. This works initially but fails catastrophically when the system evolves. When IFS Cloud receives a twice-yearly release update (e.g., 25R1), or when a business process changes, these fragile pipelines break because there was no agreed-upon "Interface Contract."

This article provides a rigorous framework for Confirming Sharing Agreements. It solves the problem of ambiguity. It guides Enterprise Architects and Data Owners on how to transition from "sending data" to "serving a product," ensuring that every data exchange is governed by explicit schemas, guaranteed SLAs, and strictly defined semantics. It transforms data from a byproduct of the ERP into a reliable, engineered asset.

Phase 2: The Transition from Concept to Contract

Phase 0 and Phase 1 of an IFS Cloud Data Mesh implementation are primarily strategic. They involve defining the vision, establishing the governance committee, and mapping the high-level domains. Phase 2: The Prototype is where the rubber meets the road. It is the phase where we stop talking about "Manufacturing Data" in the abstract and start building specific, versioned Data Products.

The success of Phase 2 depends entirely on the rigorous confirmation of Sharing Agreements. In the Data Mesh paradigm, data is treated as a product. Just as a physical product like a smartphone comes with a specification sheet, a user manual, and a warranty, a Data Product must come with a Sharing Agreement. This agreement explicitly defines what the consumer can expect and what the producer (the Domain Team) is obligated to deliver.

Why "Confirm" in Prototype?

You might ask, "Why do we need to confirm agreements now? Can't we just build the integration?" The answer lies in the cost of change. Changing a data contract during the Prototype phase costs pennies; changing it once hundreds of reports and AI models depend on it costs thousands.

The "Confirmation" process is a negotiation. It is a dialogue between the Domain Owner (who knows the data's limitations) and the Consumer (who knows the business need). This dialogue often exposes hidden complexities: "You want real-time inventory? We only calculate weighted average cost nightly." Confirming the agreement resolves these discrepancies before code is written.

The "Mock Consumer" Test

A critical activity in Phase 2 is the "Mock Consumer" validation. Before the full integration is built, the Domain Team publishes the Draft Sharing Agreement (often an OpenAPI specification). The Consumer Team then attempts to write code or design a report based strictly on that document, without looking at the underlying database. If they have to ask questions, the agreement is incomplete. This "Clean Room" testing ensures the contract is self-describing and robust.
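
To make this concrete, here is a minimal sketch of an automated Clean Room check, assuming the draft agreement is exported as an OpenAPI 3.0 JSON file (the file name shop_order_v1.json is a placeholder). It fails the review if any field's type or nullability is left implicit:

```python
import json

# Hypothetical draft contract exported from the IFS API Explorer.
with open("shop_order_v1.json") as f:
    spec = json.load(f)

schemas = spec.get("components", {}).get("schemas", {})
problems = []

for name, schema in schemas.items():
    required = set(schema.get("required", []))
    for field, meta in schema.get("properties", {}).items():
        # Every field must declare a type; "we'll figure it out later" fails review.
        if "type" not in meta and "$ref" not in meta:
            problems.append(f"{name}.{field}: no type declared")
        # Optional fields must say so explicitly (OpenAPI 3.0 'nullable'), not by omission.
        if field not in required and not meta.get("nullable", False):
            problems.append(f"{name}.{field}: nullability is ambiguous")

if problems:
    print("Draft agreement is incomplete:")
    for p in problems:
        print(" -", p)
else:
    print("Schema section passes the Mock Consumer check.")
```

If the Consumer Team has to phone the Domain Team to answer any of these questions manually, the same gap exists in the contract itself.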

The Four Pillars of an IFS Cloud Sharing Agreement

A Sharing Agreement is not a vague email promising to "send the spreadsheet." Within the context of IFS Cloud and modern Data Mesh architectures, it is a precise technical and legal construct. To be considered "Confirmed," an agreement must fully address four non-negotiable pillars.

Pillar 1: Schema

The agreement must rigidly define the data structure. In the IFS Cloud world, this typically relates to the definition of the Entity or the Projection being exposed.

  • Field Definitions: It is not enough to say "Order Amount." The agreement must specify: Is it a Float or Decimal? How many decimal places? If it is a Date, is it ISO 8601 format (YYYY-MM-DD) or a Unix Timestamp?
  • Nullability Contracts: This is the most common cause of integration failure. The agreement must explicitly list which fields are Mandatory (Guaranteed Not Null) and which are Optional. Consumers (like AI models) often crash on unexpected nulls.
  • Enumerations: IFS makes heavy use of "Client" vs "DB" values (e.g., 'Planned' vs '10'). The agreement must confirm which value is exposed. Best practice dictates exposing the readable Client value or providing a lookup map.
  • Versioning Strategy: The agreement must state the versioning policy. "This product is exposed via /v1/ShopOrder. Breaking changes will force a move to /v2/." This protects consumers from the "Evergreen" updates of IFS Cloud. (A consumer-side sketch of these rules follows this list.)
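
A consumer can turn these clauses into executable checks by mirroring the agreed schema in a typed model. The sketch below uses Pydantic (a library choice of ours, not something the agreement mandates); the field names and the /v1/ShopOrder payload are hypothetical:

```python
from datetime import date
from decimal import Decimal
from enum import Enum
from typing import Optional

from pydantic import BaseModel, Field


class OrderStatus(str, Enum):
    # Readable "Client" values per the agreement, not DB codes like '10'.
    PLANNED = "Planned"
    RELEASED = "Released"
    CLOSED = "Closed"


class ShopOrderV1(BaseModel):
    """Consumer-side mirror of the hypothetical /v1/ShopOrder contract."""

    order_no: str                      # Mandatory: guaranteed not null
    site_id: str                       # Mandatory: must match the corporate MDM list
    status: OrderStatus                # Enumeration: Client values only
    order_amount: Decimal = Field(max_digits=14, decimal_places=2)
    need_date: date                    # ISO 8601 (YYYY-MM-DD) on the wire
    description: Optional[str] = None  # Optional: consumers must tolerate null


# Validation fails loudly if the producer drifts from the contract.
row = ShopOrderV1.model_validate({
    "order_no": "SO-1001",
    "site_id": "S1",
    "status": "Released",
    "order_amount": "125.50",
    "need_date": "2025-06-01",
})
print(row.order_amount + Decimal("0.50"))  # Decimal arithmetic, no float drift
```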

Pillar 2: Service Level Objectives (SLOs)

Data has a temporal dimension. Schema defines what data is; SLOs define when and how it is delivered. A structurally perfect dataset is useless if it arrives 4 hours too late for the morning shipping meeting.

Freshness (Latency)

The agreement must specify the maximum age of the data: "Data in this API reflects transactions up to 5 minutes ago," or "This is a nightly snapshot, refreshed at 02:00 UTC."
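
A freshness clause is only useful if it is checked. Assuming the product exposes a last-refreshed timestamp (the field and its ISO 8601 format are assumptions the agreement would pin down), a consumer-side guard might look like this:

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(minutes=5)  # the freshness clause from the Sharing Agreement


def assert_fresh(last_refreshed_utc: str) -> None:
    """Raise if the payload is older than the agreed freshness SLO."""
    refreshed = datetime.fromisoformat(last_refreshed_utc)
    age = datetime.now(timezone.utc) - refreshed
    if age > MAX_AGE:
        raise RuntimeError(f"Stale data: {age} exceeds the {MAX_AGE} freshness SLO")


# Example: a payload refreshed two minutes ago passes the 5-minute check.
recent = (datetime.now(timezone.utc) - timedelta(minutes=2)).isoformat()
assert_fresh(recent)
```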

Availability (Uptime)

What is the guaranteed uptime? 99.9%? Does the API go down during the IFS Cloud maintenance window? The consumer needs to know so they can build appropriate retry logic.
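
Since no uptime guarantee is 100%, the consumer's side of the bargain is resilient client code. A minimal sketch with plain exponential backoff, treating 5xx responses (e.g., during the maintenance window) as transient:

```python
import time

import requests


def get_with_retry(url: str, attempts: int = 5) -> requests.Response:
    """Retry transient failures with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(attempts):
        try:
            resp = requests.get(url, timeout=10)
            if resp.status_code < 500:  # 5xx (maintenance window) is transient
                return resp
        except requests.ConnectionError:
            pass  # network blip: also transient
        time.sleep(2 ** attempt)
    raise RuntimeError(f"{url} still failing after {attempts} attempts")
```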

Retention Policy

How far back does the data go? IFS Cloud operational tables might hold 10 years, but a high-performance API might only serve the "Active" rolling 24 months. This must be codified.

Pillar 3: Semantics

Structure is useless without meaning. The "Semantic Gap" is where business value is lost. The Sharing Agreement must resolve ambiguity using the Business Glossary established in Phase 1.

  • Calculation Logic: If the data product exposes `NetMargin`, how is that calculated? Does it include overhead allocations? Does it account for rebates? The formula must be referenced (see the sketch after this list).
  • State Definitions: What does a status of `Released` actually mean in the Shop Floor Workbench compared to the Planning module?
  • Master Data References: The agreement must confirm that fields like `SiteID` or `CustomerID` reference the corporate standard MDM list, ensuring joinability with other domains.
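
For the calculation-logic clause, one lightweight approach is to codify the agreed formula next to the product so it cannot drift from the glossary. The formula and glossary ID below are purely illustrative, not an authoritative definition of `NetMargin`:

```python
from decimal import Decimal


def net_margin(revenue: Decimal, cogs: Decimal, rebates: Decimal) -> Decimal:
    """NetMargin per Business Glossary term BG-FIN-042 (hypothetical ID).

    Illustrative agreed formula: (revenue - rebates - cogs) / (revenue - rebates).
    Explicitly EXCLUDES overhead allocations; see the Sharing Agreement.
    """
    net_revenue = revenue - rebates
    return (net_revenue - cogs) / net_revenue


print(net_margin(Decimal("1000"), Decimal("700"), Decimal("50")))  # ~0.263
```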

Pillar 4: Security & Access Control

The agreement must define who can access the product and how that access is controlled via IFS Cloud's security model.

  • Compliance & PII: If the data contains Personally Identifiable Information (HR data, Customer Contacts), the agreement must state how it is protected. "Employee names are masked for consumers with the `ANALYST_BASIC` role."
  • Permission Sets: The agreement should specify the IFS Permission Set required to consume the API (e.g., `DATAMESH_FINANCE_READ`).
  • Usage Constraints: To protect the operational performance of the ERP, the agreement may impose rate limits. "Consumers are limited to 1000 API calls per hour." (A throttle-aware client is sketched below.)
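
On the consumer side, a usage constraint translates into a throttle-aware client. The sketch below assumes the producer signals throttling with HTTP 429 and a standard Retry-After header; the agreement should confirm the actual mechanism:

```python
import time

import requests


def polite_get(url: str, headers: dict) -> requests.Response:
    """Honor the agreed rate limit instead of hammering the operational ERP."""
    while True:
        resp = requests.get(url, headers=headers, timeout=10)
        if resp.status_code != 429:
            return resp
        # Producer says we are over the hourly call budget: back off.
        wait = int(resp.headers.get("Retry-After", "60"))
        time.sleep(wait)
```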

Technical Implementation: Codifying the Contract in IFS Cloud

Confirming a sharing agreement is not just a paperwork exercise. In the Prototype Phase, we must implement the agreement technically within the IFS Cloud architecture. We move away from direct SQL access (which is insecure and bypasses business logic) and utilize the native capabilities of the platform to enforce the contract.

Projections & API Explorer

In IFS Cloud, the primary mechanism for a Data Contract is the Projection. The Projection exposes entities via OData/REST APIs.

Implementation: The Domain Owner uses the IFS API Explorer to generate the OpenAPI Specification (OAS) JSON file. This file is the technical contract. It defines every endpoint, data type, and required parameter. The Consumer "signs" the agreement by successfully authenticating (via OAuth2) and parsing this OAS file to build their client.
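
In code, "signing" the contract looks roughly like the sketch below: obtain a token via the OAuth2 client-credentials grant, then query the projection's OData endpoint exactly as the OAS file describes it. All URLs, credentials, and field names are placeholders; the real values depend on your IFS Cloud installation and IAM configuration:

```python
import requests

# Placeholders: take the real values from your IFS Cloud instance and IAM realm.
TOKEN_URL = "https://ifs.example.com/auth/realms/main/protocol/openid-connect/token"
API_URL = "https://ifs.example.com/main/ifsapplications/projection/v1/ShopOrderSvc.svc/ShopOrders"

# 1. Authenticate (OAuth2 client-credentials grant).
token = requests.post(TOKEN_URL, data={
    "grant_type": "client_credentials",
    "client_id": "datamesh-consumer",  # hypothetical registered client
    "client_secret": "***",
}, timeout=10).json()["access_token"]

# 2. Consume the projection exactly as the OAS file describes it.
resp = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {token}"},
    params={"$top": "10", "$select": "OrderNo,Objstate"},  # standard OData query options
    timeout=10,
)
resp.raise_for_status()
for order in resp.json()["value"]:  # OData wraps result sets in "value"
    print(order["OrderNo"], order["Objstate"])
```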

Data Migration Manager (DMM)

The IFS Data Migration Manager (DMM) is not just for legacy migration; it is a potent validation engine for the Data Mesh.

Implementation: Before data is "Certified" for sharing, it can pass through DMM validation rules. The Sharing Agreement might specify: "ProjectID must exist in the Project Module." DMM enforces this integrity check. If the data fails, it is flagged as "Non-Conforming," protecting the consumer from bad data.
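
The same integrity rule can be mirrored in the certification step outside DMM. A minimal sketch of the "ProjectID must exist in the Project Module" check, with both datasets stubbed in as plain Python collections:

```python
# Stub data standing in for the Project Module's master list and the outbound rows.
valid_project_ids = {"P100", "P200", "P300"}

outbound_rows = [
    {"order_no": "SO-1", "project_id": "P100"},
    {"order_no": "SO-2", "project_id": "P999"},  # unknown project
]

conforming, non_conforming = [], []
for row in outbound_rows:
    # The Sharing Agreement's rule: ProjectID must exist in the Project Module.
    if row["project_id"] in valid_project_ids:
        conforming.append(row)
    else:
        non_conforming.append(row)

print(f"Certified for sharing: {len(conforming)} rows")
print(f"Flagged Non-Conforming: {non_conforming}")
```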

Information Sources

For internal consumers (e.g., users viewing Lobbies or Business Reporter), the Data Product is often an Information Source (IS).

Implementation: The agreement focuses on Performance and Access. "This Lobby Element will load within 2 seconds." Confirming the agreement involves load-testing the underlying IS or Quick Information Source (QIS) to ensure that complex joins do not degrade system performance for other users.
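
Confirming a two-second clause means measuring it. A crude single-user sampling sketch is shown below (the endpoint URL is hypothetical, and a real test would also simulate concurrent users):

```python
import time

import requests

URL = "https://ifs.example.com/lobby-data-source"  # hypothetical endpoint behind the Lobby Element
BUDGET_SECONDS = 2.0  # the performance clause in the agreement

timings = []
for _ in range(20):
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    timings.append(time.perf_counter() - start)

print(f"worst={max(timings):.2f}s avg={sum(timings) / len(timings):.2f}s")
assert max(timings) <= BUDGET_SECONDS, "SLO breached: tune the IS or renegotiate"
```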

The Negotiation Process: Breaking Silos

Confirming an agreement is a human process as much as a technical one. It involves negotiation between the Domain Owner (Producer), who understands the data's generation and limitations, and the Consumer, who understands the business requirement. In many organizations, these two groups rarely speak the same language. The Prototype Phase forces this dialogue to happen.

The Role of the Governance Committee:
Occasionally, negotiations stall. The Consumer demands 100% real-time data, but the Producer knows this will crash the production server. This is where the Data Governance Committee (established in Phase 0) steps in. They act as the arbitrator, balancing the business value of the request against the technical cost and risk, ultimately ruling on the final terms of the Sharing Agreement.


Common Friction Points & Resolutions

| Friction Point | The Producer's Stance | The Resolution (The Agreement) |
| --- | --- | --- |
| Data Freshness | "Real-time extraction hurts my transactional performance. I can only provide a nightly dump." | The agreement specifies Near-Real-Time via IFS Connect / Event streams for critical operational data, and batch processing for historical analysis. |
| Data Quality | "I can't guarantee no nulls in the `Description` field because users leave it blank." | The agreement mandates a Transformation Rule: the Producer will replace NULL with "N/A" before publication, so consumer scripts don't break. |
| History | "I only keep the current active year in the main transaction table." | The agreement defines a Data Lake storage tier (e.g., Azure Data Lake) where the Domain exports history for the Consumer's long-term trend analysis. |
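
The Data Quality resolution in the table above is trivial to codify. A sketch of the producer-side transformation rule, applied before publication:

```python
rows = [
    {"part_no": "A1", "description": "Bracket"},
    {"part_no": "A2", "description": None},  # user left it blank
]

# Transformation Rule from the Sharing Agreement: publish "N/A", never NULL.
for row in rows:
    if row["description"] is None:
        row["description"] = "N/A"

print(rows)
```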

Lifecycle Management: When the Agreement Changes

A Sharing Agreement is not a static artifact; it is a living document. IFS Cloud is an "Evergreen" platform, receiving functional updates twice a year. Business processes change. New regulations (like ESG reporting) emerge.

Therefore, the "Confirmation" process must include a Change Management Protocol.

Deprecation Policy

What happens when a data product is retired? The agreement must specify a "Deprecation Notice Period" (e.g., 6 months). The Producer cannot simply turn off the API; they must notify all registered Consumers and provide a migration path to the new version.

Breaking Changes

If the Producer renames a column or changes a data type, this is a "Breaking Change." The agreement dictates that this triggers a major version increment (e.g., from v1 to v2). The v1 endpoint must remain active and supported for a defined period to allow Consumers to refactor their code.
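
Consumers can automate their side of this lifecycle clause. The sketch below pins /v1/ and watches for the HTTP Sunset header (RFC 8594) as a deprecation signal; whether a given endpoint emits this header is an assumption the agreement itself should confirm:

```python
import requests

# Pin the major version the contract was signed against (URL hypothetical).
V1_URL = "https://ifs.example.com/main/ifsapplications/projection/v1/ShopOrderSvc.svc/ShopOrders"

resp = requests.get(V1_URL, headers={"Authorization": "Bearer ***"}, timeout=10)

# RFC 8594: the Sunset header announces when this endpoint will be retired.
sunset = resp.headers.get("Sunset")
if sunset:
    print(f"WARNING: v1 retires on {sunset}. Plan the v2 migration now.")
```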

From Prototype to Production

Once the Schema is validated, the SLOs are tested via the Mock Consumer, and the Security is audited by the CISO, the Sharing Agreement is formally "Confirmed."

What does this mean operationally? It means the Data Product is added to the Enterprise Data Catalog. It moves from a "Lab" status to a "Production" status. The Domain Team is now accountable for supporting it. If the API goes down at 2 AM, the Domain Team (or their designated support arm) is alerted, not central IT. Confirming the agreement in Phase 2 creates the template for the entire organization. It establishes the "Trust Architecture" required to scale from a single pilot to a comprehensive enterprise Data Mesh.

Frequently Asked Questions

What happens if the Domain Owner needs to change the data structure after the agreement is confirmed?

This constitutes a "Breaking Change." The Sharing Agreement dictates the strict protocol for this scenario. Typically, the Domain Owner is required to maintain the old version of the API (v1) while simultaneously publishing the new structure as (v2). They must provide a formal "Deprecation Notice" to all registered Consumers (usually 3-6 months) to allow them sufficient time to update their integrations. The Data Mesh governance framework prevents the Owner from simply overwriting v1 and breaking downstream consumers.

Do we need specialized software to manage Sharing Agreements in the Prototype Phase?

While specialized "Data Catalog" or "Data Contract" software platforms exist (such as Collibra, Alation, or Atlan), they are not strictly necessary for the Prototype Phase. Simple, accessible tools often work best initially. A version-controlled repository (like Git) containing the OpenAPI specifications (YAML/JSON) and a Markdown document describing the SLAs is sufficient. The critical factors are version control, discoverability, and accessibility, rather than purchasing expensive new software immediately.

How do IFS Cloud's "Evergreen" release updates affect a confirmed Sharing Agreement?

IFS Cloud releases functional updates twice a year (e.g., 25R1, 25R2). These updates can occasionally modify the underlying Core Projections or database views. The Sharing Agreement places the burden of stability on the Domain Owner. They must perform regression testing on their Data Products against the Release Candidates. They must ensure that the "Public" interface defined in the agreement remains stable for consumers, even if they have to adjust the internal mapping or logic to accommodate the changes in the IFS platform.

Who needs to sign off on a Sharing Agreement?

At a minimum, the agreement requires the sign-off of the Domain Owner (Producer) and the Lead Consumer. However, for critical enterprise data sets (such as Master Data, Financials, or HR), the Data Governance Lead and the Security Architect should also act as signatories. This ensures that the agreement complies with enterprise-wide standards for security, naming conventions, and regulatory compliance (GDPR/SOX).

Can a Sharing Agreement be used for data consumed within a single domain?

Technically yes, but the primary value of the Data Mesh architecture comes from inter-domain sharing. Internal data usage (e.g., a Manufacturing report used by a Manufacturing planner) usually does not require the formal rigidity of a Sharing Agreement because the producer and consumer are often on the same team or report to the same manager. These agreements are specifically designed to bridge the boundaries between domains, where communication gaps and misaligned priorities usually cause integration failures.

What role does metadata play in a Sharing Agreement?

Metadata is the "label on the can." It makes the data product discoverable. The Sharing Agreement should mandate specific metadata tags (e.g., Domain Name, Data Classification, Refresh Rate, Owner Contact Info). This allows the Data Product to be indexed by the Enterprise Data Catalog, allowing other users in the organization to find the data they need without sending emails to IT to ask "where is the sales data?"