Data Governance Explained: Framework, Benefits, and Best Practices
 

TL;DR: Executive Summary

The Prototype Phase (Phase 2) is the "Crucible of Trust." It is where abstract Data Mesh concepts must be converted into legally binding "Data Contracts" between Producers (IFS Cloud Domains) and Consumers.


The Risk

Without formal sharing agreements, data integrations drift. A minor schema change in an IFS Projection can silently break downstream analytics, costing thousands in remediation.

The Mechanism

A "Sharing Agreement" is not just a document; it is a technical specification (OpenAPI/Swagger) combined with Service Level Objectives (SLAs) regarding freshness, semantic meaning, and security.

The Outcome

Confirming these agreements before scaling ensures that your IFS Cloud Data Mesh is resilient, version-controlled, and trusted by the business, enabling a true "Data as a Product" ecosystem.

What Problem Does This Article Solve?

The "Fragile Pipeline" Dilemma.
In traditional ERP implementations, data extraction is often built on implicit trust and "tribal knowledge." Developers query SQL views or extract Excel dumps without a formal contract. This works initially but fails catastrophically when the system evolves. When IFS Cloud receives a bi-annual release update (e.g., 25R1), or when a business process changes, these fragile pipelines break because there was no agreed-upon "Interface Contract."

This article provides a rigorous framework for Confirming Sharing Agreements. It solves the problem of ambiguity. It guides Enterprise Architects and Data Owners on how to transition from "sending data" to "serving a product," ensuring that every data exchange is governed by explicit schemas, guaranteed SLAs, and strictly defined semantics. It transforms data from a byproduct of the ERP into a reliable, engineered asset.

Phase 2: The Transition from Concept to Contract

Phase 0 and Phase 1 of an IFS Cloud Data Mesh implementation are primarily strategic. They involve defining the vision, establishing the governance committee, and mapping the high-level domains. Phase 2: The Prototype is where the rubber meets the road. It is the phase where we stop talking about "Manufacturing Data" in the abstract and start building specific, versioned Data Products.

The success of Phase 2 depends entirely on the rigorous confirmation of Sharing Agreements. In the Data Mesh paradigm, data is treated as a product. Just as a physical product like a smartphone comes with a specification sheet, a user manual, and a warranty, a Data Product must come with a Sharing Agreement. This agreement explicitly defines what the consumer can expect and what the producer (the Domain Team) is obligated to deliver.

Why "Confirm" in Prototype?

You might ask, "Why do we need to confirm agreements now? Can't we just build the integration?" The answer lies in the cost of change. Changing a data contract during the Prototype phase costs pennies; changing it once hundreds of reports and AI models depend on it costs thousands.

The "Confirmation" process is a negotiation. It is a dialogue between the Domain Owner (who knows the data's limitations) and the Consumer (who knows the business need). This dialogue often exposes hidden complexities: "You want real-time inventory? We only calculate weighted average cost nightly." Confirming the agreement resolves these discrepancies before code is written.

The "Mock Consumer" Test

A critical activity in Phase 2 is the "Mock Consumer" validation. Before the full integration is built, the Domain Team publishes the Draft Sharing Agreement (often an OpenAPI specification). The Consumer Team then attempts to write code or design a report based strictly on that document, without looking at the underlying database. If they have to ask questions, the agreement is incomplete. This "Clean Room" testing ensures the contract is self-describing and robust.
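To make the test concrete, here is a minimal sketch (in Python) of what a Mock Consumer check might look like, assuming the draft agreement is published as an OpenAPI JSON file. The file name, endpoint path, schema name, and field list are illustrative placeholders, not actual IFS Cloud artifacts.

```python
import json

# Illustrative only: file name, endpoint and field names are assumptions.
EXPECTED_ENDPOINT = "/v1/ShopOrder"
EXPECTED_FIELDS = {"OrderNo": "string", "Site": "string", "RevisedQtyDue": "number"}

def validate_draft_contract(oas_path: str) -> list[str]:
    """Return a list of gaps the Mock Consumer found in the draft agreement."""
    with open(oas_path, encoding="utf-8") as f:
        spec = json.load(f)

    gaps = []
    if EXPECTED_ENDPOINT not in spec.get("paths", {}):
        gaps.append(f"Endpoint {EXPECTED_ENDPOINT} is not documented")

    schemas = spec.get("components", {}).get("schemas", {})
    order_schema = schemas.get("ShopOrder", {})
    properties = order_schema.get("properties", {})
    required = set(order_schema.get("required", []))

    for field, expected_type in EXPECTED_FIELDS.items():
        if field not in properties:
            gaps.append(f"Field {field} missing from the schema")
        elif properties[field].get("type") != expected_type:
            gaps.append(f"Field {field} has an unexpected type")
        if field not in required:
            gaps.append(f"Nullability of {field} is not stated (not listed as required)")
    return gaps

if __name__ == "__main__":
    for gap in validate_draft_contract("shop_order_v1_draft.json"):
        print("Contract gap:", gap)
```

If the script reports gaps, the draft goes back to the Domain Team for revision rather than being patched over a phone call.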

The Four Pillars of an IFS Cloud Sharing Agreement

A Sharing Agreement is not a vague email promising to "send the spreadsheet." Within the context of IFS Cloud and modern Data Mesh architectures, it is a precise technical and legal construct. To be considered "Confirmed," an agreement must fully address four non-negotiable pillars.

Pillar 1: Schema

The agreement must rigidly define the data structure. In the IFS Cloud world, this typically relates to the definition of the Entity or the Projection being exposed. (A minimal validation sketch follows the list below.)

  • Field Definitions: It is not enough to say "Order Amount." The agreement must specify: Is it a Float or Decimal? How many decimal places? If it is a Date, is it ISO 8601 format (YYYY-MM-DD) or a Unix Timestamp?
  • Nullability Contracts: This is the most common cause of integration failure. The agreement must explicitly list which fields are Mandatory (Guaranteed Not Null) and which are Optional. Consumers (like AI models) often crash on unexpected nulls.
  • Enumerations: IFS makes heavy use of "Client" vs "DB" values (e.g., 'Planned' vs '10'). The agreement must confirm which value is exposed. Best practice dictates exposing the readable Client value or providing a lookup map.
  • Versioning Strategy: The agreement must state the versioning policy. "This product is exposed via /v1/ShopOrder. Breaking changes will force a move to /v2/." This protects consumers from the "Evergreen" updates of IFS Cloud.
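Below is the validation sketch referenced above: a hedged illustration, in Python, of how the schema pillar could be expressed as executable checks. The field names, the two-decimal rule, and the status enumeration are assumptions for illustration; the authoritative definition remains the Projection's OpenAPI specification.

```python
from datetime import date
from decimal import Decimal

# Illustrative field rules -- the real contract lives in the published OpenAPI spec.
STATUS_CLIENT_VALUES = {"Planned", "Released", "Closed"}   # readable Client values, not DB codes

def validate_record(record: dict) -> list[str]:
    """Check one consumer-facing record against the schema pillar of the agreement."""
    errors = []

    # Nullability contract: OrderNo is guaranteed Not Null; FinishDate is optional.
    if not record.get("OrderNo"):
        errors.append("OrderNo is mandatory but was null or empty")

    # Field definition: OrderAmount is a Decimal with at most 2 decimal places.
    amount = record.get("OrderAmount")
    if amount is not None and Decimal(str(amount)).as_tuple().exponent < -2:
        errors.append("OrderAmount has more than 2 decimal places")

    # Date format: ISO 8601 (YYYY-MM-DD), not a Unix timestamp.
    finish = record.get("FinishDate")
    if finish is not None:
        try:
            date.fromisoformat(finish)
        except (TypeError, ValueError):
            errors.append("FinishDate is not an ISO 8601 date")

    # Enumerations: the readable Client value is expected.
    if record.get("Status") not in STATUS_CLIENT_VALUES:
        errors.append("Status is not one of the agreed Client values")

    return errors
```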

Pillar 2: Service Level Objectives (SLOs)

Data has a temporal dimension. Schema defines what data is; SLOs define when and how it is delivered. A structurally perfect dataset is useless if it arrives 4 hours too late for the morning shipping meeting.

Freshness (Latency)

The agreement must specify the maximum age of the data: "Data in this API reflects transactions up to 5 minutes ago," or "This is a nightly snapshot, refreshed at 02:00 UTC."

Availability (Uptime)

What is the guaranteed uptime? 99.9%? Does the API go down during the IFS Cloud maintenance window? The consumer needs to know to build retry logic.

Retention Policy

How far back does the data go? IFS Cloud operational tables might hold 10 years, but a high-performance API might only serve the "Active" rolling 24 months. This must be codified.
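As a hedged illustration of how a consumer can enforce the freshness SLO rather than just read about it, here is a small Python sketch. The 24-hour limit and the timestamp value are assumptions taken from the "nightly snapshot" example above.

```python
from datetime import datetime, timedelta, timezone

# Agreed SLO (illustrative): nightly snapshot refreshed at 02:00 UTC, so maximum age is 24 hours.
MAX_AGE = timedelta(hours=24)

def is_fresh(last_refreshed_utc: datetime) -> bool:
    """Consumer-side guard: only use the data product if it honours the freshness SLO."""
    return datetime.now(timezone.utc) - last_refreshed_utc <= MAX_AGE

# The morning shipping report refuses stale data instead of silently presenting it.
snapshot_time = datetime(2025, 1, 15, 2, 0, tzinfo=timezone.utc)   # illustrative value
if not is_fresh(snapshot_time):
    print("Shipping report blocked: inventory snapshot violates the freshness SLO")
```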

Pillar 3: Semantics

Structure is useless without meaning. The "Semantic Gap" is where business value is lost. The Sharing Agreement must resolve ambiguity using the Business Glossary established in Phase 1.

  • Calculation Logic: If the data product exposes `NetMargin`, how is that calculated? Does it include overhead allocations? Does it account for rebates? The formula must be referenced.
  • State Definitions: What does a status of `Released` actually mean in the Shop Floor Workbench compared to the Planning module?
  • Master Data References: The agreement must confirm that fields like `SiteID` or `CustomerID` reference the corporate standard MDM list, ensuring joinability with other domains.

Pillar 4: Security and Access

The agreement must define who can access the product and how that access is controlled via IFS Cloud's security model.

  • Compliance & PII: If the data contains Personally Identifiable Information (HR data, Customer Contacts), the agreement must state how it is protected. "Employee names are masked for consumers with the `ANALYST_BASIC` role."
  • Permission Sets: The agreement should specify the IFS Permission Set required to consume the API (e.g., `DATAMESH_FINANCE_READ`).
  • Usage Constraints: To protect the operational performance of the ERP, the agreement may impose rate limits. "Consumers are limited to 1000 API calls per hour." (A consumer-side retry sketch follows this list.)
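The retry sketch referenced above, in Python with the requests library. The rate limit, back-off values, and endpoint are illustrative; the point is that the consumer honours the usage constraints and the maintenance window instead of hammering the ERP.

```python
import time
import requests

def call_data_product(url: str, token: str, max_attempts: int = 5) -> requests.Response:
    """Call a data product endpoint, backing off on rate limits (429) and outages (5xx)."""
    delay = 2.0
    for attempt in range(max_attempts):
        response = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=30)
        if response.status_code == 429 or response.status_code >= 500:
            # Honour the agreement: e.g. 1000 calls/hour, plus the IFS Cloud maintenance window.
            time.sleep(delay)
            delay *= 2          # exponential backoff
            continue
        response.raise_for_status()
        return response
    raise RuntimeError(f"Gave up after {max_attempts} attempts: {url}")
```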

Technical Implementation: Codifying the Contract in IFS Cloud

Confirming a sharing agreement is not just a paperwork exercise. In the Prototype Phase, we must implement the agreement technically within the IFS Cloud architecture. We move away from direct SQL access (which is insecure and bypasses business logic) and utilize the native capabilities of the platform to enforce the contract.

Projections & API Explorer

In IFS Cloud, the primary mechanism for a Data Contract is the Projection. The Projection exposes entities via OData/REST APIs.

Implementation: The Domain Owner uses the IFS API Explorer to generate the OpenAPI Specification (OAS) JSON file. This file is the technical contract. It defines every endpoint, data type, and required parameter. The Consumer "signs" the agreement by successfully authenticating (via OAuth2) and parsing this OAS file to build their client.
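A hedged sketch of how a consumer "signs" the agreement in practice: obtain a token via the OAuth2 client-credentials flow, then download the OAS document. The token URL and projection path below are placeholders for your tenant's actual endpoints, not verified IFS Cloud URLs.

```python
import requests

# Placeholders -- substitute your tenant's identity provider realm and projection document URL.
TOKEN_URL = "https://<tenant>/auth/realms/<realm>/protocol/openid-connect/token"
OAS_URL = "https://<tenant>/<path-to-projection>/ShopOrderService/openapi.json"

def fetch_contract(client_id: str, client_secret: str) -> dict:
    """Authenticate with OAuth2 client credentials and download the OpenAPI contract."""
    token_resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
        },
        timeout=30,
    )
    token_resp.raise_for_status()
    access_token = token_resp.json()["access_token"]

    oas_resp = requests.get(
        OAS_URL,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    oas_resp.raise_for_status()
    return oas_resp.json()   # the technical contract: endpoints, data types, required parameters
```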

Data Migration Manager (DMM)

The IFS Data Migration Manager (DMM) is not just for legacy migration; it is a potent validation engine for the Data Mesh.

Implementation: Before data is "Certified" for sharing, it can pass through DMM validation rules. The Sharing Agreement might specify: "ProjectID must exist in the Project Module." DMM enforces this integrity check. If the data fails, it is flagged as "Non-Conforming," protecting the consumer from bad data.
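The DMM rule itself is configured inside IFS Cloud; as a language-neutral illustration of the same referential-integrity check, here is a short Python sketch. Column names and the flag text are assumptions.

```python
def flag_non_conforming(rows: list[dict], valid_project_ids: set[str]) -> list[dict]:
    """Apply the agreed rule 'ProjectID must exist in the Project Module' to a batch of rows."""
    non_conforming = []
    for row in rows:
        if row.get("ProjectID") not in valid_project_ids:
            row["data_quality_flag"] = "NON_CONFORMING: unknown ProjectID"
            non_conforming.append(row)
    return non_conforming

# Example: one bad row is held back instead of reaching the consumer.
batch = [{"OrderNo": "SO-1001", "ProjectID": "P-77"}, {"OrderNo": "SO-1002", "ProjectID": "P-10"}]
print(flag_non_conforming(batch, valid_project_ids={"P-10", "P-20"}))
```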

Information Sources

For internal consumers (e.g., users viewing Lobbies or Business Reporter), the Data Product is often an Information Source (IS).

Implementation: The agreement focuses on Performance and Access. "This Lobby Element will load within 2 seconds." Confirming the agreement involves load-testing the underlying IS or Quick Information Source (QIS) to ensure that complex joins do not degrade system performance for other users.

The Negotiation Process: Breaking Silos

Confirming an agreement is a human process as much as a technical one. It involves negotiation between the Domain Owner (Producer), who understands the data's generation and limitations, and the Consumer, who understands the business requirement. In many organizations, these two groups rarely speak the same language. The Prototype Phase forces this dialogue to happen.

The Role of the Governance Committee:
Occasionally, negotiations stall. The Consumer demands 100% real-time data, but the Producer knows this will crash the production server. This is where the Data Governance Committee (established in Phase 0) steps in. They act as the arbitrator, balancing the business value of the request against the technical cost and risk, ultimately ruling on the final terms of the Sharing Agreement.

 

Common Friction Points & Resolutions

  • Data Freshness
    The Producer's Stance: "Real-time extraction hurts my transactional performance. I can only provide a nightly dump."
    The Resolution (The Agreement): The agreement specifies Near-Real-Time via IFS Connect / Event streams for critical operational data, and batch processing for historical analysis.
  • Data Quality
    The Producer's Stance: "I can't guarantee no nulls in the `Description` field because users leave it blank."
    The Resolution (The Agreement): The agreement mandates a Transformation Rule: The Producer will replace NULL with "N/A" before publication, so consumer scripts don't break.
  • History
    The Producer's Stance: "I only keep the current active year in the main transaction table."
    The Resolution (The Agreement): The agreement defines a Data Lake storage tier (e.g., Azure Data Lake) where the Domain exports history for the Consumer's long-term trend analysis.

Lifecycle Management: When the Agreement Changes

A Sharing Agreement is not a static artifact; it is a living document. IFS Cloud is an "Evergreen" platform, receiving functional updates twice a year. Business processes change. New regulations (like ESG reporting) emerge.

Therefore, the "Confirmation" process must include a Change Management Protocol.

Deprecation Policy

What happens when a data product is retired? The agreement must specify a "Deprecation Notice Period" (e.g., 6 months). The Producer cannot simply turn off the API; they must notify all registered Consumers and provide a migration path to the new version.

Breaking Changes

If the Producer renames a column or changes a data type, this is a "Breaking Change." The agreement dictates that this triggers a major version increment (e.g., from v1 to v2). The v1 endpoint must remain active and supported for a defined period to allow Consumers to refactor their code.
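A hedged sketch of how a breaking change can be detected mechanically by comparing two versions of the contract's schema properties. The field names are illustrative; any finding in this check would force the move from /v1/ to /v2/ described above.

```python
def breaking_changes(old_props: dict, new_props: dict) -> list[str]:
    """Compare two OpenAPI 'properties' blocks; any finding forces a major version bump."""
    findings = []
    for field, old_def in old_props.items():
        if field not in new_props:
            findings.append(f"Field removed or renamed: {field}")
        elif new_props[field].get("type") != old_def.get("type"):
            findings.append(
                f"Type changed for {field}: {old_def.get('type')} -> {new_props[field].get('type')}"
            )
    return findings

# Renaming a column surfaces as a removal, i.e. a breaking change under the agreement.
v1 = {"OrderNo": {"type": "string"}, "OrderAmount": {"type": "number"}}
v2 = {"OrderNo": {"type": "string"}, "Amount": {"type": "number"}}
print(breaking_changes(v1, v2))   # ['Field removed or renamed: OrderAmount']
```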

From Prototype to Production

Once the Schema is validated, the SLOs are tested via the Mock Consumer, and the Security is audited by the CISO, the Sharing Agreement is formally "Confirmed."

What does this mean operationally? It means the Data Product is added to the Enterprise Data Catalog. It moves from a "Lab" status to a "Production" status. The Domain Team is now accountable for supporting it. If the API goes down at 2 AM, the Domain Team (or their designated support arm) is alerted, not central IT. Confirming the agreement in Phase 2 creates the template for the entire organization. It establishes the "Trust Architecture" required to scale from a single pilot to a comprehensive enterprise Data Mesh.

Frequently Asked Questions

What happens when the Producer needs to change the structure of an existing Data Product?

This constitutes a "Breaking Change." The Sharing Agreement dictates the strict protocol for this scenario. Typically, the Domain Owner is required to maintain the old version of the API (v1) while simultaneously publishing the new structure as (v2). They must provide a formal "Deprecation Notice" to all registered Consumers (usually 3-6 months) to allow them sufficient time to update their integrations. The Data Mesh governance framework prevents the Owner from simply overwriting v1 and breaking downstream consumers.

Do we need to buy a dedicated Data Catalog or Data Contract tool for the Prototype Phase?

While specialized "Data Catalog" or "Data Contract" software platforms exist (such as Collibra, Alation, or Atlan), they are not strictly necessary for the Prototype Phase. Simple, accessible tools often work best initially. A version-controlled repository (like Git) containing the OpenAPI specifications (YAML/JSON) and a Markdown document describing the SLAs is sufficient. The critical factors are version control, discoverability, and accessibility, rather than purchasing expensive new software immediately.

How do the twice-yearly IFS Cloud release updates affect existing Sharing Agreements?

IFS Cloud releases functional updates twice a year (e.g., 25R1, 25R2). These updates can occasionally modify the underlying Core Projections or database views. The Sharing Agreement places the burden of stability on the Domain Owner. They must perform regression testing on their Data Products against the Release Candidates. They must ensure that the "Public" interface defined in the agreement remains stable for consumers, even if they have to adjust the internal mapping or logic to accommodate the changes in the IFS platform.

Who needs to sign off on a Sharing Agreement?

At a minimum, the agreement requires the sign-off of the Domain Owner (Producer) and the Lead Consumer. However, for critical enterprise data sets (such as Master Data, Financials, or HR), the Data Governance Lead and the Security Architect should also act as signatories. This ensures that the agreement complies with enterprise-wide standards for security, naming conventions, and regulatory compliance (GDPR/SOX).

Can a domain skip the Sharing Agreement when it only consumes its own data?

Technically yes, but the primary value of the Data Mesh architecture comes from inter-domain sharing. Internal data usage (e.g., a Manufacturing report used by a Manufacturing planner) usually does not require the formal rigidity of a Sharing Agreement because the producer and consumer are often on the same team or report to the same manager. These agreements are specifically designed to bridge the boundaries between domains, where communication gaps and misaligned priorities usually cause integration failures.

Why does the Sharing Agreement mandate metadata?

Metadata is the "label on the can." It makes the data product discoverable. The Sharing Agreement should mandate specific metadata tags (e.g., Domain Name, Data Classification, Refresh Rate, Owner Contact Info). This allows the Data Product to be indexed by the Enterprise Data Catalog, allowing other users in the organization to find the data they need without sending emails to IT to ask "where is the sales data?"

What is Data Mesh? How to Implement Data Mesh: Step-by-Step

Introduction

Data is everywhere in modern organizations. Companies collect information from customers, sales, operations, and more. But as data grows, it becomes harder to manage and use. Traditional data systems often rely on one big, central team to handle everything. This can lead to slowdowns, confusion, and missed opportunities.

Data Mesh is a novel approach to addressing these challenges. Instead of putting all the responsibility on a single team, Data Mesh treats data as a product. It empowers different business teams to own, share, and maintain their own data. These teams collaborate, adhering to shared guidelines, to ensure that data is reliable, trustworthy, and readily accessible. This approach helps organizations move faster, make better decisions, and get more value from their data.

Why Does It Matter?

Data Mesh matters because it helps organizations:

  • Reduce bottlenecks in data delivery: When only one team manages all the data, requests pile up and everyone waits. Data Mesh lets teams work in parallel, so data moves faster to where it’s needed.
  • Achieve higher data quality and trust: Teams that know the data best are responsible for it. This means fewer mistakes and more reliable information.
  • Align data with business value: Data is managed by the people who use it every day. This ensures that data supports real business needs and goals.
  • Build a scalable and agile data ecosystem: As the company grows, Data Mesh makes it easier to add new data sources and teams without slowing down.

How to Implement Data Mesh: Step-by-Step

Implementing Data Mesh is a journey. Here’s a simple, step-by-step guide to get started:


1️⃣ Define Vision and Align Strategy

  • Assess your current state — pain points, tech debt, silos:
    Start by looking at how your data is managed today. Where are the slowdowns? Are there old systems or data silos that make things harder?
  • Align with business objectives and outcomes:
    Make sure your data goals match your company’s big-picture plans. Data Mesh should help the business, not just IT.
  • Secure strong executive sponsorship and funding:
    Get leaders on board. Their support and resources are key for success.

2️⃣ Identify Data Domains

  • Break down your enterprise into business-aligned domains (e.g., Sales, Finance, Ops):
    Divide your company into logical areas, each with its own data needs.
  • Assign clear ownership and accountability to each domain:
    Make sure every domain has a team responsible for its data.
  • Focus on high-impact domains first for a phased rollout:
    Start where you’ll see the biggest benefits, then expand.

3️⃣ Form Cross-functional Data Product Teams

  • Include data engineers, analysts, product owners, and business SMEs:
    Build teams with a mix of skills—technical and business.
  • Empower teams with full lifecycle responsibility for their data:
    Teams should own their data from creation to sharing and maintenance.
  • Promote a mindset of ownership, not just custodianship:
    Teams should treat data as a valuable product, not just something to store.

4️⃣ Define and Deliver Data Products

  • Each product must have clear SLAs, metadata, lineage, and APIs:
    Set clear rules for how data is delivered, described, and accessed.
  • Prioritize discoverability and reusability:
    Make it easy for others to find and use your data products.
  • Establish feedback loops between producers and consumers:
    Listen to users and improve data products based on their needs.

5️⃣ Build a Self-Service Data Platform

  • Provide tooling for data ingestion, transformation, governance:
    Give teams the tools to bring in, clean, and manage data themselves.
  • Enable CI/CD pipelines, data observability, quality checks:
    Automate testing and monitoring to keep data reliable.
  • Focus on developer experience and autonomy:
    Make the platform easy to use, so teams can work independently.

6️⃣ Apply Federated Computational Governance

  • Set global policies: privacy, compliance, security:
    Create company-wide rules to keep data safe and legal.
  • Define who governs what at central and domain levels:
    Decide which rules are managed by central teams and which by domains.
  • Ensure automation over manual enforcement:
    Use automated tools to check and enforce rules, reducing human error.

7️⃣ Enable Data Discoverability

  • Deploy a searchable data catalog (e.g., Alation, Collibra, Amundsen):
    Make it easy for everyone to find data products.
  • Auto-register products, metadata, and ownership:
    Keep the catalog up to date automatically.
  • Make it easy to find, understand, and trust data:
    Good catalogs help users know what data is available and how to use it.

8️⃣ Promote Cultural Shift & Training

  • Upskill product owners and domain teams on product thinking:
    Teach teams how to manage data as a product.
  • Foster a culture of sharing, curiosity, and accountability:
    Encourage teams to share data and learn from each other.
  • Celebrate early adopters and internal case studies:
    Highlight successes to inspire others and build momentum.

Conclusion

Data Mesh is changing the way organizations manage and use data. By moving away from a single, central data team and empowering business domains, companies can deliver data faster, improve quality, and better support business goals. Each step—from defining your vision to building a self-service platform and applying federated governance—helps create a data ecosystem that is scalable, agile, and aligned with real business needs.

When teams own their data and work together, everyone benefits. Data becomes easier to find, trust, and use. The company can respond faster to new opportunities and challenges. By following these steps, you can build a Data Mesh that unlocks the full value of your data and supports your organization’s success now and in the future.


Real-World Example:
Companies like Saxo Bank, Gilead, and PayPal have adopted Data Mesh to break down data silos, improve data quality, and speed up data delivery. These organizations have seen better collaboration, faster insights, and more business value from their data.


This overview is designed to help you understand Data Mesh and start your journey toward a more effective, scalable, and business-aligned data ecosystem.

Promote Cultural Shift & Training: Building Skills and Mindsets for Data Mesh

Introduction

Data Mesh is a way for organizations to manage data by giving different business teams the power to own and manage their own data. This helps make data more useful, trusted, and available across the company. But technology alone is not enough. For Data Mesh to succeed, companies must also change how people think about data and how they work together. This is why promoting a cultural shift and training is a key step. When teams learn new skills and adopt new mindsets, they can unlock the full value of Data Mesh.

What Does Cultural Shift & Training Mean?

A cultural shift means changing the way people think and act at work. In Data Mesh, this means moving away from old habits where only a few experts handled data. Now, everyone in the business can play a part. Training helps people learn the skills they need for this new way of working.

For example, product owners and domain teams need to learn "product thinking." This means treating data like a product that serves customers—making sure it is high quality, easy to use, and always improving. Teams also need to be curious, willing to share what they know, and ready to take responsibility for their data. When people see data as a shared asset, they work together better and make smarter decisions.

Key Activities and Best Practices

  • Upskill product owners and domain teams on product thinking:
    Offer training sessions and workshops that teach teams how to treat data as a product. Show them how to listen to users, improve data quality, and deliver value. Use real-life examples to make lessons clear.

  • Foster a culture of sharing, curiosity, and accountability:
    Encourage teams to ask questions, share what they learn, and help each other. Make it safe to try new things and learn from mistakes. Set clear expectations for who owns what data and how it should be managed.

  • Celebrate early adopters and internal case studies:
    Highlight teams that try new ways of working and succeed. Share their stories in meetings or newsletters. This inspires others to join in and shows that change is possible.

  • Design and deliver effective training programs:
    Use a mix of online courses, hands-on workshops, and peer learning. Make training practical and relevant to each team’s daily work. Offer ongoing support so people can keep learning.

  • Encourage new behaviors and continuous learning:
    Reward teams that share knowledge or help others. Create spaces for people to ask questions and share tips. Remind everyone that learning is a journey, not a one-time event.

Challenges and Solutions

  • Resistance to change:
    Some people may be comfortable with the old way of working and worry about learning new skills.
    Solution: Show them the benefits of Data Mesh, offer support, and celebrate small wins to build confidence.

  • Lack of engagement:
    Teams may be too busy or not see the value in training.
    Solution: Make training short, practical, and linked to real business problems. Involve leaders to show that learning is important.

  • Unclear roles and responsibilities:
    People may not know what is expected of them in the new system.
    Solution: Clearly define roles, provide job aids, and check in regularly to answer questions.

Data Governance Considerations

Training and culture are key parts of good data governance. When people know their roles and understand the rules, data is managed better. Ongoing learning helps teams keep up with new policies and tools. Shared responsibility means everyone helps keep data safe, high quality, and useful.

Business and Cultural Impact

When teams learn new skills and adopt a positive culture, they work better together. They can solve problems faster, share ideas, and support business goals. Upskilling helps teams innovate and find new ways to use data. Celebrating success builds momentum and encourages others to join in. Over time, this creates a workplace where people are proud of their data and eager to help each other succeed.

Practical Tips and Checklist

Tips:

  • Start with small, focused training sessions.
  • Use real examples from your company.
  • Encourage leaders to join and support training.
  • Share success stories to inspire others.
  • Make learning ongoing, not just a one-time event.

Checklist:

  • Training programs are in place for product owners and domain teams
  • Teams are encouraged to share knowledge and ask questions
  • Early adopters and success stories are celebrated
  • Roles and responsibilities are clearly defined
  • Ongoing support and learning opportunities are available

Conclusion

Promoting a cultural shift and training is essential for Data Mesh success. It helps teams build the skills and mindsets they need to own and manage data. By upskilling teams, fostering a culture of sharing and accountability, and celebrating early wins, organizations can unlock the full value of their data. This step connects all the others in the Data Mesh journey and sets the stage for lasting change.

Enable Data Discoverability: Making Data Easy to Find and Trust

Introduction

Data Mesh is a way for organizations to manage data by giving different business teams the power to own and manage their own data. This helps make data more useful, trusted, and available across the company. One of the most important steps in Data Mesh is enabling data discoverability. When people can easily find, understand, and trust data, they can make better decisions and work more efficiently. Data discoverability is key to making Data Mesh a success.

What is Data Discoverability?

Data discoverability means making it easy for anyone in the company to find the data they need, understand what it means, and trust that it is accurate. Imagine a library where every book is well-organized, has a clear description, and you know who wrote it. In the same way, data discoverability helps people quickly find the right data, learn about its purpose, and see who is responsible for it.

For example, if a marketing team wants to analyze customer behavior, they should be able to search for “customer data” in a data catalog, see what data is available, read a simple description, and know who to contact if they have questions. This saves time and avoids confusion.

Key Activities and Best Practices

  • Deploy a searchable data catalog (e.g., Alation, Collibra, Amundsen):
    A data catalog is like a digital library for all your data. Tools like Alation, Collibra, or Amundsen help teams search for data products, see descriptions, and understand how to use them. A good catalog makes it easy to browse and search for data across the company.

  • Auto-register products, metadata, and ownership:
    Automation is important. When new data products are created, they should be automatically added to the catalog. This includes details like what the data is about (metadata), who owns it, and when it was last updated. This keeps the catalog up to date without extra manual work. (A minimal registration sketch follows this list.)

  • Make it easy to find, understand, and trust data:
    Every data product should have a clear name, a simple description, and information about its quality. Good metadata helps users know if the data is right for their needs. Trust grows when people can see where the data comes from and how it has been used before.

  • Choose and set up the right data catalog tool:
    Pick a catalog that fits your company’s needs and is easy for everyone to use. Make sure it connects to all your data sources and supports automation.

  • Automate metadata collection and registration:
    Use tools that automatically gather and update metadata. This reduces errors and saves time.

  • Keep the catalog up to date and user-friendly:
    Regularly review the catalog to remove outdated data and improve descriptions. Ask users for feedback to make the catalog better.
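The auto-registration sketch referenced in the list above. This is a hedged, tool-agnostic illustration: real catalogs such as Amundsen or Collibra have their own APIs, so here the "catalog" is just a local JSON file and all names are placeholders.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

CATALOG_FILE = Path("data_catalog.json")   # stand-in for a real catalog API

def register_data_product(name: str, owner: str, description: str, refresh_rate: str) -> None:
    """Auto-register a data product so it is discoverable without manual catalog edits."""
    catalog = json.loads(CATALOG_FILE.read_text()) if CATALOG_FILE.exists() else []
    catalog.append({
        "name": name,
        "owner": owner,
        "description": description,
        "refresh_rate": refresh_rate,
        "registered_at": datetime.now(timezone.utc).isoformat(),
    })
    CATALOG_FILE.write_text(json.dumps(catalog, indent=2))

register_data_product(
    name="customer_locations_v1",
    owner="sales-domain@example.com",
    description="Geocoded customer sites, refreshed nightly",
    refresh_rate="nightly",
)
```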

Challenges and Solutions

  • Missing metadata:
    Sometimes, data products are added without enough information.
    Solution: Use automation to collect metadata and require owners to fill in key details before publishing.

  • Hard-to-use catalogs:
    If the catalog is confusing, people won’t use it.
    Solution: Choose a simple, intuitive tool and provide training for all users.

  • Outdated or duplicate data:
    Old or repeated data can clutter the catalog.
    Solution: Set up regular reviews to clean up the catalog and remove unused data products.

  • Lack of trust in data:
    If users don’t know where data comes from, they may not trust it.
    Solution: Always show data lineage (where the data comes from) and ownership information in the catalog.

Data Governance Considerations

Data governance is about setting rules for how data is managed, shared, and protected. In this step, governance means making sure every data product in the catalog has clear metadata, an owner, and access controls. Automation helps enforce these rules, so nothing is missed. Catalog tools support governance by making it easy to track who owns each data product, who can access it, and how it should be used.

Setting standards for metadata and ownership helps everyone follow the same rules. This makes data more reliable and easier to use across the company.

Business and Cultural Impact

When data is easy to find and trust, teams can work faster and avoid doing the same work twice. This saves time and money. Discoverability also helps teams make better decisions, because they have the right information at their fingertips. Over time, this builds a culture of sharing and trust. People are more willing to share their data when they know it will be easy to find and used responsibly.

A strong data catalog also helps new employees get up to speed quickly, since they can easily explore and understand the company’s data landscape.

Practical Tips and Checklist

Tips:

  • Start with a simple catalog and add features as you grow.
  • Involve users in choosing and testing the catalog tool.
  • Automate as much as possible to keep the catalog current.
  • Encourage data owners to keep their products up to date.
  • Provide training and support for all users.

Checklist:

  • A searchable data catalog is in place (e.g., Alation, Collibra, Amundsen)
  • Data products, metadata, and ownership are auto-registered
  • Data is easy to find, understand, and trust
  • Catalog is regularly reviewed and updated
  • Users are trained and supported

Conclusion

Enabling data discoverability is a key step in the Data Mesh journey. It makes data easy to find, understand, and trust, helping teams work smarter and faster. By deploying a searchable data catalog, automating metadata collection, and setting clear standards, organizations can unlock the full value of their data. This step connects all the others in Data Mesh, building a strong foundation for a data-driven culture.


Apply Federated Computational Governance: Balancing Autonomy and Compliance

Moving beyond the «Centralized Bottleneck»: How to embed compliance, security, and quality standards directly into your IFS Cloud architecture while empowering domain teams to move at the speed of business.


TL;DR: Executive Summary

Federated Computational Governance is the operating model for the modern enterprise, specifically those adopting Data Mesh principles within IFS Cloud. It replaces the slow, manual «gatekeeper» model of traditional data governance with an automated, distributed approach.

  • Federated: Governance standards (what is «good» data) are defined centrally by a Center of Excellence (CoE) but executed locally by domain experts (Finance, Supply Chain).
  • Computational: Policies are enforced by code and automation (IFS BPA, Custom Events, Validation Rules) rather than manual PDF policy documents.
  • The Goal: To allow teams to innovate safely. The «platform» (IFS Cloud) automatically blocks non-compliant actions, removing the need for bureaucracy.
  • Key Tools: IFS Business Process Automation (BPA), Data Migration Manager (DMM) for quality gates, and Custom Events for real-time validation.

What Problem Does This Solve?

In traditional ERP implementations, organizations face a «Governance Paradox.»

On one hand, you need strict control over master data (Customers, Parts, Suppliers) to ensure financial reporting is accurate and regulations (GDPR, SOX) are met. On the other hand, business units need agility. If the Supply Chain team wants to onboard a new supplier to fix a shortage, they cannot wait 3 days for a Central Data Team to approve the record.

The symptoms of the old model:

  • Bottlenecks: The Central IT/Data team becomes a queue where requests go to die.
  • Shadow IT: Frustrated business users start managing critical data in Excel to bypass the bureaucracy.
  • Erosion of Quality: When the «gate» is finally opened to speed things up, manual checks are skipped, and bad data enters the ERP.

The Cost of Inaction

Without Federated Computational Governance, your IFS Cloud upgrade becomes a «Lift and Shift» of legacy chaos, and your data remains a liability rather than an asset.


Risk: High

This article solves this by providing a blueprint for automating the rules (Computational) and distributing the responsibility (Federated), ensuring that «doing the right thing» is the easiest path for your users.

1. The Philosophy: Why «Federated» and Why «Computational»?

To understand how to implement this in IFS Cloud, we must first dismantle the terminology. This concept is a core pillar of Data Mesh, a paradigm shift introduced by Zhamak Dehghani, but its application in the monolithic world of ERP requires translation.

The Failure of Centralized Governance

Historically, governance was a «Control Tower.» A group of data stewards sat in a room, wrote 100-page policy documents, and manually approved changes. In the modern digital enterprise, data moves too fast for this. By the time the stewards approve a new product hierarchy, the market has shifted. Centralization scales linearly (you need more stewards for more data), but data grows exponentially.

Enter Federation

Federation borrows from political science. Think of the United States or the European Union. There is a «Federal» level that sets non-negotiable standards (Currency, Defense, Human Rights), and there are «State» levels that manage local nuances (Education, Zoning, Traffic Laws).

In IFS Cloud:

  • The Center (CoE): Defines global standards. «All entities must have a Global ID.» «All PII data must be masked in test environments.»
  • The Domain (e.g., Finance): Defines local utility. «A Customer must have a VAT code valid for the shipping country.» «Payment terms cannot exceed 60 days.»

This allows the Finance team to change their rules without asking IT, provided they don’t violate the global standards.

Enter Computational Governance

Computational means «Policy as Code.» If a rule is written in a PDF, it is a suggestion. If a rule is written in code, it is law. In IFS Cloud, we stop writing documents telling users not to leave fields blank. Instead, we implement a BPA Workflow that makes the «Save» button physically impossible to click until the data is valid. We embed the governance into the platform itself.


2. The Technical Toolset in IFS Cloud

How do we actually build this? IFS Cloud provides a rich ecosystem of low-code and no-code tools that serve as the engine for computational governance.

Business Process Automation (BPA)

The successor to Custom Events, BPA allows you to model workflows visually (BPMN). You can inject governance «decisions» into standard processes.

Example: When a user tries to change a Supplier’s bank account, BPA triggers a validation workflow that checks the IBAN format and requires a secondary approval (4‑eyes principle) before committing the transaction.
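In IFS Cloud the check below would be modelled as a BPA (BPMN) workflow; as a hedged, plain-Python illustration of the same computational rule, here is the IBAN checksum plus a four-eyes gate. The approval rule is an assumed policy; the checksum is the standard ISO 13616 mod-97 test.

```python
def iban_is_valid(iban: str) -> bool:
    """Standard IBAN mod-97 checksum (ISO 13616)."""
    iban = iban.replace(" ", "").upper()
    if len(iban) < 15 or not iban[:2].isalpha():
        return False
    rearranged = iban[4:] + iban[:4]                          # move country code and check digits to the end
    digits = "".join(str(int(ch, 36)) for ch in rearranged)   # A=10 ... Z=35
    return int(digits) % 97 == 1

def can_commit_bank_change(iban: str, changed_by: str, approved_by: str | None) -> bool:
    """Governance gate: a valid IBAN plus a second, different approver (4-eyes) before saving."""
    return iban_is_valid(iban) and approved_by is not None and approved_by != changed_by
```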

Custom Events & Actions

For hard «Guardrails,» PL/SQL events are unmatched. They sit at the database level, ensuring that no matter how the data enters (UI, API, Migration), the rule is enforced.

Example: A trigger that prevents the status of a Customer Order from moving to «Released» if the Customer’s credit limit is expired.

Data Migration Manager (DMM)

Often mistaken as a one-time tool, DMM is a governance powerhouse. Use its «Validation Rules» and «Legacy Data» containers to continuously audit production data.

Example: Run a nightly DMM job that scans the Part Master for incomplete descriptions or missing weights and flags them for the Engineering Domain.
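A hedged sketch of the audit logic such a nightly job would apply, written as plain Python rather than a DMM rule; the column names are assumptions.

```python
def nightly_part_audit(parts: list[dict]) -> list[dict]:
    """Flag Part Master rows with incomplete descriptions or missing weights."""
    findings = []
    for part in parts:
        problems = []
        if not (part.get("Description") or "").strip():
            problems.append("missing description")
        if part.get("Weight") in (None, 0):
            problems.append("missing weight")
        if problems:
            findings.append({"PartNo": part.get("PartNo"), "problems": problems})
    return findings   # feed this into the Engineering Domain's Data Quality Lobby
```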

IFS Lobbies & Analytics

Visibility is a form of soft governance. If users know their data errors are displayed on a public dashboard, they self-correct.

Example: A «Data Quality Health» Lobby for the Procurement Manager showing «Suppliers missing Email Addresses» in bright red.


3. Implementing the Federated Model

Building the structure is harder than building the code. You need to organize your teams around Domains. In IFS Cloud, a «Domain» usually maps to a functional module or a business process area.

Step 1: Define the Global Policies (The «Federal» Law)

The Center of Excellence (CoE) defines the non-negotiables. These are usually security, privacy, and interoperability standards.

  • Security: All users must have Role-Based Access Control (RBAC). No direct database access.
  • Privacy: Fields marked as PII (Personally Identifiable Information) must be audited.
  • Interoperability: All master data keys (Customer ID, Part No) must follow the corporate regex pattern (e.g., «C-10001»); a one-line check is sketched below.
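The check referenced above, as a hedged Python sketch; the regular expression is an illustrative corporate pattern, not an IFS default.

```python
import re

CUSTOMER_ID_PATTERN = re.compile(r"^C-\d{5}$")   # illustrative corporate pattern, e.g. C-10001

def conforms_to_global_standard(customer_id: str) -> bool:
    """Global ('federal') rule: every master data key must match the corporate pattern."""
    return CUSTOMER_ID_PATTERN.fullmatch(customer_id) is not None
```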

Step 2: Empower the Domains (The «State» Law)

The Product Domain (Engineering) knows more about spare parts than the IT team ever will. Empower them to write their own rules.

Scenario: The Engineering Domain decides that no spare part can be «Active» unless it has a defined «Commodity Code» and «Weight.»

Action: Instead of asking IT to write a script, the Engineering Data Steward uses the IFS Business Modeler or configures a Custom Field with a mandatory setting. They own the quality of their data product.

Step 3: The «Sidecar» Concept in ERP

In microservices (Kubernetes), a «sidecar» is a process that runs alongside a service to handle logging and security. In IFS Cloud, we simulate this with Projection Configurations.

We wrap standard IFS Projections (APIs) with our governance logic. When a simplified UI (like a mobile scanning app for warehouse workers) calls the «ReceiveShopOrder» projection, our governance layer (BPA) intercepts the call, checks if the worker is certified to handle hazardous materials (if the part is hazardous), and only then allows the transaction to proceed. This is computational governance: the rule is checked at the moment of execution, every single time.
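As a hedged illustration of the interception idea (in IFS Cloud it would be a Projection Configuration or a BPA step, not Python), here is the "sidecar" pattern reduced to a wrapper function; the names and certification code are assumptions.

```python
def receive_shop_order(order_id: str) -> str:
    """Stand-in for the core projection call."""
    return f"Order {order_id} received"

def governed_receive_shop_order(order_id: str, worker: dict, part: dict) -> str:
    """'Sidecar' wrapper: enforce the hazmat certification rule on every call, at execution time."""
    if part.get("hazardous") and "HAZMAT" not in worker.get("certifications", []):
        raise PermissionError("Blocked by governance: worker not certified for hazardous materials")
    return receive_shop_order(order_id)
```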


4. A Practical Implementation Roadmap

You cannot implement this overnight. It requires a phased approach.

  • Identify Domains: Map your IFS modules to business owners. Who owns «Inventory»? Is it Logistics or Finance? (Hint: It’s usually shared, which requires a «Data Contract».)
  • Classify Data: Tag fields. Which Custom Fields are critical? Which are nice-to-have?
  • Define Global Standards: Write the «Constitution» of your data.

  • Platform Setup: Configure IFS DMM for ongoing audits. Set up the BPA environment.
  • Pilot Domain: Pick one domain (e.g., Procurement). Implement 5 critical «Policy as Code» rules (e.g., «No PO without a Contract Reference»).
  • Feedback Loop: Measure if these rules slow down the business or help it.

  • Enable Self-Service: Give Domain Stewards access to create their own Lobbies and basic Validation Rules.
  • Data Contracts: Formalize the handovers. If Manufacturing consumes data from Engineering, create a «Contract» that specifies the quality Manufacturing expects. Use Custom Events to alert when this contract is breached.

5. Challenges and Mitigations

The journey to Federated Computational Governance is fraught with cultural and technical traps.

The «Silo» Risk

Challenge: If you give Domains too much autonomy, they might create data definitions that don’t talk to each other (e.g., Finance uses «Client» and Sales uses «Customer»).

Mitigation: The CoE must enforce a «Polyglot» binding. Use the IFS Master Data Management (MDM) capabilities or a shared glossary to map these terms. The Global ID is the unifying thread.

Performance Overhead

Challenge: Too many synchronous checks (Events, Validations) can slow down the system. If saving a Customer Order triggers 50 complex SQL queries, the user experience suffers.

Mitigation: Use Asynchronous validation where possible. Let the user save the order, but put it in a «blocked» state. Let a background job validate it and release it. This keeps the UI snappy.
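A hedged sketch of the asynchronous pattern: the save stays fast, the expensive checks run in a background pass, and the record is released or flagged afterwards. The status values and the check callable are illustrative.

```python
from queue import Queue
from typing import Callable

validation_queue: Queue = Queue()

def save_order(order: dict) -> dict:
    """Synchronous path stays snappy: persist in a blocked state and defer the heavy checks."""
    order["status"] = "Blocked - Pending Validation"
    validation_queue.put(order)
    return order

def background_validator(run_checks: Callable[[dict], bool]) -> None:
    """Background job: run the expensive governance checks and release or flag each order."""
    while not validation_queue.empty():
        order = validation_queue.get()
        order["status"] = "Released" if run_checks(order) else "Blocked - Governance Breach"
```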

6. The Role of AI in Computational Governance

We cannot ignore the AI capabilities built into IFS Cloud. The future of governance is predictive.

Anomaly Detection: Instead of writing hard rules («Price cannot be > 1000»), use IFS AI capabilities to learn the patterns. If a user enters a price that is 3 standard deviations away from the historical average for that part category, the AI flags it. This is «Soft Computational Governance.» It doesn’t block, but it nudges.
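A hedged sketch of the "3 standard deviations" rule using plain statistics; the threshold and sample values are illustrative, and this is the underlying idea rather than the IFS AI service itself.

```python
from statistics import mean, stdev

def price_is_anomalous(new_price: float, history: list[float], z_limit: float = 3.0) -> bool:
    """Soft governance: flag (do not block) prices far outside the category's historical pattern."""
    if len(history) < 2:
        return False                      # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_price != mu
    return abs(new_price - mu) / sigma > z_limit

# Nudge the user with a warning, never a hard stop.
if price_is_anomalous(4800.0, [110.0, 95.0, 120.0, 105.0, 99.0]):
    print("Warning: entered price deviates more than 3 standard deviations from history")
```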

Auto-Classification: When a new document is uploaded to Document Management, use AI to scan the text. If it contains credit card numbers, automatically tag it as «Confidential» and apply the relevant Access Control Policy.

Conclusion

Federated Computational Governance is not just a buzzword; it is the only way to scale data management in a complex ERP like IFS Cloud. By shifting from manual gatekeeping to automated guardrails, and by moving responsibility from a central bottleneck to the domain experts, you create an organization that is resilient, compliant, and agile.

Your data is no longer a static record in a database; it is a live product, constantly checked, polished, and served by the platform itself.

Frequently Asked Questions

How is Computational Governance different from standard IFS roles and Permission Sets?

Roles and Permissions (FNDUSER) control access (Can I see this screen? Can I edit this field?). Computational Governance controls logic and content (Can I save this specific value given the current context?). For example, a user might have permission to edit Supplier Payment Terms, but Governance rules might prevent them from setting terms to «Immediate» for a new supplier without VP approval.

Do we need an external governance tool on top of IFS Cloud?

Not necessarily. While external Governance tools are powerful, IFS Cloud has enough native capability (BPA, Custom Events, DMM, Lobbies) to handle 90% of governance needs for data residing within the ERP. External tools are best used when you need to govern data flowing between IFS Cloud, Salesforce, and a Data Lake.

Who is responsible for fixing bad data in the Federated model?

The Domain Owner. In the old model, IT fixed data. In the Federated model, if the Sales data is bad, the Sales team fixes it. The platform (IT) provides the tools (Lobbies, error reports) to help them find and fix it efficiently, but the accountability sits with the business function.

Is Computational Governance just another name for validation rules?

Validation rules are a subset of it. Computational Governance is broader — it includes the automation of the lifecycle. It’s not just «Is this field valid?», but «Does this data trigger the correct downstream processes automatically?», «Is the PII masked automatically?», and «Is the audit log generated automatically?». It is holistic policy enforcement via code.

What if a governance rule blocks a legitimate, urgent business transaction?

This is critical. Hard blocks can stop business. The best practice is to build «Override Workflows.» If a user hits a block (e.g., Credit Limit Exceeded), they should be able to click «Request Exception.» This triggers a BPA workflow to a manager. If approved, the system allows the transaction one time while logging the exception for audit. This keeps the business moving while maintaining governance.


Build a Self-Service Data Platform: Empowering Teams with Tools and Autonomy

 

Introduction

Data Mesh is a new way for organizations to manage data. Instead of one central team controlling everything, Data Mesh gives different business teams the power to own and manage their own data. This makes data more useful, trusted, and available across the company. One of the most important steps in Data Mesh is building a self-service data platform. This platform gives teams the tools they need to work with data on their own, without always needing help from IT or data engineers. When teams can help themselves, they move faster and make better decisions.

What is a Self-Service Data Platform?

A self-service data platform is a set of easy-to-use tools and services that let teams find, use, and manage data by themselves. Stop waiting for reports and begging IT for data. A self-service data platform gives your teams direct access to the information they need - when they need it. Marketing wants to know which ads convert? They pull the data themselves. Sales needs customer insights? They build their own dashboards. No technical skills required, no waiting for specialists. Just answers.

Key Activities and Best Practices

To build a great self-service data platform, focus on these key activities:
  • Provide tooling for data ingestion, transformation, and governance:
    Teams need tools to bring in data from different sources (ingestion), clean and organize it (transformation), and make sure it follows company rules (governance). Good tools make these steps easy and repeatable.
  • Enable CI/CD pipelines, data observability, and quality checks:
    CI/CD (Continuous Integration/Continuous Deployment) pipelines help teams make changes to data and code safely and quickly. Data observability tools let teams see how data moves and changes, so they can spot problems early. Quality checks make sure the data is accurate and reliable before anyone uses it.
  • Focus on developer experience and autonomy:
    The platform should be easy to use, even for people who are not expert developers. Clear menus, helpful guides, and simple processes help everyone feel confident. When teams can do more on their own, they don’t have to wait for IT, and they can deliver results faster.
  • Choose and set up the right tools and services:
    Pick tools that fit your company’s needs and are easy to connect with each other. Cloud-based tools are often a good choice because they are flexible and can grow as your company grows.
  • Make the platform easy to use and secure:
    Use single sign-on and clear permissions so people only see the data they should. Offer training and support to help teams get started and solve problems quickly.

Challenges and Solutions

Building a self-service data platform is not always easy. Here are some common challenges and how to solve them:
  • Tool overload:
    Too many tools can confuse people.
    Solution: Choose a small set of tools that work well together and cover most needs.
  • Lack of training:
    Teams may not know how to use the new platform.
    Solution: Offer simple guides, videos, and hands-on training sessions.
  • Security worries:
    People may worry about data leaks or mistakes.
    Solution: Set clear rules for who can access what, and use automation to enforce these rules.
  • Keeping data quality high:
    Bad data can lead to bad decisions.
    Solution: Build in automatic checks and alerts to catch problems early.

Data Governance Considerations

Data governance means setting rules for how data is used, shared, and protected. In a self-service platform, governance is built into the tools. For example, when a team brings in new data, the platform can check if it meets company standards for quality and security. Automation helps enforce these rules, so teams don’t have to remember every detail. This keeps data safe and reliable, even as more people use it.
 

Business and Cultural Impact

A self-service data platform helps teams move faster and reduces bottlenecks. When teams can get the data they need without waiting, they can make decisions quickly and respond to changes in the market. This supports business goals and helps the company stay competitive. Over time, a self-service platform builds a culture of trust and independence. Teams feel empowered to solve their own problems and share their successes with others.

Practical Tips and Checklist

Tips:
  • Start small with a few teams and expand as you learn what works.
  • Pick tools that are easy to use and connect well with your existing systems.
  • Offer regular training and support.
  • Use automation to handle routine checks and enforce rules.
  • Ask teams for feedback and keep improving the platform.
Checklist:
  • Tools for data ingestion, transformation, and governance are in place
  • CI/CD pipelines, data observability, and quality checks are enabled
  • Platform is easy to use and supports developer autonomy
  • Security and access rules are clear and automated
  • Training and support are available for all teams
  • Feedback process is set up to keep improving the platform

Conclusion

Building a self-service data platform is a key step in the Data Mesh journey. It gives teams the tools and freedom they need to work with data on their own. This leads to faster decisions, better results, and a stronger, more agile company. By focusing on the right tools, automation, and support, you can help your teams succeed and advance your Data Mesh strategy.
Define and Deliver Data Products: Turning Data into Value


Checklist: Defining and Delivering Data Products for GEO AI

  • Engage GEO AI stakeholders to identify high-impact data product requirements.
  • Define clear ownership, SLAs, and metadata standards for each data product.
  • Ensure data products are discoverable via a catalog with geospatial tags and usage examples.
  • Implement automated quality checks for spatial accuracy and completeness.
  • Enforce security and privacy controls, especially for sensitive location data.
  • Document data products with GEO AI-specific context, such as coordinate systems and use cases.
  • Iterate based on user feedback and evolving business needs.

Define and Deliver Data Products: Turning Data into Value for GEO AI

Introduction

Data Mesh empowers organizations to manage data by decentralizing ownership to domain teams, making data more accessible, trusted, and actionable. For GEO AI applications, defining and delivering data products is a critical step in unlocking the value of spatial and location-based data. Well-designed data products enable teams to access the right information at the right time, driving innovation across logistics, urban planning, and environmental monitoring. Without robust data products, even the most advanced GEO AI strategies can fail to deliver results.

This guide provides a structured approach to creating data products tailored for GEO AI, ensuring they meet the unique demands of spatial data while adhering to Data Mesh principles.

What Does It Mean to Define and Deliver Data Products for GEO AI?

A data product is a curated dataset or service designed for reuse across an organization. In GEO AI, examples include geocoded APIs, satellite imagery repositories, or real-time traffic flow dashboards. Each data product is owned by a domain team that ensures its quality, documentation, and usability. Treating geospatial data as a product involves understanding user needs, defining clear standards, and making the data easy to discover and integrate into applications.

For instance, a customer location API might combine address data with geocoding services, while a supply chain dashboard could visualize real-time shipment tracking. The key is to package data in a way that aligns with how GEO AI teams consume it.

Key Activities and Best Practices

1. Identify High-Impact Data Products

Start by engaging with GEO AI teams to understand their pain points and requirements. Prioritize data products that address critical use cases, such as route optimization, asset tracking, or environmental modeling. Focus on delivering tangible value early to build momentum.

2. Set Clear Standards for GEO AI Data

Every data product should adhere to standardized SLAs for freshness, accuracy, and availability. For geospatial data, this includes:

  • Metadata describing coordinate systems, projections, and data sources.
  • APIs that support common GEO AI formats, such as GeoJSON or KML (a minimal GeoJSON example follows below).
  • Documentation with examples of how to integrate the data into mapping or analytics tools.

Clear standards ensure consistency and reduce friction for users.
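For orientation, here is what a single record of such a data product can look like as a GeoJSON Feature, shown as a Python dict. The identifiers and coordinates are illustrative; note that GeoJSON orders coordinates as longitude, latitude and assumes WGS84.

```python
# One GeoJSON Feature as served by a geocoded data product (coordinates are [lon, lat], WGS84).
customer_site = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [13.4050, 52.5200]},   # illustrative location
    "properties": {
        "customer_id": "C-10001",
        "site_name": "Berlin DC",
        "source_domain": "sales",
        "updated_at": "2025-01-15T02:00:00Z",
    },
}
```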

3. Make Data Discoverable and Reusable

Register data products in a central catalog with geospatial tags, keywords, and usage guidelines. For example, a dataset of urban heat islands should include metadata on spatial resolution, time periods, and applicable use cases. This makes it easier for teams to find and repurpose the data for different projects.

4. Build, Test, and Iterate

Develop data products incrementally, starting with a minimum viable product (MVP). Test with real users, gather feedback, and refine the product over time. For GEO AI, this might involve validating spatial accuracy or optimizing API performance for large-scale queries.

5. Automate Quality and Access Controls

Use tools to automate data validation, such as checking for geocoding errors or missing attributes. Implement role-based access controls to protect sensitive location data while enabling collaboration.
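A hedged sketch of such an automated quality gate for a geocoded dataset: coordinate bounds and attribute completeness checks in Python. The column names and rules are assumptions; a real pipeline would add projection and geocoding-confidence checks.

```python
def validate_geocoded_rows(rows: list[dict]) -> list[str]:
    """Automated quality gate for a geocoded data product: bounds and completeness checks."""
    issues = []
    for i, row in enumerate(rows):
        lat, lon = row.get("latitude"), row.get("longitude")
        if lat is None or lon is None:
            issues.append(f"row {i}: missing coordinates")
        elif not (-90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0):
            issues.append(f"row {i}: coordinates outside WGS84 bounds")
        if not row.get("address"):
            issues.append(f"row {i}: missing address attribute")
    return issues
```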

Challenges and Solutions in GEO AI

Unclear Requirements

GEO AI teams often have specialized needs that are not immediately obvious. Involve them early in the design process to clarify requirements and avoid misalignment.

Poor Data Quality

Geospatial data can suffer from inaccuracies, such as outdated coordinates or inconsistent projections. Implement automated quality checks and provide transparency about data lineage and update frequencies.

Security and Privacy Risks

Location data is often sensitive. Enforce strict access policies and anonymize data where necessary. Use automated tools to monitor compliance with privacy regulations, such as GDPR or CCPA.

Lack of Standards

Without standardized formats or naming conventions, data products can become difficult to use. Establish organization-wide guidelines for geospatial data, including preferred file formats, coordinate systems, and metadata schemas.

Data Governance for GEO AI

Governance ensures that data products are managed responsibly and aligned with business objectives. Key practices include:

  • Assigning clear ownership for each data product.
  • Defining access policies based on data sensitivity and user roles.
  • Regularly auditing data quality and documentation.
  • Balancing domain autonomy with centralized oversight to maintain consistency.

In GEO AI, governance also involves validating spatial data integrity and ensuring compliance with industry standards, such as ISO 19115 for metadata.

Business and Cultural Impact

Well-defined data products foster a culture of collaboration and innovation. When GEO AI teams can easily access and trust the data they need, they can focus on solving complex problems, such as optimizing delivery routes or predicting climate impacts. Over time, this leads to better decision-making and a stronger data-driven culture.

Practical Tips for GEO AI Teams

  • Start with a few high-value data products, such as a master address repository or a real-time asset tracker.
  • Use automation to maintain data quality and reduce manual effort.
  • Document data products with GEO AI-specific details, such as supported coordinate systems and example queries.
  • Regularly review and update data products based on user feedback and changing requirements.

Conclusion

Defining and delivering data products is a cornerstone of a successful Data Mesh strategy for GEO AI. By focusing on user needs, setting clear standards, and enforcing governance, organizations can transform raw spatial data into actionable insights. This not only supports immediate business goals but also lays the foundation for long-term innovation in GEO AI applications.

Frequently Asked Questions

What is a data product in the context of Data Mesh and GEO AI?

A data product is a packaged set of data designed for reuse by different teams. In GEO AI, this could include spatial datasets, geocoded APIs, or real-time location analytics. Each data product is owned by a domain team, which is responsible for its quality, documentation, and accessibility.

How do you ensure data products are discoverable and reusable in GEO AI applications?

Data products should be registered in a data catalog with clear metadata, standardized naming conventions, and APIs for easy access. For GEO AI, this includes geospatial tags, coordinate systems, and usage examples to facilitate reuse across projects.

What are the key challenges in delivering data products for GEO AI, and how can they be addressed?

Common challenges include unclear user requirements, poor data quality, and security risks. Solutions involve early user engagement, automated quality checks, and strict access controls. For GEO AI, ensuring spatial accuracy and compliance with geospatial standards is critical.

Why is data governance important for data products in GEO AI?

Data governance ensures that data products are trustworthy, compliant, and aligned with business goals. In GEO AI, governance includes validating the integrity of spatial data, managing sensitive location data, and enforcing access policies to protect privacy.

How can automation improve the delivery of data products in a Data Mesh framework?

Automation streamlines data quality checks, updates, and access management. For GEO AI, tools can automate geocoding validation, metadata enrichment, and API deployments, reducing manual errors and accelerating time-to-value.

What role do SLAs play in defining data products for GEO AI?

Service Level Agreements (SLAs) define expectations for data freshness, accuracy, and availability. In GEO AI, SLAs may specify update frequencies for spatial datasets or response times for geocoding APIs, ensuring reliability for downstream applications.


Form Cross-functional Data Product Teams: The Heart of Data Mesh

Introduction

A Data Mesh is a new approach for organizations to manage and utilize data. Instead of having a single, central team handle all the data, Data Mesh allows different business teams to own and manage their own data. This approach helps make data more useful, trusted, and available across the company. One of the most important steps in making Data Mesh work is building cross-functional data product teams. These teams bring together individuals with diverse skills to work toward a shared goal. When done right, they help break down barriers, improve data quality, and make the business more agile.

What are Cross-functional Data Product Teams?

A cross-functional data product team is a group comprising individuals from various departments and backgrounds. Each person brings their own skills and knowledge. For example, a team might include data engineers, analysts, product owners, and business experts. Data engineers handle the technical side, analysts make sense of the data, product owners guide the team’s direction, and business experts make sure the data meets real business needs.

By working together, these teams can create and manage data products that are useful and reliable. For example, if a company wants to improve its sales data, a cross-functional team might include a sales manager, a data engineer, a business analyst, and a product owner. Each person helps make sure the data product is accurate, useful, and easy to use.

Key Activities and Best Practices

  • Bringing Together Diverse Skills: Begin by selecting team members from various areas, including IT, business, and analytics. Ensure that each person understands their role and how they can contribute to the team's success.
  • Full Responsibility for Data Products: Give the team ownership of their data product from start to finish. This means they are responsible for creating, maintaining, and improving the data product over time.
  • Close Collaboration: Encourage the team to work closely with both business and technical sides. Regular meetings and open communication help everyone stay aligned.
  • Best Practices for Team Setup: Set clear goals and processes. Use frameworks such as Agile or Scrum so the team can work in short, focused cycles. Ensure that everyone can give and receive feedback openly.
  • Ongoing Support: Offer training and resources to enable the team to continue learning and improving.

Challenges and Solutions

  • Unclear Roles: Sometimes, team members are unsure of their job responsibilities. Solve this by clearly defining each person’s role and responsibilities from the start.
  • Poor Communication: Teams can struggle if they don’t talk enough. Set up regular meetings and use shared tools to keep everyone informed.
  • Lack of Trust: Individuals from diverse backgrounds may initially lack trust in one another. Build trust by encouraging open feedback and celebrating team successes.
  • Resistance to Change: Some people may be used to working in silos. Help them see the benefits of working together by sharing early wins and positive results.

Data Governance Considerations

Data governance is about ensuring that data is managed safely and effectively. In cross-functional teams, it’s essential to establish clear guidelines regarding who owns the data, who has access to it, and how it should be utilised. Each team should follow company-wide standards for privacy, security, and quality. This helps keep data safe and reliable, even as teams work more independently. 

Business and Cultural Impact

Cross-functional teams help break down silos between departments. This leads to better collaboration and faster decision-making. When teams own their data products, they care more about quality and results. This supports business goals by making data more useful and trusted. Over time, this approach builds a culture of ownership, teamwork, and continuous improvement.

Practical Tips and Checklist

Tips:

  • Start small with one or two teams before expanding.
  • Choose team members who are open to learning and working with others.
  • Set clear goals and celebrate early successes.
  • Provide training on both technical and business topics.

Checklist:

  • Team members from different departments are included
  • Roles and responsibilities are clearly defined
  • Regular meetings are scheduled
  • Data governance rules are in place
  • Team has access to the needed tools and training

Conclusion

Forming cross-functional data product teams is a key step in the Data Mesh journey. These teams bring together different skills and viewpoints, helping to break down barriers and improve data quality. By giving teams ownership and support, organizations can make their data more valuable and trusted, setting the stage for long-term success with Data Mesh.

Identify Data Domains: Building Blocks of Data Mesh

Introduction

Data Mesh is a new way for organizations to manage and use data. Instead of having a single, central team handle all the data, Data Mesh allows different business teams to own and manage their own data. This makes data more useful, trusted, and available across the company. One of the first and most important steps in Data Mesh is to identify data domains. Finding and defining these domains helps teams work more effectively and ensures that everyone knows who is responsible for each set of data.

What is Identifying Data Domains?

A data domain is a group of related data that corresponds to a specific business function, such as Sales, Finance, or Operations. In Data Mesh, each domain is treated like a product. This means the team in charge of the domain is responsible for ensuring the data is of high quality and easy to use for others within the company. 

For example, the Sales domain might include all the data about customers, orders, and revenue. The Finance domain could include budgets, expenses, and payments. By breaking data into domains, companies can make sure the right people are in charge of the right data.
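A simple way to make these boundaries explicit is a small registry that records, for each domain, its owning team and the core datasets it is accountable for. The structure below is a minimal sketch with assumed names based on the Sales and Finance examples above.

```python
# Minimal sketch of a domain registry; domain names, owners, and datasets
# are illustrative assumptions, not a prescribed structure.
data_domains = {
    "sales": {
        "owner": "sales-data-product-team",
        "datasets": ["customers", "orders", "revenue"],
    },
    "finance": {
        "owner": "finance-data-product-team",
        "datasets": ["budgets", "expenses", "payments"],
    },
}

def owner_of(dataset: str) -> str | None:
    """Return the owning domain team for a dataset, if any."""
    for domain, info in data_domains.items():
        if dataset in info["datasets"]:
            return info["owner"]
    return None

print(owner_of("orders"))  # -> sales-data-product-team
```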

Key Activities and Best Practices

  • Break Down the Company into Business-Aligned Domains:
    Start by looking at how your business is organized. Each major area, like Sales, Marketing, or HR, can be a domain. Consider the data each team uses and creates on a daily basis.
  • Assign Clear Ownership and Accountability:
    Each domain should have a team that owns it. This team is responsible for the quality, security, and sharing of the data. The domain owner acts like a product owner, making sure the data meets the needs of others.
  • Start with High-Impact Domains:
    Begin with domains that have the biggest effect on your business. For example, if customer data is very important, start with the Sales or Customer domain. This helps show quick wins and builds support for Data Mesh.
  • Define and Refine Domains Over Time:
    Domains are not set in stone. As your business changes, you may need to split, merge, or adjust domains. Review domains regularly to make sure they still make sense for your company.

Challenges and Solutions

  • Overlapping Domains:
    Sometimes, two teams may want to own the same data. To solve this, talk with both teams and agree on clear boundaries. Use contracts to define who provides what data and how it is shared (see the sketch after this list).
  • Unclear Ownership:
    If no one wants to own a domain, it can lead to poor data quality. Make sure every domain has a clear owner and that their role is understood and supported by leadership.
  • Changing Business Needs:
    As your company grows, domains may need to change. Be flexible and review domains often to keep them aligned with business goals.
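To make the boundary-setting concrete, the sketch below shows one possible shape for a lightweight inter-domain sharing contract: who produces the data, what schema and refresh cadence consumers can rely on, and how changes are versioned. Every name and value here is an illustrative assumption, not a prescribed format.

```python
# Minimal sketch of an inter-domain sharing contract; all fields are
# illustrative assumptions rather than a standard schema.
sharing_contract = {
    "contract_id": "sales-to-finance-orders-v1",
    "producer_domain": "sales",
    "consumer_domain": "finance",
    "dataset": "orders",
    "schema": {
        "order_id": "string",
        "customer_id": "string",
        "order_total": "decimal(18,2)",
        "currency": "string",
        "order_date": "date",
    },
    "refresh": "daily by 06:00 UTC",
    "versioning": "breaking changes require a new major version and 90 days notice",
}
```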

Data Governance Considerations

Data governance is about establishing rules and ensuring that everyone follows them. In Data Mesh, each domain team must follow company-wide regulations for privacy, security, and data quality. The domain owner is responsible for making sure their team follows these rules. At the same time, there should be a central group that helps set standards and checks that domains are working together smoothly.

Business and Cultural Impact

Clear data domains enable teams to work more effectively together. When everyone knows who owns what data, it is easier to find answers and solve problems. This leads to improved data quality and enables the business to make faster, more informed decisions. It also fosters a culture where teams take responsibility for their data and are proud to share it with others. 

Practical Tips and Checklist

Tips:

  • Begin by creating a map of your business areas and the data they utilise.
  • Consult with business leaders to determine which domains are most important.
  • Assign a clear owner to each domain.
  • Write down the rules for each domain, including what data it covers and who can use it.
  • Review domains regularly and adjust as needed.

Checklist:

  • List all major business areas.
  • Identify the main data used and created by each area.
  • Assign a domain owner for each area.
  • Set clear rules and responsibilities.
  • Review and update domains as the business changes.

Conclusion

Identifying data domains is a key step in building a Data Mesh. It helps teams take ownership of their data, improves quality, and makes it easier for everyone to find and use the data they need. By starting with clear domains, your company can move forward on the Data Mesh journey with confidence and success.
