

CluedIn articles

Microsoft Fabric and CluedIn: preparing your data for the era of AI together



At this year’s Microsoft Build event, Microsoft announced Microsoft Fabric - its brand-new data and analytics offering. In this article, we’ll outline what Fabric is, its key features, and how you can elevate Fabric’s capabilities with fully integrated, high-quality data from CluedIn.

Read More

Graph versus Relational databases: which is best?


In the world of data management, businesses face the challenge of efficiently handling vast amounts of information. Traditional relational databases have been the go-to solution for many years, but the rise of graph databases has introduced a compelling alternative, especially when it comes to managing master data.

In this article, we will explore the key differences between graph and relational databases and discuss why graph databases are particularly well-suited for master data management.

Understanding Graph and Relational Databases:

Relational databases have long been the standard choice for storing structured data. They organize data into tables with predefined schemas, where relationships between tables are established through primary and foreign keys. Relational databases excel at managing transactional data and complex queries involving multiple tables.

On the other hand, graph databases are designed to store and manage highly connected data. They use a network-like structure composed of nodes (entities) and edges (relationships) to represent data relationships. Graph databases prioritize relationships as first-class citizens, making it easier to model and traverse complex connections between entities.
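The contrast can be sketched in a few lines of Python (an illustrative sketch, not code from any particular database product): in the relational style, relationships live in foreign-key columns and are recovered with joins, while in the graph style they are stored as edges that can be traversed directly.

```python
# Relational style: relationships live in foreign-key columns,
# and connecting entities requires an explicit join.
customers = [{"id": 1, "name": "Acme"}]
orders = [{"id": 10, "customer_id": 1, "item": "widget"}]

def join_orders(customer_id):
    """Join orders to a customer via the foreign key."""
    return [o for o in orders if o["customer_id"] == customer_id]

# Graph style: relationships are first-class edges that can be
# traversed directly from a node.
graph = {
    "nodes": {"c1": {"type": "Customer", "name": "Acme"},
              "o10": {"type": "Order", "item": "widget"}},
    "edges": [("c1", "PLACED", "o10")],
}

def neighbors(node, rel):
    """Follow edges of a given relationship type from a node."""
    return [dst for src, r, dst in graph["edges"] if src == node and r == rel]
```

In a real graph database the traversal is indexed rather than a linear scan, but the modeling difference is the point: the edge itself carries the relationship, with no schema-level join required.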

Master Data Management (MDM) and its challenges:

Master Data Management focuses on creating a single, consistent, and reliable version of key data entities within an organization. This includes customer information, product catalogs, employee records, and other critical data elements. The challenges in MDM stem from the need to handle vast amounts of interconnected data and maintain data integrity across multiple systems and business units.

Why Graph Databases Excel in MDM:

1.    Relationship-Centric Model
Graph databases inherently prioritize relationships, making them ideal for managing complex interconnections within master data. Integrating data with a Graph-based MDM system is much easier and quicker than with one that uses a relational database, because the Graph will naturally surface connections between the data that would be difficult or impractical to stipulate or discover on your own.

This is particularly beneficial for a number of use cases, such as building a single customer view. To be effective, a single customer view must aggregate data from various touchpoints and channels to create a holistic profile. This means integrating both unstructured and structured data from a number of source systems and applications and finding the relationships between them, which is extremely difficult to achieve using a relational database with predefined schemas and relationships.
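As an illustrative sketch (the records, sources, and field names here are hypothetical), linking records from several systems through a shared identifier such as an email address is the kind of connection a graph-based platform can discover without an upfront schema:

```python
# Hypothetical records from three source systems, with no shared schema.
records = [
    {"source": "crm",     "name": "Jane Doe", "email": "jane@example.com"},
    {"source": "billing", "account": "A-17",  "email": "jane@example.com"},
    {"source": "support", "ticket": "T-204",  "email": "jane@example.com"},
]

def single_customer_view(email):
    """Aggregate every record connected by the same email into one profile."""
    profile = {"email": email}
    for rec in records:
        if rec["email"] == email:
            # Fold in every field except the source label.
            profile.update({k: v for k, v in rec.items() if k != "source"})
    return profile
```

The point of the sketch is that the connection (the shared email) emerges from the data itself; nothing about the CRM, billing, or support schemas had to be reconciled in advance.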

Read More

Master Data Management for Life Sciences and Pharmaceuticals Industries


Master Data Management (MDM) is the process of creating and maintaining a single, accurate, and consistent source of information for an organization's critical data entities such as customers, products, suppliers, and patients. In the life sciences and pharmaceutical industries, MDM is especially important due to the ever-increasing amount of data that needs to be stored, managed, and used to drive better commercial outcomes.

In this article, we will explore the benefits of master data management in the life sciences and pharmaceutical industries, including how MDM can improve data quality, enhance operational efficiency, and support regulatory compliance.

Read More

Why modern master data management is to traditional MDM what lakehouses are to data warehouses



As the volume and variety of data continue to grow, organizations face the challenge of effectively managing and utilizing all of their data assets for maximum business impact. Several solutions have emerged to address this challenge, including Data Lakes, Data Warehouses, Data Lakehouses, and Master Data Management (MDM) platforms.

Read More
blue and turquoise waves

The role of AI in Master Data Management


Master Data Management (MDM) is the process of maintaining a central repository - or single source of truth - of an organization's critical data, which includes customer data, product data, and other key data entities. The data is often scattered across multiple systems and applications, and MDM helps organizations to consolidate and manage this data effectively.

As organizations have embraced digital transformation in order to better serve their customers and enhance operations, this reliable supply of high-quality, accurate, and accessible data has become even more desirable. The problem, however, is that in many cases MDM has never quite lived up to the promise of delivering it.

Do we need AI to fix MDM?

Augmented data management techniques such as zero modeling and eventual connectivity have already gone a long way toward solving some of the well-established problems with MDM. For example, it is now possible for business users to wrangle with the data directly, without continually needing the support of IT teams. Upfront data modeling, profiling, and analysis are no longer a necessity as Graph-based platforms like CluedIn can do this work for you in a completely automated fashion once the data has been ingested. Systems like CluedIn are also capable of automating the integration of data in any format from a limitless number of sources. All of this has accelerated time to data value significantly and allowed businesses to fast-track insights, intelligence, and data science initiatives.

However, traditional MDM systems have struggled to keep up with these advances, and in some cases have turned to AI as a means of bridging the gap between what they should have delivered and the reality. For example, there are MDM players today using their own AI engines to help with data lineage – i.e. cataloging the sources of master data and their domain types, and mapping how master data moves between sources and applications. Advanced MDM systems like CluedIn can already do this – without relying on AI. Another example would be using AI to help automate schema matching. Again, not a job that requires AI if you’re using a Graph-based, augmented MDM platform.

What is the role of AI in MDM?

That said, there are areas in which the use of AI can dramatically improve the speed, cost, and ease of preparing data for ubiquitous use across an organization. As advanced as an MDM platform may be, there is no doubt that AI is a force accelerator when it comes to mastering data. Here are just a few examples of the potential application of AI in MDM:

  • Data Quality: Data quality is a major concern in Master Data Management, as data is often incomplete, inconsistent, or inaccurate. Advanced systems like CluedIn have already automated much of the data cleaning and enrichment process, but AI brings a whole new level of speed and simplicity to this exercise by using machine learning algorithms to automatically identify and resolve data errors, such as duplicate records or inconsistent data formats.
  • Data Governance: Creating and enforcing effective data governance is a challenge for every organization. It not only involves creating policies and procedures to ensure that data is properly managed and secured, but also the application of them which is where many data governance efforts fall down. With AI, however, the policy or rule can be automatically enforced immediately following its input into the platform.
  • Data Democratization: One of the main problems with traditional MDM is its heavy reliance on IT teams, both in terms of deployment and ongoing use. Again, platforms like CluedIn have taken a low/no-code approach in order to make the system as accessible as possible, but AI has the potential to take this to a whole new level, as natural language processing makes data scientists of even the least technical amongst us.
  • Data Validation: A huge benefit of applying AI to MDM is that it can effectively act as your “MDM co-pilot”. This means that it can not only explain any of the decisions it took, on demand, but will also intuitively corroborate (or challenge) your decisions.
  • Data Maintenance: Ensuring that your data is up-to-date and ready to deliver at any time is an ongoing, resource-intensive task. AI can help to automate data maintenance by using machine learning algorithms to identify changes in data records and update them automatically. The benefit of doing this is that the model can essentially train itself based on the data in the MDM system – becoming more reliable and accurate over time.
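As a simplified, rule-based sketch of the duplicate detection described under Data Quality (real platforms would use trained matching models; the records and similarity threshold here are invented for illustration):

```python
from difflib import SequenceMatcher

# Hypothetical customer records with a near-duplicate pair.
records = [
    {"id": 1, "name": "Jon Smith",  "city": "London"},
    {"id": 2, "name": "John Smith", "city": "London"},
    {"id": 3, "name": "Ana Lopez",  "city": "Madrid"},
]

def similarity(a, b):
    """Fuzzy string similarity in [0, 1], case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def likely_duplicates(records, threshold=0.85):
    """Flag pairs that share a city and have very similar names."""
    pairs = []
    for i, r1 in enumerate(records):
        for r2 in records[i + 1:]:
            if r1["city"] == r2["city"] and similarity(r1["name"], r2["name"]) >= threshold:
                pairs.append((r1["id"], r2["id"]))
    return pairs
```

Machine learning replaces the hand-picked threshold and blocking rule (same city) with learned ones, but the task is the same: surface candidate duplicates for merging or review.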

Should MDM vendors build their own AI engines?

As previously mentioned, some MDM vendors have already built their own AI engines as part of their MDM offerings. The issue with this is that their models will never be as powerful or comprehensive as dedicated AI platforms like OpenAI, Google AI, and IBM’s Watson. Developing a high-quality AI engine requires significant expertise in machine learning, data processing, and software engineering. It is also a time-consuming and expensive exercise. Although MDM vendors may have some of the required specialisms and investment capacity at their disposal, AI is not their core focus, which is why in most cases they are better off partnering with AI vendors or utilizing existing AI platforms to provide their customers with the best experience.

What’s next for AI and MDM?

It’s an exciting time for the technology industry as AI gains momentum and starts to show exactly what it is capable of. At the moment, we have only witnessed a fraction of what AI can bring to the data management industry as a whole, and MDM in particular. Without a doubt, AI will bring about a major transformation in how we prepare data to deliver insight in the future, and once its potential is realized the way we master data will be changed forever.

Read More

Master Data Management for the Insurance Industry


Master Data Management (MDM) is a crucial process for many industries, including insurance. MDM involves the creation and management of a central repository of master data, which is used to support a wide range of business processes and decision-making activities. In the insurance industry, MDM is particularly important because of the large amount of data that insurers must manage in order to accurately assess risks, underwrite policies, and settle claims.

The Role of Master Data Management in Insurance

MDM plays a critical role in the insurance industry by providing a single source of high-quality data that can be used to support a range of business processes. This includes:

  • Risk Assessment: In order to accurately assess risk, insurers need access to a wide range of data, including demographic information, credit scores, and historical claims data. By consolidating this data, insurers can more easily analyze and leverage this information to identify trends and patterns that can help them make more educated underwriting decisions.
  • Underwriting: Once insurers have assessed risk, they must decide whether or not to underwrite a policy. This involves evaluating a range of factors, including the policyholder's history, the type of policy being offered, and the level of risk associated with the policy. By using MDM to manage this data, insurers can make more informed underwriting decisions, resulting in more accurate pricing and better risk management.
  • Claims Processing: In the event of a claim, insurers must quickly and accurately process the claim in order to satisfy the policyholder and minimize their own costs. MDM can be used to manage all of the data associated with the claim, including the policyholder's information, the type of claim being made, and any relevant documentation. This can help insurers quickly process claims and reduce the likelihood of fraud.
  • Compliance: The insurance industry is heavily regulated, with strict requirements for data management and reporting. MDM can help insurers ensure that they are meeting these requirements by supporting data governance policies and procedures, automatically categorizing and masking sensitive personal information and providing detailed data lineage.
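The automatic masking of sensitive personal information mentioned under Compliance can be sketched as a simple pattern-based pass (a hypothetical illustration; production systems use much richer classification than two regular expressions):

```python
import re

# Hypothetical PII patterns an MDM/governance layer might mask before
# exposing claims data to downstream consumers.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text):
    """Replace recognized PII with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label} masked>", text)
    return text
```

Coupled with data lineage, this kind of categorization lets insurers demonstrate exactly where sensitive fields flow and where they are redacted.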

What is the Business Impact of Master Data Management?

Currently, the biggest opportunity in MDM for insurance companies is the ability to organize data in new and innovative ways to enable advanced analytics, Artificial Intelligence (AI), Machine Learning (ML), and cognitive learning systems. Data-driven organizations are already using MDM architectures to “future-proof” their business by anticipating customer expectations and streamlining operations.

For example, CX management is the source of organic revenue growth for many insurers, and a modern MDM system can take the art and science of managing customer relationships to new levels. By consolidating data from individual policies and aggregating them into a customer/household view, or golden record, insurers can:

  • Use advanced analytics including AI to up-sell/cross-sell more efficiently and effectively
  • Determine customer channel preferences and communicate, service, market, and sell accordingly
  • Understand the status of claims reported, paid, and outstanding at the customer/household level
  • Develop customer-level and household-level profitability scores

Diving a little deeper, once an MDM solution is in place, insurance firms benefit in a number of ways:

  • 360° customer view – MDM enables a holistic 360° customer view that greatly improves business insights around customer sentiment and demand. This view integrates back to the master data source, ensuring the validity and accuracy of the insights gained. The golden record takes innovation in sales, service, and marketing to new levels of creativity and personalization.
  • Streamlined Customer Data Integration (CDI) – Good MDM practices enable streamlined CDI, reducing the day-to-day data management burden and releasing resources to focus on value-driven projects.
  • New Cross-Selling Opportunities – Advanced analytics tools can reveal hidden insights previously unknown to the organization. Insurance firms can use this insight to identify cross-selling opportunities and to prioritize specific customers or demographics with tailored sales tactics.

Challenges of Master Data Management in Insurance

Data Quality: Insurance data can be complex and difficult to manage, with a wide range of data sources and formats. While traditional MDM systems have struggled to cope with semi-structured and unstructured data, augmented platforms such as CluedIn are capable of ingesting poor quality data in almost any format in order to consolidate, clean and enrich the data ready for use.

Data Integration: Insurance data is often siloed in different systems and databases, which can make it difficult to integrate this data into a single MDM repository. Historically, this would require significant data mapping and integration efforts. However, more advanced systems like CluedIn can easily cope with hundreds of different data sources.

Governance: MDM requires strong governance to ensure that the data is managed effectively and efficiently. This includes establishing clear policies and procedures for data management, as well as providing ongoing training and support to employees. A popular option for many organizations is to use a data governance platform in conjunction with an MDM system in order to ensure that data is handled in accordance with the governance standards set as well as being easily accessible and usable by business users in various teams.

Cost: Implementing a traditional MDM system is a costly endeavour, requiring significant investments in software, hardware, and personnel. The need to model and map data beforehand also adds months to the time taken to realize any value from these investments. All of this has changed with the advent of augmented MDM systems, which remove the need for upfront data modeling and use modern technologies like Graph to allow the natural relationships between the data to emerge. Contemporary MDM systems are also Cloud-native, which means that they offer the advantages of both scale and efficiency inherent to the Cloud.


Despite the obvious benefits of MDM, the barriers of traditional approaches have, until now, prevented many insurers from investing in this technology. With many of those hurdles now cleared, the path has opened up for insurers who want to use their data to fuel the insights and innovations they need to remain competitive and profitable. Improvements in business processes, streamlining operations, and managing risk are all vital to the success of an insurance provider, and MDM provides the foundation of trusted, business-ready data that enables them.

Read More

Data Governance and Master Data Management. What is the difference and why do I need both?


Data Governance and Master Data Management (MDM) are both important components of managing an enterprise's data assets. While they have somewhat different goals and remits, they are complementary and work together to ensure that an organization's data is accurate, consistent, and secure. The close relationship between the two can often lead to confusion over which discipline is responsible for different areas of data management, and sometimes means that the terms are used interchangeably.

Let's start by defining what Data Governance and Master Data Management are:

Data Governance: 

Data Governance refers to the overall management of an organization's data assets. This is the process of managing the availability, usability, integrity, and security of the data. It involves establishing policies, procedures, and standards for data usage and ensuring that they are followed by everyone who interacts with the data. The primary objective of Data Governance is to ensure that data is properly managed and that it is used in a way that aligns with the organization's goals and objectives.

Some of the key components of Data Governance include:

  • Data policies: These are formal statements that outline how an organization's data should be managed, who has access to it, and how it should be used.
  • Data standards: These are established guidelines and rules that govern how data is collected, stored, and used across the organization.
  • Data stewardship: This is the process of assigning ownership and responsibility for managing specific data elements within an organization.
  • Data quality: This refers to the overall accuracy, consistency, completeness, and timeliness of an organization's data.
  • Data security: This involves protecting data from unauthorized access, theft, or loss.

Master Data Management

This is the process of creating and maintaining a single, accurate, and consistent version of data across all systems and applications within an enterprise. It involves identifying the most critical data elements that need to be managed, and then creating a master data record that serves as the authoritative source for those elements. The primary objective of MDM is to ensure that these critical data elements are accurate, complete, and consistent across the enterprise.

Some of the key components of Master Data Management include:

  • Data modeling: This involves defining the structure and relationships between different data elements and creating a data model that represents the organization's master data.
  • Data integration: This involves integrating master data from various sources and systems to create a single, authoritative source of master data.
  • Data quality management: This involves ensuring that the master data is accurate, complete, and consistent across all systems and applications.
  • Data governance: This involves establishing policies, procedures, and standards for managing master data and ensuring that they are followed by everyone who interacts with the data.
  • Data stewardship: This involves assigning ownership and responsibility for managing specific master data elements within an organization.
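The data integration and data quality components above can be illustrated with a simple survivorship rule for assembling a golden record (a hypothetical sketch; the source names, priority ranking, and fields are invented):

```python
# Hypothetical trust ranking: higher number = more authoritative source.
SOURCE_PRIORITY = {"erp": 3, "crm": 2, "web_form": 1}

source_records = [
    {"source": "web_form", "name": "jane d",   "phone": "555-0100"},
    {"source": "crm",      "name": "Jane Doe", "phone": None},
    {"source": "erp",      "name": "Jane Doe", "phone": "555-0199"},
]

def golden_record(records):
    """Field-level survivorship: the most trusted non-empty value wins."""
    golden = {}
    # Apply records from least to most trusted so later (higher-priority)
    # sources overwrite earlier ones, while empty values never overwrite.
    for rec in sorted(records, key=lambda r: SOURCE_PRIORITY[r["source"]]):
        for field, value in rec.items():
            if field != "source" and value is not None:
                golden[field] = value
    return golden
```

Real MDM platforms support much richer survivorship logic (per-field rules, recency, stewardship overrides), but the principle is the same: one authoritative record distilled from many partial ones.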

It is fair to say that there are several areas of data management in which both Data Governance and Master Data Management have a role to play. For example, defining data quality standards and policies would most likely fall under the remit of Data Governance, whereas assuring the integrity, consistency, and relevance of individual records is the responsibility of Master Data Management. Similarly, data stewardship also has a foot in each camp. While it is generally Data Governance policies that specify how data should be managed and maintained, it is Master Data Management platforms that provide the tools for data stewards to ensure that these policies are followed.

The main differences between Data Governance and Master Data Management are:

  • Focus: Data Governance focuses on managing an organization's data assets as a whole, while MDM specifically targets critical data elements.
  • Scope: Data Governance covers all data assets within an organization, while MDM is concerned only with master data.
  • Objectives: Data Governance aims to ensure that data is properly managed and used in a way that is compliant and secure, and that aligns with the organization's goals and objectives. MDM aims to ensure that critical data elements are accurate, consistent and ready for use by all systems and applications.
  • Processes: Data Governance involves developing and implementing policies, procedures, and standards for managing data, while MDM involves creating and maintaining a single, authoritative source of master data.
  • Ownership: Data Governance involves designating ownership and responsibility for managing all data within an organization, while MDM enforces those roles and responsibilities for managing specific data assets.

Do I really need Data Governance and Master Data Management tools?

If you want to be able to use your data for value creation, and do so in a compliant and secure way, then the answer is yes.

Data Governance and Master Data Management are complementary disciplines in the sense that they both work towards ensuring the quality and integrity of an organization's data assets. Here are some of the specific ways in which they complement each other:

  1. Data Governance provides the framework for MDM: A robust Data Governance framework provides the foundation for MDM. It establishes the policies, standards, and procedures for data usage that MDM relies on to create and maintain accurate and consistent master data records.
  2. MDM ensures data consistency across systems: MDM provides a single, authoritative source of master data that is consistent across all systems and applications within an enterprise. This helps to ensure that data is not duplicated or inconsistent across different systems, which can lead to errors and inefficiencies.
  3. Data Governance ensures data security and privacy: Data Governance policies and procedures help to ensure that sensitive data is properly secured and that data privacy regulations are adhered to. MDM relies on these policies and procedures to ensure that master data records are secure and comply with data privacy regulations.
  4. MDM enables effective decision-making: With accurate and consistent master data records, organizations can make better decisions based on reliable data. Data Governance ensures that the data is trustworthy, while MDM ensures that the data is accurate and consistent across all systems.

Benefits of implementing Data Governance and Master Data Management

Improved data quality:
Data Governance ensures that data is properly managed and secured, while MDM ensures that critical data elements are accurate and consistent across all systems. Together, these concepts help to improve the overall quality of an organization's data.

Regulatory compliance:
Data Governance policies and procedures help to ensure that an organization complies with data privacy regulations and other regulatory requirements. MDM relies on these policies and procedures to ensure that master data records are compliant with these regulations.

Better decision-making:
Accurate and consistent data is essential for effective decision-making. With Data Governance and MDM, organizations can rely on trustworthy data to make better decisions.

Cost savings:
Inaccurate or inconsistent data can lead to costly errors and inefficiencies. Data Governance and MDM help to reduce these costs by ensuring that data is accurate, consistent, and properly managed.


Data Governance and Master Data Management are complementary yet independent disciplines of data management. Both have distinct areas of responsibility and roles to play within a data estate, and in practical terms, there is little overlap between the two. While Data Governance provides the overall framework within which Master Data Management operates, one doesn’t necessarily have to come before the other and either can work autonomously.

However, as with most technology fields, the real value comes from having a set of tightly integrated tools and systems that work together to deliver greater value than the sum of their individual parts. That is certainly the case with Data Governance and Master Data Management. Organizations are demanding more from their data than ever before – they want more insights, more intelligence, and as a result, more opportunities to grow the business. Meeting that need means that you can’t afford to waste valuable time and money wrangling with data that is of poor quality and difficult to access. In combination, Data Governance and Master Data Management can provide a reliable, trusted pipeline of data that is ready to deliver insight across the business, and that is what most organizations today need to succeed.

Read More

A Brief History of the Microsoft Intelligent Data Platform


The Microsoft Intelligent Data Platform is a suite of tools and services that enable businesses to manage and analyze large amounts of data. Although not officially launched until 2022, the origins of this powerful ecosystem can be traced back over 30 years. The platform has evolved over time to keep pace with changing technologies and business needs, and most recently was expanded to include technology, consulting and ISV partners to complement and build upon its capabilities.

Here's a brief history of the Microsoft Intelligent Data Platform:

The Origins of SQL Server (1989-1995)

The origins of the Microsoft Intelligent Data Platform can be traced back to the early days of SQL Server, which was first released in 1989 for the OS/2 operating system. SQL Server was designed to be a relational database management system (RDBMS) that could store and manage large amounts of data.

Over the years, SQL Server evolved and gained new features, such as support for stored procedures and triggers. Microsoft also released versions of SQL Server for Windows NT and Windows 2000, which helped make it a popular choice for enterprise-level applications.

The Rise of Business Intelligence (1995-2005)

In the late 1990s and early 2000s, the concept of business intelligence (BI) began to gain popularity. BI refers to the tools and processes that businesses use to analyze data and gain insights into their operations.

To meet the growing demand for BI tools, Microsoft released a suite of products under the banner of Microsoft Business Intelligence. These products included SQL Server Analysis Services, which allowed businesses to create multidimensional data models, and SQL Server Reporting Services, which enabled users to create reports and visualizations.

The Emergence of Big Data (2005-2010)

In the mid-2000s, the amount of data being generated by businesses began to grow exponentially. This trend was driven by the rise of the internet, social media, and other digital technologies.

To help businesses manage and analyze this growing amount of data, Microsoft introduced a new product called SQL Server Integration Services. This product allowed businesses to extract, transform, and load (ETL) data from a wide range of sources.

Microsoft launched its own Master Data Management offering - Master Data Services (MDS) - as part of Microsoft SQL Server 2008 R2 in 2010, and it has been included as a feature in every subsequent version of SQL Server. 

The Cloud Era (2010-Present)

In 2010, Microsoft launched its cloud computing platform, Azure. Azure enables businesses to build, deploy, and manage a wide range of applications and services in the cloud. It has since grown to become one of the leading cloud computing platforms, competing with other major cloud providers such as Amazon Web Services (AWS) and Google Cloud Platform.

To support the growing demand for cloud-based data management and analysis tools, Microsoft continued to evolve its suite of data tools and services. This included the release of products such as Azure Data Factory, which allows businesses to orchestrate data workflows in the cloud, and Azure Stream Analytics, which enables real-time data analysis.

Microsoft also embraced open-source technologies, such as Apache Hadoop and Apache Spark, which allowed businesses to analyze large amounts of data using distributed computing techniques.

In October 2022, Microsoft announced the creation of the Microsoft Intelligent Data Platform Partner Ecosystem, consisting of a select number of technology companies, consulting firms, and independent software vendors (ISVs) that offer solutions and services that complement the platform. CluedIn is one such partner, forming part of the Governance pillar of the platform alongside Microsoft Purview. CluedIn is a recommended Azure-native Master Data Management provider and has also been endorsed as a modern alternative to MDS.

Today, the Microsoft Intelligent Data Platform continues to evolve to meet the needs of businesses of all sizes. With its wide range of tools and services, the platform allows businesses to manage and analyze data in the cloud, on-premises, or in hybrid environments. The ultimate goal is to allow companies to realize more value from their data by shifting the emphasis away from day-to-day data management and towards value-creation opportunities.

Read More

Master Data Management for the Banking Industry


In a world where our personal data is held by a multitude of different organizations, banks hold the deepest and most personal datasets. Forget Google and Facebook, their datasets pale into insignificance when compared with the sheer volume of data held by banks. From employment and property history to investments, savings, credit scores, and transactions, banks have it all.

Data challenges in the Banking Industry

With a wealth of customer and other data at their disposal, banks should be in the best position to offer their customers personalized advice, products, and services. In reality, banking customers rarely receive the kind of tailored offers and bespoke advice they should. Banks are also struggling to streamline processes, manage costs, and drive efficiencies – there is still a lot of manual work required to integrate and clean data, which inhibits a bank’s ability to gain insights and apply intelligence-based technologies.

One of the main challenges for banks is the volume of data they have. Integrating, cleaning, and enriching so many different types of data from multiple systems is not an easy undertaking. This is probably why most banks are still grappling with creating a unified view of internal, structured data. In the meantime, the market has already moved on to addressing unstructured data and using external sources to enrich it in readiness for delivering insight.

Another major consideration for banks in relation to how they manage their data is meeting regulatory requirements and ensuring high levels of compliance at all times. Banks are subject to laws and regulations addressing everything from capital requirements, financial instruments, and payment services to consumer protection and promoting effective competition. All of which place restrictions and conditions on how banks manage their data and ensure its integrity.

Drivers of digital transformation and data modernization

The imperative for banks to evolve into more digitally-enabled, data-driven institutions comes from several distinct, but undeniably related areas.

The emergence of Cloud-native, agile new market entrants is forcing banks to follow their lead and take a more holistic view of their customers and their data. Customers don’t just want to be told which product to buy next – they want personalized advice in real time. It’s not enough for a bank to know what its customers did; it needs to know why they did it and what they are likely to need in the future. In general, it is estimated that banks have the potential to reduce churn by between 10% and 20% and increase customer activity by an average of 15%. This would substantially impact revenue and is why managing customer data and preparing it for use is one of the most important use cases for Master Data Management (MDM) in the banking industry.

Building lean, efficient, and highly effective processes is also a top priority for banks that want to enhance efficiency and reduce costs. Automation, Machine Learning, and AI all have an important role to play in this effort and there is a high degree of interest in these technologies amongst banks and other financial institutions. While results to date have been mixed, partially because of a lack of trusted, governed data to fuel such projects, analyst firm McKinsey is predicting a second wave of automation and AI emerging in the next few years, in which machines will do 10 to 25 percent of the work across bank functions, increasing capacity and freeing employees to focus on higher-value tasks and projects. To maximize the potential of this opportunity, banks first need to design new processes that support automated/AI work, and they will need a reliable supply of high-quality, integrated data to sustain them.

The compliance conundrum

One of the key drivers for effective data management in the banking sector is satisfying regulatory and compliance requirements. These regulations make it essential to maintain accurate, up-to-date information with full audit trails and adequate data security protection. Historically, this has led to friction between the need to sufficiently protect and report on data and the desire to use it to streamline operations and customize the customer experience.

That has changed as data management technologies have advanced to include provisions for meeting data protection and privacy standards. Modern Master Data Management and Data Governance platforms combine the delivery of a trusted single view with the assurance of rigorous data governance capabilities that allow banks to achieve full compliance and use their data with confidence. This is accomplished through a combination of features like Automated PII Detection, Automatic Data Masking, Data Sovereignty, Consent Management, and the setting of Retention Policies.
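To make the idea of Automated PII Detection and Data Masking concrete, here is a minimal, hypothetical sketch using simple pattern matching. The patterns and placeholder labels are illustrative assumptions only; production platforms rely on far richer detectors (ML classifiers, dictionaries, checksums) than two regular expressions.

```python
import re

# Hypothetical patterns for two common PII types; real platforms
# combine many detectors, not just regular expressions.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace any detected PII with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

record = "Contact Jane at jane.doe@example.com or +45 12 34 56 78."
print(mask_pii(record))
# → Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```

The same masking function could be applied at ingestion time, so downstream consumers only ever see redacted values unless their role grants access to the originals.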

The time is now

Achieving fully governed, trusted data is no mean feat for a sector that accumulates a tremendous amount of data on a daily basis. It is, however, no longer a nice-to-have, as customers demand more from their financial providers and competitors are upping the ante in terms of convenience, flexibility, and experience. The longer a bank allows its technical data debt to grow, the harder it will be to remain competitive.

As margins shrink and new contenders enter the market, the pressure is on to find new ways of delighting customers and exceeding their expectations. For the vast majority of banks, the answers lie within their already extensive data reserves, and now is the time to tap into them.


How a solid data foundation helps you do more with less

Knowledge is power, and having near real-time intelligence about your customers, products, locations and assets is a given for any organisation that wants to remain competitive and grow. This is particularly true during times of uncertainty and economic pressure, such as pandemics and recessions, when businesses seek not only to manage costs but also to pivot and respond to change. Consider the manufacturing company with multiple locations, product lines and suppliers. Insights into the total cost of production by product line combined with customer intelligence on buying preferences by region could lead such a company to divest or cease production of certain products. 

Little wonder, then, that in a recent EY article it was claimed that 93% of organisations plan to continue increasing their data and analytics spending from 2022 to 2027. In fact, despite the current economic slowdown, Gartner expects global IT spending to climb to $4.6 trillion in 2023, registering a year-over-year increase of 5.1%. According to Gartner analysts, “enterprise IT spending is recession-proof.” This makes sense, as technology – and analytics and business intelligence systems in particular – is a vital enabler of automation, improved digital experiences and reduced complexity. All of which are geared towards reducing costs and boosting the bottom line.

Hang on a minute though! Isn’t this exactly what many businesses have been trying to do for years? Digital transformation is nothing new, and neither is the need to collect and analyse swathes of data in order to fuel that transformation. Organisations have already made significant investments in Data Lakes, Data Warehouses, Business Intelligence tools and Analytics platforms – and the people who administer them. Yet it still seems like something is missing: technology and business leaders are still searching for the insights they need and are prepared to pump in more investment to find them.

But is the answer really spending more money on more tools and data stewards, scientists and engineers to deploy and manage them? Or is there a more fundamental, basic problem that needs addressing? We think there is. We believe that one of the major reasons enterprises are struggling is because they have a fundamental data quality issue, in that they have no way of managing data quality in a consistent and automated fashion. They have the data, and they have the tools to analyse it, but the data itself is siloed, inconsistent, stale, inaccurate… or all of the above. The output of analytics and business intelligence tools can only ever be as strong as the input – and if the data is poor, so the results will be too.

Surely there’s a way of taking poor quality data from different sources and bringing it together in a way that is consistent, accurate and scalable? This has been the promise of Master Data Management (MDM) systems for the past 30 years, but in reality the process still involves a great deal of time and effort from Data Engineers whose time would be better spent elsewhere. This is one of the reasons why 75% of MDM projects still fail: instead of automatically fixing the majority of data quality issues and empowering business users to address the exceptions, human involvement from data and IT specialists is still required most of the time.
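As a toy illustration of the kind of automated merge an MDM pipeline performs, the sketch below matches records from two sources on a normalised key and lets the most recently updated non-null value win. The field names, sample records and survivorship rule are all hypothetical simplifications; real platforms apply fuzzy matching and configurable survivorship rules far beyond this.

```python
from datetime import date

# Hypothetical customer records arriving from two source systems.
records = [
    {"email": "Ann@Corp.com ", "name": "Ann Smith", "phone": None,
     "updated": date(2023, 1, 5)},
    {"email": "ann@corp.com", "name": "A. Smith", "phone": "555-0101",
     "updated": date(2023, 3, 9)},
]

def golden_record(recs):
    """Merge records sharing a normalised email; newest non-null value wins."""
    merged = {}
    for rec in sorted(recs, key=lambda r: r["updated"]):
        key = rec["email"].strip().lower()
        out = merged.setdefault(key, {})
        for field, value in rec.items():
            if value is not None:
                out[field] = value  # later (newer) records overwrite older ones
    return merged

print(golden_record(records))
```

Both source rows collapse into one golden record keyed on the normalised email, keeping the newer name and the only known phone number.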

The problem is not that organisations have not invested enough in managing and analysing their data estate. The issue, until now, has been that the tools available to master that data have not been fit for purpose. Traditional MDM systems can only deliver a single view or Golden Record AFTER the modelling work has been done upfront – by you. Which typically takes months, requires heavy involvement from IT staff and is out of date by the time it’s complete, thus fuelling the disconnect between MDM and business value.

We get it! In order for you to have control over your data, it’s always easier to build the end state and work back from there, right? Yes, in some ways it is, but when you put this into practice, it never works. Or maybe it worked for the first few systems, and then got way too hard as more data came into your systems or you were required to make model changes. This is not unique to MDM; in fact, it is exactly why there has been a new influx of "modern Data Warehouse" vendors. They follow the same principle of not forcing everything to conform to the same model on entry, but rather keep the data in an unmodelled state and build different models on the fly. This was ONLY made possible because of the economic model offered by the cloud, whereby suddenly you could access hundreds of machines and pay at per-second scale.

Modern MDM platforms remove many of the barriers to creating value from data by automating or eliminating manual tasks, and by using advanced technologies like Graph, AI and ML to ingest data from thousands of sources and allow the natural data model to emerge. This means leaving data in its natural state, and projecting it into your Information Models on demand. This also opens the door to having an MDM system that can host your data model in 15, 20 or 25 different shapes. Modern platforms are also built for the Cloud, which means they are easy to install, simple to set up and can scale up or down as you require. Perhaps most importantly, they break the traditional model of technology teams assuming sole responsibility for mastering data by giving business users access to the data and the power to run their own queries, build their own information models and interrogate data in their own ways.
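The idea of leaving data in its natural state and projecting it into different information models on demand can be illustrated with a toy example. The field names and the two target "shapes" below are invented for illustration; real platforms resolve this through schema mappings and graph queries rather than hand-written functions.

```python
# Raw records kept in their natural, unmodelled state.
raw = [
    {"first": "Ann", "last": "Smith", "company": "Corp", "spend": 120},
    {"first": "Bo", "last": "Jensen", "company": "Acme", "spend": 80},
]

# Two different information models projected from the same raw data on demand.
def as_crm_contact(rec):
    """Shape the record the way a CRM expects it."""
    return {"fullName": f"{rec['first']} {rec['last']}", "account": rec["company"]}

def as_finance_row(rec):
    """Shape the same record the way a finance system expects it."""
    return {"customer": rec["last"].upper(), "revenue": rec["spend"]}

crm_view = [as_crm_contact(r) for r in raw]
finance_view = [as_finance_row(r) for r in raw]
print(crm_view[0])      # → {'fullName': 'Ann Smith', 'account': 'Corp'}
print(finance_view[1])  # → {'customer': 'JENSEN', 'revenue': 80}
```

Because the raw records are never forced into a single schema on entry, adding a third or fourth shape later is just another projection, not a remodelling exercise.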

In many ways, the new breed of MDM platform doesn’t look much like MDM at all. We believe that MDM as we know it will cease to exist, and we are entering a new category – that of Augmented Data Management – which seeks to simplify and speed up the process of preparing data for insights. It will still have many of the inner workings and expectations of MDM, but will capitalize on the influx of modern technology. With so much riding on the output of analytics and intelligence initiatives, it is crucial that a fully governed, reliable, compliant and accurate supply of data is made available to the whole organisation.

So before building the case for increasing your investment in data and analytics, it is worth considering that it might not be more investment that is needed, but a change in focus. A solid data foundation – one that doesn’t discriminate between different types and sources of data – should always be the first priority when it comes to providing actionable intelligence and commercial insights to the business. Without it, analytics and BI tools will never deliver the outcomes demanded by organisations today.