CluedIn is not just another Data Management platform. Learn how we are re-thinking the data world to meet a modern landscape.
We analysed the data space and looked at it from another angle, starting with where the biggest challenges in the lifetime of data lie and homing in on them. We didn't find that people have a problem building nice visualisations, and we didn't find that people had an issue with scale. We found that companies could not take the data sitting across their business and get value out of it, because of all the work and infrastructure that has to be in place before data can truly be relied upon. We then analysed what companies were doing today with the technologies that were common, and realised that the data industry was in a closed cycle: the same technologies were being bent to modern use cases, and the bending was simply not meeting the demands.
A modern data fabric starts with a modern data layer
There are some foundational elements of data that need to be part of the fabric, not an afterthought. We realised that speed, elastic scale, security, deployment and robustness are expected, not desired, so we started with a sound foundation of Docker containers and Kubernetes. What was obvious to us, but not necessarily seen in the industry, was that many types of database are available to us today, yet most companies still use only one or two to attack most problems. One database will never give you flexible access to data the way you want it: at speed, at scale, and in a maintainable way. Because of this, CluedIn to date uses five different database types to persist and process your data.
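The idea of using several database types at once, often called polyglot persistence, can be illustrated with a minimal sketch. The store categories and class below are hypothetical assumptions for illustration, not CluedIn's actual architecture or API:

```python
# A minimal, hypothetical sketch of polyglot persistence: one logical
# record is written to several stores, and reads are routed to the
# store that best fits the access pattern. Store names are illustrative.

class PolyglotRouter:
    """Routes each access pattern to the store best suited for it."""

    def __init__(self):
        # In a real system these would be a graph database, a search
        # index, a document store, etc. Plain dicts stand in here.
        self.stores = {
            "graph": {},      # relationship traversal
            "search": {},     # full-text and fuzzy lookup
            "document": {},   # flexible, schema-light payloads
        }

    def write(self, record_id, record):
        # Fan the write out to every store so each query type stays fast.
        for store in self.stores.values():
            store[record_id] = record

    def read(self, record_id, access_pattern):
        # Pick the store by how the data will be used, not where it "lives".
        return self.stores[access_pattern][record_id]

router = PolyglotRouter()
router.write("c-1", {"name": "Acme", "partners": ["c-2"]})
print(router.read("c-1", "graph")["partners"])  # ['c-2']
```

The trade-off this sketch makes visible is that writes cost more (they fan out), which is the price paid so that no single engine has to serve every kind of query.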
Address the parts of the process that just don't scale
Our team looked at the journey of data with a critical eye. We watched enterprises struggle through tedious mapping processes and cumbersome architecture diagrams that would never scale. We revisited the parts of the process that did not work, would never work, and were behind so many failed data projects. Out the other end came a new data integration pattern we call "Eventual Connectivity", which removes the need to map data manually. The best part about this pattern is that it is easy to explain and yields far more scalable results than the manual approach.
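One way to picture the pattern: instead of mapping source schemas to each other up front, each record publishes the identifiers it knows about, and records merge whenever their identifiers overlap. The sketch below is a hypothetical illustration of that idea (the function and identifier formats are assumptions, not CluedIn's API):

```python
# A hypothetical sketch of "eventual connectivity": records carry sets
# of identifiers, and any two records sharing an identifier are merged
# into one entity, with no manual schema mapping. Union-find does the work.

from collections import defaultdict

def connect(records):
    """records: list of identifier sets. Returns groups of record indices."""
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # All identifiers within one record belong to the same entity.
    for rec in records:
        ids = list(rec)
        for i in ids:
            parent.setdefault(i, i)
        for other in ids[1:]:
            union(ids[0], other)

    # Group records by the root of any one of their identifiers.
    groups = defaultdict(set)
    for rec_index, rec in enumerate(records):
        groups[find(next(iter(rec)))].add(rec_index)
    return list(groups.values())

# Three source systems describe the same customer with different keys:
crm     = {"email:a@x.com", "crm:42"}
billing = {"email:a@x.com", "invoice:9"}
erp     = {"erp:7"}
print(connect([crm, billing, erp]))  # [{0, 1}, {2}]
```

Note how no one had to declare that the CRM's key column maps to the billing system's key column; the shared email identifier connects them, eventually, as data arrives.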
Make it part of the flow, not a sidecar
When we analysed the vendors solving data challenges today, it was obvious that to build a data fabric you would need to buy or use many different platforms and stitch them together. There are advantages in this approach, but also a fundamental disadvantage: you have to invent the process that moves data between these different products. This is hard, and time after time we saw projects fail for lack of that process. CluedIn builds this process into its platform and streamlines the entire flow of data from source to value.
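The contrast can be sketched in a few lines: when ingestion, processing and delivery live in one flow, the hand-offs between stages are function calls rather than bespoke integration projects. The stage names below are illustrative assumptions, not CluedIn's actual pipeline:

```python
# A hypothetical source-to-value flow as one pipeline. Each stage hands
# directly to the next; there is no glue code between separate products.

def ingest(source_rows):
    # Normalise raw rows into a common record shape.
    return [{"name": r.strip().title()} for r in source_rows]

def clean(records):
    # Drop obviously empty records.
    return [r for r in records if r["name"]]

def publish(records):
    # Hand the governed records to a consumer (BI tool, ML feature store...).
    return {"delivered": len(records)}

def pipeline(source_rows):
    # Source to value as one flow.
    return publish(clean(ingest(source_rows)))

print(pipeline(["  ada lovelace ", "", "alan turing"]))  # {'delivered': 2}
```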
Always focus on the core pillars
CluedIn is 100% focused on providing solutions for the core data needs, not the needs of specific use cases. For example, Machine Learning requires platforms to deploy and monitor models; Business Intelligence use cases do not, so that will never be something CluedIn does in the data lifecycle. Cleaning data, however, is definitely part of both the Machine Learning and the Business Intelligence use case, and hence it will always be part of the core platform. As new use cases evolve, we will continue to analyse them and bring the common pillars they surface into the fabric.
The main value that comes from this is focus. It is vital that platforms like CluedIn concentrate on the parts of the chain they do well and aim to be best in class at them. Although it is often tempting to dive into other areas, that is not, and will never be, the focus at CluedIn.