Why DataOps and DevOps will converge
There is no place for user interfaces in the data world. Quite a bold statement to make, but hear us out. If DevOps, deployment and the SDLC have taught us anything, it is that a system should be stateless: I should always be able to tear a system down and redeploy it without losing a minute of sleep at night. This is why platforms like the Data Hub, Data Lake and so on will never work in real production environments unless we have a way to version control these environments and deploy them automatically.
Let’s start with an example. You are building business rules to enforce data policies over the flow of data within your business. Thanks to your platform’s nice user interface, you have a rule builder that lets you specify simple “if this then that” rules. You add a few rules and save. Because these systems are distributed, you now need to make sure this change has persisted to all the other environments, and then you “hope” that the new rules don’t put too much strain on the system. This “hope” is exactly why DevOps exists. No one wants to work with hope anymore; we want predictable, stable, repeatable and testable deployments.
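To make the contrast concrete, here is a minimal sketch of what those same “if this then that” rules look like when they live in a version-controlled file instead of hidden UI state. All names here (`Rule`, `apply_rules`, the masking rule) are illustrative assumptions, not any particular platform’s API:

```python
# Hypothetical sketch: "if this then that" data policy rules expressed as
# code, so they can be reviewed, diffed, tested and deployed like anything
# else in the pipeline. All names are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # the "if this" part
    action: Callable[[dict], dict]      # the "then that" part

# Rules live in a file under version control, not behind a save button.
RULES = [
    Rule(
        name="mask-email-outside-prod",
        condition=lambda record: record.get("env") != "prod",
        action=lambda record: {**record, "email": "***"},
    ),
]

def apply_rules(record: dict) -> dict:
    """Run every matching rule over a record, in order."""
    for rule in RULES:
        if rule.condition(record):
            record = rule.action(record)
    return record
```

Because the rules are plain code, a change to them goes through a pull request and a test suite before it reaches any environment, rather than being saved once and “hoped” into the others.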
DataOps will require the same thing, and it is important that we establish this straight away instead of finding out the hard way. Imagine for a moment that a data policy was wrong in production. To fix it, you would want to be alerted to the problem, immediately deploy a previous version of the application, and then confirm there was no data to clean up (or at least that the application took care of the clean-up for you).
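The rollback described above only works if policies are kept as immutable, tagged versions rather than mutable UI state. The following is a minimal sketch of that idea, assuming a hypothetical `PolicyStore` (none of these names come from a real product):

```python
# Hypothetical sketch: policies are published as immutable versions, and
# "fixing production" is just deploying a known-good older version.
class PolicyStore:
    def __init__(self):
        self._versions = {}   # version tag -> policy document
        self._active = None   # currently deployed version tag

    def publish(self, tag: str, policy: dict) -> None:
        """Record a new policy version; published versions are never edited."""
        self._versions[tag] = policy

    def deploy(self, tag: str) -> None:
        """Make a published version the active one."""
        self._active = tag

    def rollback_to(self, tag: str) -> None:
        # A rollback is not a special operation: it is just another deploy
        # of an older, already-tested version.
        self.deploy(tag)

    @property
    def active_policy(self) -> dict:
        return self._versions[self._active]
```

The design choice worth noting is that `rollback_to` is literally `deploy`: when every version is preserved and deployment is repeatable, recovering from a bad policy is predictable rather than hopeful.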