How the microservice vs. monolith debate became meaningless

E Pluribus Unum

Andras Gerlits
ITNEXT


The problem of distributing a system is the problem of sharing data between multiple computers without having to think too hard about the implications of doing so. Databases do a lot of heavy lifting to safeguard data-sharing between processes by providing consistency guarantees. Even though these guarantees are often poorly understood by developers, they are what spare developers from writing much more complicated software when they rely on a single database. Databases keep our code simple and are therefore immensely valuable. Much of the literature around microservices deals with demarcating boundaries between services for exactly this reason.
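To make this concrete, here is a minimal sketch (not from the article, using Python's built-in sqlite3 and an illustrative accounts table) of the kind of guarantee a single database gives for free: a transfer either happens entirely or not at all, with no extra application code to guard against partial writes.

```python
import sqlite3

# Illustrative schema: the table and names are assumptions for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move `amount` atomically; the database enforces all-or-nothing."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE name = ?",
                (amount, src))
            # A business rule that can fail mid-transaction.
            (balance,) = conn.execute(
                "SELECT balance FROM accounts WHERE name = ?", (src,)).fetchone()
            if balance < 0:
                raise ValueError("insufficient funds")
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, dst))
        return True
    except ValueError:
        return False

assert transfer(conn, "alice", "bob", 60) is True
assert transfer(conn, "alice", "bob", 60) is False  # rolled back, no partial write
total = conn.execute("SELECT SUM(balance) FROM accounts").fetchone()[0]
assert total == 100  # the invariant held without any application-level cleanup
```

The failed second transfer leaves no trace: the database, not the developer, undoes the half-finished work. Recreating that behaviour across several independent databases is exactly the hard part.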

So can we get away with not sharing any information between components?

Microservices are defined by their ability to serve the same client or API. This already implies some shared knowledge about the end-user between the constituent parts, but there is more. Since each component serves a specific purpose, and since these components must work in a predetermined order to serve the requests mentioned above, there is a process connecting them that runs through the whole system. How to manage this process is what the orchestration vs. choreography debate, and the various distributed-workflow tools, are all about. Again, we’re not here to take sides on these; we’ll only note that they demonstrate that the hard problems discussed around microservices are about managing data that affects multiple components, and that there is no way to eliminate this problem entirely.

Or rather, there wasn’t one until now.

I Was Told There Would Be Cake

If we are going to deliver on the technology we’re promising, we need to move the guarantees into some kind of shared data platform. To keep the advantages currently offered by microservices, we must make sure that this platform is especially resilient and that it does not require any of the databases to be available in order to keep working. To keep the advantages of a monolith, we must make sure that it provides consistency guarantees for all the information shared through it.

This is exactly what we’ve built.

Teams sharing their data via a set of database tables

Since the problems in microservices all reduce to the problem of sharing state consistently, once we can do that while still allowing many databases to participate in the system, the trade-offs discussed above lose their meaning. Any system that exchanges information between its constituent parts must define the structure of the information being shared, and since our system works through each microservice’s local database instance, participating in the whole imposes no new requirements on the services.

In fact, we can simplify the developer’s workflow immensely: by presenting the information created by other participating services in the developer’s familiar local database, we let them ignore the fact that those other services exist at all.
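As a hypothetical sketch of that developer experience (the table, column names, and services here are illustrative assumptions, not the product’s actual schema or API), a shipping service would read an order committed by a different team’s service with plain local SQL, never calling the other service:

```python
import sqlite3

# The shipping service's own local database. In the described setup, the
# federation layer (not shown, and assumed here) would replicate the shared
# "orders" table into it transparently.
local_db = sqlite3.connect(":memory:")
local_db.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, status TEXT)")

# Pretend the platform has already delivered a row that the order service
# (owned by another team) committed in its own database.
local_db.execute("INSERT INTO orders VALUES (1, 'alice', 'PAID')")
local_db.commit()

# The shipping service's code is ordinary local SQL: no RPC, no HTTP client,
# no retry logic against a remote service.
row = local_db.execute(
    "SELECT customer, status FROM orders WHERE id = 1").fetchone()
assert row == ("alice", "PAID")
```

The point of the sketch is what is absent: there is no service-to-service call, so the consuming team writes the same code they would write against a monolith’s database.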

To put it differently, we solved consistent cache invalidation and thereby made the debate moot.

We explain the “How?” in three articles.

The first shows how easily one can create an environment that transparently federates data between SQL instances.

The second provides a general overview of the concepts involved and how they come together.

The third goes into more technical detail about how these promises can be maintained.

We also have a research paper for those who are into that sort of thing:

https://www.researchgate.net/publication/359578461_Continuous_Integration_of_Data_Histories_into_Consistent_Namespaces

Let us know at info@omniledger.io if you would like to see a demo of the software in action. Visit our website at omniledger.io and tell us how we can help you realise your vision.
