
Tuesday, May 3, 2016

Microservices: just a trend or real evolution?


Today I had a discussion with a colleague about the coding style changes that microservice development requires. In my opinion we should revise our practices and simplify them, because we were socialized on code reusability in monoliths, and these new kids on the block play by different rules. At the end of the discussion he asked me a question:
Is the microservice architecture a trend or an evolution?

Microservice benefits
Lots of pages list why we need microservices and write long, long lists about their magical nature, but I think it comes down to two main reasons: scalability and development time. In some blog posts the list has more items, like:
  • resilience (no, it just restarts faster...)
  • easy to enhance (that's the same thing I said about development time)
  • ease of deployment (Bahhh. Releasing and integrating 50-100 microservices isn't easier than releasing a single monolith, just different...)
  • Single Responsibility (yeah: serve the business logic, provide proper logging and incident handling, integrate with other services, deliver KPI metrics, be resilient, etc... – hahaha....)
  • Low impact on other services (due to the SRP in the previous point, I have doubts about how low the impact really is when others depend on it...)
  • etc...



Let's skip the bullshit and focus on the first two points! Scalability is a key point in the era of IoT, when billions of devices are connected to the Internet and the load on your system can jump quickly if you get a good review on TechCrunch or a similar site. With an immutable microservice architecture you can easily spin up more instances to serve the increased load and gain new customers, so scalability is money...
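
To make the "spin up more instances" point concrete, here is a minimal sketch in Python (my illustration, not code from any real service; the "quote" endpoint, the amount parameter and the 2% fee are all made up): because the service keeps no per-instance state, the tenth copy behind a load balancer behaves exactly like the first, and starting more of them under load is trivial.

    # Hypothetical stateless price-quote service: everything needed to answer
    # a request arrives in the request itself, so identical replicas are
    # interchangeable behind a load balancer.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    class QuoteHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # e.g. GET /quote?amount=100
            params = parse_qs(urlparse(self.path).query)
            amount = float(params.get("amount", ["0"])[0])
            body = json.dumps({"amount": amount, "fee": round(amount * 0.02, 2)})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode("utf-8"))

    if __name__ == "__main__":
        # Each container or VM runs one of these; scaling out is just starting more.
        HTTPServer(("0.0.0.0", 8080), QuoteHandler).serve_forever()
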
Development time is the key factor in going live faster with new features, and a monolith development culture can slow this process down dramatically. Microsoft Windows is the best example of this habit. So we end up at money again, of course. Developers are expensive, and failures and outages are even more expensive, so we need a lightweight, easy-to-replace solution to our problems in order to move quickly with the ever-changing business environment.

From Dusk Till Dawn

Moore's law has been with us since 1975, and the whole industry was built on the belief that computational and memory capacity would double every two years. There was no pressure on software developers to optimize their code, because the semiconductor industry delivered the required extra juice to their products, but this golden era will end soon. Intel has already announced the end of its tick-tock model, and in the past few years raw computational power hasn't evolved quickly. They tried some tricks with multiple cores, hyper-threading and GPU integration, but the software industry moved toward cloud computing to sustain the scalability of its products and save money through architecture simplification.

Scaling up or out

The modern application needs scalability. We are living in the era of billions of mobile devices, and IoT is already knocking on our door. Scaling up is the easiest way to add extra capacity to our systems: a newer CPU with extra cores plus lower power and storage costs may look cheaper at first, but the hardware is getting more and more sophisticated, and finding an experienced engineer with solid concurrency knowledge is not easy, and sometimes very expensive compared with the potential performance gain. If you scale out your application instead, you have to redesign the architecture carefully to squeeze out every bit of extra capacity, and you have to give up some principles, safety nets or design ideas if you want to turn your monolith into a scalable system. I worked on several projects where the customer decided to build an expensive application server cluster to serve the increased load, introduced a lot of synchronization and locking practices to keep the data consistent even under high load, and suffered badly from Amdahl's law due to the locking of shared resources.
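
To put a number on that Amdahl pain (the figures below are illustrative, not measurements from that project): even a small fraction of work serialized behind locks caps the speedup of the whole cluster, no matter how much hardware you buy.

    # Back-of-the-envelope Amdahl's law: speedup = 1 / ((1 - p) + p / n),
    # where p is the parallelizable fraction and n the number of nodes.
    def amdahl_speedup(p: float, n: int) -> float:
        return 1.0 / ((1.0 - p) + p / n)

    if __name__ == "__main__":
        p = 0.95  # assume 5% of the work is serialized behind shared locks
        for n in (2, 4, 8, 16, 64, 1024):
            print(f"{n:5d} nodes -> {amdahl_speedup(p, n):6.2f}x speedup")
        # The limit is 1 / (1 - p) = 20x, however large the cluster grows.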

The Scale Cube

The complexity and scalability restrictions of the traditional architecture forced developers to find better and cheaper ways to increase performance, so they created non-ACID databases for better persistence scalability, MapReduce and then stream processing for real-time event processing and analytics, and microservices for request-response workloads.
Here's the good, old microservice scale cube:

[Image: the Scale Cube]

With microservices we are giving a simple answer to a complex problem: scalability and maintainability. Thanks to their simplicity we can easily decide to replace an existing solution with a different one, and a stateless service can be multiplied when increased demand needs more processing capacity. Amdahl's law can effectively be swapped for Gustafson's law if we leave behind our sophisticated transactional, locking-based synchronization for a simpler architecture.
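
A back-of-the-envelope way to see the difference (again my own illustrative numbers): Amdahl's law assumes a fixed workload being split across nodes, while Gustafson's law lets the workload grow with capacity, which is exactly what happens when every new stateless replica simply absorbs more incoming traffic.

    # Amdahl (fixed workload) vs. Gustafson (workload grows with capacity).
    # s is the serial fraction, n the number of service instances.
    def amdahl(s: float, n: int) -> float:
        return 1.0 / (s + (1.0 - s) / n)

    def gustafson(s: float, n: int) -> float:
        return n - s * (n - 1)

    if __name__ == "__main__":
        s = 0.05  # assume 5% of the time goes to non-parallel work
        for n in (4, 16, 64, 256):
            print(f"{n:4d} instances: Amdahl {amdahl(s, n):6.2f}x, "
                  f"Gustafson {gustafson(s, n):7.2f}x")

In other words: as long as each instance owns its own data and doesn't contend for a shared lock, adding the 257th instance still buys you roughly one more instance of throughput.
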


Final thoughts
As we can see, microservices are the result of the evolution of software development and a clear application of the Single Responsibility Principle at a higher level to deliver scalability. This is definitely not the end of the evolution, but it is more than a simple trend, and we should reinvent our development, testing, integration and delivery practices, because the majority of them were invented in the monolith era.
