Codemotion 2016, Amsterdam!

I am so glad I am here! The venue is amazing and the people are awesome!


I just finished my talk about Microservices and NodeJS. As usual I had to cut it short, as there was not enough time to go through even half of the slides, so I concentrated on the code 🙂 You can find the latest version of the slides on Slideshare.

All the code is on Github and you can see from the commits how long it took: basically less than an afternoon, starting from zero knowledge of NodeJS (okay, apart from a couple of bugs I had to fix for the conference!). So please try it yourself!


Microservices Antipatterns [2]

Looking forward to the upcoming Geecon conference in Prague, I am trying to identify new antipatterns, and I have to say that’s not that difficult, having been working with microservices here at Workshare for almost three years. The more we use them, the more dead ends we find 🙂 and it’s a natural Darwinian process I guess: regardless of all you can find on the internet these days, sometimes it’s only through your own experience that you learn. And, as Uncle Bob says, “you learn more from your mistakes than from your successes” :). So let’s have a look at these new antipatterns!

Monolith Frontend

You have a single page application or a detached website that talks to your beautiful microservices, you have to deploy it in full for every upgrade, and you are using vertical “feature” teams. Congrats, you are living in the era of the monolithic frontend!

In fact you carefully structured your teams around different small sets of microservices, each one deployable individually, empowering your teams to deploy quickly and independently. But when you reach the frontend, you go back to the old model of having one big fat application, with merges all over the place and a single deployment. This basically wipes out all the benefits, team-wise, of a microservice architecture.

The solution is of course splitting the app into small pieces, each deployable independently, or moving the entire responsibility for the frontend to a separate team. Please note that neither of these is as difficult as it seems.

Early mornings

You have a scheduled rhythm for deployments (i.e. every week) and you deploy your services in the early morning, typically around 6am, in order to avoid any disturbance to production across the world. You have a page, created manually, that lists all the new services to be deployed and who is responsible for them.

The issue here is the lack of a critical component called continuous deployment, which would allow you, with the push of a button, to deploy all your software to production. As you do not have a fully automated procedure to deploy to production, you are probably also lacking other critical parts such as redundancy or high availability of your services.

The solution is to take your time and get things in order 🙂 Well-crafted microservices allow you to deploy very fast and without particular hassle: you have to make sure first that your code supports that, and then that your devops are fully on board with providing the critical parts required, usually in the form of automation scripts with any decent framework.

Microservices Antipatterns

Last Friday I was a speaker at the Geecon Microservices conference in Sopot. I was planning to talk about the whole thing, mentioning of course our work at Workshare on msnos, and I had a long presentation, but I had only a 30-minute slot. Also, I went on stage after a shitload of good speeches on the topic, so I decided to talk about a small niche not fully exploited at the moment, microservices antipatterns: this is a short recap.

Ah, why do I know about this stuff? Because at Workshare we have a black monolith, and when I joined in January 2013 I immediately started pushing for microservices 🙂

Disneyworld

When you are in Disneyworld everything smells nice, cosy and lovely. There are no healthchecks, no monitoring and no metrics, so you assume everything is well in your systems, all the time.

It may seem obvious to have monitoring on any system in production, but some people think that because the services are small and simple nothing will go wrong. Of course this is silly, because we know that if something can go wrong, it will! Also, small does not imply simple, as the complexity of any distributed architecture is orders of magnitude higher compared to a standard monolithic system.

For that reason you need each service to be able to produce its own healthcheck and metrics, possibly through an HTTP endpoint, and to be surrounded by some external monitoring. It won’t hurt to also add some smoke testing for each environment, possibly through tests created by your QA people. Implementation-wise, Dropwizard is a valid option for Java services, and we have our own small open source implementation for Ruby.
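
To give an idea of how little it takes, here is a minimal sketch of such a self-healthcheck endpoint in NodeJS with Express; the check names and helper functions are illustrative assumptions of mine, not part of any library mentioned above.

```typescript
// Minimal self-healthcheck endpoint (a sketch, all names illustrative).
import express from "express";

const app = express();

// Each check probes one dependency; replace these stubs with real probes.
async function checkDatabase(): Promise<boolean> {
  try { /* e.g. run a SELECT 1 against your database */ return true; }
  catch { return false; }
}

async function checkQueue(): Promise<boolean> {
  try { /* e.g. open and close a channel on your broker */ return true; }
  catch { return false; }
}

app.get("/healthcheck", async (_req, res) => {
  const checks = {
    database: await checkDatabase(),
    queue: await checkQueue(),
  };
  const healthy = Object.values(checks).every(Boolean);
  // 200 when all is well, 503 so external monitoring can alert on failures.
  res.status(healthy ? 200 : 503).json({ healthy, checks });
});

app.listen(3000);
```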

Monolith database

In this scenario many of your microservices share the same database: they expose nice, distinct REST endpoints, but all the data ends up in the same bin.

The issues with this are multiple. First, all your services are coupled to the same schema, and so are your models (if you have any), so a change in the database may require you to propagate a change in your models. Furthermore, your services won’t be able to be released independently, as a database migration required by service A may/will require a change in services X/Y/Z: one of the big advantages of this architecture is out of the window.

You need each service to use its own database, and when necessary services will talk to each other through your APIs. You should design your external API with this constraint in mind, moving on from the big SQL joins and welcoming asynchronous completion of tasks, a user interface that progressively updates itself, and APIs that return links to other resources rather than embedding them.
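
As a sketch of that last point, a document service might return a link to the owner living in a separate users service, instead of joining the data in from a shared database. Everything below (routes, URLs, data) is made up for illustration.

```typescript
// Sketch: link to a resource owned by another service instead of embedding it.
import express from "express";

const app = express();

// Stand-in for this service's own private database.
const documents = new Map([
  ["42", { id: "42", title: "Quarterly report", ownerId: "7" }],
]);

app.get("/documents/:id", (req, res) => {
  const doc = documents.get(req.params.id);
  if (!doc) {
    res.sendStatus(404);
    return;
  }
  res.json({
    id: doc.id,
    title: doc.title,
    // The caller follows this link to the users service if it needs the owner.
    _links: { owner: { href: `https://users.example.com/users/${doc.ownerId}` } },
  });
});

app.listen(3000);
```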

Unknown caller

Your microservices call each other using their endpoints, but there’s no correlation between such calls, so each call is a new one, completely unrelated to its source.

When this happens there’s no easy way to track what caused the failure of a call that fails somewhere in the chain. As there’s no common identification shared between such calls, you usually end up in an endless marathon of log lurking in order to understand the reason for the failure.

The solution lies in peer collaboration: each server must inject a call id if missing, each server must propagate the call id if present, and each server should log each call including the id. This allows the calls to be correlated, thus providing a clear chain of invocations between the services.
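
A minimal sketch of that collaboration as Express middleware could look like this; the X-Call-Id header name is my own choice for the example, not a standard.

```typescript
// Sketch of call-id injection/propagation middleware (header name assumed).
import express from "express";
import { randomUUID } from "crypto";

const app = express();

app.use((req, res, next) => {
  // Inject a call id if missing, keep it if the caller already sent one.
  const callId = req.header("X-Call-Id") ?? randomUUID();
  res.setHeader("X-Call-Id", callId);
  // Log every call including the id, so failures can be traced along the chain.
  console.log(`[${callId}] ${req.method} ${req.url}`);
  (req as any).callId = callId; // make it available to outgoing calls
  next();
});

// Any call this service makes to another one must forward the same header, e.g.:
// fetch(url, { headers: { "X-Call-Id": (req as any).callId } })

app.listen(3000);
```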

Hardcoded hell

All the addresses (endpoints) of your services are hardcoded somewhere, sometimes directly in the code.

Your microservices know about each other directly, so every time you need to add a new microservice you have to crack open your code or, if you are in a slightly better situation, your configuration or deployment scripts. If you are also experiencing the “All your machines are belong to us” antipattern, you may be using some form of DNS or naming trickery at the machine level.

You can easily build your own simple discovery mechanism, with a replicated registry which hosts service information, or you can introduce a discovery mechanism like Eureka, ZooKeeper, Consul, or msnos (yeah, shameless plug!).
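
Just to show how little is needed to get started, here is a toy version of such a registry. A real one would replicate its data and expire services that stop heartbeating; every route and name here is illustrative.

```typescript
// Toy service registry (sketch): services register and look each other up.
import express from "express";

const app = express();
app.use(express.json());

// name -> list of endpoints; a real registry replicates this map
// across nodes and drops entries that stop sending heartbeats.
const registry = new Map<string, string[]>();

app.post("/registry/:name", (req, res) => {
  const endpoints = registry.get(req.params.name) ?? [];
  endpoints.push(req.body.endpoint);
  registry.set(req.params.name, endpoints);
  res.sendStatus(204);
});

app.get("/registry/:name", (req, res) => {
  const endpoints = registry.get(req.params.name);
  endpoints ? res.json(endpoints) : res.sendStatus(404);
});

app.listen(8500);
```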

Synchronous world

Every call in your systems is synchronous, and you wait for things to actually happen before returning to the caller: you wait for the database to store the data, for another service to perform an operation, and so on.

The issue with this setup is that your services will be very slow, every call can potentially hang forever, and with just one malfunctioning service your whole system will be compromised.

You need to use 201 with a Location header as much as you can for creation operations (or 202 Accepted if you fancy), using queues on top of your receivers and implementing asynchronous protocols from the start. It’s more complicated of course, but in terms of performance it pays off big time.
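
Here is a rough sketch of the 202 flavour of that pattern: the service accepts the request, queues the work, and points the caller at a status resource to poll. The routes and the in-memory “queue” are stand-ins I made up for the example.

```typescript
// Sketch: asynchronous creation with 202 Accepted + Location header.
import express from "express";
import { randomUUID } from "crypto";

const app = express();
app.use(express.json());

// In-memory stand-in for a real job queue and store.
const jobs = new Map<string, { status: "pending" | "done" }>();

app.post("/documents", (_req, res) => {
  const id = randomUUID();
  jobs.set(id, { status: "pending" });
  // Real code would push to a queue; here we just complete the job later.
  setTimeout(() => jobs.set(id, { status: "done" }), 1000);
  // Return immediately: the caller polls the Location URL for completion.
  res.status(202).location(`/documents/${id}/status`).json({ id });
});

app.get("/documents/:id/status", (req, res) => {
  const job = jobs.get(req.params.id);
  job ? res.json(job) : res.sendStatus(404);
});

app.listen(3000);
```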

Babel tower

You have a lot of microservices and they use all sorts of different lingo to talk to each other (e.g. REST, SOAP, IIOP, RMIP…)

The integration of a new microservice requires a lot of work, the number of adapters increases exponentially, and you always consider the costs of integration before deciding to create a new microservice, thus ending up with a small number of big services.

You need of course to standardize your protocols, and introduce boundary objects only where strictly required. A very successful approach consists in using REST as a synchronous protocol (point-to-point) and a lightweight messaging protocol for fully asynchronous communications, such as a message queue like RabbitMQ or a pub/sub like Redis.
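
For the asynchronous side, publishing an event to RabbitMQ can be as small as this sketch using the amqplib client; the queue name and event shape are assumptions of mine.

```typescript
// Sketch: publish an event to RabbitMQ via amqplib (queue name made up).
import amqp from "amqplib";

async function publishEvent(event: object): Promise<void> {
  const conn = await amqp.connect("amqp://localhost");
  const channel = await conn.createChannel();
  await channel.assertQueue("document-events", { durable: true });
  channel.sendToQueue("document-events", Buffer.from(JSON.stringify(event)));
  await channel.close();
  await conn.close();
}

publishEvent({ type: "document.created", id: "42" }).catch(console.error);
```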

All your machines are belong to us

(a note about this one: I consider it an antipattern, but I may be very wrong, so please take it with caution)

A new virtual machine in the cloud is spun up whenever a new service is provisioned, and your architecture relies on this for locating or monitoring the services.

When this happens you basically need a lot of money to spend, and you are allowed not to care much about things like your next AWS bill. Your architecture is now strictly tied to the money you have: if for any reason you have to shrink down, you won’t be able to do so in a timely fashion.

Make sure that your architecture can scale without relying on metal/virtual scaling only: consider solutions based on containers like Docker, CoreOS, Mesos, application servers, or plain old bare-bones deployments. If your services are self-contained you can always run them on any machine that has the correct platform installed.