Microservices Antipatterns

Last Friday I was a speaker at the Geecon Microservices conference in Sopot. I was planning to talk about the whole thing, mentioning of course our work at Workshare on msnos, and I had a long presentation but only a 30-minute slot. Also, I went on stage after a shitload of good speeches on the topic, so I decided to talk about a small niche not fully exploited at the moment, microservices antipatterns: this is a short recap.

Ah, why do I know about this stuff? Because at Workshare we have a black monolith, and when I joined in January 2013 I immediately started pushing for microservices 🙂

Disneyworld

When we are in Disneyworld everything smells nice, cosy and lovely. Also, there are no healthchecks, no monitoring and no metrics, so you assume everything is well in your systems, all the time.

It may seem obvious to have monitoring on any system in production, but some people think that because the services are small and simple, nothing will go wrong. Of course this is silly, because we know that if something can go wrong, it will! Also, small does not imply simple, as the complexity of any distributed architecture is orders of magnitude higher compared to a standard monolithic system.

For that reason you need each service to be able to produce a self-healthcheck and metrics, possibly through an HTTP endpoint, and to be surrounded by some external monitoring. It won't hurt to also add some smoke tests for each environment, possibly created by your QA people. Implementation-wise, Dropwizard is a valid option for Java services, and we have our own small open-source implementation for Ruby.
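To make this concrete, here is a minimal Dropwizard healthcheck sketch; the DatabaseClient interface is a hypothetical stand-in for whatever dependency your service needs to verify:

```java
import com.codahale.metrics.health.HealthCheck;

// Hypothetical client for the service's own database
interface DatabaseClient {
    boolean ping();
}

// A self-healthcheck: Dropwizard exposes the result over HTTP
// on the admin port at /healthcheck
public class DatabaseHealthCheck extends HealthCheck {

    private final DatabaseClient database;

    public DatabaseHealthCheck(DatabaseClient database) {
        this.database = database;
    }

    @Override
    protected Result check() throws Exception {
        return database.ping()
                ? Result.healthy()
                : Result.unhealthy("cannot reach the database");
    }
}
```

You register it in your application's run method with environment.healthChecks().register("database", new DatabaseHealthCheck(db)), and your external monitoring can then poll the endpoint.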

Monolith database

In this scenario many of your microservices share the same database: they expose nice, distinct REST endpoints, but all the data ends up in the same bin.

The issues with this are multiple. First, all your services are coupled to the same schema, and so are your models (if you have any): a change in the database may require you to propagate changes across your models. Furthermore, your services won't be able to be released independently, as a database migration required by service A may (will!) require a change in services X/Y/Z: one of the big advantages of this architecture is out of the window.

You need each service to use its own database, and when necessary they will talk to each other through your APIs. You should design your external APIs with this constraint in mind, moving on from the big SQL joins and welcoming asynchronous completion of tasks, a user interface that progressively updates itself, and APIs that return links to other resources rather than embedding them.
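As a sketch of what "links instead of embedding" means, here is a hypothetical representation returned by an orders service; the names (OrderRepresentation, customerUrl) are mine, not a prescribed format:

```java
// Serialized as JSON this might look like:
//   { "id": "57", "status": "shipped",
//     "customerUrl": "https://users.example.com/users/42" }
public class OrderRepresentation {
    public String id;
    public String status;
    // a link to the resource owned by the users service, fetched
    // separately by the client when (and if) it is actually needed
    public String customerUrl;
}
```

No join happens in the orders service: the client follows the link if it wants the customer details, and each service keeps full ownership of its own data.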

Unknown caller

Your microservices call each other through their endpoints, but there's no correlation between such calls, so each call is a new one, completely unrelated to the one that originated it.

When this happens there's no easy way to track what caused the failure of a call executed on a service somewhere down the chain. As there's no common identification between such calls, you usually end up in an endless marathon of log lurking in order to understand the reason for the failure.

The solution lies in peer collaboration: each server must inject a call id if missing, each server must propagate the call id if present, and each server should log every call, including the id. This allows the calls to be correlated together, thus providing a clear chain of invocations between the services.
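Here is a sketch of the "inject if missing, propagate if present" rule as a plain servlet filter; the header name X-Call-Id is an arbitrary choice, use whatever your services agree on:

```java
import java.io.IOException;
import java.util.UUID;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

public class CallIdFilter implements Filter {

    private static final String CALL_ID = "X-Call-Id";

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;

        String callId = request.getHeader(CALL_ID);
        if (callId == null) {
            callId = UUID.randomUUID().toString(); // inject: we are the first hop
        }

        // make the id available to outgoing HTTP clients and to the logs
        req.setAttribute(CALL_ID, callId);
        System.out.printf("call %s -> %s%n", callId, request.getRequestURI());

        chain.doFilter(req, res);
    }

    @Override public void init(FilterConfig config) {}
    @Override public void destroy() {}
}
```

Outgoing calls then copy the attribute into the X-Call-Id header of the next request, so grepping the logs for a single id shows the whole chain.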

Hardcoded hell

All the addresses (endpoints) of your services are hardcoded somewhere, sometimes directly in the code.

Your microservices directly know about each other, so every time you add a new microservice you have to crack open your code or, if you are in a slightly better situation, your configuration or deployment scripts. If you are also experiencing the “All your machines are belong to us” antipattern, you may be using some form of DNS or naming trickery at the machine level.

You can easily build your own simple discovery mechanism, with a replicated registry which hosts service information, or you can introduce an existing discovery system like Eureka, ZooKeeper, Consul, or msnos (yeah, shameless plug!).
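A sketch of the "build your own" variant, assuming you handle replication and the expiry of dead entries separately:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// A minimal registry: services register their endpoints under a name,
// clients look names up instead of hardcoding addresses. A real one
// would replicate this state and expire entries of dead services.
public class ServiceRegistry {

    private final Map<String, List<String>> endpoints = new ConcurrentHashMap<>();

    public void register(String serviceName, String endpoint) {
        endpoints.computeIfAbsent(serviceName, name -> new CopyOnWriteArrayList<>())
                 .add(endpoint);
    }

    public List<String> lookup(String serviceName) {
        return endpoints.getOrDefault(serviceName, List.of());
    }
}
```

A service announces itself with registry.register("users", "http://10.1.2.3:8080"), and callers pick an endpoint from registry.lookup("users") instead of a hardcoded address.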

Synchronous world

Every call in your system is synchronous, and you wait for things to actually happen before returning to the caller: you wait for the database to store the data, for another service to perform an operation, and so on.

The issue with this setup is that your services will be very slow, every call can potentially hang forever, and just one malfunctioning service can compromise your whole system.

You need to use a 201 with a Location header as much as you can for creation operations (or a 202 Accepted if you fancy), putting queues in front of your receivers and implementing asynchronous protocols from the start. It's more complicated, of course, but in terms of performance it pays off big time.
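For instance, here is a JAX-RS sketch that accepts the work, queues it, and immediately returns a 202 with a Location header the client can poll; the resource names are illustrative:

```java
import java.net.URI;
import java.util.UUID;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("/documents")
public class DocumentsResource {

    // a worker thread drains this queue and performs the actual creation
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

    @POST
    public Response create(String payload) throws InterruptedException {
        String id = UUID.randomUUID().toString();
        pending.put(id + ":" + payload);

        // tell the caller where the result will appear, without waiting for it
        return Response.accepted()
                       .location(URI.create("/documents/" + id))
                       .build();
    }
}
```

For the synchronous variant you would do the work and return Response.created(location) instead, but the 202 keeps the request fast and survivable even when a downstream dependency is slow.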

Babel tower

You have a lot of microservices and they use all sorts of different lingos to talk to each other (e.g. REST, SOAP, IIOP, RMI…)

The integration of a new microservice requires a lot of work, the number of adapters increases exponentially, and you always have to consider the cost of integration before deciding to create a new microservice, thus ending up with a small number of big services.

You need, of course, to standardize your protocols, and introduce boundary objects only where strictly required. A very successful approach consists in using REST as a synchronous (point-to-point) protocol and a lightweight messaging protocol for fully asynchronous communications, such as a message queue like RabbitMQ or a pub/sub like Redis.
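The asynchronous half might look like this with the RabbitMQ Java client; the queue name and host are illustrative:

```java
import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class EventPublisher {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // durable queue so events survive a broker restart
        channel.queueDeclare("user-events", true, false, false, null);
        channel.basicPublish("", "user-events", null,
                "user.created:42".getBytes(StandardCharsets.UTF_8));

        channel.close();
        connection.close();
    }
}
```

Consumers subscribe to the queue and react in their own time, while point-to-point reads stay on plain REST: two protocols in total, instead of one per pair of services.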

All your machines are belong to us

(a note about this one: I consider it an antipattern, but I may be very wrong, so please take it with caution)

A new virtual machine is spun up in the cloud every time a new service is provisioned, and your architecture relies on this for locating or monitoring the services.

When this happens you basically have a lot of money to spend, and you can afford not to care about things like your next AWS bill. Your architecture is now strictly tied to the money you have: if for any reason you have to shrink down, you won't be able to do so in a timely manner.

Make sure that your architecture can scale without relying on metal/virtual scaling only: consider solutions based on containers like Docker, CoreOS, or Mesos, application servers, or plain old barebone deployments. If your services are self-contained, you can always run them on any machine that has the correct platform installed.
