Distributed System Explained recording (in English) is available!

Thanks to the hard work of the Geecon team, the recording of my speech about Distributed Programming is now available on YouTube!

The code is on GitHub as usual, and remember this is NOT production code 🙂

I have now given this talk at four conferences in four countries (Italy, Israel, Poland, Czech Republic) and every time it has been a success, but I think it's now time to move on to something else 🙂 There will be another recording from the amazing Geecon Prague conference (I will update this post) that might be slightly better due to me moving around less, eheh!

To be honest, I would like to give this talk in my new home country, the UK (well, at least until I get kicked out due to Brexit), but I have had no success with any CFP, so if anybody wants to give me a shot, I am up for the challenge!


CAP Theorem @ Codemotion 2016, Milan

I just came back from Codemotion Milan, and it was great! Apparently a lot of people liked my presentation; some told me that, for the first time, they really understood the CAP theorem. Which is absolutely great!

Thanks to everybody who attended: you were a fantastic bunch, and I feel absolutely grateful and privileged to have been there!

I am looking forward to the next tech meeting or conference!

Ah, the code is on GitHub, but please keep in mind that it is basically the result of a four-day spike, so it's not particularly good. But I promise I will refactor it 🙂

The recording (in Italian) is available on YouTube thanks to Codemotion.


My PM wants to take a shortcut. How do I explain my engineering point of view?

In this post I describe a situation between me, the head of engineering, and a PM (Don) who wants to push a (not very good) solution onto one of the engineering teams. The engineers are pushing back, and Don does not understand why. Everybody is acting in good faith, but they do not understand each other…

Dear Don….

Let me try to explain the issue here from my point of view. This is a classic problem in software engineering, and it happens all the time; however, if you want to understand it you will have to take a bit of time, at least as much as I invested to write this, so please sit down, relax, and enjoy the ride!

abstraction consists in capturing those portions of reality that are significant for your problem

One of the core concepts of software engineering is abstraction, which consists in capturing those portions of reality that are significant for your problem: software systems try to represent reality, but its complexity can be overwhelming. Imagine that you want to model a car: how would you do that? Would you represent it with four wheels, four doors, a bonnet? Or would you consider its speed on the road, its position on the map? What about the current angle of the steering wheel? The number of revs of the engine? I could go on forever. The fact is that you have to capture a portion of it, the parts that make sense for your problem. So, if you plan to manage a factory that builds cars, then the structural abstraction (wheels, bonnet, doors) is a good one, while if you are building a navigation system you will be mostly interested in its position, speed and the like.
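To make this concrete, here is a minimal TypeScript sketch (all names are invented for illustration) of the same car captured by two different abstractions, one for a factory and one for a navigation system:

```typescript
// Abstraction for a car factory: structural properties matter.
interface FactoryCar {
  wheels: number;
  doors: number;
  bonnetColour: string;
}

// Abstraction for a navigation system: dynamic state matters.
interface NavigationCar {
  latitude: number;
  longitude: number;
  speedKmh: number;
  headingDegrees: number;
}

// The same physical car, captured twice, keeping only what each problem needs.
const onAssemblyLine: FactoryCar = { wheels: 4, doors: 4, bonnetColour: "red" };
const onTheRoad: NavigationCar = {
  latitude: 52.37,
  longitude: 4.9,
  speedKmh: 50,
  headingDegrees: 90,
};
```

Neither abstraction is "the" car: each one is a deliberate choice about which slice of reality the problem at hand actually needs.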

models are implementations of abstraction in the software realm

Once you have defined the overall abstraction that you want to use, you end up defining your models, which basically are implementations of the abstraction in the software realm, defining structure and behaviour based on your requirements. In an Object Oriented approach those are usually represented (unsurprisingly) by objects, which may also have (in some typed languages) a generalization, which is basically a blueprint to create objects (usually called a “class”, but that's not really important). They may also have some form of persistent representation, which can be stored in a relational database (like MySQL) in the form of records in tables, or as a document in a NoSQL database (like MongoDB). They also have a tight relation to the user experience, which should be built around such models and should match the mental model that we (and our users) will instinctively adopt and use.
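The chain from blueprint to object to persistent representation can be sketched like this (a TypeScript sketch; the class and the document shape are invented for illustration):

```typescript
// A class is the blueprint (generalization) from which objects are created.
class Car {
  constructor(
    public readonly id: string,
    public wheels: number,
    public doors: number,
  ) {}

  // The same model can be serialized into a persistent representation,
  // e.g. a document for a NoSQL store, or a row for a relational table.
  toDocument(): { _id: string; wheels: number; doors: number } {
    return { _id: this.id, wheels: this.wheels, doors: this.doors };
  }
}

const fiat = new Car("fiat-500", 4, 2); // an object built from the blueprint
const doc = fiat.toDocument();          // the shape you would hand to the database
```

The point is that the class, the in-memory object, and the stored document are all expressions of the same underlying model, which is why damaging the model ripples through all of them.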

on every change the models must be improved to accommodate future changes

I hope it’s clear now why models are so important, and how pervasive they are: they are basically the foundation of our software; get them wrong, or screw them up, and you will have very big problems. For that reason, maintaining and evolving these models correctly is extremely important, and the trick is to make sure that, with every change we make, the models are improved so that it’s easier to accommodate changes in the future.

your change does not evolve the model, it violates the underlying abstraction, and makes it harder to change

Now, the change that you are suggesting consists in picking a portion of that model and changing it in a way that violates the underlying abstraction: you are basically proposing to screw it up. You are not evolving it, you are not making it easier to change in the future, you are just patching it. And when you continuously make changes like these, you end up with a pile of crap that you won’t be able to change at all. Sound familiar?

do not offer solutions, state the problem

For that reason the engineers are resisting this action. So, do not offer solutions: state the problem, and trust your team to come up with the right solution! And if it’s not right… well, failure should be part of the process. As Lynda Resnick once said, “you will learn more from your failures than your successes”.

You should attend a conference. No, better, be a speaker!

I just came back from Codemotion Amsterdam and it’s been a fantastic experience! I met a lot of smart people, I’ve been infected with a lot of new ideas, and I honestly cannot wait to get back to work to share this fun! It’s incredible how much knowledge you can pack into two days, and how much inspiration you can get, if only you try!

But I was also a speaker there, and I thoroughly enjoyed the fact that I was able, myself, to influence people. And a very good breed of people: the ones who actually go to a conference! Some of them took days off to attend, gave up a couple of days on the beach in the sun to come to a conference and, among other things, see me 🙂 How cool is that? These are the people you want to have, as engineers, in your company! People who actually care, who are willing to make sacrifices to learn and to improve themselves!

How many of them do you have in your company? You should do something about that: you should try to get the best people around, and make sure they stick with you big time! So, now, go and check, NOW! How many people in your company have this (brave, sane and good) attitude?

And what about speakers? That’s the next step! How many people in your company have decided to be a speaker at a conference? To risk public humiliation, the fear of the demo going wrong, the nights spent preparing slides, the endless rehearsals… for what? Speakers usually do not get paid. Sometimes they get some costs refunded, sometimes not. And yet… they do it! Because… because we can! I remember my first European conference, Javapolis 2006 (now known as Devoxx), where I talked about Selenium and FitNesse… I was scared, I was unsure, I felt very unsafe, but I did it, and it went well! And if somebody like me, at the time, did it, YOU can do it! Start small: an internal meeting, then a meetup or a local user group, then a small national conference, then a bigger one, then a European one… YOU CAN! And it’s awesome!

Now, I am an old fart, and even if I’ve been speaking at many conferences during my career, I am almost out of the game (hey, I said almost!), and the best thing I can do is to breed the generations to come 🙂 And heck, I will make a point of it: I will make sure my developers become speakers, so that this beautiful cycle will continue.

I will have speakers in my company, promise!

Codemotion 2016, Amsterdam!

I am so glad I am here! The venue is amazing and the people are awesome!


I just finished my talk about Microservices and NodeJS. As usual I had to cut it short, as there was not enough time to go through even half of the slides, so I concentrated on the code 🙂 You can find the latest version of the slides on SlideShare.

All the code is on GitHub, and you can see from the commits how long it took: basically less than an afternoon, starting from zero knowledge of NodeJS (okay, apart from a couple of bugs I had to fix for the conference!). So please try it yourself!


How to make Google Search irrelevant

This is a concept I thought about a year ago, and it never went past the inception phase. I still think it’s a good one, and for that reason I want to share it so that it won’t get lost in time 🙂

Vision: “Replace Google search”

The final vision for this project is to replace Google search. At the moment there’s no way to create a better ranking algorithm than the one used by Google, so the only way to beat Google search is to make it irrelevant, like Netflix did to Blockbuster or like digital music did to CDs.

The aim of this project is to collect all human knowledge in a single, shared database, with all users contributing to it while, at the same time, augmenting their personal knowledge.

Why this will work

As a person, I am always frustrated by the time I spend finding something on the internet, or even worse, when I am offline. Searching the internet might be fast, but even when the experience is okay, I tend to forget things, and I repeat searches I did in the past, sometimes very frequently. What I would like is to maintain this knowledge somewhere, in a place where it’s always accessible and extremely easy to search. At the moment there’s no such tool around: Evernote and Pocket, two tools often cited in relation to this idea, have a much less bold metaphor, do not focus on social interaction, and have no ranking of their contents, as they are focused on a single person.

Strategy (overview)

The implementation strategy will happen in three phases:

Phase one:
Launch of an application suite to collect a person’s knowledge

    • always available, on every device, must work offline
    • a simple and clear metaphor to manage knowledge
    • a set of super simple mechanisms to import knowledge from existing sources (e.g. Quora, Wikipedia, IMDB, Stackoverflow, emails…)

Phase two:
Introduce social capabilities

    • a social mechanism to share or include knowledge of other people
    • a ranking algorithm to qualify better content from better users

Phase three:
Launch of a worldwide site to explore the knowledge of mankind, effectively replacing all existing focused and unfocused sources

    • all existing human knowledge will be available, already catalogued and sorted by human beings, voluntarily
    • the ranking algorithm will allow the relevant and better contents to emerge spontaneously and naturally

Strategy (detailed)

A more detailed explanation of the three-phases strategy follows.

Phase one: application suite

An application that must be always available, whether I am using a mobile phone, a desktop computer, or even a Kindle. I should be able to install an application / plugin that allows me to push data into my knowledge base without hassle: a set of dedicated browser plugins is highly recommended. Also, it must work offline: my entire database (or at least the most relevant part of it) shall always be accessible, and I will be able to push new data into the database at any time, as it will be synced automatically as soon as I am online again.

A simple and clear metaphor to manage knowledge has to be found; it has to support both the adding and the retrieval of information to/from the database. At the moment the most promising model is based on a graph of information, with tags associated to it, and maybe different clusters, but it will be very important to find an extremely simple, effective and attractive mechanism for the end user to store and retrieve their knowledge (if we had telepathy, we would use that). The metaphor must support some form of classification of contents, such as pre-defined tagging, clerical, prototyping.

Because we are consolidating knowledge, we need to provide a set of super simple tools to import knowledge from existing sources, specifically on the internet: Quora, Wikipedia, IMDB, Stackoverflow, and even personal emails. Ideally we should think about two different mechanics to collect a piece of information: you can copy it, so that it’s merged into your database and you can change it as much as you want, or link it, so that you can still see the whole information (as it’s constantly synced from the remote side) but in read-only mode. An advanced merge mode could be considered for copied contents, as long as it’s extremely simple.
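The copy-versus-link mechanic could be modelled roughly like this (a TypeScript sketch; all type and field names are invented for illustration, since the project never went past inception):

```typescript
// A piece of knowledge imported into the personal database.
interface KnowledgeItem {
  title: string;
  body: string;
}

// "Copy": the content is merged into your database and freely editable.
interface CopiedItem extends KnowledgeItem {
  kind: "copy";
}

// "Link": the content stays read-only and is periodically re-synced
// from its original source.
interface LinkedItem extends KnowledgeItem {
  kind: "link";
  sourceUrl: string;   // where to re-fetch the content from
  lastSyncedAt: Date;  // when it was last refreshed
}

type ImportedItem = CopiedItem | LinkedItem;

// Only copied items may be edited; linked ones are read-only by design.
function canEdit(item: ImportedItem): boolean {
  return item.kind === "copy";
}
```

Modelling the two mechanics as a discriminated union makes the read-only rule a property of the data itself, rather than something each screen of the application has to remember to enforce.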

Phase two: social capabilities

A strong social element must be added to the platform from the very start. The basic mechanism would allow me to declare some content “public” or “friendly” (visible to a set of friends or a circle), so that other people can pull my content into their database (here too, copy+merge or link). An integration with either Facebook or Google+ is mandatory; more integrations are highly advisable.

An important step that enables the transition to phase three of the project is a very good ranking algorithm, so that we can identify the better content and the better users, ideally the “experts in the field”: for that reason the metaphor, as explained before, must enable the classification of the contents. Such a ranking algorithm should be related to relevance and, in general, to the reputation of the users, the same way Quora or Stackoverflow, for example, rank their users and automatically decide which contents should in principle be more relevant to a question: how many users linked such content? Or copied it? Or liked it? Explicit rating should also be allowed, but in general the more automation the better. Some sort of gamified ranking is of course necessary.

Phase three: the worldwide site

The final aim of this project is to create a “socialpedia” or a “knowledgepedia”: a form of global knowledge encyclopedia managed by all the users. The main difference from the most obvious antagonist, Wikipedia, is the way contents will emerge: we will not be collecting and classifying information, the users will do it, as they organize their own knowledge. In the process, they will organize a global body of knowledge that can then be used for the purpose of rendering any other search useless or redundant. The ranking algorithm will guarantee that the best content surfaces in such a database, and it will be invaluable because actual human beings, not machines or algorithms, will have classified it.

At that point, you will hold the world’s knowledge. Like Google now, but better.