Container technologies (chroot, LXC, …) are very common these days, especially since the massive adoption of Docker.

    One of the use cases of container technologies is to isolate services from each other and from the host system. As a result, in case of an intrusion, the attacker would in theory be trapped inside a container. From the attacker’s perspective, it is therefore important to be able to detect whether a compromised service lives in a restricted environment such as a Docker container or runs directly on the host operating system.

    One way to do so is to have a look at the inode number of the / mount point (ls -id /). On the host system it will be very low (generally 1 or 2), whereas in a container it will generally be quite high (4851522 in the asciicast).
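
    For instance, a quick comparison might look like the following (the exact inode numbers are illustrative and will vary between systems, and container_name is a placeholder):

    # On the host, the root filesystem usually gets a very low inode number.
    $ ls -id /
    2 /

    # Inside a container, the overlay mount gives / a much larger inode number.
    $ docker exec container_name ls -id /
    4851522 /
    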

    Security · SysAdmin Container · Docker

    As you have probably already noticed, the number of VPN providers has grown massively in recent years, and you can now stumble on ads for VPN subscriptions on every corner of the Internet. Of course, not all of these providers are equal in terms of quality of service, security or privacy. On the contrary, some of them falsely claim to have a zero-log policy or even to protect you from malicious actors.

    Having said that, using a public VPN service is not necessarily a bad idea if you choose it carefully. It's a matter of trust after all. Who do you trust the most to keep your traffic private: your ISP, or a carefully selected VPN provider? (I say carefully because there are far more VPN providers on the market than ISPs available where you live.)

    If you trust your VPN provider, you get in theory a higher degree of online privacy by proxying your connections through it (even if, as of today, nothing can fully replace Tor). But what about security? From a theoretical standpoint once again, TLS alone is enough to guarantee the security of your online activity. And thanks to projects like Let's Encrypt, most websites are accessible over HTTPS these days.

    Essay · Security · Web Firefox · TLS · VPN

    The Docker engine stores all image layers in /var/lib/docker by default, which is not compatible with Qubes OS' template system: in an app qube, that directory is not one of the locations that persist across reboots, so you would lose all your saved images each time you restart the app qube in question.

    You can change the Docker engine's root data directory by editing /etc/docker/daemon.json in your template:

    {
      "data-root": "/usr/local/lib/docker"
    }
    

    The list of directories that can be used to store Docker's data persistently across reboots can be found in the Qubes OS documentation.

    Once the configuration file is in place in your template, you can then boot up any app qube using this template and check that the new location has been taken into account:

    $ sudo docker info | grep -i 'root dir'
     Docker Root Dir: /usr/local/lib/docker
    

    Please note that if you already had Docker images stored in an app qube before making this configuration change, they won't be available in Docker any more. You would need to move them to the new location or download them again.
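
    If the old data is still present under /var/lib/docker (for example, if you are migrating in the same session, before the app qube has been rebooted), a minimal sketch of the move, assuming the paths used above, could be:

    $ sudo systemctl stop docker
    $ sudo mkdir -p /usr/local/lib/docker
    $ sudo cp -a /var/lib/docker/. /usr/local/lib/docker/
    $ sudo systemctl start docker
    

    Running docker image ls afterwards should show the previously stored images again.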

    SysAdmin Qubes OS · Docker

    The CodeQL CLI includes a language server, which can easily be set up in coc.nvim by adding the following to your coc-settings.json configuration file:

    {
      "languageserver": {
        "codeql": {
          "command": "codeql",
          "args": [
            "execute",
            "language-server",
            "--check-errors",
            "ON_CHANGE",
            "-q"
          ],
          "filetypes": [
            "codeql",
            "ql"
          ],
          "initializationOptions": {},
          "settings": {}
        }
      }
    }
    
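
    If coc.nvim is already running, you may need to restart it (or Vim) for the new language server to be picked up:

    :CocRestart
    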

    Given that coc.nvim uses Vim's filetype detection system and not file extensions, you need to let Vim know that *.ql files are CodeQL files. One way to do that is to add a codeql.vim file to ~/.vim/ftdetect/:

    " Set '.ql' files as CodeQL files.
    au BufRead,BufNewFile *.ql set filetype=codeql
    
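    You can then open any *.ql file and confirm from within Vim that the detection works:

    :set filetype?
      filetype=codeql
    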
    Programming CodeQL · Vim

    Generally, when I want to explore the file system of a Docker container, I do it interactively by executing a shell inside it, something like:

    $ docker exec -it container_name sh
    $ ls
    ...
    

    But sometimes the image of the container I want to explore does not contain any tools for this purpose: no ls, no cat, not even a shell. This is especially the case for images built from scratch (the empty base image), which is very common with multi-stage builds.

    One solution is to rely on the docker export command, which allows you to "export a container's filesystem as a tar archive". By default, it writes the tar archive to STDOUT, which means it can easily be piped into the tar command-line tool to list its contents on the fly:

    $ docker export 7c1f2edd42c4 | tar -tv | tee filesystem.txt
    -rwxr-xr-x root/root         0 2022-04-04 09:46 .dockerenv
    drwxr-xr-x root/root         0 2022-03-19 15:52 bin/
    -rwxr-xr-x root/root  45687736 2022-03-19 15:52 bin/node
    ...
    
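
    Alternatively, the same export can be unpacked somewhere on the host so that the container's filesystem can be browsed with your usual tools (the container ID comes from the example above, and /tmp/container-fs is just an arbitrary destination):

    $ mkdir /tmp/container-fs
    $ docker export 7c1f2edd42c4 | tar -x -C /tmp/container-fs
    $ ls /tmp/container-fs
    ...
    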
    SysAdmin Docker