Make Me Hack

Hardware Hacking, Reverse Engineering and more …

How To Run An Old Toolchain with Docker

I released the eighth episode of the Hardware Hacking Tutorial series on the Make Me Hack YouTube channel.
This episode is about “How To Run An Old Toolchain with Docker”.

The goal of the Hardware Hacking Tutorial series is to share information on how to do hardware hacking and reverse engineering. The series is useful both for beginners and experts.

We want to build a kernel and a root file system for a QEMU-emulated board, on which to run interesting binaries from our IoT device. Our device, however, has a very old kernel, libraries, and packages, so we need to run an old Buildroot version, or other old tools, that will not run on our current Linux desktop but will run on old Linux distributions.

To overcome this problem in a simple and fast way, without the burden and overhead of a full virtualization environment, we will introduce Docker, which can be configured to run older tools, transparently, on our modern Linux distribution.


This episode shows how to use Docker to build the emulation environment with old tools, like old versions of Buildroot, that don’t run on modern Linux distributions. To solve this issue we have to use a container environment, like Docker, or a more complex virtualization environment.

This topic is not strictly related to hacking, but Docker is a very important tool, needed to move forward with the Buildroot installation, configuration, and generation of the kernel and the root file system for our QEMU-based emulation environment.

Docker is a very useful tool that can be used in many different situations. It is a light and efficient way to package everything that is needed to run complex tools in a well-defined environment; we can include an entire Linux distribution in a Docker image. It is particularly useful in the embedded space where, often, there is the need to rebuild software for devices that have been in the field for many years and that require a very old toolchain.

It is an easy-to-use tool, but it is important to know some basic concepts, which we will explore.

Emulation Environment Requirements

In this episode we will use, as an example, the same sample Gemtek router we used in the previous episodes.

We want to build a kernel and a root file system that is reasonably similar to that of our device, this usually means that we want:

  • The same kernel version as in our IoT device, with a similar configuration: same preemption model, same NAND drivers, same file systems supported, etc. We could also use a different kernel version, but it is better to stick with the same one. Our sample router has kernel version 2.6.36, released in 2010;
  • The same libc implementation and version. This is even more important than the kernel version because, if it is not the same, there is a high probability that the device’s executable binaries will not run in the emulated environment. In embedded devices it is common to use specialized libc implementations to save memory, disk, and processing power, and these implementations are often not fully compatible with one another, or even with different versions of the same implementation. Some of the more popular specialized libc implementations are uClibc, uClibc-ng, musl, and dietlibc. Our sample router has a uClibc version that was released in 2012;
  • The same, or compatible, versions of the libraries used by the executable binaries we are interested in; in our case we have libssl 1.0.0, part of the OpenSSL package, released in 2010.
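These versions can usually be identified by inspecting the extracted root file system of the firmware. The commands below are only a sketch: the exact paths and file names are assumptions and vary from device to device.

```shell
# Run from the root of the extracted firmware file system.
# Paths and file names are illustrative; adapt them to your device.

# libc implementation and version (uClibc encodes it in the file name):
ls lib/ | grep -i 'libc'                      # e.g. libuClibc-<version>.so

# Library versions, e.g. OpenSSL's libssl:
ls lib/ usr/lib/ 2>/dev/null | grep 'libssl'  # e.g. libssl.so.1.0.0

# Kernel version, often embedded as a plain string in the kernel image:
strings boot/uImage 2>/dev/null | grep -i 'linux version'
```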

As we saw in the previous episode, our sample router, manufactured in 2017 and distributed for a couple of years, runs very old software. Unfortunately, this is quite common in embedded devices, and it means that we have to build a root file system with very old software.

We cannot use the latest Buildroot version to build the kernel and the root file system, because the latest version has moved on to newer kernel, library, and package versions.

We have to use an older Buildroot version that has, as much as possible, the components we need, especially the specialized libc implementation. We may have to download many different Buildroot versions to find the right one; once we have found it, the other library versions are usually of the same age as the libc and, usually, match, or are close to, the versions on our device.

With our sample Gemtek router it is much easier to find the correct Buildroot version, because the router itself was built with Buildroot, and its /etc/os-release file tells us that it was built using Buildroot version “2015.02-svn12502”. Probably the manufacturer took version 2015.02, imported it into their Subversion repository, made some small modifications, and released it internally under this version string. By the way, “Subversion”, usually abbreviated “svn”, is a source version control system, like Git, and was very popular before Git. This means that we have to use Buildroot version 2015.02, released in February 2015.

One of the problems of old and complex software like Buildroot is that it will not run, out of the box, on a recent Linux distribution. Buildroot first recompiles the entire toolchain needed to cross-compile for our device architecture, which means recompiling gcc and all of the GNU Binutils; then it uses this toolchain to build the kernel and the other libraries and packages for our target device. Slightly different versions of gcc, Glibc, or other libraries can easily produce errors that were not present on older Linux distributions, and trying to fix all these errors can be time-consuming, like a never-ending steeplechase.

The easiest fix for this issue is to use an older Linux distribution, one that is roughly the same age as the Buildroot version we are trying to use. We could use a virtual machine, but we will use Docker, which gives us a better result: it is faster, it uses far fewer resources, and it makes it easier to share files between the Docker container and our host environment.
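As a sketch of this approach, we can pull an old Debian image and work inside it interactively. Note that the image name is an assumption to verify: official images for end-of-life Debian releases have been moved around on Docker Hub over time (e.g. under `debian/eol`).

```shell
# Pull an old Debian Wheezy image (image name may need adjusting;
# EOL Debian images have moved on Docker Hub, e.g. "debian/eol:wheezy"):
docker pull debian:wheezy

# -i -t : interactive terminal; --rm : remove the container on exit;
# -v    : share the current host directory with the container as /build
docker run -it --rm -v "$(pwd):/build" -w /build debian:wheezy /bin/bash
```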

In our case I wasn’t able to successfully run Buildroot version 2015.02 on my Ubuntu 18.04 or my Ubuntu 19.10, but I was successful on Debian Wheezy, released in 2013.

Docker is very popular for building modern IT infrastructure, based on micro-services running on a scalable architecture with many independent Docker machines, which are called containers, a much more accurate term than “machine”.

We will use Docker differently: as a sort of very efficient virtualization layer to run old software, on an old Linux distribution, inside our modern Linux host machine. We will not make much use of the “isolation” feature between the Docker container and the host machine, which, instead, is usually at the core of Docker infrastructure deployments.

It is very easy to install Docker on our Linux PC as we will see soon. After installation, the user running Buildroot must be a member of the “docker” Linux group to be able to launch Docker.
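On Debian- or Ubuntu-based hosts the installation and group setup can be sketched as follows (package name and commands as found in the standard repositories):

```shell
# Install Docker from the distribution repositories:
sudo apt-get update
sudo apt-get install -y docker.io

# Add the current user to the "docker" group, so Docker can be run
# without sudo; the change takes effect at the next login (or "newgrp docker"):
sudo usermod -aG docker "$USER"

# Quick sanity check that the Docker daemon is reachable:
docker run --rm hello-world
```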

About Docker

Docker is a very nice and well-architected packaging of existing Linux technologies; at its core it uses:

  • the “chroot” environment: a process started in a “chrooted” directory will look up all of its files, such as library files and configuration files, starting not at the real root of the file system but at the specified new root. This means that if, in this new root, we put under the directories “etc”, “var”, “lib”, “bin”, and so on, the directories of another Linux distribution, the process will use those files as if it were running on that other Linux distribution. In reality it is a process running on our host machine, on the same host kernel; it is only using different library, configuration, and executable files;
  • another core Linux feature used by Docker is so-called “namespace isolation”, a concept similar to that of a local variable in a function. This Linux feature gives a process a limited view of the namespaces of the entire system. For example, if the process uses standard Linux library calls to list the processes running on the system, it will see only itself and, possibly, the processes it has created; if it asks for the list of available Ethernet interfaces, it will see only what it has been configured to see; and so on. The situation is similar to that of a process running on a virtual machine, which can see only what is available inside the virtual machine and doesn’t know about the resources of the host;
  • the third important Linux feature used by Docker is so-called “cgroups” (control groups): the possibility to group processes and to assign, to these groups, the maximum amount of resources that they can use. For example, we can assign to a group of processes the maximum amount of RAM they can use, the maximum percentage of CPU, the maximum disk bandwidth, and so on. Using cgroups it is possible to limit the resources a Docker container can use.

This means that, using the chroot environment, namespace isolation, and cgroups to limit resource usage, a Docker container is similar to a Linux virtual machine, but it runs on our same kernel; it is a process, or a process tree, inside our host machine, and it starts in a few milliseconds. A virtual machine, instead, usually needs at least a few seconds to start and has the overhead of an entirely different kernel running.
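Two of these mechanisms can be seen directly from the Docker command line. The commands below are illustrative, using standard Docker CLI flags:

```shell
# Namespace isolation: inside the container, /proc lists only the
# container's own processes (here just the single command we started):
docker run --rm debian:wheezy ls /proc | grep '^[0-9]'

# Cgroups: cap the container at 256 MB of RAM and half a CPU core:
docker run --rm --memory=256m --cpus=0.5 debian:wheezy true
```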

Another nice feature of Docker is that we can start a container using a simple configuration file; it will do its job and, when we terminate it, all modifications made to the container’s disk are lost. When we restart the container, it is as if it had just been rebuilt from scratch. In this way we are always sure that a freshly started container has exactly what it should have, without some unwanted package or library upgrade left behind by a previous system administrator.

If we want permanent storage, we can easily “mount” host directories inside the Docker container; only these directories will keep the modifications made by the container after it terminates. There are other ways to have permanent storage inside a Docker container, but the core concept is the same.
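For our use case, this means keeping the Buildroot tree on the host so it survives container restarts. A sketch, with a hypothetical host directory name:

```shell
# Hypothetical host directory holding the Buildroot source tree:
mkdir -p "$HOME/buildroot-2015.02"

# Everything written under /root/buildroot ends up in the host directory;
# all other changes are discarded when the container exits (--rm):
docker run -it --rm \
    -v "$HOME/buildroot-2015.02:/root/buildroot" \
    debian:wheezy /bin/bash
```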

A Docker container is created from a Docker image, which is described by a Dockerfile, an amazingly simple file format for describing a machine.
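As a sketch, a Dockerfile for an old-Buildroot build container could look like the following. The package list is a typical set of Buildroot build dependencies, not taken from the episode, and for an end-of-life release like Wheezy the apt sources may need to be pointed at archive.debian.org.

```dockerfile
# Hypothetical build environment for Buildroot 2015.02.
# Base image name and package list are assumptions to adapt.
FROM debian:wheezy

# Typical Buildroot build dependencies (illustrative list):
RUN apt-get update && apt-get install -y \
    build-essential wget cpio unzip rsync bc file \
    libncurses5-dev python patch perl git && \
    apt-get clean

WORKDIR /root/buildroot
CMD ["/bin/bash"]
```

We would then build the image once with `docker build -t buildroot-2015 .` and start a fresh, identical container from it every time with `docker run -it --rm buildroot-2015`.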

Links with additional Information
