Offline Docker: The routing to freedom

Posted in: offlining, docker, networking, iptables, iproute
Part 1 of the series "Offline Docker". Also in this series: Part 2 | Part 3

Five years ago I decided that I wanted to be able to work from anywhere, anytime. Four years ago, that promise was kept. In part.

I do not need to go to an office somewhere. I can work outside in a park if I want to. I can ride on to a new town every day. I only ever need to bring my trusty old Tuxedo Laptop wherever I go.

All of this is true, as long as there is internet available. And, as it turns out, good internet at that.

This became especially obvious once I started working on a project built around a collection of microservices running in a Docker environment, one that also makes extensive use of custom packages that change frequently as development proceeds. As it turns out, every time I want to rebuild my cluster of containers while sitting in the sun in a park, I need my LTE modem to play along. If it doesn't, a single package that can't be reached will thwart the whole build.

This does not feel much like freedom after all. So let's see how we can serve all of these locally from our host instead.

First of all, we have to be able to reach our local host from the Docker containers. This is less straightforward than it may seem at first. The most obvious solution is to use the host network driver, but that exposes your whole localhost interface and routes to the internet, too. Aside from the security issues this raises, it can also trick you into assuming that some resources are available when they in fact will not be once you move on to a different environment. What we want is to block access to the internet, while choosing which services to let the Docker container use.

Once we have this in place, we want to create local repositories for all the stuff we otherwise need to download. In this particular case, that means a Docker registry, a Python repository, a Node.js repository and a Linux package mirror. We'll use Arch Linux for this exercise, because that's been my home environment for the last four years.

In fact, having your own mirror of all these, and of anything else you base most of your work on, is a good idea beyond offlining itself: wasting bandwidth on items you've already downloaded hundreds of times is not exactly a nod to climate awareness either. And even more importantly, ensuring the availability of software is something we should all participate in, not merely defer to a couple of git repository giants.

Reaching the local host

The local host, not localhost, mind you. Which means we need a different interface to connect to. And since we are not wiring anything up in a physical sense, a virtual interface seems to be the reasonable way to go.

First, let's prepare a base Docker layer with some tools that you should never leave home without.

Prepare the Docker image

FROM archlinux:latest

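# Refresh the package databases and install a small networking toolbox:
# netcat and socat for talking to raw ports, inetutils for ping and
# friends, iproute2 for the ip command.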
RUN pacman -Sy && \
        pacman -S --noconfirm gnu-netcat socat inetutils iproute2

Let's build this as an image called archbase. Provided the content above is a file called Dockerfile.archbase in your current directory:

$ docker build -t archbase -f Dockerfile.archbase .

Set up network interfaces

Bring up a virtual interface. Note that a freshly added link starts out down, so it has to be set up explicitly.

$ ip link add foo type dummy
$ ip link set foo up

Next, create the no-internet Docker network. This is not a separate network driver: the builtin bridge driver's --internal option provides exactly what the name advertises, a network with no route to the outside world.
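
Here I pin the subnet so that it matches the listings below; leave out --subnet and Docker will pick one for you.

$ docker network create --internal --subnet 10.1.1.0/24 no-internet

Either way, docker network inspect will tell you what you got: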

$ docker network inspect no-internet
[...]
"Config": [
        {
                "Subnet": "10.1.1.0/24",
                "Gateway": "10.1.1.1"
        }
]

Assign an IP address to the dummy interface in a different subnet than the one the Docker network uses.

$ ip addr add 10.1.2.1/24 dev foo
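
As a quick sanity check, assigning the address should also have installed a connected route for the new subnet (the exact output may differ on your machine):

$ ip route show dev foo
10.1.2.0/24 proto kernel scope link src 10.1.2.1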

Traverse the firewall

Find the bridge that backs the no-internet Docker network. Look for an IP address that matches the gateway from the network config above.

$ ip addr ls
17: br-d4ddb68f9938: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:9c:1a:58:d2 brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.1/24 brd 10.1.1.255 scope global br-d4ddb68f9938
       valid_lft forever preferred_lft forever
    inet6 fe80::42:9cff:fe1a:58d2/64 scope link
       valid_lft forever preferred_lft forever
[...]
7614: foo: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 1a:50:53:2b:96:98 brd ff:ff:ff:ff:ff:ff
    inet 10.1.2.1/24 scope global foo
       valid_lft forever preferred_lft forever
    inet6 fe80::1850:53ff:fe2b:9698/64 scope link
       valid_lft forever preferred_lft forever

Add the virtual interface to the bridge.

$ ip link set foo master br-d4ddb68f9938
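
If you want to double-check, the entry for foo in ip link should now carry a master field naming the bridge:

$ ip link show foo
7614: foo: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d4ddb68f9938 state UNKNOWN mode DEFAULT group default qlen 1000
[...]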

Long story short, attaching foo to the bridge makes traffic from the container reach the INPUT chain in iptables on the host. Now we can make an exception for incoming traffic from the no-internet Docker bridge.

$ iptables -I INPUT 1 -i br-d4ddb68f9938 --destination 10.1.2.1/24 -j ACCEPT
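
To confirm that the rule landed at the top of the chain:

$ iptables -L INPUT -n --line-numbers | head -n 3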

Verify

Provided you don't have any other hurdles in your local iptables setup, a port on device foo should now be reachable from the Docker container. We can use socat to check.

On the local host:

$ socat TCP4-LISTEN:8000,bind=10.1.2.1,reuseaddr -

Start the Docker container with a shell prompt:

$ docker run --network no-internet -it archbase /bin/bash

The moment of truth

$ echo bar | socat - TCP4:10.1.2.1:8000

Spoiler: bar should pop up on the local host side.
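
And just to be sure the other half of the promise holds, try reaching an arbitrary host on the internet from inside the container. The address below is merely a stand-in for any outside host; since the network is internal, the attempt should fail, either erroring out immediately or simply timing out depending on your setup.

$ echo bar | socat - TCP4:1.1.1.1:8000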