Docker Swarm Offline
I work on learning new technologies during my commute. Even though I can tether from my phone, pulling 100MB Docker images is not very efficient when the train is traveling at 60 miles per hour.
Hence, I tend to work offline, and I have discovered that working with Docker Swarm offline presents a few challenges.
In this article, I will go over one of these challenges: Docker image distribution.
Requirements
If you would like to follow along, please:
- Install VirtualBox
- Install Vagrant
- Download this Vagrantfile to your local folder and change this line:
WORKER_COUNT = 0
to:
WORKER_COUNT = 1
- From the local folder containing the Vagrantfile, run:
$ vagrant up
to create two virtual machines: manager and worker1.
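Once the VMs are up, it can be worth confirming the swarm formed correctly before going any further. A minimal check, assuming the Vagrantfile provisions the swarm automatically and uses the default node names:

```shell
# SSH into the manager VM and list the swarm's nodes;
# both manager and worker1 should appear with STATUS "Ready"
vagrant ssh manager -c "sudo docker node ls"
```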
Ensure Clean Slate
Let’s start with no images on any of the nodes connected to the swarm. Run $ docker rmi <image name>
to get rid of any images. You may need the -f option.
On the manager node, use $ docker images
and $ docker ps
to check that there are no images or containers running:
vagrant@manager:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
vagrant@manager:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Similarly on worker1 node:
vagrant@worker1:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
vagrant@worker1:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
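If a node has accumulated many images, removing them one by one gets tedious. A one-liner sketch that clears them all, where -q prints only the image IDs:

```shell
# Force-remove every image known to the local Docker daemon
sudo docker rmi -f $(sudo docker images -q)
```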
Just to double check, use $ docker service ls
to ensure no services are running on the swarm:
vagrant@manager:~$ sudo docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
Load Images
For this article, we are only going to work with the nginx image. On the manager node, grab it with $ docker pull nginx
. The nginx image itself is 109MB. No big deal when moving at 60mph, right?!
vagrant@manager:~$ sudo docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
a5a6f2f73cd8: Download complete
1ba02017c4b2: Download complete
33b176c904de: Download complete
Digest: sha256:5d32f60db294b5deb55d078cd4feb410ad88e6fe77500c87d3970eca97f54dba
Status: Downloaded newer image for nginx:latest
vagrant@manager:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest 62f816a209e6 2 weeks ago 109MB
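Before disconnecting, it can help to save the pulled image to a tar archive so it can be copied to other nodes later. A sketch; the filename is arbitrary:

```shell
# Export the nginx image to a tarball on the manager node
sudo docker save nginx:latest -o nginx.tar
# Confirm the archive exists and roughly matches the image size
ls -lh nginx.tar
```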
That’s all we will need from hub.docker.com for this article. I will work offline for the rest of the article. If you are following along and want to see the same results, find a way to detach from the Internet, for example by turning off the network on the host computer.
Running Offline
With any locally stored image (i.e. one listed by $ docker images
), we can create a swarm service while the server is offline, but each time a warning message will appear:
image nginx:latest could not be accessed on a registry to record its digest. Each node will access nginx:latest independently, possibly leading to different nodes running different versions of the image.
For example:
vagrant@manager:~$ sudo docker service create nginx
image nginx:latest could not be accessed on a registry to record
its digest. Each node will access nginx:latest independently,
possibly leading to different nodes running different
versions of the image.
ve1dms3ut00e1rxqwnexdmmn9
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged
This is fine; the service starts as a single instance.
Let’s see what happens when we want to scale the service. Since we have more than one node in our swarm, let’s use that other node:
vagrant@manager:~$ sudo docker service scale relaxed_aryabhata=2
relaxed_aryabhata scaled to 2
overall progress: 2 out of 2 tasks
1/2: running [==================================================>]
2/2: running [==================================================>]
verify: Service converged
Ok, so the service scaled and there are two instances. Which nodes are they running on? Use $ docker ps
on each node to find out:
On manager:
vagrant@manager:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a21491f61254 nginx:latest "nginx -g 'daemon of…" 39 seconds ago Up 38 seconds 80/tcp relaxed_aryabhata.2.v1iidzzkkflmwgyff6zep5psd
0774a9f7cccf nginx:latest "nginx -g 'daemon of…" 3 minutes ago Up 3 minutes 80/tcp relaxed_aryabhata.1.tv10gis5ywudj4unqsds1uywn
On worker1:
vagrant@worker1:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Interesting. BOTH instances of the nginx container are running on the
manager node; none are on worker1. Hmm… Let’s see what $ docker service ps <service ID>
says:
On manager:
vagrant@manager:~$ sudo docker service ps ve1
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
tv10gis5ywud relaxed_aryabhata.1 nginx:latest manager Running Running 3 minutes ago
v1iidzzkkflm relaxed_aryabhata.2 nginx:latest manager Running Running 2 minutes ago
b5nb2q1of04j \_ relaxed_aryabhata.2 nginx:latest worker1 Shutdown Rejected 37 seconds ago "No such image: nginx:latest"
5k8wky4e5zw1 \_ relaxed_aryabhata.2 nginx:latest worker1 Shutdown Rejected 52 seconds ago "No such image: nginx:latest"
ql2vzpub3dn5 \_ relaxed_aryabhata.2 nginx:latest worker1 Shutdown Rejected about a minute ago "No such image: nginx:latest"
aaa66r7go1fz \_ relaxed_aryabhata.2 nginx:latest worker1 Shutdown Rejected about a minute ago "No such image: nginx:latest"
So, there were attempts to scale the service on the worker1 node, but the error message says:
“No such image: nginx:latest”
Which makes sense: there are no images on worker1.
This is the problem with working with Docker Swarm offline. For any service, all the nodes must have the image locally, or each node will try to pull it from a registry, by default Docker Hub at https://hub.docker.com.
Without a network connection, the nodes cannot get the image, so it must be loaded manually from a file, using the techniques from my How to Work with Docker Images article.
With a small swarm, it’s easy to load images manually. With more nodes, it becomes tedious and error-prone.
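The manual loading could be sketched like this, assuming an nginx.tar archive created earlier with docker save, and that SSH access between the VMs is set up (hostnames are from this Vagrant setup; a real swarm would have its own names):

```shell
# Copy the saved image to each worker and load it into that node's
# local Docker cache, so the swarm can schedule tasks there
for node in worker1; do
  scp nginx.tar vagrant@$node:/tmp/nginx.tar
  ssh vagrant@$node "sudo docker load -i /tmp/nginx.tar"
done
```

With one worker the loop is trivial, but it also shows why this approach scales poorly: every new node means another copy and load, and a missed node silently falls back to rejecting tasks.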
Better Solution?
As always, I am on the lookout for a better solution and have found one that works well. Hint: it’s setting up a local Docker Registry.
I will go over setting one up in Docker Swarm and how to interact with it.
Stay tuned!