Getting Started with SocketPlane and Docker on OpenStack VMs
With all the well-deserved attention Docker gets these days, the networking aspects of Docker are becoming increasingly important. As many have pointed out already, Docker itself has somewhat limited networking options. Several projects exist to fix this: SocketPlane, Weave, Flannel by CoreOS, and Kubernetes (which is an entire container orchestration solution). Docker recently acquired SocketPlane, both to gain better native networking options and to get help building the networking APIs that other network solutions need to plug into Docker.
In this post, I’ll show how to deploy and use SocketPlane on OpenStack VMs. This is based on the technology preview of SocketPlane available on GitHub, which I’ll deploy on Ubuntu 14.04 Trusty VMs.
Launch the first VM and bootstrap the cluster
As SocketPlane is a cluster solution with automatic leader election, all nodes in the cluster are equal and run the same services. However, the first node has to be told to bootstrap the cluster. With at least one node running, new nodes automatically join the cluster when they start up.
To get the first node to download SocketPlane, install the software including all dependencies, and bootstrap the cluster, create a Cloud-init script like this:
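I bind to eth0 here, since that is the name of the primary interface on my Trusty VMs; adjust it if your image differs:

```
#!/bin/sh
# Download and run the SocketPlane installer; BOOTSTRAP=true tells it
# to initialize a new cluster instead of joining an existing one.
curl -sSL http://get.socketplane.io/ | sudo BOOTSTRAP=true sh
# Bind the SocketPlane agent explicitly to eth0 (see the note below
# about interface autodetection on OpenStack VMs).
sudo socketplane cluster bind eth0
```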
Start the first node and wait for the SocketPlane bootstrap to complete before starting more nodes (it takes a while, so grab a cup of coffee):
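With the Cloud-init script saved as, say, socketplane-first.sh, the commands look roughly like this:

```
nova boot --flavor m1.medium --image "ubuntu-14.04-trusty" \
    --key-name mykey --user-data socketplane-first.sh \
    --nic net-id=<tenant-net-uuid> socketplane1

# Attach a floating IP so we can SSH in from the outside
nova floating-ip-associate socketplane1 <floating-ip>
```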
You have to customize the flavor, image, key-name, net-id and floating IP to suit your OpenStack environment before running these commands. I attach a floating IP to the node to be able to log into it and interact with SocketPlane. If you want to watch the progress of Cloud-init, you can now tail the output logs via SSH like this:
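Cloud-init on Ubuntu 14.04 writes its output to /var/log/cloud-init-output.log:

```
ssh ubuntu@<floating-ip> tail -f /var/log/cloud-init-output.log
```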
While Cloud-init runs, you can see the SocketPlane setup script fetching the Docker images for SocketPlane’s dependencies and for the SocketPlane agent itself. When the bootstrapping is done, the script prints a final “Done!!!” line, which marks the end of the setup script downloaded from get.socketplane.io. The next line of output comes from the “sudo socketplane cluster bind eth0” command I included in the Cloud-init script.
Important note about SocketPlane on OpenStack VMs
If you just follow the deployment instructions for a Non-Vagrant install / deploy in the SocketPlane README, you might run into an issue with the SocketPlane agent. By default, the agent tries to autodetect the network interface to bind to, but that does not seem to work as expected on OpenStack VMs. If you hit this issue, the agent log fills up with repeated messages from the failing interface autodetection.
To resolve this issue you have to explicitly tell SocketPlane which network interface to use:
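On my VMs that interface is eth0:

```
sudo socketplane cluster bind eth0
```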
If you don’t, the SocketPlane setup process gets stuck and never completes. This step is required on all nodes in the cluster, since they all follow the same setup process.
Check the SocketPlane agent logs
The “socketplane agent logs” CLI command is useful for checking the cluster state and seeing what events have occurred after the initial setup process has finished.
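Run it on the node over SSH:

```
sudo socketplane agent logs
```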
SocketPlane uses Consul as a distributed key-value store for cluster configuration and cluster membership tracking. From the log output we can see that a Consul agent is started, the “socketplane1” host joins, a leader election is performed (which this single Consul agent obviously wins), and key-value pairs for the default subnet and network are created.
A note on the SocketPlane overlay network model
The real power of the SocketPlane solution lies in the overlay networks it creates. The overlay network spans all SocketPlane nodes in the cluster. SocketPlane uses VXLAN tunnels to encapsulate container traffic between nodes, so that several Docker containers running on different nodes can belong to the same virtual network and get IP addresses in the same subnet. This resembles the way OpenStack itself can use VXLAN to encapsulate traffic for a virtual tenant network that spans several physical compute hosts in the same cluster. Using SocketPlane on an OpenStack cluster that itself uses VXLAN (or GRE) therefore means two layers of encapsulation, which is something to keep in mind if MTU or fragmentation issues occur.
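If you run into such issues later, a useful first check is the interface MTU inside a container, combined with a ping that disallows fragmentation (the address is just an example from the “net1” network created further below):

```
# Inside a container on the overlay network:
ip link show eth0                    # note the MTU value
ping -M do -s 1400 -c 3 10.100.0.2   # large packets with Don't Fragment set
```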
Spin up more SocketPlane worker nodes
Of course we need some more nodes as workers in our SocketPlane cluster to make it a real cluster, so create another Cloud-init script for them to use:
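As before, eth0 is assumed to be the primary interface:

```
#!/bin/sh
# Same installer as on the first node, but without BOOTSTRAP=true,
# so this node discovers and joins the existing cluster instead.
curl -sSL http://get.socketplane.io/ | sudo sh
# Bind explicitly to eth0, as on the first node
sudo socketplane cluster bind eth0
```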
This is almost identical to the first Cloud-init script, just without the BOOTSTRAP=true environment variable.
Spin up a couple more nodes:
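With the worker script saved as, say, socketplane-node.sh, and the same placeholder flavor, image, key and net-id as before:

```
for NODE in socketplane2 socketplane3; do
    nova boot --flavor m1.medium --image "ubuntu-14.04-trusty" \
        --key-name mykey --user-data socketplane-node.sh \
        --nic net-id=<tenant-net-uuid> $NODE
done
```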
Watch the agent log from the first node in real time with the -f flag (just like with “tail”) to validate that the nodes join the cluster as they are supposed to:
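From the SSH session on the first node:

```
sudo socketplane agent logs -f
```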
The nodes joined the cluster as expected, with no need to SSH into the VMs and run any CLI commands, since Cloud-init took care of the entire setup process. As you may have noticed, I didn’t allocate floating IPs to the new worker VMs, since I don’t need direct access to them. All the VMs run in the same OpenStack virtual tenant network and can communicate internally on that subnet (10.20.30.0/24 in my case).
Create a virtual network and launch the first container
To test the new SocketPlane cluster, first create a new virtual network “net1” with an address range you choose yourself:
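I’ll use 10.100.0.0/24 as an example range; the command takes the network name and a CIDR:

```
sudo socketplane network create net1 10.100.0.0/24
```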
Now you should have two SocketPlane networks: the default one and the new one you just created.
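A quick check with the list command:

```
sudo socketplane network list
```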
Now, launch a container on the virtual “net1” network:
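For example:

```
sudo socketplane run -n net1 -it ubuntu /bin/bash
```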
The “-n net1” option tells SocketPlane what virtual network to use. The container is automatically assigned a free IP address from the IP address range you chose. I started an Ubuntu container running Bash as an example. You can start any Docker image you want, as all arguments after “-n net1” are passed directly to the “docker run” command which SocketPlane wraps.
The beauty of SocketPlane is that you don’t have to do any port mapping or linking for containers to be able to communicate with other containers. They behave just like VMs launched on a virtual OpenStack network and have access to other containers on the same network, in addition to resources outside the cluster:
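For example, from inside the container (8.8.8.8 is just a convenient external address to test against):

```
# Inside the container: eth0 gets an address from the net1 range
ip addr show eth0
# Verify connectivity to the outside world
ping -c 3 8.8.8.8
```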
Multiple containers on the same virtual network
Keep the previous window open to keep the container running and SSH to the first SocketPlane node again. Then launch another container on the same virtual network and ping the first container to verify connectivity:
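Assuming the first container was assigned 10.100.0.2 (check with “ip addr” as above):

```
sudo socketplane run -n net1 -it ubuntu /bin/bash
# Then, inside the new container:
ping -c 3 10.100.0.2
```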
As expected, both containers see each other on the subnet they share and can communicate. However, both containers run on the first SocketPlane node in the cluster. To prove that this communication also works between different SocketPlane nodes, I’ll SSH from the first node to the second and start a new container there. To SSH between nodes I use the private IP address of the second SocketPlane VM, since I didn’t allocate a floating IP to it:
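Assuming the second VM got 10.20.30.12 on the tenant subnet:

```
# From socketplane1, hop to the second node over the tenant network
ssh ubuntu@10.20.30.12

# On socketplane2, start a container on the same overlay network
sudo socketplane run -n net1 -it ubuntu /bin/bash

# Inside it, ping one of the containers running on the first node
ping -c 3 10.100.0.2
```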
No trouble there: the new container on a different OpenStack VM can reach the other containers and communicate with the outside world as well.
This concludes the Getting Started tutorial for SocketPlane on OpenStack VMs. Please bear in mind that this is based on a technology preview of SocketPlane, which is bound to change as SocketPlane and Docker become more integrated in the months to come. I’m sure the addition of SocketPlane to Docker will bring great benefits to the Docker community as a whole!