Three years ago, the first public release of Docker made containers easier to use for all developers. Since then, Docker has evolved considerably and has become a full platform. The Docker Engine is still the central component, providing an open source runtime to execute applications in containers; it has been augmented by additional open source tools like Machine, Swarm, and Compose, as well as commercial services like Docker Hub, Docker Cloud, Docker Datacenter, and more.
Docker and containers let us run applications in a reliable, portable way. An application's build is defined in a "Dockerfile", and multiple containers can be combined in a "Compose file" to form complex applications.
This makes it really easy to deploy an application on any machine: a development PC, a test server on a cloud VM, a physical machine in a rack…
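As a minimal sketch (the service names and images here are illustrative, not taken from the workshop's demo application), a Compose file combining a web service with a Redis backend could look like this:

```yaml
version: "2"
services:
  web:
    build: .          # built from the Dockerfile in the current directory
    ports:
      - "80:5000"     # publish the app's port 5000 on the host's port 80
  redis:
    image: redis      # off-the-shelf image from Docker Hub
```

Running `docker-compose up` then builds and starts both containers together.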
Scaling to higher traffic loads
The next challenge is to scale our applications to handle higher traffic loads. This is done by creating multiple instances of the services that require more processing power, and by balancing the load across those instances. It also means that our application will span multiple machines, so we must address network communication and service discovery in that new environment.
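With Compose, for instance, creating multiple instances of a service takes a single command (assuming a hypothetical `worker` service defined in the Compose file):

```shell
docker-compose up -d              # start the application in the background
docker-compose scale worker=4    # run four instances of the worker service
```

Compose creates the additional containers; balancing the load across them is a separate concern, covered in the workshop.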
Docker Swarm is a native clustering mechanism that lets you transition seamlessly from a single Docker node to a cluster of up to thousands of nodes. Swarm is designed to blend perfectly into the Docker ecosystem by leveraging the same APIs and tools as the Docker Engine. After deploying a Swarm cluster, you can use it with the Docker CLI, Compose, and your favorite tools, as if your whole cluster were a single, unified, elastic Docker host.
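For example (the manager address below is a placeholder), pointing the Docker CLI at the Swarm manager is enough to make the usual commands operate on the whole cluster:

```shell
# Placeholder address of the Swarm manager.
export DOCKER_HOST=tcp://swarm-manager:3375

# Swarm schedules this container on one of the cluster's nodes.
docker run -d nginx

# Lists containers across the whole cluster.
docker ps
```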
Swarm leverages a number of powerful mechanisms: dynamic cluster membership, service discovery, overlay networks allowing private communication between groups of containers, resource allocation and provisioning, and much more. Learning all those concepts, and how to implement them, can seem daunting; but in collaboration with our Docker partner Neofonie, we are offering a one-day workshop on April 22nd in Berlin that will show you it is much easier than it looks!
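As a taste of what the workshop covers, here is a sketch of creating an overlay network and attaching a container to it (the network and container names are illustrative):

```shell
# Create an overlay network spanning the cluster nodes.
docker network create --driver overlay mynet

# Containers attached to "mynet" can reach each other privately,
# even when they run on different hosts.
docker run -d --net=mynet --name db redis
```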
During this workshop, we will deploy a demo application built around a microservices architecture. The application features multiple languages and frameworks (Python, Ruby, Node.js), stateful data stores, and web services. Each student will have their own cluster of five nodes and will set up Swarm, a highly available key/value store, and overlay networks. We will show how to manage the application lifecycle, store private images in local registries, and deploy and scale the demo application across the cluster.
We will present multiple strategies to implement load balancing and high availability.
There will also be a whole chapter on operations tools and techniques. We will see how to set up a centralized log collection system and how to route container logs to it. We will also tackle backups, security upgrades, network supervision, and traffic analysis.
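For example, the Docker Engine's logging drivers can route a container's logs to a central endpoint; a sketch with a placeholder syslog address:

```shell
# Send this container's logs to a remote syslog collector
# instead of the local JSON log files.
docker run -d \
  --log-driver=syslog \
  --log-opt syslog-address=udp://logs.example.com:514 \
  nginx
```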
The workshop will be delivered by Jérôme Petazzoni, who has six years of experience running containers in production at scale. Before Docker, Jérôme was part of the team that built and operated dotCloud, a container-based platform-as-a-service. He has delivered this workshop multiple times and continuously improves the material to include the latest Docker features and to address feedback from previous sessions.
Last but not least, the workshop is designed so that you can learn as much as possible, as fast as possible: students do not need to install Docker on their own computers. They will be provided with a cluster of cloud virtual machines, so they can jump right into the action. All you need is a computer with an SSH client.
Docker Workshop for Beginners: in Berlin, Hamburg, Cologne, or Frankfurt – register now!