Defining container orchestration
Learning objectives
- You know what container orchestration means.
So far, we've worked with Docker and Docker Compose, relying in part on the knowledge from the Web Software Development Course. Essentially, Docker has been used to create containers, and Docker Compose has been used to run them. In this course, we've also briefly looked into adjusting the number of container replicas, but we haven't really looked into how to manage them.
What if, for example, a container crashes or a part of the application starts to become overburdened with the load?
To handle a crashed container or to increase the number of replicas, a classic approach would be to manually restart the container (or to rely on a restart policy) and to manually adjust the number of replicas in the application. However, this approach does not scale, as it requires manual intervention.
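As a sketch of this manual approach with Docker Compose, a restart policy can be declared per service (the service name `app` and the image are placeholders):

```yaml
# compose.yaml -- hypothetical example; service and image names are placeholders
services:
  app:
    image: my-app:latest
    # A restart policy recovers a crashed container,
    # but only on this single host
    restart: unless-stopped
```

Scaling would then still require a manual command such as `docker compose up -d --scale app=3`, and someone has to notice the load and run it.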
This is where container orchestration comes in. The term orchestration refers to the automated configuration, coordination, and management of computer systems and software. In the context of containers, the term refers to automating the deployment, management, scaling, and networking of containers -- essentially managing the lifecycle of the containers. The automation becomes necessary with the increasing number of containers (e.g. through the use of microservices), as well as with the increasing complexity of the containerized applications; orchestration tools help in managing this complexity.
There are a handful of container orchestration implementations, including Kubernetes, Docker Swarm, and Apache Mesos. In this course, we'll be using Kubernetes, which is the most popular container orchestration tool.
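As a preview of what this automation looks like in Kubernetes, a Deployment declares a desired state, and Kubernetes continuously works to maintain it, restarting crashed containers and keeping the requested number of replicas running without manual intervention. The names and image below are placeholders:

```yaml
# deployment.yaml -- hypothetical example; names and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  replicas: 3          # desired count; Kubernetes maintains it automatically
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest
```

We'll look at Deployments and the rest of the Kubernetes concepts in detail later in the course.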