Container networking is an emerging application sandboxing mechanism, similar in concept to a virtual machine, used in environments ranging from home desktops to web-scale enterprise networks. Isolated inside the container, away from the host and all other containers, is a full-featured Linux environment with its own users, file system, processes, and network stack. Applications inside the container are permitted to access or modify only the files and resources available inside that container.
It is possible to run multiple containers at the same time, each with its own installation and dependencies. This is particularly useful when a newer version of an application requires an upgraded dependency that would conflict with the dependencies of other applications running on the server. Unlike virtual machines, containers share host resources rather than fully simulating all of the computer's hardware, which makes containers smaller and faster than virtual machines and reduces overhead. Particularly in the context of web-scale applications, containers were designed as a replacement for VMs as a deployment platform for microservice architectures.
Containers are also portable. Docker, a container engine, allows developers to package a container together with all of its dependencies; that package can then be made available for download, and once downloaded, the container can immediately be run on a host.
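As a concrete sketch of this package-and-ship workflow (the image name `myorg/myapp` and its registry are hypothetical, and Docker is assumed to be installed):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myorg/myapp:1.0 .

# Publish the image so other hosts can download it (registry is hypothetical)
docker push myorg/myapp:1.0

# On any other host with Docker installed: download and run immediately
docker pull myorg/myapp:1.0
docker run -d --name myapp myorg/myapp:1.0
```

Because the image bundles the application with its dependencies, the `run` step needs no further installation or configuration on the target host.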
Containers are a form of virtualization similar to virtual machines (VMs) in concept but with distinguishing differences. Primarily, containers are a form of operating-system virtualization, whereas VMs are a form of hardware virtualization.
Each virtual machine running on a hypervisor has its own operating system, applications, and libraries, and is able to encapsulate persistent data, install a new OS, use a different file system than the host, or run a different kernel version.
Conversely, a container is a "running instance" of an image: an ephemeral piece of operating-system virtualization that spins up to perform some task and is then deleted and forgotten. Because of this ephemeral nature, users run many more container instances than virtual machines, which requires a larger address space.
To create isolation, a container relies on two Linux kernel features: namespaces and control groups (cgroups). To give the container its own view of the system, isolated from other resources, a namespace is created for each resource (process IDs, mount points, network interfaces, and so on) and unshared from the rest of the system. Cgroups are then used to monitor and limit system resources such as CPU, memory, disk I/O, and network bandwidth.
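Both mechanisms are visible from an ordinary shell on a Linux host; a small sketch, assuming a reasonably modern kernel (the cgroup path shown is the cgroup-v2 layout, and `unshare` may need elevated privileges):

```shell
# Every process has a set of namespace handles under /proc/<pid>/ns;
# two processes in the same namespace share the same inode number here.
ls -l /proc/self/ns

# unshare(1) runs a command in fresh namespaces. Here a new UTS namespace
# lets the child set its own hostname without affecting the host
# (needs root or unprivileged user namespaces, hence the fallback).
unshare --uts sh -c 'hostname ns-demo; hostname' 2>/dev/null \
  || echo "unshare needs elevated privileges on this host"

# On a cgroup-v2 system, the resource controllers available for limiting
# (cpu, memory, io, ...) are listed here:
cat /sys/fs/cgroup/cgroup.controllers 2>/dev/null || true
```

A container engine performs the same two steps programmatically: it unshares a namespace per resource for the new container and places its processes into a cgroup with the requested limits.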
Containers are being rapidly adopted, replacing VMs as a platform for microservices.
Containers have several key benefits:
- Run Containerized Apps Alongside Existing Workloads: Machines can run containerized apps alongside traditional VMs on the same infrastructure, granting flexibility and speed.
- Combine Portability with Security, Visibility, and Management: The inherent design of containers allows for greater security through sandboxing, resource transparency with the host, task management, and execution-environment portability.
- Leverage Your Existing Infrastructure and Scale Easily: Use your existing software-defined data center (SDDC) to avoid the costly and time-consuming re-architecture of your infrastructure that results in silos. Silos occur when distinct departments maintain their own IT infrastructure within the same organization; this "silo effect" creates problems when rolling out organization-wide IT policies and upgrades because of the differences in each department's technical configuration. Reintegrating silos is a costly and time-consuming process that can be avoided through container networking.
- Provide Developers with a Docker-Compatible Interface: Developers already familiar with Docker can develop applications in containers through a Docker-compatible interface and then provision them through the self-service management portal or UI.
Containers are deployed as part of the microservices architecture in enterprise environments to help encapsulate the individual tasks common to large web applications. Each task may have its own container; external-facing containers such as APIs and GUIs are opened to the public internet, while the others reside on the private network.
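A minimal sketch of this public/private split with Docker, using hypothetical image and network names: only the API publishes a port to the outside world, while the database stays on a private Docker network reachable solely by other containers:

```shell
# External-facing API: publish container port 8000 on host port 80
docker run -d --name api -p 80:8000 myorg/api

# Private backend network plus an internal database: no -p flag, so the
# database is reachable only from containers on the same network,
# never from the public internet
docker network create backend
docker run -d --name db --network backend postgres:16
docker network connect backend api
```

The API container ends up attached to both the published front side and the private `backend` network, so it can talk to `db` by name while remaining the only publicly exposed service.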
The microservices model brings advantages:
- Ease of Deployment: Host configurations can be embedded in containers, making them ready to run as soon as they are deployed.
- Disposable: Containers are designed for quick startup and disposal. If the host fails, bringing applications back online is as simple as starting the same containers on a spare server.
- Fault-Tolerant: Containers make redundancy easy for databases and web servers. Replicating the same container across several nodes provides high availability and fault tolerance.
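For example, with Docker's built-in Swarm orchestrator (assuming a Swarm has already been initialized, and using `nginx:alpine` as a stand-in for the replicated service), this kind of redundancy is a single command:

```shell
# Run three replicas of the web container across the cluster; if a node
# or replica fails, the orchestrator reschedules it on a healthy node
docker service create --name web --replicas 3 -p 80:80 nginx:alpine

# Scale up later without redeploying anything
docker service scale web=5
```

The same pattern applies in other orchestrators; the key point is that redundancy is a declared replica count rather than hand-built spare servers.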
There are five types of container networking in use today. Their characteristics center on the IP-per-container versus IP-per-pod models and on whether network address translation (NAT) is required.
- None: The container receives a network stack but lacks an external connection. This mode is useful for testing containers, staging a container for a later network connection, and containers that require no external communication.
- Bridge: Bridged containers sit on an internal host network and are allowed to communicate with other containers on the same host, but they cannot be accessed from outside the host. Bridge networking is the default for Docker containers.
- Host: This configuration allows a created container to share the host’s network namespace, granting the container access to all the host’s network interfaces. The least complex of the external networking configurations, this type is prone to port conflicts due to the shared use of the networking interfaces.
- Underlay: Underlays open the host interfaces directly to containers running on the host and remove the need for port-mapping, making them more efficient than bridges.
- Overlay: Overlays use networking tunnels to communicate across hosts, allowing containers to behave as if they were on the same machine even when they run on different hosts.
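In Docker, these modes map onto the `--network` flag and network drivers. A sketch, assuming Docker is installed and the `alpine` image is available; the macvlan driver is shown as one underlay-style option, and its parent interface (`eth0`) and subnet are host-specific assumptions:

```shell
# None: loopback only, no external connectivity
docker run --rm --network none alpine ip addr

# Bridge (the default): private address on the docker0 bridge, NAT outward
docker run --rm --network bridge alpine ip addr

# Host: shares the host's network namespace and all of its interfaces
docker run --rm --network host alpine ip addr

# Underlay-style (macvlan driver): containers get addresses directly on
# the physical network, no port-mapping (parent interface is host-specific)
docker network create -d macvlan -o parent=eth0 \
  --subnet 192.168.1.0/24 lan-net

# Overlay: a tunneled network spanning multiple hosts (requires Swarm mode)
docker network create -d overlay --attachable multi-host-net
```

Comparing the `ip addr` output across the first three commands makes the difference concrete: no interfaces beyond loopback, a private bridge address, and the host's full interface list, respectively.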