
What are containers?

Containers are a technology that encapsulates a complete runtime environment: an application, the libraries and binaries it depends on, and any configuration files it needs. This means that developers can worry less about runtime environments and more about the apps they are developing. Containers increase an application’s portability, scalability, security, and agility.

Conceptually, containers are similar to Virtual Machines (VMs); the key difference is that containers share the host OS kernel rather than each packaging a full guest OS. The container is abstracted from the underlying OS distribution and infrastructure, offering true portability between development, staging, and production environments. Thus, containers are unaffected by OS upgrades or migration between platforms, whether the execution environment is on-premises or on one or more clouds.


What is a containerized application? 

Containerized applications are applications that can be deployed without regard to the underlying infrastructure. Containerized applications are isolated from each other in a similar way to VMs, thus increasing reliability and reducing problems caused by application interactions.

What are the benefits of using containers?

  • Lighter weight. Containers do not include a full guest OS, so they consume fewer resources than VMs while providing similar application isolation.
  • Portable and consistent. Containers can be developed and deployed on virtually any infrastructure, and no application changes are required to run containerized applications on different infrastructures. Containerized applications are infrastructure-agnostic and will perform the same regardless of where they are deployed.
  • Scalability. Container deployments can be scaled up or down automatically and nearly instantly on demand as workload requirements change, thanks to orchestration tools like Kubernetes.
  • Agility. Containers are the ideal platform for DevOps and microservices deployments, improving CI/CD pipeline and developer efficiency. For example, applications can be composed in multiple containers that communicate via APIs, enabling additional front-end instances to be initiated to handle demand peaks without the need to increase the number of back-end instances.
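As a sketch of the scaling and agility points above: with Kubernetes, the number of running front-end instances is a single field in a Deployment manifest, so it can be changed without touching the back end. The names and image below are hypothetical placeholders, not from this article:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend                 # hypothetical front-end tier
spec:
  replicas: 5                    # raise or lower to handle demand peaks
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: web
        image: registry.example.com/frontend:1.0   # placeholder image
        ports:
        - containerPort: 8080    # front end talks to the back end via its API
```

Because the front end and back end are separate containers communicating over APIs, only `replicas` on this one Deployment needs to change to add front-end capacity.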

What problems do containers solve?

Containers are versatile and solve a broad range of IT problems throughout an application's lifecycle.

Some key problems include:

  • Ensuring software runs properly when moved between computing environments
  • Increasing efficiency by eliminating the need for a separate hypervisor for every containerized application
  • Eliminating conflicts and dependencies between multiple applications running on the same hardware
  • Facilitating microservices deployments, enhancing DevOps practices and overall agility

What are some container use cases for developers?

  • Improve application portability across different platforms and configurations, so that code built and tested against one version of a language compiler or interpreter keeps running unchanged, because that runtime version ships inside the container image.
  • Free developers from having to develop, test, and deploy on the same infrastructure, so that developers who write code on their laptops can be confident the application will run as desired on any other infrastructure, whether an on-premises server or a cloud-based VM.
  • Facilitate agile development processes such as CI/CD, speeding code acceptance and deployment.
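The portability in the first two bullets comes from the image pinning the runtime and every dependency. A minimal sketch of what that packaging might look like for a hypothetical Python service (file names and versions are illustrative, not from this article):

```dockerfile
# Pin the exact language runtime the code was developed against.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code itself.
COPY . .

# The same command runs identically on a laptop, a server, or a cloud VM.
CMD ["python", "app.py"]
```

Once built, this image runs the same wherever a container engine is available, which is what frees the developer's laptop build to be trusted in production.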

What are some container use cases for IT operations?

  • Improve application security by isolating applications from one another, just as VMs do, but in a lighter-weight fashion, since every container on a host runs on a single OS instance on the underlying infrastructure.
  • Seamlessly migrate applications. Since containerized applications run the same across different OS versions, network topologies, and storage configurations, containers enable seamless migration of applications to and from cloud platforms.
  • Improve IT efficiency by enabling multiple application containers to run on a single OS instance. Since containers are often tens of megabytes in size, whereas VMs are often ten or more gigabytes, a substantially larger number of containers can run on a single server instance.
  • Offer extreme on-demand scalability: additional container instances can spin up or down in milliseconds, whereas VMs can take minutes to start.
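That on-demand scalability is typically delegated to the orchestrator rather than handled by operators. As one hedged sketch, a Kubernetes HorizontalPodAutoscaler can grow and shrink a deployment automatically; the names below are hypothetical:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                    # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add containers when average CPU exceeds 70%
```

Because each new container starts in milliseconds, the autoscaler can track load far more closely than VM-based scaling could.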

How do containers support a cloud-native strategy?

The VMware paper “How to Think Cloud Native” defines being cloud native as structuring teams, culture, and technology to use automation and architectures to manage complexity and unlock velocity. A cloud-native mode of operations is as much about scaling people as it is about scaling infrastructure to meet rapidly changing needs.

Thus, despite the name, an organization need not run in the cloud or use containers to be cloud native; rather, it can apply those techniques of automation and architecture incrementally to smooth any transition to the cloud.

Here then are some ways containers support cloud-native deployments:

  • Containers are ubiquitous. They run on Linux and Windows, on public and private clouds, and on on-premises infrastructure. One container image can be deployed across many clouds and hybrid clouds to take advantage of a given provider’s offerings.
  • Containers work with the same open-source orchestration tools, such as Kubernetes, across all supported environments, thus fostering cloud migration and application mobility.
  • No need to worry about hardware configurations. Since containers abstract underlying infrastructure, developers need not be concerned with hardware configurations whether running on-premises, in the cloud, on VMs, or on bare metal.
  • Containers enhance application security by preventing unwanted interactions between applications that could impact the underlying OS or neighboring containers.

How do containers work?

The story of containers began in 1979 with the Unix chroot system call, which changed a process’ root directory to another location or namespace in the file system. By 2001 Linux VServer emerged as a ‘jail’ mechanism enabling a system to be partitioned into multiple file systems, IP addresses, and memory. Solaris containers debuted in 2004, leading to Linux containers (LXC) in 2008, and eventually to Docker’s introduction in 2013, which led to the dramatic growth of container adoption we are enjoying today. The availability of tools for container orchestration like Kubernetes added fuel to the growth of containers by offering comprehensive tools to create, manage, scale, and deploy containers across multiple platforms.

Just like VMs, containers isolate applications from one another – with the big difference being that each container runs atop a single shared OS instance, rather than encapsulating an OS within the container. The result is a much more lightweight instance with most of the benefits of VMs.

There are two main components of a container platform: a registry or repository used to store and transfer container images, and the container engine that runs those images as executable containers.

Container repositories enable reuse of commonly needed container images, such as databases and development-language runtimes. The container engine's application programming interface (API) facilitates container management and inter-container communication.

Containers are created by packaging an application, whether monolithic or microservices-based, together with its libraries and other dependencies, building on images that reside in one or more repositories. This eliminates portability and compatibility issues.

Major container features include:

  • Namespaces, which control what a container can see of the underlying OS. A container can have multiple namespaces, each scoping a different OS facility such as user ID information or mounted file systems.
  • Control groups, which are resource managers in the Linux kernel that ensure that a container does not hog system resources but only uses what is allocated to it.
  • Union file systems, which prevent data duplication when new containers or instances are deployed by ‘stacking’ files and directories into a single file system.
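The union-file-system idea in the last bullet can be illustrated in miniature: reads resolve top-down through a stack of shared, read-only layers, while writes land only in a per-container layer (copy-on-write). This is a conceptual sketch in plain Python using dicts as stand-in "layers"; real implementations such as overlayfs operate on directories, and all names and file contents here are made up:

```python
class UnionFS:
    """Toy model: a stack of read-only image layers plus one writable layer."""

    def __init__(self, *layers):
        self.layers = list(layers)   # bottom -> top read-only image layers
        self.writable = {}           # this container's copy-on-write layer

    def read(self, path):
        # The topmost layer containing the path wins.
        if path in self.writable:
            return self.writable[path]
        for layer in reversed(self.layers):
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        # Writes never modify the shared layers; they go to the container's
        # own layer, so base layers stay deduplicated across containers.
        self.writable[path] = data


base = {"/etc/os-release": "Alpine"}      # hypothetical shared base layer
app = {"/app/server.py": "print('hi')"}   # hypothetical application layer
fs = UnionFS(base, app)
fs.write("/etc/os-release", "patched")    # shadows the base copy here only
```

Because every container built from the same image shares the read-only layers, deploying a new instance adds only a thin writable layer instead of duplicating the whole file system.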

What are the pillars of a successful containerization strategy?

Successful container strategies have certain pillars in common:

  1. Secure enterprise-wide buy-in for the container endeavor
  2. Build a business case and prove it with a proof of concept first
  3. Begin with low-hanging fruit – i.e. modern, non-clustered applications
  4. Apply CI/CD design and testing principles
  5. Use orchestration and automation with Kubernetes
  6. After POC success, add training, support, and governance models for continuous, service-oriented monitoring
  7. Develop an infrastructure plan that works for your applications

