Simplifying Containerization And Virtualization Management On Linux And Unix-Like Hosts

Understanding Containers and Virtual Machines

Containers and virtual machines are two virtualization technologies that allow multiple workloads to run on a single Linux or Unix-like host. Containers virtualize at the operating system level by isolating processes and resources using features like namespaces and cgroups in the Linux kernel. This allows multiple isolated container instances to run on the same Linux kernel. Virtual machines (VMs) virtualize at the hardware level by emulating virtual hardware on which a full guest operating system, including its own kernel, runs as an isolated VM instance.
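
A quick way to see this OS-level isolation first-hand is with the unshare utility from util-linux, which exercises the same kernel namespaces that container runtimes build on:

# Start a shell in new PID and mount namespaces
$ sudo unshare --pid --fork --mount-proc bash

# Inside that shell, ps sees only the namespaced processes
$ ps aux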

The key differences between containers and VMs are:

  • Containers share the host operating system kernel, while VMs run their own guest operating system kernel
  • Containers impose little performance overhead due to OS-level virtualization while VMs impose more overhead due to hardware emulation
  • Containers are more lightweight and start almost instantly while VMs take more time to boot up the guest OS
  • Containers consume fewer resources than VMs running on the same host

In general, containers are the preferred choice when you want to deploy and scale applications rapidly. Their lightweight and portable nature makes containers a natural fit for microservices and cloud-native applications. Virtual machines provide stronger workload isolation and can run different guest operating systems, making them a better fit for legacy monolithic applications that cannot be modernized.

Managing Docker Containers

Docker is the most popular containerization platform that popularized containers on Linux. Docker provides tooling and a runtime to build, run, share and manage containers.

Installing Docker on Linux/Unix

Most Linux and Unix-like operating systems have Docker available as an installable package. On Debian, Ubuntu, and related distributions you can install Docker using:

$ sudo apt update
$ sudo apt install docker.io

For RHEL, CentOS, Fedora, and related distributions, the Docker package can be installed using yum (or dnf on newer releases):

$ sudo yum install docker

Docker relies on the lower-level runtimes containerd and runc. Additional kernel capabilities and cgroup mounts need to be enabled during installation so Docker can function properly and isolation works correctly.
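
After installation, it is worth verifying that the daemon, storage driver, and cgroup setup are working before moving on:

# Enable and start the Docker daemon
$ sudo systemctl enable --now docker

# Inspect runtime, storage driver, and cgroup details
$ sudo docker info

# Smoke-test with a minimal container
$ sudo docker run --rm hello-world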

Running and Stopping Docker Containers

The docker container lifecycle commands allow running, starting, stopping, and deleting containers.

Some common examples include:

# Run a detached container with a fixed name
$ docker run -d --name nginx nginx

# Start a stopped container
$ docker start nginx

# Stop a running container
$ docker stop nginx

# Remove a stopped container
$ docker rm nginx

Docker uses union filesystem stacking of image layers and copy-on-write to efficiently create containers from images. System resources like CPU, memory, and storage can be limited on a per-container basis using docker run flags.
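
For example, CPU and memory ceilings can be applied with docker run flags (the container name here is illustrative):

# Cap the container at 1.5 CPUs and 512 MB of RAM
$ docker run -d --name capped --cpus 1.5 --memory 512m nginx

# Compare live usage against the configured limits
$ docker stats --no-stream capped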

Container Networking and Storage

Docker provides multiple network drivers to connect containers using user-defined bridges, overlays, MACVLANs, and more. Firewalld and iptables rules can secure inter-container and outside connectivity. Persistent container storage can be mapped to host directories or external storage volumes.

Some examples of container networking and storage in Docker include:

# Bridged container connectivity
$ docker network create -d bridge mybridge 

# Overlay for container clustering 
$ docker network create -d overlay myoverlay

# Host directory mount
$ docker run -v /data:/var/lib/data nginx  

# External volume mount
$ docker run -v mydata:/var/lib/data nginx
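
Named volumes such as mydata above can also be created and managed explicitly:

# Create, inspect, and list named volumes
$ docker volume create mydata
$ docker volume inspect mydata
$ docker volume ls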

Pushing and Pulling Images from Registries

Docker containers are instantiated from read-only template images that package the application binaries, libraries, dependencies, and other filesystem contents. These portable images can be stored and distributed using registry servers like Docker Hub and private Docker registries:

# Push image to registry 
$ docker push myregistry/myapp:1.0

# Pull image from registry
$ docker pull myregistry/myapp:1.0 

Registries allow sharing and deploying Docker images across hosts for consistent container execution. Images can be rebuilt from their Dockerfiles to pick up the latest dependencies.
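
As a minimal sketch, a Dockerfile for a hypothetical static site might look like the following, with the usual build and push steps after it:

# Dockerfile contents
FROM nginx:alpine
COPY ./site /usr/share/nginx/html

# Build and push the tagged image
$ docker build -t myregistry/myapp:1.0 .
$ docker push myregistry/myapp:1.0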

Managing Podman Containers

Podman is a daemonless container engine that provides improved security and rootless container capabilities compared to Docker. It supports OCI (Open Container Initiative) standards for compatibility with Docker.

Podman as a Daemonless Alternative to Docker

Unlike Docker, Podman does not require a long-running background daemon to run containers. The podman CLI communicates directly with OCI runtimes like runc and crun to manage containers.

Being daemonless allows Podman to run containers and pods more securely: each container is a child of the podman process that started it rather than of a shared root daemon, and there is no centralized Docker socket granting root access to the host OS.

Running Podman Containers with Improved Security

Podman accepts most docker run commands, but applies additional security measures:

# Run container as non-root user by default
$ podman run -dt redis 

# Restrict container capabilities  
$ podman run --cap-drop ALL redis

# Enforce SELinux, Apparmor, Seccomp policies
$ podman run --security-opt label=type:svirt_apache_t redis  

The rootless Podman mode allows non-privileged users to manage containers without requiring root access to the host.

$ podman unshare cat /proc/self/uid_map
         0       1000          1
         1     100000      65536

Sharing Containers with Docker using OCI Standards

Podman implements the OCI image and runtime specifications, giving it broad Docker compatibility. This allows building and running containers from Dockerfiles and Docker images on Podman:

# Build container using Dockerfile 
$ podman build -t myimage . 

# Run Docker official image 
$ podman run redis:alpine  

# Push and pull from Docker registries
$ podman push myregistry/myimage:1.0 
$ podman pull myregistry/myimage:1.0

The same familiar container workflows and commands work largely interchangeably between Docker and Podman thanks to these open standards.
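
In practice, many users simply alias one CLI to the other and carry on with their existing workflow:

# Reuse Docker muscle memory with Podman
$ alias docker=podman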

Leveraging Kubernetes for Container Orchestration

Kubernetes has become the de facto open-source container orchestration platform for deploying and managing container workloads at scale.

Kubernetes Architecture and Main Components

A Kubernetes cluster consists of one or more control plane nodes running the API server, scheduler, and controller manager, plus worker nodes running the kubelet and a container runtime where applications are actually deployed. The etcd key-value store serves as the cluster's backing database.

Pods containing tightly coupled containers are the basic schedulable units that model an application instance. Higher level abstractions like Deployments automatically manage scale-out and updates for sets of pods.

# Create Redis deployment
$ kubectl create deployment redis --image=redis 

# Scale out deployment to 3 pod replicas
$ kubectl scale deployment redis --replicas=3 

# Expose deployment through a NodePort service:  
$ kubectl expose deployment redis --port 6379 --type=NodePort 
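
The resulting objects can then be inspected to confirm the rollout:

# Inspect the deployment, its pods, and the service
$ kubectl get deployment,pods,service -l app=redis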

Scaling Containerized Workloads on Clusters

Kubernetes provides multiple ways to scale stateless applications horizontally across nodes:

  • ReplicaSet – Defines number of pod replicas
  • Deployments – Declarative updates and rollbacks
  • Horizontal Pod Autoscaler – CPU/Memory threshold scaling (see the example below)
  • Cluster Autoscaler – Adds worker nodes

Storage abstractions like PersistentVolume and StatefulSet allow stateful apps to scale as well.
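
For example, the Horizontal Pod Autoscaler from the list above can be attached to a deployment with a single command (the CPU target and replica bounds here are illustrative):

# Scale between 3 and 10 replicas, targeting 70% CPU utilization
$ kubectl autoscale deployment redis --cpu-percent=70 --min=3 --max=10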

Updating Applications with Zero Downtime

Kubernetes Deployments natively support rolling updates, and rollout patterns such as canary and blue-green can be built on top of them to apply application updates with no downtime:

  
# Rolling update for backend deployment
$ kubectl set image deployment/backend backend=backend:v2

# Watch the rollout and roll back if something breaks
$ kubectl rollout status deployment/backend
$ kubectl rollout undo deployment/backend

# Canary and blue-green: run a second deployment (e.g. backend-v2)
# behind the same Service selector and shift traffic over gradually

Built-in service discovery, health checks, volumes, ConfigMaps, and Secrets further simplify application management.
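
ConfigMaps and Secrets, for instance, can be created straight from the command line and consumed by pods as environment variables or mounted files (the keys and values here are illustrative):

# Store non-sensitive configuration
$ kubectl create configmap app-config --from-literal=LOG_LEVEL=debug

# Store sensitive values separately
$ kubectl create secret generic app-secret --from-literal=DB_PASSWORD=changeme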

Provisioning Virtual Machines

Linux virtual machines run completely isolated guest operating systems, each with its own kernel, on the same hardware as the host system.

Hypervisor Functionality with KVM and QEMU

KVM (Kernel-based Virtual Machine) is the default hypervisor integrated into the Linux kernel for efficiently creating virtualized environments. The QEMU emulator and virtio drivers provide the virtual hardware the guest OS sees, such as disks, network adapters, graphics cards, and other devices.

Custom networking bridges transparently connect guest VMs while still isolating traffic from the host and between VMs. The libvirt toolkit manages all of this virtualization infrastructure through CLI tools like virsh and GUI tools like virt-manager.
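
Day-to-day VM management with virsh looks like this (the domain name myvm is illustrative):

# List all defined VMs, running or not
$ virsh list --all

# Start, gracefully shut down, or force off a guest
$ virsh start myvm
$ virsh shutdown myvm
$ virsh destroy myvm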

Creating VM Images with Virt Manager

Virtual machine images in qcow2 format package the bootable VM disk to allow provisioning from known clean OS templates. Common base images for Debian, CentOS, RHEL, and other distributions are available for download.

# Ubuntu 20.04 cloud image 
$ wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img 

Simplified graphical wizards in Virt Manager guide you through creating new VMs from ISOs, existing disks, or cloud images for quick setup.
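
The same can be scripted with virt-install; this sketch assumes the cloud image downloaded above has been copied into the libvirt images directory (cloud images also typically need a cloud-init seed for login credentials):

# Import the cloud image as a new VM with 2 vCPUs and 4 GB RAM
$ virt-install --name ubuntu-vm --vcpus 2 --memory 4096 \
    --disk /var/lib/libvirt/images/focal-server-cloudimg-amd64.img \
    --import --os-variant ubuntu20.04 --network network=default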

Allocating vCPUs, Memory and Storage to VMs

Key virtualized hardware such as CPU cores, RAM, and network interfaces can be configured in the domain XML used to define VMs:

<domain type='kvm'>
  <vcpu placement='static'>2</vcpu>
  <memory unit='KiB'>4194304</memory>
  <os>
     ...
  </os>
</domain>  

Tools like virsh, Virt Manager, or web UIs can adjust this hardware, and hot plugging support means many changes do not require shutting down the guest.
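
For example, virsh can resize a running guest, assuming the domain was defined with sufficient maximums (the domain name myvm is illustrative):

# Hot-add vCPUs and memory (size in KiB) to a running guest
$ virsh setvcpus myvm 4 --live
$ virsh setmem myvm 8388608 --live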

Networking Virtual Machines through Bridges and NAT

A Linux bridge interface connects VMs to virtual networks and to external connectivity:

# Create isolated network bridge
$ sudo brctl addbr vm-net

# List bridges 
$ brctl show 
vm-net
virbr0   

# Show tap devices attached to the bridge
$ brctl show vm-net

Forwarding and routed NAT provide outbound and inbound access while hiding internal VM IPs.
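
Note that brctl from bridge-utils is considered legacy; the same bridge can be managed with iproute2:

# Modern iproute2 equivalents
$ sudo ip link add name vm-net type bridge
$ sudo ip link set vm-net up
$ ip link show master vm-net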

Monitoring Usage and Statistics

With containers and VMs running business critical applications, monitoring resource usage, activity logs and health metrics is essential.

Resource Utilization Metrics for Containers and VMs

Solutions like Prometheus scrape and aggregate system resource metrics including:

  • CPU usage percentage
  • Memory utilization
  • Disk I/O rates
  • Network bandwidth

Time series records help analyze past usage and provision correctly sized containers and VMs.
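
For quick ad-hoc checks alongside a full monitoring stack, the runtimes expose the same metrics on the command line (the domain name myvm is illustrative):

# Live per-container resource usage
$ docker stats --no-stream
$ podman stats --no-stream

# Per-VM statistics from libvirt
$ virsh domstats myvm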

Visualizing Data with Grafana Dashboards

Grafana provides visually rich dashboards once metrics have been collected by Prometheus, InfluxDB, or other sources. Custom layouts can display usage per container, pod, VM, namespace, or any other logical grouping.

Setting Alerts and Notifications

Alertmanager enables setting thresholds on metrics that trigger notifications, immediately alerting teams of issues via:

  • Email
  • PagerDuty
  • Slack Channels
  • Webhooks

Knowing about problems like a VM going down, a container using excess resources, or hosts running low on disk space lets you troubleshoot faster.

Troubleshooting Performance Issues

Correlating monitoring metrics with application logs filtered by container or VM points to where to debug. Slow code paths manifest as higher CPU usage, and excessive retries show up as spikes in memory and network usage.

Flame graphs visualize sampled stack traces to show which functions consume the most resources. Distributed tracing follows requests across microservices to pinpoint latency.
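
A typical flame graph workflow with perf looks like this, assuming Brendan Gregg's FlameGraph scripts are available on the host:

# Sample all CPUs at 99 Hz for 30 seconds
$ sudo perf record -F 99 -a -g -- sleep 30

# Fold the stacks and render an interactive SVG
$ sudo perf script | stackcollapse-perf.pl | flamegraph.pl > flame.svg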
