The best practices for container security

Robust and resilient security solutions can give both providers and users the confidence to embrace the containerization journey. In this section, we provide a number of tips, best practices, and key guidelines, collected from different sources, to help security administrators and consultants tightly secure Docker containers. The bottom line is that if containers run in a multi-tenant system and you are not following proven security practices, then definite dangers lurk on the security front.

The first and foremost piece of advice is: don't run random, untested Docker images on your system. Instead, strategize around trusted repositories of Docker images and containers, and subscribe to them for application development, packaging, shipping, deployment, and delivery. Past experience makes it clear that untrusted containers downloaded from the public domain can lead to malevolent and messy situations. Linux distributions, such as Red Hat Enterprise Linux (RHEL), have mechanisms in place to help administrators ensure the utmost security.
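One way to enforce this advice at the command line is Docker Content Trust, which makes the client verify a publisher's signature before pulling an image. This is a minimal sketch, assuming a Docker version with Content Trust support; the image name is only an example:

```shell
# Enable Docker Content Trust for this shell session; with it set,
# docker pull verifies the publisher's signature and refuses
# unsigned tags.
export DOCKER_CONTENT_TRUST=1

# The pull succeeds only if the tag carries valid trust data.
docker pull docker.io/library/alpine:latest

# Review the signatures attached to a tag before using it.
docker trust inspect --pretty docker.io/library/alpine:latest
```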

The best practices widely recommended by Docker experts, such as Daniel Walsh (Consulting Engineer, Red Hat), are as follows:

  • Only run container images from trusted parties
  • Container applications should drop privileges or run without privileges whenever possible
  • Make sure the kernel is always updated with the latest security fixes; kernel security is critical
  • Make sure you have support teams watching for security flaws in the kernel
  • Use a good quality supported host system for running the containers, with regular security updates
  • Do not disable security features of the host operating system
  • Examine your container images for security flaws and make sure the provider fixes them in a timely manner
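The second recommendation above, dropping privileges, can be sketched with plain `docker run` options. This is a minimal illustration, assuming a recent Docker engine; the alpine image and the UID are arbitrary choices:

```shell
# Run the container as a non-root user, drop every Linux capability,
# and forbid the process from regaining privileges via setuid binaries.
docker run --rm \
  --user 1000:1000 \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  alpine id
# id should report uid=1000 rather than uid=0 (root).
```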

As mentioned previously, the biggest problem is that not everything in Linux is namespaced. Currently, Docker uses five namespaces to alter a process's view of the system: process, network, mount, hostname, and shared memory. While these give users some level of security, it is by no means as comprehensive as that of KVM. In a KVM environment, processes in a VM do not talk to the host kernel directly. They do not have any access to kernel filesystems. Device nodes talk to the VM's kernel, not the host's. Therefore, in order to escalate privileges out of a VM, a process has to subvert the VM's kernel, find an exploitable vulnerability in the hypervisor, break through the SELinux controls (sVirt), and then attack the host's kernel. In the container landscape, the approach is to protect the host from the processes within a container and to protect containers from other containers. It is all about combining or clustering multiple security controls together to defend containers and their contents.
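The namespaces mentioned above can be observed directly: every process exposes its namespace membership under /proc/<pid>/ns. The following sketch lists them for the current shell; the docker command in the comment assumes a running daemon and uses alpine purely as an example:

```shell
# Each entry below is a symlink naming one namespace the process
# belongs to (mnt, net, pid, uts, ipc, and so on).
ls -l /proc/self/ns

# Run inside a container, the same listing shows different namespace
# IDs from the host's, confirming the process has an altered view:
#   docker run --rm alpine ls -l /proc/self/ns
```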

Basically, we want to put in as many security barriers as possible to prevent any sort of break out. If a privileged process can break out of one containment mechanism, the idea is to block them with the next barrier in the hierarchy. With Docker, it is possible to take advantage of as many security mechanisms of Linux as possible. The following are the possible security measures that can be taken:

  • Filesystem protection: Filesystems need to be read-only in order to prevent any kind of unauthorized write. That is, privileged container processes cannot write to them and cannot affect the host system either. Generally, most applications do not need to write anything to their filesystems. Several Linux distributions ship with read-only filesystems. It is, therefore, possible to block privileged container processes from remounting filesystems as read-write. It is all about blocking the ability to mount any filesystems within the container.
  • Copy-on-write filesystems: Docker has been using the Advanced Multi-Layered Unification Filesystem (AUFS) as a filesystem for containers. AUFS is a layered filesystem that can transparently overlay one or more existing filesystems. When a process needs to modify a file, AUFS first creates a copy of that file and is capable of merging multiple layers into a single representation of a filesystem. This process is called copy-on-write, and this prevents one container from seeing the changes of another container even if they write to the same filesystem image. One container cannot change the image content to affect the processes in another container.
  • The choice of capabilities: Typically, there are two classes of processes for permission checks: privileged and unprivileged. Privileged processes bypass all kernel permission checks, while unprivileged processes are subject to full permission checking based on the process's credentials. Recent Linux kernels divide the privileges traditionally associated with the superuser into distinct units known as capabilities, which can be independently enabled and disabled. Capabilities are a per-thread attribute. Removing capabilities can bring forth several positive changes in Docker containers. Invariably, capabilities decide a container's functionality, accessibility, usability, security, and so on. Therefore, adding as well as removing capabilities requires careful thought.
  • Keeping systems and data secure: Some security issues need to be addressed before enterprises and service providers use containers in production environments. Containerization will eventually make it easier to secure applications for the following three reasons:
    • A smaller payload reduces the surface area for security flaws
    • Instead of incrementally patching the operating system, you can simply replace it with an updated image
    • By allowing a clear separation of concerns, containers help IT and application teams collaborate purposefully
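The filesystem protections described in the first bullet above can be tried out with `docker run` flags. A minimal sketch, assuming a recent Docker engine (the alpine image is illustrative):

```shell
# Mount the container's root filesystem read-only and supply a tmpfs
# for the one path the process legitimately writes to.
docker run --rm \
  --read-only \
  --tmpfs /tmp \
  alpine sh -c 'echo ok > /tmp/scratch && echo oops > /etc/scratch'
# The write to /tmp succeeds; the write to /etc fails with a
# "Read-only file system" error, as intended.
```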

The IT department is responsible for security flaws associated with the infrastructure. The application team fixes flaws inside the container and is also responsible for runtime dependencies. Easing the tension between the IT and application development teams helps smooth the transition to a hybrid cloud model. The responsibilities of each team are clearly demarcated in order to secure both containers and their runtime infrastructures. With such a clear segregation, tasks such as proactively identifying visible and invisible security threats and promptly eliminating them, engineering and enforcing policy, maintaining precise and correct configuration, and leveraging appropriate security-detection and mitigation tools can be accomplished systematically.

  • Leveraging Linux kernel capabilities: An average server (bare metal or VM) needs to run a bunch of processes as root. These typically include ssh, cron, syslogd, hardware management tools (for example, loading kernel modules), and network configuration tools (for example, handling DHCP, WPA, or VPNs). A container is very different because almost all of these tasks are handled by the infrastructure on which the containers are hosted and run. There are several best practices, key guidelines, technical know-how, and so on in various blogs authored by security experts. You can find some of the most interesting and inspiring security-related details at https://docs.docker.com/.
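To make the capability discussion concrete, here is a hedged sketch of granting a service only the capabilities it needs instead of full root. The capability set shown is what the stock nginx image is commonly reported to need; treat both the image and the set as assumptions to verify against your own workload:

```shell
# Drop everything, then add back only the capabilities the nginx
# master process needs: changing file ownership, switching to the
# worker UID/GID, and binding a low port.
docker run --rm -d \
  --cap-drop ALL \
  --cap-add CHOWN \
  --cap-add SETUID \
  --cap-add SETGID \
  --cap-add NET_BIND_SERVICE \
  -p 8080:80 \
  nginx
```

If the container exits immediately, the set was too narrow for that image; add capabilities back one at a time rather than reverting to full root.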