The evolution of container technology is fueling interest in tools for securing microservices. This article sheds light on the solutions that exist in this area and the top threats to container infrastructure. It also explains who should be responsible for safeguarding such applications in a company.
Container-based microservices are increasingly popular and are used to accomplish a variety of tasks. However, traditional security measures can be ineffective in virtual environments. Let’s discuss what containerized applications are and how to secure them.
What are containers?
Containers resemble lightweight virtual machines, although they share the host operating system kernel instead of virtualizing hardware; they are designed to simplify and speed up the development process. The term is closely related to cloud-native applications, which are independent cloud services built on four basic elements: the DevOps paradigm, a CI/CD pipeline, a microservice architecture, and containers along with their orchestration tools.
The emergence of containers was a response to the shortcomings of multitasking in modern operating systems. Containerization first and foremost makes it easier to run multiple products, often open-source ones, side by side on a single server, something that would be much harder to achieve with standard operating system features alone. Containers can be provided as a service, delivered from the cloud, or deployed within a customer’s own infrastructure.
Orchestration tools are important elements of the container ecosystem. They provide load balancing, fault tolerance, and centralized management, and thereby create the conditions for scaling the system. Orchestration can be implemented in four ways:
- A managed service from a cloud provider;
- A self-deployed Kubernetes cluster;
- Container management systems intended for developers;
- Container management systems focused on usability.
There are three main stages in the container lifecycle. First, the container image is built and undergoes functional and load tests. Next comes the storage phase, when the image sits in a registry, waiting to be launched. The third stage is runtime, when a container is actually started from the image.
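To make these stages concrete, here is a minimal sketch using the Docker SDK for Python; the image name, registry, and test command are placeholders rather than anything prescribed above, and a real pipeline would add proper tests and error handling.

```python
# Minimal sketch of the three lifecycle stages with the Docker SDK for Python
# (pip install docker). Image name, registry, and test command are placeholders.
import docker

client = docker.from_env()

# Stage 1: build the image from a local Dockerfile and run an illustrative smoke test.
image, build_logs = client.images.build(path=".", tag="registry.example.com/team/app:1.0")
client.containers.run(image, command="pytest /app/tests", remove=True)

# Stage 2: storage - push the image to a registry, where it waits to be launched.
client.images.push("registry.example.com/team/app", tag="1.0")

# Stage 3: runtime - start a container from the stored image.
container = client.containers.run("registry.example.com/team/app:1.0", detach=True)
print(container.id, container.status)
```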
What can undermine container security?
In December 2020, cloud security company Prevasio examined about 4 million container images hosted on Docker Hub and found that 51% of them contained critical vulnerabilities, while another 13% contained high-severity flaws. Coin miners, hacking tools, and other types of malware, including ransomware, were detected inside poorly secured images. Only a fifth of all the analyzed images had no known vulnerabilities. These findings show the big picture: containers are susceptible to serious threats.
The security of the infrastructure that hosts containers is hugely important in this context. In addition to the proper configuration of orchestration systems, a well-thought-out set of permissions for accessing the Docker host or the Kubernetes cluster plays a major role. Another aspect is the protection of the container itself, which largely depends on the security of the images used to build it.
The later a vulnerability is identified, the harder it is to fix. This is the gist of the Shift Left paradigm, which recommends focusing on security as early in the product lifecycle as the design or requirements-gathering stage. Automated security checks can also be embedded into the CI/CD pipeline, as the sketch below illustrates.
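As an illustration, a pipeline step can refuse to pass a freshly built image that carries known critical flaws. The sketch below assumes the open-source Trivy scanner is installed on the build agent; the image name is a placeholder.

```python
# Hedged example of a "shift left" gate in CI/CD: scan the freshly built image
# and fail the job on critical or high findings. Assumes Trivy is installed.
import subprocess
import sys

IMAGE = "registry.example.com/team/app:1.0"  # placeholder

result = subprocess.run(
    ["trivy", "image", "--severity", "CRITICAL,HIGH", "--exit-code", "1", IMAGE]
)
if result.returncode != 0:
    print(f"Blocking the pipeline: {IMAGE} has unresolved critical/high vulnerabilities.")
    sys.exit(1)
```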
Slip-ups during the continuous integration (CI) phase are risky as well. For instance, using third-party testing services of questionable safety may result in leaks of product data. Therefore, container security should be approached holistically, with each stage of the software engineering lifecycle subject to thorough analysis. The boom in containerization has also raised the issue of trust in the environment, the code, and the running applications.
There are four levels of security for cloud-native applications: code security, build security, deployment security, and runtime security. Each of these includes several elements that need to be addressed. At the code security level, for instance, these are secure development and open-source component management. When it comes to container security in general, it essentially boils down to controlling integrity, delimiting access to the pipeline, and ensuring that vulnerabilities are identified before a product is released.
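Integrity control, for example, can take the form of signing images at build time and verifying the signature before deployment. The sketch below is only an illustration; it assumes the Sigstore Cosign CLI is available, and the key file and image name are placeholders.

```python
# Illustrative integrity check before deployment: refuse to roll out an image
# whose signature cannot be verified. Assumes the Cosign CLI is installed;
# the public key file and image name are placeholders.
import subprocess
import sys

IMAGE = "registry.example.com/team/app:1.0"

check = subprocess.run(["cosign", "verify", "--key", "cosign.pub", IMAGE])
if check.returncode != 0:
    print(f"Integrity check failed: {IMAGE} is not signed with the expected key.")
    sys.exit(1)
```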
Information security professionals traditionally work in real time, blocking problems “here and now.” Unified application deployment tools (and containers are one way to unify this process) also make it possible to test a product before it is deployed. Containers can therefore be checked in advance for malicious code and vulnerable components, for secrets left behind in images, and for policy violations.
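As a deliberately simplified illustration of the “secrets left behind” check, the sketch below walks an unpacked image filesystem (or a build context) and flags files that match a few common credential patterns. The path and patterns are assumptions; dedicated secret scanners are far more thorough.

```python
# Naive pre-deployment check: walk an unpacked image filesystem (or build context)
# and flag files that look like they contain credentials. Patterns are illustrative.
import os
import re
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),    # private key material
    re.compile(r"(?i)(password|secret|token)\s*[:=]\s*\S+"),  # hard-coded credentials
]

def scan(root: str) -> list[str]:
    findings = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as handle:
                    text = handle.read()
            except OSError:
                continue
            if any(pattern.search(text) for pattern in SECRET_PATTERNS):
                findings.append(path)
    return findings

if __name__ == "__main__":
    hits = scan(sys.argv[1] if len(sys.argv) > 1 else ".")
    for hit in hits:
        print(f"Possible secret left behind: {hit}")
    sys.exit(1 if hits else 0)
```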
To elaborate further on container security, it is also worth touching upon the target audience of specialized InfoSec products. Are these systems intended for information security specialists, or are they closer to developers and users? There is no short answer. Some are more focused on InfoSec experts; others are oriented towards building interoperability between security teams, cluster administrators, and developers; still others provide visibility into containers, helping you understand how an application is built and how it works.
Managing secrets in containerized environments
Containerized microservices communicate with each other and external systems by establishing secure connections, performing authentication with usernames and passwords, and using other types of secrets. How do you protect keys, passwords, and other sensitive data in containers from leaking? How is this issue addressed in Kubernetes? Is it possible to control this aspect of security?
Kubernetes comes with a basic mechanism for managing secrets, which keeps keys and passwords out of application code, container images, and pod specifications. In addition, there are separate products on the market that serve as secrets management tools for container environments.
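For illustration, the built-in mechanism can be driven through the Kubernetes API. The sketch below uses the official Python client to create a Secret and read it back; the names and values are placeholders, and the read-back shows that stored values are merely base64-encoded rather than encrypted.

```python
# Minimal sketch of Kubernetes' built-in secrets mechanism using the official
# Python client (pip install kubernetes). Names and values are placeholders.
import base64
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

# Create a Secret so the password never has to live in the pod spec or the image.
secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="db-credentials"),
    string_data={"username": "app", "password": "change-me"},
)
v1.create_namespaced_secret(namespace="default", body=secret)

# Reading it back shows the values are only base64-encoded, not encrypted.
stored = v1.read_namespaced_secret("db-credentials", "default")
print(base64.b64decode(stored.data["password"]).decode())
```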
The need for such extras stems from the fact that Kubernetes lacks a mechanism for managing the lifecycle of secrets. It is also worth noting that, by default, secrets are stored in etcd merely base64-encoded rather than encrypted, which means a provider may be able to access them when a containerized environment is deployed in the cloud.
It is also important to manage the process of adding secrets, control how keys are used down the road, and define restrictions that kick in when one container tries to access another’s secret data. This layer of the problem requires security policies and other techniques for managing sensitive data. One more challenge stems from the immaturity and high volatility of the container orchestration market: by and large, a clear understanding of how to properly implement secrets management has yet to emerge.
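One building block that already exists for such restrictions is Kubernetes RBAC. The hedged sketch below uses the official Python client to create a Role that allows reading only one specific Secret; the names are placeholders, and the Role still has to be bound to the workload’s service account with a RoleBinding before it takes effect.

```python
# Sketch of restricting secret access with Kubernetes RBAC via the official
# Python client: this Role allows reading only the "db-credentials" Secret.
# Names are placeholders; a RoleBinding to a service account is still required.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

role = client.V1Role(
    metadata=client.V1ObjectMeta(name="read-db-credentials", namespace="default"),
    rules=[
        client.V1PolicyRule(
            api_groups=[""],                    # "" is the core API group
            resources=["secrets"],
            resource_names=["db-credentials"],  # only this Secret, not all of them
            verbs=["get"],
        )
    ],
)
rbac.create_namespaced_role(namespace="default", body=role)
```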
Traditional defenses in container-based ecosystems
Let's now figure out if traditional security tools, such as data loss prevention (DLP), web application firewall (WAF), network traffic analysis (NTA), and others, can be used to secure virtual cluster networks and containers.
Classic next-generation firewall (NGFW) systems cannot efficiently control traffic in virtual cluster networks. Special NGFW tools that run inside a cluster can do the trick. Essentially, these are containers that monitor data in transit.
It is not always necessary to embed protection tools into a container, as this increases the complexity of the application. In some cases, it makes more sense to use traditional security solutions. The choice of a defense method depends on the specific company and the set of tools already in use. Furthermore, there are purpose-built tools that monitor containers while they are running and quickly rebuild them if problems are spotted.
That said, if Kubernetes is used as a service, traditional security tools simply won’t be deployable. On the other hand, if the container orchestration system is hosted on-premises, a full range of tools can be used to protect it.
The security principles for conventional infrastructure and containerization are basically the same, but their implementation may differ. The security tool must understand the environment it is safeguarding.
Who is responsible for container security?
It is also worth discussing who should be responsible for container infrastructure protection in an organization – information security specialists or developers. What expertise should these people have? In the case of containers, the usual roles of teams are reversed, and the principle of "whoever develops it, owns it" applies.
The task of managing the defenses is assigned to the developers, but a separate team of InfoSec specialists sets the security rules and investigates incidents. The department responsible for information security most often acts as the customer for implementing container technology protections. Sometimes the development team gets involved, and the operations team almost never does.
As for knowledge and skills, the most important qualities for a specialist responsible for container security are an understanding of the infrastructure, proficiency in Linux and Kubernetes, and a willingness to keep learning.