Container Security: How to Clean Docker Images and Set Resource Quotas

Containers are fabulous – deployable anywhere, they let you run more jobs per server and move projects between servers smoothly. 

Nowadays, teams choose from numerous solutions, from Kubernetes and Docker to non-Kubernetes cloud container services like ECS or ACI. However, software engineers always pay for flexibility – and with containers, the price is security.

Container Security Overview

The term encompasses the build, deployment, and runtime practices that protect a container. With the adoption of microservice design patterns and container technologies, security teams face new challenges, from developing solutions for unfamiliar attack surfaces to facilitating infrastructure shifts. 

Container security for the enterprise is mainly about securing the pipeline, an application, deployment environments, and infrastructure. 

And the main challenge comes from the programs inside the containers – they set the container's actual safety level. But knowing the security mechanisms of your specific platform goes a long way. 

Docker, for instance, uses the host OS kernel instead of a hypervisor, which means teams should regularly update both Docker and the host kernel. And non-Kubernetes cloud containers, as a rule, provide hard-to-track automatic security patching, so their safety ultimately relies on what you put inside them.

Legend has it that updating platforms spreads chaos and introduces more vulnerabilities – but I am about to break that myth. Corewide experts recommend updating your systems regularly, since this practice closes known vulnerabilities, and the version changelog will help you avoid new issues.

Speaking of upgrades, keep an eye on container runtimes (container engines) responsible for launching and managing containers, because a hole in the runtime can compromise resources in every container it runs. Update runtimes separately, not as part of the platform, to ensure maximum reliability.
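On a Debian or Ubuntu host, for example, that can look like the following – a minimal sketch, assuming the containerd.io package from Docker's repository is the runtime in use:

```shell
# Refresh package metadata, then upgrade only the container runtime,
# leaving the rest of the platform untouched
sudo apt-get update
sudo apt-get install --only-upgrade containerd.io

# Confirm the running version after the upgrade
containerd --version
```

Pinning the runtime to its own upgrade cadence means a runtime security fix never has to wait for a full platform release.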

Tips on Improving Security

Is there a way to benefit from containerization without worrying about safety? Actually, no. It’s like having children – once you become a parent, you never stop worrying. 

But teams can reduce this stress dramatically by using security measures from our talented experts, proven by hundreds of highly secure container environments.

Recommendations for businesses and software developers on securing containers
  • Restrict runtime access 

If a container has management rights in Kubernetes and can run other containers, a hacker may penetrate it to compromise the whole system. And even an unprivileged user with access to the Docker socket can harm the host system.

The Docker docs, for instance, provide excellent examples of running a container using simple HTTP queries – breaking the isolation is easy, provided you can compromise an app that has access to the Docker API.

And since the Docker API is HTTP-based, an RCE vulnerability in such an app lets an attacker reach the host server whenever a privileged user runs processes inside the container. We’ve seen countless cases like this slip into production, so remember: when your app has an RCE vulnerability and access to the runtime, your entire OS is compromised.
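To see how little it takes, here is a hedged sketch (assuming a local Docker daemon on its default Unix socket): any process that can reach the socket can drive the engine over plain HTTP, no docker CLI required.

```shell
# List running containers through the Docker Engine API -- plain HTTP
# over the Unix socket is all it takes
curl --unix-socket /var/run/docker.sock http://localhost/containers/json

# Create a privileged container that mounts the host's root filesystem:
# whoever can send this request effectively owns the host
curl --unix-socket /var/run/docker.sock \
  -H "Content-Type: application/json" \
  -d '{"Image": "alpine",
       "Cmd": ["cat", "/host/etc/shadow"],
       "HostConfig": {"Binds": ["/:/host"], "Privileged": true}}' \
  http://localhost/containers/create
```

This is exactly why mounting /var/run/docker.sock into an application container, or exposing the API over TCP without TLS, should be treated as handing out root on the host.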

  • Keep an eye on container images

Everything you put into an image stays in it forever, so it is essential to build images safely and keep sensitive data out of them. 

We’ve had our share of poorly crafted Dockerfiles – have a look at this:

RUN apt-get install -y dependency-packages-I-temporarily-need
RUN make && make install
RUN apt-get remove -y dependency-packages-I-temporarily-need

Still wondering how that Docker image weighs a ton after such thorough cleaning? The answer is simple: every RUN statement creates a new file system layer, and the final image stacks all of them – the last layer only masks the deleted files. 

You may naturally think nothing is left in the image – however, the underlying layers still contain the redundant data. Reading through the best practices for writing Dockerfiles, you’ll end up with something like this:

RUN apt-get install -y dependency-packages-I-temporarily-need && \
    make && make install && \
    apt-get remove -y dependency-packages-I-temporarily-need

That is just the thin end of the wedge. Here’s where engineers get burnt by not following the same core principles of building container images:

COPY private-ssh-key.pem /root/.ssh/id_rsa
RUN git clone ssh://[email protected]/my-private-repository.git && \
    rm -f /root/.ssh/id_rsa

All right, you added a private SSH key, used it to clone a private git repository, then wisely removed it from the file system. But the key is still there: the COPY statement created a layer where it remains.

And the fix is trivial – multi-stage builds:

FROM alpine/git:latest AS temp-img
# still putting the key into the image
COPY private-ssh-key.pem /root/.ssh/id_rsa
# still using it to clone the repo
RUN git clone ssh://[email protected]/my-private-repository.git /opt/my-private-repository
# ...but the final image is built from a different base where the SSH key has never been added!
FROM ubuntu:20.04
COPY --from=temp-img /opt/my-private-repository /opt/my-private-repository

Use this pattern to build a temporary image before the real one. Its layers never make it into the final image and are discarded once the build finishes – your credentials are safe and sound.
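You can verify the result yourself – a quick sanity check, assuming the final image was built and tagged as the hypothetical my-app:

```shell
# Inspect the layer history of the final image: with a multi-stage build,
# the COPY of the SSH key happened in the discarded stage, so it never
# appears here
docker history --no-trunc my-app

# Belt and braces: look for the key inside the image's filesystem
docker run --rm my-app sh -c 'ls /root/.ssh/ 2>/dev/null || echo "no keys found"'
```

If either command still shows the key, a layer from the build stage leaked into the final image and the Dockerfile needs another look.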

  • Avoid providing privileges

To run or not to run containerized apps as non-root users? 

Well, running applications as root is not harmful in itself – granting extra privileges is! Root inside a container is not quite the same as root on your host OS: it only has the capabilities you grant it. 

Accessing devices or managing low-level network parameters? No, this root can’t do that. There are only two ways for a root user to break out of its isolation:

+ you let your container manage your environment or runtime (get back to the first recommendation)

+ your kernel is out-of-date and has a critical vulnerability (someone has a much worse problem to deal with ASAP)
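That said, dropping root costs almost nothing and removes a whole class of worries. A minimal Dockerfile sketch, with a hypothetical appuser and application path:

```dockerfile
FROM ubuntu:20.04
# Create a dedicated unprivileged user for the application
RUN groupadd -r appuser && useradd -r -g appuser appuser
# Give that user ownership of only what it needs
COPY --chown=appuser:appuser ./app /opt/app
# Everything from here on runs without root
USER appuser
CMD ["/opt/app/run"]
```

The same effect can be forced at launch time with docker run --user, which is handy when the image itself cannot be changed.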

  • Set resource quotas

Control the amount of resources per container and limit it (remember that the limit can’t be lower than the request). 

What resources? Memory and CPU. 

Why is it important? This simple action increases the efficiency of the environment, prevents the overall imbalance of resources, and relieves a headache.

Where to set limits? At the Container and the Namespace levels.
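In Kubernetes terms, that looks like the following – a minimal sketch with hypothetical names and values; tune the numbers to your actual workload:

```yaml
# Container level: requests are guaranteed, limits are the hard ceiling
# (the limit can't be lower than the request)
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: my-app:latest
      resources:
        requests:
          memory: "256Mi"
          cpu: "250m"
        limits:
          memory: "512Mi"
          cpu: "500m"
---
# Namespace level: a total budget that all containers in the
# namespace together cannot exceed
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: my-team
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```

With both layers in place, a single runaway container can neither starve its neighbours nor blow through the namespace budget.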

  • Shift left to secure pipelines

Have you met DevSecOps?

In brief, DevOps is about automation and optimization – nobody guarantees you security. That’s where the notorious DevSecOps comes in, automating security integration at every stage of the software development lifecycle.

DevSecOps best practices include shifting left, which means moving security from the end to the beginning of the delivery process. Such an approach helps detect malicious code, outdated packages, and similar threats in advance and speeds up the patching of security vulnerabilities.

Wise planning and confident implementation are all it takes, and this article will be handy to delve further into the subject.

  • Configure your firewall, for God’s sake!

Ensure you’ve covered the basics. 

The firewall continuously monitors all incoming and outgoing traffic to block or allow content according to its configuration rules.

Out of the box, firewalls typically ship with only basic rules and don’t provide maximum protection. Teams therefore need to adjust the configuration to their specific application to ensure superior security.

Configuring a firewall is a simple but fundamental security measure that protects your network against unwanted or potentially dangerous traffic – even if you suffer from application or runtime vulnerabilities.
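As a starting point, here is a minimal sketch using ufw on an Ubuntu host – the open ports are hypothetical, adjust them to whatever your application actually exposes:

```shell
# Default stance: drop everything inbound, allow everything outbound
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Open only what the application needs: SSH for administration,
# HTTPS for the app itself
sudo ufw allow 22/tcp
sudo ufw allow 443/tcp

# Apply the rules and review the result
sudo ufw enable
sudo ufw status verbose
```

Everything not explicitly allowed is dropped, so a service you forgot about is closed by default instead of open by default.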

Summary

Underestimating container security can cost an arm and a leg – is this risk worth it?

Add regular check-ups, code reviews, and system updates to your workflows to improve security and run containers like clockwork. The recommendations in this article will spare you the stress caused by poor security. 

Another effective way to avoid headaches is to delegate security management and enhancement to professionals. Start with audit and consulting services to see the big picture and plan further actions.