Automated Container Security Scans of Docker Images
Find out how to protect the containers in your infrastructure like an organism protects the cells within its body.
There are 9,054,606 publicly available images on DockerHub that have been pulled more than a couple of billion times. Most of them use a so-called "base image": an initial layer that provides the most basic functionality, on top of which developers add their custom code. What happens when a base image is flawed? Everything built on top of it is at risk.
In this blog post, you will find out how our workflows can be used to monitor the security of DockerHub's most popular public images. As always, the results can be found in our public GitHub repository.
The Cell Upon a Cell (Container Upon a Container)
Just like a human cell, a container is a unit of software that starts with a standard blueprint and then differentiates and specializes. Growing from a single cell to trillions of specialized cells is an essential process for the growth of any organism, but each new cell comes with new risks.
With every new cell, an extra bit of attack surface is created, and the possibility that something will go wrong increases. That's why the body has many different systems that constantly check the integrity of cells and take action when something is off. It's also why Trickest created the Containers workflow: an automated container security solution that serves the same purpose as those cell integrity checks.
Let's begin.
A simple curl request to the Docker API returns 172 official image repositories. A repository doesn't refer to just one image, though: images have tags, and each tag can point to a unique image with its own system, configuration, and installed software. So, to make this research as thorough as possible, all of these tags need to be pulled and tested individually.
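A minimal sketch of that request, using curl and jq against Docker Hub's v2 API (official images live under the `library` namespace; the endpoint is paginated, so the sketch follows the `next` URL):

```bash
#!/bin/bash
# Collect every official image name (the "library" namespace) from the
# Docker Hub v2 API, following the paginated "next" URL until it is null.
url="https://hub.docker.com/v2/repositories/library/?page_size=100"
while [ "$url" != "null" ]; do
  page=$(curl -s "$url")
  echo "$page" | jq -r '.results[].name'
  url=$(echo "$page" | jq -r '.next')
done > official-images.txt
```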
For the first phase of this project, we started with the top 100 tags of each repository. So we've got 172 repositories × 100 tags each = 17,200 images. How hard can that be?
Scaling Up
We initially tried to run the tests we had in mind on our local machine, but sure enough, that wasn't the best idea. The total number of images (after accounting for tags) is 17,200, with an average image size of a couple of hundred megabytes and an expected test runtime of around 5 minutes per image. That's 17,200 × 5 minutes ≈ 86,000 minutes, or roughly two months of pure compute, before even counting the terabytes of image pulls; all in all, about three months to complete the process. That's the first problem.
The second problem is that we weren't really sure which tests we wanted to run at this point. We knew the initial set of tests would change often throughout the research, so spending too much time on any single run wouldn't make sense.
That’s why it made sense to use Trickest for this research.
- We can split the workflow into small, manageable nodes that are easy to extend, while still being able to connect them and pass the output of one node to another, check.
- All nodes run on cloud machines, and we can configure the specs each step needs: not so low that it hampers performance, not so high that it breaks the bank, check.
- We can run nodes in parallel using a `file-splitter`. Think of it as a `for line in file` loop, but instead of creating a new process for each line, it creates an entire node and runs all of those nodes in parallel (a rough local approximation follows this list), check.
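To make the analogy concrete, here's roughly what a `file-splitter` replaces, sketched locally in bash (`scan-image.sh` is a hypothetical per-image test script, not part of the workflow):

```bash
# The serial version: one image at a time, one after another.
while read -r image; do
  ./scan-image.sh "$image"   # hypothetical per-image test script
done < official-images.txt

# A rough local approximation of the file-splitter: 10 parallel workers,
# except Trickest spins up an entire cloud node per line instead.
xargs -P 10 -n 1 ./scan-image.sh < official-images.txt
```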
So this sums up the initial steps that this workflow needs (the tag-enumeration step is sketched right after this list):
- Get a list of popular official images from DockerHub.
- Pass them to a `file-splitter` for concurrency.
- Then, for each image, get a list of its tags.
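Assuming the earlier curl sketch produced `official-images.txt`, the tag-enumeration step might look something like this, again via the Docker Hub v2 API:

```bash
# For each official image, list its most recent tags and emit one
# "image:tag" pair per line, ready to feed into another file-splitter.
while read -r image; do
  curl -s "https://hub.docker.com/v2/repositories/library/${image}/tags/?page_size=100" |
    jq -r --arg img "$image" '.results[].name | "\($img):\(.)"'
done < official-images.txt > image-tags.txt
```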
Scan
With all the logistical issues resolved, the actual research can start! First things first: let's find outdated software with known vulnerabilities in these images. Trivy works perfectly for this. As a bonus, `trivy` also reports exposed ports, so that's another test out of the way. A simple bash script can parse `trivy`'s output and extract this information (one possible version is sketched below).
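A sketch of what that step might look like. Here `alpine:3.12` stands in for whichever image the node receives, and the JSON paths follow Trivy's report schema as we know it, which can shift between versions:

```bash
# Scan one image and keep the raw JSON report.
trivy image --format json --output report.json alpine:3.12

# Extract the known vulnerabilities...
jq -r '.Results[]?.Vulnerabilities[]? |
  "\(.VulnerabilityID) \(.PkgName) \(.InstalledVersion) \(.Severity)"' report.json

# ...and, as a bonus, the ports the image declares as exposed.
jq -r '.Metadata.ImageConfig.config.ExposedPorts // {} | keys[]' report.json
```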
Checks, Checks, and More Checks
There are many more checks that need to be done on containers before deployment, privilege escalation tests being a prime example.
Each little container essentially runs its own OS, with its own configuration, software, and potential vulnerabilities, and it's important to consider all of these layers when assessing a service's security. Numerous scripts, tools, and frameworks have been released that cover the most frequent checks, e.g., linuxprivchecker, unix-privesc-check, and linPEAS. All of these can work, but they are a bit too broad and somewhat difficult to customize. So, in the end, creating our own granular checks made more sense.
The first test we integrated gave us chills the first time we saw it: blank root passwords by default (CVE-2019-5021). In affected images, escalating privileges to root is as simple as running `su`. That was a promising (scary?) start.
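The check itself can be as small as one grep. This sketch assumes the classic shadow-file format, where an empty second field means a passwordless account:

```bash
# Inside the container: an empty second field on the root line of
# /etc/shadow (i.e. "root::...") means root has no password at all,
# so `su root` succeeds with no credentials: the CVE-2019-5021 case.
if grep -q '^root::' /etc/shadow 2>/dev/null; then
  echo "[!] root account has a blank password"
fi
```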
Fueled by the kick of adrenaline the first test gave us, our team started brainstorming more test ideas (a couple of them are sketched after the list):
- Searching for installed developer tools
- Getting the base OS
- `cat`'ing `/etc/passwd` and `/etc/shadow` in search of blank passwords
- Finding files owned by the root user
- Searching for SUID and SGID executables
- Searching for hidden files, package names, world-writable files, and folders
- Enumerating password policies and SSL certificates
- Enumerating GTFOBins
- Scanning for open ports
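A few of these ideas translate almost one-to-one into `find` one-liners. A rough sketch of how some of the checks might run inside a container:

```bash
# SUID and SGID executables: prime privilege-escalation candidates,
# especially when one of them also appears on GTFOBins.
find / -xdev -type f \( -perm -4000 -o -perm -2000 \) 2>/dev/null

# World-writable files and directories.
find / -xdev \( -type f -o -type d \) -perm -0002 2>/dev/null

# Files owned by the root user (trimmed for readability).
find / -xdev -type f -user root 2>/dev/null | head -n 50
```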
At this point, it was evident that these tests would change often, so it made sense to separate the tests from the workflow logic, especially to allow people from the community to contribute more tests after the project's release. So we created a folder on GitHub with all the tests we execute. You can add your own tests!
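The exact layout lives in the repository; purely as a hypothetical illustration, a contributed test could be as small as a standalone script that prints its findings:

```bash
#!/bin/sh
# tests/dev-tools.sh (hypothetical name): flag developer tools that have
# no business being inside a production image.
for tool in gcc g++ make gdb git curl wget; do
  command -v "$tool" >/dev/null 2>&1 && echo "[dev-tool] $tool is installed"
done
```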
The Reporting Cross-over
As mentioned above, the `Containers` workflow enumerates outdated components with known vulnerabilities. Although it's good to have this kind of visibility, the number of found vulnerabilities can get pretty high if a container has a lot of dependencies. It isn't always simple to update and fix every single one, but a good place to start is patching the vulnerabilities that have public exploits in the wild.
So it only made sense to integrate the trickest/cve repository here. It has indexed so many public exploits and PoCs that the workflow can cross-reference them with a container's vulnerabilities to find the ones that need immediate attention.
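Assuming the repository's per-year folder layout (e.g. `2019/CVE-2019-5021.md`), the cross-reference can be a simple file-existence check against the Trivy report from earlier:

```bash
# Clone the exploit/PoC index once...
git clone --depth 1 https://github.com/trickest/cve.git

# ...then flag every found vulnerability that has a public PoC on file.
jq -r '.Results[]?.Vulnerabilities[]?.VulnerabilityID' report.json | sort -u |
while read -r cve; do
  year=$(echo "$cve" | cut -d- -f2)
  [ -f "cve/${year}/${cve}.md" ] && echo "[PoC available] $cve"
done
```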
Markdown is a fantastic way of presenting this information. Each image gets a README.md file (illustrated after the list) that contains:
- links to the test report
- a list of found CVEs, categorized by severity and by whether or not a CVE has a public PoC
- links to every public PoC found
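Purely as an illustration, such a README could be stitched together from the artifacts of earlier steps; the `$image` variable and every file name below are placeholders, not the workflow's actual output paths:

```bash
# Illustrative only: assemble the per-image README.md from pieces the
# earlier workflow steps produced (all paths here are placeholders).
cat > "reports/${image}/README.md" <<EOF
# ${image}

- [Full test report](./report.md)

## CVEs by severity and PoC availability
$(cat "reports/${image}/cve-summary.md")

## Public PoC links
$(cat "reports/${image}/poc-links.md")
EOF
```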
Conclusion
Security has always been a weakest-link problem. This research explored an often-overlooked link, so naturally, there was much to find here.
There are still containers with blank passwords, critical CVEs with public PoCs that are easy to exploit, plenty of developer tools that can be used for privilege escalation, and much more. You can find all the findings of this research in the Containers repository on GitHub.
Next Steps
This workflow covered all the popular public base images on DockerHub, but that's just one layer. More components are usually added on top of these base images, and they often present an even greater attack surface. Register on Trickest to start using this workflow to test every image in your private container registry. As always, the workflow is completely customizable, so you can add any extra checks that make sense for you.