
The Biggest Risk with Container Security is Not Containers

Tim Mackey, Technology Evangelist at Black Duck Software, discusses why the real threat to containerized deployments is attacks on the applications inside them, not on the containers themselves.

Container security may be a hot topic today, but we’re failing to recognize lessons from the past. As an industry, our focus is on the containerization technology itself and how best to secure it, with the underlying logic that if the technology is itself secure, then so too will be the applications it hosts.

Unfortunately, the reality is that few datacenter attacks are focused on compromising the container framework. Yes, such attacks do exist, but the priority for malicious actors is mounting attacks on applications and data, increasingly for monetary gain. According to SAP, more than 80 percent of all cyberattacks specifically target software applications rather than the network.

This reality challenges the long-held belief that if you protect the edges (in this case, the container framework), then magically the less secure applications and deployments inside become more secure.

That model has served the firewall industry for decades, yet attacks continue to succeed. I assert we don’t need another firewall-style arms race, but we do need to change the container security paradigm dramatically.

I realize that’s a controversial statement, and potentially inviting a flame war. But, before you shout, “Flame on!” let me give you an example of my perspective.

Let’s say I’m building a new internet-facing application which will be deployed at scale. This application will contain data which qualifies it for stricter governance, PCI DSS for example. I know I’m going to be subject to an audit, and I know that this audit will be expensive, so I want to limit the number of components which will need auditing. I also decide that this application will be containerized using Docker. Since I have an auditing requirement, I’m going to ensure that all appropriate static testing is performed, and that the components will be subject to fuzzing according to my defined threat model.
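The fuzzing step in that plan can be sketched at its simplest as a random-input harness that counts crashes. This is a minimal illustration only, not a substitute for a coverage-guided fuzzer, and `parse_record` is a hypothetical component stand-in, not anything from the scenario above:

```python
import random
import string

def parse_record(raw: str) -> dict:
    """Hypothetical component under test: parses 'key=value;key=value' input."""
    fields = {}
    for pair in raw.split(";"):
        if not pair:
            continue
        key, _, value = pair.partition("=")
        fields[key.strip()] = value.strip()
    return fields

def fuzz(iterations: int = 10_000, seed: int = 0) -> int:
    """Throw random printable strings at the parser and count uncaught exceptions."""
    rng = random.Random(seed)
    crashes = 0
    for _ in range(iterations):
        candidate = "".join(
            rng.choice(string.printable) for _ in range(rng.randint(0, 64))
        )
        try:
            parse_record(candidate)
        except Exception:
            crashes += 1
    return crashes

if __name__ == "__main__":
    print(f"crashes observed: {fuzz()}")
```

A real audit-driven effort would derive the input grammar and mutation strategy from the threat model rather than from uniform random strings, but even this toy shape makes the point: the harness exercises only the inputs you thought to generate.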

After performing this analysis, and passing an audit, I find my application under attack and I start to wonder just what I might have missed. I know my craft very well, but I can always learn more. Did someone figure out something I didn’t think to test for?

Here’s a short list of what I missed, and the assumptions I made which might bite me:

1. While I decided I wanted the application to be containerized, I never defined the security template and profile for Docker.

2. I never defined the precise network requirements for each component, nor what acceptable data streams might look like (particularly those from end users, “trusted” or otherwise).

3. I never looked at how my containers might be deployed relative to other containers, nor did I decide whether Docker itself should be virtualized.

4. There was no discussion of dependencies and dependency risk. Effectively I was assuming others were taking care of their security, but how do I validate that?

5. When performing the threat model, how did I define the threats? When under attack, the malicious actors are the ones writing the rules. Did I adequately understand their motivations?

6. Did I account for the potential for a weak component to create a beachhead within the application framework, and effectively allow a compromise to propagate?
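Addressing the first three gaps might start with an explicit, locked-down launch configuration rather than Docker’s permissive defaults. A minimal sketch, where the image name, seccomp profile file, and network name are illustrative assumptions:

```shell
# Hardened launch sketch: immutable root filesystem, capabilities dropped
# to the minimum, privilege escalation blocked, a custom syscall allowlist,
# and attachment only to an explicitly defined application network.
docker run \
  --read-only \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  --security-opt no-new-privileges \
  --security-opt seccomp=payments-seccomp.json \
  --network payments-net \
  payments-app:1.0
```

The point is less the specific flags than that each one encodes a decision the scenario above never made: which syscalls, which capabilities, and which network paths the container is actually entitled to.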

Now that I’ve identified all these factors, and worked through an action plan, I know I’m in better shape, but my risk hasn’t really decreased. The reason for this is very simple. I’ve deployed something, and while I’ve set up monitoring for application activity, I’m probably still not proactively monitoring for risks in my application. After all, I’m an application developer and have moved on to the next version or the next project.

Since proactively monitoring for risk costs money, I need a compelling reason to do so. That reason is easy to define: if the attackers know something about your environment and you don’t, you’ve lost. You may have the best-tested application in the world, but software isn’t a static environment. Security protocols once thought secure are now attackable. Vulnerabilities in products and software components are disclosed at an average rate exceeding ten per day. Vendors and open source project maintainers discontinue projects without notice. You want the best possible information, in the shortest timeframe, in order to protect against attack. You don’t want your employer or your customers in the media for a data breach.

The biggest risk I see with container security is that attacks are mounted against applications far more often than against perimeter defenses. Increasing container security should start with increasing the security of the applications deployed in those containers. Only then will we have an effective defense-in-depth model. Yes, we also need more secure container frameworks, but when those frameworks know nothing about the applications they encapsulate, they can’t possibly prevent well-crafted application attacks.

If you’d like to continue the discussion on container security with me, I’ll be speaking at Container World 2017 on the panel, “Container Security: Countering the Container Challenges” on February 23rd. Hope to see you there and to hear you express your opinion!
