A question of trust: When good containers go bad
Managing container infrastructure in a production environment poses problems of scale. One of the biggest is trust: specifically, trust in the application. Put another way, can you trust that every container in your Kubernetes or OpenShift cluster is performing the tasks you expect of it? Containerization has increased the pace of deployment, but has trust kept pace? If a container is compromised in some fashion, how many other containers are at risk, and how far does the breach of trust extend?
To answer those questions, you first need to make some assertions: that all containerized applications were subjected to static code analysis and penetration testing; that you can determine the provenance of each container through signatures and trusted repositories; and that appropriate perimeter defenses are in place, with authorization controls gating deployment changes. In essence, this defines a trust model, but it overlooks a key perspective: the attacker profile. Attackers decide what is important to them. To defend against them at scale, you need to understand what information they use to design their attacks.
Tim Mackey explores the nature of data center threats, why threat models fail, how malicious attackers design their attacks, when the threat risk increases prior to an attack and why information flow matters, and how traditional defenses are inadequate for container workloads. Tim then shares measures you can take to proactively identify risks, including OpenShift-integrated tooling for identifying container images that carry increased risk.