How to Apply PCI DSS to Containerised Environments

Inside this article:

  • What is containerisation and what are its benefits
  • How to meet PCI DSS requirements in containers
  • Tools and methods for securing containerized environments

1. What Is Containerisation?

Containerisation is a revolutionary technology that enables applications to run in isolation, independently of the underlying hardware infrastructure. Unlike traditional virtual machines, which each require their own operating system and more resources, containers run as processes on the host – they are fast, lightweight, and use less memory, while still providing essential isolation. The approach is most commonly associated with the Docker platform, which launched the modern container era in 2013. Containers allow developers to package an application together with all its dependencies into lightweight, portable units that behave consistently across environments – from a developer's laptop to production cloud servers – greatly reducing the risk of configuration-related errors.

Other significant containerisation technologies include Podman (developed by Red Hat), LXD, and the now-discontinued rkt. However, Docker has dominated the market thanks to its ease of use and rich ecosystem of tools. Kubernetes is the most popular container orchestration platform, automating the deployment, scaling, and management of containers at enterprise scale. Other orchestration solutions include Docker Swarm and OpenShift – a commercial platform built on Kubernetes.

Docker remains the undisputed leader in containerisation, especially in development environments and smaller deployments. Its popularity stems from its simplicity, a rich ecosystem of pre-built images available on Docker Hub, and wide integration capabilities with CI/CD tools. Kubernetes, on the other hand, dominates in large-scale production environments where advanced orchestration and management of thousands of containers is required. 

2. Benefits of Containerisation Technology for Developers 

Containerisation technologies bring a number of important advantages, making developers increasingly inclined to build containerised applications. The first and most important benefit is environment isolation – each application runs in its own isolated container, minimising the risk of conflicts between applications and their dependencies. Containers leverage Linux kernel technologies such as cgroups and namespaces to isolate applications and their dependencies. A developer can run different versions of the same library or runtime environment on a single machine without conflict. 

Consistency of behaviour is another key benefit – containers behave identically in different environments, eliminating the “works on my machine” problem. An application run in a container on a developer’s local machine will behave the same on test and production servers. This predictability greatly reduces deployment issues and simplifies management of multiple environments. 

Containers are faster and more efficient than traditional virtual machines – they start within seconds, use fewer system resources, and enable higher application density on a single server. Developers can rapidly test different configurations and application versions without waiting for environments to start. 

Portability allows containers to be easily moved between different cloud providers, on-premises, and hybrid environments without needing code changes. Scaling is also much simpler – containers can be easily replicated and distributed across multiple servers in response to increased demand. These features have made containerisation the standard in modern application development and deployment. 

3. Container Build and System Base 

Container creation (using Docker as an example) begins with a Dockerfile, which contains instructions defining how the container image should be built. A key instruction in any Dockerfile is the FROM line, which specifies the base image – the “system base”. This base serves as the foundation for the application and includes an operating system and essential tools. 

The system base varies depending on application needs. Popular Linux distributions used as base images include Ubuntu, CentOS, Alpine Linux, and Debian. Alpine Linux is particularly valued for its minimal size (around 5MB) and security focus, making it ideal for production environments. For Java applications, base images with OpenJDK are often used; for Node.js – images with the Node.js runtime. 
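As an illustration, a minimal Dockerfile for a Node.js application might look like the sketch below (image tag, paths, and file names are illustrative):

```dockerfile
# Base image ("system base") – official Node.js runtime on Alpine Linux
FROM node:20-alpine

# Application files live under /app inside the image
WORKDIR /app

# Install dependencies first, so this layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source
COPY . .

# Command executed when a container is started from this image
CMD ["node", "server.js"]
```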

A special case is the scratch image, representing an empty, minimal image without an operating system. It is used to create very lightweight containers that include only compiled application binaries, especially in languages like Go or Rust. This type of image has the smallest possible size and attack surface, which is advantageous from a security perspective. 
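A common way to produce such a minimal image is a multi-stage build: compile the binary in a full build image, then copy only the result into scratch. A sketch, assuming a Go application (module path and binary name are illustrative):

```dockerfile
# Stage 1: build a statically linked Go binary
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/app

# Stage 2: empty base image – only the binary ends up in the final image
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```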

The image build process involves executing the Dockerfile instructions, each creating a new image layer. This layered structure allows efficient disk usage and faster builds by reusing shared layers across images. Choosing the right system base is critical for the security, size, and performance of the final container. 

4. The Difference Between a Container Image and a Running Instance 

A container image is a static, immutable template that includes all the files, configurations, and dependencies required to run an application. It can be compared to a snapshot of the runtime environment or an installation disc – it contains everything needed but does not perform any operations itself. Images are built from Dockerfile instructions and stored in registries such as Docker Hub.

A container is a running instance of an image – an active process or set of processes isolated from the rest of the operating system. The application runs inside the container, while the image serves only as a template to create containers. Multiple containers can be launched from a single image, each as a separate instance. 

The key difference is that an image is read-only, whereas a container includes a writable layer that allows changes during execution. All changes made to a running container – such as created files, modified configurations, or stored data – exist only in this writable layer and are lost when the container is deleted unless saved to persistent storage. 

Images are versioned using tags (e.g. ubuntu:20.04, nginx:latest), enabling management of different versions of the same application. Developers can create new images from existing containers using the docker commit command, though it is more common to build images from Dockerfiles to ensure repeatability and track changes. This architecture allows efficient resource sharing between containers and rapid deployment of new application instances. 
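The image/container distinction maps directly onto the Docker CLI. A sketch (requires a running Docker daemon; image and container names are illustrative):

```shell
# Build a versioned image from the Dockerfile in the current directory
docker build -t myshop/api:1.0 .

# Start two independent containers from the same read-only image
docker run -d --name api-1 myshop/api:1.0
docker run -d --name api-2 myshop/api:1.0

# Changes live only in each container's writable layer;
# removing a container discards them unless persistent storage is mounted
docker rm -f api-1 api-2
```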

5. Container Execution Mechanisms in Different Environments 

Containers can be run in three main types of environments, each with its own characteristics and use cases. The on-premises model involves running containers on physical or virtual servers managed directly by the organisation. In this model, the IT team has full control over the infrastructure, allowing for detailed customisation of security and network configurations, but it requires greater investment in hardware and technical expertise. 

The Platform as a Service (PaaS) model provides a managed environment where containers run on orchestration platforms such as Kubernetes or OpenShift. Organisations use ready-made clusters managed by the service provider, reducing the operational burden of maintaining infrastructure. Teams can focus on application development while the platform automatically handles scaling, load balancing, and high availability. 

The cloud model offers fully managed container services, where the cloud provider handles all underlying infrastructure. Containers are run on cloud resources with automatic scaling, pay-as-you-go pricing, and integration with other cloud services. This model offers the greatest flexibility and fastest deployment, but may lead to higher costs with constant usage and requires trust in the cloud provider regarding security and compliance. 

Each of these models plays a role in an organisation’s IT strategy – the choice depends on security requirements, budget, team expertise, and industry regulations such as PCI DSS. 

6. Container Execution in Public Clouds – A Comparison 

Today’s cloud environment offers a wide range of container execution mechanisms, differing in their level of management, scalability, and operational responsibility. Each major cloud provider offers unique solutions tailored to different organisational and technical needs. 

Amazon Web Services (AWS) 

  • Amazon ECS (Elastic Container Service) – AWS’s native container orchestration service, optimised for ease of use and deep AWS ecosystem integration. Benefits include seamless integration with IAM, CloudWatch, and other services, and a low barrier to entry. Drawbacks include vendor lock-in and limited portability outside AWS. 
  • Amazon EKS (Elastic Kubernetes Service) – a managed Kubernetes service fully compatible with open-source Kubernetes. Benefits include standard Kubernetes API usage and provider migration capabilities. However, it involves higher costs and configuration complexity requiring adequate know-how. 
  • AWS Fargate – a serverless compute engine allowing containers to run without managing servers or clusters. Benefits include a fully serverless model and automatic scaling with pay-per-use pricing. Drawbacks include higher per-unit costs and significant network configuration limitations. 

Google Cloud Platform (GCP) 

  • Google Kubernetes Engine (GKE) – an advanced Kubernetes implementation with operational automation and strong security features. Benefits include Autopilot mode reducing operational load, advanced autoscaling, and AI/ML integration. However, it is complex for simple applications and may lead to uncontrolled costs with improper configuration. 
  • Cloud Run – a fully managed serverless container platform. Benefits include fast cold starts (typically around a second), automatic scaling to zero, and a simple per-request billing model. Drawbacks include request time limits (up to 60 minutes) and reduced infrastructure control. 

Microsoft Azure 

  • Azure Kubernetes Service (AKS) – a managed Kubernetes service with integration with Active Directory and Microsoft Defender for Cloud (formerly Azure Security Center). Benefits include free cluster management, support for Windows containers, and advanced networking options. Drawbacks include complex deployment, requiring significant Kubernetes expertise. 
  • Azure Container Instances (ACI) – rapid deployment of single containers without orchestration. Benefits include fast launch (seconds), per-second billing, and no need to manage cluster nodes. Drawbacks include limited scalability and lack of advanced orchestration features. 

Oracle Cloud Infrastructure (OCI) 

  • Oracle Container Engine for Kubernetes (OKE) – managed Kubernetes optimised for Oracle workloads. Benefits include high performance for Oracle workloads, competitive pricing, and integration with Oracle Database. Drawbacks include a smaller tool ecosystem and more limited community support. 

Alibaba Cloud 

  • Elastic Container Instance (ECI) – a serverless container service integrated with Kubernetes via Virtual Kubelet. Benefits include flexible resource specifications (from 0.25 vCPU), per-second billing, and automatic integration with ACK clusters. Drawbacks include limited geographic coverage and lower solution maturity. 

Choosing the right mechanism depends on application requirements, team expertise, budget, and the organisation’s multi-cloud strategy. For environments requiring PCI DSS compliance, security features, auditing capabilities, and network segmentation options are crucial. 

7. Challenges of Implementing PCI DSS in Container Environments 

Implementing the PCI DSS standard in containerised environments introduces unique challenges due to the dynamic nature of containers and the complexity of their orchestration. Standardising base image configurations is a major issue – each base image must comply with PCI DSS security requirements, including removing unnecessary services, securing user accounts, applying security patches, and having configuration standards for each image. Organisations must establish golden images, which go through audit and approval processes before being used in production. 
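A hardened "golden" base image can encode several of these controls directly in the Dockerfile. A minimal sketch (package and account names are illustrative; a real golden image would follow the organisation's approved configuration standard):

```dockerfile
FROM alpine:3.19

# Apply the latest security patches at build time
RUN apk upgrade --no-cache

# Create an unprivileged account – PCI DSS discourages running as root
RUN addgroup -S app && adduser -S -G app app

# Remove leftover package metadata to shrink the attack surface
RUN rm -rf /var/cache/apk/*

# All derived images inherit the non-root default user
USER app
```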

Container segmentation in platforms such as OpenShift and Kubernetes requires a multi-layered approach. At the Kubernetes level, namespaces are used for logical workload separation, network policies control inter-pod traffic, and security contexts limit container privileges. OpenShift also provides Security Context Constraints (SCC) and project-based isolation, adding extra layers of protection. Finally, microsegmentation at the container level enables precise control of east-west communication between microservices. 
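At the Kubernetes layer, the inter-pod traffic control described above is expressed as a NetworkPolicy. A sketch, assuming an illustrative cardholder-data namespace and pods labelled app: payment-api and app: frontend:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payment-api-ingress
  namespace: cardholder-data   # illustrative namespace for the CDE
spec:
  podSelector:
    matchLabels:
      app: payment-api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend    # only the frontend may reach the payment API
      ports:
        - protocol: TCP
          port: 8443
```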

Access management relies on the Role-Based Access Control (RBAC) model, integrated with the organisation’s identity systems. In Kubernetes, this involves Service Accounts, ClusterRoles, and RoleBindings to precisely define who can perform which actions on specific resources. Access methods include kubectl exec for interactive access, the Kubernetes API for programmatic management, and tools like Lens or Rancher for graphical interfaces. All access must be logged and audited in line with PCI DSS. 
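In Kubernetes RBAC terms, least-privilege access to a CDE namespace can be expressed as a Role bound to a group mapped from the organisation's identity provider. A sketch (namespace, role, and group names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cde-read-only
  namespace: cardholder-data
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]   # view-only: no exec, no secrets
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: auditors-cde-read-only
  namespace: cardholder-data
subjects:
  - kind: Group
    name: pci-auditors              # group mapped from the identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: cde-read-only
  apiGroup: rbac.authorization.k8s.io
```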

Managing container update cycles is particularly challenging in the PCI DSS context. Like any system, containers require regular security updates, but their short lifespans (industry studies suggest roughly half live under five minutes) mean traditional patch management is inadequate. Organisations must automate image builds with the latest patches, continuously scan registries for vulnerabilities, and use rolling updates to avoid downtime. CI/CD pipelines must include mandatory security scanning and compliance checks before releasing new container versions. 
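A pipeline step of this kind can be as simple as failing the build when a scanner finds serious vulnerabilities. A sketch using the open-source Trivy scanner (image name and CI variable are illustrative):

```shell
# Rebuild the image so it picks up the latest base-image patches
docker build --pull -t myshop/api:${CI_COMMIT_SHA} .

# Fail the pipeline if HIGH or CRITICAL vulnerabilities are found
trivy image --exit-code 1 --severity HIGH,CRITICAL myshop/api:${CI_COMMIT_SHA}
```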

8. Additional Challenges and the Complexity of Container Environments 

The challenges of implementing PCI DSS in container environments go far beyond basic segmentation and access management. Managing the dynamic list of containers in production is one of the most complex issues – containers are created and destroyed within minutes, change IP addresses and cluster locations, making it difficult to maintain the system inventory required by PCI DSS. Traditional CMDB tools can’t keep up, requiring real-time discovery and automated asset management solutions. 

Dynamic container environments also present challenges for real-time compliance monitoring. PCI DSS requires continuous monitoring of card data access, but when containers exist for just a few minutes, solutions must be implemented to capture and analyse activity in real time and retain audit logs even after containers terminate. 

Managing secrets and certificates in orchestrated environments requires advanced solutions such as HashiCorp Vault or Kubernetes Secrets with encryption at rest, which can dynamically deliver and rotate credentials without restarting applications. Compliance drift detection – identifying deviations from the intended security configuration – must operate continuously, as every new deployment can potentially introduce non-compliance. 
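For example, instead of baking credentials into an image, a Kubernetes Secret can be injected into the pod and rotated independently of the application. A sketch (names are illustrative; the Secret itself should be delivered by a vault integration, never committed to source control):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payment-api
  namespace: cardholder-data
spec:
  containers:
    - name: api
      image: myshop/api:1.0
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials   # created/rotated by the secrets manager
              key: password
```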

These complex challenges require not only deep technical expertise, but also a strategic approach to security architecture and compliance. Patronusec specialises in delivering comprehensive solutions for organisations facing these issues. Our team of experts can help assess existing container environments for PCI DSS compliance, design and implement secure architectures, and build processes for continuous compliance monitoring. 

Contact us to discuss how we can support your organisation in achieving and maintaining PCI DSS compliance in the dynamic world of containerisation. Our experience in aligning business needs with best security practices will help you unlock the full potential of container technology without compromising regulatory compliance. 

Don't buy a pig in a poke – request a free consultation and see how we can assist you.


Use the contact form or contact us directly.

Patronusec Sp. z o.o.

Head Office:
ul. Święty Marcin 29/8
61-806 Poznań, Poland

KRS: 0001039087
REGON: 525433988
NIP: 7831881739
D-U-N-S: 989454390
LEI: 259400NAR8ZOX1O66C64
