Amazon Elastic Kubernetes Service (EKS) Compliance Integrations

Overview

This article describes how to integrate Lacework with your Amazon Elastic Kubernetes Service (EKS) cluster using Helm or Terraform.

Lacework integrates with Amazon EKS to monitor configuration compliance of your cluster resources.

Optionally, Lacework can also monitor workload security on your Amazon EKS cluster. This is provided as an additional option during the installation steps in this article.

note

If you only want to monitor workload security on your EKS clusters (rather than configuration compliance), see Deploy Linux Agent on Kubernetes.

Supported Versions

See Deploy on Kubernetes (Supported Versions) for the operating systems, Kubernetes, and Helm versions that are supported for Amazon EKS Compliance.

EKS Fargate

EKS Fargate is not supported for this type of integration.

EKS Compliance Integration Components

Lacework uses three components to collect data for EKS Compliance integrations:

  • Node Collector - collects data on each Kubernetes node.

    • The Node Collector is an independent component that shares the same installation path as the Lacework Agent, with separate configuration that allows it to operate on EKS nodes.

      info

      If the Lacework Agent is already installed on the cluster nodes, the installation updates the Agent ConfigMap to enable the Node Collector functionality.

      It may also upgrade the Lacework Agent to the latest available release. The minimum agent version for EKS Compliance functionality is v6.2.

    • This component is installed on every Kubernetes node in the cluster.

    • Node data is collected and sent to Lacework every hour.

    • The Node Collector will collect data relating to workload security if you choose to enable it during the installation steps.

  • Cluster Collector - collects Kubernetes cluster data from the Kubernetes API server.

    • This component runs as a single container per cluster.
    • The container runs as a non-root user.
    • It retrieves AWS instance metadata.
    • Cluster data is collected and sent to Lacework every 24 hours.
  • Cloud Collector (through Cloud Provider Integration) - collects data from cloud provider endpoints.

    • This is already provided through the AWS Configuration integration type. See Integrate Lacework with AWS to set this up (if you haven't already done so).
    • The cloud collection occurs every 24 hours at the scheduled time in the Lacework Console (under Settings > Configuration: General > Resource Management Collection Schedule).
Timings for First Report

The EKS Compliance data is complete and available for assessment once all three collections have occurred at least once.

Node and cluster data is sent to Lacework within 2 hours of the collectors being installed on a cluster. Once the cloud collection has occurred, the data is visible in the Lacework platform.

Prerequisites

Installation Steps

Choose one of the following options to integrate Lacework with your EKS cluster:

Option 1: Install using Helm

Follow these steps to install the Node and Cluster collectors on your EKS cluster.

  1. Add the Lacework Helm Charts repository:

    helm repo add lacework https://lacework.github.io/helm-charts/
  2. Choose one of the following options to install the necessary components on your EKS cluster:

    tip

    Add --debug to this command to enter debug mode:

    helm upgrade --debug --install --create-namespace...
    • Configuration compliance integration only:

      Template with Workload Security disabled
      helm upgrade --install --create-namespace --namespace lacework \
      --set laceworkConfig.serverUrl=${LACEWORK_SERVER_URL} \
      --set laceworkConfig.accessToken=${LACEWORK_AGENT_TOKEN} \
      --set laceworkConfig.kubernetesCluster=${KUBERNETES_CLUSTER_NAME} \
      --set laceworkConfig.datacollector=disable \
      --set clusterAgent.enable=True \
      --set clusterAgent.image.repository=lacework/k8scollector \
      --set clusterAgent.clusterType=${KUBERNETES_CLUSTER_TYPE} \
      --set clusterAgent.clusterRegion=${KUBERNETES_CLUSTER_REGION} \
      --set image.repository=lacework/datacollector \
      --repo https://lacework.github.io/helm-charts/ \
      lacework-agent lacework-agent

      Adjust the parameter values to match your environment; see Configuration Parameters for guidance.

    • Configuration compliance and Workload Security integration:

      tip

      Use this option for the following outcomes:

      • You want to install the Lacework Agent to monitor both configuration compliance and workload security of your Kubernetes cluster.
      • You already have the Lacework Agent installed and monitoring workload security, and you also want to monitor configuration compliance of your Kubernetes cluster.
      Template with Workload Security enabled
      helm upgrade --install --create-namespace --namespace lacework \
      --set laceworkConfig.serverUrl=${LACEWORK_SERVER_URL} \
      --set laceworkConfig.accessToken=${LACEWORK_AGENT_TOKEN} \
      --set laceworkConfig.kubernetesCluster=${KUBERNETES_CLUSTER_NAME} \
      --set laceworkConfig.env=${KUBERNETES_ENVIRONMENT_NAME} \
      --set clusterAgent.enable=True \
      --set clusterAgent.image.repository=lacework/k8scollector \
      --set clusterAgent.clusterType=${KUBERNETES_CLUSTER_TYPE} \
      --set clusterAgent.clusterRegion=${KUBERNETES_CLUSTER_REGION} \
      --set image.repository=lacework/datacollector \
      --repo https://lacework.github.io/helm-charts/ \
      lacework-agent lacework-agent

      Adjust the parameter values to match your environment; see Configuration Parameters for guidance.

  3. Display the pods for verification. Choose one of the following options:

    • Run the following kubectl command:

      kubectl get pods -n lacework -o wide
    • Go to Workloads > Kubernetes in the Lacework Console.

      In the Behavior section, click Pod network and then Pod activity.

    Node Collector pods are named lacework-agent-* and Cluster Collector pods lacework-agent-cluster-*.
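The naming convention above can be checked mechanically. The sketch below classifies a pod listing into the two collector groups; the pod names in the here-doc are made-up examples, and on a live cluster you would feed the list from kubectl as noted in the comment:

```shell
# Classify Lacework pods by the naming convention described above.
# The here-doc holds example pod names; on a live cluster, replace it with:
#   pods=$(kubectl get pods -n lacework --no-headers -o custom-columns=NAME:.metadata.name)
pods=$(cat <<'EOF'
lacework-agent-7xk2p
lacework-agent-9qd4z
lacework-agent-cluster-5f9c8d7b6-abcde
EOF
)

# Cluster Collector pods match lacework-agent-cluster-*
echo "$pods" | grep 'lacework-agent-cluster-' | sed 's/^/Cluster Collector: /'

# Node Collector pods match lacework-agent-* but not the cluster prefix
echo "$pods" | grep 'lacework-agent-' | grep -v 'cluster-' | sed 's/^/Node Collector: /'
```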

Configuration Parameters

Required Parameters

Adjust the following values to match your environment:

  • ${LACEWORK_SERVER_URL} - Your Lacework Agent server URL. Examples: https://api.lacework.net, https://aprodus2.agent.lacework.net, https://agent.gprodus1.lacework.net, https://api.fra.lacework.net, https://auprodn1.agent.lacework.net

  • ${LACEWORK_AGENT_TOKEN} - Your Lacework Agent access token. Example: 0123456789abc...

  • ${KUBERNETES_CLUSTER_NAME} - Your EKS cluster name; ensure it matches the name defined in AWS. Example: prd01

  • ${KUBERNETES_ENVIRONMENT_NAME} - A user-defined Kubernetes environment name shown in the Lacework Console. Only essential for Workload Security integrations. Example: Production

  • ${KUBERNETES_CLUSTER_TYPE} - The Kubernetes cluster type. NOTE: For EKS integrations, the cluster type must be written as eks in lower case. Example: eks

  • ${KUBERNETES_CLUSTER_REGION} - The AWS Region of the EKS cluster. Examples: us-west-1, eu-west-1
Optional Parameters

The following parameters are optional:

  • clusterAgent.hostNetworkAccess - The Cluster Collector needs to contact the Kubernetes API server, cloud provider metadata services, and Lacework APIs. Setting this option to true gives the Cluster Collector pod access to the host network, which is needed if your pod network policies restrict access to the host network. Default is false when omitted. Example: true

  • clusterAgent.proxyUrl - Configure the Cluster Collector to use a network proxy by setting the proxy server URL and port. The Cluster Collector uses the laceworkConfig.proxyUrl option first (if it has been set). NOTE: This option is available from Linux Agent 6.12 and above. Example: https://my_proxy_server:443

  • clusterAgent.image.tag - Specify a Lacework Agent tag suitable for your cluster. The default is latest when this parameter is omitted. Example: 5.6.0.8352-amd64

  • image.tag - Specify a Lacework Agent tag suitable for your cluster. The default is latest when this parameter is omitted. Example: 5.6.0.8352-amd64

Add these parameters when running the installation command:

Example
helm upgrade --install --create-namespace --namespace lacework \
...
--set clusterAgent.hostNetworkAccess=true \
--set clusterAgent.proxyUrl="https://my_proxy_server:443" \
--set clusterAgent.image.tag=5.6.0.8352-amd64 \
--set image.tag=5.6.0.8352-amd64 \
...

See Helm Configuration Options for additional parameters that can also be set using Helm.
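As an alternative to repeating --set flags, the same parameters can be kept in a values file and passed to Helm with -f (standard Helm behavior). This is a sketch using the example placeholder values from this article; substitute your own:

```yaml
# values.yaml - keys mirror the --set paths used in the commands above.
# All values are example placeholders.
laceworkConfig:
  serverUrl: https://api.lacework.net
  accessToken: 0123456789abc
  kubernetesCluster: prd01
  # datacollector: disable   # uncomment for a compliance-only integration
clusterAgent:
  enable: true
  image:
    repository: lacework/k8scollector
  clusterType: eks
  clusterRegion: us-west-1
image:
  repository: lacework/datacollector
```

It would then be installed with, for example, helm upgrade --install --create-namespace --namespace lacework -f values.yaml --repo https://lacework.github.io/helm-charts/ lacework-agent lacework-agent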

Option 2: Install using Terraform

Use the Lacework terraform-kubernetes-agent module to create a Secret and DaemonSet and deploy the Node and Cluster collectors in your EKS cluster.

DaemonSets are an easy way to deploy a Kubernetes pod onto every node in the cluster. This is useful for monitoring tools such as Lacework.

If you are new to the Lacework Terraform Provider or Lacework Terraform Modules, read the Terraform for Lacework Overview article to learn the basics on how to configure the provider and more.

This topic assumes familiarity with the Terraform Provider for Kubernetes maintained by Hashicorp on the Terraform Registry.

Run Terraform

The following code snippets deploy the DaemonSet to the Kubernetes cluster being managed with Terraform.

info

Before running this code, ensure that the following settings match the configurations for your deployment:

  • config_path
  • config_context
  • lacework_access_token
  • lacework_server_url
  • lacework_cluster_name
  • lacework_cluster_exclusive
    • true = Configuration Compliance integration only.
    • false = Configuration Compliance and Workload Security integration. Set to false or omit this variable for the following outcomes:
      • You want to install the Lacework Agent to monitor both configuration compliance and workload security of your Kubernetes cluster.
      • You already have the Lacework Agent installed and monitoring workload security, and you also want to monitor configuration compliance of your Kubernetes cluster.
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
    lacework = {
      source  = "lacework/lacework"
      version = "~> 1.0"
    }
  }
}

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "my-context"
}

data "aws_region" "current" {}

# Use the access token resource below if you intend
# to generate a new access token for this integration.
resource "lacework_agent_access_token" "k8s" {
  name = "prod"
}

# Use the data source below if you choose to use an
# existing access token for this integration.
data "lacework_agent_access_token" "k8s" {
  name = "k8s-deployments"
}

module "lacework_k8s_datacollector" {
  source  = "lacework/agent/kubernetes"
  version = "~> 2.0"

  # Use one of the lacework_access_token options below depending
  # on whether you are generating a new token or using an existing one.

  # Option 1: Generate a new access token
  #lacework_access_token = lacework_agent_access_token.k8s.token

  # Option 2: Use an existing access token
  #lacework_access_token = data.lacework_agent_access_token.k8s.token

  # The lacework_server_url property is optional if your Lacework tenant
  # is deployed in the US, but mandatory for non-US tenants.
  # See https://docs.lacework.net/onboarding/agent-server-url for endpoints.

  #lacework_server_url = "<agent-server-url>"

  # Provide your EKS cluster name and ensure it matches the name defined in AWS.
  # https://docs.aws.amazon.com/cli/latest/reference/eks/list-clusters.html#examples
  lacework_cluster_name = "My-EKS-Cluster"

  # Set lacework_cluster_exclusive to true if you only want a Configuration Compliance integration.
  # Default is false.

  #lacework_cluster_exclusive = true

  enable_cluster_agent    = true
  lacework_cluster_region = data.aws_region.current.name
  lacework_cluster_type   = "eks"
}
  1. Open an editor and create a file called main.tf.
  2. Copy and paste the code snippet above into the main.tf file and save it.
  3. Run terraform init to download the providers declared in the snippet.
  4. Run terraform plan and review the changes that will be applied.
  5. Once satisfied with the changes, run terraform apply -auto-approve to execute Terraform.
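When editing main.tf, exactly one of the two lacework_access_token lines should be uncommented. Note how each reference is written (shown here as a fragment; the resource and data source names match the snippet above):

```hcl
# Option 1: token generated by the lacework_agent_access_token resource.
lacework_access_token = lacework_agent_access_token.k8s.token

# Option 2: existing token looked up via the data source. The reference must
# not be quoted, or Terraform passes the literal string instead of the token.
lacework_access_token = data.lacework_agent_access_token.k8s.token
```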

Validate the Changes

After Terraform executes, you can use kubectl or check the Lacework Console to validate the DaemonSet is deployed successfully:

  • Run the following kubectl command:

    kubectl get pods -n lacework -o wide
  • Go to Workloads > Kubernetes in the Lacework Console.

    In the Behavior section, click Pod network and then Pod activity.

Node Collector pods are named lacework-agent-* and Cluster Collector pods lacework-agent-cluster-*.

Next Steps

See Kubernetes Benchmarks for details on how to check whether your resources are compliant with CIS and other regulatory benchmarks.