Install Linux Agent on Kubernetes

You can use the following methods to install the Lacework Linux agent on Kubernetes:

  • Install with a Helm Chart - Helm is a package manager for Kubernetes that uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. You can download the Lacework Helm chart and use it to install the agent.
  • Deploy with a DaemonSet - DaemonSets are an easy way to deploy a Kubernetes pod onto every node in the cluster. This is useful for monitoring tools such as Lacework. You can use the DaemonSet method to deploy the agent onto any Kubernetes cluster, including hosted versions like AKS, EKS, and GKE.
  • Install with Terraform - For organizations using Hashicorp Terraform to automate their environments, Lacework provides the terraform-kubernetes-agent module to create a Secret and DaemonSet for installing the agent in a Kubernetes cluster.
  • Install in gVisor on Kubernetes - gVisor provides an additional layer of isolation between running applications and the host operating system. You can install the agent in gVisor on a Kubernetes cluster.

After you install the agent, it takes 10 to 15 minutes for agent data to appear in the Lacework Console under Agents. You can also view your Kubernetes cluster in the Lacework Console under Workloads > Kubernetes. If your cluster is not displayed, see How Lacework Derives the Kubernetes Cluster Name.

note

The datacollector pod uses privileged containers and requires access to host PID namespace, networking, and volumes.
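
These requirements correspond to pod spec settings like the following condensed sketch (not the full manifest shipped with the installer):

  spec:
    hostPID: true        # access to the host PID namespace
    hostNetwork: true    # access to host networking
    containers:
    - name: lacework
      securityContext:
        privileged: true # privileged container
      # host volumes are also mounted in the real manifest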

Prerequisites

  • A Kubernetes cluster on a supported Kubernetes environment. For more information, see Supported Kubernetes Environments.
  • To enable the agent to read the cluster name:
    • If you created the Kubernetes cluster in a K8s orchestrator that supports machine tags such as AKS, EKS, and GKE, add the KubernetesCluster machine tag for the cluster using the instructions at Add KubernetesCluster Machine Tag.
    • If you created your own Kubernetes cluster (rather than using EKS, AKS, GKE, or a similar orchestrator), specify the cluster name using the KubernetesCluster tag in the config.json file using the instructions at Set KubernetesCluster Agent Tag in config.json File.
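
For reference, a minimal config.json that sets the cluster name might look like the following sketch (the token and cluster name are placeholders):

  {
    "tokens": { "AccessToken": "YOUR_ACCESS_TOKEN" },
    "tags": { "KubernetesCluster": "my-cluster" }
  }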

Supported Kubernetes Environments

The Lacework Linux agent supports the following Kubernetes versions, managed Kubernetes services, container network interfaces (CNI), service meshes, and container runtime engines:

Kubernetes Environment      Environment Name
Kubernetes versions         1.9.x to 1.29
K8s orchestrators           AKS, EKS, EKS Fargate, ECS Fargate, GKE, MicroK8s, OpenShift, Rancher, ROSA
CNI                         Weavenet, Calico, Flannel, Cilium, kubenet
Service mesh                Linkerd 2.11
Container runtime engine    Docker, containerd, CRI-O

Install using Helm

Supported Versions

  • EKS (Bottlerocket and Amazon Linux)
  • Helm v3.1.x to v3.14.x
  • Kops 1.20
  • Kubernetes v1.10 to v1.29
  • Ubuntu 20.04

Use Helm to Install the Agent (Charts Repository)

Helm Charts help you define, install, and upgrade Kubernetes applications.

  1. Add the Lacework Helm Charts repository:

    helm repo add lacework https://lacework.github.io/helm-charts/
  2. Install the Helm charts or upgrade an existing Helm chart. Replace the LACEWORK_AGENT_TOKEN value with your agent access token. If the tenant you are using is located outside North America, also replace the LACEWORK_SERVER_URL value.

    note

    KUBERNETES_CLUSTER_NAME and KUBERNETES_ENVIRONMENT_NAME are optional. Replace them with values from your setup. To change the KUBERNETES_CLUSTER_NAME, see How Lacework Derives the Kubernetes Cluster Name.

    If you are using a tenant located in North America, run the following command:

    helm upgrade --install --namespace lacework --create-namespace \
    --set laceworkConfig.accessToken=${LACEWORK_AGENT_TOKEN} \
    --set laceworkConfig.kubernetesCluster=${KUBERNETES_CLUSTER_NAME} \
    --set laceworkConfig.env=${KUBERNETES_ENVIRONMENT_NAME} \
    lacework-agent lacework/lacework-agent

    If you are using a tenant located outside of North America, run the following command:

    helm upgrade --install --namespace lacework --create-namespace \
    --set laceworkConfig.accessToken=${LACEWORK_AGENT_TOKEN} \
    --set laceworkConfig.serverUrl=${LACEWORK_SERVER_URL} \
    --set laceworkConfig.kubernetesCluster=${KUBERNETES_CLUSTER_NAME} \
    --set laceworkConfig.env=${KUBERNETES_ENVIRONMENT_NAME} \
    lacework-agent lacework/lacework-agent
  3. Verify the pods.

    kubectl get pods -n lacework -o wide
  4. After you install the agent, it takes 10 to 15 minutes for the agent data to appear in the Lacework Console. In the Lacework Console, go to Workloads > Kubernetes and click Clusters to verify that the cluster on which you installed the agent is displayed. If the cluster is not displayed, see How Lacework Derives the Kubernetes Cluster Name.

Install Using the Charts from the Lacework Release Page

note

Lacework recommends installing from the charts repository rather than from the Lacework Release Page if possible. Installing from the charts repository does not require editing the Chart.yaml file, whereas this method does.

Get the Helm Chart for the Agent

The Helm chart is available as part of the agent release tarball from the Lacework Agent Release GitHub repository (v2.12.1 or later).

The Helm chart includes the following:

  • ./helm/
  • ./helm/lacework-agent/
  • ./helm/lacework-agent/Chart.yaml
  • ./helm/lacework-agent/templates/
  • ./helm/lacework-agent/templates/_helpers.tpl
  • ./helm/lacework-agent/templates/configmap.yaml
  • ./helm/lacework-agent/templates/daemonset.yaml
  • ./helm/lacework-agent/values.yaml

Edit Chart.yaml

For Helm chart v4.2, change the version: 4.2.0.218 line in Chart.yaml to version: 4.2.0.
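
For example, you could make this change from the command line (a sketch that assumes the release tarball was extracted in the current directory and GNU sed):

  sed -i 's/^version: 4\.2\.0\.218$/version: 4.2.0/' helm/lacework-agent/Chart.yaml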

Use Helm to Install the Agent (Release Page)

Replace the example text with your own values.

  1. Install the charts or upgrade an existing installation.

    helm upgrade --install --namespace lacework --create-namespace \
    --set laceworkConfig.accessToken=${LACEWORK_AGENT_TOKEN} \
    --set laceworkConfig.serverUrl=${LACEWORK_SERVER_URL} \
    --set laceworkConfig.kubernetesCluster=${KUBERNETES_CLUSTER_NAME} \
    --set laceworkConfig.env=${KUBERNETES_ENVIRONMENT_NAME} \
    lacework-agent helm.tar.gz
  2. Verify the pods.

    kubectl get pods -n lacework -o wide
  3. After you install the agent, it takes 10 to 15 minutes for the agent data to appear in the Lacework Console. In the Lacework Console, go to Workloads > Kubernetes and click Clusters to verify that the cluster on which you installed the agent is displayed. If the cluster is not displayed, see How Lacework Derives the Kubernetes Cluster Name.

If you have an autogenerated or custom Helm deployment and these steps do not work, you can optionally:

  1. Change "additionalProperties": true in values.schema.json. Lacework supports this change, but it is not encouraged.
  2. Use Helm to install the agent (charts repository).

Install on OpenShift

Install with the cluster-admin Role

Use the normal Helm installation instructions to install the Lacework agent on OpenShift.

Install with a Service Account

You can also install the Lacework agent using Helm charts and a service account.

Before deploying the Helm chart, ensure that the service account has permissions to create privileged pods by running the following command:

oc adm policy add-scc-to-user privileged -z ${SERVICE_ACCOUNT_NAME}

To specify the service account when installing the agent with a Helm chart, do one of the following:

  1. Specify a service account when installing the Helm chart by adding laceworkConfig.serviceAccountName to the Helm command:

    --set laceworkConfig.serviceAccountName="${SERVICE_ACCOUNT_NAME}"

  2. Modify the values.yaml file and add the service account:

      # [Optional] Specify the service account for agent pods
    serviceAccountName: ${SERVICE_ACCOUNT_NAME}
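
For example, a complete install command with the service account option might look like this (a sketch based on the standard install command above):

    helm upgrade --install --namespace lacework --create-namespace \
    --set laceworkConfig.accessToken=${LACEWORK_AGENT_TOKEN} \
    --set laceworkConfig.serviceAccountName="${SERVICE_ACCOUNT_NAME}" \
    lacework-agent lacework/lacework-agent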

You can specify that the agent runs on all nodes in a cluster or in a subset of nodes in the cluster.

Enable the Lacework Agent on all Nodes

To run the Lacework agent on all nodes in your cluster, specify the following toleration during installation in one of the following ways:

  1. Enter a command, such as: --set "tolerations[0].effect=NoSchedule" --set "tolerations[0].operator=Exists"

  2. Modify the values.yaml file and add data similar to the following:

    tolerations:
    # Allow Lacework agent to run on all nodes in case of a taint
    - effect: NoSchedule
      operator: Exists

Enable the Lacework Agent on a Subset of Nodes

To set multiple tolerations for the Lacework agent, set an array of desired tolerations in one of the following ways:

  1. Enter the following command and repeat for each scheduling condition: --set "tolerations[0].effect=NoSchedule" --set "tolerations[0].key=node-role.kubernetes.io/master". Ensure you increment the array index for each scheduling condition, as shown in the example after this list.

  2. Modify the values.yaml file and add data similar to the following:

    tolerations:
    # Allow Lacework agent to run on nodes with the following taints
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
    - effect: NoSchedule
      key: node-role.kubernetes.io/control-plane
    - effect: NoSchedule
      key: node-role.kubernetes.io/infra
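
For example, the equivalent --set flags for the first two tolerations above would be the following; note the incremented array indices:

    --set "tolerations[0].effect=NoSchedule" --set "tolerations[0].key=node-role.kubernetes.io/master" \
    --set "tolerations[1].effect=NoSchedule" --set "tolerations[1].key=node-role.kubernetes.io/control-plane"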

Helm Configuration Options

Specify the Container Runtime that the Agent Uses to Discover Containers

By default, the agent automatically discovers the container runtime (containerd, cri-o, and docker). You can use the containerRuntime option to specify the runtime that you want the agent to use to discover containers.

To specify the container runtime that the agent uses, do one of the following:

  1. Use the following option with the helm install or helm upgrade command:
    --set laceworkConfig.containerRuntime=docker
  2. Modify the values.yaml file and add data similar to the following:
    containerRuntime: docker
note

If either the containerRuntime or the containerEngineEndpoint setting is wrong, the agent will not detect containers.

Specify the Endpoint that the Agent Uses to Discover Containers

By default, the agent uses the default endpoint for the system's container runtime. You can use the containerEngineEndpoint option to specify any valid URL, TCP endpoint, or a Unix socket as the endpoint.

To specify the endpoint that the agent uses to discover containers, do one of the following:

  1. Use the following option with the helm install or helm upgrade command:
    --set laceworkConfig.containerEngineEndpoint=unix:///run/docker.sock
  2. Modify the values.yaml file and add data similar to the following:
    containerEngineEndpoint: unix:///run/docker.sock
note

If either the containerRuntime or the containerEngineEndpoint setting is wrong, the agent will not detect containers.

Specify Nodes for Your Agent Deployment

Tolerations let you run the agent on nodes that have scheduling constraints such as master nodes or infrastructure nodes (for OpenShift).

By default, the Lacework agent is permitted to run on worker nodes and master nodes in your Kubernetes cluster. This is done by specifying the toleration as follows:

# Allow Lacework agent to run on all nodes including master node
tolerations:
- effect: NoSchedule
  key: node-role.kubernetes.io/master
- effect: NoSchedule
  key: node-role.kubernetes.io/control-plane
# Allow Lacework agent to run on all nodes in case of a taint
# - effect: NoSchedule
#   operator: Exists

Prevent Agent Pods from Being Evicted

To prevent agent pods from being evicted in oversubscribed clusters, Lacework recommends that you assign a higher priority for agent pods. For more information about pod priority, see Pod Priority and Preemption.

You can assign a higher priority for agent pods in one of the following ways:

  1. Use the following option with the helm install or helm upgrade command:
    --set priorityClassCreate=true
  2. Change priorityClassCreate: false in the values.yaml file to priorityClassCreate: true.

Specify CPU Requests and Limits

CPU requests specify the minimum CPU resources available to containers. CPU limits specify the maximum CPU resources available to containers. For more information, see Resource Management for Pods and Containers.

The default CPU request is 200m. The default CPU limit is 500m.

You can specify the CPU requests and limits in one of the following ways:

  1. Enter a command such as the following on the command line:

    --set resources.requests.cpu=300m
    --set resources.limits.cpu=500m
  2. Modify the values.yaml file and add data similar to the following:

    resources:
      requests:
        cpu: 300m
      limits:
        cpu: 500m

Specify Memory Requests and Limits

Memory requests specify the minimum memory available to containers. Memory limits specify the maximum memory available to containers. For more information, see Resource Management for Pods and Containers.

The default memory request is 512Mi. The default memory limit is 1450Mi.

You can specify the memory requests and limits in one of the following ways:

  1. Enter a command such as the following on the command line:

    --set resources.requests.memory=384Mi
    --set resources.limits.memory=512Mi
  2. Modify the values.yaml file and add data similar to the following:

    resources:
      requests:
        memory: 384Mi
      limits:
        memory: 512Mi

Specify a Proxy URL on Helm Charts

Specifying a proxy server URL routes agent traffic through that proxy.

You can set the proxy server URL in your Lacework Helm charts in one of the following ways:

  1. Enter a command such as --set laceworkConfig.proxyUrl=${LACEWORK_PROXY_URL} on the command line.

  2. Modify the values.yaml file and add data similar to the following:

    # [Optional] Specify a proxy server URL for routing agent traffic
    proxyUrl: value

Configure File Integrity Monitoring Properties

Enable or Disable FIM

Enable FIM in one of the following ways:

  1. Enter a command such as --set laceworkConfig.fim.mode=enable on the command line.

  2. Modify the values.yaml file and add data similar to the following:

      mode: enable

Disable FIM in one of the following ways:

  1. Enter a command such as --set laceworkConfig.fim.mode=disable on the command line.

  2. Modify the values.yaml file and add data similar to the following:

      mode: disable
Specify the File Path

You can override default paths for FIM using this property in one of the following ways:

  1. Enter a command such as --set laceworkConfig.fim.filePath={<path1>,<path2>, ...} on the command line.

  2. Modify the values.yaml file and add data similar to the following:

      filePath: [<path1>, <path2>, ...]
Specify the File Path to Ignore

Alternatively, you can override default paths by specifying files to ignore for FIM in one of the following ways:

  1. Enter a command such as --set laceworkConfig.fim.fileIgnore={<path1>,<path2>, ...} on the command line.

  2. Modify the values.yaml file and add data similar to the following:

      fileIgnore: [<path1>, <path2>, ...]
Prevent the Access Timestamp from Being Used in Hash Computation

You can prevent the access timestamp from being used in one of the following ways:

  1. Enter a command such as --set laceworkConfig.fim.noAtime=true on the command line.

  2. Modify the values.yaml file and add data similar to the following:

      noAtime: true

Alternatively, you can allow the access timestamp to be used in one of the following ways:

  1. Enter a command such as --set laceworkConfig.fim.noAtime=false on the command line.

  2. Modify the values.yaml file and add data similar to the following:

      noAtime: false
Specify the FIM Scan Start Time

You can specify a start time for the daily FIM scan using this property in one of the following ways:

  1. Enter a command such as --set laceworkConfig.fim.runAt=<HH:MM> on the command line.

  2. Modify the values.yaml file and add data similar to the following:

      runAt: <HH:MM>
Specify the FIM Scan Interval

You can specify the FIM scan interval using the crawlInterval property in one of the following ways:

  1. Enter a command such as --set laceworkConfig.fim.crawlInterval=<time_interval> on the command line.

  2. Modify the values.yaml file and add data similar to the following:

      crawlInterval: <time_interval>

The default value is 1320 minutes. If you specify 1320 minutes or a greater value, the agent sends the data for all files that were scanned for the first FIM scan to the Lacework platform, but only the data for new or changed files for every subsequent FIM scan. If you specify a value less than 1320 minutes, agents will send the data for all files that were scanned for every FIM scan to the Lacework platform.
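
Taken together, the FIM options above map to a fim section in values.yaml similar to the following sketch (the paths and times are placeholders):

    laceworkConfig:
      fim:
        mode: enable
        filePath: [/etc, /usr/bin]
        fileIgnore: [/etc/hostname]
        noAtime: true
        runAt: "02:00"
        crawlInterval: 1320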

Enable Active Package Detection

Active package detection enables you to know whether a vulnerable package is being used by an application on your host and prioritize fixing active vulnerable packages first.

For the list of supported package managers and types, see Which package managers and types are supported?.

Use the Package Status filter in the Host Vulnerability page to view active or inactive vulnerable packages on hosts. See Host Vulnerability - Package Status for details.

note

For some package types, you also need to enable Agentless Workload Scanning in your environment. See Which package managers and types are supported? for details.

By default, active package detection is disabled.

  • To enable active package detection on hosts, do one of the following:
    1. Use the following option with the helm install or helm upgrade command:
      --set laceworkConfig.codeaware.enable=all
    2. Modify the values.yaml file and add data similar to the following:
      codeaware:
        enable: all
  • To enable active package detection on hosts and containers, do one of the following:
    1. Use the following option with the helm install or helm upgrade command:
      --set laceworkConfig.codeaware.enable=experimental
    2. Modify the values.yaml file and add data similar to the following:
      codeaware:
        enable: experimental
  • If active package detection is enabled, do one of the following to disable it:
    1. Use the following option with the helm install or helm upgrade command:
      --set laceworkConfig.codeaware.enable=false
    2. Modify the values.yaml file and add data similar to the following:
      codeaware:
        enable: false

Specify Package Scan Options

By default, package scan is enabled.

To disable package scan, do one of the following:

  1. Use the following option with the helm install or helm upgrade command:
    --set laceworkConfig.packagescan.enable=false
  2. Modify the values.yaml file and add data similar to the following:
    packagescan:
      enable: false

If package scan is disabled, do one of the following to enable it:

  1. Use the following option with the helm install or helm upgrade command:
    --set laceworkConfig.packagescan.enable=true
  2. Modify the values.yaml file and add data similar to the following:
    packagescan:
      enable: true

To specify the interval (in minutes) between package scans, do one of the following:

  1. Use the following option with the helm install or helm upgrade command:
    --set laceworkConfig.packagescan.interval=60
  2. Modify the values.yaml file and add data similar to the following:
    packagescan:
      interval: 60

Specify Process Scan Options

By default, process scan is enabled.

To disable process scan, do one of the following:

  1. Use the following option with the helm install or helm upgrade command:
    --set laceworkConfig.procscan.enable=false
  2. Modify the values.yaml file and add data similar to the following:
    procscan:
      enable: false

If process scan is disabled, do one of the following to enable it:

  1. Use the following option with the helm install or helm upgrade command:
    --set laceworkConfig.procscan.enable=true
  2. Modify the values.yaml file and add data similar to the following:
    procscan:
      enable: true

To specify the interval (in minutes) between process scans, do one of the following:

  1. Use the following option with the helm install or helm upgrade command:
    --set laceworkConfig.procscan.interval=60
  2. Modify the values.yaml file and add data similar to the following:
    procscan:
      interval: 60

Filter Executables Tracked by the Agent

By default, the agent collects command-line arguments for all executables when it is collecting process metadata. You can use the cmdlinefilter option to selectively enable or disable collection of command-line arguments for executables.

To collect command-line arguments for specific executables only, do one of the following:

  1. Use one of the following with the helm install or helm upgrade command:

    • To collect data for one executable:
      --set laceworkConfig.cmdlinefilter.allow=java
    • To collect data for more than one executable, use a comma separated list:
      --set laceworkConfig.cmdlinefilter.allow=java,python
    • To collect data for all executables, use the * wildcard. This is the default and recommended setting.
      --set laceworkConfig.cmdlinefilter.allow=*
  2. Use one of the following in the values.yaml file:

    • To collect data for one executable:
      cmdlinefilter:
        allow: java
    • To collect data for more than one executable, use a comma separated list:
      cmdlinefilter:
        allow: java,python
    • To collect data for all executables, use the * wildcard. This is the default and recommended setting.
      cmdlinefilter:
        allow: "*"

To disable collection of command-line arguments for specific executables, do one of the following:

  1. Use one of the following with the helm install or helm upgrade command:

    • To disable collection of data for one executable:
      --set laceworkConfig.cmdlinefilter.disallow=java
    • To disable collection of data for more than one executable, use a comma separated list:
      --set laceworkConfig.cmdlinefilter.disallow=java,python
    • To disable collection of data for all executables, use the * wildcard. This setting stops data collection for all executables and is not recommended.
      --set laceworkConfig.cmdlinefilter.disallow=*
  2. Use one of the following in the values.yaml file:

    • To disable collection of data for one executable:
      cmdlinefilter:
        disallow: java
    • To disable collection of data for more than one executable, use a comma separated list:
      cmdlinefilter:
        disallow: java,python
    • To disable collection of data for all executables, use the * wildcard. This setting stops data collection for all executables and is not recommended.
      cmdlinefilter:
        disallow: "*"
info

Limiting the data collected by the agent reduces Lacework’s process-aware threat and intrusion detection in your cloud environment and limits the alerts that Lacework generates. If you must disable sensitive data collection in your environment, Lacework recommends disabling the smallest set of executables possible.

Specify Image Pull Secrets

Image pull secrets let you fetch the Lacework agent image from private repositories and can help you avoid registry rate limits.

You can configure image pull secrets in one of the following ways:

  1. Modify your Helm install/upgrade command with the following options:

      --set image.imagePullSecrets.name=<registrySecret>
  2. Modify the values.yaml file and add data similar to:

    # [Optional] imagePullSecrets.
    # https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    imagePullSecrets:
    - name: <registrySecret>

    Where <registrySecret> is the name of the secret that contains the credentials necessary to fetch the Lacework agent image.
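
If you still need to create the registry secret, you can use a standard kubectl command such as the following (the server, username, and password values are placeholders for your registry credentials):

    kubectl create secret docker-registry <registrySecret> \
      --docker-server=<registry-server> \
      --docker-username=<username> \
      --docker-password=<password> \
      --namespace lacework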

Specify an Existing Secret

Existing secrets allow you to store the Lacework access token outside of Helm.

You can use an existing secret in your Lacework Helm charts in one of the following ways:

  1. Enter a command such as the following on the command line:

    --set laceworkConfig.accessToken.existingSecret.key="lacework_agent_token"
    --set laceworkConfig.accessToken.existingSecret.name="lacework-agent-secret"
  2. Modify the values.yaml file and add data similar to the following:

    laceworkConfig:
      # [Required] An access token is required before running agents.
      # Visit your Lacework Console URL, for example: https://lacework.lacework.net
      accessToken:
        existingSecret:
          key: lacework_agent_token
          name: lacework-agent-secret
note

Kubernetes requires that the existing secret be base64 encoded.
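
For example, the following command creates a matching secret; kubectl base64-encodes values passed with --from-literal automatically (the names match the snippet above):

    kubectl create secret generic lacework-agent-secret \
      --from-literal=lacework_agent_token=${LACEWORK_AGENT_TOKEN} \
      --namespace lacework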

Specify the AWS Metadata Request Interval

The agent retrieves metadata tags from AWS to enable you to quickly identify where you need to take actions to fix alerts displayed in the Lacework Console. To ensure that the latest metadata is displayed in the Lacework Console, the agent periodically makes describe-tags API calls to retrieve tags from AWS.

To limit the number of API calls, specify the interval during which the agent retrieves the tags. The interval can be specified in ns (nanoseconds), us (microseconds), ms (milliseconds), s (seconds), m (minutes), or h (hours).

  • For example, to retrieve the tags once every 15 minutes, do one of the following:

    1. Use the following option with the helm install or helm upgrade command:

      --set laceworkConfig.metadataRequestInterval="15m"
    2. Modify the values.yaml file and add data similar to the following:

      metadataRequestInterval: 15m
  • To disable the agent from retrieving tags from AWS, do one of the following:

    1. Use the following option with the helm install or helm upgrade command:

      --set laceworkConfig.metadataRequestInterval="0"
    2. Modify the values.yaml file and add data similar to the following:

      metadataRequestInterval: 0

Specify Custom Annotations on Helm Charts

Annotations are a way of adding non-identifying metadata to Kubernetes objects. They are used by external tools to provide extra functionalities.

You can set annotations in your Lacework Helm charts in one of the following ways:

  1. Enter a command such as --set laceworkConfig.annotations.<key>=<value> on the command line.

  2. Modify the values.yaml file and add data similar to the following:

    # [Optional] Define custom annotations to use for identifying resources created by these charts
    annotations:
      key: value
      another_key: another_value

Specify Custom Labels on Helm Charts

Similar to custom annotations, custom labels are a way of adding non-identifying metadata to Kubernetes objects. They are used by external tools to provide extra functionalities.

You can set labels in your Lacework Helm charts in one of the following ways:

  1. Enter a command such as --set laceworkConfig.labels.<key>=<value> on the command line.

  2. Modify the values.yaml file and add data similar to the following:

    # [Optional] Define custom labels to use for identifying resources created by these charts
    labels:
      key: value
      another_key: another_value

Specify Tags to Categorize Agents

You can use the tags option to specify name/value tags to categorize your agents. For more information, see Adding Agent Tags.

To specify tags, do one of the following:

  1. Use the following option with the helm install or helm upgrade command:

    --set laceworkConfig.tags.<tagname1>=<value1>
    --set laceworkConfig.tags.<tagname2>=<value2>

    For example:

    --set laceworkConfig.tags.location=austin
    --set laceworkConfig.tags.owner=pete
  2. Modify the values.yaml file and add data similar to the following:

    tags:
    <tagname1>: <value1>
    <tagname2>: <value2>

    For example:

    tags:
    location: austin
    owner: pete

Specify the perfmode Property on Helm Charts

You can set the perfmode property in your Lacework Helm charts in one of the following ways:

  1. Enter a command such as --set laceworkConfig.perfmode=PERFMODE_TYPE on the command line.

  2. Modify the values.yaml file and add data similar to the following:

    # [Optional] Set to one of the other modes like ebpflite, scan, or lite for load balancers.
    perfmode: PERFMODE_TYPE

Where PERFMODE_TYPE can be one of the following values:

  • ebpflite - The eBPF lite mode.
  • lite - The lite mode.
  • scan - The scan mode.
  • null - Disables the perfmode property. The agent runs in normal mode.
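
For example, to run the agent in eBPF lite mode:

    --set laceworkConfig.perfmode=ebpflite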

Disable or Enable Logging to stdout

Logging to stdout is enabled by default for Lacework Helm charts. You can disable stdout logging in one of the following ways:

  1. Enter a command such as --set laceworkConfig.stdoutLogging=false on the command line.

  2. Modify the values.yaml file and add data similar to the following:

      stdoutLogging: false

Install a Specific Version of the Lacework Agent Using Helm Charts

The Lacework Helm Charts Repository contains a Helm chart version for every agent version. By default, the latest version of the Lacework agent is installed when you use the Lacework Helm Charts Repository to install the agent. You can use the chart version corresponding to an agent version to install a specific version of the agent.

  1. Add the Lacework Helm Charts repository:

    helm repo add lacework https://lacework.github.io/helm-charts/
  2. If the repository was already added on your machine, update the repository:

    helm repo update lacework
  3. View the chart versions available in the repository:

    helm search repo lacework --versions

    NAME                     CHART VERSION  APP VERSION  DESCRIPTION
    lacework/lacework-agent  6.2.0          1.0          Lacework Agent
    lacework/lacework-agent  6.1.2          1.0          Lacework Agent
    lacework/lacework-agent  6.1.0          1.0          Lacework Agent
    lacework/lacework-agent  6.0.2          1.0          Lacework Agent
    lacework/lacework-agent  6.0.1          1.0          Lacework Agent
    lacework/lacework-agent  6.0.0          1.0          Lacework Agent
    lacework/lacework-agent  5.9.0          1.0          Lacework Agent
    lacework/lacework-agent  5.8.0          1.0          Lacework Agent
    lacework/lacework-agent  5.7.0          1.0          Lacework Agent
    lacework/lacework-agent  5.6.0          1.0          Lacework Agent
    lacework/lacework-agent  5.5.2          1.0          Lacework Agent

    In this example, the 6.2.0 chart version corresponds to the 6.2.0 version of the agent.

  4. Use the --version option to use a specific chart version to install the agent. For example, run the following command to install the 6.2.0 version of the agent with the 6.2.0 chart version:

    helm upgrade --install --version 6.2.0 --namespace lacework --create-namespace \
    --set laceworkConfig.accessToken=${LACEWORK_AGENT_TOKEN} \
    --set laceworkConfig.kubernetesCluster=${KUBERNETES_CLUSTER_NAME} \
    --set laceworkConfig.env=${KUBERNETES_ENVIRONMENT_NAME} \
    lacework-agent lacework/lacework-agent

Deploy with a DaemonSet

DaemonSet Visibility

When an agent is installed on a node as a DaemonSet pod or on the node itself, the agent has visibility into the following resources:

  • Processes running on the host.
  • Processes running in a container that make a network connection (server or client).
  • Container-internal servers and processes that are actively listening on ports.
  • File Integrity Monitoring (FIM) on the host.
  • Host vulnerability on the host.

DaemonSet Deployment Using a ConfigMap

  1. Download the Kubernetes Config (lacework-cfg-k8s.yaml) and Kubernetes Orchestration (lacework-k8s.yaml) files using the instructions in Download Linux Agent Installer.

  2. Create the namespace for the pods:

    kubectl create namespace lacework
    note

    Lacework recommends assigning a namespace to the DaemonSet config.

  3. Using the kubectl command line interface, add the Lacework configuration file into the cluster in the newly created namespace.

    kubectl create -f lacework-cfg-k8s.yaml -n lacework
  4. Instruct the Kubernetes orchestrator to deploy an agent on all nodes in the cluster, including the master.
    To change the CPU and memory limits, see Change Agent Resource Installation Limits on K8s Environments.

    kubectl apply -f lacework-k8s.yaml -n lacework
  5. After you install the agent, it takes 10 to 15 minutes for the agent data to appear in the Lacework Console. In the Lacework Console, go to Workloads > Kubernetes and click Clusters to verify that the cluster on which you installed the agent is displayed. If the cluster is not displayed, see How Lacework Derives the Kubernetes Cluster Name.

  6. Repeat the above steps for each Kubernetes cluster.
    The config.json file is embedded in the lacework-cfg-k8s.yaml file.
    To customize FIM or add tags in a Kubernetes environment, edit the configuration section of the YAML file and push the revised lacework-cfg-k8s.yaml file to the cluster using the following command:

    kubectl replace -f lacework-cfg-k8s.yaml -n lacework
    note

    Lacework always recommends assigning a namespace to the DaemonSet config.
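
For instance, the configuration section embedded in lacework-cfg-k8s.yaml might be edited as follows before running kubectl replace (a sketch; the surrounding ConfigMap fields are omitted, and the token and cluster name are placeholders):

    data:
      config.json: |
        {
          "tokens": { "AccessToken": "YOUR_ACCESS_TOKEN" },
          "tags": { "Env": "k8s", "KubernetesCluster": "CLUSTER_NAME" }
        }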

DaemonSet Deployment Using a Secret

  1. Download the Kubernetes orchestration file (lacework-k8s.yaml) using the instructions in Download Linux Agent Installer.

  2. Edit the lacework-k8s.yaml file and make the following changes:

    • Change configMap to secret
    • Change name to secretName
  3. Use the following command in the kubectl command line interface to create the Lacework access token secret. In the command, replace:

    • YOUR_ACCESS_TOKEN with the agent access token. For more information, see Create Agent Access Token.
    • CLUSTER_NAME with your Kubernetes cluster name.
    • SERVER_URL with your Lacework agent server URL. For more information, see Agent Server URL.
    kubectl create secret generic lacework-config --from-literal config.json='{"tokens":{"AccessToken":"YOUR_ACCESS_TOKEN"}, "serverurl":"SERVER_URL", "tags":{"Env":"k8s", "KubernetesCluster":"CLUSTER_NAME"}}' --from-literal syscall_config.yaml=""

    You should see the message secret/lacework-config created if the secret is created successfully.

  4. Instruct the Kubernetes orchestrator to deploy an agent on all nodes in the cluster, including the master. To change the CPU and memory limits, see Change Agent Resource Installation Limits on K8s Environments.

    kubectl create -f lacework-k8s.yaml

    You should see the message daemonset.apps/lacework-agent created if the DaemonSet is created successfully.

  5. After you install the agent, it takes 10 to 15 minutes for the agent data to appear in the Lacework Console. In the Lacework Console, go to Workloads > Kubernetes and click Clusters to verify that the cluster on which you installed the agent is displayed. If the cluster is not displayed, see How Lacework Derives the Kubernetes Cluster Name.

  6. Repeat the above steps for each Kubernetes cluster.
    To customize FIM or add tags in a Kubernetes environment, edit the configuration section of the YAML file and push the revised lacework-k8s.yaml file to the cluster using the following command:

    kubectl replace -f lacework-k8s.yaml

    If you deploy into a dedicated namespace, create the namespace first and apply the file to it:

    kubectl create namespace lacework
    kubectl apply -f lacework-k8s.yaml -n lacework
  7. You can confirm the DaemonSet status using the following command:

    kubectl get ds

    or

    kubectl get pods --all-namespaces | grep lacework-agent

Deploy DaemonSet Using Terraform

Lacework maintains the terraform-kubernetes-agent module to create a Secret and DaemonSet for deploying the Lacework Datacollector Agent in a Kubernetes cluster.

If you are new to the Lacework Terraform Provider or Lacework Terraform Modules, read the Terraform for Lacework Overview article to learn the basics on how to configure the provider and more.

This topic assumes familiarity with the Terraform Provider for Kubernetes maintained by Hashicorp on the Terraform Registry.

DaemonSets are an easy way to deploy a Kubernetes pod onto every node in the cluster. This is useful for monitoring tools such as Lacework. You can use the DaemonSet method to deploy Lacework onto any Kubernetes cluster, including hosted versions such as EKS, AKS, and GKE.

Run Terraform

The following code snippet creates a Lacework Agent Access token with Terraform and then deploys the DaemonSet to the Kubernetes cluster being managed with Terraform.

info

Before running this code, ensure that the following settings match the configurations for your deployment.

  • config_path
  • config_context
  • lacework_server_url
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
    lacework = {
      source  = "lacework/lacework"
      version = "~> 1.0"
    }
  }
}

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "my-context"
}

provider "lacework" {
  # Configuration options
}

resource "kubernetes_namespace" "lacework" {
  metadata {
    name = "lacework"
  }
}

resource "lacework_agent_access_token" "k8s" {
  name        = "prod"
  description = "k8s deployment for production env"
}

module "lacework_k8s_datacollector" {
  source    = "lacework/agent/kubernetes"
  version   = "~> 1.0"
  namespace = "lacework"

  lacework_access_token = lacework_agent_access_token.k8s.token

  # For deployments in Europe, overwrite the Lacework agent server URL
  #lacework_server_url = "https://api.fra.lacework.net"

  # For deployments in Australia and New Zealand, overwrite the Lacework agent server URL
  #lacework_server_url = "https://auprodn1.agent.lacework.net"

  # Add the lacework_agent_tags argument to retrieve the cluster name in the Kubernetes dashboard
  lacework_agent_tags = { KubernetesCluster = "Name of the Kubernetes cluster" }

  pod_cpu_request = "200m"
  pod_mem_request = "512Mi"
  pod_cpu_limit   = "500m"
  pod_mem_limit   = "1024Mi"
}
note

Due to upstream breaking changes, version 1.0+ of this module discontinued support for version 1.x of the hashicorp/kubernetes provider. If 1.x of the hashicorp/kubernetes provider is required, pin this module's version to ~> 0.1.

  1. Open an editor and create a file called main.tf.
  2. Copy and paste the code snippet above into the main.tf file and save the file.
  3. Run terraform plan and review the changes that will be applied.
  4. Once satisfied with the changes that will be applied, run terraform apply -auto-approve to execute Terraform.

Validate the Changes

After Terraform executes, you can use kubectl to validate the DaemonSet is deployed successfully:

kubectl get pods -l name=lacework -o=wide --all-namespaces

Install in gVisor on Kubernetes

gVisor is an application kernel written in Go that implements a substantial portion of the Linux system call interface. It provides an additional layer of isolation between running applications and the host operating system. gVisor provides a virtualized environment in order to sandbox containers. The system interfaces normally implemented by the host kernel are moved into a distinct, per-sandbox application kernel in order to minimize the risk of a container escape exploit.

Install in gVisor on a Kubernetes Cluster Using GKE Sandbox

  1. Set up gVisor on a Kubernetes cluster with GKE Sandbox using the steps described in Enabling GKE Sandbox.

  2. After all nodes are running correctly, create the Lacework agent and Google microservices as described in the following steps.

  3. Use the following steps to create the Lacework agent on the cluster:

    1. Download the lacework-cfg-k8s.yaml and lacework-k8s.yaml files using the instructions in Download Linux Agent Installer.

    2. Update the DaemonSet to include the proper node affinity and tolerations, as follows:

      template:
        metadata:
          labels:
            name: lacework
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: sandbox.gke.io/runtime
                    operator: In
                    values:
                    - gvisor
          tolerations:
          - effect: NoSchedule
            key: sandbox.gke.io/runtime
            operator: Equal
            value: gvisor
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
          - key: node-role.kubernetes.io/control-plane
            effect: NoSchedule
    3. Go to your home directory.

    4. Run

      sudo mkdir lw
      cd lw
    5. Create lacework-cfg-k8s.yaml and lacework-k8s.yaml files in the lw directory.

    6. Run these commands to create the Lacework agent:

      1. kubectl create namespace lacework
      2. kubectl create -f lacework-cfg-k8s.yaml -n lacework
      3. kubectl apply -f lacework-k8s.yaml -n lacework
      4. kubectl get ds -n lacework (This command shows the daemonsets created)
    7. The Lacework agent pod is now deployed and should be up and running. To confirm, run this command: kubectl get pods -n lacework -o wide

  4. After the Lacework agent pod is running, deploy microservices on the cluster using the steps in Migrating a Monolithic Website to Microservices on Google Kubernetes Engine.

  5. Verify your configuration using this command: kubectl get pods

Install in gVisor on a Kubernetes Cluster Using containerd

  1. Launch any Google Cloud instance (such as an Ubuntu instance).

  2. Configure the security group of the Google Cloud instance to allow traffic only to your IP address.

  3. Install the gcloud CLI on the instance and create a cluster with it.

  4. Install and configure containerd using the steps in Containerd Configuration.

  5. After containerd is installed successfully, update /etc/containerd/config.toml to register the gVisor runtime; a minimal sketch appears at the end of this section. Ensure containerd-shim-runsc-v1 is in ${PATH} or in the same directory as the containerd binary.

  6. After containerd is set up, create the Lacework agent and microservices pods.

    1. Go to your home directory.
    2. Run these commands:
      sudo mkdir lw
      cd lw
    3. Create the lacework-cfg-k8s.yaml and lacework-k8s.yaml files in the lw directory.
    4. Run these commands to create the Lacework agent:
      1. kubectl create namespace lacework
      2. kubectl create -f lacework-cfg-k8s.yaml -n lacework
      3. kubectl apply -f lacework-k8s.yaml -n lacework
      4. kubectl get ds -n lacework (This command shows the daemonsets created)
    5. The Lacework agent pod is now deployed and should be up and running. To confirm, run this command: kubectl get pods -n lacework -o wide.
  7. After the Lacework pod is running, deploy microservices on the cluster.

  8. Verify your configuration using this command: kubectl get pods
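
The /etc/containerd/config.toml change mentioned in step 5 registers the gVisor runtime with containerd's CRI plugin. A minimal sketch, assuming the version 2 configuration format and the containerd-shim-runsc-v1 shim installed per the gVisor documentation:

  # /etc/containerd/config.toml
  version = 2

  # Register gVisor (runsc) as an additional CRI runtime
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
    runtime_type = "io.containerd.runsc.v1"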