EKS Audit Log Integration Using Terraform
Overview
Lacework integrates with AWS to analyze EKS Audit Logs for monitoring EKS cluster security and configuration compliance. This topic describes how to integrate with AWS by running Lacework Terraform modules from any host supported by Terraform.
To integrate with Amazon EKS, Lacework recommends using guided configuration. The guided interface takes your input and generates a script that downloads and sets up all necessary Lacework CLI and Terraform components to create the integration non-interactively.
To use guided configuration:
- In the Lacework Console go to Settings > Integrations > Cloud accounts.
- Click + Add New.
- Click Amazon Web Services and select Guided configuration.
Alternatively, follow the steps in this topic to use the Lacework CLI to generate Terraform code. Or you can create the `main.tf` file manually and run Terraform from any supported host.
If you are new to the Lacework Terraform provider or Lacework Terraform modules, read Terraform for Lacework Overview to learn the basics of configuring the provider and more.
Resources Provisioned by Lacework Terraform Modules
To integrate AWS with Lacework, Lacework Terraform modules provision the following resources in the designated AWS account:
- IAM Cross-Account Role - A cross-account role is required to give Lacework access for assessments of cloud resource configurations and for analysis of CloudTrail events. The cross-account role will be given the following policy:
- Lacework Custom IAM Policy - A custom policy that provides Lacework read-only access to ingest EKS Audit Logs.
- SNS Topic - An SNS topic is required for all EKS Audit Log integrations. Terraform will create a new topic in the designated account.
- S3 Bucket - An S3 bucket is required for all EKS Audit Log integrations. Lacework will create a new bucket in the designated account.
- S3 Bucket Notification - An S3 bucket notification is required for all EKS Audit Log integrations. It notifies the SNS topic when a new object is created in the bucket.
- S3 Bucket Lifecycle rule - An S3 bucket lifecycle rule that specifies the number of days the logs are retained. Defaults to 180 days.
- S3 Bucket versioning - A resource for controlling versioning on an S3 bucket.
- Kinesis Firehose - A Kinesis Firehose is required for all EKS Audit Log integrations. Lacework will create a new Firehose in the designated account.
- IAM Lacework Firehose Role - A firehose role is required to assume the AWS firehose service role.
- Lacework Firehose IAM Policy - A firehose policy is required to allow the firehose to manage the contents of the S3 bucket.
- IAM Lacework CloudWatch Role - A CloudWatch role is required to assume the logs role for each region integrated.
- Lacework CloudWatch IAM Policy - A CloudWatch policy is required to allow the firehose to access the CloudWatch logs.
- CloudWatch Subscription Filter(s) - CloudWatch subscription filter(s) are required for each EKS cluster in order to notify when logs have been added to the CloudWatch Log group.
Requirements
- AWS Account Admin - The account used to run Terraform must have administrative privileges on every AWS account you intend to integrate with Lacework.
- AWS CLI - The Terraform provider for AWS leverages the configuration from the AWS CLI; it is recommended that the AWS CLI be installed and configured with API keys for the account being integrated.
- Lacework Administrator - A Lacework account with administrator privileges.
- Cross-account IAM role - The multi-region scenario below requires a cross-account IAM role that Lacework can use to access accounts across multiple regions.
- Lacework CLI - The Terraform configuration relies on the Lacework CLI. Before starting, install and configure the Lacework CLI.
- Terraform - `~> 0.15`, `~> 1.0`, or `~> 1.1`.
- Ensure that you are deploying the integration to a supported AWS region.
- Audit logging must be enabled on the clusters that you want to integrate. You can enable it via the AWS CLI using the following command:

```
aws eks --region <region> update-cluster-config --name <cluster_name> \
  --logging '{"clusterLogging":[{"types":["audit"],"enabled":true}]}'
```
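If the cluster itself is managed in Terraform, audit logging can instead be enabled through the `aws_eks_cluster` resource's `enabled_cluster_log_types` argument. A minimal sketch, assuming a cluster IAM role and subnets defined elsewhere (all names and IDs below are placeholders):

```hcl
# Sketch: enabling the "audit" control plane log type on a
# Terraform-managed EKS cluster. Names, role, and subnet IDs are placeholders.
resource "aws_eks_cluster" "example" {
  name     = "cluster-1"
  role_arn = aws_iam_role.cluster.arn # assumes a cluster IAM role defined elsewhere

  # Ship control plane audit logs to CloudWatch Logs, where the
  # Lacework CloudWatch subscription filters can pick them up.
  enabled_cluster_log_types = ["audit"]

  vpc_config {
    subnet_ids = ["subnet-aaaa1111", "subnet-bbbb2222"] # placeholders
  }
}
```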
Module Dependencies
The Lacework Terraform modules for AWS have the following dependencies, which are installed along with the Lacework module when you run the `terraform init` command:
For detailed information on these dependencies, visit Lacework on the Terraform Registry.
Deployment Scenarios
- Integrate EKS cluster(s) Audit Logs in a single region - This deployment scenario configures a new Lacework EKS Audit Log integration for clusters in a single AWS region.
- Integrate EKS cluster(s) Audit Logs across multiple regions - This deployment scenario configures a new Lacework EKS Audit Log integration for clusters across multiple AWS regions.
Scenario 1 - Integrate EKS Clusters Audit Logs in a Single Region
This scenario creates a new Lacework EKS Audit Log integration with a cross-account IAM role to provide Lacework access. This example targets clusters in a single AWS region.
Run the Lacework CLI
1. Run the following Lacework CLI command, replacing `YourRegion` with the AWS region in which the EKS clusters reside, such as `us-east-1`, and `cluster-1` and `cluster-2` with the names of the EKS clusters:

```
lacework generate k8s eks \
  --region_clusters YourRegion="cluster-1,cluster-2" \
  --noninteractive
```

The Terraform files are created in the `~/lacework/aws_eks_audit` directory.
2. Navigate to the `~/lacework/aws_eks_audit` directory.
3. Run `terraform plan` and review the changes that will be applied.
4. Once satisfied with the changes that will be applied, run `terraform apply` to execute Terraform.
To verify the integration, see Validate the Configuration.
If creating or modifying the `main.tf` file manually, you can use Terraform inputs to customize Lacework Terraform modules. See the EKS Audit Log Module Inputs for the complete list of module inputs.
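When writing `main.tf` by hand, the configuration generated for this scenario can be approximated as follows. This is a sketch: the module input names (`cloudwatch_regions`, `cluster_names`) and version constraint are assumptions based on the public `lacework/eks-audit-log/aws` module, so verify them against the EKS Audit Log Module Inputs page before applying.

```hcl
terraform {
  required_providers {
    lacework = {
      source = "lacework/lacework"
    }
  }
}

provider "aws" {
  region = "us-east-1" # the region containing the clusters
}

provider "lacework" {} # credentials come from the Lacework CLI configuration

# Sketch of a single-region EKS Audit Log integration.
module "aws_eks_audit_log" {
  source  = "lacework/eks-audit-log/aws"
  version = "~> 1.0" # pin to the version you have tested

  cloudwatch_regions = ["us-east-1"]               # assumed input name
  cluster_names      = ["cluster-1", "cluster-2"]  # assumed input name
}
```

After writing the file, run `terraform init`, `terraform plan`, and `terraform apply` as in the CLI-generated workflow.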
Scenario 2 - Integrate EKS Clusters Audit Logs Across Multiple Regions
This scenario creates a new Lacework EKS Audit Log integration with a cross-account IAM role to provide Lacework access. This example targets clusters across multiple AWS regions.
Run the Lacework CLI
1. Run the following Lacework CLI command. In your command, replace `YourRegion` and `YourRegion2` with the AWS regions in which the EKS clusters reside, such as `us-east-1`, and `cluster-1` and `cluster-2` with the names of the EKS clusters. Also replace `CrossAccountIAMRoleARN` and `CrossAccountIAMRoleExternalID` with the cross-account IAM role ARN and IAM role external ID for your Lacework role:

```
lacework generate k8s eks \
  --region_clusters YourRegion="cluster-1,cluster-2" \
  --region_clusters YourRegion2="cluster-1,cluster-2" \
  --existing_ca_iam_role_arn CrossAccountIAMRoleARN \
  --existing_ca_iam_role_external_id CrossAccountIAMRoleExternalID \
  --noninteractive
```

The Terraform files are created in the `~/lacework/aws_eks_audit` directory.
2. Navigate to the `~/lacework/aws_eks_audit` directory.
3. Run `terraform plan` and review the changes that will be applied.
4. Once satisfied with the changes that will be applied, run `terraform apply` to execute Terraform.
To verify the integration, see Validate the Configuration.
If creating or modifying the `main.tf` file manually, you can use Terraform inputs to customize Lacework Terraform modules. See the EKS Audit Log Module Inputs for the complete list of module inputs.
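A hand-written `main.tf` for this multi-region scenario might look like the sketch below. The input names for reusing an existing cross-account role (`use_existing_cross_account_iam_role`, `iam_role_arn`, `iam_role_external_id`) are assumptions about the `lacework/eks-audit-log/aws` module; confirm them against the EKS Audit Log Module Inputs documentation before applying.

```hcl
# Sketch of a multi-region EKS Audit Log integration that reuses an
# existing cross-account IAM role. Input names are assumptions; the
# ARN and external ID below are placeholders.
module "aws_eks_audit_log" {
  source  = "lacework/eks-audit-log/aws"
  version = "~> 1.0"

  cloudwatch_regions = ["us-east-1", "us-west-2"]
  cluster_names      = ["cluster-1", "cluster-2"]

  # Reuse the existing cross-account role instead of creating a new one.
  use_existing_cross_account_iam_role = true
  iam_role_arn         = "arn:aws:iam::123456789012:role/lacework-role" # placeholder
  iam_role_external_id = "CrossAccountIAMRoleExternalID"                # placeholder
}
```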
Validate the Configuration
To confirm that the Cloud Account integration is working, use the Lacework CLI or the Lacework Console as follows:
- To validate the integration using the CLI, run the `lacework cloud-account list` command. EKS cloud account integrations are listed as `AwsEksAudit`.
- To validate the integration using the Lacework Console, log in to your account and go to Settings > Integrations > Cloud accounts. If successful, your integration should appear in the integration list as type EKS Audit Log.
AWS Security Token Service Limitations
If using AWS Security Token Service (STS), the Lacework AWS module will fail due to limitations of STS. See AWS STS documentation for more information. For example, if using aws-vault, you must pass the `--no-session` flag.
For more information, see aws-vault documentation.