How to securely send AWS security logs to on-premises?
There might be a scenario in which you would like to forward AWS security logs such as CloudTrail, AWS Config, GuardDuty, and VPC Flow Logs to an on-premises SIEM tool for visibility and analysis. This post covers:
i) AWS CloudTrail
ii) AWS GuardDuty
iii) OS security and audit logs
If you have enabled AWS Control Tower in your AWS Organization, it creates two AWS accounts for you:
a) Security-Audit: Manages CloudTrail (organization level/multi-region) and AWS Config to record resource timelines and track configuration changes.
b) Log-Archive: Stores CloudTrail and Config logs in a centralized S3 bucket in a locked-down account.
i) CloudTrail logs to on-premises/Splunk:
Prerequisites:
- Create an IAM user (assuming the SIEM doesn’t support IAM roles anywhere in this use case).
- Attach a custom policy that grants access to the specific S3 (Get/List) and SQS (receive/delete) resources only (see the policy sketch after this list).
- Generate static IAM credentials for that user.
- Assuming Splunk is the on-premises SIEM tool: configure SQS-based S3 inputs for the Splunk Add-on for AWS (https://docs.splunk.com/Documentation/AddOns/released/AWS/SQS-basedS3).
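A minimal sketch of that least-privilege policy, created with boto3. The bucket name, queue ARN, and policy name are placeholders, and you should confirm the exact SQS actions your add-on needs against its documentation:

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical resource names -- replace with your log bucket and queue.
LOG_BUCKET_ARN = "arn:aws:s3:::org-cloudtrail-logs"
QUEUE_ARN = "arn:aws:sqs:us-east-1:111122223333:cloudtrail-notify"

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Read-only access to the log bucket and its objects.
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": [LOG_BUCKET_ARN, f"{LOG_BUCKET_ARN}/*"],
        },
        {   # Consume-only access to the notification queue.
            "Effect": "Allow",
            "Action": [
                "sqs:ReceiveMessage",
                "sqs:DeleteMessage",
                "sqs:ChangeMessageVisibility",
                "sqs:GetQueueAttributes",
                "sqs:GetQueueUrl",
            ],
            "Resource": QUEUE_ARN,
        },
    ],
}

iam.create_policy(
    PolicyName="siem-s3-sqs-readonly",  # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)
```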
How it works:
- Whenever a new log file is added to the S3 bucket, an event notification is triggered. You set up this event-based notification under the S3 bucket’s properties.
- This event is sent to the configured SQS queue, indicating that new data is available in the S3 bucket.
- Your on-premises Splunk instance is configured to monitor the SQS queue for incoming messages.
- Splunk uses its AWS S3 input capability to pull data directly from the S3 bucket.
- Splunk uses the information from the SQS message to determine which log files in the S3 bucket need to be ingested.
- Splunk ingests the log data retrieved from S3 and then indexes it, making it available for search and analysis within Splunk.
- Set up an SQS dead-letter queue (DLQ). This mechanism handles events or data that couldn’t be ingested or processed successfully. A configuration sketch for the notification and the DLQ follows this list.
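A minimal sketch of wiring this up with boto3, assuming hypothetical bucket and queue names. Note that the SQS queue also needs a queue policy allowing s3.amazonaws.com to send messages, which is omitted here:

```python
import json
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

BUCKET = "org-cloudtrail-logs"  # hypothetical centralized log bucket
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/cloudtrail-notify"
QUEUE_ARN = "arn:aws:sqs:us-east-1:111122223333:cloudtrail-notify"
DLQ_ARN = "arn:aws:sqs:us-east-1:111122223333:cloudtrail-notify-dlq"

# Notify the queue whenever a new CloudTrail object lands in the bucket.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": QUEUE_ARN,
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "prefix", "Value": "AWSLogs/"}]}
                },
            }
        ]
    },
)

# Move messages that fail processing 5 times to the dead-letter queue.
sqs.set_queue_attributes(
    QueueUrl=QUEUE_URL,
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": DLQ_ARN, "maxReceiveCount": "5"}
        )
    },
)
```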
Other Options:
- Amazon Kinesis Data Firehose can push data from AWS to an on-premises Splunk endpoint. Be aware of the data egress traffic over the IPsec tunnel, as it might impact your existing traffic. You can also use Splunk public endpoints and a NAT gateway to egress traffic over secure transport.
- A Lambda function can be invoked via S3 event notification if you need to filter out logs before sending them to on-premises (see the sketch below).
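As a sketch of that filtering idea: the handler below reads each new CloudTrail object, drops read-only API events to cut ingest volume, and writes the result to a separate prefix that the SIEM input would point at. The filter rule and the "filtered/" prefix are illustrative assumptions:

```python
import gzip
import json
import urllib.parse
import boto3

s3 = boto3.client("s3")

FILTERED_PREFIX = "filtered/"  # hypothetical prefix the SIEM ingests from

def handler(event, context):
    """Invoked by S3 event notifications for new CloudTrail log objects."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys in event notifications are URL-encoded.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # CloudTrail delivers logs as gzipped JSON.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        trail = json.loads(gzip.decompress(body))

        # Example filter: keep only mutating (non read-only) API calls.
        kept = [r for r in trail.get("Records", []) if not r.get("readOnly", False)]
        if not kept:
            continue

        # Writing under a different prefix avoids re-triggering the notification.
        filtered = gzip.compress(json.dumps({"Records": kept}).encode())
        s3.put_object(Bucket=bucket, Key=FILTERED_PREFIX + key, Body=filtered)
```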
ii) Send GuardDuty logs to on-premises (SIEM):
Steps:
- GuardDuty sends all findings to a centralized S3 bucket.
- S3 triggers a Lambda function via event-based notification.
- The Lambda function processes and refines the GuardDuty findings (see the sketch after these steps).
- The refined findings are sent to Amazon Kinesis Data Firehose.
- Kinesis Data Firehose delivers the data to the on-premises Splunk instance using the HTTP Event Collector (HEC).
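A minimal sketch of that Lambda, assuming findings are exported to S3 as gzipped, newline-delimited JSON and that a Firehose delivery stream (hypothetically named guardduty-to-splunk) is already configured with a Splunk HEC destination:

```python
import gzip
import json
import urllib.parse
import boto3

s3 = boto3.client("s3")
firehose = boto3.client("firehose")

STREAM_NAME = "guardduty-to-splunk"  # hypothetical delivery stream name

def handler(event, context):
    """Invoked by S3 event notifications for exported GuardDuty findings."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        if key.endswith(".gz"):
            body = gzip.decompress(body)

        # Refine: forward only the fields the SIEM actually needs (example).
        for line in body.decode().splitlines():
            finding = json.loads(line)
            refined = {
                "id": finding.get("id"),
                "type": finding.get("type"),
                "severity": finding.get("severity"),
                "accountId": finding.get("accountId"),
                "region": finding.get("region"),
                "updatedAt": finding.get("updatedAt"),
            }
            firehose.put_record(
                DeliveryStreamName=STREAM_NAME,
                Record={"Data": (json.dumps(refined) + "\n").encode()},
            )
```

For higher volumes, batching with put_record_batch (up to 500 records per call) is cheaper than one put_record per finding.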
VPC Flow Logs/AWS Config -> These can use the same method as the CloudTrail logs. If the solution is not on-premises, you can also provide access via a cross-account IAM role (for example, if a third party or your own solution is hosted in another AWS account).
iii) OS security/audit logs to on-premises ->
Option 1) Install the CloudWatch agent as part of your bootstrap script or Packer image on EC2 instances. You can forward these logs to a CloudWatch Logs log group, and the on-premises SIEM can pull them from CloudWatch. A pull-side sketch follows.
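One way the pull side could look, as a boto3 sketch against a hypothetical log group name. A real SIEM would use its own AWS input (e.g. the Splunk Add-on’s CloudWatch Logs input) rather than a script like this:

```python
import time
import boto3

logs = boto3.client("logs")  # run with read-only IAM credentials

LOG_GROUP = "/ec2/os-audit-logs"  # hypothetical log group name

def pull_new_events(start_ms):
    """Fetch events newer than start_ms and hand them to the SIEM."""
    paginator = logs.get_paginator("filter_log_events")
    latest = start_ms
    for page in paginator.paginate(logGroupName=LOG_GROUP, startTime=start_ms):
        for event in page["events"]:
            print(event["message"])  # replace with your SIEM's ingest call
            latest = max(latest, event["timestamp"] + 1)
    return latest

cursor = int(time.time() * 1000) - 60_000  # start one minute back
while True:
    cursor = pull_new_events(cursor)
    time.sleep(60)
```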
Option 2) You can also install the Splunk universal forwarder agent via bootstrap on EC2 and forward logs to the Splunk endpoint directly. The only issue is that you might not be able to retain logs in the cloud for longer retention.
Log Monitoring:
- Set up an alert if there is no log ingestion from the cloud to on-premises. You can simply count the number of events received within a time range.
- Monitor the SQS "messages visible" metric (ApproximateNumberOfMessagesVisible), as this queue grows during any interruption (see the alarm sketch after this list).
- Monitor the SQS dead-letter queue and check/remediate failures.
- You can also set up anomaly alerts based on your regular usage.
- Forward Lambda/Firehose logs to CloudWatch/S3 and set up alerting on errors reaching the on-premises Splunk endpoint or on Lambda function errors.
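As a sketch, a backlog alarm on that SQS metric might look like this with boto3 (queue name, threshold, and SNS topic are placeholders):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm if the ingest queue backs up -- thresholds are illustrative.
cloudwatch.put_metric_alarm(
    AlarmName="siem-sqs-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "cloudtrail-notify"}],
    Statistic="Maximum",
    Period=300,               # 5-minute periods
    EvaluationPeriods=3,      # sustained for 15 minutes
    Threshold=1000,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:siem-alerts"],
)
```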
Summary: Always try to keep solutions as simple as possible. Every new service adds complexity, and you will want to monitor it as well. Set up minimal, useful alerting, and only where you intend to take action on those alerts.
Create all services with a security mindset, such as:
- Create S3 buckets that enforce secure transport (HTTPS) and encrypt objects at rest and in transit (see the sketch after this list).
- Create private VPC interface endpoints for connectivity from on-premises.
- Use strict security groups (ingress/egress).
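A sketch of the first point with boto3, using a hypothetical bucket name: a bucket policy that denies non-TLS requests, plus default encryption at rest (SSE-S3 shown; SSE-KMS gives more key control):

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "org-cloudtrail-logs"  # hypothetical log bucket name

# Deny any request that doesn't arrive over TLS (aws:SecureTransport).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))

# Default encryption at rest for all new objects.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)
```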