How to Implement AWS Security Best Practices?

Sudhir Kumar
12 min read · Jun 19, 2021

This article highlights a few important points regarding AWS security. Applying security best practices is the most important aspect of running workloads in the cloud. We can apply the NIST Cybersecurity Framework's five functions: Identify, Protect, Detect, Respond, Recover.

Always use a defense-in-depth approach, i.e. protection at each layer.

Example →

  • CloudFront with AWS WAF, i.e. protection at the edge.
  • Load balancer layer (Layer 4/Layer 7) with AWS Shield Advanced and a web application firewall.
  • Use security groups in combination with NACLs.
  • EC2 instances with strict ingress/egress security groups.
  • Security at the application/OS level, e.g. iptables or a HIDS (Host Intrusion Detection System).
  • Forward all logs to a SIEM and create dashboards and alerts for anomalies based on suspicious activity.

The reason is simple: if one layer is breached, the remaining layers will thwart the attempt.

Strong detection and response capabilities can contain security incidents.

Topics covered below:

IAM (Identity and Access Management):

  • Always enable MFA for the root user and store the MFA device in a safe place (or shred it afterwards). Use a cross-account Organization IAM role from the master billing account ("break glass") to log in to other accounts. Require MFA to assume this IAM role, and alert the security team via CloudTrail or a centralised SIEM tool.
  • Do not create IAM secret/access keys for the root user. The root user shouldn't be used for any resource deployment.
  • Use AWS Single Sign-On with your external IdP for login purposes. If you don't have an external identity provider, look into Directory Service as well.
  • Use AWS STS via AWS Single Sign-On, or any other in-house application that can call AssumeRole against AWS to request temporary credentials, or look into IAM Roles Anywhere.
  • Enforce a strong IAM password policy.
  • Create alerts on root account logins, as root is not needed except in a few rare cases such as account closure. Alerts can be triggered via CloudTrail -> SNS topic -> Email/SMS, or via centralized SIEM/CSPM tools.
  • Do not create IAM users (unless absolutely required). Apply IP allowlisting with the principle of least privilege (PoLP). Do not use wildcards.
  • Use IAM roles and attach instance profiles to EC2 instances in order to interact with AWS services.
  • IAM users with console sign-in access must have MFA enabled. Monitor console login events without MFA.
  • Set IAM permissions boundaries.
  • For static IAM users, you can also require MFA to generate temporary credentials and set a duration on them.
  • Rotate IAM keys every 90 days. Check reports using the AWS Organization Trusted Advisor report. Create alerts using centralized CSPM/SIEM tools.
  • Set up AWS Config compliance rules to notify about IAM key rotation.
  • Use AWS IAM Access Analyzer.
  • Use AWS Systems Manager: create a Systems Manager activation, attach an IAM role, and install/activate the SSM agent with the activation code (on-premises virtual machine). This avoids creating an IAM user for an on-premises VM; temporary credentials are generated via STS instead.
  • Do not use wildcards in IAM policies. Grant only the exact permissions needed, i.e. a least-privilege approach.
  • NEW/Strongly recommended: use "IAM Roles Anywhere", recently released by AWS. We can use it from on-premises environments →
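The least-privilege points above can be sketched as a policy document. A minimal example, assuming a hypothetical bucket name and a read-only use case; note the exact actions (no wildcards) and the MFA condition:

```python
import json

# A minimal sketch of a least-privilege IAM policy: exact actions, no
# wildcards, and MFA required. Bucket name and actions are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSpecificS3ReadWithMFA",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-app-bucket",
                "arn:aws:s3:::example-app-bucket/*",
            ],
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }
    ],
}

# Sanity check: no wildcard actions slipped in.
assert all("*" not in a for s in policy["Statement"] for a in s["Action"])
print(json.dumps(policy, indent=2))
```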

In case of a security compromise:

  • Check CloudTrail logs to see what actions were performed with the access key ID, revoke the IAM role's active sessions, and then rotate the IAM keys.
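Revoking sessions works by attaching a deny-all policy conditioned on token issue time: any session whose token was issued before the cutoff loses all permissions. A sketch of such a policy (the timestamp here is generated at run time purely for illustration):

```python
import json
from datetime import datetime, timezone

# Deny everything for sessions issued before the cutoff time.
cutoff = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
revoke_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {"DateLessThan": {"aws:TokenIssueTime": cutoff}},
    }],
}
print(json.dumps(revoke_policy, indent=2))
```

Sessions created after the cutoff (e.g. after you rotate the keys) are unaffected, since their token issue time no longer matches the condition.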

S3 (Simple Storage Service):

  • Enable the "Block Public Access" settings at the account level. If you do need public buckets, use a dedicated account to host public buckets only.
  • Always create an S3 bucket policy.
  • Use AES-256 encryption (the bare minimum, SSE-S3) so objects are encrypted when stored in S3. Best practice is SSE-KMS, which puts more stringent security constraints on object access, since callers also need permission on the KMS key.
  • Enforce HTTPS via the "aws:SecureTransport" condition in the bucket policy.
  • If you have Direct Connect, make sure all S3 traffic goes over DX and objects are accessed over HTTPS. This is best practice for accessing critical data.
  • Use public IPs over Direct Connect with an edge firewall on-premises to reach public services such as S3, or use private IPs over DX with an interface VPC endpoint.
  • Allowlist IP addresses in bucket policies to control who can access an S3 bucket.
  • Enable MFA Delete for compliance-related buckets, so that no one can delete critical data without MFA.
  • Check out S3 Glacier Vault Lock, i.e. WORM (write once, read many), for compliance purposes.
  • Save costs by defining lifecycle policies from S3 to Glacier, or use Intelligent-Tiering.
  • For critical buckets, you can enable S3 object-level logging and monitor it with CloudTrail.
  • For DR purposes, you can also enable S3 versioning with bucket replication to another region. Make sure to comply with GDPR policies before syncing data to regions such as Europe.
  • Use proper tagging to identify public/private buckets.
  • Monitor S3 bucket storage and total object counts via CloudWatch.
  • Monitor bucket policy changes for critical buckets, e.g. with Lambda/SNS notifications.
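Two of the bullets above (enforcing HTTPS and SSE-KMS) can be combined in a single bucket policy. A sketch, with a placeholder bucket name:

```python
import json

bucket = "example-compliance-bucket"  # placeholder name
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Reject any request not made over HTTPS
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
        {   # Reject uploads that are not encrypted with SSE-KMS
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
            },
        },
    ],
}
print(json.dumps(bucket_policy, indent=2))
```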

ELB (Elastic Load Balancing):

  • Enable access logging for public load balancers and push the logs to a centralised S3 bucket. One method is to build an ELK stack that fetches the logs from the S3 bucket using Filebeat and SQS.
  • For the most important load balancers, turn on deletion protection to protect them from accidental deletion.
  • Enable cross-zone load balancing and choose the draining policy carefully (while deregistering an EC2 instance, draining lets in-flight sessions complete and stops sending new requests to that instance).
  • For mission-critical applications, attach AWS WAF (Web Application Firewall) to the ALB.
  • Choose and apply the latest cipher suite (security policy) for SSL/TLS negotiation.
  • Protect your load balancers against DDoS attacks with AWS Shield Advanced.
  • NEW: apply security groups at the Network Load Balancer level -
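The logging and deletion-protection points above map to load balancer attributes. A sketch of the attribute list you might pass to the ELBv2 `modify_load_balancer_attributes` API (the boto3 call itself is omitted; the bucket name is a placeholder):

```python
# Sketch of attributes enabling access logging and deletion protection
# for an ALB. "example-central-lb-logs" is a placeholder bucket name.
alb_attributes = [
    {"Key": "access_logs.s3.enabled", "Value": "true"},
    {"Key": "access_logs.s3.bucket", "Value": "example-central-lb-logs"},
    {"Key": "deletion_protection.enabled", "Value": "true"},
]
# e.g. elbv2.modify_load_balancer_attributes(
#          LoadBalancerArn=arn, Attributes=alb_attributes)
```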

VPC and VPC Endpoints/Endpoint Services and PrivateLink:

  • Gateway endpoints are used from within a VPC via route table entries; interface endpoints reach AWS public services via PrivateLink. Neither requires an Internet Gateway, NAT gateway, public IP, or DX to reach these services.
  • Use VPC endpoints to lower NAT gateway egress costs and improve your security posture, as traffic stays on the AWS network rather than traversing the internet.
  • Use Network Load Balancers for higher throughput and static IP addresses, and create VPC endpoint services to access an NLB from other AWS accounts. As a security measure, you can allowlist account principals on the endpoint service. Older NLBs don't have their own security groups and rely on the backend/target security groups (though, as noted above, NLB security groups are now available).
  • Use multi-AZ deployments, i.e. create public and private subnets across multiple AZs. Deploy VPCs in at least two regions to create multi-region applications.
  • Use NACLs only to block known insecure ports: they are unmanageable at scale (hundreds of VPCs), and because they are stateless you must explicitly allow both inbound and outbound ephemeral ports, which is challenging.
  • Use AWS Firewall Manager to manage security groups at scale (at the AWS Organization level). Firewall Manager can monitor and roll back overly permissive security groups, insecure ports, etc.
  • VPC endpoint services are preferable to VPC peering connections because you expose only the needed endpoint from one VPC to another. Avoid peering, but if there is no other option, use strict ingress security groups/NACLs in both VPCs.
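Endpoints also accept their own policy, so you can restrict what a VPC is allowed to reach through them. A sketch of a restrictive S3 gateway-endpoint policy, with a placeholder bucket name:

```python
import json

# Sketch of an S3 gateway-endpoint policy: workloads in the VPC can only
# reach the named bucket through this endpoint. Bucket is a placeholder.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-app-bucket/*",
    }],
}
print(json.dumps(endpoint_policy, indent=2))
```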

RDS (Relational Database Service):

  • When deploying RDS, use private subnets and do not allow public access. You can reach it over private subnets via Direct Connect from your organisation.
  • If there is a strong business case for external vendors to access RDS, use an IPsec tunnel from the vendor side to reach RDS privately.
  • Make sure to encrypt traffic using SSL/TLS, and allowlist source IPs using security groups.

IAM database authentication provides the following benefits:

  • Network traffic to and from the database is encrypted using Secure Sockets Layer (SSL).
  • You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance.
  • For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for greater security.
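IAM database authentication is granted through the `rds-db:connect` action. A sketch of the policy a role would need (region, account ID, DB resource ID, and database user name are all placeholders):

```python
import json

# Sketch of an IAM policy allowing one role to request an RDS auth token
# for one database user. All identifiers below are placeholders.
rds_connect_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "rds-db:connect",
        "Resource": (
            "arn:aws:rds-db:eu-west-1:111122223333:"
            "dbuser:db-ABCDEFGHIJKL01234/app_user"
        ),
    }],
}
print(json.dumps(rds_connect_policy, indent=2))
```

The application then calls STS for role credentials and requests a short-lived auth token instead of storing a database password.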

AWS Security Services (GuardDuty/VPC Flow Logs/CloudTrail/AWS Config/CloudWatch):

  • For observability and monitoring across multiple AWS accounts, enable GuardDuty, VPC Flow Logs, CloudTrail, AWS Config, and CloudWatch. The best architecture is to delegate an administrator account and forward all logs to an S3 bucket in a separate, locked-down account. Use an ELK stack or SIEM tool to visualise the data, create dashboards, and set alerts based on events.
  • Enable CloudTrail for all regions across all accounts and send logs to centralized S3 buckets. (If you deploy accounts using Control Tower/Account Factory, CloudTrail is enabled automatically, with built-in detective/preventive guardrails.)
  • Create an aggregator for AWS Config and a delegated GuardDuty administrator account to visualize all activity from one account.
  • Set CloudWatch alarms based on events in CloudTrail logs. CloudWatch can filter events and act on them with Lambda/SNS and other supported triggers. This helps achieve automated protections for AWS accounts.
  • Set up cross-account/cross-region CloudWatch dashboards in a central account to view all metrics from a centralised location.
  • You can leverage Amazon Managed Prometheus and Amazon Managed Grafana, using remote_write from your own Prometheus to send metrics to these managed services. You can also use CloudWatch as a data source and the Prometheus Pushgateway for custom metrics.
  • It's useful to have synthetic monitoring from critical VPCs to measure ping/traceroute/DNS/latency metrics.
  • VPC Flow Logs record IP traffic metadata with accept/deny decisions, excluding payloads. You can feed the flow logs into a CSPM tool to visualize threat detections and connectivity to bad actors.
  • Analyze GuardDuty alerts with a Lambda function in the administrator account and send logs to an on-premises SIEM using a Kinesis stream and HTTP event collector. Trigger high-priority alerts via the SIEM.
  • Enable GuardDuty S3 protection: it monitors object-level API operations to identify security risks to data, analysing CloudTrail management and S3 data events. Reference ->
  • NEW: enable GuardDuty Malware Protection for EBS volumes. Reference →
  • There are many more services worth looking into based on business needs, such as:

Amazon Macie: uses machine learning to discover sensitive data in S3.

Amazon Inspector: a security assessment service for vulnerabilities, etc.

Amazon Detective: analyse and identify the root cause of potential security issues by backtracking.

AWS Firewall Manager: manage WAF rules, security groups, AWS Shield, and Route 53 Resolver DNS Firewall at scale, i.e. across multiple VPCs in your AWS organization.

AWS KMS: create and manage cryptographic keys. A managed service that provides encryption to applications and is integrated with AWS services. All API calls are logged with CloudTrail for auditing/compliance.

AWS Security Hub: a compliance overview. Mitigate alerts based on a compliance framework, e.g. the AWS CIS Benchmark, as a starting point.

and a few others… Check out the link below:

AWS Security Reference Architecture
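The GuardDuty alerting described above is usually wired up through an EventBridge rule. A sketch of an event pattern that matches only higher-severity findings (the severity threshold is illustrative):

```python
import json

# Sketch of an EventBridge event pattern that matches GuardDuty findings
# with severity 7.0 or higher, suitable as a trigger for a Lambda/Kinesis
# forwarder. The threshold of 7 is an illustrative choice.
pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"severity": [{"numeric": [">=", 7]}]},
}
print(json.dumps(pattern))
```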


Route 53 (DNS):

  • For cross-account private hosted zone DNS resolution: create a VPC association authorisation in the account that owns the hosted zone, associate the participant account's VPC with the hosted zone, and then delete the VPC association authorisation. Afterwards, you can resolve private hosted zone records from one account in another.
  • For resolution across multiple accounts and on-premises, use Route 53 Resolver and share it with the (participant) accounts. This is best suited for hybrid environments, i.e. resolution from cloud to on-premises and vice versa.
  • Be careful with public hosted zones: any record you create there resolves publicly. It's best to use private hosted zones for all internal DNS resolution.
  • If you are removing R53 public hosted zones while another DNS provider still delegates NS records to the R53 delegation set, make sure to remove those NS records, or the zone will be vulnerable to a hijacking attack (subdomain takeover). E.g. (an Azure example, but the same principle applies to R53/AWS).

AWS Config:

  • AWS Config records all changes to AWS resources. Any security group/EC2/RDS change is recorded, and you can see the full timeline around any security event.
  • Check the compliant and non-compliant sections. E.g. we can create an alert to check whether an IAM access/secret key is older than 90 days. When a resource transitions from COMPLIANT to NON_COMPLIANT, it can trigger Lambda functions and events.
  • Configure an AWS Config aggregator; it can serve as your multi-account compliance/governance dashboard for all accounts.
  • You can run advanced queries with the AWS Config aggregator to enumerate resources, e.g. total EC2 instances/RDS instances, and other complex queries:
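As an illustration, an advanced query of this kind might count running EC2 instances per region (a sketch; the fields follow the AWS Config resource schema):

```sql
-- Example AWS Config advanced query: running EC2 instances by region
SELECT
  awsRegion,
  COUNT(*)
WHERE
  resourceType = 'AWS::EC2::Instance'
  AND configuration.state.name = 'running'
GROUP BY
  awsRegion
```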


EC2:

  • Do not deploy EC2 instances in public subnets. Use a three-tier architecture: a load balancer in front with SSL termination, and EC2 instances (web/app) plus RDS in private subnets.
  • Use an IAM instance profile with a role attached to the EC2 instance to access AWS resources via STS. Always grant only the exact actions needed; do not use wildcards.
  • Use IMDSv2 when deploying EC2 instances. IMDSv2 is a more secure version of the metadata service that makes it harder to steal the IAM role from an EC2 instance unless the attacker has full RCE (remote code execution) on it. By default, EC2 instances may still allow the original metadata service (IMDSv1), which means that if an attacker finds an instance running a proxy or WAF, or finds an SSRF vulnerability, they can likely steal the instance's IAM role.
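Requiring IMDSv2 is done through the instance metadata options. A sketch of the `MetadataOptions` you would pass to `run_instances` or `modify_instance_metadata_options` (boto3 call omitted):

```python
# Sketch of EC2 instance metadata options enforcing IMDSv2.
metadata_options = {
    "HttpTokens": "required",      # reject IMDSv1 requests without a session token
    "HttpPutResponseHopLimit": 1,  # keep tokens from leaving the instance via proxies
    "HttpEndpoint": "enabled",     # metadata service itself stays reachable
}
# e.g. ec2.run_instances(..., MetadataOptions=metadata_options)
```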

AWS Multi-Account Best Practices:

  • Enable consolidated billing with AWS Organizations.
  • Create multiple OUs and use service control policies at the organization level. A few examples: restrict access to specific regions, mandate tagging, enforce IMDSv2, allow only approved services, and so on.
  • Enable AWS SSO and integrate it with an external IdP to log in to multiple AWS accounts. Manage permission sets and attach groups to AWS accounts. Alert on critical IAM activity using CloudTrail via a centralized SIEM tool.
  • In the master AWS SSO account, make sure an account cannot be closed with a single API request (use a service control policy to restrict it, and trigger an alert for such actions via CloudTrail).
  • Bootstrap AWS accounts using Terraform modules or any IaC platform to create VPCs/subnets/routes, etc.
  • Create IT-approved AMIs (with Packer/CIS benchmarking pipelines) and share them with all AWS accounts. Block all other AMIs and Marketplace usage.
  • Create alerts using Lambda functions and CloudWatch Events across all accounts, or via a centralised SIEM. Example: if a user generates secret/access keys, the relevant EventName is recorded in CloudTrail, which can trigger a notification to the incident team via CloudWatch.
  • Implement Control Tower and migrate existing accounts. Control Tower has many benefits, including built-in security guardrails and out-of-the-box delivery of CloudTrail and Config logs to a separate locked-down account.
  • Use JIT (just-in-time) access for critical permissions such as superadmin. Example: assign a group with permission sets to an AWS account via SSO, and use an IAM utility (such as SailPoint/Okta) to add users to that group through a full workflow and approval process. Once approved, the user is automatically added to the permission-set group, and access is revoked after a few hours.
  • AWS Trusted Advisor reports: you can create Trusted Advisor dashboards with QuickSight and Athena. This is very useful and gives insight into your current environment from a security/cost perspective.
  • Use Direct Connect MACsec encryption for dedicated 10G/100G connections.
  • Use tfsec or a similar tool against your Terraform code before applying it, or integrate it into the CI/CD pipeline. It scans Terraform code and flags potential security issues.
  • Check out the "Security by Design" whitepaper by AWS.
  • Enable ECR repo scanning (automated/continuous) and EC2 scanning (via the SSM agent) with AWS Inspector 2.0 at the organization level, with a single pane of glass. You can integrate your CSPM tool with AWS Inspector.
  • Use AWS RSS feeds to get real-time alerts.
  • Use CloudFormation StackSets with AWS Organizations so that new accounts automatically get all security-related and other cross-account IAM roles.
  • Implement container scanning (static and dynamic), i.e. before and after container deployment. You may need third-party tools (such as Aqua/Twistlock).
  • If using Terraform as your IaC tool, look into enabling drift detection. It continuously checks infrastructure state to detect and notify operators of changes, minimising risk exposure, application downtime, and costs.
  • You will have to spend time either building cross-account Lambda functions around AWS services like Security Hub, or using open-source/third-party tools to assist with cloud security posture management, inventory, and VPC traffic analysis, and to create alerts for any discrepancy.
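Two of the SCP examples mentioned above (region restriction and enforcing IMDSv2) can be sketched in a single policy document. The region list and the service exclusions are illustrative:

```python
import json

# Sketch of a Service Control Policy combining two common guardrails.
# Approved regions and the global-service NotAction list are illustrative.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Deny API calls outside approved regions (global services exempt)
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "NotAction": ["iam:*", "organizations:*", "sts:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["eu-west-1", "eu-west-2"]
                }
            },
        },
        {   # Deny launching instances unless IMDSv2 tokens are required
            "Sid": "RequireImdsV2",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringNotEquals": {"ec2:MetadataHttpTokens": "required"}
            },
        },
    ],
}
print(json.dumps(scp, indent=2))
```

Attached to an OU, a policy like this applies to every account beneath it, which is what makes SCPs scale better than per-account IAM policies.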

AWS provides native security solutions with organization-level dashboards. Pick tools as per your organisation's requirements: multi-cloud posture management, UEBA (User and Entity Behaviour Analytics), threat analysis using traffic logs, inventory, and security frameworks. It's good to have a single pane of glass.




Sudhir Kumar

Working as a cloud lead/architect with a security mindset.