13 best practices before deploying AWS S3 buckets in production

Sudhir Kumar
6 min read · Oct 7, 2022

AWS S3 stands for Simple Storage Service, an object storage service that stores data as objects. It’s designed to provide 99.999999999% (11 nines) durability and 99.99% availability of objects over a given year.

In the past there have been many security incidents involving S3 data exposed to the public; since then, AWS has made a ton of changes and added more security features.

A few security incidents related to S3 breaches: https://github.com/nagwww/s3-leaks

AWS S3 best security practices:

  1. Bucket Policy (Implement least privilege access)
  2. Encryption of data at rest and transit
  3. MFA Delete and S3 object level locking
  4. S3 Lifecycle Policies
  5. S3 Object logging
  6. Versioning (keeping multiple variants of an object in the same bucket)
  7. S3 Monitoring
  8. S3 Pre-signed URLs
  9. Enable S3 GuardDuty protection
  10. S3 Macie (Sensitive data discovery and protection at scale)
  11. S3 Gateway Endpoint
  12. Monitor S3 bucket policy alterations
  13. Other tips

1. Bucket Policy (Implement least privilege access)

  • Use S3 bucket policies and allow only the required operations (get/put, etc.).
  • You can also allowlist IP addresses and ARNs from other accounts that need access to the S3 bucket in your account.
  • Enable the Block Public Access settings at the account level. This is a preventive measure that restricts buckets and prevents anonymous access.
  • Make sure you do not use a wildcard identity such as Principal “*” (which effectively means “anyone”) or allow a wildcard action “*” (which effectively allows any action on the Amazon S3 bucket).
  • If a bucket needs to be accessed by IAM users, write IAM user policies that specify which users can access specific buckets and objects. IAM policies provide a programmatic way to manage Amazon S3 permissions for multiple users. For more information about creating and testing user policies, see the AWS Policy Generator and IAM Policy Simulator.
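As a rough sketch, a least-privilege bucket policy might look like the following. The bucket name, account ID, and role name are placeholder values, and the `aws` command is commented out because it needs credentials for the bucket owner’s account:

```shell
# Least-privilege policy sketch: one role, only GetObject/PutObject, one bucket.
# DOC-EXAMPLE-BUCKET, the account ID, and the role name are placeholders.
cat > /tmp/least-privilege-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAppRoleReadWrite",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111122223333:role/app-role"},
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
    }
  ]
}
EOF
# Sanity-check the JSON locally before applying it
python3 -m json.tool /tmp/least-privilege-policy.json > /dev/null && echo "policy JSON OK"
# Apply it (requires credentials):
# aws s3api put-bucket-policy --bucket DOC-EXAMPLE-BUCKET \
#     --policy file:///tmp/least-privilege-policy.json
```

Note there is no `"*"` principal and no `"s3:*"` action: each statement names exactly who may do exactly what.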

AWS Doc → https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html

** Highly recommended: use IAM roles with temporary credentials and session tokens instead of IAM users with static credentials. For on-premises workloads, try IAM Roles Anywhere (recently released).

2. Encryption of data at rest and in transit:

  • Enable server-side encryption and, where needed, client-side encryption.
  • Make HTTPS mandatory; you can enforce it with the “aws:SecureTransport” condition in the bucket policy.
  • Use AES-256 encryption (SSE-S3) at a bare minimum to encrypt objects stored in S3. Best practice is SSE-KMS, which puts stringent security constraints on object access, such as mandatory use of HTTPS.
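A sketch of both controls, assuming placeholder bucket and KMS key names; the `aws` commands are commented out since they require credentials:

```shell
# Deny any request that does not arrive over HTTPS (aws:SecureTransport condition).
# DOC-EXAMPLE-BUCKET is a placeholder.
cat > /tmp/deny-insecure-transport.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET",
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
      ],
      "Condition": {"Bool": {"aws:SecureTransport": "false"}}
    }
  ]
}
EOF
python3 -m json.tool /tmp/deny-insecure-transport.json > /dev/null && echo "policy JSON OK"
# aws s3api put-bucket-policy --bucket DOC-EXAMPLE-BUCKET \
#     --policy file:///tmp/deny-insecure-transport.json
# Turn on default SSE-KMS encryption (key ARN is a placeholder):
# aws s3api put-bucket-encryption --bucket DOC-EXAMPLE-BUCKET \
#     --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault":
#       {"SSEAlgorithm": "aws:kms", "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"}}]}'
```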

AWS Doc → https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-policy-for-config-rule/

3. MFA Delete and S3 object-level locking:

  • For critical buckets such as CloudTrail or Config logs, you can require MFA for object deletion. This ensures objects are not deleted accidentally.
  • S3 object-level locking enables you to store objects using a “Write Once Read Many” (WORM) model. S3 Object Lock can help prevent accidental or inappropriate deletion of data.
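A sketch of both settings; the retention mode and period are example values, the bucket and MFA device ARN are placeholders, and the `aws` commands are commented out since they require credentials (MFA Delete must be set by the root user):

```shell
# Object Lock (WORM) configuration sketch -- COMPLIANCE/365 days are example values.
cat > /tmp/object-lock.json <<'EOF'
{
  "ObjectLockEnabled": "Enabled",
  "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}}
}
EOF
python3 -m json.tool /tmp/object-lock.json > /dev/null && echo "object-lock JSON OK"
# Apply it (Object Lock must have been enabled when the bucket was created):
# aws s3api put-object-lock-configuration --bucket DOC-EXAMPLE-BUCKET \
#     --object-lock-configuration file:///tmp/object-lock.json
# Enable MFA Delete (run as the root user, passing its MFA device and current code):
# aws s3api put-bucket-versioning --bucket DOC-EXAMPLE-BUCKET \
#     --versioning-configuration Status=Enabled,MFADelete=Enabled \
#     --mfa "arn:aws:iam::111122223333:mfa/root-device 123456"
```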

AWS Doc → https://docs.aws.amazon.com/AmazonS3/latest/userguide/MultiFactorAuthenticationDelete.html

** In general, it’s recommended to enable Control Tower and forward all critical logs to locked-down AWS accounts.

4. S3 Lifecycle Policies:

  • Bucket lifecycle policies can be set to ensure that objects are deleted when no longer needed.
  • The major advantage of lifecycle policies is cost optimisation; they also let you move data to other storage classes such as S3 Intelligent-Tiering and Glacier (Deep Archive).
  • Check the AWS docs before applying a policy to an existing bucket. There are also costs associated with object transitions from one storage tier to another, and they can multiply if you have millions of objects. Intelligent-Tiering is the best place to start.
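A minimal lifecycle sketch along those lines: transition objects to Intelligent-Tiering after 30 days and expire them after 365 (example numbers; the bucket name is a placeholder, and the `aws` command is commented out since it requires credentials):

```shell
# Lifecycle configuration sketch: tier after 30 days, expire after 365.
cat > /tmp/lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "tier-then-expire",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [{"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}],
      "Expiration": {"Days": 365}
    }
  ]
}
EOF
python3 -m json.tool /tmp/lifecycle.json > /dev/null && echo "lifecycle JSON OK"
# aws s3api put-bucket-lifecycle-configuration --bucket DOC-EXAMPLE-BUCKET \
#     --lifecycle-configuration file:///tmp/lifecycle.json
```

An empty `Prefix` applies the rule to every object in the bucket; scope it down with a real prefix or tags for mixed-use buckets.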

AWS Doc → https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html

5. S3 Object logging:

  • Object logging is needed for critical buckets to track all S3 API operations and analyze any anomalies.
  • It’s also needed for auditing purposes and for backtracking security incidents.
  • You can create a separate S3 bucket to log all activities.
  • Check CloudTrail logs to trace back security incidents.
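One way to do this is server access logging to a separate log bucket, sketched below (both bucket names are placeholders; the `aws` command is commented out since it requires credentials; CloudTrail data events are the other option for API-level logging):

```shell
# Server access logging sketch: deliver access logs to a dedicated log bucket.
cat > /tmp/logging.json <<'EOF'
{
  "LoggingEnabled": {
    "TargetBucket": "DOC-EXAMPLE-LOG-BUCKET",
    "TargetPrefix": "s3-access-logs/"
  }
}
EOF
python3 -m json.tool /tmp/logging.json > /dev/null && echo "logging JSON OK"
# aws s3api put-bucket-logging --bucket DOC-EXAMPLE-BUCKET \
#     --bucket-logging-status file:///tmp/logging.json
```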

AWS Doc → https://docs.aws.amazon.com/AmazonS3/latest/userguide/logging-with-S3.html

6. Versioning (keeping multiple variants of an object in the same bucket):

  • Versioning-enabled buckets can help you recover objects from accidental deletion or overwrite. For example, if you delete an object, Amazon S3 inserts a delete marker instead of removing the object permanently; the delete marker becomes the current object version. If you overwrite an object, a new object version is created in the bucket, and you can always restore the previous version. For more information, see the AWS documentation on deleting object versions from a versioning-enabled bucket.
  • Enable it for critical buckets and use it together with MFA Delete to add another layer of security.
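A sketch of enabling versioning and undoing an accidental delete by removing the delete marker (bucket, key, and version ID are placeholders; the `aws` commands are commented out since they require credentials):

```shell
# Versioning configuration sketch (DOC-EXAMPLE-BUCKET is a placeholder).
cat > /tmp/versioning.json <<'EOF'
{"Status": "Enabled"}
EOF
python3 -m json.tool /tmp/versioning.json > /dev/null && echo "versioning JSON OK"
# aws s3api put-bucket-versioning --bucket DOC-EXAMPLE-BUCKET \
#     --versioning-configuration file:///tmp/versioning.json
# Find the delete marker, then remove it to restore the previous version:
# aws s3api list-object-versions --bucket DOC-EXAMPLE-BUCKET --prefix report.csv
# aws s3api delete-object --bucket DOC-EXAMPLE-BUCKET --key report.csv \
#     --version-id <delete-marker-version-id>
```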

AWS Doc → https://docs.aws.amazon.com/AmazonS3/latest/userguide/versioning-workflows.html

7. S3 Monitoring:

  • Monitor S3 buckets for critical metrics, e.g. total objects and total storage in use.
  • Use the S3 Storage Lens dashboard to check usage. It comes with free and paid tiers of metrics.
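Those same bucket-level numbers are also published as daily CloudWatch metrics, which can be pulled as sketched below (bucket name and dates are placeholders; the `aws` command is commented out since it requires credentials):

```shell
# CloudWatch dimensions for S3 daily storage metrics.
# NumberOfObjects is reported under the AllStorageTypes storage type.
cat > /tmp/s3-dimensions.json <<'EOF'
[
  {"Name": "BucketName", "Value": "DOC-EXAMPLE-BUCKET"},
  {"Name": "StorageType", "Value": "AllStorageTypes"}
]
EOF
python3 -m json.tool /tmp/s3-dimensions.json > /dev/null && echo "dimensions JSON OK"
# One data point per day (86400 s) over a week:
# aws cloudwatch get-metric-statistics --namespace AWS/S3 \
#     --metric-name NumberOfObjects --dimensions file:///tmp/s3-dimensions.json \
#     --start-time 2022-10-01T00:00:00Z --end-time 2022-10-07T00:00:00Z \
#     --period 86400 --statistics Average
```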

Doc → https://docs.aws.amazon.com/AmazonS3/latest/userguide/monitoring-overview.html

8. S3 Pre-signed URLs:

  • S3 objects are private by default. Only the object owner has permission to access them. However, the object owner can optionally share objects with others by creating a presigned URL, using their own security credentials, to grant time-limited permission to download the objects.
  • Anyone who receives the presigned URL can then access the object. For example, if you have a video in your bucket and both the bucket and the object are private, you can share the video with others by generating a presigned URL. See Share an object with others.

CLI : https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/presign.html

Example : To create a pre-signed URL with a custom lifetime that links to an object in an S3 bucket

The following presign command generates a pre-signed URL for a specified bucket and key that is valid for one week.

aws s3 presign s3://DOC-EXAMPLE-BUCKET/test2.txt \
--expires-in 604800



9. Enable S3 GuardDuty protection:

  • It monitors object-level API operations to identify potential security risks for data within your S3 buckets, making sure the bucket is reached by legitimate IP addresses rather than malicious actors.
  • You can enable S3 protection together with GuardDuty at the AWS Organization level.
  • Be aware that GuardDuty will analyze all operations on the S3 bucket, and there are additional costs associated with it.
  • Ideally, we should be able to disable S3 protection at the bucket level, since it isn’t needed for buckets meant for private use only (i.e. from on-premises), but this feature is not implemented yet. You can reach out to AWS TAMs/Support and they can disable it on the backend.
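A sketch of switching on S3 protection for an existing detector (the detector ID is a placeholder; the `aws` command is commented out since it requires credentials):

```shell
# GuardDuty data-source configuration enabling S3 log analysis.
cat > /tmp/guardduty-s3.json <<'EOF'
{"S3Logs": {"Enable": true}}
EOF
python3 -m json.tool /tmp/guardduty-s3.json > /dev/null && echo "data-sources JSON OK"
# aws guardduty update-detector --detector-id 12abc34d567e8fa901bc2d34e56789f0 \
#     --data-sources file:///tmp/guardduty-s3.json
```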

AWS Doc → https://docs.aws.amazon.com/guardduty/latest/ug/s3-protection.html

10. S3 Macie (Sensitive data discovery and protection at scale):

  • Macie automates the discovery of sensitive data, such as personally identifiable information (PII) and financial data, to provide you with a better understanding of the data that your organization stores in Amazon Simple Storage Service (Amazon S3).
  • You can also integrate Macie with AWS Organizations and manage it centrally.
  • Macie can publish findings to AWS Security Hub and EventBridge.
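A sketch of pointing a one-time Macie classification job at a bucket (account ID, bucket name, and job name are placeholders; the `aws` commands are commented out since they require credentials and Macie to be enabled):

```shell
# Scope for a Macie classification job: one bucket in one account.
cat > /tmp/macie-s3-job.json <<'EOF'
{
  "bucketDefinitions": [
    {"accountId": "111122223333", "buckets": ["DOC-EXAMPLE-BUCKET"]}
  ]
}
EOF
python3 -m json.tool /tmp/macie-s3-job.json > /dev/null && echo "job JSON OK"
# aws macie2 enable-macie
# aws macie2 create-classification-job --job-type ONE_TIME \
#     --name pii-scan --s3-job-definition file:///tmp/macie-s3-job.json
```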

AWS Doc → https://docs.aws.amazon.com/macie/latest/user/what-is-macie.html

11. S3 Gateway Endpoint:

  • Gateway endpoints provide reliable connectivity to Amazon S3 without requiring an internet gateway or a NAT device for your VPC. Gateway endpoints do not enable AWS PrivateLink. Amazon S3 supports both gateway endpoints and interface endpoints.
  • Each subnet route table must have a route that sends traffic destined for the service to the gateway endpoint using the prefix list for the service.
  • When you create a gateway endpoint, you select the VPC route tables for the subnets that you enable. The following route is automatically added to each route table that you select. The destination is a prefix list for the service owned by AWS and the target is the gateway endpoint.
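A sketch of creating such an endpoint, with an optional endpoint policy that limits which bucket it can reach (VPC ID, route table ID, region, and bucket name are placeholders; the `aws` command is commented out since it requires credentials):

```shell
# Endpoint policy sketch: this endpoint may only read/write one bucket.
cat > /tmp/endpoint-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
    }
  ]
}
EOF
python3 -m json.tool /tmp/endpoint-policy.json > /dev/null && echo "endpoint policy JSON OK"
# aws ec2 create-vpc-endpoint --vpc-id vpc-0abc123 \
#     --vpc-endpoint-type Gateway \
#     --service-name com.amazonaws.us-east-1.s3 \
#     --route-table-ids rtb-0abc123 \
#     --policy-document file:///tmp/endpoint-policy.json
```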

AWS Doc → https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html

12. Monitor S3 bucket policy alterations:

  • For critical buckets with sensitive data, make sure you monitor S3 bucket property changes and set alerts for them. Example: a CloudWatch Events (EventBridge) rule can detect any S3 bucket policy change and notify via SNS. You can accomplish the same with your CSPM tool or centralized SIEM.
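A sketch of such a rule: an EventBridge pattern matching bucket-policy API calls recorded by CloudTrail, wired to an SNS topic (rule name, topic ARN, and event-name list are example values; the `aws` commands are commented out since they require credentials):

```shell
# EventBridge pattern: fire on bucket-policy/ACL changes seen via CloudTrail.
cat > /tmp/s3-policy-change-pattern.json <<'EOF'
{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutBucketPolicy", "DeleteBucketPolicy", "PutBucketAcl"]
  }
}
EOF
python3 -m json.tool /tmp/s3-policy-change-pattern.json > /dev/null && echo "pattern JSON OK"
# aws events put-rule --name s3-policy-changes \
#     --event-pattern file:///tmp/s3-policy-change-pattern.json
# aws events put-targets --rule s3-policy-changes \
#     --targets "Id"="sns","Arn"="arn:aws:sns:us-east-1:111122223333:security-alerts"
```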

Doc (example) → https://asecure.cloud/a/detect-s3-bucket-policy-changes/

13. Other tips:

  • Use IAM Access Analyzer to analyze S3 bucket permissions.
  • Use the AWS Trusted Advisor S3 bucket permissions check.
  • AWS Config can be used to record a resource timeline and backtrack changes to bucket configurations.
  • Use Cloud Security Posture Management (CSPM) tools or the AWS console to list all publicly exposed S3 buckets and perform a risk assessment. Check whether they are really needed and implement additional security controls.




Sudhir Kumar

Working as a Cloud Lead/Architect with a security mindset.