AWS S3 Bucket
Amazon S3 (Simple Storage Service) is an object storage service that offers industry-leading scalability, data availability, security, and performance. It can be used for a variety of use cases, such as storing and retrieving data, hosting static websites, and integrating simply with other AWS services. Each S3 bucket must have a globally unique name, and access to it can be controlled with bucket policies and ACLs (Access Control Lists).
Here are some common uses of S3 buckets:
Data Backup and Archiving: S3 buckets provide a reliable and durable storage solution for backing up data and archiving files. Many organizations use S3 to store backups of their databases, applications, and files for disaster recovery purposes.
Static Website Hosting: S3 buckets can be configured to host static websites. This is a cost-effective way to host websites that don't require server-side processing. Users can upload HTML, CSS, JavaScript, and other files to an S3 bucket, and configure it to serve web pages to visitors.
Media Storage and Streaming: S3 buckets can store media files like images, videos, and audio files. These files can be streamed directly from S3 or integrated with other AWS services like Amazon Elastic Transcoder for video transcoding or Amazon CloudFront for low-latency streaming.
Data Sharing and Collaboration: S3 buckets can be configured with fine-grained access controls, allowing organizations to securely share data with external partners, customers, or other AWS accounts. This makes S3 a popular choice for collaboration and sharing large files securely over the internet.
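As an illustration of the static-website use case above, a bucket can be pointed at an index document with Terraform. This is only a sketch: the bucket name and the document file names (my-demo-website-bucket, index.html, error.html) are placeholder assumptions you would replace with your own.

```
# Hypothetical example: serve a static site from a bucket.
# Bucket name and document names are placeholders.
resource "aws_s3_bucket" "website" {
  bucket = "my-demo-website-bucket"
}

resource "aws_s3_bucket_website_configuration" "website" {
  bucket = aws_s3_bucket.website.id

  # Object served for requests to the root of the site.
  index_document {
    suffix = "index.html"
  }

  # Object served when a requested key does not exist.
  error_document {
    key = "error.html"
  }
}
```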
How does an AWS S3 bucket work?
Amazon S3 organizes data into individual buckets, each with its own access controls. Customers store objects in these buckets, and S3 provides capabilities such as versioning and lifecycle management that scale with the stored data.
Creation: To start using S3, you first create a bucket. A bucket is a container for objects stored in S3. Each bucket has a globally unique name and can be located in a specific AWS region.
Objects: Once you have a bucket, you can upload objects to it. Objects in S3 can be any kind of data, such as files, documents, images, videos, or even entire application backups. Each object is stored as a key-value pair, where the key is the unique identifier for the object within the bucket.
Storage Classes: S3 offers different storage classes optimized for various use cases. These include Standard, Standard-IA (Infrequent Access), One Zone-IA, Intelligent-Tiering, Glacier, and Glacier Deep Archive.
Access Control: S3 allows you to control access to your buckets and objects using a combination of bucket policies, access control lists (ACLs), and AWS Identity and Access Management (IAM) policies. These let you control actions such as read, write, and delete at both the bucket and object levels.
Lifecycle Policies: S3 provides lifecycle policies that enable you to automate the management of your objects over their lifetime. This helps optimize storage costs and maintain compliance with data retention policies.
Data Transfer: S3 supports high-speed data transfer both into and out of the service. You can use the AWS Management Console, AWS Command Line Interface (CLI), or software development kits (SDKs) to upload and download objects to and from S3.
Monitoring and Logging: S3 provides detailed metrics and logging capabilities to monitor the usage and performance of your buckets and objects. You can use Amazon CloudWatch metrics to track storage usage, request rates, and error rates, and enable server access logging to record all requests made to your bucket.
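The storage-class and lifecycle points above can be sketched in Terraform. This is an illustrative fragment under assumed values: the rule ID and the day thresholds are arbitrary, and the bucket reference assumes a bucket resource named my_bucket.

```
# Hypothetical lifecycle rule: move objects to Standard-IA after 30
# days, to Glacier after 90, and delete them after a year.
resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.my_bucket.id

  rule {
    id     = "archive-then-expire"
    status = "Enabled"

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 90
      storage_class = "GLACIER"
    }

    expiration {
      days = 365
    }
  }
}
```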
Task1:
Creating and Managing S3 Buckets Using Terraform.
Step 1: Create a file named "terraform.tf" and declare the AWS provider details in it.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}
Step 2: Create a file named "s3.tf" and add the following code to it.
provider "aws" {
  region = "us-east-2"
}

resource "aws_s3_bucket" "my_bucket" {
  # S3 bucket names must be globally unique and may contain only
  # lowercase letters, numbers, hyphens, and dots (no underscores).
  bucket = "my-demo-bucket-021"
}
Step 3: Run the terraform init command to initialize the working directory and download the required providers.
terraform init
Step 4: Run the terraform plan command to preview the changes.
terraform plan
Step 5: Execute the terraform apply command to create the bucket.
terraform apply
Step 6: Now, we will check the Terraform state:
terraform state list
Step 7: Check whether the bucket has been created in AWS S3.
Configure the bucket to allow public read access.
The S3 bucket is created as private by default. To provide public read access, we attach a bucket policy using the "aws_s3_bucket_policy" resource type and relax the account-level guard rails with "aws_s3_bucket_public_access_block".
Applying a bucket policy also requires that your IAM user is allowed to edit bucket policies. That permission is useful whenever users or roles need to modify a bucket policy, for example to add or remove permissions.
Step 1: You have to give permissions for your IAM user. Go to IAM console and select your user. In Permission policies click on create inline policy for user.
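If you prefer to manage that inline policy with Terraform instead of the console, a sketch might look like the following. The policy name, user name, and bucket ARN here are assumptions taken from this walkthrough; adapt them to your account.

```
# Hypothetical inline policy letting an IAM user manage the bucket policy.
resource "aws_iam_user_policy" "allow_bucket_policy_edit" {
  name = "allow-bucket-policy-edit"
  user = "Terraform-user"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketPolicy",
        "s3:PutBucketPolicy",
        "s3:DeleteBucketPolicy"
      ],
      "Resource": "arn:aws:s3:::my-demo-bucket-021"
    }
  ]
}
EOF
}
```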
Step 2: Create a file named "access.tf". The "aws_s3_bucket_policy" resource is associated with the S3 bucket resource "aws_s3_bucket.my_bucket" through the "bucket" parameter, and its policy grants public read access ("s3:GetObject") to every object in the bucket.
resource "aws_s3_bucket_policy" "bucket_policy" {
  bucket = aws_s3_bucket.my_bucket.id
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::my-demo-bucket-021/*"
      ]
    }
  ]
}
EOF
}

resource "aws_s3_bucket_public_access_block" "pem_access" {
  bucket                  = aws_s3_bucket.my_bucket.id
  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}
Step 3: Now change the object ownership by enabling ACLs ("ACLs enabled") on the S3 bucket.
Step 4: Run the terraform init command to initialize the working directory and download the required providers.
Step 5: Run the terraform plan command.
Step 6: Execute the terraform apply command.
Step 7: Now the S3 bucket is publicly accessible.
Create an S3 bucket policy that allows read-only access to a specific IAM user or role:
In this task, we will create an S3 bucket policy that allows read-only access to a specific IAM user or role using a Terraform configuration file.
To provide read-only access to a specific IAM user or role, the code creates an S3 bucket policy resource using the “aws_s3_bucket_policy” resource type. The resource is associated with the S3 bucket resource “aws_s3_bucket.my_bucket” using the “bucket” parameter.
Step 1: Create a file named "bucket_read_only.tf" and add the following code to it.
resource "aws_s3_bucket_policy" "bucket_iam_policy" {
  bucket = aws_s3_bucket.my_bucket.id
  # Change the principal below from "*" to the specific IAM user or
  # role that should receive read-only access. (Comments are not
  # valid inside the JSON heredoc, so the note lives out here.)
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyForUser",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::459840383823:user/Terraform-user"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-demo-bucket-021/*"
    }
  ]
}
EOF
}
Step 2: Run the terraform init command.
Step 3: Run the terraform plan command.
Step 4: Execute the terraform apply command.
Now, we will check the Terraform state:
Step 5: Now, verify it in the AWS console:
Enable versioning on the S3 bucket:
S3 bucket versioning is a feature in AWS S3 that enables the preservation and tracking of multiple versions of an object. It provides an added layer of data protection, allowing you to recover and restore previous versions of objects stored in an S3 bucket.
Step 1: Create a file named "enable_versioning.tf" and add the following code to it.
resource "aws_s3_bucket_versioning" "my_bucket_version" {
  bucket = "my-demo-bucket-021"

  # With AWS provider v4, versioning is set through a
  # versioning_configuration block, not an "enabled" flag.
  versioning_configuration {
    status = "Enabled"
  }
}
The Terraform configuration above adds an aws_s3_bucket_versioning resource with the identifier my_bucket_version. This resource enables versioning for the bucket named my-demo-bucket-021. Versioning in Amazon S3 allows us to keep multiple versions of an object in the same bucket.
Step 2: Run the terraform init and terraform plan commands.
Step 3: Execute the terraform apply command.
Step 4: Now we can verify in the S3 bucket that versioning has been enabled.
Step 5: Once you are done with the newly created resources, you can run the terraform destroy command, which will delete the complete infrastructure.
Now, we will check the Terraform state (which shows no resources):
Step 6: Now, we can see the S3 bucket has been removed as well.
Conclusion:
⌛In conclusion, using Terraform to manage Amazon S3 buckets offers various benefits, including automation, repeatability, and version-controlled infrastructure. Terraform allows you to define your S3 bucket infrastructure as code using a declarative configuration language.
⌛Terraform enables you to manage the lifecycle of your S3 buckets, including enabling versioning, configuring access controls, defining lifecycle policies, and setting up logging and monitoring.
⌛Using Terraform with Amazon S3 buckets helps streamline the provisioning and management of cloud storage resources, enhances infrastructure agility, and improves overall operational efficiency. By treating infrastructure as code, you can achieve greater control, consistency, and scalability in your cloud environment.
Thank you for 📖reading my blog. 👍 Like it and share it 🔄 with your friends. Hope you find it helpful🤞
Happy learning😊😊