Launching a Webserver on AWS Using Terraform (EFS, Elastic File System)

Mohamed Furqan
5 min read · Jul 21, 2020

Check out my old post on launching a webserver on AWS using Terraform, which is quite similar to this post, but there I used EBS.

The goal is to launch a webserver using Terraform, but this time using EFS as persistent storage.

Persistent storage: storage whose lifetime is independent of any single resource, so the data remains available regardless of the state of the running instance that uses it.

Why use EFS as persistent storage?

An application can access files on EFS just as it would in an on-premises environment, because EFS exposes a standard NFS interface; S3 does not support NFS. Compared with another popular Amazon service, Elastic Block Store (EBS), the major advantage of EFS is that it offers shared storage: multiple instances can mount the same file system at the same time.

So let's get started!

First, set up the provider to access AWS. Here I have created a profile with my access key and secret key:

aws configure --profile fate

Now we can use the profile name directly in the Terraform code:

provider "aws" {
region = "ap-south-1"
profile = "fate"
}
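
Optionally, you can pin the Terraform and provider versions so the run is reproducible. This block is my addition, not from the original post, and the version constraints are assumptions:

# Optional version pinning (constraints here are assumptions).
# The required_providers "source" syntax needs Terraform 0.13 or later.
terraform {
  required_version = ">= 0.13"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}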

1. Create the key pair and security group.
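
The post uses an existing key pair named fate_key, with its private key saved at D:/eks.pem. Creating it by hand in the console works, but here is a minimal sketch of doing it in Terraform instead (the tls/local resource names below are my assumptions, not from the original):

# Hedged sketch: generate the key pair in Terraform instead of the console.
resource "tls_private_key" "fate" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "fate_key" {
  key_name   = "fate_key"
  public_key = tls_private_key.fate.public_key_openssh
}

# Save the private key locally so the SSH connection blocks below can read it.
resource "local_file" "fate_pem" {
  content         = tls_private_key.fate.private_key_pem
  filename        = "D:/eks.pem"
  file_permission = "0400"
}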

To reach the webserver running on the EC2 instance, we must open port 80 (HTTP) and port 22 (SSH, PuTTY, etc.) in the security group. Port 2049 is also opened because EFS is mounted over NFS.

resource "aws_security_group" "allow_tls" {
name = "allow_tls"
ingress {
description = "Security group for ssh"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "Security group for http"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "NFS"
from_port = 2049
to_port = 2049
protocol = "tcp"
cidr_blocks = [ "0.0.0.0/0" ]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}tags = {
Name = "allow_tls"
}
}
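
Opening 0.0.0.0/0 on all three ports is fine for a demo, but for anything longer-lived you would normally restrict at least SSH to your own address. A sketch (the variable name and example CIDR are placeholders, not from the original):

# Hedged sketch: parameterize the SSH source instead of 0.0.0.0/0.
variable "admin_cidr" {
  description = "CIDR block allowed to SSH in (example value is a placeholder)"
  default     = "203.0.113.7/32"
}

# Then, in the SSH ingress block above:
#   cidr_blocks = [var.admin_cidr]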

2. Launch the EC2 instance.

This EC2 instance uses the key pair and security group created above.

resource "aws_instance" "web" {
ami = "ami-0447a12f28fddb066"
instance_type = "t2.micro"
key_name = "fate_key"
security_groups = [ "allow_tls" ]
connection {
type = "ssh"
user = "ec2-user"
private_key = file("D:/eks.pem")
host = aws_instance.web.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo yum install httpd php git -y",
"sudo systemctl restart httpd",
"sudo systemctl enable httpd",
]
}tags = {
Name = "os1"
}}

Terraform launches the instance, the remote-exec provisioner installs the required software (httpd, php, git), and the webserver is started and enabled.

3. Launch the EFS volume (file system) and mount it on the instance.

resource "aws_efs_file_system" "foo" {creation_token = "my-product"tags = {Name = "MyProduct"}}resource "aws_efs_access_point" "test" {file_system_id = "${aws_efs_file_system.foo.id}"}resource "aws_efs_mount_target" "alpha" {depends_on = [ aws_instance.web , ]file_system_id = "${aws_efs_file_system.foo.id}"subnet_id      = "subnet-8f7c17c3"security_groups =  ["${aws_security_group.allow_tls.id}"]}resource "null_resource" "null-remote-1"  {
depends_on = [aws_efs_mount_target.alpha,]
connection {
type = "ssh"
user = "ec2-user"
private_key = ("D:/eks.pem")
host = aws_instance.web.public_ip
}
// ATTACH EFS provisioner "remote-exec" {inline = [
"sudo echo ${aws_efs_file_system.foo.dns_name}:/var/www/html efs defaults,_netdev 0 0 >> sudo /etc/fstab",
"sudo mount ${aws_efs_file_system.foo.dns_name}:/ /var/www/html",
"sudo curl https://github.com/FateDaeth/aws_terraform_web.git > index.html",
"sudo cp index.html /var/www/html/",]
}
}
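
If you want Terraform itself to confirm the mount succeeded, a small extra null_resource can run a check over the same SSH connection. This resource is my addition, not part of the original code:

# Hedged sketch: verify that /var/www/html is really on EFS.
resource "null_resource" "verify_mount" {
  depends_on = [null_resource.null-remote-1]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("D:/eks.pem")
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "df -hT /var/www/html", # should report an nfs4 filesystem
      "ls /var/www/html",     # should list the deployed site files
    ]
  }
}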

4. Create the S3 bucket.

resource "aws_s3_bucket" "buck" {
bucket = "fateultimate1"
acl = "public-read"tags = {
Name = "Mybucket"
}
}

Now copy/deploy the image (here, a local file) into the S3 bucket and change its permission to publicly readable:

resource "aws_s3_bucket_object" "object" {
depends_on = [
aws_s3_bucket.buck ,
]bucket = "${aws_s3_bucket.buck.id}"
key = "fate.jpg"
source = "D:/fate.jpg"
etag = "${filemd5("D:/fate.jpg")}"
acl = "public-read"
}

This uploads the image to the S3 bucket.
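
While testing, a convenience output for the raw S3 URL of the object can be handy. This output is my addition, not in the original post:

# Hedged sketch: direct S3 URL of the uploaded image (added for convenience).
output "image_s3_url" {
  value = "https://${aws_s3_bucket.buck.bucket_regional_domain_name}/${aws_s3_bucket_object.object.key}"
}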

5. Create a CloudFront distribution using the S3 bucket (which contains the image).

locals {
  s3_origin_id = "myS3Origin"
}

resource "aws_cloudfront_distribution" "s3_distribution" {
  depends_on = [aws_s3_bucket.buck]

  origin {
    domain_name = aws_s3_bucket.buck.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }

  enabled             = true
  is_ipv6_enabled     = true
  comment             = "Some comment"
  default_root_object = "index.php"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      headers      = ["Origin"]

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

We can then take the CloudFront URL and use it in the code under /var/www/html.
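
To avoid digging that URL out of the console, an output can expose it. This output is my addition, not in the original post:

# Hedged sketch: expose the CloudFront domain so it can be pasted into the site.
output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}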

To get the public IP of the instance:

output "myos_ip" {
value = aws_instance.web.public_ip
}

This resource automatically opens the website in the browser using the instance's public IP:

resource "null_resource" "null_chrome" {
  depends_on = [aws_cloudfront_distribution.s3_distribution]

  provisioner "local-exec" {
    # "start msedge" opens Microsoft Edge on Windows; adjust for your OS/browser.
    command = "start msedge http://${aws_instance.web.public_ip}"
  }
}
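
With everything in place, the usual Terraform workflow deploys (and later tears down) the whole stack:

terraform init
terraform apply -auto-approve
# when you are done:
terraform destroy -auto-approve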

You can get the full code from my GitHub.
