Terraform powered by Niles Partners can efficiently deploy, scale, release, and monitor infrastructure for multi-tier applications. Terraform is a free, downloadable tool that you interact with on the command line. It allows you to provision infrastructure on any cloud provider and handle configuration, plugins, and state. This open-source tool lets you specify on-premises and cloud resources in human-readable configuration files, which you can reuse, version, and share. Moreover, Terraform can manage both low-level components (such as storage, compute, and networking resources) and high-level components (such as DNS entries and SaaS features).
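As a taste of what these configuration files look like, the snippet below declares a single resource in HCL; the resource type and names are purely illustrative, and the full configuration used in this guide is built up step by step later on.
resource "aws_s3_bucket" "example" {
  # Illustrative placeholder only; not part of this guide's deployment
  bucket = "my-example-bucket"
}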
It will take a few minutes for your VM to be deployed. When the deployment is finished, move on to the next section.
Connect to virtual machine
Create an SSH connection with the VM.
ssh azureuser@10.111.12.123
Using the same bash shell you used to create your SSH key pair (you can reopen Cloud Shell by selecting >_ again or going to https://shell.azure.com/bash), paste the SSH connection command into the shell to create an SSH session.
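If you have not yet created a key pair, the commands below show one way to do so and then connect; the key type and the username/IP are placeholders taken from the example above, so substitute the values from your own deployment.
# Generate an SSH key pair (accept the default path when prompted)
$ ssh-keygen -t rsa -b 4096
# Connect using the admin username and the VM's public IP address
$ ssh azureuser@10.111.12.123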
Usage/Deployment Instructions
Step 1: Access Terraform in the Azure Marketplace and click the Get It Now button.
Click Continue and then click Create.
Step 2: To create a virtual machine, enter or select appropriate values for the zone, machine type, resource group, and so on, according to your requirements.
Click Review + create.
Step 3: The window below confirms that the VM was deployed.
Step 4: Open PuTTY and connect to your machine. Enter the IP address of the running virtual machine.
Step 5: Log in with the username and password that you provided during machine creation.
Step 6: Check the version of Terraform installed on your system.
$ terraform --version
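The output should look something like the following; the exact version number depends on the release installed on your system and is shown here purely as an illustration.
Terraform v1.5.7
on linux_amd64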
Step 7: Using Terraform to Manage Infrastructure.
Now that terraform is installed, let’s create a test project.
$ mkdir projects
$ cd projects
Create the Terraform main configuration file.
$ touch main.tf
I’m doing a test with the AWS provider, but you can use other providers for your projects. My Terraform configuration provider section is as below.
$ vim main.tf
# Provider
provider "aws" {
  access_key = ""
  secret_key = ""
  region     = "us-west-1"
}
Paste your AWS access key and secret key into the access_key and secret_key arguments respectively. You can also configure your AWS access credentials with the AWS CLI tool instead of hard-coding them in the file.
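For example, the AWS CLI's interactive configuration, or the standard AWS environment variables, let you leave access_key and secret_key empty; the key values below are placeholders.
# Interactive configuration (writes ~/.aws/credentials)
$ aws configure
# Or export credentials for the current shell session
$ export AWS_ACCESS_KEY_ID="AKIA..."
$ export AWS_SECRET_ACCESS_KEY="your-secret-key"
$ export AWS_DEFAULT_REGION="us-west-1"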
When done, run terraform init to initialize a Terraform working directory.
$ terraform init
Terraform will automatically download the configured provider plugin into the .terraform directory.
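On current Terraform releases it is also good practice to pin the provider version with a required_providers block; the version constraint below is an assumption, so adjust it to your needs.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}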
Step 8: Let’s now add resource blocks to create AWS VPC and subnet resources by editing the main.tf file.
$ vim main.tf
# Provider
provider "aws" {
  access_key = ""
  secret_key = ""
  region     = ""
}

# Retrieve the AZs where we want to create network resources
data "aws_availability_zones" "available" {}

# VPC resource
resource "aws_vpc" "main" {
  cidr_block           = "10.11.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = {
    Name        = "Test-VPC"
    Environment = "Test"
  }
}
# AWS subnet resource
resource "aws_subnet" "test" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.11.1.0/24"
  availability_zone       = data.aws_availability_zones.available.names[0]
  map_public_ip_on_launch = false
  tags = {
    Name = "Test_subnet1"
  }
}
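Optionally, you can also declare output values so Terraform prints the IDs of the newly created resources after an apply; the output names below are arbitrary.
output "vpc_id" {
  value = aws_vpc.main.id
}
output "subnet_id" {
  value = aws_subnet.test.id
}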
Save the file after adding the resource definitions and setting the AWS variables, then generate and show an execution plan.
$ terraform plan
Finally, build your infrastructure with Terraform using terraform apply.
$ terraform apply
Review the changes to be made and type “yes” to initiate the modifications.
A successful Terraform run prints a success message at the end.
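The tail end of a successful apply looks roughly like this; the resource counts depend on your configuration.
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.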
Terraform state is saved to ./terraform.tfstate by default, but the backend can be changed. You can confirm the infrastructure changes from the AWS console.
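For example, state can be stored remotely in an S3 backend; this is only a sketch, and the bucket name and key shown are placeholders you would replace with your own.
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"   # placeholder bucket name
    key    = "test-project/terraform.tfstate"
    region = "us-west-1"
  }
}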
Step 9: Destroying Terraform Infrastructure
We have confirmed that our Terraform installation on Ubuntu 22.04/20.04/18.04 is working as expected. Now destroy the Terraform-managed infrastructure by running the terraform destroy command.
$ terraform destroy
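Like apply, destroy shows an execution plan and asks for confirmation before removing anything. If you only want to remove a single resource, you can target it explicitly, for example the subnet created above.
$ terraform destroy -target=aws_subnet.test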
Enjoy your Application.
Until now, small developers did not have the capital to acquire massive compute resources and ensure they had the capacity they needed to handle unexpected spikes in load. Amazon EC2 enables any developer to leverage Amazon’s own benefits of massive scale with no up-front investment or performance compromises. Developers are now free to innovate knowing that no matter how successful their businesses become, it will be inexpensive and simple to ensure they have the compute capacity they need to meet their business requirements.
The “Elastic” nature of the service allows developers to instantly scale to meet spikes in traffic or demand. When computing requirements unexpectedly change (up or down), Amazon EC2 can instantly respond, meaning that developers have the ability to control how many resources are in use at any given point in time. In contrast, traditional hosting services generally provide a fixed number of resources for a fixed amount of time, meaning that users have a limited ability to easily respond when their usage is rapidly changing, unpredictable, or is known to experience large peaks at various intervals.
Traditional hosting services generally provide a pre-configured resource for a fixed amount of time and at a predetermined cost. Amazon EC2 differs fundamentally in the flexibility, control and significant cost savings it offers developers, allowing them to treat Amazon EC2 as their own personal data center with the benefit of Amazon.com’s robust infrastructure.
First, as noted above, the elasticity of Amazon EC2 lets developers scale the number of resources in use up or down at any point in time, whereas traditional hosting services provide a fixed number of resources for a fixed amount of time and cannot easily absorb usage that is rapidly changing, unpredictable, or known to experience large peaks at various intervals.
Second, many hosting services don’t provide full control over the compute resources being provided. Using Amazon EC2, developers can choose not only to initiate or shut down instances at any time, they can completely customize the configuration of their instances to suit their needs – and change it at any time. Most hosting services cater more towards groups of users with similar system requirements, and so offer limited ability to change these.
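To connect this back to the Terraform workflow above, an instance and its full configuration can themselves be described declaratively; the sketch below is illustrative only, and the AMI ID, instance type, and names are placeholders rather than values from this guide.
resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type = "t3.micro"
  subnet_id     = aws_subnet.test.id
  tags = {
    Name = "Test-Instance"
  }
}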
Finally, with Amazon EC2 developers enjoy the benefit of paying only for their actual resource consumption – and at very low rates. Most hosting services require users to pay a fixed, up-front fee irrespective of their actual computing power used, and so users risk overbuying resources to compensate for the inability to quickly scale up resources within a short time frame.
No. You do not need an Elastic IP address for all your instances. By default, every instance comes with a private IP address and an internet-routable public IP address. The private address is associated exclusively with the instance and is only returned to Amazon EC2 when the instance is stopped or terminated. The public address is associated exclusively with the instance until it is stopped, terminated, or replaced with an Elastic IP address. These IP addresses should be adequate for many applications where you do not need a long-lived, internet-routable endpoint. Compute clusters, web crawling, and backend services are all examples of applications that typically do not require Elastic IP addresses.
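If an application does need a stable public address, an Elastic IP can be allocated and attached to an instance; for example, in Terraform, building on the hypothetical aws_instance.app resource sketched earlier.
resource "aws_eip" "app" {
  # Attach the allocated address to the placeholder instance from the earlier sketch
  instance = aws_instance.app.id
  domain   = "vpc"
}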
You have complete control over the visibility of your systems. The Amazon EC2 security systems allow you to place your running instances into arbitrary groups of your choice. Using the web services interface, you can then specify which groups may communicate with which other groups, and also which IP subnets on the Internet may talk to which groups. This allows you to control access to your instances in our highly dynamic environment. Of course, you should also secure your instance as you would any other server.
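These groups are security groups; the Terraform sketch below shows the idea of permitting traffic only from one trusted subnet, with the group name and CIDR range chosen purely for illustration.
resource "aws_security_group" "ssh_only" {
  name   = "ssh-only"
  vpc_id = aws_vpc.main.id

  ingress {
    description = "Allow SSH from a single trusted subnet"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.0/24"]   # placeholder trusted range
  }
}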