- November 1, 2022
Project Reasoning
The reasoning behind the project was to continue my learning journey with infrastructure as code (IaC). Having worked on several projects within AWS, including the Cloud Resume Challenge, I had briefly come across Terraform and its uses, but had not had any hands-on experience with it until this point. This was because I had focused primarily on cloud-native tooling such as AWS SAM and serverless services. Given that Terraform is multi-cloud and used heavily in industry as an infrastructure provisioning tool, I thought it would be wise to focus some of my learning on it. To challenge myself further, I decided to carry out this project within Azure, as most of my IaC and serverless work had been with AWS.
Project Aim
The aim of this project was to use Terraform to create two separate environments, staging and production, each containing its own Azure instance. This is a typical practice in industry, allowing changes to be tested and experimented with before being migrated to the live environment. As a web developer I have a great deal of experience with staging environments, albeit at the platform level; the difference here is that this project focused solely on the infrastructure. There were two ways in which this project could have been carried out. The first was Terraform workspaces, which require less code but are more prone to human error. The second option, which is the one I opted for, was to create multiple environments in separate directories, which took longer and involved more code duplication than the former. I opted for the latter because it isolates the backend state for each environment, improves security, reduces the chance of human error, and means the codebase directly represents the deployed site(s).
The Project Breakdown
I began my project by creating two separate directories within my code editor of choice (VS Code), starting most of my work in the staging environment. The first file I created was “providers.tf”, which is responsible for declaring the dependencies needed for this project. This included a constraint on the required Terraform version as well as a reference to the latest Azure (azurerm) provider.
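As an illustration, a minimal “providers.tf” along these lines would achieve this (the exact version constraints below are my own assumptions, not the ones from the original project):

terraform {
  required_version = ">= 1.3.0"  # assumed Terraform version constraint

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"  # assumed provider version
    }
  }
}

provider "azurerm" {
  features {}  # the azurerm provider requires this (empty) features block
}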
The next phase of the project involved creating the main infrastructure configuration file. When I initially wrote the script for the main file, I had hard-coded all the variable names while I was still experimenting and troubleshooting. This is not best practice, which is why I decided to rewrite the file and define the variables in separate files (variables are declared in “variables.tf” and explicitly assigned in “terraform.tfvars”). Each new resource deployed was assigned a tag for organisation purposes.
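To give a flavour of that split, a declaration in “variables.tf” paired with a value in “terraform.tfvars” might look like this (the variable name is illustrative, not necessarily one from the project):

# variables.tf – declares the variable and its type
variable "location" {
  type        = string
  description = "Azure region to deploy resources into"
}

# terraform.tfvars – explicitly assigns the value
location = "West Europe"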
The first resource I set up in the main file was the resource group, which had two parameters: the name “Staging” and the location “West Europe”. I then created a virtual network for the VM along with the address space it should occupy; its location and resource group name were referenced from the resource group. Next I created a subnet called “staging-subnet” with its own address prefix. The final networking resource added was a public IP, which will be used until I assign a fully qualified domain name. As this project was just for educational purposes, I chose to have the IP assigned dynamically.
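A sketch of those resources follows; the resource labels and CIDR ranges are placeholders of my own, while the names, location, and dynamic allocation come from the description above:

resource "azurerm_resource_group" "staging" {
  name     = "Staging"
  location = "West Europe"
  tags     = { environment = "staging" }
}

resource "azurerm_virtual_network" "staging" {
  name                = "staging-network"
  address_space       = ["10.0.0.0/16"]  # placeholder address space
  location            = azurerm_resource_group.staging.location
  resource_group_name = azurerm_resource_group.staging.name
}

resource "azurerm_subnet" "staging" {
  name                 = "staging-subnet"
  resource_group_name  = azurerm_resource_group.staging.name
  virtual_network_name = azurerm_virtual_network.staging.name
  address_prefixes     = ["10.0.1.0/24"]  # placeholder prefix
}

resource "azurerm_public_ip" "staging" {
  name                = "staging-public-ip"
  location            = azurerm_resource_group.staging.location
  resource_group_name = azurerm_resource_group.staging.name
  allocation_method   = "Dynamic"  # dynamically assigned, as described above
}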
The next set of resources I allocated were more VM-specific, starting with the network security group and its list of rules. I wanted to allow inbound connections through SSH and therefore opened up port 22 (TCP). This will at some point become a staging/production web application, and will therefore also need port 80 open for inbound web traffic; however, as this project was mainly centred on the infrastructure, that step was omitted. The next resource I needed to set up was the network interface card (NIC), to connect the VM to the subnet. Some of its attributes were inherited from the resource group, and, as with the public IP, the private IP address was dynamically assigned.
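Sketched out under the same assumptions as above (names and rule priority are illustrative), the security group, its SSH rule, and the NIC could look as follows:

resource "azurerm_network_security_group" "staging" {
  name                = "staging-nsg"
  location            = azurerm_resource_group.staging.location
  resource_group_name = azurerm_resource_group.staging.name

  security_rule {
    name                       = "SSH"
    priority                   = 1001  # illustrative priority
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"  # inbound SSH only
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

resource "azurerm_network_interface" "staging" {
  name                = "staging-nic"
  location            = azurerm_resource_group.staging.location
  resource_group_name = azurerm_resource_group.staging.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.staging.id
    private_ip_address_allocation = "Dynamic"  # private IP assigned dynamically
    public_ip_address_id          = azurerm_public_ip.staging.id
  }
}

# Attaches the security group to the NIC
resource "azurerm_network_interface_security_group_association" "staging" {
  network_interface_id      = azurerm_network_interface.staging.id
  network_security_group_id = azurerm_network_security_group.staging.id
}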
The next key component of the VM was storage allocation. Again, most of the attributes were inherited from the resource group; however, when it came to the name, this needed to be globally unique, as each storage account is addressed via its own URL.
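One common way to guarantee a unique name is to append a random suffix via the hashicorp/random provider; this is an assumption on my part, as the post does not say how uniqueness was achieved:

resource "random_id" "storage_suffix" {
  byte_length = 4
}

resource "azurerm_storage_account" "staging" {
  # lowercase alphanumeric only, 3–24 characters, globally unique
  name                     = "stagingdiag${random_id.storage_suffix.hex}"
  resource_group_name      = azurerm_resource_group.staging.name
  location                 = azurerm_resource_group.staging.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}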
The next step was authentication, with the creation of a private key in the form of a (.pem) file. The key is exposed as a Terraform output; however, I added the sensitive attribute so the actual key is not printed to the console.
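Assuming the hashicorp/tls provider was used to generate the key pair (the post doesn't name it, but it is the usual choice), the resource and its guarded output would look something like this; the output name matches the terraform output command used later in the post:

resource "tls_private_key" "ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

output "tls_private_key" {
  value     = tls_private_key.ssh.private_key_pem
  sensitive = true  # prevents the key from being printed to the console
}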
The final step was configuring the virtual machine itself. The VM inherits its location and resource group name from the resource group, and references the NIC configured previously. The size of the VM (in this case a Standard B1s) was chosen based on the workload required. I then needed to reference the type of disk for the OS as well as the OS image itself, which was Ubuntu Server 22.04 LTS. As mentioned previously, I plan to use these machines as web servers and wanted a stable version of Linux. The final aspects of configuring the VM were assigning the admin username and public key, as well as the boot diagnostics.
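Pulling those pieces together, the VM resource might be sketched as below; the admin username matches the SSH command later in the post, the image reference is the standard Canonical one for Ubuntu Server 22.04 LTS, and the disk type is an assumption:

resource "azurerm_linux_virtual_machine" "staging" {
  name                  = "staging-vm"
  resource_group_name   = azurerm_resource_group.staging.name
  location              = azurerm_resource_group.staging.location
  size                  = "Standard_B1s"
  admin_username        = "azureuser"
  network_interface_ids = [azurerm_network_interface.staging.id]

  admin_ssh_key {
    username   = "azureuser"
    public_key = tls_private_key.ssh.public_key_openssh
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"  # assumed OS disk type
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }

  boot_diagnostics {
    storage_account_uri = azurerm_storage_account.staging.primary_blob_endpoint
  }
}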
Production Environment
The production environment mostly comprised the same scripts used in the staging environment, hence the code duplication drawback I mentioned in the project aim. Most of the changes related to naming conventions, as well as any references that needed to be unique.
Deployment
Now that the templates were in place, it was time to start the deployment to Azure. I wanted to make sure my formatting was correct, so I ran the following command to check this:
terraform fmt
The next step involved initialisation, which downloaded the provider needed for Azure. This was referenced in the “providers.tf” file:
terraform init
To validate the template I used the following command:
terraform validate
Once the template had been validated, I generated an execution plan and saved it to a separate file using the following command:
terraform plan -out tfplan
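The saved plan can then be applied, which is the step that actually provisions the resources in Azure:

terraform apply tfplan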
Connecting to the VM
In order to connect to the VM, I needed access to the private key in raw form. The key was already set as an output; however, I did not want it printed to the console, so I redirected the raw output into a key file using the following command:
terraform output -raw tls_private_key > id_rsa
To avoid SSH rejecting the key file for having overly permissive permissions, I restricted access on the file to read-only using:
chmod 400 id_rsa
I was then able to ssh into the machine using:
ssh -i id_rsa azureuser@<public_ip_address>
Summary
I found this project to be incredibly beneficial, as it allowed me to build on my knowledge of Azure whilst learning about Terraform and its use cases. I found it a little more challenging than AWS SAM; however, I plan to use it to deploy more complex infrastructure in future.