Terraform Assignment – 3
State Management
Basic Questions
- Run terraform show to inspect the current Terraform state.
- Run terraform state list to see all resources in the state file.
- Run terraform state show <resource> to inspect a specific resource.
- Create a resource and verify that it appears in terraform.tfstate.
- Write a short explanation of the purpose of Terraform state in a plain text file and commit it to your notes.
- Initialize a project that uses the default local backend.
- Move the local state file to another directory and reinitialize.
- Create an S3 bucket manually to prepare for remote backend.
- Create a DynamoDB table manually to prepare for state locking.
- Configure a backend block for AWS S3 in your Terraform configuration.
- Reinitialize the project with the new backend using terraform init.
- Run terraform plan and confirm that the state is now read from and stored in S3 rather than a local file.
- Enable state locking by adding DynamoDB in backend configuration.
- Test state locking by running two terraform apply commands in parallel.
- Use terraform state pull to download the current state.
- Use terraform state push to upload a modified state file.
- Run terraform refresh (or the newer terraform plan -refresh-only) to detect changes between state and real infrastructure.
- Modify a resource outside of Terraform and detect state drift using plan.
- Mark a sensitive output in Terraform and check the state file.
- Verify that sensitive outputs are hidden in CLI but still stored in state.
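The S3 backend and DynamoDB locking steps above can be sketched with a backend block like the following. This is a minimal sketch: the bucket and table names are placeholders, and both resources are assumed to have been created manually beforehand (the DynamoDB table with a string hash key named LockID).

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"  # hypothetical bucket, created manually
    key            = "envs/dev/terraform.tfstate" # path of the state object inside the bucket
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"            # hypothetical lock table with LockID (S) hash key
    encrypt        = true                         # server-side encryption for the state object
  }
}
```

After adding this block, run terraform init again; Terraform will detect the backend change and offer to migrate the existing local state to S3.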
Intermediate Questions
- Configure a GCS (Google Cloud Storage) bucket as a remote backend.
- Configure an Azure Blob Storage container as a remote backend.
- Configure Terraform Cloud as a remote backend for your project.
- Create two environments (dev and prod) using workspaces with the same state backend.
- Switch between workspaces and observe different state files.
- Use terraform state mv to rename a resource inside the state file.
- Use terraform state rm to remove a resource from the state file.
- Import an existing AWS S3 bucket into Terraform state using terraform import.
- Import an existing EC2 instance into Terraform state.
- Use an explicit backend configuration with versioning enabled on the S3 state bucket.
- Add encryption to your S3 backend for securing state files.
- Enable DynamoDB point-in-time recovery on the lock table so the locking infrastructure itself can be restored.
- Configure backend with custom profile and region in AWS provider.
- Create a backend with bucket policy restricting access only to Terraform users.
- Use terraform state replace-provider to update provider references.
- Write a Terraform configuration (HCL) that provisions an EC2 instance and stores state in S3 with locking.
- Demonstrate state drift by manually terminating an EC2 instance and running terraform plan.
- Recover from the state drift by applying the configuration again.
- Test concurrent state access with two users using S3 + DynamoDB backend.
- Document state management best practices for your team in a Markdown file.
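The EC2-with-remote-state exercise above might be sketched as follows. All names are illustrative assumptions: the AMI ID is a placeholder, the bucket and lock table are assumed to exist already, and "terraform" is a hypothetical named AWS profile.

```hcl
terraform {
  backend "s3" {
    bucket         = "team-terraform-state"                # hypothetical bucket with versioning + encryption
    key            = "intermediate/ec2/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"                     # hypothetical lock table
    encrypt        = true
  }
}

provider "aws" {
  region  = "us-east-1"
  profile = "terraform"  # hypothetical named profile from ~/.aws/credentials
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID for your region
  instance_type = "t3.micro"

  tags = {
    Name = "state-demo"
  }
}
```

Running two terraform apply commands in parallel against this configuration should cause the second one to fail with a lock acquisition error, which demonstrates the DynamoDB locking behaviour.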
Advanced Questions
- Configure a secure remote backend in AWS S3 with DynamoDB state locking and server-side encryption.
- Configure a remote backend in Azure Blob Storage with access restricted by service principals.
- Configure a remote backend in GCS with IAM permissions restricted to specific users.
- Use Terraform Cloud remote backend with version control integration (GitHub).
- Write a script to rotate S3 bucket encryption keys and reconfigure backend.
- Create a multi-region backend setup with S3 replication enabled.
- Configure state locking with DynamoDB and demonstrate conflict resolution.
- Configure sensitive outputs (passwords, keys) and verify how they appear in remote state.
- Write a Terraform configuration that provisions VPC, subnets, and EC2 while storing state securely in S3 with locking.
- Create a full workflow: Local backend → Migrate to S3 → Enable locking with DynamoDB → Secure sensitive data → Demonstrate drift detection.
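The sensitive-output and server-side-encryption pieces of the advanced exercises could be sketched like this. The KMS key ARN, bucket, and table names are placeholders, and the random provider is used only to have a secret to output.

```hcl
terraform {
  backend "s3" {
    bucket         = "prod-terraform-state"   # hypothetical bucket
    key            = "advanced/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"        # hypothetical lock table
    encrypt        = true
    kms_key_id     = "arn:aws:kms:us-east-1:111122223333:key/example"  # placeholder KMS key ARN
  }
}

resource "random_password" "db" {
  length = 20
}

output "db_password" {
  value     = random_password.db.result
  sensitive = true  # hidden in CLI output, but still stored in plain text in the state file
}
```

Marking the output sensitive only suppresses it in the CLI; pulling the remote state with terraform state pull will still show the value, which is why encrypting and access-restricting the state bucket matters.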