Terraform Assignment – 3

State Management

Basic Questions

  1. Run terraform show to inspect the current Terraform state.
  2. Run terraform state list to see all resources in the state file.
  3. Run terraform state show <resource> to inspect a specific resource.
  4. Create a resource and verify that it appears in terraform.tfstate.
  5. Write a short explanation of the purpose of Terraform state in a plain text file and keep it with your notes.
  6. Initialize a project that uses the default local backend.
  7. Move the local state file to another directory and reinitialize.
  8. Create an S3 bucket manually to prepare for remote backend.
  9. Create a DynamoDB table manually to prepare for state locking.
  10. Configure a backend block for AWS S3 in your Terraform configuration (a sketch follows this list).
  11. Reinitialize the project with the new backend using terraform init.
  12. Run terraform plan and check that the state is stored remotely in S3.
  13. Enable state locking by adding DynamoDB in backend configuration.
  14. Test state locking by running two terraform apply commands in parallel.
  15. Use terraform state pull to download the current state.
  16. Use terraform state push to upload a modified state file.
  17. Run terraform refresh (deprecated in newer releases in favor of terraform plan -refresh-only) to detect changes between state and real infrastructure.
  18. Modify a resource outside of Terraform and detect state drift using plan.
  19. Mark a sensitive output in Terraform and check the state file.
  20. Verify that sensitive outputs are hidden in CLI but still stored in state.
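
For exercises 10 through 13, a minimal sketch of the backend block is shown below. Every name in it is a placeholder: substitute the S3 bucket from exercise 8, the DynamoDB table from exercise 9, and your own region and key path.

    terraform {
      backend "s3" {
        bucket         = "my-terraform-state-bucket"   # placeholder from exercise 8
        key            = "assignment-3/terraform.tfstate"
        region         = "us-east-1"                   # adjust to your region
        dynamodb_table = "terraform-locks"             # placeholder from exercise 9
        encrypt        = true
      }
    }

After adding this block, terraform init detects the backend change and offers to copy the existing local state into the bucket, which covers exercise 11.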

Intermediate Questions

  1. Configure a GCS (Google Cloud Storage) bucket as a remote backend.
  2. Configure an Azure Blob Storage container as a remote backend.
  3. Configure Terraform Cloud as a remote backend for your project.
  4. Create two environments (dev and prod) using workspaces with the same state backend.
  5. Switch between workspaces and observe different state files.
  6. Use terraform state mv to rename a resource inside the state file.
  7. Use terraform state rm to remove a resource from the state file.
  8. Import an existing AWS S3 bucket into Terraform state using terraform import.
  9. Import an existing EC2 instance into Terraform state.
  10. Enable versioning on the S3 bucket used by your backend and confirm that old state versions are retained.
  11. Add encryption to your S3 backend for securing state files.
  12. Enable point-in-time recovery on the DynamoDB lock table so the locking infrastructure itself is recoverable.
  13. Configure the S3 backend with a custom AWS profile and region.
  14. Attach a bucket policy to the state bucket that restricts access to Terraform users only.
  15. Use terraform state replace-provider to update provider references.
  16. Write a Terraform configuration (HCL) that provisions an EC2 instance and stores its state in S3 with locking (see the sketch after this list).
  17. Demonstrate state drift by manually terminating an EC2 instance and running terraform plan.
  18. Repair the drift by applying the configuration again to recreate the instance.
  19. Test concurrent state access with two users using S3 + DynamoDB backend.
  20. Document state management best practices for your team in a Markdown file.
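
One possible sketch for exercise 16 is below, assuming the bucket and lock table from the basic exercises already exist. The AMI ID is a placeholder and should be replaced with a current image for your region.

    terraform {
      backend "s3" {
        bucket         = "my-terraform-state-bucket"   # placeholder
        key            = "intermediate/ec2/terraform.tfstate"
        region         = "us-east-1"
        dynamodb_table = "terraform-locks"             # placeholder
        encrypt        = true
      }
    }

    provider "aws" {
      region = "us-east-1"
    }

    # A small throwaway instance, also usable for the drift exercises (17-18).
    resource "aws_instance" "demo" {
      ami           = "ami-0c02fb55956c7d316"   # placeholder; look up a current AMI
      instance_type = "t2.micro"

      tags = {
        Name = "state-assignment-demo"
      }
    }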

Advanced Questions

  1. Configure a secure remote backend in AWS S3 with DynamoDB state locking and server-side encryption (a bootstrap sketch follows this list).
  2. Configure a remote backend in Azure Blob Storage with access restricted by service principals.
  3. Configure a remote backend in GCS with IAM permissions restricted to specific users.
  4. Use Terraform Cloud remote backend with version control integration (GitHub).
  5. Write a script to rotate the S3 bucket's encryption keys and reconfigure the backend.
  6. Create a multi-region backend setup with S3 replication enabled.
  7. Configure state locking with DynamoDB and demonstrate conflict resolution.
  8. Configure sensitive outputs (passwords, keys) and verify how they appear in remote state.
  9. Write a Terraform configuration that provisions VPC, subnets, and EC2 while storing state securely in S3 with locking.
  10. Create a full workflow: Local backend → Migrate to S3 → Enable locking with DynamoDB → Secure sensitive data → Demonstrate drift detection.
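
For the first advanced exercise, note that a backend cannot provision its own bucket, so the backend resources are usually bootstrapped from a separate project that still uses local state. A minimal sketch, with all names as placeholders and only the state-relevant settings shown:

    # Bootstrap project (local state) that creates the secure backend pieces.
    resource "aws_kms_key" "state" {
      description         = "Encrypts Terraform state"
      enable_key_rotation = true
    }

    resource "aws_s3_bucket" "state" {
      bucket = "my-secure-tfstate-bucket"   # placeholder
    }

    resource "aws_s3_bucket_versioning" "state" {
      bucket = aws_s3_bucket.state.id
      versioning_configuration {
        status = "Enabled"
      }
    }

    resource "aws_s3_bucket_server_side_encryption_configuration" "state" {
      bucket = aws_s3_bucket.state.id
      rule {
        apply_server_side_encryption_by_default {
          sse_algorithm     = "aws:kms"
          kms_master_key_id = aws_kms_key.state.arn
        }
      }
    }

    # Lock table: the hash key must be a string attribute named LockID.
    resource "aws_dynamodb_table" "locks" {
      name         = "terraform-locks"   # placeholder
      billing_mode = "PAY_PER_REQUEST"
      hash_key     = "LockID"

      attribute {
        name = "LockID"
        type = "S"
      }
    }

The consuming project then points its "s3" backend at this bucket and table, with encrypt = true, as in the earlier sketches.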