Free Terraform Associate Exam Braindumps (page: 62)

Page 62 of 113

You have configured an Auto Scaling group in AWS to automatically scale the number of instances behind a load balancer based on the instances' CPU utilization. The instances are configured using a Launch Configuration. You have observed that the Auto Scaling group doesn't successfully scale when you apply changes that require replacing the Launch Configuration. Why is this happening?

  1. You need to configure an explicit dependency for the Auto Scaling group using the depends_on meta-parameter.
  2. You need to configure an explicit dependency for the Launch Configuration using the depends_on meta-parameter.
  3. You need to configure the Auto Scaling group's create_before_destroy meta-parameter.
  4. You need to configure the Launch Configuration's create_before_destroy meta-parameter.

Answer(s): D


Reference:

https://www.terraform.io/docs/providers/aws/r/launch_configuration.html#using-withautoscaling-groups
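The answer can be sketched in configuration. This is an illustrative example, not from the exam source; the resource names, AMI ID, and sizes are hypothetical. Because the Auto Scaling group references the Launch Configuration, Terraform cannot destroy the old Launch Configuration first; create_before_destroy (together with name_prefix instead of a fixed name, to avoid a name collision) lets Terraform create the replacement before destroying the original:

```hcl
# Hypothetical sketch: create_before_destroy on the Launch Configuration
# so the ASG can be repointed at the new one before the old is destroyed.
resource "aws_launch_configuration" "web" {
  name_prefix   = "web-"         # avoids a name collision during replacement
  image_id      = "ami-b374d5a5" # illustrative AMI ID
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "web" {
  launch_configuration = aws_launch_configuration.web.name
  availability_zones   = ["us-east-1a"]
  min_size             = 1
  max_size             = 3
}
```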



You have written a Terraform IaC script that was working until yesterday, but today it produces a vague error that you are unable to understand. You want more detailed logs that could help you troubleshoot the issue and understand the root cause. What can you do to enable this? Note that you are using Terraform OSS.

  1. Terraform OSS can push all its logs to a syslog endpoint. As such, you have to set up the syslog sink and set the TF_LOG_PATH env variable to the syslog endpoint, and all logs will automatically start streaming.
  2. Detailed logs are not available in Terraform OSS, except the crash message. You need to upgrade to Terraform Enterprise for this feature.
  3. Set TF_LOG_PATH to the log sink file location, and logging output will automatically be stored there.
  4. Set TF_LOG to the log level DEBUG, and then set TF_LOG_PATH to the log sink file location. Terraform debug logs will be dumped to the sink path, even in Terraform OSS.

Answer(s): D

Explanation:

Terraform has detailed logs which can be enabled by setting the TF_LOG environment variable to any value. This will cause detailed logs to appear on stderr.
You can set TF_LOG to one of the log levels TRACE, DEBUG, INFO, WARN or ERROR to change the verbosity of the logs. TRACE is the most verbose and it is the default if TF_LOG is set to something other than a log level name.
To persist logged output you can set TF_LOG_PATH in order to force the log to always be appended to a specific file when logging is enabled. Note that even when TF_LOG_PATH is set, TF_LOG must be set in order for any logging to be enabled.
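As a concrete illustration of answer D (these are the standard Terraform environment variables; the log file path here is an arbitrary example):

```shell
# Enable DEBUG-level logging and persist it to a file for troubleshooting.
export TF_LOG=DEBUG
export TF_LOG_PATH=./terraform-debug.log

# Any Terraform command run now, e.g. `terraform plan`, appends
# detailed DEBUG logs to ./terraform-debug.log.

# When finished, disable logging again:
# unset TF_LOG TF_LOG_PATH
```

Remember that TF_LOG_PATH on its own does nothing; TF_LOG must also be set for logging to be enabled at all.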



Given the Terraform configuration below, in which order will the resources be created?
resource "aws_instance" "web_server" {
  ami           = "ami-b374d5a5"
  instance_type = "t2.micro"
}

resource "aws_eip" "web_server_ip" {
  vpc      = true
  instance = aws_instance.web_server.id
}

  1. aws_eip will be created first; aws_instance will be created second
  2. aws_eip will be created first; aws_instance will be created second
  3. Resources will be created simultaneously
  4. aws_instance will be created first; aws_eip will be created second

Answer(s): D

Explanation:

Implicit and Explicit Dependencies
By studying the resource attributes used in interpolation expressions, Terraform can automatically infer when one resource depends on another. In the example above, the reference to aws_instance.web_server.id creates an implicit dependency on the aws_instance named web_server. Terraform uses this dependency information to determine the correct order in which to create the different resources.
# Example of Implicit Dependency
resource "aws_instance" "web_server" {
  ami           = "ami-b374d5a5"
  instance_type = "t2.micro"
}

resource "aws_eip" "web_server_ip" {
  vpc      = true
  instance = aws_instance.web_server.id
}
In the example above, Terraform knows that the aws_instance must be created before the aws_eip. Implicit dependencies via interpolation expressions are the primary way to inform Terraform about these relationships, and should be used whenever possible.
Sometimes there are dependencies between resources that are not visible to Terraform. The depends_on argument is accepted by any resource and accepts a list of resources to create explicit dependencies for.
For example, perhaps an application we will run on our EC2 instance expects to use a specific Amazon S3 bucket, but that dependency is configured inside the application code and thus not visible to Terraform. In that case, we can use depends_on to explicitly declare the dependency:

# Example of Explicit Dependency
# New resource for the S3 bucket our application will use.
resource "aws_s3_bucket" "example" {
  bucket = "terraform-getting-started-guide"
  acl    = "private"
}

# Change the aws_instance we declared earlier to now include "depends_on".
resource "aws_instance" "example" {
  ami           = "ami-2757f631"
  instance_type = "t2.micro"

  # Tells Terraform that this EC2 instance must be created only after the
  # S3 bucket has been created.
  depends_on = [aws_s3_bucket.example]
}


Reference:

https://learn.hashicorp.com/terraform/getting-started/dependencies.html



After executing a terraform apply, you notice that a resource has a tilde (~) next to it. What does this indicate?

  1. The resource will be updated in place.
  2. The resource will be created.
  3. Terraform can't determine how to proceed due to a problem with the state file.
  4. The resource will be destroyed and recreated.

Answer(s): A

Explanation:

The prefix -/+ means that Terraform will destroy and recreate the resource, rather than updating it in-place.
The prefix ~ means that some attributes and resources can be updated in-place.
$ terraform apply
aws_instance.example: Refreshing state... [id=i-0bbf06244e44211d1]
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
# aws_instance.example must be replaced
-/+ resource "aws_instance" "example" {
~ ami = "ami-2757f631" -> "ami-b374d5a5" # forces replacement
~ arn = "arn:aws:ec2:us-east-1:130490850807:instance/i-0bbf06244e44211d1" -> (known after apply)
~ associate_public_ip_address = true -> (known after apply)
~ availability_zone = "us-east-1c" -> (known after apply)
~ cpu_core_count = 1 -> (known after apply)
~ cpu_threads_per_core = 1 -> (known after apply)
- disable_api_termination = false -> null
- ebs_optimized = false -> null
get_password_data = false
+ host_id = (known after apply)
~ id = "i-0bbf06244e44211d1" -> (known after apply)
~ instance_state = "running" -> (known after apply)
instance_type = "t2.micro"
~ ipv6_address_count = 0 -> (known after apply)
~ ipv6_addresses = [] -> (known after apply)
+ key_name = (known after apply)
- monitoring = false -> null
+ network_interface_id = (known after apply)
+ password_data = (known after apply)
+ placement_group = (known after apply)
~ primary_network_interface_id = "eni-0f1ce5bdae258b015" -> (known after apply)
~ private_dns = "ip-172-31-61-141.ec2.internal" -> (known after apply)
~ private_ip = "172.31.61.141" -> (known after apply)
~ public_dns = "ec2-54-166-19-244.compute-1.amazonaws.com" -> (known after apply)
~ public_ip = "54.166.19.244" -> (known after apply)
~ security_groups = [
- "default",
] -> (known after apply)
source_dest_check = true
~ subnet_id = "subnet-1facdf35" -> (known after apply)
~ tenancy = "default" -> (known after apply)
~ volume_tags = {} -> (known after apply)
~ vpc_security_group_ids = [
- "sg-5255f429",
] -> (known after apply)
- credit_specification {
- cpu_credits = "standard" -> null
}
+ ebs_block_device {
+ delete_on_termination = (known after apply)
+ device_name = (known after apply)
+ encrypted = (known after apply)
+ iops = (known after apply)
+ snapshot_id = (known after apply)
+ volume_id = (known after apply)
+ volume_size = (known after apply)
+ volume_type = (known after apply)
}
+ ephemeral_block_device {
+ device_name = (known after apply)
+ no_device = (known after apply)
+ virtual_name = (known after apply)
}
+ network_interface {
+ delete_on_termination = (known after apply)
+ device_index = (known after apply)
+ network_interface_id = (known after apply)
}
~ root_block_device {
~ delete_on_termination = true -> (known after apply)
~ iops = 100 -> (known after apply)
~ volume_id = "vol-0079e485d9e28a8e5" -> (known after apply)
~ volume_size = 8 -> (known after apply)
~ volume_type = "gp2" -> (known after apply)
}
}
Plan: 1 to add, 0 to change, 1 to destroy.






Post your Comments and Discuss HashiCorp Terraform Associate exam with other Community members:

Bin Mahamood commented on November 03, 2024
terraform { required_providers { aws = { version = ">= 2.7.0" source = "hashicorp/aws" } } }
Anonymous
upvote

Nayaran commented on October 21, 2024
First and foremost, this exam is extremely hard. Second, this exam dump contains the majority of the questions. I passed the certification exam.
UNITED STATES
upvote

Marc commented on October 21, 2024
hello would need help
UNITED STATES
upvote

Marcellus Werifah commented on October 20, 2024
Verified answers
UNITED STATES
upvote

Nathan commented on October 20, 2024
Using dumps are my last resort. And that is what I ended up using with this exam to pass. The exam is extremely difficult.
France
upvote

Marcellus Werifah commented on October 20, 2024
Who decides what is correct in case of conflicts?
UNITED STATES
upvote

Marcellus Werifah commented on October 20, 2024
Novice. Would need detailed explanation of any questions
UNITED STATES
upvote

Siva commented on June 17, 2024
It's a good platform to start preparing for the HCTA 003 exam
Anonymous
upvote

Dhiraj Bhattad commented on June 14, 2024
It's a good platform to start preparing for the HCTA 003 exam.
Anonymous
upvote

Amizhchandra commented on May 12, 2024
Good material
CHINA
upvote

Direen commented on February 16, 2024
This was an easy pass! Scored 95%. Unbelievable! I was hesitant at first but then I saw the pass guarantee policy so I said what the hell. If I fail I will get my money back. I am glad I bought it. Saved me so much time.
United States
upvote

Satya commented on February 09, 2024
Q83:--Terraform can only manage resource dependencies if you set them explicitly with the depends_on argument. Answer is "False"
UNITED STATES
upvote

Satya commented on February 09, 2024
Q76:---Which of these options is the most secure place to store secrets for connecting to a Terraform remote backend? Shouldn't the answer be "Defined in a connection configuration outside of Terraform"?
UNITED STATES
upvote

Satya commented on February 09, 2024
Q39:---Which argument(s) is (are) required when declaring a Terraform variable? Answer should be "None of the above" as Nothing is required while declaring variable
UNITED STATES
upvote

DN commented on September 04, 2023
Question 14 - Run terraform import: This is the recommended best practice for bringing manually created or destroyed resources under Terraform management. You use terraform import to associate an existing resource with a Terraform resource configuration. This ensures that Terraform is aware of the resource, and you can subsequently manage it with Terraform.
Anonymous
upvote

YK commented on December 11, 2023
good one nice
JAPAN
upvote

Mn8300 commented on November 09, 2023
nice questions
Anonymous
upvote

Naka commented on January 19, 2024
Very good, many questions same as the real exam
BRAZIL
upvote

vasu commented on December 22, 2023
good for practice
INDIA
upvote

MDN commented on December 11, 2023
Good sample questions
UNITED STATES
upvote

YK 11 commented on December 09, 2023
Good one nice
JAPAN
upvote

Mn8300 commented on November 13, 2023
Very useful
Anonymous
upvote

mpakal commented on October 19, 2023
Good and realistic questions.
UNITED STATES
upvote

pakalamb1995@gmail.com commented on October 19, 2023
so far nice
UNITED STATES
upvote

CP commented on October 09, 2023
Let Hope for the Best
EUROPEAN UNION
upvote

sipho commented on August 30, 2023
I will study and see how it goes.
Anonymous
upvote

Jersey boy commented on June 25, 2023
I just paid and download my files. I will report in a week after writing my exam to see how this goes.
UNITED STATES
upvote

Yung K. commented on October 11, 2021
Thank you for this exam dumps package. From the 2 exams I purchased as part of the 50% sale I already passed the first exam.
TAIWAN
upvote