IIA-CIA-Part3 Exam Discussions & Posts
Gosia
Commented on February 04, 2025
Hi, did you get the same questions on the exam?
POLAND
Becky
Commented on January 09, 2025
The exam collections here are awesome. Kudos!
Anonymous
Shehata mohamed
Commented on December 13, 2023
Many thanks for CIA Part 3.
Anonymous
IIA-CIA-Part3
Commented on July 16, 2023
CIA Exam Part Three: Business Knowledge for Internal Auditing
Anonymous
Community-Verified AI Explanation
Commented on April 28, 2026
Question 2:
- Question 2 asks you to pick a storage redundancy option that stores data in multiple locations and allows reading from the secondary site.
- Correct answer: B. Read-only geo-redundant storage (RA-GRS).
- RA-GRS replicates data to a secondary region (data in separate geographic locations).
- It provides read access to the data in the secondary location (read-only), meeting the requirement to read from both primary and secondary.
- It is an opt-in feature that requires geo-replication.
- LRS (Locally Redundant Storage) keeps data in one data center only.
- ZRS (Zone-Redundant Storage) replicates across zones within a single region.
- GRS (Geo-Redundant Storage) replicates to a secondary region but does not inherently provide read access unless configured as RA-GRS.
- Takeaway: When you need cross-region replication with read access to the secondary, choose RA-GRS.
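The decision rule in the bullets above can be sketched as a tiny Python function (an illustration only, not an Azure SDK call; real selection also weighs options like ZRS/GZRS, cost, and RPO):

```python
def pick_redundancy(multi_region: bool, read_from_secondary: bool) -> str:
    """Map the question's two requirements to an Azure storage redundancy SKU name."""
    if multi_region and read_from_secondary:
        return "RA-GRS"   # geo-replicated, secondary region is readable
    if multi_region:
        return "GRS"      # geo-replicated, secondary not readable until failover
    return "LRS"          # copies kept in a single data center only

# The scenario needs cross-region copies AND reads from the secondary:
print(pick_redundancy(multi_region=True, read_from_secondary=True))  # RA-GRS
```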
Bengaluru, India
Community-Verified AI Explanation
Commented on April 28, 2026
Question 118:
- The correct action to raise service level from 95% to 97% is Increasing safety stock (Option A), not Decreasing safety stock.
Why:
- Safety stock buffers against demand and supply variability. Increasing it reduces stockouts, which directly raises the service level.
- Expanding replenishment lead time can worsen service level unless you also increase safety stock; it generally isn’t the best lever.
- Holding more cycle stock increases total inventory but doesn’t reliably improve service level; it ties up capital without addressing variability.
- Decreasing safety stock would typically lower the service level.
Note: The provided answer label (“Answer(s): C”) appears to be a misprint. The explanatory principle in the material confirms that higher safety stock increases customer service levels.
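The principle can be checked numerically with one common (assumed) safety-stock model, SS = z * sigma_d * sqrt(L), where z is the normal quantile for the target service level; the demand deviation and lead time below are hypothetical numbers:

```python
from math import sqrt
from statistics import NormalDist

def safety_stock(service_level: float, demand_std: float, lead_time: float) -> float:
    """Classic safety-stock formula SS = z * sigma_d * sqrt(L) (one common model)."""
    z = NormalDist().inv_cdf(service_level)  # normal quantile for the target level
    return z * demand_std * sqrt(lead_time)

ss_95 = safety_stock(0.95, demand_std=40, lead_time=4)  # hypothetical inputs
ss_97 = safety_stock(0.97, demand_std=40, lead_time=4)
print(round(ss_95, 1), round(ss_97, 1))  # safety stock rises with the service level
```

Raising the target from 95% to 97% raises z (about 1.64 to 1.88), so the required safety stock increases, matching the explanation above.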
Plymouth, United Kingdom
Mathi
Commented on April 28, 2026
Helped me understand AWS services
Chennai, India
Community-Verified AI Explanation
Commented on April 28, 2026
Question 10:
Explanation:
APIKit generates flows to implement each API operation defined in the RAML.
- For every operation, APIKit creates a private flow that contains the internal logic to fulfill that operation.
- There is typically one public flow per resource that routes requests to the appropriate private flow.
- Therefore, if the RAML specifies four operations, APIKit will generate four private flows.
San Jose, United States
Gurdeep
Commented on April 27, 2026
Absolutely fantastic!
All questions real and from the actual exam.
India
Community-Verified AI Explanation
Commented on April 27, 2026
Question 5:
Correct answer: D. Authentication
Explanation:
- The Remote Access panel in the User Activity dashboard relies on events related to user authentication (e.g., remote logins). Those events come from the Authentication data model.
- If the panel isn’t showing the most recent hour, the underlying cause is often that scheduled searches feeding the Authentication data model were skipped, so new data wasn’t processed.
What to check:
- In Splunk: go to Settings > Data models and open the Authentication data model. Ensure it’s active and its population/accelerations are running.
- Check the scheduled searches that feed this data model for any Skipped status or recently failed runs (Saved Searches related to the Authentication model or ES user activity).
- Look at last run times for these searches (e.g., via the _internal index or the Saved Searches UI) to confirm they’re executing as expected.
- If searches were skipped due to resource constraints, consider adjusting the schedule, reducing scope, enabling summary indexing, or increasing search head capacity.
Reason this matters: the panel’s data is pulled from Authentication data; skipped searches prevent the latest data from appearing.
Kollam, India
Community-Verified AI Explanation
Commented on April 27, 2026
Question 63:
Answer: B — A packet capture tool was used to steal the password.
- Why: Telnet transmits credentials in cleartext. If Telnet was regularly used to log in, an attacker on the same network could capture the username/password with a packet sniffer, and the malware could then use those credentials to access the server.
- Why not the others:
- A spraying attack: would target many accounts, not specifically tied to the observed Telnet use.
- A remote-access Trojan: could install malware, but the clue points to credential theft via sniffing, not a Trojan.
- A dictionary attack: would brute-force credentials, not rely on Telnet’s plaintext transmission.
Mitigation: Disable Telnet; use encrypted protocols like SSH; enforce strong authentication and network monitoring.
Cartersville, United States
Community-Verified AI Explanation
Commented on April 27, 2026
Question 58:
- Correct answer: MITRE ATT&CK
- Why: MITRE ATT&CK is a real-world knowledge base of adversary behavior, detailing tactics, techniques, and procedures (TTPs). It’s used for threat modeling, defense optimization, and mapping mitigations to attacker methods.
- CSIRT: incident response team, not a knowledge base of attacker behavior.
- CVSS: vulnerability scoring system, not attacker behavior.
- SOAR: security orchestration, automation, and response platform; focuses on automating responses, not detailing adversary techniques.
- How to use it: reference ATT&CK to align controls and detections with specific attacker techniques, perform gap analysis, and prioritize mitigations based on real-world attacker behavior.
Cartersville, United States
Community-Verified AI Explanation
Commented on April 27, 2026
Question 241:
- Correct answer: C — Journaling
Why: RTO (recovery time objective) is how quickly you can restore service; RPO (recovery point objective) is how much data you’re willing to lose. For a database:
- Snapshots: point-in-time copies. RPO depends on how often snapshots run; data written between snapshots can still be lost.
- On-site backups: restore from backups. RPO depends on backup frequency, and restores are not always fast, so RTO can be long.
- Journaling: continuously records each transaction, so recovery can replay changes up to the moment of failure, minimizing both data loss (RPO) and restore time (RTO).
Chicago, United States
Beky
Commented on April 27, 2026
Guys, I just did this cert and passed. It's much more accurate than other sites, and the AI tutor is a great help.
United States
Community-Verified AI Explanation
Commented on April 27, 2026
Question 33:
Correct answers: E) Compensating and F) Technical
- F (Technical): The use of a VLAN and restricted reachability is a technical control—it uses network technology to enforce access restrictions (segmentation, ACLs, etc.).
- E (Compensating): This is a compensating control—an alternative that provides equivalent protection when primary controls aren’t feasible (an end-of-life server may not be fully patchable or hardened, so segmentation acts as another layer of protection).
Why the others are not correct:
- A (Managerial): Policies/governance, not the technical access controls here.
- B (Physical): Focuses on physical security; not about logical network segmentation.
- C (Corrective): Remedies after an incident; not about preventing access in this setup.
- D (Detective): Detection of incidents; not about preventing access.
- G (Deterrent): Aims to deter attacks; not enforcing access control.
In short: VLAN-based segmentation is a technical control, used as a compensating measure to limit exposure of an end-of-life server.
Ikeja, Nigeria
Community-Verified AI Explanation
Commented on April 27, 2026
Question 113:
- Correct answer: User1 only.
- Licenses in Microsoft Entra ID can be assigned to individual users, or to groups only when the license supports group-based licensing.
- The Microsoft Fabric license in this scenario is a per-user license and does not support group-based licensing. Therefore, it can be assigned to individual user accounts only, not to groups like Group1 or Group2.
- Group-based licensing applies when the license type supports it; otherwise you must assign the license directly to each user.
- What to check in practice:
- In the Entra admin center, open the Fabric license details and verify that the scope allows individual user assignment only, with no group-based assignment option.
Fort Lauderdale, United States
Akansha
Commented on April 27, 2026
It really helped me test my knowledge for the Education Cloud certification.
Ahmedabad, India
Community-Verified AI Explanation
Commented on April 27, 2026
Question 1:
For Question 1, the correct answer is D: Blocking the attachment of a sensitive document in webmail.
Why:
- Inline Data Protection means inspecting data as it travels over the network in real time and blocking it if it would leak.
- Blocking a webmail attachment happens at the network level during transit, thus inline.
Why the others aren’t inline:
- A) Preventing copying to a USB drive: endpoint control, not inline network traffic.
- B) Preventing sharing in OneDrive: typically CASB/API-based, not real-time network interception.
- C) Analyzing an M365 tenant: audit/advisory activity, not real-time data transit protection.
Chennai, India
Community-Verified AI Explanation
Commented on April 27, 2026
Question 1:
I can’t view the image-based statements in Question 1, so I can’t explain why option A is correct without knowing the exact text.
The page shows “Answer(s): A,” but I don’t know what statements you’re evaluating. If you paste the statements here (or describe them), I’ll explain why Yes/No is correct and tie each point to SC-900 concepts.
Tips for these Hotspots in SC-900:
- For each statement, map it to core areas like identity and access management (IAM), data protection, governance, risk, and compliance.
- Common concepts: Zero Trust, Federation, customer lockbox, data classification, and Azure AD roles/permissions.
- Treat each statement as a true/false about how Microsoft security controls and governance work.
Share the statements, and I’ll walk through them step by step.
Mumbai, India
Community-Verified AI Explanation
Commented on April 27, 2026
Question 2:
- Correct answer: Dataverse.
Why:
- The project management app is built with Power Apps and hosted in Teams, and its data is stored in the Power Apps data platform, Dataverse. Use the Dataverse connector in Power BI to directly access the app’s data tables.
- This provides proper table relationships and security context for the data used by the app.
Why not the others:
- Microsoft Teams Personal Analytics is for analyzing individual Teams usage, not app data.
- SQL Server database only if the data is actually in SQL Server, which isn’t stated here.
- Dataflows are for data preparation, not the direct data source of the Power Apps app.
Tip:
- When connecting to Dataverse from Power BI, you can pull multiple related tables and leverage existing relationships to build your report.
Birmingham, United Kingdom
Community-Verified AI Explanation
Commented on April 27, 2026
Question 133:
- Answer: A — Defined remediation plans
- When identifying IoT-related risk exposures, it’s essential to connect findings to action. Having defined remediation plans ensures there are concrete steps, owners, timelines, and controls to address identified risks, not just to recognize them.
- This supports the risk treatment process: you identify exposures, then plan how to reduce or mitigate them, and finally monitor effectiveness.
- It provides governance and accountability, helping prioritize resources and track progress in the risk register.
- Why the other options are less central to identification:
- Management sign-off on the scope: governance, but not the core driver for identifying risk exposure.
- Manual testing of device vulnerabilities: important, but not as foundational as having a plan to remediate when exposures are found.
- Visibility into all networked devices: valuable for detection, but per this item the emphasis is on having remediation actions ready.
Auckland, New Zealand
Community-Verified AI Explanation
Commented on April 27, 2026
Question 40:
Correct answer: B
- Why: To delete TestRG, you must remove blockers inside the resource group. There is a resource lock on VNET1, so you must remove that lock. If Vault1 has backups, you must delete (purge) all data in Vault1 as Recovery Services Vaults cannot be deleted while backups exist. Only after clearing the lock and vault data can TestRG be deleted.
- Why the other options are wrong:
- A and C include actions on VM1 or VNET1 that don’t remove the core blockers (locks/backups) preventing RG deletion.
- D suggests turning off VM1, which doesn’t affect resource locks or vault data.
Pune, India
Community-Verified AI Explanation
Commented on April 27, 2026
Question 2:
- Why: For analyzing computer performance, objective, quantitative data on how resources are used over time is most informative. Statistical metrics measuring capacity utilization show whether CPUs, memory, I/O, etc., are under or over-utilized and help identify bottlenecks and trend patterns.
- Why the others are less useful for analysis:
- A (Tuning system software) is an action, not an analysis metric.
- B (User dissatisfaction reports) are subjective and may not reflect actual system performance.
- D (Off-peak utilization) may not represent normal or peak loads.
- Practical tip: collect metrics such as CPU/memory/disk I/O utilization, throughput, response time, and queue lengths, compare to baselines, and monitor trends to determine if capacity needs adjustment.
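A minimal sketch of the practical tip, using made-up sample values and assumed thresholds: compare mean and 95th-percentile utilization to flag a capacity problem.

```python
from statistics import mean, quantiles

# Hypothetical CPU utilization samples (percent), e.g. one reading per minute.
cpu_samples = [62, 58, 71, 90, 65, 88, 93, 60, 67, 95, 70, 64]

avg = mean(cpu_samples)
p95 = quantiles(cpu_samples, n=20)[-1]  # 95th percentile (last of 19 cut points)

# Assumed thresholds for illustration: a high sustained average or a high p95
# suggests the resource is over-utilized and capacity should be reviewed.
over_utilized = avg > 80 or p95 > 90
print(f"mean={avg:.1f}% p95={p95:.1f}% over_utilized={over_utilized}")
```

Trending these statistics over time, against a baseline, is what turns raw utilization data into a capacity decision.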
Doha, Qatar
Jithu
Commented on April 26, 2026
For learning and practicing, this helped me a lot.
Chennai, India
Nik
Commented on April 26, 2026
The AI function is amazing and very useful. Thanks for providing this feature.
United States
Kishor
Commented on April 26, 2026
This exam dump has the latest questions. I just passed this exam this past Friday.
India
Community-Verified AI Explanation
Commented on April 26, 2026
But when I clicked the Answer tab, it said C.
You're right to be confused. For Question 76, the correct answer is D – Right to audit the provider.
Why D is correct:
- Audit rights give independent verification of the provider’s controls, security, and compliance against contracts and SLAs. This evidence is essential to define residual IT risk in outsourcing.
Why C (Scope of services) is not enough:
- Scope identifies what is outsourced but does not ensure that controls are designed or operating effectively. Without an audit, you lack evidence to verify risk controls.
If your Answer tab shows C, it may be a misprint or mismatched key in that interface. The CRISC guidance consistently emphasizes audit rights as the primary factor for defining outsourcing risk. If you’d like, I can walk through the manual references to confirm.
Auckland, New Zealand
Community-Verified AI Explanation
Commented on April 26, 2026
Question 19:
- Reason: The proposed solution uses the Bulk create user operation in Microsoft Entra ID/Azure portal. That creates internal user accounts, not guest (external) users.
- Correct approach: Use the New-AzureADMSInvitation cmdlet (or the equivalent Graph/portal workflow) to invite external users as guests. This is the Azure AD B2B onboarding method: each external user is invited and becomes a guest in your directory after redeeming the invitation.
- Concept in brief: Guest users are external collaborators invited into your directory, not regular internal accounts. To add them in bulk, invitation-based methods (like New-AzureADMSInvitation) are required.
Pune, India
Community-Verified AI Explanation
Commented on April 26, 2026
Question 12:
- Why: Each VM must have at least one network interface. A single NIC can have both a private IP and a public IP by configuring an IP configuration and attaching a public IP to the NIC. Therefore, for five VMs you need five NICs in total (one NIC per VM). Inbound/outbound rules can be made identical by using the same NSG on the NICs or subnet.
Pune, India
Community-Verified AI Explanation
Commented on April 26, 2026
Question 8:
- Why: An Azure internal load balancer health probe for an SQL Server Always On AG should use a TCP probe on the SQL listener port (usually 1433). An HTTP health probe on port 1433 won’t succeed because SQL Server does not speak HTTP, so the backend would be marked unhealthy. Use a TCP health probe on port 1433 (or the actual listener port) to properly verify backend health for the AG listener.
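The TCP-versus-HTTP probe distinction can be illustrated locally (my own sketch, not Azure's probe implementation): a plain listener stands in for the SQL listener, and a raw TCP connect succeeds against it even though it speaks no HTTP.

```python
import socket
import threading

def tcp_probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection succeeds (what a TCP health probe checks)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Stand-in for the SQL listener: a TCP server that accepts connections but
# does not speak HTTP, so only a TCP-level probe can mark it healthy.
server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=server.accept, daemon=True).start()

probe_ok = tcp_probe("127.0.0.1", port)
print(probe_ok)  # True: the listener accepts a raw TCP connection
server.close()
```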
Pune, India
Community-Verified AI Explanation
Commented on April 26, 2026
Question 1:
Answer: A
Explanation:
- In PMI risk management, a risk that was identified earlier has a planned response stored in the risk register. When that risk triggers (the resource becomes unavailable), you should execute the planned risk response rather than changing the plan ad hoc.
- The correct action is to consult the risk register and implement the appropriate contingency or mitigation already defined for this risk.
- Why not the others:
- B (revise the project management plan and move the task) would be a large plan change; you should first apply the defined risk response.
- C (exclude the task) ignores the risk and stakeholder needs; not a proper mitigation.
- D (update lessons learned and risk log) is important after addressing the issue, but it doesn’t address the immediate risk event.
Key concepts:
- Risk vs. issue: risk is a potential event; once it materializes, it becomes an issue, but the response should come from the risk register’s planned response.
- Immediate action: implement the existing risk response from the register.
Bengaluru, India
Community-Verified AI Explanation
Commented on April 26, 2026
Question 7:
- Correct answer: C — AWS Fargate.
- Why: AWS Fargate is a serverless compute engine for containers (with ECS/EKS) that handles provisioning, sizing, scheduling, and maintenance of the container infrastructure, so you don’t manage the underlying EC2 instances.
- Why not the others:
AWS Lambda runs individual functions, not long-running containers; Amazon RDS is for managed databases; Amazon Athena is a serverless query service for S3.
Hyderabad, India
Community-Verified AI Explanation
Commented on April 26, 2026
Question 8:
Question 8 asks about who handles which tasks when you run a NoSQL database on EC2.
- Correct answer: C — Patch the physical infrastructure that hosts the EC2 instances.
- Why: Under the AWS Shared Responsibility Model, AWS is responsible for security of the cloud (physical infrastructure, hardware, facilities, and virtualization). The customer is responsible for security in the cloud (guest OS patching, application/database patching, network/configuration, and data). Since the NoSQL database is self-managed on EC2, AWS would patch the underlying physical infrastructure, not the guest OS or DB software.
Hyderabad, India
Community-Verified AI Explanation
Commented on April 26, 2026
Okay, now the next question: what are the differences among them in support plans?
Here’s a concise comparison of the paid Azure support plans and what sets them apart:
Developer:
- Targeted at non-production environments (dev/test).
- Cheapest option among the paid plans.
- Opens technical support tickets, but with lower priority and fewer guidance options than higher tiers.
Standard:
- Suitable for production workloads.
- Higher priority and broader coverage than Developer.
- Faster response times and access to engineers, with more incident allowances.
Professional Direct:
- Higher-touch support than Standard.
- Includes a Technical Account Manager (TAM) for guidance and advisory services.
- Faster response times and more comprehensive architectural help.
Premier:
- Enterprise-grade, highest level of support.
- Dedicated TAM, proactive health checks, and architectural reviews.
- Possible on-site support and extensive advisory services for complex environments.
Guidance on choice:
- Use Developer for non-production, low-cost help.
- Use Standard for production workloads with solid support needs.
- Use Professional Direct if you need closer guidance and a TAM.
- Use Premier for large, complex deployments requiring proactive, hands-on support.
Singapore, Singapore
Community-Verified AI Explanation
Commented on April 26, 2026
Question 1:
- Why: The terraform.tfstate file is a snapshot of the resources as Terraform last applied or refreshed them. If someone or something changes infrastructure outside Terraform (drift), the state file may no longer match the real world until you refresh or re-apply. Terraform normally refreshes state during plan/apply, but it is not guaranteed to always be perfectly in sync without a refresh.
Hyderabad, India
Community-Verified AI Explanation
Commented on April 26, 2026
Question 59:
- The correct command order (to satisfy “call setupScript.ps1 when built” and “run ContosoApp.dll when the container starts”) is:
- FROM microsoft/aspnetcore-build:latest
- WORKDIR /apps/ContosoApp
- COPY ./ .
- RUN powershell ./setupScript.ps1
- CMD ["dotnet","ContosoApp.dll"]
- FROM sets the base image for building the app.
- WORKDIR defines the working directory inside the container.
- COPY brings your local files (including ContosoApp.dll and setupScript.ps1) into the image.
- RUN powershell ./setupScript.ps1 runs at build time to prepare the environment.
- CMD ["dotnet","ContosoApp.dll"] starts the application when the container runs.
Munich, Germany
Community-Verified AI Explanation
Commented on April 26, 2026
Question 105:
- Correct answer: anycast address.
- Why: An anycast address is like a unicast address in that it identifies a single destination, but it is assigned to multiple interfaces on different devices. A packet sent to an anycast address is delivered to the nearest device (in terms of routing metric) that has that address. This is different from unicast (one unique recipient) and multicast (delivered to all members of a group).
- How it differs from other types:
- Global unicast: unique to a single interface.
- Link-local: for a single link, not multiple devices.
- Multicast: delivers to all members of a group, not just one closest device.
- Practical example: several servers offer the same service and share the same anycast address; a client’s packet goes to the closest server offering that service.
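The "nearest device wins" behavior can be sketched as lowest-routing-metric selection (a conceptual illustration, not a real routing implementation; the server names and metrics are made up):

```python
# Hypothetical routing table: devices whose interfaces all share the same
# anycast address, keyed by device, with a routing metric (lower = closer).
anycast_routes = {
    "server-eu": 30,
    "server-us": 10,
    "server-ap": 45,
}

def deliver_anycast(routes: dict) -> str:
    """A packet to an anycast address reaches the nearest (lowest-metric) member."""
    return min(routes, key=routes.get)

print(deliver_anycast(anycast_routes))  # server-us: best metric in the group
```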
London, United Kingdom
Community-Verified AI Explanation
Commented on April 26, 2026
Question 104:
- The correct answer: frame switching.
- Why: A switch uses the destination MAC address in a frame and its MAC address table to forward the frame only to the port that leads to that destination. This selective forwarding is called switching (or forwarding). If the destination MAC is unknown, the switch may flood the frame to all ports, but the normal operation described is forwarding to the matching destination.
- frame flooding: sends to all ports (used when the destination MAC is unknown).
- frame filtering: not the action of forwarding to a specific destination.
- frame pushing: not a standard term for switching.
In short, a switch’s primary function is to forward frames to the port with the matching destination MAC, i.e., frame switching.
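The forward-or-flood behavior described above can be sketched in a few lines of Python (a conceptual model, not real switch firmware): the switch learns source MACs as frames arrive, forwards known destinations out a single port, and floods unknowns.

```python
class LearningSwitch:
    """Toy model of MAC learning and frame switching (forward vs. flood)."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # MAC address -> port it was last seen on

    def handle_frame(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port       # learn where the source lives
        if dst_mac in self.mac_table:           # known destination:
            return {self.mac_table[dst_mac]}    # forward out one port (switching)
        return self.ports - {in_port}           # unknown: flood all other ports

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.handle_frame("aa:aa", "bb:bb", in_port=1))  # bb:bb unknown -> flood
sw.handle_frame("bb:bb", "aa:aa", in_port=2)         # learns bb:bb is on port 2
print(sw.handle_frame("aa:aa", "bb:bb", in_port=1))  # now forwarded to port 2 only
```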
London, United Kingdom
Community-Verified AI Explanation
Commented on April 26, 2026
Question 3:
- You already have a data model with several measures. Reusing a published dataset in the Power BI service lets you keep those measures intact and avoids rebuilding the model.
- Connecting to a Power BI dataset enables quick report creation by using the existing relationships and calculations.
- Other options are less efficient: importing from a SharePoint folder or Excel workbook would require re-creating measures and the model, and dataflows would create a new ETL layer rather than reusing the existing dataset.
Apex, United States
Community-Verified AI Explanation
Commented on April 26, 2026
Question 2:
Why:
- The project management app is a Power Apps app hosted in Teams, and Power Apps typically stores data in Dataverse (Common Data Service).
- To build a Power BI report that connects to the app’s data, you should use the Dataverse connector in Power BI.
- Other options aren’t appropriate here:
- Microsoft Teams Personal Analytics is for Teams usage data.
- SQL Server database only if data is stored in SQL Server.
- Dataflows are for ETL into Power BI, not direct connections to a Power Apps data source.
Apex, United States
Godson
Commented on April 25, 2026
This is nice
Auchi, Nigeria
Community-Verified AI Explanation
Commented on April 25, 2026
Question 5:
Question 5 describes two EC2 instances behind an Application Load Balancer, each with its own EBS volume. When users refresh, they see only a subset of documents. Why is that?
- Root cause: EBS is instance-attached and not shared. Each request can go to either instance, but the documents on one instance’s EBS aren’t available to the other instance. No centralized storage means users don’t see all documents.
- Correct answer: Option C — Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS.
Why this works:
- Amazon EFS is a shared, scalable file system that multiple EC2 instances can mount concurrently. Both servers access the same file set, so all documents are visible to every user regardless of which instance handles the request.
- It reduces data duplication and operational overhead compared with keeping separate EBS volumes or building custom synchronization.
Why the other options fail:
- A: Duplicating data on both EBS volumes is not scalable and doesn’t guarantee consistency.
- B: ALB routing to a single server per request would still show only that server’s data.
- D: ALB cannot merge data from two servers; you’d need complex application logic.
Key concept: use a shared storage layer (EFS) for multi-server access to the same data.
Auchi, Nigeria
Community-Verified AI Explanation
Commented on April 25, 2026
Question 4:
Question 4 asks how to store a 10-MB dataset encrypted so it can be decrypted during the workflow. The correct choices are:
- A. Encrypt the dataset.
- C. Store the dataset in a GitHub encrypted secret.
- D. Store the encryption keys in a GitHub encrypted secret.
Why these are correct:
- Encrypting the dataset (A) protects its contents before any storage, keeping data at rest secure.
- Storing the encrypted dataset in a secret (C) keeps it safely hidden from logs and public access.
- Storing the encryption key (D) in a separate secret allows the workflow to decrypt the data when needed, without exposing the data itself.
Why the other options aren’t required here:
- B (Leverage actions/download-secret) is about retrieving secrets, not storing the dataset.
- G (large object secret option) is about handling large objects; the question targets the three-step storage/decryption flow, not settings for large object uploads.
Portsmouth, United States
Community-Verified AI Explanation
Commented on April 25, 2026
Question 2:
For Question 2, the correct choice is:
- C. Use Amazon Athena directly with Amazon S3 to run the queries as needed.
Why this is correct:
- Serverless and on-demand: Athena is serverless and you pay per query, so there’s no infrastructure to manage.
- Directly queries JSON in S3: You can create a table that maps the JSON structure and run SQL without ETL.
- Minimal changes: No data loading, no data movement, no clusters to provision.
Why the others are less suitable:
- A: Redshift requires loading data into a warehouse, managing clusters, and ongoing maintenance—high overhead.
- B: CloudWatch Logs is not a native way to query existing S3 JSON logs with ad-hoc SQL.
- D: Glue catalog plus EMR adds orchestration and cluster management—more overhead than serverless Athena.
- E: EMR Spark is a managed cluster approach with more setup and maintenance.
Key takeaway: For ad-hoc analysis of JSON logs stored in S3 with minimal operational overhead, use Athena directly against S3.
Auchi, Nigeria
Community-Verified AI Explanation
Commented on April 25, 2026
Question 242:
I can’t view the image in the exhibit, but here’s the concept behind question 242.
- Correct answer: D. flow exporter
- Why: NetFlow components are:
- Flow record: defines the fields to collect per flow.
- Flow monitor: ties a flow record to an exporter.
- Flow exporter: defines how and where to export the NetFlow data (destination, transport, etc.), and it’s where optional data like the SNMP interface table is configured.
The SNMP interface table is an export-time option, not a per-interface or per-flow field. Therefore it belongs under the flow exporter, not under the interface, flow record, or flow monitor.
- Why the other options are less correct:
- Under the interface: you configure per-interface data collection, not SNMP table integration for NetFlow export.
- Under the flow record: that’s for per-flow fields, not global SNMP table data.
- Under the flow monitor: it binds flow records to exporters; it doesn’t host exporter-specific options like the SNMP table.
If you want, describe what you see in the exhibit and I’ll tailor the explanation to it.
Singapore, Singapore
Community-Verified AI Explanation
Commented on April 25, 2026
Question 1:
For Question 1, the correct choice is:
- A. Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3 bucket.
Why this is correct:
- S3 Transfer Acceleration uses the CloudFront edge network to accelerate uploads from global sites to a single destination bucket, reducing latency and speeding ingestion.
- Multipart uploads are well-suited for large daily volumes (500 GB per site) and work well with accelerated transfers, helping maximize throughput.
- This setup minimizes operational complexity by eliminating extra data movements or complex replication/synchronization steps.
Why the other options are less suitable:
- B: Upload to a local-region bucket and then replicate to the destination adds latency and replication overhead; not ideal for rapid, single-bucket aggregation.
- C: Snowball Edge is batch/manual and slower for continuous, global ingestion; increases operational overhead.
- D: EC2/EBS/Snapshot approach introduces significant orchestration and downtime, not suitable for fast, centralized ingestion into S3.
Key takeaway: For rapid, centralized ingestion from many global sites with minimal ops, use S3 Transfer Acceleration plus multipart uploads.
Auchi, Nigeria
Krishna
Commented on April 25, 2026
Great material
Chennai, India
Community-Verified AI Explanation
Commented on April 25, 2026
Question 94:
- Why: In Scrum, core roles are Scrum Master, Product Owner, and the Development Team. Non-core roles (e.g., stakeholders, domain experts, sponsors) are outside the Scrum Team and provide support, inputs, and constraints but are not continuously involved in Scrum events. They influence the project’s success indirectly through their support and feedback, not as mandatory, ongoing participants in the Scrum process. The other statements mischaracterize non-core roles: they’re not mandatory for success, they do have interest and stake, and they aren’t formally structured to continuously participate in Scrum.
Murcia, Spain
Community-Verified AI Explanation
Commented on April 25, 2026
Question 4:
- Answer: D — An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
Explanation:
- The kubelet is an agent that runs on every node.
- It communicates with the Kubernetes API server to receive PodSpecs assigned to its node.
- It starts and monitors containers via the node’s container runtime (e.g., containerd, Docker) and ensures they stay running and healthy.
- It reports Pod status back to the API server.
Why the other options are incorrect:
- A: The Kubernetes Dashboard is a UI for management, not kubelet.
- B: The network proxy is kube-proxy, not kubelet.
- C: Scheduling behavior (watching unscheduled Pods and binding them to a node) is the job of the Scheduler, not kubelet.
Lagos, Nigeria
Community-Verified AI Explanation
Commented on April 24, 2026
Question 1:
- Scenario: A company buys cyber insurance to cover items listed on the risk register. This is about how the organization handles identified risks financially.
- Why the answer is B: Transfer: Cyber insurance shifts the financial risk from the organization to a third party, covering losses tied to the identified risks.
- Why the other options are not correct:
- Accept: Acknowledging risk without action; no insurance involved here.
- Mitigate: Reducing likelihood or impact with controls, not transferring cost.
- Avoid: Changing plans to eliminate the risk entirely; insurance doesn’t remove the risk.
- Key concept: In risk management, transferring risk through insurance is a valid treatment option for financial exposure identified in the risk register.
Hanover, United States
Community-Verified AI Explanation
Commented on April 24, 2026
Question 1:
- A. Notebook cells can be written in Python — True. Snowflake Notebooks support both SQL and Python as executable cell types.
- C. Notebook cells can be written in SQL — True. SQL cells are also supported in Snowflake Notebooks.
- Why the others are incorrect:
- B. Databases can process queries using Scala — False. Scala isn’t a direct query language for Snowflake; Scala is supported via Snowpark for application code, not as a native query syntax.
- D. Worksheets can only be written in SQL — Not accurate. Worksheets (Snowsight) are predominantly SQL-based but can include procedural constructs; they aren’t restricted to SQL only.
- E. Dashboards can be written in JavaScript — False. Dashboards use SQL-backed visualizations and built-in UI components; JavaScript isn’t an authoring language for the native dashboards.
Pune, India
Community-Verified AI Explanation
Commented on April 24, 2026
Question 26:
Question 26 asks which port should be disabled to ensure all web connections are encrypted.
- Correct answer: B (port 80)
- Why: Port 80 is used for HTTP (unencrypted). Port 443 is HTTPS (encrypted). To enforce encryption, you should disable HTTP on 80 and require TLS/SSL on 443 (often with an HTTP-to-HTTPS redirect).
- Quick notes: Ports 22 (SSH) and 587 (SMTP submission) are not web traffic; they can be left open or restricted per admin needs, but the priority for encryption of web traffic is to close 80.
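The HTTP-to-HTTPS redirect mentioned above can be sketched as URL rewriting (illustrative only, not a production server configuration; the example URL is made up):

```python
from urllib.parse import urlsplit, urlunsplit

def to_https(url: str) -> str:
    """Rewrite an http:// URL to https:// — the 301-redirect target a web
    server would send while unencrypted port 80 is being phased out."""
    parts = urlsplit(url)
    if parts.scheme != "http":
        return url                            # already https (or another scheme)
    host = parts.netloc.removesuffix(":80")   # drop the explicit default port
    return urlunsplit(("https", host, parts.path, parts.query, parts.fragment))

print(to_https("http://example.com:80/login?next=/home"))
# https://example.com/login?next=/home
```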
Anonymous
Community-Verified AI Explanation
Commented on April 24, 2026
Question 8:
- Why: LLDP (Link Layer Discovery Protocol) operates at the data-link layer to discover neighboring devices and the port IDs they’re connected to. On the switch, you can run commands like show lldp neighbors (and show lldp neighbors detail) to see which device is on which port, helping identify the exact switch and port the VoIP handset is using.
- IKE: VPN protocol, not relevant to local switch port mapping.
- VLAN: Indicates which VLAN a device is in, but not the specific switch port.
- netstat: Shows active connections on the host, not how it’s wired to the switch.
Anonymous
Kuah
Commented on April 24, 2026
The exam practice questions and answers are very helpful.
New Taipei City, Taiwan