CompTIA SY0-701 Exam Questions
CompTIA Security+ (Page 12)

Updated On: 31-Mar-2026

A security analyst recently read a report about a flaw in several of the organization's printer models that causes credentials to be sent over the network in cleartext, regardless of the encryption settings.
Which of the following would be best to use to validate this finding?

  A. Wireshark
  B. netcat
  C. Nessus
  D. Nmap

Answer(s): A

Explanation:

A) Correct. Wireshark captures and analyzes network traffic, so the analyst can inspect the printers' traffic directly and confirm whether credentials cross the network in cleartext.
B) Incorrect. netcat can create raw network connections but is not suited to detailed packet analysis or verifying credential exposure.
C) Incorrect. Nessus is a vulnerability scanner; it does not capture live traffic to show credentials in transit.
D) Incorrect. Nmap performs port and service discovery, not the deep packet inspection needed to validate unencrypted credential transmission.
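
As an illustration, the kind of evidence a Wireshark capture would surface can be sketched in a few lines of Python. The payload scan below is a simplified stand-in for applying a display filter or "Follow TCP Stream" in Wireshark; the marker strings and sample packet are hypothetical.

```python
# Hypothetical helper mimicking what a packet capture would reveal:
# scanning raw payload bytes for cleartext credential markers. If the
# printer actually encrypted the session, the payload would be
# ciphertext and none of these plaintext markers would match.

CREDENTIAL_MARKERS = (b"user=", b"username=", b"pass=", b"password=")

def find_cleartext_credentials(payload: bytes) -> list[bytes]:
    """Return every credential marker found verbatim in a packet payload."""
    lowered = payload.lower()
    return [m for m in CREDENTIAL_MARKERS if m in lowered]

# Sample payload as it might appear in a captured print-job authentication
packet = b"POST /auth HTTP/1.1\r\n\r\nusername=svc-print&password=Winter2024"
print(find_cleartext_credentials(packet))  # → [b'username=', b'password=']
```

In a real validation, the analyst would capture traffic between a workstation and the printer (for example via a SPAN port) and inspect the streams in Wireshark rather than scripting the check.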



A development team is launching a new public-facing web product. The Chief Information Security Officer has asked that the product be protected from attackers who use malformed or invalid inputs to destabilize the system.
Which of the following practices should the development team implement?

  A. Fuzzing
  B. Continuous deployment
  C. Static code analysis
  D. Manual peer review

Answer(s): A

Explanation:

A) Correct. Fuzzing sends random, malformed, or unexpected inputs to the running application to uncover crashes and input-validation weaknesses, which directly addresses the CISO's concern about malformed or invalid inputs destabilizing the system.
B) Incorrect. Continuous deployment speeds up release pipelines; it does not test resilience to malformed data.
C) Incorrect. Static code analysis reviews source code without executing it, so it cannot exercise runtime input handling the way fuzzing does.
D) Incorrect. Manual peer review can catch issues but is far less systematic than automated fuzzing at uncovering input-induced crashes.
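
The core fuzzing loop can be sketched in a few lines of Python, assuming a hypothetical input handler `parse_amount` that stands in for the web product's form processing:

```python
import random

def parse_amount(raw: str) -> int:
    # Hypothetical handler with naive validation: it assumes numeric input.
    return int(raw.strip())

def fuzz(handler, rounds: int = 1000, seed: int = 0) -> list[str]:
    """Feed randomly generated inputs to handler; collect those that raise."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        # Random bytes decoded loosely, mimicking malformed form-field input
        raw = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 12)))
        case = raw.decode("latin-1")
        try:
            handler(case)
        except Exception:
            crashes.append(case)
    return crashes

print(len(fuzz(parse_amount)), "inputs raised exceptions")
```

Production fuzzers (such as AFL++, libFuzzer, or OWASP ZAP's fuzzer) add coverage feedback and smarter input mutation, but the generate-inject-observe loop shown here is the same.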



During an annual review of the system design, an engineer identified a few issues with the currently released design.
Which of the following should be performed next according to best practices?

  A. Risk management process
  B. Product design process
  C. Design review process
  D. Change control process

Answer(s): D

Explanation:

A) Incorrect. Risk management is an ongoing program, not the immediate procedural step for handling an identified design issue.
B) Incorrect. The product design process is upstream and would not govern changes to an already released design.
C) Incorrect. The design review is what surfaced the issues; it does not authorize fixes to the released design.
D) Correct. The change control process is the formal mechanism to assess, authorize, document, and implement modifications to a released design, ensuring traceability, risk evaluation, and rollback planning.



Which of the following is best to use when determining the severity of a vulnerability?

  A. CVE
  B. OSINT
  C. SOAR
  D. CVSS

Answer(s): D

Explanation:

A vulnerability’s severity is best determined using CVSS because it provides a standardized scoring system for base, temporal, and environmental factors, yielding a numeric severity that facilitates risk assessment and prioritization.
A) CVE identifiers catalog vulnerabilities but do not provide severity scoring.
B) OSINT gathers public information but does not quantify impact or severity.
C) SOAR automates response and orchestration, not severity scoring.
D) CVSS provides the standardized scoring framework used to measure vulnerability impact and priority.
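
The CVSS v3.1 specification maps the numeric base score to a qualitative rating, which is what most triage workflows key on. A small sketch of that published mapping:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.x score (0.0-10.0) to the qualitative severity
    rating defined in the CVSS v3.1 specification."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(9.8))  # → Critical
```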



An organization experienced a security breach that allowed an attacker to send fraudulent wire transfers from a hardened PC exclusively to the attacker's bank through remote connections. A security analyst is creating a timeline of events and has found a different PC on the network containing malware. Upon reviewing the command history, the analyst finds the following:

PS>.\mimikatz.exe "sekurlsa::pth /user:localadmin /domain:corp-domain.com /ntlm:B4B9B02E1F29A3CF193EAB28C8D617D3F327"

Which of the following best describes how the attacker gained access to the hardened PC?

  A. The attacker created fileless malware that was hosted by the banking platform.
  B. The attacker performed a pass-the-hash attack using a shared support account.
  C. The attacker utilized living-off-the-land binaries to evade endpoint detection and response software.
  D. The attacker socially engineered the accountant into performing bad transfers.

Answer(s): B

Explanation:

The command shows Mimikatz's sekurlsa::pth module performing a pass-the-hash attack: a stolen NTLM hash is used to authenticate as localadmin and open remote connections to other hosts.
A) Incorrect. Fileless malware runs in memory or abuses legitimate tooling; nothing in the command indicates the banking platform hosted malware.
B) Correct. Pass-the-hash authenticates with an NTLM hash instead of the plaintext password, which is exactly what the command performs.
C) Incorrect. Living-off-the-land binaries (LOLBins) are legitimate system tools abused by attackers; Mimikatz is a third-party attack tool, and the evidence here is credential reuse, not detection evasion.
D) Incorrect. Social engineering involves manipulating people; the command history shows a technical credential-reuse attack, not persuasion.
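
From the defender's side, command-history evidence like the line above can be flagged with a simple pattern match. This is a hedged sketch, not a production detection rule; real detections (for example Sigma rules for Mimikatz) match many more indicators.

```python
import re

# Matches Mimikatz's sekurlsa::pth invocation and extracts the identity
# and NTLM hash being passed (tolerates a stray space before the flag,
# as seen in some captured command lines).
PTH_PATTERN = re.compile(
    r"sekurlsa::pth\s+/user:(?P<user>\S+)\s+/domain:(?P<domain>\S+)"
    r"\s*/\s*ntlm:(?P<ntlm>[0-9a-fA-F]+)"
)

def detect_pass_the_hash(command: str):
    """Return the user/domain/hash fields if the command is a pth attempt."""
    match = PTH_PATTERN.search(command)
    return match.groupdict() if match else None

history = ('PS>.\\mimikatz.exe "sekurlsa::pth /user:localadmin '
           '/domain:corp-domain.com '
           '/ntlm:B4B9B02E1F29A3CF193EAB28C8D617D3F327"')
print(detect_pass_the_hash(history))
```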



Which of the following is the best resource to consult for information on the most common application exploitation methods?

  A. OWASP
  B. STIX
  C. OVAL
  D. Threat intelligence feed
  E. Common Vulnerabilities and Exposures

Answer(s): A

Explanation:

A) Correct. OWASP documents the most common application exploitation methods, including the OWASP Top 10 risks, vulnerability patterns, and secure coding practices for web applications.
B) Incorrect. STIX is a structured language for sharing cyber threat intelligence, not a guide to application exploitation methods.
C) Incorrect. OVAL represents and tests system configuration and vulnerability states; it does not describe exploitation techniques.
D) Incorrect. A threat intelligence feed supplies indicators and context, not a curated catalog of common application exploits.
E) Incorrect. CVE catalogs individual vulnerabilities; it does not describe common exploitation methods or attacker techniques.



A security analyst is reviewing the logs on an organization's DNS server and notices the following unusual snippet:



Which of the following attack techniques was most likely used?

  A. Determining the organization's ISP-assigned address space
  B. Bypassing the organization's DNS sinkholing
  C. Footprinting the internal network
  D. Attempting to achieve initial access to the DNS server
  E. Exfiltrating data from fshare.int.complia.org

Answer(s): C

Explanation:

A) Incorrect. Determining the ISP-assigned address space is external footprinting and would not appear as unusual queries on the internal DNS server.
B) Incorrect. Bypassing DNS sinkholing would show evasion of known-malicious domain redirects, not broad internal name resolution.
C) Correct. Unusual DNS query patterns of this kind indicate reconnaissance to map internal hosts and services, which is footprinting of the internal network.
D) Incorrect. Attempting initial access to the DNS server itself would produce login or exploitation events, not query patterns.
E) Incorrect. Exfiltration via fshare.int.complia.org would show sustained outbound transfers or DNS tunneling indicators, not enumeration.
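
The reconnaissance pattern behind answer C can be approximated from query logs: one client resolving many distinct internal names in a short span. The log format, zone name, and threshold below are assumptions for illustration only.

```python
from collections import defaultdict

def flag_footprinting(queries, internal_zone=".int.complia.org", threshold=5):
    """queries: iterable of (source_ip, queried_name) pairs.
    Returns sources that resolved at least `threshold` distinct internal names."""
    per_source = defaultdict(set)
    for src, name in queries:
        if name.endswith(internal_zone):
            per_source[src].add(name)
    return [src for src, names in per_source.items() if len(names) >= threshold]

# Synthetic log: one client enumerating internal hosts, one normal lookup
log = [("10.0.0.5", f"host{i}.int.complia.org") for i in range(20)]
log += [("10.0.0.9", "fshare.int.complia.org")]
print(flag_footprinting(log))  # → ['10.0.0.5']
```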



A security analyst at an organization observed several user logins from outside the organization's network. The analyst determined that these logins were not performed by individuals within the organization.
Which of the following recommendations would reduce the likelihood of future attacks? (Choose two.)

  A. Disciplinary actions for users
  B. Conditional access policies
  C. More regular account audits
  D. Implementation of additional authentication factors
  E. Enforcement of content filtering policies
  F. A review of user account permissions

Answer(s): B,D

Explanation:

A location-aware, multi-factor approach is what directly reduces unauthorized external sign-ins.
A) Incorrect. Disciplinary actions do not prevent the use of stolen credentials.
B) Correct. Conditional access policies restrict sign-ins based on location, device compliance, and risk signals, blocking suspicious logins from outside the network.
C) Incorrect. More frequent account audits help detect compromises but do not prevent them.
D) Correct. Additional authentication factors (something the user has, knows, or is) make stolen passwords insufficient on their own.
E) Incorrect. Content filtering governs web content, not authentication.
F) Incorrect. Reviewing permissions supports least privilege but would not stop external logins made with valid credentials.
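
How options B and D combine in practice can be sketched as a policy decision: block external sign-ins unless a second factor was presented. The network range and policy shape below are illustrative assumptions, not any specific product's API.

```python
import ipaddress

TRUSTED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]  # assumed corporate egress

def evaluate_sign_in(source_ip: str, mfa_satisfied: bool) -> str:
    """Conditional access sketch: allow on-network sign-ins; require MFA otherwise."""
    addr = ipaddress.ip_address(source_ip)
    if any(addr in net for net in TRUSTED_NETWORKS):
        return "allow"
    # Outside the trusted network: enforce the additional factor (option D)
    return "allow" if mfa_satisfied else "block"

print(evaluate_sign_in("198.51.100.7", mfa_satisfied=False))  # → block
print(evaluate_sign_in("198.51.100.7", mfa_satisfied=True))   # → allow
```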





