A manufacturing organization wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. Which of the following is the BEST solution for the manufacturing organization?
Trusted third-party certification
Lightweight Directory Access Protocol (LDAP)
Security Assertion Markup Language (SAML)
Cross-certification
Security Assertion Markup Language (SAML) is the best solution for the manufacturing organization that wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. FIM is a process that allows the sharing and recognition of identities across different organizations that have a trust relationship, enabling the users of one organization to access the resources or services of another without having to create or maintain multiple accounts or credentials. FIM can provide several benefits, such as reduced administrative overhead, improved user experience, and consistent access control across organizational boundaries.
SAML is a standard protocol that supports FIM by allowing the exchange of authentication and authorization information between different parties. SAML uses XML-based messages, called assertions, to convey the identity, attributes, and entitlements of a user to a service provider. SAML defines three roles for the parties involved in FIM: the principal (the user requesting access), the identity provider (IdP), which authenticates the user and issues assertions, and the service provider (SP), which consumes the assertions and grants or denies access.
SAML works as follows: the user requests a resource from the service provider; the service provider redirects the user to the identity provider; the identity provider authenticates the user and returns a signed assertion; and the service provider validates the assertion and grants access without requiring a separate login.
SAML is the best solution for the manufacturing organization because it enables seamless and secure access to resources and services across the different organizations without requiring users to create or maintain multiple accounts or credentials. Because SAML is an open standard, it also provides interoperability and compatibility between different platforms and technologies.
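As a simplified illustration of the assertion structure described above, the following Python sketch builds an assertion-like XML document. The namespace is the real SAML 2.0 assertion namespace, but the structure is heavily abbreviated, the assertion is unsigned, and the user and IdP names are hypothetical:

```python
import xml.etree.ElementTree as ET

# Heavily abbreviated, unsigned illustration; real SAML 2.0 assertions carry
# validity conditions and an XML digital signature.
NS = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_assertion(user, idp, attributes):
    assertion = ET.Element(ET.QName(NS, "Assertion"))
    issuer = ET.SubElement(assertion, ET.QName(NS, "Issuer"))
    issuer.text = idp                       # the identity provider that vouches for the user
    subject = ET.SubElement(assertion, ET.QName(NS, "Subject"))
    name_id = ET.SubElement(subject, ET.QName(NS, "NameID"))
    name_id.text = user                     # the principal's identity
    attr_stmt = ET.SubElement(assertion, ET.QName(NS, "AttributeStatement"))
    for name, value in attributes.items():  # attributes/entitlements for the SP
        attr = ET.SubElement(attr_stmt, ET.QName(NS, "Attribute"), Name=name)
        ET.SubElement(attr, ET.QName(NS, "AttributeValue")).text = value
    return ET.tostring(assertion, encoding="unicode")

# Hypothetical user and IdP names, for illustration only.
xml_doc = build_assertion("alice@supplier1.example",
                          "https://idp.manufacturer.example",
                          {"role": "engineer"})
```

The service provider would parse this document, verify the (omitted) signature, and use the NameID and attributes to make its access decision.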
The other options are not the best solutions for establishing a FIM system, as each has limitations. Trusted third-party certification relies on a third party, such as a certificate authority (CA), to issue and verify digital certificates containing the public key and identity information of a user or entity. It can provide authentication and encryption for communication between parties, but it does not convey authorization or entitlement information for access to resources or services. Lightweight Directory Access Protocol (LDAP) is a protocol for accessing and managing directory services, such as Active Directory, that store the identity and attribute information of users and entities. LDAP provides a centralized and standardized way to store and retrieve this information, but it does not provide a mechanism to exchange or federate it across different organizations. Cross-certification is a process in which two or more CAs establish a trust relationship and recognize each other's certificates. It can extend the trust and validity of certificates across different domains or organizations, but it likewise does not provide a mechanism to exchange or federate identity, attribute, or entitlement information.
Which of the following is the BEST reason for writing an information security policy?
To support information security governance
To reduce the number of audit findings
To deter attackers
To implement effective information security controls
The best reason for writing an information security policy is to support information security governance. Information security governance is the framework for establishing and enforcing the policies and standards that protect and manage an organization's information and systems, and for overseeing and evaluating the performance and effectiveness of the information security program and its controls. Effective governance enhances the visibility and accountability of the security program, helps prevent or detect unauthorized or improper activities and changes, and supports audit and compliance activities. Information security governance involves elements and roles such as the board of directors, executive management, the Chief Information Security Officer (CISO), steering committees, and the policies, standards, procedures, and guidelines they produce.
The information security policy is the foundation of the governance framework: it provides the guidance and direction for the information security program, its controls, and its stakeholders. Writing the policy involves tasks such as defining its scope and objectives, aligning it with business goals and legal or regulatory requirements, obtaining executive approval, and communicating it throughout the organization.
Reducing the number of audit findings, deterring attackers, and implementing effective information security controls are not the best reasons for writing an information security policy, although they may be outcomes or benefits of doing so. Fewer audit findings, for example, indicate that the policy has improved the performance and effectiveness of the security program and its controls and has supported compliance activities, but that is a by-product rather than the primary purpose of the policy. Similarly, deterrence and effective controls flow from the governance that the policy establishes; they are not the fundamental objective of writing it.
Which of the following is MOST appropriate for protecting confidentiality of data stored on a hard drive?
Triple Data Encryption Standard (3DES)
Advanced Encryption Standard (AES)
Message Digest 5 (MD5)
Secure Hash Algorithm 2 (SHA-2)
The most appropriate method for protecting the confidentiality of data stored on a hard drive is the Advanced Encryption Standard (AES). AES is a symmetric encryption algorithm that uses the same key to encrypt and decrypt data. It provides strong and efficient encryption for data at rest: it is a block cipher that operates on fixed-size (128-bit) blocks and supports key sizes of 128, 192, or 256 bits. AES protects confidentiality by transforming the data into an unreadable form that can only be recovered by parties who possess the correct key. Note that AES by itself does not guarantee integrity or authentication; those properties require an authenticated mode of operation, such as Galois/Counter Mode (GCM), or a separate message authentication code. Triple Data Encryption Standard (3DES), Message Digest 5 (MD5), and Secure Hash Algorithm 2 (SHA-2) are not the most appropriate choices, although they are related cryptographic techniques. 3DES applies the Data Encryption Standard (DES) algorithm three times with two or three different keys. It can encrypt data at rest, but it is weaker and slower than AES: each DES iteration uses only a 56-bit key, the effective strength is at most 112 bits, and the small 64-bit block size further limits its security. MD5 is a hash function that produces a fixed-length 128-bit output from a variable-length input. It does not provide encryption, as it uses no key and cannot be reversed to recover the original data. MD5 can offer a weak integrity check, but it is no longer considered secure because practical collision attacks exist. SHA-2 is a family of hash functions that produce fixed-length outputs of 224, 256, 384, or 512 bits from a variable-length input. Like MD5, SHA-2 does not provide encryption and cannot be reversed, but it does provide reliable integrity verification and, unlike MD5, remains resistant to collision and pre-image attacks.
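Python's standard library does not include AES (a third-party library such as pycryptodome would be needed for actual encryption), but a short sketch with hashlib illustrates why hash functions such as MD5 and SHA-256 cannot protect confidentiality: they are keyless, produce a fixed-length digest, and are one-way, making them suitable only for integrity checking. The sample data is hypothetical:

```python
import hashlib

data = b"design document for supplier project"  # hypothetical sample data

# Hash functions produce a fixed-length digest regardless of input size, and
# the digest cannot be reversed to recover the data -- integrity, not
# confidentiality.
md5_digest = hashlib.md5(data).hexdigest()        # 128 bits -> 32 hex chars
sha256_digest = hashlib.sha256(data).hexdigest()  # 256 bits -> 64 hex chars

assert len(md5_digest) == 32
assert len(sha256_digest) == 64

# Any change to the input changes the digest, which is how hashes detect
# modification or corruption of stored data.
assert hashlib.sha256(b"design documenT").hexdigest() != sha256_digest
```

Note that the digests are deterministic: hashing the same bytes always yields the same output, which is what makes verification possible.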
Within the company, desktop clients receive Internet Protocol (IP) addresses over Dynamic Host Configuration
Protocol (DHCP).
Which of the following represents a valid measure to help protect the network against unauthorized access?
Implement patch management
Implement port-based security through 802.1x
Implement DHCP to assign IP address to server systems
Implement change management
Port-based security through 802.1x is a valid measure to help protect the network against unauthorized access. 802.1x is an IEEE standard for port-based network access control (PNAC) that provides an authentication mechanism for devices wishing to attach to a LAN or WLAN. 802.1x authentication involves three parties: a supplicant, an authenticator, and an authentication server. The supplicant is the client device that wishes to access the network. The authenticator is a network device, such as an Ethernet switch or wireless access point, that provides a data link between the client and the network and can allow or block traffic between the two. The authentication server is a trusted server that receives and responds to requests for network access and tells the authenticator whether the connection is to be allowed, along with any settings that should apply to that client's connection. By implementing port-based security through 802.1x, the network can prevent unauthorized devices from accessing network resources and ensure that only authenticated and authorized devices can communicate on the network. References: IEEE 802.1X - Wikipedia; What Is 802.1X Authentication? How Does 802.1x Work? - Fortinet; 802.1X: Port-Based Network Access Control - IEEE 802
Which of the following is a benefit in implementing an enterprise Identity and Access Management (IAM) solution?
Password requirements are simplified.
Risk associated with orphan accounts is reduced.
Segregation of duties is automatically enforced.
Data confidentiality is increased.
A benefit in implementing an enterprise Identity and Access Management (IAM) solution is that the risk associated with orphan accounts is reduced. An orphan account is an account that belongs to a user who has left the organization or changed roles, but the account has not been deactivated or deleted. An orphan account poses a security risk, as it can be exploited by unauthorized users or attackers to gain access to the system or data. An enterprise IAM solution is a system that manages the identification, authentication, authorization, and provisioning of users and devices across the organization. An enterprise IAM solution can help to reduce the risk associated with orphan accounts by automating the account lifecycle management, such as creating, updating, suspending, or deleting accounts based on the user status, role, or policy. An enterprise IAM solution can also help to monitor and audit the account activity, and to detect and remediate any orphan accounts. Password requirements are simplified, segregation of duties is automatically enforced, and data confidentiality is increased are all possible benefits or features of an enterprise IAM solution, but they are not the best answer to the question. Password requirements are simplified by an enterprise IAM solution that supports single sign-on (SSO) or federated identity management (FIM), which allow the user to access multiple systems or applications with one set of credentials. Segregation of duties is automatically enforced by an enterprise IAM solution that implements role-based access control (RBAC) or attribute-based access control (ABAC), which grant or deny access to resources based on the user role or attributes. Data confidentiality is increased by an enterprise IAM solution that encrypts or masks the sensitive data, or applies data loss prevention (DLP) or digital rights management (DRM) policies to the data.
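A minimal, hypothetical sketch of the reconciliation an IAM solution automates: comparing the account store against the active-employee roster (excluding known service accounts) to surface orphan accounts. All names are invented for illustration:

```python
# Hypothetical data: the HR roster, the provisioned accounts, and the
# service accounts that legitimately have no matching employee.
active_employees = {"alice", "bob", "carol"}
provisioned_accounts = {"alice", "bob", "carol", "dave", "svc_backup"}
known_service_accounts = {"svc_backup"}

# An orphan account is provisioned but belongs to no active employee and is
# not a sanctioned service account.
orphans = provisioned_accounts - active_employees - known_service_accounts
```

In a real IAM deployment this comparison runs continuously against the HR system of record, and detected orphans are automatically suspended pending review.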
A security compliance manager of a large enterprise wants to reduce the time it takes to perform network,
system, and application security compliance audits while increasing quality and effectiveness of the results.
What should be implemented to BEST achieve the desired results?
Configuration Management Database (CMDB)
Source code repository
Configuration Management Plan (CMP)
System performance monitoring application
A Configuration Management Database (CMDB) is a database that stores information about configuration items (CIs) for use in change, release, incident, service request, problem, and configuration management processes. A CI is any component or resource that is part of a system or a network, such as hardware, software, documentation, or personnel. A CMDB can provide several benefits for security compliance audits, such as providing a centralized and accurate inventory of the assets and configurations in scope, revealing the relationships and dependencies among CIs so that auditors can quickly determine the audit scope, and enabling automated comparison of actual configurations against approved baselines, which reduces audit time while improving the quality and effectiveness of the results.
A source code repository, a configuration management plan (CMP), and a system performance monitoring application are not the best options for reducing the time and increasing the quality and effectiveness of network, system, and application security compliance audits, although they may be related or useful tools. A source code repository is a database or system that stores and manages the source code of a software application and supports version control, collaboration, and documentation of the code. A source code repository can provide some benefits for security compliance audits, such as supporting traceability of code changes and enabling static analysis and peer review of the application code.
However, a source code repository is not the best option, as it applies only to the application layer and does not provide information about the other CIs that are part of the system or network, such as hardware, documentation, or personnel. A configuration management plan (CMP) is a document or policy that defines the objectives, scope, roles, responsibilities, processes, and procedures of configuration management, which is the process of identifying, controlling, tracking, and auditing changes to the CIs. A CMP can provide some benefits for security compliance audits, such as documenting the configuration management process and controls that auditors can use as assessment criteria.
However, a CMP is not the best option, as it is not a database or system that stores and provides information about the CIs, but rather a document that describes the configuration management process. A system performance monitoring application is a software tool that collects and analyzes data and metrics about the performance and behavior of a system or network, such as availability, reliability, throughput, response time, or resource utilization. It can provide some benefits for security compliance audits, such as supplying performance and availability data that support the assessment of operational controls.
However, a system performance monitoring application is not the best option to achieve the desired results of reducing the time and increasing the quality and effectiveness of network, system, and application security compliance audits, as it is only applicable to the network and system layers, and it does not provide information about the other CIs that are part of the system or the network, such as software, documentation, or personnel.
An organization’s security policy delegates to the data owner the ability to assign which user roles have access
to a particular resource. What type of authorization mechanism is being used?
Discretionary Access Control (DAC)
Role Based Access Control (RBAC)
Media Access Control (MAC)
Mandatory Access Control (MAC)
Discretionary Access Control (DAC) is a type of authorization mechanism that grants or denies access to resources based on the identity of the user and the permissions assigned by the owner of the resource. The owner of the resource has the discretion to decide who can access the resource and what level of access they can have. For example, the owner of a file can assign read, write, or execute permissions to different users or groups. DAC is flexible and easy to implement, but it also poses security risks, such as unauthorized access, data leakage, or privilege escalation, if the owner is not careful or knowledgeable about the security implications of their decisions.
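A minimal sketch of the DAC model described above, with hypothetical resource and user names: the owner holds full rights and alone decides what to grant to others.

```python
# Minimal DAC sketch (hypothetical): each resource records its owner and an
# owner-managed permission table.
class Resource:
    def __init__(self, name, owner):
        self.name = name
        self.owner = owner
        self.permissions = {owner: {"read", "write"}}  # owner has full rights

    def grant(self, grantor, user, right):
        # In DAC, only the owner decides who may access the resource.
        if grantor != self.owner:
            raise PermissionError("only the owner may grant access")
        self.permissions.setdefault(user, set()).add(right)

    def can(self, user, right):
        return right in self.permissions.get(user, set())

doc = Resource("design.doc", owner="alice")
doc.grant("alice", "bob", "read")  # alice, at her discretion, grants bob read
```

The grant check also illustrates the risk the explanation mentions: security rests entirely on each owner's judgment, with no central mandatory policy constraining what an owner may share.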
What is the BEST approach for controlling access to highly sensitive information when employees have the same level of security clearance?
Audit logs
Role-Based Access Control (RBAC)
Two-factor authentication
Application of least privilege
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance. The principle of least privilege is a security concept that states that every user or process should have the minimum access rights and permissions necessary to perform their tasks or functions, and nothing more. It provides several benefits, such as reducing the attack surface, limiting the damage a compromised account or process can cause, and preventing the accumulation of unnecessary privileges over time.
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance, because it can ensure that the employees can only access the information that is relevant and necessary for their tasks or functions, and that they cannot access or manipulate the information that is beyond their scope or authority. For example, if the highly sensitive information is related to a specific project or department, then only the employees who are involved in that project or department should have access to that information, and not the employees who have the same level of security clearance but are not involved in that project or department.
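A minimal, hypothetical sketch of that idea: clearance alone is not sufficient, and project membership (need-to-know) is checked as well. All names are invented:

```python
# Hypothetical data: both users hold the same clearance, but only alice is
# assigned to the project that owns the sensitive information.
clearance = {"alice": "secret", "bob": "secret"}
project_members = {"project-x": {"alice"}}

def may_access(user, project):
    # Same clearance level is necessary but not sufficient; least privilege
    # also requires that the user need the information for assigned work.
    return user in project_members.get(project, set())
```

Here bob is denied access to project-x material despite holding the same clearance as alice, which is exactly the distinction the explanation draws.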
The other options are not the best approaches for controlling access to highly sensitive information when employees have the same level of security clearance; each serves a different purpose. Audit logs are records that capture the events and activities that occur within a system or network, such as access to and usage of sensitive data. They provide a reactive, detective layer of security by enabling monitoring, analysis, and incident investigation, but they cannot prevent access to or disclosure of sensitive information; they provide evidence after the fact. Role-Based Access Control (RBAC) enforces access rights and permissions based on users' roles or functions within the organization rather than their individual identities or attributes. RBAC provides a granular and dynamic layer of security, but it cannot distinguish between employees who share both the same clearance level and the same role; in that case it relies on other criteria or mechanisms. Two-factor authentication verifies user identity by requiring two pieces of evidence, such as something they know (e.g., password, PIN), something they have (e.g., token, smart card), or something they are (e.g., fingerprint, face). It provides a strong, preventive layer of security against users who lack both factors.
However, two-factor authentication cannot control access to highly sensitive information among employees who all possess valid factors and the same clearance; it, too, relies on other criteria or mechanisms.
Which Identity and Access Management (IAM) process can be used to maintain the principle of least
privilege?
identity provisioning
access recovery
multi-factor authentication (MFA)
user access review
The Identity and Access Management (IAM) process that can be used to maintain the principle of least privilege is user access review. User access review is the process of periodically reviewing and verifying the user accounts and access rights on a system or a network, and ensuring that they are appropriate, necessary, and compliant with the policies and standards. User access review can help to maintain the principle of least privilege by identifying and removing any excessive, obsolete, or unauthorized access rights that may pose a security risk or violate the regulations. User access review can also help to support the audit and compliance activities, as well as the identity lifecycle management activities. Identity provisioning, access recovery, and multi-factor authentication (MFA) are not the IAM processes that can be used to maintain the principle of least privilege, although they may be related or useful processes. Identity provisioning is the process of creating, modifying, or deleting the user accounts and access rights on a system or a network. Identity provisioning can help to establish the principle of least privilege by granting the user accounts and access rights that are aligned with the user roles or functions within the organization. However, identity provisioning is not sufficient to maintain the principle of least privilege, as the user accounts and access rights may change or become outdated over time, due to various factors, such as role changes, transfers, promotions, or terminations. Access recovery is the process of restoring the user accounts and access rights on a system or a network, after they have been lost, corrupted, or compromised. Access recovery can help to ensure the availability and integrity of the user accounts and access rights, as well as to mitigate the impact of a security incident or a disaster. 
However, access recovery is not a process that can be used to maintain the principle of least privilege, as it does not involve reviewing or verifying the appropriateness or necessity of the user accounts and access rights. Multi-factor authentication (MFA) is a technique that uses two or more factors of authentication to verify the identity of the user who accesses a system or a network. MFA can help to enhance the security and reliability of the authentication process, by requiring the user to provide something they know (e.g., password), something they have (e.g., token), or something they are (e.g., biometric). However, MFA is not a process that can be used to maintain the principle of least privilege, as it does not affect the user accounts and access rights, but only the user access credentials.
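A minimal, hypothetical sketch of the user access review described above: each user's current rights are compared against the baseline for their role, and anything in excess is flagged for removal. Role names and rights are invented:

```python
# Hypothetical role baselines and current entitlements.
role_baseline = {
    "engineer": {"repo:read", "repo:write"},
    "auditor": {"repo:read", "logs:read"},
}
current_access = {
    "alice": ("engineer", {"repo:read", "repo:write", "prod:deploy"}),
    "bob": ("auditor", {"repo:read", "logs:read"}),
}

def review(current, baseline):
    # Flag any right that exceeds the baseline for the user's role; these
    # are the candidates for removal to restore least privilege.
    findings = {}
    for user, (role, rights) in current.items():
        excess = rights - baseline[role]
        if excess:
            findings[user] = excess
    return findings
```

A periodic run of such a review would catch privileges that accumulated through role changes or transfers, which is precisely why the review, rather than one-time provisioning, maintains least privilege.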
Which of the following would MINIMIZE the ability of an attacker to exploit a buffer overflow?
Memory review
Code review
Message division
Buffer division
Code review is the technique that would minimize the ability of an attacker to exploit a buffer overflow. A buffer overflow is a type of vulnerability that occurs when a program writes more data to a buffer than it can hold, causing the data to overwrite the adjacent memory locations, such as the return address or the stack pointer. An attacker can exploit a buffer overflow by injecting malicious code or data into the buffer, and altering the execution flow of the program to execute the malicious code or data. Code review is the technique that would minimize the ability of an attacker to exploit a buffer overflow, as it involves examining the source code of the program to identify and fix any errors, flaws, or weaknesses that may lead to buffer overflow vulnerabilities. Code review can help to detect and prevent the use of unsafe or risky functions, such as gets, strcpy, or sprintf, that do not perform any boundary checking on the buffer, and replace them with safer or more secure alternatives, such as fgets, strncpy, or snprintf, that limit the amount of data that can be written to the buffer. Code review can also help to enforce and verify the use of secure coding practices and standards, such as input validation, output encoding, error handling, or memory management, that can reduce the likelihood or impact of buffer overflow vulnerabilities. Memory review, message division, and buffer division are not techniques that would minimize the ability of an attacker to exploit a buffer overflow, although they may be related or useful concepts. Memory review is not a technique, but a process of analyzing the memory layout or content of a program, such as the stack, the heap, or the registers, to understand or debug its behavior or performance. Memory review may help to identify or investigate the occurrence or effect of a buffer overflow, but it does not prevent or mitigate it. 
Message division is not a technique, but a concept of splitting a message into smaller or fixed-size segments or blocks, such as in cryptography or networking. Message division may help to improve the security or efficiency of the message transmission or processing, but it does not prevent or mitigate buffer overflow. Buffer division is not a technique, but a concept of dividing a buffer into smaller or separate buffers, such as in buffering or caching. Buffer division may help to optimize the memory usage or allocation of the program, but it does not prevent or mitigate buffer overflow.
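Code review is often assisted by simple automated scanners. The following hypothetical Python sketch flags calls to gets, strcpy, and sprintf in C source, the kind of finding a reviewer would raise and ask to be replaced with a bounded alternative:

```python
import re

# Hypothetical code-review helper: flag calls to C functions that perform no
# bounds checking on the destination buffer.
BANNED = {"gets", "strcpy", "sprintf"}
PATTERN = re.compile(r"\b(" + "|".join(BANNED) + r")\s*\(")

def flag_unsafe_calls(source):
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for match in PATTERN.finditer(line):
            findings.append((lineno, match.group(1)))
    return findings

# Sample C fragment: line 2 uses unbounded gets(); line 3 uses the bounded
# strncpy() and is not flagged.
c_source = 'char buf[8];\ngets(buf);\nstrncpy(buf, input, sizeof(buf) - 1);\n'
```

Such pattern matching is only a first pass; a full review also checks input validation, error handling, and memory management, as the explanation notes.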
Which of the following is the MOST common method of memory protection?
Compartmentalization
Segmentation
Error correction
Virtual Local Area Network (VLAN) tagging
The most common method of memory protection is segmentation. Segmentation is a technique that divides the memory space into logical segments, such as code, data, stack, and heap. Each segment has its own attributes, such as size, location, access rights, and protection level. Segmentation can help to isolate and protect the memory segments from unauthorized or unintended access, modification, or execution, as well as to prevent memory corruption, overflow, or leakage. Compartmentalization, error correction, and VLAN tagging are not methods of memory protection, but of information protection, data protection, and network protection, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 589; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 370.
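A minimal, hypothetical sketch of segment-based protection: every access is validated against the segment's limit and access rights before a physical address is computed, so an out-of-bounds or wrongly-typed access faults instead of corrupting another segment. The base and limit values are invented:

```python
# Hypothetical segment table: each segment has a base address, a size limit,
# and the access rights it permits.
segments = {
    "code": {"base": 0x0000, "limit": 0x4000, "rights": {"read", "execute"}},
    "data": {"base": 0x4000, "limit": 0x2000, "rights": {"read", "write"}},
}

def access(segment, offset, right):
    seg = segments[segment]
    if offset >= seg["limit"]:
        # Out-of-bounds access never reaches another segment's memory.
        raise MemoryError("segmentation fault: offset beyond segment limit")
    if right not in seg["rights"]:
        # E.g. writing into the code segment is refused.
        raise PermissionError(f"{right} not permitted on {segment} segment")
    return seg["base"] + offset  # translated physical address
```

The two checks correspond to the protections in the explanation: bounds checking prevents overflow into adjacent memory, and per-segment rights prevent unauthorized modification or execution.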
Who would be the BEST person to approve an organization's information security policy?
Chief Information Officer (CIO)
Chief Information Security Officer (CISO)
Chief internal auditor
Chief Executive Officer (CEO)
The Chief Executive Officer (CEO) is the best person to approve an organization's information security policy. The policy must carry the authority of the most senior level of management to be enforceable across the entire organization; the CIO, CISO, and chief internal auditor may draft, recommend, or review the policy, but final approval and accountability rest with executive management.
Which of the following is a common feature of an Identity as a Service (IDaaS) solution?
Single Sign-On (SSO) authentication support
Privileged user authentication support
Password reset service support
Terminal Access Controller Access Control System (TACACS) authentication support
Single Sign-On (SSO) is a feature that allows a user to authenticate once and access multiple applications or services without having to re-enter their credentials. SSO improves the user experience and reduces the password management burden for both users and administrators. SSO is a common feature of Identity as a Service (IDaaS) solutions, which are cloud-based services that provide identity and access management capabilities to organizations. IDaaS solutions typically support various SSO protocols and standards, such as Security Assertion Markup Language (SAML), OpenID Connect (OIDC), OAuth, and Kerberos, to enable seamless and secure integration with different applications and services, both on-premises and in the cloud.
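A minimal, hypothetical sketch of the SSO idea: an identity service signs a token once, and each relying application verifies the signature instead of re-authenticating the user. Real IDaaS deployments use standardized, asymmetrically signed formats such as SAML assertions or OIDC ID tokens rather than a shared HMAC key; the key and user name here are illustrative only:

```python
import base64
import hashlib
import hmac
import json

# Illustration only: real deployments use asymmetric signatures (SAML/OIDC),
# not a symmetric key shared with every relying application.
SHARED_KEY = b"demo-key-shared-with-relying-apps"

def issue_token(user):
    # The identity service authenticates the user once, then signs a payload.
    payload = json.dumps({"sub": user}).encode()
    sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return base64.b64encode(payload) + b"." + base64.b64encode(sig)

def verify_token(token):
    # Any relying application can verify the token without a fresh login.
    payload_b64, sig_b64 = token.split(b".")
    payload = base64.b64decode(payload_b64)
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.b64decode(sig_b64)):
        raise ValueError("invalid token signature")
    return json.loads(payload)["sub"]

token = issue_token("alice")
```

Because verification only requires the signature check, the user authenticates once at the identity service and is then accepted by every application that trusts it, which is the core SSO benefit the explanation describes.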
Which of the following mechanisms will BEST prevent a Cross-Site Request Forgery (CSRF) attack?
parameterized database queries
whitelist input values
synchronized session tokens
use strong ciphers
The best mechanism to prevent a Cross-Site Request Forgery (CSRF) attack is to use synchronized session tokens. A CSRF attack is a type of web application vulnerability that exploits the trust that a site has in a user’s browser. A CSRF attack occurs when a malicious site, email, or link tricks a user’s browser into sending a forged request to a vulnerable site, where the user is already authenticated. The vulnerable site cannot distinguish between the legitimate and the forged requests, and may perform an unwanted action on behalf of the user, such as changing a password, transferring funds, or deleting data. Synchronized session tokens are a technique to prevent CSRF attacks by adding a random and unique value to each request that is generated by the server and verified by the server before processing the request. The token is usually stored in a hidden form field or a custom HTTP header, and is tied to the user’s session. The token ensures that the request originates from the same site that issued it, and not from a malicious site. Synchronized session tokens are also known as CSRF tokens, anti-CSRF tokens, or state tokens. Parameterized database queries, whitelist input values, and use strong ciphers are not mechanisms to prevent CSRF attacks, although they may be useful for other types of web application vulnerabilities. Parameterized database queries are a technique to prevent SQL injection attacks by using placeholders or parameters for user input, instead of concatenating or embedding user input directly into the SQL query. Parameterized database queries ensure that the user input is treated as data and not as part of the SQL command. Whitelist input values are a technique to prevent input validation attacks by allowing only a predefined set of values or characters for user input, instead of rejecting or filtering out unwanted or malicious values or characters. Whitelist input values ensure that the user input conforms to the expected format and type. 
Using strong ciphers is a technique to prevent cryptographic attacks by employing algorithms and key lengths that resist brute force, cryptanalysis, and other attacks; it helps ensure that encrypted data remains confidential and authentic, but it does not address CSRF.
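A minimal, hypothetical sketch of the synchronizer-token pattern described above: the server stores a random token in the session, embeds it in each form, and rejects any state-changing request that does not echo it back. The session contents are invented:

```python
import hmac
import secrets

# Hypothetical session store: the server generates a random token per session.
session = {"user": "alice", "csrf_token": secrets.token_hex(32)}

def render_form():
    # The token is embedded in a hidden field of every legitimate form, so
    # only pages served by this site can produce a valid request.
    return f'<input type="hidden" name="csrf_token" value="{session["csrf_token"]}">'

def handle_post(form_fields):
    submitted = form_fields.get("csrf_token", "")
    # Constant-time comparison against the session copy; a forged cross-site
    # request cannot know the token and is rejected.
    if not hmac.compare_digest(submitted, session["csrf_token"]):
        raise PermissionError("CSRF check failed: token missing or wrong")
    return "password changed"
```

A request forged by a malicious site carries the victim's session cookie but not the token, so the comparison fails, which is exactly the property the explanation describes.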
Which of the following is a characteristic of an internal audit?
An internal audit is typically shorter in duration than an external audit.
The internal audit schedule is published to the organization well in advance.
The internal auditor reports to the Information Technology (IT) department.
Management is responsible for reading and acting upon the internal audit results
A characteristic of an internal audit is that management is responsible for reading and acting upon the internal audit results. An internal audit is an independent and objective evaluation of the internal controls, processes, or activities of an organization, performed by auditors who are part of the organization, such as the internal audit department. An internal audit can provide benefits for security, such as improving the accuracy and reliability of operations, preventing or detecting fraud or errors, and supporting audit and compliance activities. An internal audit typically involves steps such as planning the audit, performing fieldwork, reporting the findings and recommendations, and following up on corrective actions, and it involves roles such as management, the audit committee, the internal auditor, and the audit team.
Management is responsible for reading and acting upon the internal audit results: managers are the primary recipients of the internal audit report, and they have the authority and accountability to implement the recommendations and improvements it contains, as well as to report the results to external parties such as regulators, shareholders, or customers where required. The other statements describe aspects that may or may not hold but are not defining characteristics of an internal audit. An internal audit is often shorter than an external audit, because internal auditors are already familiar with, and have ready access to, the organization's controls, processes, and activities; however, the duration varies with the audit's objectives, scope, criteria, and methodology, so it is not a defining feature. Likewise, the internal auditor should not report to the Information Technology (IT) department: to preserve independence, the internal audit function should report to the audit committee or the board rather than to the departments it audits.
The internal audit schedule is published to the organization well in advance, as it is a good practice or a technique that can help to ensure the transparency and the accountability of the internal audit, as well as to facilitate the coordination and the cooperation of the internal audit stakeholders, such as the management, the audit committee, the internal auditor, or the audit team.
Who is accountable for the information within an Information System (IS)?
Security manager
System owner
Data owner
Data processor
The data owner is the person who has the authority and responsibility for the information within an Information System (IS). The data owner is accountable for the security, quality, and integrity of the data, as well as for defining the classification, sensitivity, retention, and disposal of the data. The data owner must also approve or deny the access requests and periodically review the access rights. The security manager, the system owner, and the data processor are not accountable for the information within an IS, but they may have roles and responsibilities related to the security and operation of the IS. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 48; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 40.
Which of the following could be considered the MOST significant security challenge when adopting DevOps practices compared to a more traditional control framework?
Achieving Service Level Agreements (SLA) on how quickly patches will be released when a security flaw is found.
Maintaining segregation of duties.
Standardized configurations for logging, alerting, and security metrics.
Availability of security teams at the end of design process to perform last-minute manual audits and reviews.
The most significant security challenge when adopting DevOps practices compared to a more traditional control framework is maintaining segregation of duties. DevOps is a set of practices and methodologies that integrates and automates the development and operations of a system or network to improve the quality and speed of delivery and deployment, relying on tools and techniques such as continuous integration, continuous delivery, continuous testing, continuous monitoring, and continuous feedback. A traditional control framework, by contrast, is a set of policies and procedures that establishes and enforces security and governance to protect the confidentiality, integrity, and availability of a system or network, through controls and mechanisms such as risk assessment, change management, configuration management, access control, and audit trails. Because DevOps encourages small, cross-functional teams in which the same people write, test, deploy, and operate code, it conflicts directly with the traditional requirement that these duties be split among different parties, making segregation of duties difficult and costly to implement and maintain.
Segregation of duties is a security principle or a technique that requires that different roles or functions are assigned to different parties, and that no single party can perform all the steps of a process or a task, such as development, testing, deployment, or maintenance. Segregation of duties can provide some benefits for security, such as enhancing the accuracy and the reliability of the process or the task, preventing or detecting fraud or errors, and supporting the audit and the compliance activities.
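A minimal sketch of how such a separation could be checked automatically in a pipeline; the role names and the rule set are illustrative assumptions, not part of any standard:

```python
# Segregation-of-duties gate for a DevOps change record (illustrative sketch).
# Rules assumed here: the author must not approve or deploy their own change.

def sod_violations(change):
    """Return a list of segregation-of-duties violations for a change record."""
    violations = []
    # The author of a change should not be the one who approves it.
    if change["author"] == change.get("approver"):
        violations.append("author approved their own change")
    # The person who deploys should not be the one who wrote the code.
    if change["author"] == change.get("deployer"):
        violations.append("author deployed their own change")
    return violations

change = {"author": "alice", "approver": "bob", "deployer": "alice"}
print(sod_violations(change))  # -> ['author deployed their own change']
```

A pipeline could run such a check before promoting a release and block the deployment when the list is non-empty.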
An important principle of defense in depth is that achieving information security requires a balanced focus on which PRIMARY elements?
Development, testing, and deployment
Prevention, detection, and remediation
People, technology, and operations
Certification, accreditation, and monitoring
An important principle of defense in depth is that achieving information security requires a balanced focus on the primary elements of people, technology, and operations. People are the users, administrators, managers, and other stakeholders who are involved in the security process. They need to be aware, trained, motivated, and accountable for their security roles and responsibilities. Technology is the hardware, software, network, and other tools that are used to implement the security controls and measures. They need to be selected, configured, updated, and monitored according to the security standards and best practices. Operations are the policies, procedures, processes, and activities that are performed to achieve the security objectives and requirements. They need to be documented, reviewed, audited, and improved continuously to ensure their effectiveness and efficiency.
The other options are not the primary elements of defense in depth, but rather the phases, functions, or outcomes of the security process. Development, testing, and deployment are the phases of the security life cycle, which describes how security is integrated into the system development process. Prevention, detection, and remediation are the functions of the security management, which describes how security is maintained and improved over time. Certification, accreditation, and monitoring are the outcomes of the security evaluation, which describes how security is assessed and verified against the criteria and standards.
When assessing an organization’s security policy according to standards established by the International Organization for Standardization (ISO) 27001 and 27002, when can management responsibilities be defined?
Only when assets are clearly defined
Only when standards are defined
Only when controls are put in place
Only when procedures are defined
When assessing an organization’s security policy according to standards established by the ISO 27001 and 27002, management responsibilities can be defined only when standards are defined. Standards are the specific rules, guidelines, or procedures that support the implementation of the security policy. Standards define the minimum level of security that must be achieved by the organization, and provide the basis for measuring compliance and performance. Standards also assign roles and responsibilities to different levels of management and staff, and specify the reporting and escalation procedures.
Management responsibilities are the duties and obligations that managers have to ensure the effective and efficient execution of the security policy and standards. Management responsibilities include providing leadership, direction, support, and resources for the security program, establishing and communicating the security objectives and expectations, ensuring compliance with the legal and regulatory requirements, monitoring and reviewing the security performance and incidents, and initiating corrective and preventive actions when needed.
Management responsibilities cannot be defined without standards, as standards provide the framework and criteria for defining what managers need to do and how they need to do it. Management responsibilities also depend on the scope and complexity of the security policy and standards, which may vary depending on the size, nature, and context of the organization. Therefore, standards must be defined before management responsibilities can be defined.
The other options are not correct, as they are not prerequisites for defining management responsibilities. Assets are the resources that need to be protected by the security policy and standards, but they do not determine the management responsibilities. Controls are the measures that are implemented to reduce the security risks and achieve the security objectives, but they do not determine the management responsibilities. Procedures are the detailed instructions that describe how to perform the security tasks and activities, but they do not determine the management responsibilities.
Which of the following represents the GREATEST risk to data confidentiality?
Network redundancies are not implemented
Security awareness training is not completed
Backup tapes are generated unencrypted
Users have administrative privileges
Generating backup tapes unencrypted represents the greatest risk to data confidentiality, as it exposes the data to unauthorized access or disclosure if the tapes are lost, stolen, or intercepted. Backup tapes are often stored off-site or transported to remote locations, which increases the chances of them falling into the wrong hands. If the backup tapes are unencrypted, anyone who obtains them can read the data without any difficulty. Therefore, backup tapes should always be encrypted using strong algorithms and keys, and the keys should be protected and managed separately from the tapes.
The other options do not pose as much risk to data confidentiality as generating backup tapes unencrypted. Network redundancies are not implemented will affect the availability and reliability of the network, but not necessarily the confidentiality of the data. Security awareness training is not completed will increase the likelihood of human errors or negligence that could compromise the data, but not as directly as generating backup tapes unencrypted. Users have administrative privileges will grant users more access and control over the system and the data, but not as widely as generating backup tapes unencrypted.
Intellectual property rights are PRIMARILY concerned with which of the following?
Owner’s ability to realize financial gain
Owner’s ability to maintain copyright
Right of the owner to enjoy their creation
Right of the owner to control delivery method
Intellectual property rights are primarily concerned with the owner’s ability to realize financial gain from their creation. Intellectual property is a category of intangible assets that are the result of human creativity and innovation, such as inventions, designs, artworks, literature, music, software, etc. Intellectual property rights are the legal rights that grant the owner the exclusive control over the use, reproduction, distribution, and modification of their intellectual property. Intellectual property rights aim to protect the owner’s interests and incentives, and to reward them for their contribution to the society and economy.
The other options are not the primary concern of intellectual property rights, but rather the secondary or incidental benefits or aspects of them. The owner’s ability to maintain copyright is a means of enforcing intellectual property rights, but not the end goal of them. The right of the owner to enjoy their creation is a personal or moral right, but not a legal or economic one. The right of the owner to control the delivery method is a specific or technical aspect of intellectual property rights, but not a general or fundamental one.
All of the following items should be included in a Business Impact Analysis (BIA) questionnaire EXCEPT questions that
determine the risk of a business interruption occurring
determine the technological dependence of the business processes
identify the operational impacts of a business interruption
identify the financial impacts of a business interruption
A Business Impact Analysis (BIA) is a process that identifies and evaluates the potential effects of natural and man-made disasters on business operations. The BIA questionnaire is a tool that collects information from business process owners and stakeholders about the criticality, dependencies, recovery objectives, and resources of their processes. The BIA questionnaire should include questions that identify the operational and financial impacts of a business interruption and that determine the technological dependence of the business processes.
The BIA questionnaire should not include questions that determine the risk of a business interruption occurring, as this is part of the risk assessment process, which is a separate activity from the BIA. The risk assessment process identifies and analyzes the threats and vulnerabilities that could cause a business interruption, and estimates the likelihood and impact of such events. The risk assessment process also evaluates the existing controls and mitigation strategies, and recommends additional measures to reduce the risk to an acceptable level.
A company whose Information Technology (IT) services are being delivered from a Tier 4 data center is preparing a companywide Business Continuity Plan (BCP). Which of the following failures should the IT manager be concerned with?
Application
Storage
Power
Network
A company whose IT services are being delivered from a Tier 4 data center should be most concerned with application failures when preparing a companywide BCP. A BCP is a document that describes how an organization will continue its critical business functions in the event of a disruption or disaster. A BCP should include a risk assessment, a business impact analysis, a recovery strategy, and a testing and maintenance plan.
A Tier 4 data center is the highest level of data center classification, according to the Uptime Institute. A Tier 4 data center has the highest level of availability, reliability, and fault tolerance, as it has multiple and independent paths for power and cooling, and redundant and backup components for all systems. A Tier 4 data center has an uptime rating of 99.995%, which means it can only experience 0.4 hours of downtime per year. Therefore, the likelihood of a power, storage, or network failure in a Tier 4 data center is very low, and the impact of such a failure would be minimal, as the data center can quickly switch to alternative sources or routes.
However, a Tier 4 data center cannot prevent or mitigate application failures, which are caused by software bugs, configuration errors, or malicious attacks. Application failures can affect the functionality, performance, or security of the IT services, and cause data loss, corruption, or breach. Therefore, the IT manager should be most concerned with application failures when preparing a BCP, and ensure that the applications are properly designed, tested, updated, and monitored.
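The downtime figure quoted above follows directly from the availability percentage. A quick sketch of the arithmetic, using the commonly cited Uptime Institute availability values for each tier:

```python
# Allowable annual downtime implied by an availability percentage.
# 8,760 hours in a non-leap year; tier percentages are the commonly
# cited Uptime Institute values.

def annual_downtime_hours(availability_pct, hours_per_year=8760):
    return hours_per_year * (1 - availability_pct / 100)

for tier, pct in [("Tier 1", 99.671), ("Tier 2", 99.741),
                  ("Tier 3", 99.982), ("Tier 4", 99.995)]:
    print(f"{tier}: {annual_downtime_hours(pct):.2f} h/year")
# Tier 4 works out to about 0.44 hours (~26 minutes) per year.
```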
Which of the following actions will reduce risk to a laptop before traveling to a high risk area?
Examine the device for physical tampering
Implement more stringent baseline configurations
Purge or re-image the hard disk drive
Change access codes
Purging or re-imaging the hard disk drive of a laptop before traveling to a high risk area will reduce the risk of data compromise or theft in case the laptop is lost, stolen, or seized by unauthorized parties. Purging or re-imaging the hard disk drive will erase all the data and applications on the laptop, leaving only the operating system and the essential software. This will minimize the exposure of sensitive or confidential information that could be accessed by malicious actors. Purging or re-imaging the hard disk drive should be done using secure methods that prevent data recovery, such as overwriting, degaussing, or physical destruction.
The other options will not reduce the risk to the laptop as effectively as purging or re-imaging the hard disk drive. Examining the device for physical tampering will only detect if the laptop has been compromised after the fact, but will not prevent it from happening. Implementing more stringent baseline configurations will improve the security settings and policies of the laptop, but will not protect the data if the laptop is bypassed or breached. Changing access codes will make it harder for unauthorized users to log in to the laptop, but will not prevent them from accessing the data if they use other methods, such as booting from a removable media or removing the hard disk drive.
What is the MOST important consideration from a data security perspective when an organization plans to relocate?
Ensure the fire prevention and detection systems are sufficient to protect personnel
Review the architectural plans to determine how many emergency exits are present
Conduct a gap analysis of the new facilities against existing security requirements
Revise the Disaster Recovery and Business Continuity (DR/BC) plan
When an organization plans to relocate, the most important consideration from a data security perspective is to conduct a gap analysis of the new facilities against the existing security requirements. A gap analysis is a process that identifies and evaluates the differences between the current state and the desired state of a system or a process. In this case, the gap analysis would compare the security controls and measures implemented in the old and new locations, and identify any gaps or weaknesses that need to be addressed. The gap analysis would also help to determine the costs and resources needed to implement the necessary security improvements in the new facilities.
The other options are not as important as conducting a gap analysis, as they do not directly address the data security risks associated with relocation. Ensuring the fire prevention and detection systems are sufficient to protect personnel is a safety issue, not a data security issue. Reviewing the architectural plans to determine how many emergency exits are present is also a safety issue, not a data security issue. Revising the Disaster Recovery and Business Continuity (DR/BC) plan is a good practice, but it is not a preventive measure, rather a reactive one. A DR/BC plan is a document that outlines how an organization will recover from a disaster and resume its normal operations. A DR/BC plan should be updated regularly, not only when relocating.
Which of the following types of technologies would be the MOST cost-effective method to provide a reactive control for protecting personnel in public areas?
Install mantraps at the building entrances
Enclose the personnel entry area with polycarbonate plastic
Supply a duress alarm for personnel exposed to the public
Hire a guard to protect the public area
Supplying a duress alarm for personnel exposed to the public is the most cost-effective method to provide a reactive control for protecting personnel in public areas. A duress alarm is a device that allows a person to signal for help in case of an emergency, such as an attack, a robbery, or a medical condition. A duress alarm can be activated by pressing a button, pulling a cord, or speaking a code word. A duress alarm can alert security personnel, law enforcement, or other responders to the location and nature of the emergency, and initiate appropriate actions. A duress alarm is a reactive control because it responds to an incident after it has occurred, rather than preventing it from happening.
The other options are not as cost-effective as supplying a duress alarm, as they involve more expensive or complex technologies or resources. Installing mantraps at the building entrances is a preventive control that restricts the access of unauthorized persons to the facility, but it also requires more space, maintenance, and supervision. Enclosing the personnel entry area with polycarbonate plastic is a preventive control that protects the personnel from physical attacks, but it also reduces the visibility and ventilation of the area. Hiring a guard to protect the public area is a deterrent control that discourages potential attackers, but it also involves paying wages, benefits, and training costs.
Users require access rights that allow them to view the average salary of groups of employees. Which control would prevent the users from obtaining an individual employee’s salary?
Limit access to predefined queries
Segregate the database into a small number of partitions each with a separate security level
Implement Role Based Access Control (RBAC)
Reduce the number of people who have access to the system for statistical purposes
Limiting access to predefined queries is the control that would prevent users from obtaining an individual employee’s salary when they only require access rights that allow them to view the average salary of groups of employees. A query is a request for information from a database, expressed in a structured query language (SQL) or through a graphical user interface (GUI); it can specify the criteria, conditions, and operations for selecting, filtering, sorting, grouping, and aggregating data. A predefined query is one that has been created and stored in advance by the database administrator or the data owner, and that authorized users can execute but not modify. Predefined queries constrain users to approved operations, produce consistent results, and prevent the ad-hoc queries or query manipulation that could expose data beyond a user’s authority.
For example, a predefined query can be stored that calculates and displays the average salary of groups of employees by department, position, or experience. Users who need this information can execute the query, but they cannot modify it or create their own queries that might reveal an individual employee’s salary or other sensitive data.
The other options would not prevent users from obtaining an individual employee’s salary. Segregating the database into a small number of partitions, each with a separate security level, improves performance and security by dividing the database into independently managed segments, but a user with access to the partition containing salary data could still construct queries against individual records. Implementing Role Based Access Control (RBAC) enforces access rights based on users’ roles or functions rather than their identities, but a role that legitimately requires access to salary data would still permit ad-hoc queries that reveal individual salaries. Reducing the number of people who have access to the system for statistical purposes lowers the exposure and distribution of the data, but the remaining users could still construct queries that reveal an individual employee’s salary.
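A minimal sketch of the idea using Python's built-in sqlite3; the table, columns, and query text are illustrative assumptions, and a real deployment would enforce the restriction through database permissions, views, or stored procedures rather than application code:

```python
# Sketch: exposing only a predefined aggregate query so users can see group
# averages but never an individual salary. Schema and data are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)", [
    ("alice", "eng", 100000), ("bob", "eng", 120000),
    ("carol", "sales", 90000), ("dave", "sales", 70000),
])

# The only query users may run; it returns aggregates, never raw rows.
# HAVING COUNT(*) >= 2 also blocks singleton groups, which would otherwise
# leak an individual's salary through the "average".
PREDEFINED_QUERY = """
    SELECT dept, AVG(salary) FROM employees
    GROUP BY dept HAVING COUNT(*) >= 2
"""

def average_salary_by_department(connection):
    return dict(connection.execute(PREDEFINED_QUERY).fetchall())

print(average_salary_by_department(conn))
# -> {'eng': 110000.0, 'sales': 80000.0}
```

The HAVING clause illustrates why the query itself must be fixed in advance: even aggregate-only access can leak individual values unless the query is designed to prevent inference.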
An organization has outsourced its financial transaction processing to a Cloud Service Provider (CSP) who will provide them with Software as a Service (SaaS). If there was a data breach who is responsible for monetary losses?
The Data Protection Authority (DPA)
The Cloud Service Provider (CSP)
The application developers
The data owner
The data owner is the person who has the authority and responsibility for the data stored, processed, or transmitted by an Information System (IS). The data owner is responsible for the monetary losses if there was a data breach, as the data owner is accountable for the security, quality, and integrity of the data, as well as for defining the classification, sensitivity, retention, and disposal of the data. The Data Protection Authority (DPA) is not responsible for the monetary losses, but for the enforcement of the data protection laws and regulations. The Cloud Service Provider (CSP) is not responsible for the monetary losses, but for the provision of the cloud services and the protection of the cloud infrastructure. The application developers are not responsible for the monetary losses, but for the development and maintenance of the software applications. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 48; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 40.
Which of the following is the MOST challenging issue in apprehending cyber criminals?
They often use sophisticated method to commit a crime.
It is often hard to collect and maintain integrity of digital evidence.
The crime is often committed from a different jurisdiction.
There is often no physical evidence involved.
The most challenging issue in apprehending cyber criminals is that the crime is often committed from a different jurisdiction. This means that the cyber criminals may operate from a different country or region than the victim or the target, and thus may be subject to different laws, regulations, and enforcement agencies. This can create difficulties and delays in identifying, locating, and prosecuting the cyber criminals, as well as in obtaining and preserving the digital evidence. The other issues, such as the sophistication of the methods, the integrity of the evidence, and the lack of physical evidence, are also challenges in apprehending cyber criminals, but they are not as significant as the jurisdiction issue. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Security Operations, page 475; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 544.
Which one of the following is an advantage of an effective release control strategy from a configuration control standpoint?
Ensures that a trace for all deliverables is maintained and auditable
Enforces backward compatibility between releases
Ensures that there is no loss of functionality between releases
Allows for future enhancements to existing features
An advantage of an effective release control strategy from a configuration control standpoint is that it ensures that a trace for all deliverables is maintained and auditable. Release control is a process that manages the distribution and installation of software releases into the operational environment. Configuration control is a process that maintains the integrity and consistency of the software configuration items throughout the software development life cycle. An effective release control strategy can help to ensure that a trace for all deliverables is maintained and auditable, which means that the origin, history, and status of each software release can be tracked and verified. This can help to prevent unauthorized or incompatible changes, as well as to facilitate troubleshooting and recovery. Enforcing backward compatibility, ensuring no loss of functionality, and allowing for future enhancements are not advantages of release control from a configuration control standpoint, but from a functionality or performance standpoint. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 969; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 7: Software Development Security, page 895.
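A minimal sketch of such a trace, assuming a hash manifest over the release deliverables; the file names and manifest fields are illustrative, and a real release-control tool would also record versions, approvers, and timestamps:

```python
# Sketch: an auditable trace of release deliverables via a SHA-256 manifest.
import hashlib
import json

def build_manifest(deliverables):
    """Map each deliverable name to the SHA-256 digest of its content."""
    return {name: hashlib.sha256(content).hexdigest()
            for name, content in deliverables.items()}

release = {"app.bin": b"compiled application", "config.yml": b"settings: prod"}
manifest = build_manifest(release)
print(json.dumps(manifest, indent=2))

# Later, an auditor can verify that a deliverable has not changed:
assert manifest["app.bin"] == hashlib.sha256(b"compiled application").hexdigest()
```

Storing such a manifest with each release gives the origin, history, and status of every deliverable a verifiable anchor, supporting the traceability the answer describes.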
Which of the following is the BEST way to reduce the impact of an externally sourced flood attack?
Have the service provider block the source address.
Have the source service provider block the address.
Block the source address at the firewall.
Block all inbound traffic until the flood ends.
The best way to reduce the impact of an externally sourced flood attack is to have the service provider block the source address. A flood attack is a type of denial-of-service attack that aims to overwhelm the target system or network with a large amount of traffic, such as SYN packets, ICMP packets, or UDP packets. An externally sourced flood attack is a flood attack that originates from outside the target’s network, such as from the internet. Having the service provider block the source address can help to reduce the impact of an externally sourced flood attack, as it can prevent the malicious traffic from reaching the target’s network, and thus conserve the network bandwidth and resources. Having the source service provider block the address, blocking the source address at the firewall, or blocking all inbound traffic until the flood ends are not the best ways to reduce the impact of an externally sourced flood attack, as they may not be feasible, effective, or efficient, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 745; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 525.
Which one of the following data integrity models assumes a lattice of integrity levels?
Take-Grant
Biba
Harrison-Ruzzo
Bell-LaPadula
The Biba model is a data integrity model that assumes a lattice of integrity levels, where each subject and object is assigned a fixed integrity level. The model enforces two rules: the simple integrity property and the *-integrity property. The simple integrity property (“no read down”) states that a subject may only read an object at an equal or higher integrity level, which prevents the subject from being contaminated by low-integrity data. The *-integrity property (“no write up”) states that a subject may only write to an object at an equal or lower integrity level, which prevents low-integrity subjects from corrupting high-integrity data. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, page 316; CISSP For Dummies, 7th Edition, page 113.
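The two rules can be sketched over a simple numeric lattice; the level names and ordering here are illustrative assumptions:

```python
# Sketch of the two Biba rules over a numeric integrity lattice
# (higher number = higher integrity). Level names are illustrative.

LEVELS = {"untrusted": 0, "user": 1, "system": 2}

def can_read(subject_level, object_level):
    # Simple integrity property: no read down.
    return LEVELS[object_level] >= LEVELS[subject_level]

def can_write(subject_level, object_level):
    # *-integrity property: no write up.
    return LEVELS[object_level] <= LEVELS[subject_level]

print(can_read("user", "untrusted"))  # False: would ingest low-integrity data
print(can_write("user", "system"))    # False: would contaminate high integrity
print(can_read("user", "system"))     # True: reading up is allowed
```

Note the rules are the mirror image of Bell-LaPadula's confidentiality rules, which is a common source of confusion between the two models.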
Which of the following MUST be scalable to address security concerns raised by the integration of third-party identity services?
Mandatory Access Controls (MAC)
Enterprise security architecture
Enterprise security procedures
Role Based Access Controls (RBAC)
Enterprise security architecture is the framework that defines the security policies, standards, guidelines, and controls that govern the security of an organization’s information systems and assets. Enterprise security architecture must be scalable to address the security concerns raised by the integration of third-party identity services, such as Identity as a Service (IDaaS) or federated identity management. Scalability means that the enterprise security architecture can accommodate the increased complexity, diversity, and volume of identity and access management transactions and interactions that result from the integration of external identity providers and consumers. Scalability also means that the enterprise security architecture can adapt to the changing security requirements and threats that may arise from the integration of third-party identity services.
The security accreditation task of the System Development Life Cycle (SDLC) process is completed at the end of which phase?
System acquisition and development
System operations and maintenance
System initiation
System implementation
The security accreditation task of the System Development Life Cycle (SDLC) process is completed at the end of the system implementation phase. The SDLC is a framework that describes the stages and activities involved in the development, deployment, and maintenance of a system. The SDLC typically consists of the following phases: system initiation, system acquisition and development, system implementation, system operations and maintenance, and system disposal. The security accreditation task is the process of formally authorizing a system to operate in a specific environment, based on the security requirements, controls, and risks. The security accreditation task is part of the security certification and accreditation (C&A) process, which also includes the security certification task, which is the process of technically evaluating and testing the security controls and functionality of a system. The security accreditation task is completed at the end of the system implementation phase, which is the phase where the system is installed, configured, integrated, and tested in the target environment. The security accreditation task involves reviewing the security certification results and documentation, such as the security plan, the security assessment report, and the plan of action and milestones, and making a risk-based decision to grant, deny, or conditionally grant the authorization to operate (ATO) the system. The security accreditation task is usually performed by a senior official, such as the authorizing official (AO) or the designated approving authority (DAA), who has the authority and responsibility to accept the security risks and approve the system operation. The security accreditation task is not completed at the end of the system acquisition and development, system operations and maintenance, or system initiation phases. 
The system acquisition and development phase is the phase where the system requirements, design, and development are defined and executed, and the security controls are selected and implemented. The system operations and maintenance phase is the phase where the system is used and supported in the operational environment, and the security controls are monitored and updated. The system initiation phase is the phase where the system concept, scope, and objectives are established, and the security categorization and planning are performed.
The design review for an application has been completed and is ready for release. What technique should an organization use to assure application integrity?
Application authentication
Input validation
Digital signing
Device encryption
The technique that an organization should use to assure application integrity is digital signing. Digital signing is a technique that uses cryptography to generate a digital signature for a message or a document, such as an application. The digital signature is a value that is derived from the message and the sender’s private key, and it can be verified by the receiver using the sender’s public key. Digital signing can help to assure application integrity, which means that the application has not been altered or tampered with during the transmission or storage. Digital signing can also help to assure application authenticity, which means that the application originates from the legitimate source. Application authentication, input validation, and device encryption are not techniques that can assure application integrity, but they can help to assure application security, usability, or confidentiality, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 607; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 388.
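As an illustration of how signature verification detects tampering, here is a minimal Python sketch. Real code signing uses asymmetric keys (the vendor signs with a private key and users verify with the public key); since the standard library has no RSA support, this sketch substitutes an HMAC with a hypothetical key to demonstrate the same detect-any-modification property.

```python
import hashlib
import hmac

# Hypothetical signing key; a real digital signature would use an
# asymmetric private/public key pair instead of a shared secret.
KEY = b"hypothetical-signing-key"

def sign(module: bytes) -> bytes:
    # Produce a keyed digest over the application bytes.
    return hmac.new(KEY, module, hashlib.sha256).digest()

def verify(module: bytes, signature: bytes) -> bool:
    # Recompute the digest and compare in constant time.
    return hmac.compare_digest(sign(module), signature)

app = b"application release build"
sig = sign(app)

assert verify(app, sig)                     # unmodified: verifies
assert not verify(app + b" tampered", sig)  # any change breaks verification
```

The key property shown is that any single-byte change to the application invalidates the signature, which is what assures integrity at release time.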
Which of the following is the PRIMARY risk with using open source software in a commercial software construction?
Lack of software documentation
License agreements requiring release of modified code
Expiration of the license agreement
Costs associated with support of the software
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code. Open source software is software that uses publicly available source code, which can be seen, modified, and distributed by anyone. Open source software has some advantages, such as being affordable and flexible, but it also has some disadvantages, such as being potentially insecure or unsupported.
One of the main disadvantages of using open source software in a commercial software construction is the license agreements that govern the use and distribution of the open source software. License agreements are legal contracts that specify the rights and obligations of the parties involved in the software, such as the original authors, the developers, and the users. License agreements can vary in terms of their terms and conditions, such as the scope, the duration, or the fees of the software.
Some of the common types of license agreements for open source software are permissive licenses, such as the MIT, BSD, and Apache licenses, which allow the software to be used, modified, and redistributed with minimal conditions, and copyleft licenses, such as the GNU General Public License (GPL), which require that modified or derivative works be distributed under the same license terms.
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code, which are usually associated with copyleft licenses. This means that if a commercial software construction uses or incorporates open source software that is licensed under a copyleft license, then it must also release its own source code and any modifications or derivatives of it, under the same or compatible copyleft license. This can pose a significant risk for the commercial software construction, as it may lose its competitive advantage, intellectual property, or revenue, by disclosing its source code and allowing others to use, modify, or distribute it.
The other options are not the primary risks with using open source software in a commercial software construction, but rather secondary or minor risks that may or may not apply to the open source software. Lack of software documentation is a secondary risk with using open source software in a commercial software construction, as it may affect the quality, usability, or maintainability of the open source software, but it does not necessarily affect the rights or obligations of the commercial software construction. Expiration of the license agreement is a minor risk with using open source software in a commercial software construction, as it may affect the availability or continuity of the open source software, but it is unlikely to happen, as most open source software licenses are perpetual or indefinite. Costs associated with support of the software are a secondary risk with using open source software in a commercial software construction, as they may affect the reliability, security, or performance of the open source software, but they can be mitigated or avoided by choosing open source software that has adequate or alternative support options.
The configuration management and control task of the certification and accreditation process is incorporated in which phase of the System Development Life Cycle (SDLC)?
System acquisition and development
System operations and maintenance
System initiation
System implementation
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the System Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as system initiation, system acquisition and development, system implementation, system operations and maintenance, and system disposal.
The certification and accreditation process is a process that involves assessing and verifying the security and compliance of a system, and authorizing and approving the system operation and maintenance, using various standards and frameworks, such as NIST SP 800-37 or ISO/IEC 27001. The certification and accreditation process can be divided into several tasks, each with its own objectives and activities, such as security categorization, security planning, security assessment, security authorization, configuration management and control, and security monitoring.
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the SDLC, because it can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system changes are controlled and documented. Configuration management and control is a process that involves establishing and maintaining the baseline and the inventory of the system components and resources, such as hardware, software, data, or documentation, and tracking and recording any modifications or updates to the system components and resources, using various techniques and tools, such as version control, change control, or configuration audits. Configuration management and control can provide several benefits, such as maintaining the integrity and traceability of the system baseline, preventing unauthorized or undocumented changes, and supporting analysis of the security impact of proposed changes.
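The baseline-and-track idea described above can be sketched in a few lines of Python: record a hash for each configuration item, then flag any item whose current hash no longer matches the baseline. The item names and contents are illustrative.

```python
import hashlib

def baseline(items: dict) -> dict:
    # Record a SHA-256 fingerprint for each configuration item.
    return {name: hashlib.sha256(data).hexdigest() for name, data in items.items()}

def changed_items(items: dict, base: dict) -> set:
    # Return the names of items whose current hash differs from the baseline.
    current = baseline(items)
    return {name for name in current if current[name] != base.get(name)}

# Establish the approved baseline at the end of development.
base = baseline({"app.cfg": b"v1", "web.cfg": b"v1"})

# Later, a configuration audit detects an uncontrolled change to web.cfg.
assert changed_items({"app.cfg": b"v1", "web.cfg": b"v2"}, base) == {"web.cfg"}
```

A real configuration management system would also record who made each change and under which approved change request, but the detection mechanism is the same comparison against a recorded baseline.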
The other options are not the phases of the SDLC that incorporate the configuration management and control task of the certification and accreditation process, but rather phases that involve other tasks of the certification and accreditation process. System operations and maintenance is a phase of the SDLC that incorporates the security monitoring task of the certification and accreditation process, because it can ensure that the system operation and maintenance are consistent and compliant with the security objectives and requirements, and that the system security is updated and improved. System initiation is a phase of the SDLC that incorporates the security categorization and security planning tasks of the certification and accreditation process, because it can ensure that the system scope and objectives are defined and aligned with the security objectives and requirements, and that the security plan and policy are developed and documented. System implementation is a phase of the SDLC that incorporates the security assessment and security authorization tasks of the certification and accreditation process, because it can ensure that the system deployment and installation are evaluated and verified for the security effectiveness and compliance, and that the system operation and maintenance are authorized and approved based on the risk and impact analysis and the security objectives and requirements.
When in the Software Development Life Cycle (SDLC) MUST software security functional requirements be defined?
After the system preliminary design has been developed and the data security categorization has been performed
After the vulnerability analysis has been performed and before the system detailed design begins
After the system preliminary design has been developed and before the data security categorization begins
After the business functional analysis and the data security categorization have been performed
Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed in the Software Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as:
Software security functional requirements are the specific and measurable security features and capabilities that the system must provide to meet the security objectives and requirements. Software security functional requirements are derived from the business functional analysis and the data security categorization, which are two tasks that are performed in the system initiation phase of the SDLC. The business functional analysis is the process of identifying and documenting the business functions and processes that the system must support and enable, such as the inputs, outputs, workflows, and tasks. The data security categorization is the process of determining the security level and impact of the system and its data, based on the confidentiality, integrity, and availability criteria, and applying the appropriate security controls and measures. Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed, because they can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system security is aligned and integrated with the business functions and processes.
The other options are not the phases of the SDLC when the software security functional requirements must be defined, but rather phases that involve other tasks or activities related to the system design and development. After the system preliminary design has been developed and the data security categorization has been performed is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is verified and validated. After the vulnerability analysis has been performed and before the system detailed design begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system design and components are evaluated and tested for the security effectiveness and compliance, and the system detailed design is developed, based on the system architecture and components. After the system preliminary design has been developed and before the data security categorization begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is initiated and planned.
Which of the following is a web application control that should be put into place to prevent exploitation of Operating System (OS) bugs?
Check arguments in function calls
Test for the security patch level of the environment
Include logging functions
Digitally sign each application module
Testing for the security patch level of the environment is the web application control that should be put into place to prevent exploitation of Operating System (OS) bugs. OS bugs are errors or defects in the code or logic of the OS that can cause the OS to malfunction or behave unexpectedly. OS bugs can be exploited by attackers to gain unauthorized access, disrupt business operations, or steal or leak sensitive data. Testing for the security patch level of the environment is the web application control that should be put into place to prevent exploitation of OS bugs, because it can identify missing or outdated patches in the underlying OS before attackers can exploit the known vulnerabilities they address, and it can verify that the environment hosting the web application meets the organization's baseline security requirements.
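A deployment pipeline might enforce this control with a check like the following sketch, which compares the installed OS patch level against a required minimum before allowing release. The version strings and the dotted-version format are assumptions for illustration.

```python
def parse_version(v: str) -> tuple:
    # Convert "10.0.19041" into (10, 0, 19041) for numeric comparison.
    return tuple(int(part) for part in v.split("."))

def patch_level_ok(installed: str, required: str) -> bool:
    # The environment passes only if its patch level meets the minimum.
    return parse_version(installed) >= parse_version(required)

assert patch_level_ok("10.0.19045", "10.0.19041")      # up to date
assert not patch_level_ok("10.0.17763", "10.0.19041")  # missing patches: block deployment
```

In practice the installed value would be read from the OS (and compared against vendor advisories) rather than hard-coded, but the gating decision is this comparison.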
The other options are not the web application controls that should be put into place to prevent exploitation of OS bugs, but rather web application controls that can prevent or mitigate other types of web application attacks or issues. Checking arguments in function calls is a web application control that can prevent or mitigate buffer overflow attacks, which are attacks that exploit the vulnerability of the web application code that does not properly check the size or length of the input data that is passed to a function or a variable, and overwrite the adjacent memory locations with malicious code or data. Including logging functions is a web application control that can prevent or mitigate unauthorized access or modification attacks, which are attacks that exploit the lack of or weak authentication or authorization mechanisms of the web applications, and access or modify the web application data or functionality without proper permission or verification. Digitally signing each application module is a web application control that can prevent or mitigate code injection or tampering attacks, which are attacks that exploit the vulnerability of the web application code that does not properly validate or sanitize the input data that is executed or interpreted by the web application, and inject or modify the web application code with malicious code or data.
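The argument-checking control mentioned above can be sketched in Python: validate the type and length of input before it reaches lower-level code, the same discipline that prevents overflows of fixed-size buffers in lower-level languages. The field name and the length limit are illustrative.

```python
MAX_NAME_LEN = 64  # illustrative bound on the accepted input size

def set_username(name: str) -> str:
    # Reject wrong types and out-of-bounds lengths before the value
    # is passed to any lower-level storage or processing routine.
    if not isinstance(name, str):
        raise TypeError("username must be a string")
    if not 1 <= len(name) <= MAX_NAME_LEN:
        raise ValueError("username length out of bounds")
    return name

assert set_username("alice") == "alice"   # valid argument accepted
```

In a language like C the same check would guard a `strcpy` into a fixed-size buffer; here the point is simply that every argument is validated at the function boundary.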
A Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. The program is not working as expected. What is the MOST probable security feature of Java preventing the program from operating as intended?
Least privilege
Privilege escalation
Defense in depth
Privilege bracketing
The most probable security feature of Java preventing the program from operating as intended is least privilege. Least privilege is a principle that states that a subject (such as a user, a process, or a program) should only have the minimum amount of access or permissions that are necessary to perform its function or task. Least privilege can help to reduce the attack surface and the potential damage of a system or network, by limiting the exposure and impact of a subject in case of a compromise or misuse.
Java implements the principle of least privilege through its security model, which consists of several components, such as the class loader, the bytecode verifier, the security manager, and the security policy files that define which permissions are granted to code based on its source or signer.
In this question, the Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. This means that the Java program needs to have the permissions to perform the file I/O and the network communication operations, which are considered as sensitive or risky actions by the Java security model. However, if the Java program is running on computer C with the default or the minimal security permissions, such as in the Java Security Sandbox, then it will not be able to perform these operations, and the program will not work as expected. Therefore, the most probable security feature of Java preventing the program from operating as intended is least privilege, which limits the access or permissions of the Java program based on its source, signer, or policy.
The other options are not the security features of Java preventing the program from operating as intended, but rather concepts or techniques that are related to security in general or in other contexts. Privilege escalation is a technique that allows a subject to gain higher or unauthorized access or permissions than what it is supposed to have, by exploiting a vulnerability or a flaw in a system or network. Privilege escalation can help an attacker to perform malicious actions or to access sensitive resources or data, by bypassing the security controls or restrictions. Defense in depth is a concept that states that a system or network should have multiple layers or levels of security, to provide redundancy and resilience in case of a breach or an attack. Defense in depth can help to protect a system or network from various threats and risks, by using different types of security measures and controls, such as the physical, the technical, or the administrative ones. Privilege bracketing is a technique that allows a subject to temporarily elevate or lower its access or permissions, to perform a specific function or task, and then return to its original or normal level. Privilege bracketing can help to reduce the exposure and impact of a subject, by minimizing the time and scope of its higher or lower access or permissions.
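A minimal sketch of the kind of least-privilege check a security manager performs: an operation succeeds only if it appears in the code's granted permission set. The permission strings here are hypothetical, not actual Java permission names, and the sketch is in Python for brevity.

```python
# Sandbox-style default grant: only a local read is permitted.
GRANTED = {"read:/tmp/input"}

def check_permission(perms: set, action: str) -> None:
    # Deny by default: anything not explicitly granted raises an error.
    if action not in perms:
        raise PermissionError(f"denied: {action}")

check_permission(GRANTED, "read:/tmp/input")  # permitted, returns normally

# The file-copy program in the question would fail at a check like:
#   check_permission(GRANTED, "connect:computerB")  -> PermissionError
```

This is why the program in the question fails on computer C: its file and network operations are not in the granted set, so the checks deny them until the policy explicitly grants those permissions.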
What is the BEST approach to addressing security issues in legacy web applications?
Debug the security issues
Migrate to newer, supported applications where possible
Conduct a security assessment
Protect the legacy application with a web application firewall
Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications. Legacy web applications are web applications that are outdated, unsupported, or incompatible with the current technologies and standards. Legacy web applications may have various security issues, such as unpatched or unpatchable vulnerabilities, weak or outdated encryption and authentication mechanisms, lack of vendor support and security updates, and incompatibility with modern security controls.
Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications, because it can provide several benefits, such as ongoing vendor support and security patches, modern security features and protocols, and a reduced attack surface from retiring obsolete components.
The other options are not the best approaches to addressing security issues in legacy web applications, but rather approaches that can mitigate or remediate the security issues, but not eliminate or prevent them. Debugging the security issues is an approach that can mitigate the security issues in legacy web applications, but not the best approach, because it involves identifying and fixing the errors or defects in the code or logic of the web applications, which may be difficult or impossible to do for the legacy web applications that are outdated or unsupported. Conducting a security assessment is an approach that can remediate the security issues in legacy web applications, but not the best approach, because it involves evaluating and testing the security effectiveness and compliance of the web applications, using various techniques and tools, such as audits, reviews, scans, or penetration tests, and identifying and reporting any security weaknesses or gaps, which may not be sufficient or feasible to do for the legacy web applications that are incompatible or obsolete. Protecting the legacy application with a web application firewall is an approach that can mitigate the security issues in legacy web applications, but not the best approach, because it involves deploying and configuring a web application firewall, which is a security device or software that monitors and filters the web traffic between the web applications and the users or clients, and blocks or allows the web requests or responses based on the predefined rules or policies, which may not be effective or efficient to do for the legacy web applications that have weak or outdated encryption or authentication mechanisms.
Which one of the following security mechanisms provides the BEST way to restrict the execution of privileged procedures?
Role Based Access Control (RBAC)
Biometric access control
Federated Identity Management (IdM)
Application hardening
Role Based Access Control (RBAC) is the security mechanism that provides the best way to restrict the execution of privileged procedures. Privileged procedures are the actions or commands that require higher or special permissions or privileges to perform, such as changing system settings, installing software, or accessing sensitive data. RBAC is a security model that assigns permissions and privileges to roles, rather than to individual users. Roles are defined based on the functions or responsibilities of the users in an organization. Users are assigned to roles based on their qualifications or credentials. RBAC enforces the principle of least privilege, which means that users only have the minimum permissions and privileges necessary to perform their tasks. RBAC also simplifies the administration and management of access control, as it reduces the complexity and redundancy of assigning permissions and privileges to individual users. RBAC is not the same as biometric access control, federated identity management, or application hardening. Biometric access control is a security mechanism that uses physical or behavioral characteristics of the users, such as fingerprints, iris patterns, or voice recognition, to authenticate and authorize them. Federated identity management is a security mechanism that enables the sharing and recognition of identity information across different organizations or domains, using standards and protocols such as SAML, OAuth, or OpenID. Application hardening is a security mechanism that involves the modification or improvement of an application’s code, design, or configuration, to make it more resistant to attacks or vulnerabilities.
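The role-based model described above can be sketched in a few lines of Python: permissions attach to roles, users attach to roles, and a privileged procedure runs only if one of the user's roles carries the required permission. The roles, users, and permission names are illustrative.

```python
# Permissions are assigned to roles, never directly to users.
ROLE_PERMISSIONS = {
    "operator": {"view_dashboard"},
    "admin": {"view_dashboard", "change_settings", "install_software"},
}

# Users are assigned to roles based on their job function.
USER_ROLES = {"alice": {"admin"}, "bob": {"operator"}}

def is_authorized(user: str, permission: str) -> bool:
    # A user may execute a privileged procedure only if some assigned
    # role grants the corresponding permission (deny by default).
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

assert is_authorized("alice", "install_software")      # admin role grants it
assert not is_authorized("bob", "install_software")    # operator role does not
```

Note how administration is simplified: revoking bob's ability to view the dashboard means editing one role, not touching every individual user account.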
Which of the following statements is TRUE for point-to-point microwave transmissions?
They are not subject to interception due to encryption.
Interception only depends on signal strength.
They are too highly multiplexed for meaningful interception.
They are subject to interception by an antenna within proximity.
They are subject to interception by an antenna within proximity. Point-to-point microwave transmissions are line-of-sight media, which means that they can be intercepted by any antenna that is in the direct path of the signal. The interception does not depend on encryption, multiplexing, or signal strength, as long as the antenna is close enough to receive the signal.
While impersonating an Information Security Officer (ISO), an attacker obtains information from company employees about their User IDs and passwords. Which method of information gathering has the attacker used?
Trusted path
Malicious logic
Social engineering
Passive misuse
Social engineering is the method of information gathering that the attacker has used while impersonating an ISO and obtaining information from company employees about their User IDs and passwords. Social engineering is a technique of manipulating or deceiving people into revealing confidential or sensitive information, or performing actions that compromise the security of an organization or a system [1]. Social engineering can exploit the human factors, such as trust, curiosity, fear, or greed, to influence the behavior or judgment of the target. Social engineering can take various forms, such as phishing, baiting, pretexting, or impersonation. Trusted path, malicious logic, and passive misuse are not methods of information gathering that the attacker has used, as they are related to different aspects of security or attack. References: 1: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 19.
Which of the following statements is TRUE of black box testing?
Only the functional specifications are known to the test planner.
Only the source code and the design documents are known to the test planner.
Only the source code and functional specifications are known to the test planner.
Only the design documents and the functional specifications are known to the test planner.
Black box testing is a method of software testing that does not require any knowledge of the internal structure or code of the software [1]. The test planner only knows the functional specifications, which describe what the software is supposed to do, and tests the software based on the expected inputs and outputs. Black box testing is useful for finding errors in the functionality, usability, or performance of the software, but it cannot detect errors in the code or design. White box testing, on the other hand, requires the test planner to have access to the source code and the design documents, and tests the software based on the internal logic and structure [2]. References: 1: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 21, page 1313; 2: CISSP For Dummies, 7th Edition, Chapter 8, page 215.
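A small sketch of the black-box approach: the test cases are derived purely from a stated functional specification (here, a hypothetical discount rule) and check inputs against expected outputs without inspecting the implementation.

```python
# System under test. To a black-box tester this body is opaque; only the
# specification is known: "orders of 100 or more get 10% off, others none."
def discount(total: float) -> float:
    return total * 0.9 if total >= 100 else total

# Test cases derived solely from the functional specification,
# including the boundary between the two behaviors:
assert discount(100) == 90.0   # boundary: discount applies
assert discount(99) == 99      # just below boundary: no discount
assert discount(0) == 0        # trivial input
```

A white-box tester, by contrast, would read the `total >= 100` branch and design tests to cover each path through the code.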
When constructing an Information Protection Policy (IPP), it is important that the stated rules are necessary, adequate, and
flexible.
confidential.
focused.
achievable.
An Information Protection Policy (IPP) is a document that defines the objectives, scope, roles, responsibilities, and rules for protecting the information assets of an organization. An IPP should be aligned with the business goals and legal requirements, and should be communicated and enforced throughout the organization. When constructing an IPP, it is important that the stated rules are necessary, adequate, and achievable, meaning that they are relevant, sufficient, and realistic for the organization’s context and capabilities [3][4]. References: 3: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 23; 4: CISSP For Dummies, 7th Edition, Chapter 1, page 15.
Which of the following is the BEST way to verify the integrity of a software patch?
Cryptographic checksums
Version numbering
Automatic updates
Vendor assurance
The best way to verify the integrity of a software patch is to use cryptographic checksums. Cryptographic checksums are mathematical values that are computed from the data in the software patch using a hash function or algorithm. Cryptographic checksums can be used to compare the original and the downloaded or installed version of the software patch, and to detect any alteration, corruption, or tampering of the data. Cryptographic checksums are also known as hashes, digests, or fingerprints, and they are often provided by the software vendor along with the software patch [1][2]. References: 1: What is a Checksum and How to Calculate a Checksum; 2: How to Verify File Integrity Using Hashes
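A minimal Python sketch of checksum verification, assuming the vendor publishes a SHA-256 value alongside the patch (the patch bytes here are a placeholder):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    # Compute the SHA-256 digest of the patch data as a hex string.
    return hashlib.sha256(data).hexdigest()

patch_bytes = b"patch-v1.2.3 contents"      # placeholder patch data
published = sha256_hex(patch_bytes)          # value the vendor would publish

# An intact download reproduces the published checksum exactly.
assert sha256_hex(patch_bytes) == published

# Any alteration, even a single appended byte, changes the checksum.
tampered = patch_bytes + b"\x00"
assert sha256_hex(tampered) != published
```

In practice the downloader recomputes the hash over the received file and compares it to the vendor's published value before installing the patch.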
Which of the following is an authentication protocol in which a new random number is generated uniquely for each login session?
Challenge Handshake Authentication Protocol (CHAP)
Point-to-Point Protocol (PPP)
Extensible Authentication Protocol (EAP)
Password Authentication Protocol (PAP)
Challenge Handshake Authentication Protocol (CHAP) is an authentication protocol in which a new random number is generated uniquely for each login session. CHAP is used to authenticate a user or a system over a Point-to-Point Protocol (PPP) connection, such as a dial-up or a VPN connection. CHAP works as follows: The server sends a challenge message to the client, which contains a random number. The client calculates a response by applying a one-way hash function to the random number and its own secret key, and sends the response back to the server. The server performs the same calculation using the same random number and the secret key stored in its database, and compares the results. If they match, the authentication is successful. CHAP provides more security than Password Authentication Protocol (PAP), which sends the username and password in clear text over the network. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 516; CISSP For Dummies, 7th Edition, Chapter 5, page 151.
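The challenge-response exchange described above can be sketched in Python. This is an illustrative simplification, not the RFC 1994 wire format (which hashes an identifier, the secret, and the challenge with MD5); the point shown is that a fresh random challenge per session means the secret never crosses the network and responses cannot be replayed.

```python
import hashlib
import hmac
import os

SHARED_SECRET = b"s3cret"  # provisioned out of band on client and server

def make_challenge() -> bytes:
    # A new random number is generated uniquely for each login session.
    return os.urandom(16)

def client_response(challenge: bytes, secret: bytes) -> bytes:
    # The client hashes the challenge with its secret; only the hash is sent.
    return hashlib.sha256(challenge + secret).digest()

def server_verify(challenge: bytes, response: bytes, secret: bytes) -> bool:
    # The server repeats the calculation with its stored secret and compares.
    expected = hashlib.sha256(challenge + secret).digest()
    return hmac.compare_digest(expected, response)

ch = make_challenge()
assert server_verify(ch, client_response(ch, SHARED_SECRET), SHARED_SECRET)
assert not server_verify(ch, client_response(ch, b"wrong"), SHARED_SECRET)
```

Because the challenge differs every session, a captured response is useless for a later login, unlike PAP's static cleartext credentials.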
Which of the following MUST be done when promoting a security awareness program to senior management?
Show the need for security; identify the message and the audience
Ensure that the security presentation is designed to be all-inclusive
Notify them that their compliance is mandatory
Explain how hackers have enhanced information security
The most important thing to do when promoting a security awareness program to senior management is to show the need for security; identify the message and the audience. This means that you should demonstrate how security awareness can benefit the organization, reduce risks, and align with the business goals. You should also tailor your message and your audience according to the specific security issues and challenges that your organization faces. Ensuring that the security presentation is designed to be all-inclusive, notifying them that their compliance is mandatory, or explaining how hackers have enhanced information security are not the most effective ways to promote a security awareness program, as they may not address the specific needs, interests, or concerns of senior management. References: 9: Seven Keys to Success for a More Mature Security Awareness Program; 11: 6 Metrics to Track in Your Cybersecurity Awareness Training Campaign
Which one of the following considerations has the LEAST impact when considering transmission security?
Network availability
Data integrity
Network bandwidth
Node locations
Network bandwidth is the least important consideration when considering transmission security, as it is more related to the performance or efficiency of the network, rather than the security or protection of the data. Network bandwidth is the amount of data that can be transmitted or received over a network in a given time period, and it can affect the speed or quality of the communication [1]. However, network bandwidth does not directly impact the confidentiality, integrity, or availability of the data, which are the main goals of transmission security. Network availability, data integrity, and node locations are more important considerations when considering transmission security, as they can affect the ability to access, verify, or protect the data from unauthorized or malicious parties. References: 1: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 402.
When building a data center, site location and construction factors that increase the level of vulnerability to physical threats include
hardened building construction with consideration of seismic factors.
adequate distance from and lack of access to adjacent buildings.
curved roads approaching the data center.
proximity to high crime areas of the city.
When building a data center, site location and construction factors that increase the level of vulnerability to physical threats include proximity to high crime areas of the city. This factor increases the risk of theft, vandalism, sabotage, or other malicious acts that could damage or disrupt the data center operations. The other options are factors that decrease the level of vulnerability to physical threats, as they provide protection or deterrence against natural or human-made hazards. Hardened building construction with consideration of seismic factors (A) reduces the impact of earthquakes or other natural disasters. Adequate distance from and lack of access to adjacent buildings (B) prevents unauthorized entry or fire spread from neighboring structures. Curved roads approaching the data center (C) slow down the speed of vehicles and make it harder for attackers to ram or bomb the data center. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 637; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 10, page 699.
Which of the following can BEST prevent security flaws occurring in outsourced software development?
Contractual requirements for code quality
Licensing, code ownership and intellectual property rights
Certification of the quality and accuracy of the work done
Delivery dates, change management control and budgetary control
The best way to prevent security flaws occurring in outsourced software development is to establish contractual requirements for code quality that specify the security standards, guidelines, and best practices that the outsourced developers must follow. This way, the organization can ensure that the outsourced software meets the expected level of security and quality, and that any security flaws are detected and remediated before delivery. The other options are not as effective as contractual requirements for code quality, as they either do not address the security aspects of the software development (B and D), or do not prevent the security flaws from occurring in the first place (C). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 472; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, page 572.
Which of the following is the FIRST step of a penetration test plan?
Analyzing a network diagram of the target network
Notifying the company's customers
Obtaining the approval of the company's management
Scheduling the penetration test during a period of least impact
The first step of a penetration test plan is to obtain the approval of the company’s management, as well as the consent of the target network’s owner or administrator. This is essential to ensure the legality, ethics, and scope of the test, as well as to define the objectives, expectations, and deliverables of the test. Without proper authorization, a penetration test could be considered an unauthorized or malicious attack, and could result in legal or reputational consequences. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 758; CISSP For Dummies, 7th Edition, Chapter 7, page 234.
What should be the INITIAL response to Intrusion Detection System/Intrusion Prevention System (IDS/IPS) alerts?
Ensure that the Incident Response Plan is available and current.
Determine the traffic's initial source and block the appropriate port.
Disable or disconnect suspected target and source systems.
Verify the threat and determine the scope of the attack.
The initial response to Intrusion Detection System/Intrusion Prevention System (IDS/IPS) alerts should be to verify the threat and determine the scope of the attack, as this will help to confirm the validity and severity of the alert, and to identify the affected systems, networks, and data. This step is essential to avoid false positives, false negatives, and overreactions, and to prepare for the appropriate mitigation and recovery actions. Ensuring that the Incident Response Plan is available and current is a preparatory step that should be done before any IDS/IPS alert occurs, not after. Determining the traffic’s initial source and blocking the appropriate port, and disabling or disconnecting suspected target and source systems, are possible mitigation steps that should be done after verifying the threat and determining the scope of the attack, not before. References: IDS vs IPS - What’s the Difference & Which do You Need? (Comparitech); IDS vs. IPS: Definitions, Comparisons & Why You Need Both (Okta); IDS and IPS: Understanding Similarities and Differences (EC-Council).
Which of the following is TRUE about Disaster Recovery Plan (DRP) testing?
Operational networks are usually shut down during testing.
Testing should continue even if components of the test fail.
The company is fully prepared for a disaster if all tests pass.
Testing should not be done until the entire disaster plan can be tested.
Testing is a vital part of the Disaster Recovery Plan (DRP) process, as it validates the effectiveness and feasibility of the plan, identifies gaps and weaknesses, and provides opportunities for improvement and training. Testing should continue even if components of the test fail, as this will help to evaluate the impact of the failure, the root cause of the problem, and the possible solutions or alternatives. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 1035; CISSP For Dummies, 7th Edition, Chapter 10, page 351.
In Disaster Recovery (DR) and business continuity training, which BEST describes a functional drill?
A full-scale simulation of an emergency and the subsequent response functions
A specific test by response teams of individual emergency response functions
A functional evacuation of personnel
An activation of the backup site
A functional drill is a type of disaster recovery and business continuity training that involves a specific test by response teams of individual emergency response functions, such as fire suppression, medical assistance, or data backup. A functional drill is designed to evaluate the performance, coordination, and effectiveness of the response teams and the emergency procedures. A functional drill is not the same as a full-scale simulation, a functional evacuation, or an activation of the backup site. A full-scale simulation is a type of disaster recovery and business continuity training that involves a realistic and comprehensive scenario of an emergency and the subsequent response functions, involving all the stakeholders, resources, and equipment. A functional evacuation is a type of disaster recovery and business continuity training that involves the orderly and safe movement of personnel from a threatened or affected area to a safe location. An activation of the backup site is a type of disaster recovery and business continuity action that involves the switching of operations from the primary site to the secondary site in the event of a disaster or disruption.
Following the completion of a network security assessment, which of the following can BEST be demonstrated?
The effectiveness of controls can be accurately measured
A penetration test of the network will fail
The network is compliant to industry standards
All unpatched vulnerabilities have been identified
A network security assessment is a process of evaluating the security posture of a network by identifying and analyzing vulnerabilities, threats, and risks. The results of the assessment can help measure how well the network controls are performing and where they need improvement.
B, C, and D are incorrect because they are not the main objectives or outcomes of a network security assessment. A penetration test is a type of security assessment that simulates an attack on the network, but it does not guarantee that the network will fail or succeed. The network may or may not be compliant to industry standards depending on the criteria and scope of the assessment. Not all unpatched vulnerabilities may be identified by the assessment, as some may be unknown or undetectable by the tools or methods used.
The goal of software assurance in application development is to
enable the development of High Availability (HA) systems.
facilitate the creation of Trusted Computing Base (TCB) systems.
prevent the creation of vulnerable applications.
encourage the development of open source applications.
The goal of software assurance in application development is to prevent the creation of vulnerable applications. Software assurance is the process of ensuring that the software is designed, developed, and maintained in a secure, reliable, and trustworthy manner. Software assurance involves applying security principles, standards, and best practices throughout the software development life cycle, such as security requirements, design, coding, testing, deployment, and maintenance. Software assurance aims to prevent or reduce the introduction of vulnerabilities, defects, or errors in the software that could compromise its security, functionality, or quality. References: Software Assurance; Software Assurance - OWASP Cheat Sheet Series.
A system has been scanned for vulnerabilities and has been found to contain a number of communication ports that have been opened without authority. To which of the following might this system have been subjected?
Trojan horse
Denial of Service (DoS)
Spoofing
Man-in-the-Middle (MITM)
A trojan horse is a type of malware that masquerades as a legitimate program or file, but performs malicious actions in the background. A trojan horse may open unauthorized ports on the infected system, allowing remote access or communication by the attacker or other malware. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 643; CISSP For Dummies, 7th Edition, Chapter 6, page 189.
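The unauthorized open ports described above are typically surfaced by a simple TCP connect check. The sketch below is illustrative only, not the implementation of any particular scanner; the host and port list in the example comment are assumptions:

```python
import socket

def open_tcp_ports(host, ports):
    """Connect-scan: return the ports on host that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)                    # avoid hanging on filtered ports
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

# Example: check a few ports on the local machine
# print(open_tcp_ports("127.0.0.1", [22, 80, 443, 4444]))
```

An unexpected entry in the result, such as a high-numbered listener that no authorized service uses, is the kind of finding that would prompt a trojan-horse investigation.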
The process of mutual authentication involves a computer system authenticating a user and authenticating the
user to the audit process.
computer system to the user.
user's access to all authorized objects.
computer system to the audit process.
Mutual authentication is the process of verifying the identity of both parties in a communication. The computer system authenticates the user by verifying their credentials, such as username and password, biometrics, or tokens. The user authenticates the computer system by verifying its identity, such as a digital certificate, a trusted third party, or a challenge-response mechanism. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 515; CISSP For Dummies, 7th Edition, Chapter 5, page 151.
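The challenge-response idea can be sketched in both directions with a pre-shared key. This is a minimal illustration, not a standard protocol; the key and message shapes are assumptions, and real systems typically use certificates (e.g. mutual TLS) instead:

```python
import hashlib
import hmac
import os

SHARED_KEY = b"pre-shared secret"  # illustrative only; real keys are provisioned securely

def prove(challenge: bytes, key: bytes) -> bytes:
    """Answer a challenge with an HMAC that only a key holder can compute."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# 1) The system authenticates the user: it sends a random challenge and
#    verifies the user's HMAC answer against its own computation.
system_challenge = os.urandom(16)
user_proof = prove(system_challenge, SHARED_KEY)
assert hmac.compare_digest(user_proof, prove(system_challenge, SHARED_KEY))

# 2) The user authenticates the system (the "mutual" half): the user sends
#    their own challenge and checks the system's answer the same way.
user_challenge = os.urandom(16)
system_proof = prove(user_challenge, SHARED_KEY)
assert hmac.compare_digest(system_proof, prove(user_challenge, SHARED_KEY))
```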
In a financial institution, who has the responsibility for assigning the classification to a piece of information?
Chief Financial Officer (CFO)
Chief Information Security Officer (CISO)
Originator or nominated owner of the information
Department head responsible for ensuring the protection of the information
In a financial institution, the responsibility for assigning the classification to a piece of information belongs to the originator or nominated owner of the information. The originator is the person who creates or generates the information, and the nominated owner is the person who is assigned the accountability and authority for the information by the management. The originator or nominated owner is the best person to determine the value and sensitivity of the information, and to assign the appropriate classification level based on the criteria and guidelines established by the organization. The originator or nominated owner is also responsible for reviewing and updating the classification as needed, and for ensuring that the information is handled and protected according to its classification. References: Information Classification Policy; Information Classification and Handling Policy.
A software scanner identifies a region within a binary image having high entropy. What does this MOST likely indicate?
Encryption routines
Random number generator
Obfuscated code
Botnet command and control
Obfuscated code is a type of code that is deliberately written or modified to make it difficult to understand or reverse engineer. Obfuscation techniques can include changing variable names, removing comments, adding irrelevant code, or encrypting parts of the code. Obfuscated code can have high entropy, which means that it has a high degree of randomness or unpredictability. A software scanner can identify a region within a binary image having high entropy as a possible indication of obfuscated code. Encryption routines, random number generators, and botnet command and control are not necessarily related to obfuscated code, and may not have high entropy. References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 8, page 467; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 508.
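The entropy heuristic such scanners apply can be sketched as a Shannon-entropy calculation over a byte region. This is an illustrative sketch, not any particular scanner's implementation:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(data).values())

# Repetitive regions (e.g. zero padding) score near 0 bits/byte, while
# encrypted, compressed, or packed/obfuscated regions approach 8.
print(shannon_entropy(b"\x00" * 1024))               # 0.0
print(round(shannon_entropy(os.urandom(65536)), 1))  # close to 8.0
```

Note that high entropy alone cannot distinguish obfuscated code from encrypted or compressed data, which is why the scanner's finding is only a "most likely" indicator.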
The BEST way to check for good security programming practices, as well as auditing for possible backdoors, is to conduct
log auditing.
code reviews.
impact assessments.
static analysis.
Code reviews are the best way to check for good security programming practices, as well as auditing for possible backdoors, in a software system. Code reviews involve examining the source code of the software for any errors, vulnerabilities, or malicious code that could compromise the security or functionality of the system. Code reviews can be performed manually by human reviewers, or automatically by tools that scan and analyze the code. The other options are not as effective as code reviews, as they either do not examine the source code directly (A and C), or only detect syntactic or semantic errors, not logical or security flaws (D). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 463; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, page 555.
Which one of the following is a threat related to the use of web-based client side input validation?
Users would be able to alter the input after validation has occurred
The web server would not be able to validate the input after transmission
The client system could receive invalid input from the web server
The web server would not be able to receive invalid input from the client
A threat related to the use of web-based client side input validation is that users would be able to alter the input after validation has occurred. Client side input validation is performed on the user’s browser using JavaScript or other scripting languages. It can provide a faster and more user-friendly feedback to the user, but it can also be easily bypassed or manipulated by an attacker who disables JavaScript, uses a web proxy, or modifies the source code of the web page. Therefore, client side input validation should not be relied upon as the sole or primary method of preventing malicious or malformed input from reaching the web server. Server side input validation is also necessary to ensure the security and integrity of the web application. References: Input Validation - OWASP Cheat Sheet Series; Input Validation vulnerabilities and how to fix them.
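Because client-side checks can be bypassed, the server must re-validate every value it receives. A minimal server-side allow-list check might look like the sketch below; the username rule is an assumed example policy, not a universal standard:

```python
import re

# Assumed example policy: 3-32 characters, letters, digits, and underscore only
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,32}")

def validate_username(raw: str) -> str:
    """Server-side validation; never trust input validated only in the browser."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

validate_username("alice_01")                     # accepted
# validate_username("<script>alert(1)</script>")  # would raise ValueError
```

Allow-listing (defining what is permitted) is generally preferred over deny-listing known-bad patterns, since attackers routinely find encodings a deny list misses.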
During an audit of system management, auditors find that the system administrator has not been trained. What actions need to be taken at once to ensure the integrity of systems?
A review of hiring policies and methods of verification of new employees
A review of all departmental procedures
A review of all training procedures to be undertaken
A review of all systems by an experienced administrator
During an audit of system management, if auditors find that the system administrator has not been trained, the immediate action that needs to be taken to ensure the integrity of systems is a review of all systems by an experienced administrator. This is to verify that the systems are configured, maintained, and secured properly, and that there are no errors, vulnerabilities, or breaches that could compromise the system’s availability, confidentiality, or integrity. A review of hiring policies, departmental procedures, or training procedures are not urgent actions, as they are more related to the long-term improvement of the system management process, rather than the current state of the systems. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 829; CISSP For Dummies, 7th Edition, Chapter 8, page 267.
Which of the following is the MAIN reason for using configuration management?
To provide centralized administration
To reduce the number of changes
To reduce errors during upgrades
To provide consistency in security controls
The main reason for using configuration management is to provide consistency in security controls. Configuration management is the process of identifying, documenting, controlling, and verifying the characteristics and settings of the hardware, software, data, and network components of a system. Configuration management helps to ensure that the system is configured and maintained according to the security policies, standards, and baselines, and that any changes to the system are authorized, recorded, and tracked. Configuration management also helps to prevent or detect unauthorized or unintended changes to the system, which may introduce vulnerabilities, errors, or inconsistencies. Configuration management does not necessarily provide centralized administration, although it may involve some centralized tools or processes to facilitate the configuration management activities. Configuration management does not aim to reduce the number of changes, although it may help to prioritize, schedule, and coordinate the changes to minimize the impact and disruption to the system. Configuration management does not aim to reduce errors during upgrades, although it may help to test, validate, and verify the upgrades before implementing them on the system.
Which of the following is an advantage of on premise Credential Management Systems?
Improved credential interoperability
Control over system configuration
Lower infrastructure capital costs
Reduced administrative overhead
The advantage of on premise credential management systems is that they provide more control over the system configuration and customization. On premise credential management systems are the systems that store and manage the credentials, such as usernames, passwords, tokens, or certificates, of the users or the devices within an organization’s own network or infrastructure. On premise credential management systems can offer more flexibility and security for the organization, as they can tailor the system to their specific needs and requirements, and they can enforce their own policies and standards for the credential management.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 346; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, page 307
Network-based logging has which advantage over host-based logging when reviewing malicious activity about a victim machine?
Addresses and protocols of network-based logs are analyzed.
Host-based system logging has files stored in multiple locations.
Properly handled network-based logs may be more reliable and valid.
Network-based systems cannot capture users logging into the console.
According to the CISSP CBK Official Study Guide, the advantage of network-based logging over host-based logging when reviewing malicious activity about a victim machine is that properly handled network-based logs may be more reliable and valid. Logging is the process of recording the events or activities that occur in a system or network, such as access, communication, or operations. Logging can be classified into two types: host-based logging, which is performed on the individual system and records events local to that system, and network-based logging, which is performed by devices such as firewalls, IDS/IPS sensors, or dedicated log collectors that observe traffic crossing the network.
Properly handled network-based logs may be more reliable and valid because they are collected and stored independently of the victim system. An attacker who compromises a host can alter or delete that host’s local logs, but cannot as easily tamper with logs captured elsewhere on the network. As a result, network-based logs can provide a more accurate, complete, and consistent record of the malicious activity, as well as more independent, objective, and verifiable evidence of it.
Analyzing the addresses and protocols of network-based logs is a benefit of network-based logging, since captured traffic reveals the source, destination, protocol, and port of each communication and can help identify the malicious activity and measure its impact; however, this analysis capability is not the reason network-based logging is superior or preferable to host-based logging for this question. Host-based system logging having files stored in multiple locations describes a characteristic of host-based logging itself, not an advantage of network-based logging. Network-based systems being unable to capture users logging into the console is a limitation of network-based logging, not an advantage.
The MAIN reason an organization conducts a security authorization process is to
force the organization to make conscious risk decisions.
assure the effectiveness of security controls.
assure the correct security organization exists.
force the organization to enlist management support.
The main reason an organization conducts a security authorization process is to force the organization to make conscious risk decisions. A security authorization process is a process that evaluates and approves the security of an information system or a product before it is deployed or used. A security authorization process involves three steps: security categorization, security assessment, and security authorization. Security categorization is the step of determining the impact level of the information system or product on the confidentiality, integrity, and availability of the information and assets. Security assessment is the step of testing and verifying the security controls and measures implemented on the information system or product. Security authorization is the step of granting or denying the permission to operate or use the information system or product based on the security assessment results and the risk acceptance criteria. The security authorization process forces the organization to make conscious risk decisions, as it requires the organization to identify, analyze, and evaluate the risks associated with the information system or product, and to decide whether to accept, reject, mitigate, or transfer the risks. The other options are not the main reasons, but rather the benefits or outcomes of a security authorization process. Assuring the effectiveness of security controls is a benefit of a security authorization process, as it provides an objective and independent evaluation of the security controls and measures. Assuring the correct security organization exists is an outcome of a security authorization process, as it establishes the roles and responsibilities of the security personnel and stakeholders. Forcing the organization to enlist management support is an outcome of a security authorization process, as it involves the management in the risk decision making and approval process. 
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, p. 419; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, p. 150.
A security professional has been asked to evaluate the options for the location of a new data center within a multifloor building. Concerns for the data center include emanations and physical access controls.
Which of the following is the BEST location?
On the top floor
In the basement
In the core of the building
In an exterior room with windows
The best location for a new data center within a multifloor building is in the core of the building. This location can minimize the emanations and enhance the physical access controls. Emanations are the electromagnetic signals or radiation that are emitted by electronic devices, such as computers, servers, or network equipment. Emanations can be intercepted or captured by attackers to obtain sensitive or confidential information. Physical access controls are the measures that prevent or restrict unauthorized or malicious access to physical assets, such as data centers, servers, or network devices. Physical access controls can include locks, doors, gates, fences, guards, cameras, alarms, etc. The core of the building is the central part of the building that is usually surrounded by other rooms or walls. This location can reduce the emanations by creating a shielding effect and increasing the distance from the potential attackers. The core of the building can also improve the physical access controls by limiting the entry points and visibility of the data center. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3: Security Engineering, p. 133; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 3: Security Engineering, p. 295.
Order the below steps to create an effective vulnerability management process.
The PRIMARY security concern for handheld devices is the
strength of the encryption algorithm.
spread of malware during synchronization.
ability to bypass the authentication mechanism.
strength of the Personal Identification Number (PIN).
The primary security concern for handheld devices is the spread of malware during synchronization. Handheld devices are often synchronized with other devices, such as desktops or laptops, to exchange data and update applications. This process can introduce malware from one device to another, or vice versa, if proper security controls are not in place.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 635; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 10, page 557
Which Web Services Security (WS-Security) specification negotiates how security tokens will be issued, renewed and validated? Click on the correct specification in the image below.
WS-Trust
WS-Trust is a Web Services Security (WS-Security) specification that negotiates how security tokens will be issued, renewed and validated. WS-Trust defines a framework for establishing trust relationships between different parties, and a protocol for requesting and issuing security tokens that can be used to authenticate and authorize the parties. WS-Trust also supports different types of security tokens, such as Kerberos tickets, X.509 certificates, SAML assertions, etc. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, p. 346; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 465.
A mobile device application that restricts the storage of user information to just that which is needed to accomplish lawful business goals adheres to what privacy principle?
Onward transfer
Collection Limitation
Collector Accountability
Individual Participation
Collection Limitation is the privacy principle that states that the collection of personal information should be limited, relevant, and lawful. It also implies that personal information should not be collected unless it is necessary for a specific purpose. This principle is aligned with the concept of data minimization, which means that only the minimum amount of data required to achieve a legitimate goal should be collected and processed. A mobile device application that restricts the storage of user information to just that which is needed to accomplish lawful business goals adheres to this principle by minimizing the amount of personal data it collects and stores. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 35; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 28
Which of the following describes the BEST configuration management practice?
After installing a new system, the configuration files are copied to a separate back-up system and hashed to detect tampering.
After installing a new system, the configuration files are copied to an air-gapped system and hashed to detect tampering.
The firewall rules are backed up to an air-gapped system.
A baseline configuration is created and maintained for all relevant systems.
The best configuration management practice is to create and maintain a baseline configuration for all relevant systems. A baseline configuration is a documented and approved set of specifications and settings for a system or component that serves as a standard for comparison and evaluation. A baseline configuration can help ensure the consistency, security, and performance of the system or component, as well as facilitate the identification and resolution of any deviations or issues. A baseline configuration should be updated and reviewed regularly to reflect the changes and improvements made to the system or component. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, p. 456; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 7: Security Operations, p. 869.
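The baseline practice can be sketched as recording a hash baseline of approved files and then checking for drift. The file names and the digest choice (SHA-256) below are illustrative assumptions:

```python
import hashlib
from pathlib import Path

def hash_file(path) -> str:
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def snapshot(paths):
    """Record a baseline: file path -> digest, taken when the config is approved."""
    return {str(p): hash_file(p) for p in paths}

def drift(baseline, paths):
    """Return the files whose current digest differs from the approved baseline."""
    current = snapshot(paths)
    return [p for p, digest in current.items() if baseline.get(p) != digest]
```

A typical use is to run snapshot() when a configuration is approved, store the result somewhere tamper-evident, and run drift() on a schedule so deviations are flagged for review.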
Which Web Services Security (WS-Security) specification handles the management of security tokens and the underlying policies for granting access? Click on the correct specification in the image below.
WS-Authorization
Which of the following is the BEST example of weak management commitment to the protection of security assets and resources?
poor governance over security processes and procedures
immature security controls and procedures
variances against regulatory requirements
unanticipated increases in security incidents and threats
The best example of weak management commitment to the protection of security assets and resources is poor governance over security processes and procedures. Governance is the set of policies, roles, responsibilities, and processes that guide, direct, and control how an organization’s business divisions and IT teams cooperate to achieve business goals. Management commitment is essential for effective governance, as it demonstrates the leadership and support for security initiatives and activities. Poor governance indicates that management does not prioritize security, allocate sufficient resources, enforce accountability, or monitor performance. The other options are not examples of weak management commitment, but rather possible consequences or indicators of poor security practices. Immature security controls and procedures, variances against regulatory requirements, and unanticipated increases in security incidents and threats are all signs that security is not well-managed or implemented, but they do not necessarily reflect the level of management commitment. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, p. 19; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, p. 9.
The PRIMARY purpose of accreditation is to:
comply with applicable laws and regulations.
allow senior management to make an informed decision regarding whether to accept the risk of operating the system.
protect an organization’s sensitive data.
verify that all security controls have been implemented properly and are operating in the correct manner.
According to the CISSP CBK Official Study Guide, the primary purpose of accreditation is to allow senior management to make an informed decision regarding whether to accept the risk of operating the system. Accreditation is the process of formally authorizing a system to operate based on the results of the security assessment and the risk analysis. Accreditation is a management responsibility that involves evaluating the security posture, the residual risk, and the compliance status of the system, and determining if the system is acceptable to operate within the organization’s risk tolerance. Accreditation does not necessarily mean that the system complies with applicable laws and regulations, protects the organization’s sensitive data, or verifies that all security controls have been implemented properly and are operating in the correct manner, although these may be factors that influence the accreditation decision. References: CISSP CBK Official Study Guide.
When designing a vulnerability test, which one of the following is likely to give the BEST indication of what components currently operate on the network?
Topology diagrams
Mapping tools
Asset register
Ping testing
According to the CISSP All-in-One Exam Guide, when designing a vulnerability test, mapping tools are likely to give the best indication of what components currently operate on the network. Mapping tools are software applications that scan and discover the network topology, devices, services, and protocols. They can provide a graphical representation of the network structure and components, as well as detailed information about each node and connection, and can help identify potential vulnerabilities and weaknesses in the network configuration and architecture, as well as the network’s exposure and attack surface. Topology diagrams are not likely to give the best indication, as they may be outdated, inaccurate, or incomplete: they are static, abstract representations of the network layout and design that may not reflect the actual, dynamic state of the network. An asset register is not likely to give the best indication either, for similar reasons: it lists and categorizes the assets an organization owns, such as hardware, software, data, and personnel, but it may not capture the current status, configuration, and interconnection of those assets, or the changes and updates that occur over time. Ping testing is not likely to give the best indication, as it is a simple and limited technique that only checks the availability and response time of a host: it sends an echo request packet to a target host and waits for an echo reply, which measures connectivity and latency but provides no detailed information about the host’s characteristics, services, and vulnerabilities. References: CISSP All-in-One Exam Guide.
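As a rough sketch of what mapping tools automate at much larger scale, the following probes a list of (host, port) pairs with TCP connect attempts and reports which services respond. The addresses in the usage comment are hypothetical documentation addresses, and real mapping tools add OS fingerprinting, service banners, and topology inference on top of this basic idea.

```python
import socket

def discover_services(targets, timeout=0.5):
    """Probe (host, port) pairs with TCP connect attempts and report
    which ones are reachable -- a crude stand-in for the discovery
    step that network mapping tools perform at scale."""
    results = {}
    for host, port in targets:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        try:
            sock.connect((host, port))
            results[(host, port)] = True   # something is listening
        except OSError:
            results[(host, port)] = False  # closed, filtered, or unreachable
        finally:
            sock.close()
    return results

# Hypothetical usage against documentation addresses:
# discover_services([("192.0.2.10", 22), ("192.0.2.10", 443)])
```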
Knowing the language in which an encrypted message was originally produced might help a cryptanalyst to perform a
clear-text attack.
known cipher attack.
frequency analysis.
stochastic assessment.
Frequency analysis is a cryptanalysis technique that exploits the statistical patterns of letters or symbols in an encrypted message. It assumes that the frequency distribution of the plaintext is preserved in the ciphertext, and that the frequency distribution of the plaintext language is known or can be estimated. Knowing the language in which an encrypted message was originally produced therefore helps a cryptanalyst perform frequency analysis, as different languages have different letter frequencies, digraphs, and word lengths. For example, in English the letter “e” is by far the most common, while other languages show different distributions. By comparing the frequency distribution of the ciphertext with the expected frequency distribution of the plaintext language, a cryptanalyst can make educated guesses about the encryption key or algorithm.
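The comparison described above can be sketched in its simplest form against a Caesar cipher over English text. The `guess_shift` helper and the assumption that “e” dominates the plaintext are illustrative; real cryptanalysis compares the whole frequency profile, not just the top letter.

```python
from collections import Counter
import string

def caesar_shift(text, k):
    """Encrypt lowercase letters with a Caesar shift of k; other
    characters pass through unchanged."""
    out = []
    for ch in text:
        if ch in string.ascii_lowercase:
            out.append(chr((ord(ch) - 97 + k) % 26 + 97))
        else:
            out.append(ch)
    return "".join(out)

def guess_shift(ciphertext, expected_top="e"):
    """Guess a Caesar key by assuming the most frequent ciphertext
    letter corresponds to the most frequent letter of the plaintext
    language ('e' for English) -- this is why knowing the language
    matters."""
    letters = [c for c in ciphertext if c in string.ascii_lowercase]
    top, _ = Counter(letters).most_common(1)[0]
    return (ord(top) - ord(expected_top)) % 26
```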
What is the MOST efficient way to secure a production program and its data?
Disable default accounts and implement access control lists (ACL)
Harden the application and encrypt the data
Disable unused services and implement tunneling
Harden the servers and backup the data
The most efficient way to secure a production program and its data is to harden the application and encrypt the data. Hardening the application means applying the security best practices and standards to the development, testing, deployment, and maintenance of the application, such as input validation, output encoding, error handling, logging, patching, and configuration. Encrypting the data means applying the cryptographic techniques and algorithms to the data at rest and in transit, such as symmetric encryption, asymmetric encryption, hashing, and digital signatures. These two measures can provide a comprehensive and effective protection for the application and its data against various threats and attacks.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 481; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, page 437
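As a minimal illustration of one hardening practice named above, input validation, here is a strict allow-list validator for a hypothetical username field. The field name and the exact length rules are illustrative; the point is rejecting anything outside the allow-list rather than trying to sanitize hostile input.

```python
import re

# Allow-list: 3-32 characters, lowercase letter first, then lowercase
# letters, digits, or underscores. The specific rules are hypothetical.
USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{2,31}$")

def validate_username(value):
    """Return True only for values matching the allow-list pattern;
    everything else is rejected outright."""
    return bool(USERNAME_RE.fullmatch(value))
```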
Which of the following is a reason to use manual patch installation instead of automated patch management?
The cost required to install patches will be reduced.
The time during which systems will remain vulnerable to an exploit will be decreased.
The likelihood of system or application incompatibilities will be decreased.
The ability to cover large geographic areas is increased.
Manual patch installation allows for thorough testing before deployment to ensure that the patch does not introduce new vulnerabilities or incompatibilities. Automated patch management can sometimes lead to unexpected issues if patches are not fully compatible with all systems and applications. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, p. 452; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 7: Security Operations, p. 863.
What does an organization FIRST review to assure compliance with privacy requirements?
Best practices
Business objectives
Legal and regulatory mandates
Employee's compliance to policies and standards
The first thing that an organization reviews to assure compliance with privacy requirements is the legal and regulatory mandates that apply to its business operations and data processing activities. Legal and regulatory mandates are the laws, regulations, standards, and contracts that govern how an organization must protect the privacy of personal information and the rights of data subjects. An organization must identify and understand the relevant mandates that affect its jurisdiction, industry, and data types, and implement the appropriate controls and measures to comply with them. The other options are not the first thing that an organization reviews, but rather part of the privacy compliance program. Best practices are the recommended methods and techniques for achieving privacy objectives, but they are not mandatory or binding. Business objectives are the goals and strategies that an organization pursues to create value and competitive advantage, but they may not align with privacy requirements. Employee’s compliance to policies and standards is the degree to which the organization’s staff adhere to the internal rules and guidelines for privacy protection, but it is not a review activity, but rather a measurement and enforcement activity. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, p. 105; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, p. 287.
Which of the following is the BEST approach to take in order to effectively incorporate the concepts of business continuity into the organization?
Ensure end users are aware of the planning activities
Validate all regulatory requirements are known and fully documented
Develop training and awareness programs that involve all stakeholders
Ensure plans do not violate the organization's cultural objectives and goals
Incorporating business continuity concepts effectively into an organization requires developing training and awareness programs that involve all stakeholders. This ensures that everyone understands their roles, responsibilities, and actions required during a disruption or crisis. References: CISSP Official (ISC)2 Practice Tests, Chapter 9, page 249; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 9, page 440
Which of the following is the MOST likely cause of a non-malicious data breach when the source of the data breach was an un-marked file cabinet containing sensitive documents?
Ineffective data classification
Lack of data access controls
Ineffective identity management controls
Lack of Data Loss Prevention (DLP) tools
The most likely cause of a non-malicious data breach when the source was an unmarked file cabinet containing sensitive documents is ineffective data classification. Data classification is the process of assigning labels or categories to data based on its sensitivity, value, and criticality. It helps protect data from unauthorized access, disclosure, or misuse, supports compliance with legal and regulatory requirements, and guides the implementation of appropriate security controls for different types of data, such as encryption, access control, retention, or disposal. Ineffective data classification can result in data being stored, handled, or transmitted without proper protection or awareness, which can lead to data breaches even when no malicious intent is involved. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, p. 28; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 1: Security and Risk Management, p. 30.
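A sketch of how classification labels can drive handling requirements; the labels and rules below are illustrative, not taken from any particular standard. Treating anything unlabeled as the most restrictive class fails closed, which addresses exactly the unmarked-cabinet scenario.

```python
# Hypothetical classification scheme: label -> handling requirements.
HANDLING = {
    "public":       {"label_required": False, "encrypt_at_rest": False, "locked_storage": False},
    "internal":     {"label_required": True,  "encrypt_at_rest": False, "locked_storage": False},
    "confidential": {"label_required": True,  "encrypt_at_rest": True,  "locked_storage": True},
}

def handling_for(classification):
    """Look up handling requirements; unknown or missing labels fail
    closed by being treated as the most restrictive class."""
    return HANDLING.get(classification, HANDLING["confidential"])
```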
If compromised, which of the following would lead to the exploitation of multiple virtual machines?
Virtual device drivers
Virtual machine monitor
Virtual machine instance
Virtual machine file system
If compromised, the virtual machine monitor would lead to the exploitation of multiple virtual machines. The virtual machine monitor, also known as the hypervisor, is the software layer that creates and manages the virtual machines on a physical host. The virtual machine monitor controls the allocation and distribution of the hardware resources, such as CPU, memory, disk, and network, among the virtual machines. The virtual machine monitor also provides the isolation and separation of the virtual machines from each other and from the physical host. If the virtual machine monitor is compromised, the attacker can gain access to all the virtual machines and their data, as well as the physical host and its resources.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 269; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 234
Which of the following is a recommended alternative to an integrated email encryption system?
Sign emails containing sensitive data
Send sensitive data in separate emails
Encrypt sensitive data separately in attachments
Store sensitive information to be sent in encrypted drives
The recommended alternative to an integrated email encryption system is to encrypt sensitive data separately in attachments. An integrated email encryption system protects email messages with cryptographic techniques such as public key encryption, symmetric key encryption, and digital signatures, preserving the confidentiality, integrity, and authenticity of messages against interception, disclosure, modification, or spoofing in transit. However, such systems can suffer from compatibility, usability, and cost problems.
The alternative is to encrypt only the sensitive data attached to the message, such as documents, files, or images, with a password, passphrase, or key, rather than encrypting the entire message. This provides a similar level of protection for the sensitive content against interception or tampering while avoiding many of the compatibility, usability, and cost issues of an integrated system. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 116; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 173
Software Code signing is used as a method of verifying what security concept?
Integrity
Confidentiality
Availability
Access Control
Software code signing is used as a method of verifying the integrity of the software code. Integrity is the security concept that ensures that the data or code is not modified, corrupted, or tampered with by unauthorized parties. Software code signing is the process of attaching a digital signature to the software code, which is generated by applying a cryptographic hash function to the code and encrypting the hash value with the private key of the software developer or publisher. The digital signature can be verified by the software user or recipient by decrypting the signature with the public key of the developer or publisher and comparing the hash value with the hash value of the code.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 207; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 174
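The hash-then-sign flow described above can be sketched as follows. HMAC stands in here for the publisher's private-key signature step, since real code signing uses asymmetric algorithms such as RSA or ECDSA, where signing uses the private key and verification uses the public key; the integrity property demonstrated is the same.

```python
import hashlib
import hmac

def sign_code(code: bytes, key: bytes) -> str:
    """Hash the code, then produce a tag over the digest. HMAC is a
    symmetric stand-in for the publisher's private-key signature."""
    digest = hashlib.sha256(code).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_code(code: bytes, key: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time; any change to
    the code changes the digest and breaks verification."""
    return hmac.compare_digest(sign_code(code, key), tag)
```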
How can lessons learned from business continuity training and actual recovery incidents BEST be used?
As a means for improvement
As alternative options for awareness and training
As indicators of a need for policy
As business function gap indicators
The best way to use lessons learned from business continuity training and actual recovery incidents is as a means for improvement. Business continuity training educates employees, contractors, and partners on the business continuity plan: the procedures an organization follows to continue or resume its critical functions, services, and processes during or after a disruptive event such as a fire, flood, or cyberattack. Actual recovery incidents are real-world situations in which the organization experiences such an event and executes the plan.
Lessons learned from both sources provide feedback on how effective and efficient the plan actually is, revealing its strengths, weaknesses, opportunities, and threats. Used as a means for improvement, they drive updates to the plan: resolving identified issues, gaps, and problems; incorporating best practices, standards, and guidelines; and keeping the plan aligned with the organization’s current needs, requirements, and expectations.
Which of the following entities is ultimately accountable for data remanence vulnerabilities with data replicated by a cloud service provider?
Data owner
Data steward
Data custodian
Data processor
The entity ultimately accountable for data remanence vulnerabilities with data replicated by a cloud service provider is the data owner. A data owner is the person or entity with authority and responsibility for data within an organization, and determines its classification, usage, protection, and retention. The data owner remains accountable for the security and quality of the data regardless of who processes or handles it, and so must ensure that any third-party provider can process and handle the data securely and to the standards set by the organization. The data owner can do this by conducting due diligence, establishing service level agreements, defining security requirements, monitoring performance, and auditing compliance. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2, page 61; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 2, page 67
While inventorying storage equipment, it is found that there are unlabeled, disconnected, and powered off devices. Which of the following is the correct procedure for handling such equipment?
They should be recycled to save energy.
They should be recycled according to NIST SP 800-88.
They should be inspected and sanitized following the organizational policy.
They should be inspected and categorized properly to sell them for reuse.
The correct procedure for handling unlabeled, disconnected, and powered-off storage devices found during inventory is to inspect and sanitize them following the organizational policy. Such devices, for example hard disks, flash drives, or memory cards, are unidentified and may still hold data of unknown sensitivity. Inspecting and sanitizing them according to organizational rules, which should be based on the classification, sensitivity, and value of the data they may contain, prevents unauthorized access to or disclosure of that data if the devices are later accessed or compromised.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 198; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, page 355
A company was ranked as high in the following National Institute of Standards and Technology (NIST) functions: Protect, Detect, Respond and Recover. However, a low maturity grade was attributed to the Identify function. In which of the following the controls categories does this company need to improve when analyzing its processes individually?
Asset Management, Business Environment, Governance and Risk Assessment
Access Control, Awareness and Training, Data Security and Maintenance
Anomalies and Events, Security Continuous Monitoring and Detection Processes
Recovery Planning, Improvements and Communications
According to the NIST Cybersecurity Framework, the control categories that the company needs to improve when analyzing its processes individually are Asset Management, Business Environment, Governance and Risk Assessment. These categories belong to the Identify function, one of the five core functions of the NIST Cybersecurity Framework. The Identify function provides the foundational understanding and awareness of the organization’s systems, assets, data, capabilities, and risks, as well as the organization’s role and contribution to the critical infrastructure and society. It helps the organization prioritize and align its cybersecurity activities and resources with its business objectives and requirements, and establish and maintain its cybersecurity policies and standards. The Identify function consists of six control categories: Asset Management; Business Environment; Governance; Risk Assessment; Risk Management Strategy; and Supply Chain Risk Management.
The company was ranked high in the Protect, Detect, Respond and Recover functions but received a low maturity grade for the Identify function. This means it performs well in the activities and controls of the other four functions but poorly in those of the Identify function, so it needs to improve the Asset Management, Business Environment, Governance, Risk Assessment, Risk Management Strategy, and Supply Chain Risk Management control categories. Improving these categories strengthens the company’s foundational understanding of its systems, assets, data, capabilities, and risks, and its ability to align cybersecurity activities and resources with business objectives and requirements. Access Control, Awareness and Training, Data Security and Maintenance are not the categories to improve, as they belong to the Protect function, not the Identify function. The Protect function provides the safeguards and countermeasures needed to ensure the delivery of critical services and to limit or contain the impact of potential cybersecurity incidents, and consists of six control categories: Identity Management and Access Control; Awareness and Training; Data Security; Information Protection Processes and Procedures; Maintenance; and Protective Technology.
Since the company was ranked high in the Protect function, it does not need to improve the Access Control, Awareness and Training, Data Security, Information Protection Processes and Procedures, Maintenance, and Protective Technology control categories. Anomalies and Events, Security Continuous Monitoring and Detection Processes are likewise not the categories to improve, as they belong to the Detect function, not the Identify function. The Detect function provides the activities and capabilities to identify the occurrence of a cybersecurity incident in a timely manner, and consists of three control categories: Anomalies and Events; Security Continuous Monitoring; and Detection Processes.
Since the company was ranked high in the Detect function, it does not need to improve the Anomalies and Events, Security Continuous Monitoring, and Detection Processes control categories. Recovery Planning, Improvements and Communications are also not the categories to improve, as they belong to the Recover function, not the Identify function. The Recover function provides the activities and capabilities to restore normal operations as quickly as possible after a cybersecurity incident and to prevent or reduce the recurrence or impact of future incidents, and consists of three control categories: Recovery Planning; Improvements; and Communications.
Because the company was ranked high in the Recover function, it does not need to improve the Recovery Planning, Improvements, and Communications control categories.
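The framework structure discussed above can be captured in a small lookup. The category names follow NIST CSF v1.1; the grading scheme and the `categories_to_improve` helper are illustrative.

```python
# NIST CSF v1.1 core: function -> control categories.
CSF = {
    "Identify": ["Asset Management", "Business Environment", "Governance",
                 "Risk Assessment", "Risk Management Strategy",
                 "Supply Chain Risk Management"],
    "Protect":  ["Identity Management and Access Control", "Awareness and Training",
                 "Data Security", "Information Protection Processes and Procedures",
                 "Maintenance", "Protective Technology"],
    "Detect":   ["Anomalies and Events", "Security Continuous Monitoring",
                 "Detection Processes"],
    "Respond":  ["Response Planning", "Communications", "Analysis",
                 "Mitigation", "Improvements"],
    "Recover":  ["Recovery Planning", "Improvements", "Communications"],
}

def categories_to_improve(maturity):
    """Given {function: grade}, return the control categories under
    every function graded 'low' -- the scenario in this question."""
    weak = []
    for fn, grade in maturity.items():
        if grade == "low":
            weak.extend(CSF[fn])
    return weak
```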
An organization lacks a data retention policy. Of the following, who is the BEST person to consult for such requirement?
Application Manager
Database Administrator
Privacy Officer
Finance Manager
The best person to consult for a data retention policy requirement is the privacy officer, who is responsible for ensuring that the organization complies with the applicable privacy laws, regulations, and standards. A data retention policy defines the criteria and procedures for retaining, storing, and disposing of data, especially personal data, in accordance with the legal and business requirements. The privacy officer can advise on the data retention policy by identifying the relevant privacy mandates, assessing the data types and categories, determining the retention periods and disposal methods, and implementing the appropriate controls and measures. The other options are not the best person to consult, but rather stakeholders or contributors to the data retention policy. An application manager is responsible for managing the development, maintenance, and operation of applications, but not the data retention policy. A database administrator is responsible for managing the design, implementation, and performance of databases, but not the data retention policy. A finance manager is responsible for managing the financial resources and activities of the organization, but not the data retention policy. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, p. 118; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, p. 292; CISSP practice exam questions and answers, Question 8.
Which of the following BEST describes a rogue Access Point (AP)?
An AP that is not protected by a firewall
An AP not configured to use Wired Equivalent Privacy (WEP) with Triple Data Encryption Algorithm (3DES)
An AP connected to the wired infrastructure but not under the management of authorized network administrators
An AP infected by any kind of Trojan or Malware
A rogue Access Point (AP) is an AP connected to the wired infrastructure but not under the management of authorized network administrators. A rogue AP can pose a serious security threat, as it can allow unauthorized access to the network, bypass security controls, and expose sensitive data. The other options are not correct descriptions of a rogue AP. Option A is a description of an unsecured AP, which is an AP that is not protected by a firewall or other security measures. Option B is a description of an outdated AP, which is an AP not configured to use Wired Equivalent Privacy (WEP) with Triple Data Encryption Algorithm (3DES), which are weak encryption methods that can be easily cracked. Option D is a description of a compromised AP, which is an AP infected by any kind of Trojan or Malware, which can cause malicious behavior or damage to the network. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, p. 325; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, p. 241.
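One common way rogue APs are detected follows directly from the definition above: compare the access points observed on the wired infrastructure against the administrators' inventory, and flag anything unmanaged. A minimal sketch, with made-up BSSID values:

```python
def find_rogue_aps(observed_bssids, authorized_bssids):
    """Return BSSIDs seen on the network that are absent from the
    administrators' inventory -- candidates for rogue APs. Real
    detection also correlates SSIDs, switch ports, and signal data."""
    return sorted(set(observed_bssids) - set(authorized_bssids))

# Hypothetical usage:
# find_rogue_aps(["aa:bb:cc:00:00:01", "aa:bb:cc:00:00:99"],
#                ["aa:bb:cc:00:00:01"])
```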
Which of the following BEST describes the purpose of the security functional requirements of Common Criteria?
Level of assurance of the Target of Evaluation (TOE) in intended operational environment
Selection to meet the security objectives stated in test documents
Security behavior expected of a TOE
Definition of the roles and responsibilities
The security functional requirements of Common Criteria are meant to describe the expected security behavior of a Target of Evaluation (TOE). These requirements are detailed and are used to evaluate the security functions that a TOE claims to implement.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 211; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 178
Which of the following is the MOST effective method of mitigating data theft from an active user workstation?
Implement full-disk encryption
Enable multifactor authentication
Deploy file integrity checkers
Disable use of portable devices
The most effective method of mitigating data theft from an active user workstation is to disable use of portable devices. Portable devices are the devices that can be easily connected to or disconnected from a workstation, such as USB drives, external hard drives, flash drives, or smartphones. Portable devices can pose a risk of data theft from an active user workstation, as they can be used to copy, transfer, or exfiltrate data from the workstation, either by malicious insiders or by unauthorized outsiders. By disabling use of portable devices, the data theft from an active user workstation can be prevented or reduced.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 330; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, page 291
The restoration priorities of a Disaster Recovery Plan (DRP) are based on which of the following documents?
Service Level Agreement (SLA)
Business Continuity Plan (BCP)
Business Impact Analysis (BIA)
Crisis management plan
According to the CISSP All-in-One Exam Guide, the restoration priorities of a Disaster Recovery Plan (DRP) are based on the Business Impact Analysis (BIA). A DRP is a document that defines the procedures and actions to be taken in the event of a disaster that disrupts the normal operations of an organization. A restoration priority is the order or sequence in which the critical business processes and functions, as well as the supporting resources, such as data, systems, personnel, and facilities, are restored after a disaster. A BIA is a process that assesses the potential impact and consequences of a disaster on the organization’s business processes and functions, as well as the supporting resources. A BIA helps to identify and prioritize the critical business processes and functions, as well as the recovery objectives and time frames for them. A BIA also helps to determine the dependencies and interdependencies among the business processes and functions, as well as the supporting resources. Therefore, the restoration priorities of a DRP are based on the BIA, as it provides the information and analysis that are needed to plan and execute the recovery strategy. A Service Level Agreement (SLA) is not the document that the restoration priorities of a DRP are based on, although it may be a factor that influences the restoration priorities. An SLA is a document that defines the expectations and requirements for the quality and performance of a service or product that is provided by a service provider to a customer or client, such as the availability, reliability, scalability, or security of the service or product. An SLA may help to justify or support the restoration priorities of a DRP, but it does not provide the information and analysis that are needed to plan and execute the recovery strategy. 
A Business Continuity Plan (BCP) is not the document that the restoration priorities of a DRP are based on, although it may be a document that is aligned with or integrated with a DRP. A BCP is a document that defines the procedures and actions to be taken to ensure the continuity of the essential business operations during and after a disaster. A BCP may cover the same or similar business processes and functions, as well as the supporting resources, as a DRP, but it focuses on the continuity rather than the recovery of them. A BCP may also include other aspects or components that are not covered by a DRP, such as the prevention, mitigation, or response to a disaster. A crisis management plan is not the document that the restoration priorities of a DRP are based on, although it may be a document that is aligned with or integrated with a DRP. A crisis management plan is a document that defines the procedures and actions to be taken to manage and resolve a crisis or emergency situation that may affect the organization, such as a natural disaster, a cyberattack, or a pandemic. A crisis management plan may cover the same or similar business processes and functions, as well as the supporting resources, as a DRP, but it focuses on the management rather than the recovery of them. A crisis management plan may also include other aspects or components that are not covered by a DRP, such as the communication, coordination, or escalation of the crisis or emergency situation.
Which of the following prevents improper aggregation of privileges in Role Based Access Control (RBAC)?
Hierarchical inheritance
Dynamic separation of duties
The Clark-Wilson security model
The Bell-LaPadula security model
The method that prevents improper aggregation of privileges in Role Based Access Control (RBAC) is dynamic separation of duties. RBAC is a type of access control model that assigns permissions and privileges to users or devices based on their roles or functions within an organization, rather than their identities or attributes. RBAC can simplify and streamline the access control management, as it can reduce the complexity and redundancy of the permissions and privileges. However, RBAC can also introduce the risk of improper aggregation of privileges, which is the situation where a user or a device can accumulate more permissions or privileges than necessary or appropriate for their role or function, either by having multiple roles or by changing roles over time. Dynamic separation of duties is a method that prevents improper aggregation of privileges in RBAC, by enforcing rules or constraints that limit or restrict the roles or the permissions that a user or a device can have or use at any given time or situation.
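As an illustration, a minimal Python sketch of a dynamic separation of duties check (the role names and the conflict set are invented for this example, not taken from any standard):

```python
# Hypothetical DSoD sketch: a user may be assigned conflicting roles,
# but may never activate all roles of a conflicting set in one session.
CONFLICTING_ROLE_SETS = [
    {"payment_initiator", "payment_approver"},  # must never be active together
]

class Session:
    def __init__(self, user, assigned_roles):
        self.user = user
        self.assigned_roles = set(assigned_roles)  # static role assignment
        self.active_roles = set()                  # roles activated in this session

    def activate_role(self, role):
        if role not in self.assigned_roles:
            raise PermissionError(f"{role} is not assigned to {self.user}")
        candidate = self.active_roles | {role}
        # DSoD check: block activation if it would complete a conflicting set
        for conflict in CONFLICTING_ROLE_SETS:
            if conflict <= candidate:
                raise PermissionError(
                    f"DSoD violation: {sorted(conflict)} cannot be active together")
        self.active_roles.add(role)

s = Session("alice", ["payment_initiator", "payment_approver"])
s.activate_role("payment_initiator")     # allowed
try:
    s.activate_role("payment_approver")  # blocked by the DSoD constraint
except PermissionError as e:
    print(e)
```

Note that the conflicting roles can still both be *assigned* to the user; the constraint is enforced dynamically at activation time, which is what distinguishes dynamic from static separation of duties.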
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 349; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, page 310
Which of the following is a remote access protocol that uses a static authentication?
Point-to-Point Tunneling Protocol (PPTP)
Routing Information Protocol (RIP)
Password Authentication Protocol (PAP)
Challenge Handshake Authentication Protocol (CHAP)
Password Authentication Protocol (PAP) is a remote access protocol that uses a static authentication method, which means that the username and password are sent in clear text over the network. PAP is considered insecure and vulnerable to eavesdropping and replay attacks, as anyone who can capture the network traffic can obtain the credentials. PAP is supported by Point-to-Point Protocol (PPP), which is a common protocol for establishing remote connections over dial-up, broadband, or wireless networks. PAP is usually used as a fallback option when more secure protocols, such as Challenge Handshake Authentication Protocol (CHAP) or Extensible Authentication Protocol (EAP), are not available or compatible.
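To see the contrast with PAP's static cleartext credentials, here is a minimal sketch of CHAP's challenge-response computation as defined in RFC 1994 (the response is the MD5 hash of the identifier, the shared secret, and the challenge, so the secret itself never crosses the wire; the secret value below is illustrative):

```python
import hashlib
import os

# RFC 1994 CHAP response: MD5(identifier || shared_secret || challenge).
def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"shared-secret"      # provisioned out of band on both peers (illustrative)
challenge = os.urandom(16)     # a fresh random challenge defeats replay attacks
identifier = 1

response = chap_response(identifier, secret, challenge)
# The authenticator recomputes the hash with its own copy of the secret
# and compares; the cleartext secret is never transmitted, unlike PAP.
assert response == chap_response(identifier, secret, challenge)
print(response.hex())
```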
Which of the following is a document that identifies each item seized in an investigation, including date and time seized, full name and signature or initials of the person who seized the item, and a detailed description of the item?
Property book
Chain of custody form
Search warrant return
Evidence tag
According to the CISSP CBK Official Study Guide, a chain of custody form is a document that identifies each item seized in an investigation, including date and time seized, full name and signature or initials of the person who seized the item, and a detailed description of the item. A chain of custody form is used to maintain the integrity and admissibility of the evidence, by documenting the history and handling of the evidence, such as the location, possession, transfer, or disposition of the evidence. A chain of custody form helps to prevent or detect any tampering, alteration, or loss of the evidence, as well as to support the authenticity and reliability of the evidence. A property book is not a document that identifies each item seized in an investigation, although it may be a document that records the inventory of the items. A property book is a document that lists the property or assets that belong to an organization or a person, such as the equipment, tools, or materials. A property book may help to manage or account for the property or assets, but it does not document the history and handling of the evidence. A search warrant return is not a document that identifies each item seized in an investigation, although it may be a document that reports the result of the investigation. A search warrant return is a document that summarizes the outcome and findings of the execution of a search warrant, such as the date, time, place, and manner of the search, the items seized, and the persons arrested. A search warrant return may help to inform or update the court or the authority that issued the search warrant, but it does not document the history and handling of the evidence. An evidence tag is not a document that identifies each item seized in an investigation, although it may be a label or a marker that is attached to the item.
An evidence tag is a piece of paper or a sticker that contains information about the item, such as the case number, the item number, the description, or the barcode of the item. An evidence tag may help to identify or track the item, but it does not document the history and handling of the evidence.
Which of the following is the BEST network defense against unknown types of attacks or stealth attacks in progress?
Intrusion Prevention Systems (IPS)
Intrusion Detection Systems (IDS)
Stateful firewalls
Network Behavior Analysis (NBA) tools
Network Behavior Analysis (NBA) tools are the best network defense against unknown types of attacks or stealth attacks in progress. NBA tools are devices or software that monitor and analyze the network traffic and activities, and detect any anomalies or deviations from the normal or expected behavior. NBA tools use various techniques, such as statistical analysis, machine learning, artificial intelligence, or heuristics, to establish a baseline of the network behavior, and to identify any outliers or indicators of compromise. NBA tools can provide several benefits, such as the ability to detect unknown, zero-day, or signature-less attacks; the ability to expose stealthy activities in progress, such as slow scans, beaconing, or gradual data exfiltration; and the ability to provide network-wide visibility into traffic patterns and host behavior.
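A toy sketch of the baseline idea behind NBA tools: learn the normal traffic volume, then flag intervals that deviate from it by more than three standard deviations (the traffic numbers are invented for illustration; real NBA tools baseline many more features than byte counts):

```python
import statistics

# Invented per-minute byte counts representing "normal" traffic.
baseline_bytes_per_min = [980, 1010, 995, 1030, 1005, 990, 1020, 1000]
mean = statistics.mean(baseline_bytes_per_min)
stdev = statistics.stdev(baseline_bytes_per_min)

def is_anomalous(observed: float, threshold: float = 3.0) -> bool:
    # Flag any observation more than `threshold` standard deviations
    # from the learned mean -- a deviation from the baseline behavior.
    return abs(observed - mean) > threshold * stdev

print(is_anomalous(1005))   # normal traffic volume
print(is_anomalous(25000))  # possible exfiltration or stealth attack in progress
```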
The other options are not the best network defense against unknown types of attacks or stealth attacks in progress, but rather network defenses that have other limitations or drawbacks. Intrusion Prevention Systems (IPS) are devices or software that monitor and block the network traffic and activities that match the predefined signatures or rules of known attacks. IPS can provide a proactive and preventive layer of security, but they cannot detect or stop unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IPS. Intrusion Detection Systems (IDS) are devices or software that monitor and alert the network traffic and activities that match the predefined signatures or rules of known attacks. IDS can provide a reactive and detective layer of security, but they cannot detect or alert unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IDS. Stateful firewalls are devices or software that filter and control the network traffic and activities based on the state and context of the network sessions, such as the source and destination IP addresses, port numbers, protocol types, and sequence numbers. Stateful firewalls can provide a granular and dynamic layer of security, but they cannot filter or control unknown types of attacks or stealth attacks that use valid or spoofed network sessions, or that can exploit or bypass the firewall rules.
An external attacker has compromised an organization’s network security perimeter and installed a sniffer onto an inside computer. Which of the following is the MOST effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information?
Implement packet filtering on the network firewalls
Install Host Based Intrusion Detection Systems (HIDS)
Require strong authentication for administrators
Implement logical network segmentation at the switches
Implementing logical network segmentation at the switches is the most effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information. Logical network segmentation is the process of dividing a network into smaller subnetworks or segments based on criteria such as function, location, or security level. Logical network segmentation can be implemented at the switches, which are devices that operate at the data link layer of the OSI model and forward data packets based on the MAC addresses. Logical network segmentation can provide several benefits, such as reducing broadcast traffic and network congestion, containing security incidents within a single segment, and simplifying the enforcement of access control policies for each segment.
Logical network segmentation can mitigate the attacker’s ability to gain further information by limiting the visibility and access of the sniffer to the segment where it is installed. A sniffer is a tool that captures and analyzes the data packets that are transmitted over a network. A sniffer can be used for legitimate purposes, such as troubleshooting, testing, or monitoring the network, or for malicious purposes, such as eavesdropping, stealing, or modifying the data. A sniffer can only capture the data packets that are within its broadcast domain, which is the set of devices that can communicate with each other without a router. By implementing logical network segmentation at the switches, the organization can create multiple broadcast domains and isolate the sensitive or critical data from the compromised segment. This way, the attacker can only see the data packets that belong to the same segment as the sniffer, and not the data packets that belong to other segments. This can prevent the attacker from gaining further information or accessing other resources on the network.
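The isolation described above can be illustrated with Python's standard `ipaddress` module: hosts on different subnets belong to different broadcast domains, so a sniffer on one segment cannot capture the other segment's traffic (the segment names and RFC 1918 addresses below are illustrative):

```python
import ipaddress

# Two illustrative segments created by logical network segmentation.
finance = ipaddress.ip_network("10.0.10.0/24")
engineering = ipaddress.ip_network("10.0.20.0/24")

sniffer_host = ipaddress.ip_address("10.0.20.37")   # compromised machine
payroll_server = ipaddress.ip_address("10.0.10.5")  # sensitive resource

same_segment = sniffer_host in finance and payroll_server in finance
print(sniffer_host in engineering)    # True  -> the sniffer sees this segment
print(payroll_server in engineering)  # False -> payroll traffic is isolated from it
print(same_segment)                   # False -> different broadcast domains
```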
The other options are not the most effective layers of security the organization could have implemented to mitigate the attacker’s ability to gain further information, but rather layers that have other limitations or drawbacks. Implementing packet filtering on the network firewalls is not the most effective layer of security, because packet filtering only examines the network layer header of the data packets, such as the source and destination IP addresses, and does not inspect the payload or the content of the data. Packet filtering can also be bypassed by using techniques such as IP spoofing or fragmentation. Installing Host Based Intrusion Detection Systems (HIDS) is not the most effective layer of security, because HIDS only monitors and detects the activities and events on a single host, and does not prevent or respond to the attacks. HIDS can also be disabled or evaded by the attacker if the host is compromised. Requiring strong authentication for administrators is not the most effective layer of security, because authentication only verifies the identity of the users or processes, and does not protect the data in transit or at rest. Authentication can also be defeated by using techniques such as phishing, keylogging, or credential theft.
Which of the following is used by the Point-to-Point Protocol (PPP) to determine packet formats?
Layer 2 Tunneling Protocol (L2TP)
Link Control Protocol (LCP)
Challenge Handshake Authentication Protocol (CHAP)
Packet Transfer Protocol (PTP)
Link Control Protocol (LCP) is used by the Point-to-Point Protocol (PPP) to determine packet formats. PPP is a data link layer protocol that provides a standard method for transporting network layer packets over point-to-point links, such as serial lines, modems, or dial-up connections. PPP supports various network layer protocols, such as IP, IPX, or AppleTalk, and it can encapsulate them in a common frame format. PPP also provides features such as authentication, compression, error detection, and multilink aggregation. LCP is a subprotocol of PPP that is responsible for establishing, configuring, maintaining, and terminating the point-to-point connection. LCP negotiates and agrees on various options and parameters for the PPP link, such as the maximum transmission unit (MTU), the authentication method, the compression method, the error detection method, and the packet format. LCP uses a series of messages, such as configure-request, configure-ack, configure-nak, configure-reject, terminate-request, terminate-ack, code-reject, protocol-reject, echo-request, echo-reply, and discard-request, to communicate and exchange information between the PPP peers.
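A minimal sketch of the LCP packet layout that LCP messages share, per RFC 1661: a one-byte Code, a one-byte Identifier, a 16-bit big-endian Length, and the negotiated options (the sample packet below, carrying an MRU option, is constructed for illustration):

```python
import struct

# A subset of the LCP code values defined in RFC 1661.
LCP_CODES = {1: "Configure-Request", 2: "Configure-Ack", 3: "Configure-Nak",
             4: "Configure-Reject", 5: "Terminate-Request", 9: "Echo-Request"}

def parse_lcp(packet: bytes):
    # Header: Code (1 byte), Identifier (1 byte), Length (2 bytes, big-endian).
    code, identifier, length = struct.unpack("!BBH", packet[:4])
    return {"code": LCP_CODES.get(code, "Unknown"),
            "identifier": identifier,
            "length": length,
            "options": packet[4:length]}

# Illustrative Configure-Request (code 1) carrying an MRU option
# (option type 1, option length 4, MRU value 1500).
pkt = struct.pack("!BBH", 1, 0x42, 8) + struct.pack("!BBH", 1, 4, 1500)
print(parse_lcp(pkt))
```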
The other options are not used by PPP to determine packet formats, but rather for other purposes. Layer 2 Tunneling Protocol (L2TP) is a tunneling protocol that allows the creation of virtual private networks (VPNs) over public networks, such as the Internet. L2TP encapsulates PPP frames in IP datagrams and sends them across the tunnel between two L2TP endpoints. L2TP does not determine the packet format of PPP, but rather uses it as a payload. Challenge Handshake Authentication Protocol (CHAP) is an authentication protocol that is used by PPP to verify the identity of the remote peer before allowing access to the network. CHAP uses a challenge-response mechanism that involves a random number (nonce) and a hash function to prevent replay attacks. CHAP does not determine the packet format of PPP, but rather uses it as a transport. Packet Transfer Protocol (PTP) is not a valid option, as there is no such protocol with this name. There is a Point-to-Point Protocol over Ethernet (PPPoE), which is a protocol that encapsulates PPP frames in Ethernet frames and allows the use of PPP over Ethernet networks. PPPoE does not determine the packet format of PPP, but rather uses it as a payload.
In a Transmission Control Protocol/Internet Protocol (TCP/IP) stack, which layer is responsible for negotiating and establishing a connection with another node?
Transport layer
Application layer
Network layer
Session layer
The transport layer of the Transmission Control Protocol/Internet Protocol (TCP/IP) stack is responsible for negotiating and establishing a connection with another node. The TCP/IP stack is a simplified version of the OSI model, and it consists of four layers: application, transport, internet, and link. The transport layer is the third layer from the bottom of the TCP/IP stack, and it is responsible for providing reliable and efficient end-to-end data transfer between two nodes on a network. The transport layer uses protocols, such as Transmission Control Protocol (TCP) or User Datagram Protocol (UDP), to segment, sequence, acknowledge, and reassemble the data packets, and to handle error detection and correction, flow control, and congestion control. The transport layer also provides connection-oriented or connectionless services, depending on the protocol used.
TCP is a connection-oriented protocol, which means that it establishes a logical connection between two nodes before exchanging data, and it maintains the connection until the data transfer is complete. TCP uses a three-way handshake to negotiate and establish a connection with another node. The three-way handshake works as follows: first, the initiating node sends a SYN segment containing its initial sequence number; second, the responding node replies with a SYN-ACK segment that acknowledges the received sequence number and supplies its own initial sequence number; third, the initiating node sends an ACK segment that acknowledges the responder’s sequence number, completing the connection.
UDP is a connectionless protocol, which means that it does not establish or maintain a connection between two nodes, but rather sends data packets independently and without any guarantee of delivery, order, or integrity. UDP does not use a handshake or any other mechanism to negotiate and establish a connection with another node, but rather relies on the application layer to handle any connection-related issues.
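The difference is visible even from the socket API: the operating system performs the TCP three-way handshake when `connect()` is called, while UDP's `sendto()` transmits with no handshake at all. A minimal self-contained sketch over the loopback interface:

```python
import socket
import threading

# TCP: connect() triggers the kernel to perform SYN, SYN-ACK, ACK.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # loopback, OS-assigned ephemeral port
server.listen(1)
port = server.getsockname()[1]

def accept_one():
    conn, _ = server.accept()        # returns once the handshake completes
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # three-way handshake happens here
connected = client.getpeername()[1] == port
client.close()
t.join()
server.close()

# UDP: no connection setup, no delivery guarantee -- the datagram is
# simply sent, whether or not anything is listening.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"no handshake", ("127.0.0.1", port))
udp.close()

print("TCP connection established:", connected)
```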
Which of the following factors contributes to the weakness of Wired Equivalent Privacy (WEP) protocol?
WEP uses a small range Initialization Vector (IV)
WEP uses Message Digest 5 (MD5)
WEP uses Diffie-Hellman
WEP does not use any Initialization Vector (IV)
The use of a small range Initialization Vector (IV) is the factor that contributes to the weakness of the Wired Equivalent Privacy (WEP) protocol. WEP is a security protocol that provides encryption and authentication for wireless networks, such as Wi-Fi. WEP uses the RC4 stream cipher to encrypt the data packets, and the CRC-32 checksum to verify the data integrity. WEP also uses a shared secret key, which is concatenated with a 24-bit Initialization Vector (IV), to generate the keystream for the RC4 encryption. WEP has several weaknesses and vulnerabilities, such as the small 24-bit IV space, which contains only about 16.7 million values and therefore forces IV reuse on a busy network; the reuse of IVs, which produces identical RC4 keystreams and allows an attacker to recover the keystream and decrypt the traffic; the existence of weak IVs, which leak information about the secret key and enable key recovery attacks; and the use of CRC-32, which is a linear checksum that provides no cryptographic integrity protection.
WEP has been deprecated and replaced by more secure protocols, such as Wi-Fi Protected Access (WPA) or Wi-Fi Protected Access II (WPA2), which use stronger encryption and authentication methods, such as the Temporal Key Integrity Protocol (TKIP), the Advanced Encryption Standard (AES), or the Extensible Authentication Protocol (EAP).
The other options are not factors that contribute to the weakness of WEP, but rather factors that are irrelevant or incorrect. WEP does not use Message Digest 5 (MD5), which is a hash function that produces a 128-bit output from a variable-length input. WEP does not use Diffie-Hellman, which is a method for generating a shared secret key between two parties. WEP does use an Initialization Vector (IV), which is a 24-bit value that is concatenated with the secret key.
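A back-of-the-envelope calculation shows just how small the 24-bit IV space is: by the birthday bound, a roughly 50% chance of an IV collision is reached after only a few thousand frames, which a busy access point can transmit in seconds:

```python
import math

# 24-bit IV -> only 2**24 (~16.7 million) possible values.
iv_space = 2 ** 24

# Birthday bound: ~sqrt(2 * N * ln 2) samples for a 50% collision chance.
frames_for_50pct = math.sqrt(2 * iv_space * math.log(2))

print(f"IV space: {iv_space:,}")
print(f"~50% chance of a repeated IV after ~{frames_for_50pct:,.0f} frames")
```

Once an IV repeats, the corresponding RC4 keystreams are identical, which is what makes keystream recovery practical against WEP.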
At what level of the Open System Interconnection (OSI) model is data at rest on a Storage Area Network (SAN) located?
Link layer
Physical layer
Session layer
Application layer
Data at rest on a Storage Area Network (SAN) is located at the physical layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The physical layer is the lowest layer of the OSI model, and it is responsible for the transmission and reception of raw bits over a physical medium, such as cables, wires, or optical fibers. The physical layer defines the physical characteristics of the medium, such as voltage, frequency, modulation, connectors, etc. The physical layer also deals with the physical topology of the network, such as bus, ring, star, mesh, etc.
A Storage Area Network (SAN) is a dedicated network that provides access to consolidated and block-level data storage. A SAN consists of storage devices, such as disks, tapes, or arrays, that are connected to servers or clients via a network infrastructure, such as switches, routers, or hubs. A SAN allows multiple servers or clients to share the same storage devices, and it provides high performance, availability, scalability, and security for data storage. Data at rest on a SAN is located at the physical layer of the OSI model, because it is stored as raw bits on the physical medium of the storage devices, and it is accessed by the servers or clients through the physical medium of the network infrastructure.
An input validation and exception handling vulnerability has been discovered on a critical web-based system. Which of the following is MOST suited to quickly implement a control?
Add a new rule to the application layer firewall
Block access to the service
Install an Intrusion Detection System (IDS)
Patch the application source code
Adding a new rule to the application layer firewall is the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system. An input validation and exception handling vulnerability is a type of vulnerability that occurs when a web-based system does not properly check, filter, or sanitize the input data that is received from the users or other sources, or does not properly handle the errors or exceptions that are generated by the system. An input validation and exception handling vulnerability can lead to various attacks, such as SQL injection, cross-site scripting (XSS), command injection, buffer overflow, or information disclosure through verbose error messages.
An application layer firewall is a device or software that operates at the application layer of the OSI model and inspects the application layer payload or the content of the data packets. An application layer firewall can provide various functions, such as inspecting and filtering the requests and responses of application protocols like HTTP, enforcing protocol compliance and input constraints, blocking known attack patterns or signatures, and logging or alerting on suspicious application traffic.
Adding a new rule to the application layer firewall is the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, because it can prevent or reduce the impact of the attacks by filtering or blocking the malicious or invalid input data that exploit the vulnerability. For example, a new rule can be added to the application layer firewall to block requests that contain SQL metacharacters or script tags, to reject input values that exceed the expected length or format, or to suppress detailed error messages from being returned to the client.
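A hypothetical sketch of such a rule, expressed as a parameter filter: reject values that match common SQL injection or XSS patterns, or that exceed a length cap (the patterns are illustrative, not a production rule set):

```python
import re

# Illustrative block patterns -- a real WAF rule set is far more extensive.
BLOCK_PATTERNS = [
    re.compile(r"('|\")\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE),  # e.g. ' OR 1=1
    re.compile(r"<\s*script", re.IGNORECASE),                          # e.g. <script>
    re.compile(r";\s*(drop|delete|insert)\b", re.IGNORECASE),          # stacked SQL
]
MAX_PARAM_LENGTH = 256  # illustrative input-length cap

def allow_parameter(value: str) -> bool:
    if len(value) > MAX_PARAM_LENGTH:
        return False
    return not any(p.search(value) for p in BLOCK_PATTERNS)

print(allow_parameter("alice@example.com"))          # True  -> passed through
print(allow_parameter("x' OR 1=1 --"))               # False -> blocked
print(allow_parameter("<script>alert(1)</script>"))  # False -> blocked
```

Deploying a rule like this at the firewall buys time while the underlying input validation defect is fixed in the application itself; it is a compensating control, not a replacement for the patch.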
Adding a new rule to the application layer firewall can be done quickly and easily, without requiring any changes or patches to the web-based system, which can be time-consuming and risky, especially for a critical system. Adding a new rule to the application layer firewall can also be done remotely and centrally, without requiring any physical access or installation on the web-based system, which can be inconvenient and costly, especially for a distributed system.
The other options are not the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, but rather options that have other limitations or drawbacks. Blocking access to the service is not the most suited option, because it can cause disruption and unavailability of the service, which can affect the business operations and customer satisfaction, especially for a critical system. Blocking access to the service can also be a temporary and incomplete solution, as it does not address the root cause of the vulnerability or prevent the attacks from occurring again. Installing an Intrusion Detection System (IDS) is not the most suited option, because IDS only monitors and detects the attacks, and does not prevent or respond to them. IDS can also generate false positives or false negatives, which can affect the accuracy and reliability of the detection. IDS can also be overwhelmed or evaded by the attacks, which can affect the effectiveness and efficiency of the detection. Patching the application source code is not the most suited option, because it can take a long time and require a lot of resources and testing to identify, fix, and deploy the patch, especially for a complex and critical system. Patching the application source code can also introduce new errors or vulnerabilities, which can affect the functionality and security of the system. Patching the application source code can also be difficult or impossible, if the system is proprietary or legacy, which can affect the feasibility and compatibility of the patch.
What is the purpose of an Internet Protocol (IP) spoofing attack?
To send excessive amounts of data to a process, making it unpredictable
To intercept network traffic without authorization
To disguise the destination address from a target’s IP filtering devices
To convince a system that it is communicating with a known entity
The purpose of an Internet Protocol (IP) spoofing attack is to convince a system that it is communicating with a known entity. IP spoofing is a technique that involves creating and sending IP packets with a forged source IP address, which is usually the IP address of a trusted or authorized host. IP spoofing can be used for various malicious purposes, such as bypassing IP address-based authentication or filtering, launching denial-of-service (DoS) or distributed denial-of-service (DDoS) attacks with untraceable source addresses, or hijacking or injecting data into an existing TCP session.
The purpose of IP spoofing is to convince a system that it is communicating with a known entity, because it allows the attacker to evade detection, avoid responsibility, and exploit trust relationships.
The other options are not the main purposes of IP spoofing, but rather the possible consequences or methods of IP spoofing. To send excessive amounts of data to a process, making it unpredictable is a possible consequence of IP spoofing, as it can cause a DoS or DDoS attack. To intercept network traffic without authorization is a possible method of IP spoofing, as it can be used to hijack or intercept a TCP session. To disguise the destination address from a target’s IP filtering devices is not a valid option, as IP spoofing involves forging the source address, not the destination address.
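Spoofing works because the source address of an IPv4 header is simply four bytes that the sender controls. The sketch below builds a raw IPv4 header with a forged source (the addresses are illustrative documentation-range values; actually transmitting such a packet would require a privileged raw socket, which is deliberately omitted):

```python
import struct
import socket

def ip_checksum(data: bytes) -> int:
    # Standard IPv4 header checksum: one's-complement sum of 16-bit words.
    total = sum(struct.unpack("!10H", data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def ipv4_header(src: str, dst: str, payload_len: int) -> bytes:
    ver_ihl = (4 << 4) | 5             # IPv4, 5 x 32-bit words = 20-byte header
    total_len = 20 + payload_len
    header = struct.pack("!BBHHHBBH4s4s",
                         ver_ihl, 0, total_len, 0x1234, 0, 64,
                         socket.IPPROTO_TCP, 0,           # checksum zero for now
                         socket.inet_aton(src), socket.inet_aton(dst))
    checksum = ip_checksum(header)
    return header[:10] + struct.pack("!H", checksum) + header[12:]

# The forged source is whatever the attacker chooses to write into the field.
forged = ipv4_header("198.51.100.7", "203.0.113.9", payload_len=0)
print(socket.inet_ntoa(forged[12:16]))  # the receiver sees this "trusted" source
```

Nothing in IPv4 itself authenticates that source field, which is why IP address-based trust is so easily abused.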
Which of the following operates at the Network Layer of the Open System Interconnection (OSI) model?
Packet filtering
Port services filtering
Content filtering
Application access control
Packet filtering operates at the network layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The network layer is the third layer from the bottom of the OSI model, and it is responsible for routing and forwarding data packets between different networks or subnets. The network layer uses logical addresses, such as IP addresses, to identify the source and destination of the data packets, and it uses protocols, such as IP, ICMP, or ARP, to perform the routing and forwarding functions.
Packet filtering is a technique that controls the access to a network or a host by inspecting the incoming and outgoing data packets and applying a set of rules or policies to allow or deny them. Packet filtering can be performed by devices, such as routers, firewalls, or proxies, that operate at the network layer of the OSI model. Packet filtering typically examines the network layer header of the data packets, such as the source and destination IP addresses, the protocol type, or the fragmentation flags, and compares them with the predefined rules or policies. Packet filtering can also examine the transport layer header of the data packets, such as the source and destination port numbers, the TCP flags, or the sequence numbers, and compare them with the rules or policies. Packet filtering can provide a basic level of security and performance for a network or a host, but it also has some limitations, such as the inability to inspect the payload or the content of the data packets, the vulnerability to spoofing or fragmentation attacks, or the complexity and maintenance of the rules or policies.
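A toy sketch of the rule matching a packet-filtering device performs, operating only on the network- and transport-layer header fields described above (the rules and addresses are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    protocol: str   # "tcp", "udp", or "icmp"
    dst_port: int

# First-match rule list: (action, source-IP prefix, protocol, destination port).
# None acts as a wildcard; the last rule is a default deny.
RULES = [
    ("allow", "10.0.0.", "tcp", 443),   # permit HTTPS from the internal subnet
    ("deny",  "10.0.0.", "tcp", 23),    # block Telnet explicitly
    ("deny",  "",        None,  None),  # default deny everything else
]

def filter_packet(pkt: Packet) -> str:
    for action, src_prefix, proto, port in RULES:
        if (pkt.src_ip.startswith(src_prefix)
                and (proto is None or pkt.protocol == proto)
                and (port is None or pkt.dst_port == port)):
            return action
    return "deny"

print(filter_packet(Packet("10.0.0.5", "192.0.2.1", "tcp", 443)))  # allow
print(filter_packet(Packet("10.0.0.5", "192.0.2.1", "tcp", 23)))   # deny
```

Note that the filter never looks at the packet payload, which is exactly the limitation described above: content inspection belongs to application-layer filtering.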
The other options are not techniques that operate at the network layer of the OSI model, but rather at other layers. Port services filtering is a technique that controls the access to a network or a host by inspecting the transport layer header of the data packets and applying a set of rules or policies to allow or deny them based on the port numbers or the services. Port services filtering operates at the transport layer of the OSI model, which is the fourth layer from the bottom. Content filtering is a technique that controls the access to a network or a host by inspecting the application layer payload or the content of the data packets and applying a set of rules or policies to allow or deny them based on the keywords, URLs, file types, or other criteria. Content filtering operates at the application layer of the OSI model, which is the seventh and the topmost layer. Application access control is a technique that controls the access to a network or a host by inspecting the application layer identity or the credentials of the users or the processes and applying a set of rules or policies to allow or deny them based on the roles, permissions, or other attributes. Application access control operates at the application layer of the OSI model, which is the seventh and the topmost layer.
Which of the following is of GREATEST assistance to auditors when reviewing system configurations?
Change management processes
User administration procedures
Operating System (OS) baselines
System backup documentation
Operating System (OS) baselines are of greatest assistance to auditors when reviewing system configurations. OS baselines are standard or reference configurations that define the desired and secure state of an OS, including the settings, parameters, patches, and updates. OS baselines can provide several benefits, such as providing a consistent and repeatable configuration across systems, serving as a reference point for detecting unauthorized changes or configuration drift, and simplifying the auditing and verification of compliance with security requirements.
OS baselines are of greatest assistance to auditors when reviewing system configurations, because they can enable the auditors to evaluate and verify the current and actual state of the OS against the desired and secure state of the OS. OS baselines can also help the auditors to identify and report any gaps, issues, or risks in the OS configurations, and to recommend or implement any corrective or preventive actions.
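The comparison an auditor (or an automated compliance tool) performs can be sketched as a diff between the actual settings and the baseline (the setting names and values below are invented for illustration):

```python
# Desired, secure state defined by the OS baseline (illustrative settings).
baseline = {
    "password_min_length": 14,
    "ssh_root_login": "no",
    "audit_logging": "enabled",
    "telnet_service": "disabled",
}

# Actual state collected from the system under review.
actual = {
    "password_min_length": 8,      # drifted from the baseline
    "ssh_root_login": "no",
    "audit_logging": "enabled",
    "telnet_service": "enabled",   # drifted from the baseline
}

# Report every setting whose actual value deviates from the baseline.
deviations = {k: (baseline[k], actual.get(k))
              for k in baseline if actual.get(k) != baseline[k]}
for setting, (expected, found) in deviations.items():
    print(f"{setting}: expected {expected!r}, found {found!r}")
```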
The other options are not of greatest assistance to auditors when reviewing system configurations, but rather of assistance for other purposes or aspects. Change management processes are processes that ensure that any changes to the system configurations are planned, approved, implemented, and documented in a controlled and consistent manner. Change management processes can improve the security and reliability of the system configurations by preventing or reducing the errors, conflicts, or disruptions that might occur due to the changes. However, change management processes are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the procedures and controls for managing the changes. User administration procedures are procedures that define the roles, responsibilities, and activities for creating, modifying, deleting, and managing the user accounts and access rights. User administration procedures can enhance the security and accountability of the user accounts and access rights by enforcing the principles of least privilege, separation of duties, and need to know. However, user administration procedures are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the rules and tasks for administering the users. System backup documentation is documentation that records the information and details about the system backup processes, such as the backup frequency, type, location, retention, and recovery. System backup documentation can increase the availability and resilience of the system by ensuring that the system data and configurations can be restored in case of a loss or damage. 
However, system backup documentation is not of greatest assistance to auditors when reviewing system configurations, because it does not define the desired and secure state of the system configurations, but rather the backup and recovery of the system configurations.
In which of the following programs is it MOST important to include the collection of security process data?
Quarterly access reviews
Security continuous monitoring
Business continuity testing
Annual security training
Security continuous monitoring is the program in which it is most important to include the collection of security process data. Security process data is the data that reflects the performance, effectiveness, and compliance of the security processes, such as the security policies, standards, procedures, and guidelines. Security process data can include metrics, indicators, logs, reports, and assessments. Security process data can provide several benefits, such as measuring the performance and effectiveness of the security controls and processes, demonstrating compliance with the security requirements and standards, and supporting the decision-making and improvement of the security program.
Security continuous monitoring is the program in which it is most important to include the collection of security process data, because it is the program that involves maintaining the ongoing awareness of the security status, events, and activities of the system. Security continuous monitoring can enable the system to detect and respond to any security issues or incidents in a timely and effective manner, and to adjust and improve the security controls and processes accordingly. Security continuous monitoring can also help the system to comply with the security requirements and standards from the internal or external authorities or frameworks.
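Collecting security process data in a continuous-monitoring program might look like the following sketch, which aggregates raw security events into simple metrics. The event fields and categories are hypothetical:

```python
# Illustrative sketch: collecting security process data (metrics) as part
# of a continuous-monitoring program. Event fields are hypothetical.
from collections import Counter

def summarize_events(events: list) -> dict:
    """Aggregate raw security events into process metrics."""
    by_type = Counter(e["type"] for e in events)
    total = len(events)
    failed = by_type.get("auth_failure", 0)
    return {
        "total_events": total,
        "auth_failure_rate": failed / total if total else 0.0,
        "events_by_type": dict(by_type),
    }

events = [
    {"type": "auth_success"}, {"type": "auth_failure"},
    {"type": "auth_failure"}, {"type": "policy_violation"},
]
metrics = summarize_events(events)
print(metrics["auth_failure_rate"])  # 0.5
```

Metrics like these, gathered continuously rather than quarterly or annually, are what let the program detect and respond to issues in a timely manner.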
The other options are not the programs in which it is most important to include the collection of security process data, but rather programs that have other objectives or scopes. Quarterly access reviews are programs that involve reviewing and verifying the user accounts and access rights on a quarterly basis. Quarterly access reviews can ensure that the user accounts and access rights are valid, authorized, and up to date, and that any inactive, expired, or unauthorized accounts or rights are removed or revoked. However, quarterly access reviews are not the programs in which it is most important to include the collection of security process data, because they are not focused on the security status, events, and activities of the system, but rather on the user accounts and access rights. Business continuity testing is a program that involves testing and validating the business continuity plan (BCP) and the disaster recovery plan (DRP) of the system. Business continuity testing can ensure that the system can continue or resume its critical functions and operations in case of a disruption or disaster, and that the system can meet the recovery objectives and requirements. However, business continuity testing is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the continuity and recovery of the system. Annual security training is a program that involves providing and updating the security knowledge and skills of the system users and staff on an annual basis. Annual security training can increase the security awareness and competence of the system users and staff, and reduce the human errors or risks that might compromise the system security. 
However, annual security training is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the security education and training of the system users and staff.
Which of the following could cause a Denial of Service (DoS) against an authentication system?
Encryption of audit logs
No archiving of audit logs
Hashing of audit logs
Remote access audit logs
Remote access audit logs could cause a Denial of Service (DoS) against an authentication system. A DoS attack is a type of attack that aims to disrupt or degrade the availability or performance of a system or a network by overwhelming it with excessive or malicious traffic or requests. An authentication system is a system that verifies the identity and credentials of the users or entities that want to access the system or network resources or services. An authentication system can use various methods or factors to authenticate the users or entities, such as passwords, tokens, certificates, biometrics, or behavioral patterns.
Remote access audit logs are records that capture and store the information about the events and activities that occur when the users or entities access the system or network remotely, such as via the internet, VPN, or dial-up. Remote access audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the remote access behavior, and facilitating the investigation and response of the incidents.
Remote access audit logs could cause a DoS against an authentication system, because they could consume a large amount of disk space, memory, or bandwidth on the authentication system, especially if the remote access is frequent, intensive, or malicious. This could affect the performance or functionality of the authentication system, and prevent or delay the legitimate users or entities from accessing the system or network resources or services. For example, an attacker could launch a DoS attack against an authentication system by sending a large number of fake or invalid remote access requests, and generating a large amount of remote access audit logs that fill up the disk space or memory of the authentication system, and cause it to crash or slow down.
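The disk-exhaustion scenario above can be put in rough numbers. The request rate, entry size, and free space below are assumptions chosen for illustration:

```python
# Back-of-the-envelope sketch of how unthrottled remote-access audit logs
# can exhaust disk on an authentication server. Rates and sizes are
# assumptions, not measurements.

def hours_until_disk_full(free_bytes: int, requests_per_sec: float,
                          bytes_per_log_entry: int) -> float:
    """Hours before audit logging fills the remaining disk space."""
    bytes_per_hour = requests_per_sec * 3600 * bytes_per_log_entry
    return free_bytes / bytes_per_hour

# 50 GiB free, attacker floods 5,000 fake requests/sec, ~500 bytes per entry
print(hours_until_disk_full(50 * 1024**3, 5000, 500))  # roughly 6 hours
```

This is why log rotation, rate limiting, and separate log volumes are common mitigations for authentication systems that record remote access.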
The other options are not the factors that could cause a DoS against an authentication system, but rather the factors that could improve or protect the authentication system. Encryption of audit logs is a technique that involves using a cryptographic algorithm and a key to transform the audit logs into an unreadable or unintelligible format, that can only be reversed or decrypted by authorized parties. Encryption of audit logs can enhance the security and confidentiality of the audit logs by preventing unauthorized access or disclosure of the sensitive information in the audit logs. However, encryption of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or privacy of the audit logs. No archiving of audit logs is a practice that involves not storing or transferring the audit logs to a separate or external storage device or location, such as a tape, disk, or cloud. No archiving of audit logs can reduce the security and availability of the audit logs by increasing the risk of loss or damage of the audit logs, and limiting the access or retrieval of the audit logs. However, no archiving of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the availability or preservation of the audit logs. Hashing of audit logs is a technique that involves using a hash function, such as MD5 or SHA, to generate a fixed-length and unique value, called a hash or a digest, that represents the audit logs. Hashing of audit logs can improve the security and integrity of the audit logs by verifying the authenticity or consistency of the audit logs, and detecting any modification or tampering of the audit logs. 
However, hashing of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or verification of the audit logs.
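The integrity check described above can be sketched with a standard hash function; the log line is illustrative:

```python
# Minimal sketch of audit-log integrity verification via hashing: a
# stored digest detects tampering but has no effect on availability.
import hashlib

def digest(log_bytes: bytes) -> str:
    return hashlib.sha256(log_bytes).hexdigest()

log = b"2024-01-01 10:00 user=alice action=login result=success\n"
stored = digest(log)

# Later verification: any modification changes the digest
tampered = log.replace(b"alice", b"mallory")
print(digest(log) == stored)        # True  -> log intact
print(digest(tampered) == stored)   # False -> tampering detected
```

Note that the hash only verifies integrity; it neither consumes meaningful resources on the authentication system nor protects the log's confidentiality.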
Which of the following is a PRIMARY benefit of using a formalized security testing report format and structure?
Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken
Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability
Management teams will understand the testing objectives and reputational risk to the organization
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure. Security testing is a process that involves evaluating and verifying the security posture, vulnerabilities, and threats of a system or a network, using various methods and techniques, such as vulnerability assessment, penetration testing, code review, and compliance checks. Security testing can provide several benefits, such as identifying and prioritizing vulnerabilities, validating the effectiveness of the security controls, and informing risk-based decisions about remediation.
A security testing report is a document that summarizes and communicates the findings and recommendations of the security testing process to the relevant stakeholders, such as the technical and management teams. A security testing report can have various formats and structures, depending on the scope, purpose, and audience of the report. However, a formalized security testing report format and structure is one that follows a standard and consistent template, such as the one proposed by the National Institute of Standards and Technology (NIST) in the Special Publication 800-115, Technical Guide to Information Security Testing and Assessment. A formalized security testing report format and structure can have several components, such as an executive summary, an introduction, a description of the methodology, the detailed results of each test phase, and a conclusion with recommendations for corrective actions.
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure, because it can ensure that the security testing report is clear, comprehensive, and consistent, and that it provides the relevant and useful information for the technical and management teams to make informed and effective decisions and actions regarding the system or network security.
The other options are not the primary benefits of using a formalized security testing report format and structure, but rather secondary or specific benefits for different audiences or purposes. Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the executive summary component of the report, which is a brief and high-level overview of the report, rather than the entire report. Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the methodology and results components of the report, which are more technical and detailed parts of the report, rather than the entire report. Management teams will understand the testing objectives and reputational risk to the organization is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the introduction and conclusion components of the report, which are more contextual and strategic parts of the report, rather than the entire report.
A Virtual Machine (VM) environment has five guest Operating Systems (OS) and provides strong isolation. What MUST an administrator review to audit a user’s access to data files?
Host VM monitor audit logs
Guest OS access controls
Host VM access controls
Guest OS audit logs
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation. A VM environment is a system that allows multiple virtual machines (VMs) to run on a single physical machine, each with its own OS and applications. A VM environment can provide several benefits, such as efficient use of hardware resources, isolation between workloads, and flexibility in provisioning and migration.
A guest OS is the OS that runs on a VM, which is different from the host OS that runs on the physical machine. A guest OS can have its own security controls and mechanisms, such as access controls, encryption, authentication, and audit logs. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the data files. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents.
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, because they can provide the most accurate and relevant information about the user’s actions and interactions with the data files on the VM. Guest OS audit logs can also help the administrator to identify and report any unauthorized or suspicious access or disclosure of the data files, and to recommend or implement any corrective or preventive actions.
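Reviewing guest OS audit logs for one user's file access might look like the following sketch; the log format and field names are hypothetical:

```python
# Hypothetical sketch: filtering guest-OS audit log entries for one
# user's access to data files. The key=value log format is illustrative.

def file_access_events(log_lines: list, user: str) -> list:
    """Filter audit log lines for a given user's file-access events."""
    events = []
    for line in log_lines:
        fields = dict(f.split("=", 1) for f in line.split())
        if fields.get("user") == user and fields.get("event") == "file_access":
            events.append((fields["ts"], fields["file"], fields["action"]))
    return events

log = [
    "ts=09:01 user=bob event=login result=success",
    "ts=09:02 user=bob event=file_access file=/data/payroll.xlsx action=read",
    "ts=09:03 user=eve event=file_access file=/data/payroll.xlsx action=write",
]
print(file_access_events(log, "bob"))
# [('09:02', '/data/payroll.xlsx', 'read')]
```

Because each guest OS keeps its own logs under strong isolation, the administrator must pull these records from the guest itself; the host VM monitor never sees the file-level detail.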
The other options are not what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, but rather what an administrator might review for other purposes or aspects. Host VM monitor audit logs are records that capture and store the information about the events and activities that occur on the host VM monitor, which is the software or hardware component that manages and controls the VMs on the physical machine. Host VM monitor audit logs can provide information about the performance, status, and configuration of the VMs, but they cannot provide information about the user’s access to data files on the VMs. Guest OS access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the resources and services on the guest OS. Guest OS access controls can provide a proactive and preventive layer of security by enforcing the principles of least privilege, separation of duties, and need to know. However, guest OS access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the data files. Host VM access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the VMs on the physical machine. Host VM access controls can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, host VM access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the VMs.
Which of the following methods provides the MOST protection for user credentials?
Forms-based authentication
Digest authentication
Basic authentication
Self-registration
The method that provides the most protection for user credentials is digest authentication. Digest authentication is a type of authentication that verifies the identity of a user or a device by using a cryptographic hash function to transform the user credentials, such as username and password, into a digest or a hash value, before sending them over a network, such as the internet. Digest authentication can provide more protection for user credentials than basic authentication, which sends the user credentials in plain text, or forms-based authentication, which relies on the security of the web server or the web application. Digest authentication can prevent the interception, disclosure, or modification of the user credentials by third parties, and can also prevent replay attacks by using a nonce or a random value. Self-registration is not a method of authentication, but a process of creating a user account or a profile by providing some personal information, such as name, email, or phone number. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 685. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 701.
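The digest computation itself (per RFC 2617, shown here without the optional qop extension) can be sketched as follows. The credentials and nonce are illustrative; note that MD5 is what the RFC specifies, even though it is considered weak by modern standards:

```python
# Sketch of the RFC 2617 HTTP Digest response computation (without the
# optional qop extension), showing why the password never crosses the
# wire in cleartext. Credentials and nonce below are illustrative.
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(username, realm, password, method, uri, nonce):
    ha1 = md5_hex(f"{username}:{realm}:{password}")   # secret material
    ha2 = md5_hex(f"{method}:{uri}")                  # request material
    return md5_hex(f"{ha1}:{nonce}:{ha2}")            # what is actually sent

# The server issues a fresh nonce per challenge, so a captured response
# cannot simply be replayed against a new challenge.
resp = digest_response("alice", "example.com", "s3cret",
                       "GET", "/index.html", "dcd98b7102dd2f0e")
print(resp)  # a 32-hex-character digest; the password itself is never sent
```

Only the final digest travels over the network, which is why interception reveals neither the password nor anything directly reusable once the nonce changes.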
Which of the following is the BEST reason to review audit logs periodically?
Verify they are operating properly
Monitor employee productivity
Identify anomalies in use patterns
Meet compliance regulations
The best reason to review audit logs periodically is to identify anomalies in use patterns that may indicate unauthorized or malicious activities, such as intrusion attempts, data breaches, policy violations, or system errors. Audit logs record the events and actions that occur on a system or network, and can provide valuable information for security analysis, investigation, and response. The other options are not as good as identifying anomalies, as they either do not relate to security (monitoring employee productivity), or are not the primary purpose of audit logs (verifying proper operation and meeting compliance regulations). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 405; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, page 465.
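A minimal sketch of what "identifying anomalies in use patterns" can mean in practice, using an assumed failed-login threshold rather than any standard value:

```python
# Toy sketch of periodic log review: flag users whose failed-login
# counts deviate sharply from the norm. The threshold is an assumption.
from collections import Counter

def flag_anomalies(failed_logins: list, threshold: int = 10) -> list:
    """Return users whose failed-login count exceeds the threshold."""
    counts = Counter(failed_logins)
    return sorted(u for u, n in counts.items() if n > threshold)

attempts = ["alice"] * 2 + ["bob"] * 3 + ["mallory"] * 40
print(flag_anomalies(attempts))  # ['mallory']
```

Real deployments use far richer baselines (time of day, source address, resource accessed), but the principle is the same: the log review exists to surface outliers.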
Refer to the information below to answer the question.
A security practitioner detects client-based attacks on the organization’s network. A plan will be necessary to address these concerns.
In the plan, what is the BEST approach to mitigate future internal client-based attacks?
Block all client side web exploits at the perimeter.
Remove all non-essential client-side web services from the network.
Screen for harmful exploits of client-side services before implementation.
Harden the client image before deployment.
The best approach to mitigate future internal client-based attacks is to harden the client image before deployment. Hardening the client image means to apply the security configurations and measures to the client operating system and applications, such as disabling unnecessary services, installing patches and updates, enforcing strong passwords, and enabling encryption and firewall. Hardening the client image can help to reduce the attack surface and the vulnerabilities of the client, and to prevent or resist the client-based attacks, such as web exploits, malware, or phishing. Blocking all client side web exploits at the perimeter, removing all non-essential client-side web services from the network, and screening for harmful exploits of client-side services before implementation are not the best approaches to mitigate future internal client-based attacks, as they are related to the network or the server level, not the client level, and they may not address all the possible types or sources of the client-based attacks. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, Security Architecture and Engineering, page 295. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, Security Architecture and Engineering, page 311.
Identify the component that MOST likely lacks digital accountability related to information access.
Click on the correct device in the image below.
Storage Area Network (SAN): SANs are designed for centralized storage, and access control mechanisms can be implemented to track users and their activities.
Refer to the information below to answer the question.
A new employee is given a laptop computer with full administrator access. This employee does not have a personal computer at home and has a child that uses the computer to send and receive e-mail, search the web, and use instant messaging. The organization’s Information Technology (IT) department discovers that a peer-to-peer program has been installed on the computer using the employee's access.
Which of the following solutions would have MOST likely detected the use of peer-to-peer programs when the computer was connected to the office network?
Anti-virus software
Intrusion Prevention System (IPS)
Anti-spyware software
Integrity checking software
The best solution to detect the use of P2P programs when the computer was connected to the office network is an Intrusion Prevention System (IPS). An IPS is a device or a software that monitors, analyzes, and blocks the network traffic based on the predefined rules or policies, and that can prevent or stop any unauthorized or malicious access or activity on the network, such as P2P programs. An IPS can detect the use of P2P programs by inspecting the network packets, identifying the P2P protocols or signatures, and blocking or dropping the P2P traffic. Anti-virus software, anti-spyware software, and integrity checking software are not the best solutions to detect the use of P2P programs when the computer was connected to the office network, as they are related to the protection, removal, or verification of the software or files on the computer, not the monitoring, analysis, or blocking of the network traffic. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 512. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 528.
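Signature-based inspection, as an IPS might apply it to P2P traffic, can be sketched as follows. The BitTorrent and Gnutella handshake strings are real protocol markers, but the rule set and verdict model are illustrative, not taken from any product:

```python
# Simplified sketch of signature-based IPS inspection of packet payloads
# for P2P protocol markers. The rule set is illustrative.

P2P_SIGNATURES = {
    b"\x13BitTorrent protocol": "BitTorrent handshake",
    b"GNUTELLA CONNECT": "Gnutella handshake",
}

def inspect_packet(payload: bytes):
    """Return (verdict, reason): drop matched P2P traffic, allow the rest."""
    for signature, name in P2P_SIGNATURES.items():
        if signature in payload:
            return ("drop", name)
    return ("allow", None)

print(inspect_packet(b"\x13BitTorrent protocol" + b"\x00" * 8))
# ('drop', 'BitTorrent handshake')
print(inspect_packet(b"GET / HTTP/1.1\r\n"))  # ('allow', None)
```

The key property, and the reason an IPS fits this question, is that inspection happens on network traffic in transit, independently of what software is installed on the endpoint.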
Which of the following assures that rules are followed in an identity management architecture?
Policy database
Digital signature
Policy decision point
Policy enforcement point
The component that assures that rules are followed in an identity management architecture is the policy enforcement point. A policy enforcement point is a device or software that implements and enforces the security policies and rules defined by the policy decision point. A policy decision point is a device or software that evaluates and makes decisions about the access requests and privileges of the users or devices based on the security policies and rules. A policy enforcement point can be a firewall, a router, a switch, a proxy, or an application that controls the access to the network or system resources. A policy database, a digital signature, and a policy decision point are not the components that assure that rules are followed in an identity management architecture, as they are related to the storage, verification, or definition of the security policies and rules, not the implementation or enforcement of them. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 664. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 680.
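The split between policy database, decision point, and enforcement point can be sketched in a few lines; the rule format and attributes below are hypothetical:

```python
# Minimal sketch of the PDP/PEP split: the policy decision point (PDP)
# evaluates rules from the policy database, and the policy enforcement
# point (PEP) acts on the decision. Rule attributes are hypothetical.

POLICY_DB = [  # policy database: stores the rules, but enforces nothing
    {"role": "admin", "resource": "payroll", "action": "read", "effect": "permit"},
    {"role": "staff", "resource": "payroll", "action": "read", "effect": "deny"},
]

def pdp_decide(role, resource, action):
    """Policy decision point: evaluate the request against the rules."""
    for rule in POLICY_DB:
        if (rule["role"], rule["resource"], rule["action"]) == (role, resource, action):
            return rule["effect"]
    return "deny"  # default-deny when no rule matches

def pep_enforce(role, resource, action):
    """Policy enforcement point: act on the PDP's decision."""
    decision = pdp_decide(role, resource, action)
    return "access granted" if decision == "permit" else "access blocked"

print(pep_enforce("admin", "payroll", "read"))  # access granted
print(pep_enforce("staff", "payroll", "read"))  # access blocked
```

The sketch makes the distinction in the answer concrete: the database stores rules and the PDP decides, but only the PEP assures that the rules are actually followed at the point of access.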
Refer to the information below to answer the question.
In a Multilevel Security (MLS) system, the following sensitivity labels are used in increasing levels of sensitivity: restricted, confidential, secret, top secret. Table A lists the clearance levels for four users, while Table B lists the security classes of four different files.
Which of the following is true according to the star property (*property)?
User D can write to File 1
User B can write to File 1
User A can write to File 1
User C can write to File 1
According to the star property (*property) of the Bell-LaPadula model, a subject with a given security clearance may write data to an object if and only if the object’s security level is greater than or equal to the subject’s security level. In other words, a subject can write data to an object with the same or higher sensitivity label, but not to an object with a lower sensitivity label. This rule is also known as the no write-down rule, as it prevents the leakage of information from a higher level to a lower level. In this question, User A has a Restricted clearance, and File 1 has a Restricted security class. Therefore, User A can write to File 1, as they have the same security level. User B, User C, and User D cannot write to File 1, as they have higher clearances than the security class of File 1, and they would violate the star property by writing down information to a lower level. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 498. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 514.
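The star property check for this question can be expressed directly in code, using the four sensitivity labels in increasing order:

```python
# Sketch of the Bell-LaPadula star property (*-property): a subject may
# write to an object only if the object's level is at least the
# subject's level (no write-down).

LEVELS = {"restricted": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_write(subject_clearance: str, object_class: str) -> bool:
    return LEVELS[object_class] >= LEVELS[subject_clearance]

# File 1 is Restricted, so only a Restricted subject may write to it
print(can_write("restricted", "restricted"))    # True  (User A)
print(can_write("secret", "restricted"))        # False (write-down blocked)
print(can_write("confidential", "top secret"))  # True  (write-up allowed)
```

Encoding the labels as ordered integers makes the "no write-down" rule a single comparison, which is also how lattice-based models are typically implemented.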
Refer to the information below to answer the question.
A large, multinational organization has decided to outsource a portion of their Information Technology (IT) organization to a third-party provider’s facility. This provider will be responsible for the design, development, testing, and support of several critical, customer-based applications used by the organization.
The third party needs to have
processes that are identical to that of the organization doing the outsourcing.
access to the original personnel that were on staff at the organization.
the ability to maintain all of the applications in languages they are familiar with.
access to the skill sets consistent with the programming languages used by the organization.
The third party needs to have access to the skill sets consistent with the programming languages used by the organization. The programming languages are the tools or the methods of creating, modifying, testing, and supporting the software applications that perform the functions or the tasks required by the organization. The programming languages can vary in their syntax, semantics, features, or paradigms, and they can require different levels of expertise or experience to use them effectively or efficiently. The third party needs to have access to the skill sets consistent with the programming languages used by the organization, as it can ensure the quality, the compatibility, and the maintainability of the software applications that the third party is responsible for. The third party does not need to have processes that are identical to that of the organization doing the outsourcing, access to the original personnel that were on staff at the organization, or the ability to maintain all of the applications in languages they are familiar with, as they are related to the methods, the resources, or the preferences of the software development, not the skill sets consistent with the programming languages used by the organization. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1000. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1016.
During the procurement of a new information system, it was determined that some of the security requirements were not addressed in the system specification. Which of the following is the MOST likely reason for this?
The procurement officer lacks technical knowledge.
The security requirements have changed during the procurement process.
There were no security professionals in the vendor's bidding team.
The description of the security requirements was insufficient.
The most likely reason for some of the security requirements not being addressed in the system specification during the procurement of a new information system is that the description of the security requirements was insufficient. The description of the security requirements is the part of the procurement document that specifies the security objectives, criteria, standards, and measures that the system must meet or comply with. If the description of the security requirements is insufficient, vague, ambiguous, incomplete, or inaccurate, then the system specification may not reflect or satisfy the security needs and expectations of the organization. The procurement officer lacking technical knowledge, the security requirements changing during the procurement process, and there being no security professionals in the vendor’s bidding team are not the most likely reasons for this problem, as they do not directly affect the quality or clarity of the description of the security requirements. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1045. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1071.
Refer to the information below to answer the question.
A security practitioner detects client-based attacks on the organization’s network. A plan will be necessary to address these concerns.
In addition to web browsers, what PRIMARY areas need to be addressed concerning mobile code used for malicious purposes?
Text editors, database, and Internet phone applications
Email, presentation, and database applications
Image libraries, presentation and spreadsheet applications
Email, media players, and instant messaging applications
The primary areas that need to be addressed concerning mobile code used for malicious purposes, in addition to web browsers, are email, media players, and instant messaging applications. Mobile code is a type of code that can be transferred or executed over a network, such as the internet, without the user’s knowledge or consent, and that can perform various functions or tasks on the user’s system, such as displaying advertisements, collecting information, or installing malware. Mobile code can be embedded or attached in various types of applications or files, such as web browsers, email, media players, or instant messaging applications, and can pose a serious security threat to the user’s system or data. Text editors, database, and internet phone applications are not the primary areas that need to be addressed concerning mobile code used for malicious purposes, as they are not the common or likely sources or targets of the mobile code attacks, and they may not support or execute the mobile code as easily or frequently as the other applications. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1050. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1066.
Which of the following provides the MOST protection against data theft of sensitive information when a laptop is stolen?
Set up a BIOS and operating system password
Encrypt the virtual drive where confidential files can be stored
Implement a mandatory policy in which sensitive data cannot be stored on laptops, but only on the corporate network
Encrypt the entire disk and delete contents after a set number of failed access attempts
Encrypting the entire disk and deleting the contents after a set number of failed access attempts provides the most protection against data theft of sensitive information when a laptop is stolen. This method ensures that the data is unreadable without the correct decryption key, and that the data is erased if someone tries to guess the key or bypass the encryption. Setting up a BIOS and operating system password, encrypting the virtual drive, or implementing a policy are less effective methods, as they can be circumvented by physical access, booting from another device, or copying the data to another location. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, p. 269; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 5: Identity and Access Management (IAM), p. 521.
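A conceptual sketch of combining key-based protection with a wipe after repeated failed attempts, using only standard-library key derivation. Real full-disk encryption relies on dedicated tooling such as LUKS or BitLocker; everything below, including the class and its limits, is illustrative:

```python
# Conceptual model (stdlib only): a passphrase-derived key guards the
# data, and a failed-attempt counter triggers erasure. Not real FDE.
import hashlib
import os

class ProtectedDisk:
    MAX_FAILURES = 3  # assumed limit before contents are deleted

    def __init__(self, passphrase: str, data: bytes):
        self.salt = os.urandom(16)
        self._key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(),
                                        self.salt, 100_000)
        self._data = data
        self.failures = 0
        self.wiped = False

    def unlock(self, passphrase: str):
        """Return the data on success, None on failure; wipe after limit."""
        if self.wiped:
            return None
        attempt = hashlib.pbkdf2_hmac("sha256", passphrase.encode(),
                                      self.salt, 100_000)
        if attempt == self._key:
            self.failures = 0
            return self._data
        self.failures += 1
        if self.failures >= self.MAX_FAILURES:
            self._data = b""      # delete contents after too many failures
            self.wiped = True
        return None

disk = ProtectedDisk("correct horse", b"quarterly financials")
print(disk.unlock("correct horse"))  # b'quarterly financials'
for guess in ("a", "b", "c"):
    disk.unlock(guess)
print(disk.wiped)  # True
```

The model shows why this answer dominates the others: even with physical possession, a thief faces both a key they cannot derive and a counter that destroys the data if they try to guess it.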
Which of the following is the BEST solution to provide redundancy for telecommunications links?
Provide multiple links from the same telecommunications vendor.
Ensure that the telecommunications links connect to the network in one location.
Ensure that the telecommunications links connect to the network in multiple locations.
Provide multiple links from multiple telecommunications vendors.
The best solution to provide redundancy for telecommunications links is to provide multiple links from multiple telecommunications vendors. Redundancy is the ability to maintain the availability and functionality of a system or network in the event of a failure or disruption. By providing multiple links from multiple telecommunications vendors, the organization can ensure that there is always an alternative path for data transmission, and that the failure or outage of one vendor does not affect the entire network. Providing multiple links from the same telecommunications vendor, ensuring that the telecommunications links connect to the network in one location, and ensuring that the telecommunications links connect to the network in multiple locations are not the best solutions to provide redundancy for telecommunications links, as they do not offer the same level of diversity, resilience, and fault tolerance as providing multiple links from multiple telecommunications vendors. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 504. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 520.
Refer to the information below to answer the question.
A security practitioner detects client-based attacks on the organization’s network. A plan will be necessary to address these concerns.
What is the BEST reason for the organization to pursue a plan to mitigate client-based attacks?
Client privilege administration is inherently weaker than server privilege administration.
Client hardening and management is easier on clients than on servers.
Client-based attacks are more common and easier to exploit than server and network based attacks.
Client-based attacks have higher financial impact.
The best reason for the organization to pursue a plan to mitigate client-based attacks is that client-based attacks are more common and easier to exploit than server- and network-based attacks. Client-based attacks target client applications or systems, such as web browsers, email clients, or media players, and exploit vulnerabilities in the client software or configuration, or weaknesses in user behavior. They are more common and easier to exploit than server- and network-based attacks because client systems are more exposed and accessible to attackers, client software is more diverse and harder to secure consistently, and user behavior is unpredictable and prone to error. Therefore, the organization needs a plan to mitigate client-based attacks, as they pose a significant threat to the organization’s data, systems, and network. The other options are not the best reasons: the claims that client privilege administration is inherently weaker than server privilege administration, that client hardening and management is easier on clients than on servers, and that client-based attacks have higher financial impact are either unsupported by evidence or not specific to client-side security. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1050. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1066.
Refer to the information below to answer the question.
A large organization uses unique identifiers and requires them at the start of every system session. Application access is based on job classification. The organization is subject to periodic independent reviews of access controls and violations. The organization uses wired and wireless networks and remote access. The organization also uses secure connections to branch offices and secure backup and recovery strategies for selected information and processes.
In addition to authentication at the start of the user session, best practice would require re-authentication
periodically during a session.
for each business process.
at system sign-off.
after a period of inactivity.
Best practice would require re-authentication after a period of inactivity, in addition to authentication at the start of the user session. Authentication is the process of verifying the identity or credentials of a user or device requesting access to a system or resource. Re-authentication repeats that verification after a certain condition or event, such as a change of location, role, or privilege, or a period of inactivity. Re-authentication enhances the security and accountability of access control: it can prevent or detect unauthorized or malicious use of credentials, and it confirms that the session is still active and valid. Re-authenticating after a period of inactivity specifically protects against someone who gains physical access to an unattended session, such as a co-worker, a visitor, or a thief. Re-authenticating periodically during a session, for each business process, or at system sign-off are not best practices, as they add little security value while causing inconvenience and frustration for users. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 685. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 701.
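The inactivity rule can be sketched as a simple session-state check (the timeout value, class, and method names below are illustrative assumptions, not from any standard):

```python
import time

INACTIVITY_TIMEOUT = 15 * 60  # seconds; 15 minutes is an assumed policy value


class Session:
    def __init__(self):
        self.authenticated = False
        self.last_activity = None

    def authenticate(self):
        # Initial authentication at the start of the session.
        self.authenticated = True
        self.last_activity = time.monotonic()

    def touch(self):
        # Record user activity; only meaningful while authenticated.
        if self.authenticated:
            self.last_activity = time.monotonic()

    def requires_reauthentication(self):
        # Re-authentication is required once the idle period exceeds the timeout.
        if not self.authenticated:
            return True
        return time.monotonic() - self.last_activity > INACTIVITY_TIMEOUT
```

A session manager would call `touch()` on each user action and check `requires_reauthentication()` before serving the next request.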
Refer to the information below to answer the question.
A new employee is given a laptop computer with full administrator access. This employee does not have a personal computer at home and has a child that uses the computer to send and receive e-mail, search the web, and use instant messaging. The organization’s Information Technology (IT) department discovers that a peer-to-peer program has been installed on the computer using the employee's access.
Which of the following documents explains the proper use of the organization's assets?
Human resources policy
Acceptable use policy
Code of ethics
Access control policy
The document that explains the proper use of the organization’s assets is the acceptable use policy. An acceptable use policy is a document that defines the rules and guidelines for the appropriate and responsible use of the organization’s information systems and resources, such as computers, networks, or devices. An acceptable use policy can help to prevent or reduce the misuse, abuse, or damage of the organization’s assets, and to protect the security, privacy, and reputation of the organization and its users. An acceptable use policy can also specify the consequences or penalties for violating the policy, such as disciplinary actions, termination, or legal actions. A human resources policy, a code of ethics, and an access control policy are not the documents that explain the proper use of the organization’s assets, as they are related to the management, values, or authorization of the organization’s employees or users, not the usage or responsibility of the organization’s information systems or resources. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 47. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 62.
Host-Based Intrusion Protection (HIPS) systems are often deployed in monitoring or learning mode during their initial implementation. What is the objective of starting in this mode?
Automatically create exceptions for specific actions or files
Determine which files are unsafe to access and blacklist them
Automatically whitelist actions or files known to the system
Build a baseline of normal or safe system events for review
A Host-Based Intrusion Protection (HIPS) system is software that monitors and blocks malicious activities on a single host, such as a computer or a server. A HIPS system can also prevent unauthorized changes to the system configuration, files, or registry.
During the initial implementation, a HIPS system is often deployed in monitoring or learning mode, which means that it observes the normal behavior of the system and the applications running on it, without blocking or alerting on any events. The objective of starting in this mode is to automatically create exceptions for specific actions or files that are legitimate and safe, but may otherwise trigger false alarms or unwanted blocks by the HIPS system.
By creating exceptions, the HIPS system can reduce the number of false positives and improve its accuracy and efficiency. However, the monitoring or learning mode should not last too long, as it may also expose the system to potential attacks that are not detected or prevented by the HIPS system. Therefore, after a sufficient baseline of normal behavior is established, the HIPS system should be switched to a more proactive mode, such as alerting or blocking mode, which can actively respond to suspicious or malicious events.
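The learning-then-enforce flow described above can be sketched as a toy two-mode monitor (the event strings, class name, and two-mode design are illustrative assumptions, not any product’s API):

```python
# Minimal sketch of HIPS learning vs. enforcing mode. Event strings and the
# two-mode design are illustrative assumptions, not a specific product's API.

class Hips:
    def __init__(self):
        self.baseline = set()   # events observed during learning mode
        self.learning = True

    def observe(self, event):
        if self.learning:
            # Learning mode: record normal behavior, never block.
            self.baseline.add(event)
            return "allowed"
        # Enforcing mode: anything outside the baseline is blocked.
        return "allowed" if event in self.baseline else "blocked"

    def enforce(self):
        self.learning = False


hips = Hips()
for e in ["browser.exe:read:/home/user", "editor.exe:write:/home/user/doc"]:
    hips.observe(e)          # build the baseline of normal events
hips.enforce()
print(hips.observe("browser.exe:read:/home/user"))    # known-good event
print(hips.observe("malware.exe:write:/etc/passwd"))  # outside the baseline
```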
When using third-party software developers, which of the following is the MOST effective method of providing software development Quality Assurance (QA)?
Retain intellectual property rights through contractual wording.
Perform overlapping code reviews by both parties.
Verify that the contractors attend development planning meetings.
Create a separate contractor development environment.
When using third-party software developers, the most effective method of providing software development Quality Assurance (QA) is to perform overlapping code reviews by both parties. Code reviews are the process of examining the source code of an application for quality, functionality, security, and compliance. Overlapping code reviews by both parties means that the code is reviewed by both the third-party developers and the contracting organization, and that the reviews cover the same or similar aspects of the code. This can ensure that the code meets the requirements and specifications, that the code is free of defects or vulnerabilities, and that the code is consistent and compatible with the existing system or environment. Retaining intellectual property rights through contractual wording, verifying that the contractors attend development planning meetings, and creating a separate contractor development environment are all possible methods of providing software development QA, but they are not the most effective method of doing so. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1026. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1050.
What is the MOST effective method for gaining unauthorized access to a file protected with a long complex password?
Brute force attack
Frequency analysis
Social engineering
Dictionary attack
The most effective method for gaining unauthorized access to a file protected with a long complex password is social engineering. Social engineering is a type of attack that exploits the human factor or the psychological weaknesses of the target, such as trust, curiosity, greed, or fear, to manipulate them into revealing sensitive information, such as passwords, or performing malicious actions, such as opening malicious attachments or clicking malicious links. Social engineering can bypass the technical security controls, such as encryption or authentication, and can be more efficient and successful than other methods that rely on brute force or guesswork. Brute force attack, frequency analysis, and dictionary attack are not the most effective methods for gaining unauthorized access to a file protected with a long complex password, as they require a lot of time, resources, and computing power, and they can be thwarted by the use of strong passwords, password policies, or password managers. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, Security Assessment and Testing, page 813. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, Security Assessment and Testing, page 829.
Refer to the information below to answer the question.
An organization experiencing a negative financial impact is forced to reduce budgets and the number of Information Technology (IT) operations staff performing basic logical access security administration functions. Security processes have been tightly integrated into normal IT operations and are not separate and distinct roles.
Which of the following will indicate where the IT budget is BEST allocated during this time?
Policies
Frameworks
Metrics
Guidelines
The best indicator of where the IT budget is best allocated during this time is metrics. Metrics are measurements or indicators of the performance, effectiveness, efficiency, or quality of IT processes, activities, and outcomes. Metrics support allocating the IT budget in a rational, objective, evidence-based manner: they show the value, impact, or return of IT investments, and they identify gaps, risks, and opportunities for improvement. Metrics also help justify, communicate, and report the budget allocation to senior management and stakeholders, and align it with business needs and requirements. Policies, frameworks, and guidelines are not the best indicators, as they are documents or models that define, guide, or standardize IT processes rather than measure their performance, effectiveness, efficiency, or quality. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 38. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 53.
Which of the following is the BEST way to determine if a particular system is able to identify malicious software without executing it?
Testing with a Botnet
Testing with an EICAR file
Executing a binary shellcode
Run multiple antivirus programs
The best way to determine if a particular system is able to identify malicious software without executing it is to test it with an EICAR file. An EICAR file is a standard file that is used to test the functionality and performance of antivirus software, without using any real malware. An EICAR file is a harmless text file that contains a specific string of characters that is recognized by most antivirus software as a virus signature. An EICAR file can be used to check if the antivirus software is installed, configured, updated, and working properly, without risking any damage or infection to the system. Testing with a botnet, executing a binary shellcode, and running multiple antivirus programs are not the best ways to determine if a particular system is able to identify malicious software without executing it, as they may involve using or creating actual malware, which can be dangerous, illegal, or unethical, and may compromise the security or performance of the system. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, Security Assessment and Testing, page 813. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, Security Assessment and Testing, page 829.
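A test along these lines can be scripted: the EICAR file is a published 68-byte ASCII string that antivirus products recognize by convention. A minimal sketch that writes it to disk (the file name is an arbitrary choice):

```python
# The standard 68-byte EICAR antivirus test string (harmless by design).
EICAR = (r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
         r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*")


def write_eicar(path="eicar_test.txt"):
    # Writing this file should trigger any properly configured antivirus
    # scanner, without executing or containing any real malware.
    with open(path, "w") as f:
        f.write(EICAR)
    return path


assert len(EICAR) == 68  # the string is defined as exactly 68 bytes
```

On a host with on-access scanning, the write itself is typically intercepted; otherwise an on-demand scan of the file should flag it.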
Which of the following is the MOST beneficial to review when performing an IT audit?
Audit policy
Security log
Security policies
Configuration settings
The most beneficial item to review when performing an IT audit is the security log. The security log is a record of the events and activities that occur on a system or network, such as logins, logouts, file accesses, policy changes, or security incidents. The security log provides valuable information for the auditor to assess the security posture, performance, and compliance of the system or network, and to identify any anomalies, vulnerabilities, or breaches that need to be addressed. The other options are not as beneficial: the audit policy and the security policies describe intended controls rather than what actually happened, and the configuration settings show how the system is set up but not how it has actually behaved. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 405; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, page 465.
Which of the following mobile code security models relies only on trust?
Code signing
Class authentication
Sandboxing
Type safety
Code signing is the mobile code security model that relies only on trust. Mobile code is a type of software that can be transferred from one system to another and executed without installation or compilation. Mobile code can be used for various purposes, such as web applications, applets, scripts, macros, etc. Mobile code can also pose various security risks, such as malicious code, unauthorized access, data leakage, etc. Mobile code security models are the techniques that are used to protect the systems and users from the threats of mobile code. Code signing is a mobile code security model that relies only on trust, which means that the security of the mobile code depends on the reputation and credibility of the code provider. Code signing works as follows: the code provider computes a hash of the code and signs that hash with its private key; the signature and the provider’s digital certificate are attached to the code; the code consumer verifies the certificate against a trusted certificate authority, validates the signature with the provider’s public key, and then decides whether to trust and execute the code.
Code signing relies only on trust because it does not enforce any security restrictions or controls on the mobile code, but rather leaves the decision to the code consumer. Code signing also does not guarantee the quality or functionality of the mobile code, but rather the authenticity and integrity of the code provider. Code signing can be effective if the code consumer knows and trusts the code provider, and if the code provider follows the security standards and best practices. However, code signing can also be ineffective if the code consumer is unaware or careless of the code provider, or if the code provider is compromised or malicious.
The other options are not mobile code security models that rely only on trust, but rather on other techniques that limit or isolate the mobile code. Class authentication is a mobile code security model that verifies the permissions and capabilities of the mobile code based on its class or type, and allows or denies the execution of the mobile code accordingly. Sandboxing is a mobile code security model that executes the mobile code in a separate and restricted environment, and prevents the mobile code from accessing or affecting the system resources or data. Type safety is a mobile code security model that checks the validity and consistency of the mobile code, and prevents the mobile code from performing illegal or unsafe operations.
Which technique can be used to make an encryption scheme more resistant to a known plaintext attack?
Hashing the data before encryption
Hashing the data after encryption
Compressing the data after encryption
Compressing the data before encryption
Compressing the data before encryption is a technique that can be used to make an encryption scheme more resistant to a known plaintext attack. A known plaintext attack is a type of cryptanalysis where the attacker has access to some pairs of plaintext and ciphertext encrypted with the same key, and tries to recover the key or decrypt other ciphertexts. A known plaintext attack can exploit the statistical properties or patterns of the plaintext or the ciphertext to reduce the search space or guess the key. Compressing the data before encryption can reduce the redundancy and increase the entropy of the plaintext, making it harder for the attacker to find any correlations or similarities between the plaintext and the ciphertext. Compressing the data before encryption can also reduce the size of the plaintext, making it more difficult for the attacker to obtain enough plaintext-ciphertext pairs for a successful attack.
The other options are not techniques that can be used to make an encryption scheme more resistant to a known plaintext attack, but rather techniques that introduce other problems or inefficiencies. Hashing the data before encryption destroys the data: a hash is a one-way function that cannot be reversed, so the original plaintext could never be recovered after decryption. Hashing the data after encryption adds no security, as anyone with access to the ciphertext can compute the same hash. Compressing the data after encryption is ineffective, because well-encrypted ciphertext is already high-entropy and essentially incompressible, so the compression provides no benefit and can introduce errors or vulnerabilities.
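The entropy argument can be demonstrated directly: a short sketch comparing the Shannon entropy (bits per byte) of highly redundant plaintext before and after zlib compression (the sample text is an arbitrary assumption):

```python
import math
import zlib
from collections import Counter


def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 is the maximum for byte data)."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


# Highly redundant plaintext: exactly the kind of pattern a known
# plaintext attack exploits.
plaintext = b"ATTACK AT DAWN. " * 256
compressed = zlib.compress(plaintext, level=9)

# Compression strips the repetition, so the input handed to the cipher
# has markedly higher entropy and fewer exploitable patterns.
assert shannon_entropy(compressed) > shannon_entropy(plaintext)
print(round(shannon_entropy(plaintext), 2), round(shannon_entropy(compressed), 2))
```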
Which security service is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key?
Confidentiality
Integrity
Identification
Availability
The security service that is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key is identification. Identification is the process of verifying the identity of a person or entity that claims to be who or what it is. Identification can be achieved by using public key cryptography and digital signatures, which are based on the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key. This process works as follows: the sender computes a hash of the message and encrypts that hash with the sender’s private key, producing a digital signature; the receiver decrypts the signature with the sender’s public key and compares the result to a freshly computed hash of the message; if the two match, the message must have been signed by the holder of the private key.
The process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key serves identification because it ensures that only the sender can produce a valid ciphertext that can be decrypted by the receiver, and that the receiver can verify the sender’s identity by using the sender’s public key. This process also provides non-repudiation, which means that the sender cannot deny sending the message or the receiver cannot deny receiving the message, as the ciphertext serves as a proof of origin and delivery.
The other options are not the security services that are served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key. Confidentiality is the process of ensuring that the message is only readable by the intended parties, and it is achieved by encrypting plaintext with the receiver’s public key and decrypting ciphertext with the receiver’s private key. Integrity is the process of ensuring that the message is not modified or corrupted during transmission, and it is achieved by using hash functions and message authentication codes. Availability is the process of ensuring that the message is accessible and usable by the authorized parties, and it is achieved by using redundancy, backup, and recovery mechanisms.
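The sign-with-the-private-key, verify-with-the-public-key flow can be illustrated with textbook RSA and deliberately tiny numbers (a toy for illustration only; real signatures use large keys, padding, and a full hash function):

```python
# Textbook RSA with deliberately tiny numbers: illustration only, never
# secure in practice (no padding, trivially factorable modulus).
p, q = 61, 53
n = p * q                  # 3233, the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent (part of the sender's public key)
d = pow(e, -1, phi)        # private exponent (the sender's private key)

message_digest = 1234      # stands in for a hash of the message

# Sender "encrypts" the digest with the PRIVATE key: this is signing.
signature = pow(message_digest, d, n)

# Receiver "decrypts" with the sender's PUBLIC key and compares digests.
recovered = pow(signature, e, n)
assert recovered == message_digest  # the sender's identity is confirmed
```

Only the holder of `d` could have produced a signature that verifies under `e`, which is exactly the identification (and non-repudiation) property the text describes.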
Who in the organization is accountable for classification of data information assets?
Data owner
Data architect
Chief Information Security Officer (CISO)
Chief Information Officer (CIO)
The person in the organization who is accountable for the classification of data information assets is the data owner. The data owner is the person or entity that has the authority and responsibility for the creation, collection, processing, and disposal of a set of data. The data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. The data owner should be able to determine the impact of the data on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the data on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data. The data owner should also ensure that the data is properly labeled, stored, accessed, shared, and destroyed according to the data classification policy and procedures.
The other options are not the persons in the organization who are accountable for the classification of data information assets, but rather persons who have other roles or functions related to data management. The data architect is the person or entity that designs and models the structure, format, and relationships of the data, as well as the data standards, specifications, and lifecycle. The data architect supports the data owner by providing technical guidance and expertise on the data architecture and quality. The Chief Information Security Officer (CISO) is the person or entity that oversees the security strategy, policies, and programs of the organization, as well as the security performance and incidents. The CISO supports the data owner by providing security leadership and governance, as well as ensuring the compliance and alignment of the data security with the organizational objectives and regulations. The Chief Information Officer (CIO) is the person or entity that manages the information technology (IT) resources and services of the organization, as well as the IT strategy and innovation. The CIO supports the data owner by providing IT management and direction, as well as ensuring the availability, reliability, and scalability of the IT infrastructure and applications.
The use of private and public encryption keys is fundamental in the implementation of which of the following?
Diffie-Hellman algorithm
Secure Sockets Layer (SSL)
Advanced Encryption Standard (AES)
Message Digest 5 (MD5)
The use of private and public encryption keys is fundamental in the implementation of Secure Sockets Layer (SSL). SSL is a protocol that provides secure communication over the Internet by using public key cryptography and digital certificates. SSL works as follows: the client contacts the server, and the server presents its digital certificate containing its public key; the client verifies the certificate against a trusted certificate authority; the client then generates a secret value, encrypts it with the server’s public key, and sends it to the server, which decrypts it with its private key; both parties derive a shared session key from that secret and use symmetric encryption for the remainder of the session.
The use of private and public encryption keys is fundamental in the implementation of SSL because it enables the authentication of the parties, the establishment of the shared secret key, and the protection of the data from eavesdropping, tampering, and replay attacks.
The other options are not protocols or algorithms that use private and public encryption keys in their implementation. Diffie-Hellman algorithm is a method for generating a shared secret key between two parties, but it does not use private and public encryption keys, but rather public and private parameters. Advanced Encryption Standard (AES) is a symmetric encryption algorithm that uses the same key for encryption and decryption, but it does not use private and public encryption keys, but rather a single secret key. Message Digest 5 (MD5) is a hash function that produces a fixed-length output from a variable-length input, but it does not use private and public encryption keys, but rather a one-way mathematical function.
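The distinction drawn above for Diffie-Hellman (public and private parameters rather than encryption keys) can be seen in a toy exchange with tiny numbers (illustrative only; real deployments use large primes or elliptic-curve groups):

```python
# Toy Diffie-Hellman key agreement (illustration only; real use requires
# large safe primes or elliptic-curve groups).
p = 23   # public prime modulus
g = 5    # public generator

a = 6    # Alice's private parameter (never transmitted)
b = 15   # Bob's private parameter (never transmitted)

A = pow(g, a, p)         # Alice sends g^a mod p
B = pow(g, b, p)         # Bob sends g^b mod p

# Each side combines the other's public value with its own private value;
# both arrive at g^(ab) mod p without ever sending the secret itself.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

Note that nothing here is "encrypted" with a key pair: the security rests on the difficulty of recovering `a` or `b` from the exchanged public values, which is why the text classes these as parameters rather than encryption keys.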
Which component of the Security Content Automation Protocol (SCAP) specification contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments?
Common Vulnerabilities and Exposures (CVE)
Common Vulnerability Scoring System (CVSS)
Asset Reporting Format (ARF)
Open Vulnerability and Assessment Language (OVAL)
The component of the Security Content Automation Protocol (SCAP) specification that contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments is the Common Vulnerability Scoring System (CVSS). CVSS is a framework that provides a standardized and objective way to measure and communicate the characteristics and impacts of vulnerabilities. CVSS consists of three metric groups: base, temporal, and environmental. The base metric group captures the intrinsic and fundamental properties of a vulnerability that are constant over time and across user environments. The temporal metric group captures the characteristics of a vulnerability that change over time, such as the availability and effectiveness of exploits, patches, and workarounds. The environmental metric group captures the characteristics of a vulnerability that are relevant and unique to a user’s environment, such as the configuration and importance of the affected system. Each metric group has a set of metrics that are assigned values based on the vulnerability’s attributes. The values are then combined using a formula to produce a numerical score that ranges from 0 to 10, where 0 means no impact and 10 means critical impact. The score can also be translated into a qualitative rating that ranges from none to low, medium, high, and critical. CVSS provides a consistent and comprehensive way to estimate the severity of vulnerabilities and prioritize their remediation.
The other options are not components of the SCAP specification that contain the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments, but rather components that serve other purposes. Common Vulnerabilities and Exposures (CVE) is a component that provides a standardized and unique identifier and description for each publicly known vulnerability. CVE facilitates the sharing and comparison of vulnerability information across different sources and tools. Asset Reporting Format (ARF) is a component that provides a standardized and extensible format for expressing the information about the assets and their characteristics, such as configuration, vulnerabilities, and compliance. ARF enables the aggregation and correlation of asset information from different sources and tools. Open Vulnerability and Assessment Language (OVAL) is a component that provides a standardized and expressive language for defining and testing the state of a system for the presence of vulnerabilities, configuration issues, patches, and other aspects. OVAL enables the automation and interoperability of vulnerability assessment and management.
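The score-to-rating translation mentioned above follows the published CVSS v3.x qualitative scale; a minimal sketch:

```python
def cvss_rating(score: float) -> str:
    """Qualitative severity rating for a CVSS v3.x base score (0.0-10.0)."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"


print(cvss_rating(9.8))  # e.g. a typical unauthenticated RCE -> "Critical"
print(cvss_rating(5.3))  # -> "Medium"
```

The bands (None 0.0, Low 0.1-3.9, Medium 4.0-6.9, High 7.0-8.9, Critical 9.0-10.0) are the standard CVSS v3.x qualitative severity rating scale.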
What is the second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management?
Implementation Phase
Initialization Phase
Cancellation Phase
Issued Phase
The second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management is the initialization phase. PKI is a system that uses public key cryptography and digital certificates to provide authentication, confidentiality, integrity, and non-repudiation for electronic transactions. PKI key/certificate life-cycle management is the process of managing the creation, distribution, usage, storage, revocation, and expiration of keys and certificates in a PKI system. The key/certificate life-cycle management consists of six phases: pre-certification, initialization, certification, operational, suspension, and termination. The initialization phase is the second phase, where the key pair and the certificate request are generated by the end entity or the registration authority (RA). The initialization phase involves the following steps: the end entity registers with the RA and provides its identity information; the key pair is generated by the end entity or the RA; and a certificate request containing the public key and the identity attributes is assembled for submission to the certification authority (CA).
The other options are not the second phase of PKI key/certificate life-cycle management, but rather other phases. The implementation phase is not a phase of PKI key/certificate life-cycle management, but rather a phase of PKI system deployment, where the PKI components and policies are installed and configured. The cancellation phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the termination phase, where the key pair and the certificate are permanently revoked and deleted. The issued phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the certification phase, where the CA verifies and approves the certificate request and issues the certificate to the end entity or the RA.
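The six phases named above can be sketched as a simple ordered sequence, with suspension as a reversible detour (a simplification of the text’s model, not a formal PKI standard):

```python
# Simplified sketch of the six life-cycle phases named in the text as a
# mostly linear sequence. This mirrors the text's model only.
PHASES = ["pre-certification", "initialization", "certification",
          "operational", "suspension", "termination"]


def next_phases(phase: str) -> list:
    """Allowed transitions out of a phase, per the simplified model."""
    i = PHASES.index(phase)
    if phase == "operational":
        return ["suspension", "termination"]
    if phase == "suspension":
        return ["operational", "termination"]   # resume or revoke
    if phase == "termination":
        return []                               # terminal state
    return [PHASES[i + 1]]


print(PHASES[1])                     # the second phase: "initialization"
print(next_phases("initialization"))
```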
Which of the following is an effective control in preventing electronic cloning of Radio Frequency Identification (RFID) based access cards?
Personal Identity Verification (PIV)
Cardholder Unique Identifier (CHUID) authentication
Physical Access Control System (PACS) repeated attempt detection
Asymmetric Card Authentication Key (CAK) challenge-response
Asymmetric Card Authentication Key (CAK) challenge-response is an effective control in preventing electronic cloning of RFID based access cards. RFID based access cards are contactless cards that use radio frequency identification (RFID) technology to communicate with a reader and grant access to a physical or logical resource. RFID based access cards are vulnerable to electronic cloning, which is the process of copying the data and identity of a legitimate card to a counterfeit card, and using it to impersonate the original cardholder and gain unauthorized access. Asymmetric CAK challenge-response is a cryptographic technique that prevents electronic cloning by using public key cryptography and digital signatures to verify the authenticity and integrity of the card and the reader. Asymmetric CAK challenge-response works as follows: the reader sends a fresh random challenge (nonce) to the card; the card signs the challenge with its private key and returns the signature together with its certificate; the reader validates the certificate, verifies the signature with the card’s public key, and grants access only if the verification succeeds; the card can likewise challenge the reader to achieve mutual authentication.
Asymmetric CAK challenge-response prevents electronic cloning because the private keys of the card and the reader are never transmitted or exposed, and the signatures are unique and non-reusable for each transaction. Therefore, a cloned card cannot produce a valid signature without knowing the private key of the original card, and a rogue reader cannot impersonate a legitimate reader without knowing its private key.
The other options are not as effective as asymmetric CAK challenge-response in preventing electronic cloning of RFID based access cards. Personal Identity Verification (PIV) is a standard for federal employees and contractors to use smart cards for physical and logical access, but it does not specify the cryptographic technique for RFID based access cards. Cardholder Unique Identifier (CHUID) authentication is a technique that uses a unique number and a digital certificate to identify the card and the cardholder, but it does not prevent replay attacks or verify the reader’s identity. Physical Access Control System (PACS) repeated attempt detection is a technique that monitors and alerts on multiple failed or suspicious attempts to access a resource, but it does not prevent the cloning of the card or the impersonation of the reader.
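The challenge-response exchange described above can be sketched in Python. This is a toy simulation: a keyed hash stands in for the asymmetric signature (a real card signs with a private key and the reader verifies with the matching public key), and all names are illustrative.

```python
import hashlib
import secrets

def sign(private_key: bytes, challenge: bytes) -> bytes:
    # Stand-in for an asymmetric signature over the reader's challenge.
    return hashlib.sha256(private_key + challenge).digest()

def verify(private_key: bytes, challenge: bytes, signature: bytes) -> bool:
    # Stand-in for public-key verification; a real reader never holds the
    # card's private key, it verifies with the matching public key.
    return secrets.compare_digest(sign(private_key, challenge), signature)

def authenticate(card_key: bytes) -> bool:
    # 1. Reader issues a fresh random challenge for every transaction,
    #    so a captured signature cannot be replayed.
    challenge = secrets.token_bytes(16)
    # 2. Card signs the challenge with its private key.
    signature = sign(card_key, challenge)
    # 3. Reader verifies the signature.
    return verify(card_key, challenge, signature)

genuine_key = secrets.token_bytes(32)
print(authenticate(genuine_key))    # genuine card authenticates

# A cloned card that copied the card's stored data but not its private key
# cannot answer a fresh challenge.
challenge = secrets.token_bytes(16)
cloned_signature = sign(secrets.token_bytes(32), challenge)  # wrong key
print(verify(genuine_key, challenge, cloned_signature))      # False
```

The key point the sketch illustrates is that the secret needed to answer the challenge never leaves the card, so copying everything a reader can see is not enough to clone it.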
Which of the following BEST describes the responsibilities of a data owner?
Ensuring quality and validation through periodic audits for ongoing data integrity
Maintaining fundamental data availability, including data storage and archiving
Ensuring accessibility to appropriate users, maintaining appropriate levels of data security
Determining the impact the information has on the mission of the organization
The best description of the responsibilities of a data owner is determining the impact the information has on the mission of the organization. A data owner is a person or entity that has the authority and accountability for the creation, collection, processing, and disposal of a set of data. A data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. A data owner should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the best descriptions of the responsibilities of a data owner, but rather the responsibilities of other roles or functions related to data management. Ensuring quality and validation through periodic audits for ongoing data integrity is a responsibility of a data steward, who is a person or entity that oversees the quality, consistency, and usability of the data. Maintaining fundamental data availability, including data storage and archiving is a responsibility of a data custodian, who is a person or entity that implements and maintains the technical and physical security of the data. Ensuring accessibility to appropriate users, maintaining appropriate levels of data security is a responsibility of a data controller, who is a person or entity that determines the purposes and means of processing the data.
When implementing a data classification program, why is it important to avoid too much granularity?
The process will require too many resources
It will be difficult to apply to both hardware and software
It will be difficult to assign ownership to the data
The process will be perceived as having value
When implementing a data classification program, it is important to avoid too much granularity, because the process will require too many resources. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not a simple or straightforward process, as it involves many factors, such as the nature, context, and scope of the data, the stakeholders, the regulations, and the standards. If the data classification program has too many levels or categories of data, it will increase the complexity, cost, and time of the process, and reduce the efficiency and effectiveness of the data protection. Therefore, data classification should be done with a balance between granularity and simplicity, and follow the principle of proportionality, which means that the level of protection should be proportional to the level of risk.
The other options are not the main reasons to avoid too much granularity in data classification, but rather the potential challenges or benefits of data classification. It will be difficult to apply to both hardware and software is a challenge of data classification, as it requires consistent and compatible methods and tools for labeling and protecting data across different types of media and devices. It will be difficult to assign ownership to the data is a challenge of data classification, as it requires clear and accountable roles and responsibilities for the creation, collection, processing, and disposal of data. The process will be perceived as having value is a benefit of data classification, as it demonstrates the commitment and awareness of the organization to protect its data assets and comply with its obligations.
Which one of the following affects the classification of data?
Assigned security label
Multilevel Security (MLS) architecture
Minimum query size
Passage of time
The passage of time is one of the factors that affects the classification of data. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not static, but dynamic, meaning that it can change over time depending on various factors. One of these factors is the passage of time, which can affect the relevance, usefulness, or sensitivity of the data. For example, data that is classified as confidential or secret at one point in time may become obsolete, outdated, or declassified at a later point in time, and thus require a lower level of protection. Conversely, data that is classified as public or unclassified at one point in time may become more valuable, sensitive, or regulated at a later point in time, and thus require a higher level of protection. Therefore, data classification should be reviewed and updated periodically to reflect the changes in the data over time.
The other options are not factors that affect the classification of data, but rather the outcomes or components of data classification. Assigned security label is the result of data classification, which indicates the level of sensitivity or criticality of the data. Multilevel Security (MLS) architecture is a system that supports data classification, which allows different levels of access to data based on the clearance and need-to-know of the users. Minimum query size is a parameter that can be used to enforce data classification, which limits the amount of data that can be retrieved or displayed at a time.
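The effect of the passage of time on classification can be captured as a simple periodic-review check. The review intervals below are illustrative, not drawn from any standard:

```python
from datetime import date, timedelta

# Illustrative review intervals per classification level.
REVIEW_INTERVAL = {
    "public": timedelta(days=1095),
    "confidential": timedelta(days=365),
    "secret": timedelta(days=180),
}

def review_due(classification: str, last_reviewed: date, today: date) -> bool:
    # A label is only trustworthy until its next scheduled review; the
    # passage of time alone can make it stale.
    return today - last_reviewed >= REVIEW_INTERVAL[classification]

print(review_due("secret", date(2023, 1, 1), date(2023, 9, 1)))  # True
print(review_due("public", date(2023, 1, 1), date(2023, 9, 1)))  # False
```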
In a data classification scheme, the data is owned by the
system security managers
business managers
Information Technology (IT) managers
end users
In a data classification scheme, the data is owned by the business managers. Business managers are the persons or entities that have the authority and accountability for the creation, collection, processing, and disposal of a set of data. Business managers are also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. Business managers should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the data owners in a data classification scheme, but rather the other roles or functions related to data management. System security managers are the persons or entities that oversee the security of the information systems and networks that store, process, and transmit the data. They are responsible for implementing and maintaining the technical and physical security of the data, as well as monitoring and auditing the security performance and incidents. Information Technology (IT) managers are the persons or entities that manage the IT resources and services that support the business processes and functions that use the data. They are responsible for ensuring the availability, reliability, and scalability of the IT infrastructure and applications, as well as providing technical support and guidance to the users and stakeholders. End users are the persons or entities that access and use the data for their legitimate purposes and needs. They are responsible for complying with the security policies and procedures for the data, as well as reporting any security issues or violations.
An organization has doubled in size due to a rapid market share increase. The size of the Information Technology (IT) staff has maintained pace with this growth. The organization hires several contractors whose onsite time is limited. The IT department has pushed its limits building servers and rolling out workstations and has a backlog of account management requests.
Which contract is BEST in offloading the task from the IT staff?
Platform as a Service (PaaS)
Identity as a Service (IDaaS)
Desktop as a Service (DaaS)
Software as a Service (SaaS)
Identity as a Service (IDaaS) is the best contract in offloading the task of account management from the IT staff. IDaaS is a cloud-based service that provides identity and access management (IAM) functions, such as user authentication, authorization, provisioning, deprovisioning, password management, single sign-on (SSO), and multifactor authentication (MFA). IDaaS can help the organization to streamline and automate the account management process, reduce the workload and costs of the IT staff, and improve the security and compliance of the user accounts. IDaaS can also support the contractors who have limited onsite time, as they can access the organization’s resources remotely and securely through the IDaaS provider.
The other options are not as effective as IDaaS in offloading the task of account management from the IT staff, as they do not provide IAM functions. Platform as a Service (PaaS) is a cloud-based service that provides a platform for developing, testing, and deploying applications, but it does not manage the user accounts for the applications. Desktop as a Service (DaaS) is a cloud-based service that provides virtual desktops for users to access applications and data, but it does not manage the user accounts for the virtual desktops. Software as a Service (SaaS) is a cloud-based service that provides software applications for users to use, but it does not manage the user accounts for the software applications.
Which of the following is MOST important when assigning ownership of an asset to a department?
The department should report to the business owner
Ownership of the asset should be periodically reviewed
Individual accountability should be ensured
All members should be trained on their responsibilities
When assigning ownership of an asset to a department, the most important factor is to ensure individual accountability for the asset. Individual accountability means that each person who has access to or uses the asset is responsible for its protection and proper handling. Individual accountability also implies that each person who causes or contributes to a security breach or incident involving the asset can be identified and held liable. Individual accountability can be achieved by implementing security controls such as authentication, authorization, auditing, and logging.
The other options are not as important as ensuring individual accountability, as they do not directly address the security risks associated with the asset. The department should report to the business owner is a management issue, not a security issue. Ownership of the asset should be periodically reviewed is a good practice, but it does not prevent misuse or abuse of the asset. All members should be trained on their responsibilities is a preventive measure, but it does not guarantee compliance or enforcement of the responsibilities.
Which of the following is an initial consideration when developing an information security management system?
Identify the contractual security obligations that apply to the organizations
Understand the value of the information assets
Identify the level of residual risk that is tolerable to management
Identify relevant legislative and regulatory compliance requirements
When developing an information security management system (ISMS), an initial consideration is to understand the value of the information assets that the organization owns or processes. An information asset is any data, information, or knowledge that has value to the organization and supports its mission, objectives, and operations. Understanding the value of the information assets helps to determine the appropriate level of protection and investment for them, as well as the potential impact and consequences of losing, compromising, or disclosing them. Understanding the value of the information assets also helps to identify the stakeholders, owners, and custodians of the information assets, and their roles and responsibilities in the ISMS.
The other options are not initial considerations, but rather subsequent or concurrent considerations when developing an ISMS. Identifying the contractual security obligations that apply to the organizations is a consideration that depends on the nature, scope, and context of the information assets, as well as the relationships and agreements with the external parties. Identifying the level of residual risk that is tolerable to management is a consideration that depends on the risk appetite and tolerance of the organization, as well as the risk assessment and analysis of the information assets. Identifying relevant legislative and regulatory compliance requirements is a consideration that depends on the legal and ethical obligations and expectations of the organization, as well as the jurisdiction and industry of the information assets.
What are the roles within a scrum methodology?
Scrum master, retirements manager, and development team
System owner, scrum master, and development team
Scrum master, quality assurance team, and scrum team
Product owner, scrum master, and scrum team
The roles within a scrum methodology are product owner, scrum master, and scrum team. Scrum is an agile framework for developing, delivering, and sustaining complex products. The product owner is the person who represents the stakeholders and the business value of the product. The product owner is responsible for defining the product vision, managing the product backlog, and prioritizing the features. The scrum master is the person who facilitates the scrum process and ensures that the scrum team adheres to the scrum values, principles, and practices. The scrum master is responsible for removing impediments, coaching the team, and ensuring collaboration and communication. The scrum team is the group of people who work together to deliver the product increments. The scrum team is self-organizing, cross-functional, and accountable for the quality and timeliness of the product. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 393; [Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8: Software Development Security, page 533]
Which of the following BEST provides for non-repudiation of user account actions?
Centralized authentication system
File auditing system
Managed Intrusion Detection System (IDS)
Centralized logging system
A centralized logging system is the best option for providing non-repudiation of user account actions. Non-repudiation is the ability to prove that a certain action or event occurred and who was responsible for it, without the possibility of denial or dispute. A centralized logging system is a system that collects, stores, and analyzes the log records generated by various sources, such as applications, servers, devices, or users. A centralized logging system can provide non-repudiation by capturing and preserving the evidence of the user account actions, such as the timestamp, the username, the IP address, the action performed, and the outcome. A centralized logging system can also prevent the tampering or deletion of the log records by using encryption, hashing, digital signatures, or write-once media. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, page 382. CISSP Practice Exam | Boson, Question 10.
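One common way a centralized logging system makes records tamper-evident is hash chaining, sketched below with illustrative field names: each entry's hash covers the previous entry's hash, so modifying any earlier record breaks verification.

```python
import hashlib
import json

def append(log: list, record: dict) -> None:
    # Chain each record's hash over the previous entry's hash.
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    log.append({"record": record, "hash": digest, "prev": prev_hash})

def verify_chain(log: list) -> bool:
    # Recompute every hash from the start; any edit breaks the chain.
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append(log, {"ts": "2024-01-01T10:00:00Z", "user": "alice", "action": "login"})
append(log, {"ts": "2024-01-01T10:05:00Z", "user": "alice", "action": "delete"})
print(verify_chain(log))   # True

log[1]["record"]["user"] = "bob"   # attempted repudiation by tampering
print(verify_chain(log))   # False
```

In practice the same effect is achieved with digital signatures or write-once media, as the explanation notes; the chain simply makes any after-the-fact denial detectable.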
An enterprise is developing a baseline cybersecurity standard its suppliers must meet before being awarded a contract. Which of the following statements is TRUE about the baseline cybersecurity standard?
It should be expressed as general requirements.
It should be expressed in legal terminology.
It should be expressed in business terminology.
It should be expressed as technical requirements.
The statement that is true about the baseline cybersecurity standard that an enterprise is developing for its suppliers is that it should be expressed in business terminology. A baseline cybersecurity standard is a standard that defines the minimum level and type of security controls that are required to protect the information assets and systems of an organization, or its suppliers, from the security risks and threats that they may face. A baseline cybersecurity standard should be expressed in business terminology, which means using the language and concepts that are relevant and understandable for the business stakeholders, such as the management, the customers, or the suppliers. Expressing the baseline cybersecurity standard in business terminology helps to communicate the security objectives and criteria, and to ensure the alignment and integration of the security controls with the business needs and goals of the organization and its suppliers. References: [CISSP CBK, Fifth Edition, Chapter 2, page 113]; [100 CISSP Questions, Answers and Explanations, Question 18].
A database server for a financial application is scheduled for production deployment. Which of the following controls will BEST prevent tampering?
Service accounts removal
Data validation
Logging and monitoring
Data sanitization
The control that will best prevent tampering with a database server for a financial application is data validation. Tampering is an attack that modifies or alters the data on a system or network, such as a database server, without authorization and with malicious intent, such as fraud, corruption, or sabotage. Tampering can compromise the confidentiality, integrity, or availability of the data and cause harm to the system, the organization, and its customers. Data validation helps prevent tampering by checking that all input conforms to the expected type, format, range, and business rules before it is accepted or stored, rejecting malformed or malicious values that could alter or corrupt the data.
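A minimal sketch of the data validation control, assuming a toy transaction shape (the field names and limits are illustrative):

```python
from decimal import Decimal, InvalidOperation

def validate_transaction(account_id: str, amount: str) -> Decimal:
    # Whitelist the account identifier format (10 digits, illustrative).
    if not (account_id.isdigit() and len(account_id) == 10):
        raise ValueError("invalid account id")
    # Parse the amount strictly; reject non-numeric values.
    try:
        value = Decimal(amount)
    except InvalidOperation:
        raise ValueError("amount is not a number")
    # Enforce an illustrative business-rule range.
    if not (Decimal("0.01") <= value <= Decimal("1000000")):
        raise ValueError("amount out of range")
    return value

print(validate_transaction("0123456789", "250.00"))  # accepted
try:
    validate_transaction("0123456789", "250.00; DROP TABLE payroll")
except ValueError as exc:
    print("rejected:", exc)
```

Rejecting input before it reaches the database is what blocks tampering attempts such as injected commands or out-of-range values.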
Information security practitioners are in the midst of implementing a new firewall. Which of the following failure methods would BEST prioritize security in the event of failure?
Fail-Closed
Fail-Open
Fail-Safe
Failover
The failure method that would best prioritize security in the event of failure is fail-closed. Fail-closed is a failure mode that blocks or denies all access when a system or a component fails or malfunctions. Fail-closed is also known as fail-secure, as it prevents unauthorized or malicious access and preserves the confidentiality and integrity of the system or the data. Fail-closed is suitable for systems or components that handle sensitive or critical information or operations, and where security is more important than availability. Fail-open is a failure mode that allows or grants all access when a system or a component fails or malfunctions; it preserves the availability and functionality of the system at the expense of security, and is suitable only where availability is more important than security. Fail-safe is a failure mode that prioritizes the safety of people and property over both security and availability; for example, electronic door locks that release during a fire alarm fail safe. Failover is a failure mode that switches or transfers the access to a backup or redundant system or component when the primary system or component fails or malfunctions. Failover is associated with fault tolerance and high availability, as it maintains the continuity and reliability of the system or the data, and is suitable for vital or essential operations where both security and availability matter. References: [CISSP CBK Reference, 5th Edition, Chapter 7, page 377]; [CISSP All-in-One Exam Guide, 8th Edition, Chapter 7, page 357]
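The fail-closed versus fail-open distinction can be sketched as a toy packet filter whose rule store may become unavailable; all names are illustrative:

```python
def lookup_rule(rules, packet):
    if rules is None:
        raise RuntimeError("rule store unavailable")  # simulated failure
    return rules.get(packet["port"], "deny")

def filter_fail_closed(rules, packet) -> str:
    try:
        return lookup_rule(rules, packet)
    except RuntimeError:
        return "deny"    # on failure, block everything: security first

def filter_fail_open(rules, packet) -> str:
    try:
        return lookup_rule(rules, packet)
    except RuntimeError:
        return "allow"   # on failure, let traffic through: availability first

rules = {443: "allow"}
packet = {"port": 443}
print(filter_fail_closed(rules, packet))   # allow (normal operation)
print(filter_fail_closed(None, packet))    # deny  (failure -> secure)
print(filter_fail_open(None, packet))      # allow (failure -> insecure)
```

The design choice lives entirely in the except branch: a firewall protecting sensitive assets should take the deny path when its own machinery fails.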
Which of the following describes the order in which a digital forensic process is usually conducted?
Ascertain legal authority, agree upon examination strategy, conduct examination, and report results
Ascertain legal authority, conduct investigation, report results, and agree upon examination strategy
Agree upon examination strategy, ascertain legal authority, conduct examination, and report results
Agree upon examination strategy, ascertain legal authority, report results, and conduct examination
The digital forensic process is usually conducted in the following order: ascertain legal authority, agree upon examination strategy, conduct examination, and report results. This order ensures that the forensic process is lawful, ethical, and effective. The first step is to ascertain legal authority, which means to verify that the forensic examiner has the proper authorization, consent, or warrant to perform the examination. This step is crucial to avoid violating any laws or privacy rights of the data owner or custodian. The second step is to agree upon examination strategy, which means to define the scope, objectives, and methods of the examination. This step is important to establish the expectations, roles, and responsibilities of the forensic examiner and the client or stakeholder. The third step is to conduct examination, which means to collect, preserve, analyze, and document the digital evidence. This step is essential to perform the forensic tasks in a systematic, accurate, and reliable manner. The fourth step is to report results, which means to present the findings, conclusions, and recommendations of the examination. This step is necessary to communicate the forensic outcomes in a clear, concise, and understandable way. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Security Assessment and Testing, p. 545-546. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 6: Security Assessment and Testing, p. 688-689.
Which of the following is the MOST common cause of system or security failures?
Lack of system documentation
Lack of physical security controls
Lack of change control
Lack of logging and monitoring
The most common cause of system or security failures is lack of change control. Change control is a process that ensures that any changes to the system or the environment are authorized, documented, tested, and approved before implementation. Change control helps to prevent errors, conflicts, inconsistencies, and vulnerabilities that may arise from unauthorized or uncontrolled changes. Lack of change control can result in system instability, performance degradation, functionality loss, security breaches, or compliance violations. Lack of system documentation, lack of physical security controls, and lack of logging and monitoring are also potential causes of system or security failures, but they are not as common or as critical as lack of change control. References: CISSP CBK Reference, 5th Edition, Chapter 3, page 145; CISSP All-in-One Exam Guide, 8th Edition, Chapter 3, page 113
A company hired an external vendor to perform a penetration test of a new payroll system. The company's internal test team had already performed an in-depth application and security test of the system and determined that it met security requirements. However, the external vendor uncovered significant security weaknesses where sensitive personal data was being sent unencrypted to the tax processing systems. What is the MOST likely cause of the security issues?
Failure to perform interface testing
Failure to perform negative testing
Inadequate performance testing
Inadequate application level testing
The most likely cause of the security issues is the failure to perform interface testing. Interface testing is a type of testing that verifies the functionality and security of the interactions and communications between different components or systems. Interface testing can detect and prevent errors, defects, or vulnerabilities that may occur due to the integration or interoperability of the components or systems. In this scenario, the company’s internal test team had performed an in-depth application and security test of the system, but they had failed to test the interface between the payroll system and the tax processing systems. This resulted in the external vendor uncovering significant security weaknesses where sensitive personal data was being sent unencrypted to the tax processing systems. Failure to perform negative testing, inadequate performance testing, or inadequate application level testing are not the most likely causes of the security issues, as they are not directly related to the interface between the payroll system and the tax processing systems. Negative testing is a type of testing that verifies the behavior and security of the system when invalid or unexpected inputs or conditions are given. Performance testing is a type of testing that measures the speed, scalability, reliability, or availability of the system under different workloads or scenarios. Application level testing is a type of testing that verifies the functionality and security of the application as a whole, rather than its individual components or systems. References: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 21: Software Development Security, page 2009.
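An interface test for the scenario above might stub the transport and inspect the bytes actually crossing the boundary, which is exactly where the unencrypted-data flaw would surface. The components and the XOR stand-in "cipher" below are purely illustrative (a real system would use TLS or an authenticated cipher):

```python
import json

class CapturingTransport:
    # Test double that records what the payroll system actually sends.
    def __init__(self):
        self.sent = []
    def send(self, payload: bytes) -> None:
        self.sent.append(payload)

def submit_to_tax_system(transport, record: dict, encrypt) -> None:
    # Serialize the record and push it across the interface.
    payload = json.dumps(record).encode()
    transport.send(encrypt(payload))

def xor_cipher(data: bytes, key: int = 0x5A) -> bytes:
    # Toy stand-in for encryption, kept only so the sketch is self-contained.
    return bytes(b ^ key for b in data)

transport = CapturingTransport()
record = {"name": "alice", "ssn": "123-45-6789"}
submit_to_tax_system(transport, record, xor_cipher)

# Interface test: the sensitive value must not appear in cleartext on the wire.
print(b"123-45-6789" in transport.sent[0])   # False
```

Application-level tests that only exercise the payroll system in isolation would never observe these bytes, which is why the internal team missed the flaw.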
Which of the following is the BEST method a security practitioner can use to ensure that systems and sub-system gracefully handle invalid input?
Negative testing
Integration testing
Unit testing
Acceptance testing
Negative testing is a method of software testing that involves providing invalid, unexpected, or erroneous input to the system or sub-system and verifying that it can handle it gracefully, without crashing, freezing, or producing incorrect results. Negative testing helps to identify the boundary conditions, error handling, and exception handling of the system or sub-system, and to ensure its robustness, reliability, and security. Negative testing is the best method among the given options to ensure that systems and sub-systems gracefully handle invalid input. Integration testing is a method of software testing that involves combining two or more components or modules of the system and verifying that they work together as expected. Integration testing helps to identify the interface, compatibility, and communication issues between the components or modules, and to ensure their functionality, performance, and quality. Integration testing does not focus on how the system or sub-system handles invalid input, but rather on how it interacts with other parts of the system. Unit testing is a method of software testing that involves testing each individual component or module of the system in isolation and verifying that it performs its intended function. Unit testing helps to identify the logic, syntax, and functionality errors of the component or module, and to ensure its correctness, completeness, and efficiency. Unit testing does not focus on how the system or sub-system handles invalid input, but rather on how it performs its own function. Acceptance testing is a method of software testing that involves testing the system or sub-system by the end users or customers and verifying that it meets their requirements and expectations. Acceptance testing helps to identify the usability, suitability, and satisfaction issues of the system or sub-system, and to ensure its acceptance, delivery, and deployment. 
Acceptance testing does not focus on how the system or sub-system handles invalid input, but rather on how it satisfies the user or customer needs. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, p. 823-824. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 8: Software Development Security, p. 1004-1005.
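Negative testing can be sketched with an illustrative input parser: each invalid or boundary input must produce a controlled error rather than a crash or a silent success.

```python
def parse_port(value: str) -> int:
    # Function under test: convert a string to a valid TCP/UDP port number.
    try:
        port = int(value)
    except (TypeError, ValueError):
        raise ValueError(f"not an integer: {value!r}")
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

# Positive case, for contrast.
assert parse_port("8080") == 8080

# Negative cases: invalid, boundary, and hostile inputs must all raise
# ValueError, never an unhandled exception or a wrong result.
for bad in ["", "abc", "-1", "0", "65536", "80; rm -rf /"]:
    try:
        parse_port(bad)
    except ValueError:
        pass
    else:
        raise AssertionError(f"invalid input accepted: {bad!r}")
print("all negative cases handled")
```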
Which of the following is the MOST important first step in preparing for a security audit?
Identify team members.
Define the scope.
Notify system administrators.
Collect evidence.
The scope of a security audit defines the objectives, criteria, boundaries, and responsibilities of the audit process. It is the most important first step in preparing for a security audit, as it helps to establish the expectations, requirements, and limitations of the audit. Without a clear and agreed-upon scope, the audit may not achieve its intended goals, or it may exceed its budget, time, or resources. Identifying team members, notifying system administrators, and collecting evidence are all important steps in a security audit, but they should be done after the scope is defined.
What is the PRIMARY benefit of analyzing the partition layout of a hard disk volume when performing forensic analysis?
Sectors which are not assigned to a partition may contain data that was purposely hidden.
Volume address information for the hard disk may have been modified.
Partition tables which are not completely utilized may contain data that was purposely hidden.
Physical address information for the hard disk may have been modified.
The primary benefit of analyzing the partition layout of a hard disk volume when performing forensic analysis is to find data that was purposely hidden in unused or unallocated space. A partition is a logical division of a hard disk volume that can contain a file system, an operating system, or other data. A partition table is a data structure that stores information about the partitions, such as their size, location, type, and status. By analyzing the partition table, a forensic examiner can identify the partitions that are active, inactive, hidden, or deleted, and recover data from them. Sometimes, malicious users or attackers may hide data in space that is not completely utilized, such as slack space, free space, or unpartitioned space, to avoid detection or deletion. By analyzing the partition layout, a forensic examiner can discover and extract such data and use it as evidence.
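The partition-gap analysis described above can be sketched with a simplified MBR parser. The standard MBR layout is assumed: four 16-byte entries starting at byte 446, each with a type byte and little-endian start-LBA and sector-count fields, and a 512-byte sector size.

```python
import struct

def read_partitions(mbr: bytes):
    # Walk the four MBR partition-table entries.
    parts = []
    for i in range(4):
        entry = mbr[446 + 16 * i: 446 + 16 * (i + 1)]
        ptype = entry[4]
        start_lba, num_sectors = struct.unpack("<II", entry[8:16])
        if ptype != 0:                       # type 0x00 = unused slot
            parts.append((start_lba, num_sectors))
    return sorted(parts)

def unallocated_gaps(mbr: bytes, total_sectors: int):
    # Sectors before the first partition, between partitions, or after the
    # last one are never touched by any file system and can hide data.
    gaps, cursor = [], 1                     # sector 0 is the MBR itself
    for start, length in read_partitions(mbr):
        if start > cursor:
            gaps.append((cursor, start - 1))
        cursor = start + length
    if cursor < total_sectors:
        gaps.append((cursor, total_sectors - 1))
    return gaps

# Synthetic example: one partition at LBA 2048, 10000 sectors long, on a
# 20480-sector image, leaving hidden space both before and after it.
mbr = bytearray(512)
mbr[446 + 4] = 0x83                          # Linux partition type
mbr[446 + 8:446 + 16] = struct.pack("<II", 2048, 10000)
print(unallocated_gaps(bytes(mbr), 20480))   # [(1, 2047), (12048, 20479)]
```

A forensic tool would then image those sector ranges and carve them for hidden content.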
Which of the following actions should be undertaken prior to deciding on a physical baseline Protection Profile (PP)?
Check the technical design.
Conduct a site survey.
Categorize assets.
Choose a suitable location.
Conducting a site survey is the action that should be undertaken prior to deciding on a physical baseline Protection Profile (PP). A PP is a document that defines the security requirements and objectives for a system or a product, and that can be used as a basis for evaluation, testing, or certification. A physical baseline PP is a type of PP that focuses on the physical security aspects of a system or a product, such as the locks, doors, windows, fences, cameras, alarms, or sensors. Conducting a site survey is a process that involves inspecting, measuring, and documenting the physical characteristics and conditions of a site, such as the layout, dimensions, access points, environmental factors, or potential threats. Conducting a site survey can help to determine the appropriate physical security requirements and objectives for a system or a product, and to select the suitable physical security controls and measures to meet those requirements and objectives. The other options are not the actions that should be undertaken prior to deciding on a physical baseline PP, as they either do not relate to the physical security aspects, or do not involve inspecting, measuring, or documenting the site. References: CISSP Exam Outline, Domain 3: Security Architecture and Engineering, 3.4 Implement and manage physical security, 3.4.1.1 Site and facility design considerations.
Which of the following protocols will allow the encrypted transfer of content on the Internet?
Server Message Block (SMB)
Secure copy
Hypertext Transfer Protocol (HTTP)
Remote copy
Secure copy (SCP) is a protocol that allows the encrypted transfer of content on the Internet. SCP uses Secure Shell (SSH) to provide authentication and encryption for the data transfer. SCP can be used to copy files between local and remote hosts, or between two remote hosts.
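As a sketch, SCP is typically invoked as a command-line tool; the snippet below builds such an invocation from Python. The user, host, and paths are hypothetical placeholders.

```python
import subprocess

def build_scp_command(local_path: str, user: str, host: str, remote_path: str):
    """Build an scp invocation; the transfer itself is encrypted by SSH.

    -P sets the SSH port; the host and paths used below are illustrative only.
    """
    return ["scp", "-P", "22", local_path, f"{user}@{host}:{remote_path}"]

cmd = build_scp_command("report.pdf", "alice", "files.example.com", "/incoming/")
# subprocess.run(cmd, check=True)  # uncomment to run for real (requires SSH access)
```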
An organization contracts with a consultant to perform a System and Organization Controls (SOC) 2 audit on their internal security controls. An auditor documents a finding related to an Application Programming Interface (API) performing an action that is not aligned with the scope or objective of the system. Which trust service principle would be MOST applicable in this situation?
Processing Integrity
Availability
Confidentiality
Security
Processing integrity is one of the five trust service principles that are used to evaluate the security controls of a service organization in a SOC 2 audit. Processing integrity refers to the completeness, validity, accuracy, timeliness, and authorization of the system’s processing of data and transactions. An API that performs an action that is not aligned with the scope or objective of the system violates the processing integrity principle, because it may compromise the quality, reliability, and consistency of the system’s output. The other trust service principles are availability, confidentiality, security, and privacy. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 51; 2024 Pass4itsure CISSP Dumps, Question 9.
A malicious user gains access to unprotected directories on a web server. Which of the following is MOST likely the cause for this information disclosure?
Security misconfiguration
Cross-site request forgery (CSRF)
Structured Query Language injection (SQLi)
Broken authentication management
The most likely cause for the information disclosure is security misconfiguration. Security misconfiguration is a type of vulnerability that occurs when a web server or an application is not properly configured or secured, and exposes sensitive or unnecessary information or functionality to unauthorized or malicious users. Security misconfiguration can result in information disclosure, as it can allow a malicious user to gain access to unprotected directories, files, or databases on a web server, and to view, modify, or steal the data stored or transmitted by the web server or the application. Cross-site request forgery (CSRF), SQL injection (SQLi), or broken authentication management are not the most likely causes for the information disclosure, as they are not directly related to the configuration or the security of the web server or the application. CSRF is a type of attack that exploits the trust between a web browser and a web server, and forces the web browser to perform an unwanted or malicious action on behalf of the web server, such as transferring funds, changing passwords, or updating profiles. SQLi is a type of attack that exploits the vulnerability in the input validation or the database query of a web-based application, and injects malicious SQL statements into the application, such as retrieving, modifying, or deleting the data from the database. Broken authentication management is a type of vulnerability that occurs when a web-based application does not properly implement or protect the authentication or session management mechanisms, such as passwords, tokens, or cookies, and allows a malicious user to compromise or impersonate the identity or the session of a legitimate user. References: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 21: Software Development Security, page 2021.
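A common form of this misconfiguration is a web server with directory indexing left enabled. As an illustrative heuristic (the marker strings below are assumptions based on typical Apache/nginx auto-index pages, not an exhaustive check), a scanner might flag responses that look like auto-generated directory listings:

```python
def looks_like_directory_listing(html: str) -> bool:
    """Heuristic: auto-index pages from common web servers carry
    telltale markers such as an 'Index of /...' title."""
    markers = ("<title>Index of /", "Parent Directory", "Directory listing for")
    return any(m in html for m in markers)
```

In practice this check would be paired with an HTTP fetch of each candidate path; a positive match indicates the server is exposing directory contents that should be disabled in its configuration.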
For the purpose of classification, which of the following is used to divide trust domain and trust boundaries?
Network architecture
Integrity
Identity Management (IdM)
Confidentiality management
Network architecture is the factor that is used to divide trust domain and trust boundaries for the purpose of classification. A trust domain is a logical grouping of systems or networks that share a common security policy and trust level. A trust boundary is the border or the interface between two trust domains that have different security policies or trust levels. For the purpose of classification, trust domains and trust boundaries are used to define the scope and the level of protection for the information that is transmitted or stored within or across the domains. Network architecture is the factor that determines how the trust domains and trust boundaries are established and maintained, as it defines the physical and logical layout, the connectivity, the topology, the devices, and the protocols of the network. Network architecture can affect the security and the performance of the network, and influence the design and the implementation of the security controls and the encryption mechanisms for the information that flows through the network. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Communication and Network Security, page 143. CISSP Testking ISC Exam Questions, Question 10.
What method could be used to prevent passive attacks against secure voice communications between an organization and its vendor?
Encryption in transit
Configure a virtual private network (VPN)
Configure a dedicated connection
Encryption at rest
Encryption in transit is a method that could be used to prevent passive attacks against secure voice communications between an organization and its vendor. Encryption in transit is a technique that encrypts the data while it is being transmitted over a network, such as the Internet, a phone line, or a wireless connection. Encryption in transit can protect the data from eavesdropping, interception, or modification by unauthorized parties, such as passive attackers who monitor the network traffic. Encryption in transit can be achieved by using protocols such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), Secure Shell (SSH), or Internet Protocol Security (IPsec), which provide encryption, authentication, and integrity for the data. The other options are not methods that could be used to prevent passive attacks against secure voice communications, as they either do not encrypt the data, do not apply to voice communications, or do not address the passive attacks. References: CISSP - Certified Information Systems Security Professional, Domain 4. Communication and Network Security, 4.2 Secure network components, 4.2.1 Establish secure communication channels, 4.2.1.1 Cryptography used to maintain communication security; CISSP Exam Outline, Domain 4. Communication and Network Security, 4.2 Secure network components, 4.2.1 Establish secure communication channels, 4.2.1.1 Cryptography used to maintain communication security
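For IP-based traffic, encryption in transit is commonly implemented with TLS. A minimal sketch using Python's standard library follows; note that real-time voice is more often protected with SRTP or DTLS, so treat this as a general illustration of the encryption-in-transit principle rather than a voice-specific recipe. The hostname is a placeholder.

```python
import socket
import ssl

def open_tls_connection(host: str, port: int = 443):
    """Wrap a TCP socket in TLS so all traffic is encrypted in transit.

    create_default_context() enables certificate validation and hostname
    checking, which defeats passive eavesdropping on the wire and
    simple man-in-the-middle attempts.
    """
    context = ssl.create_default_context()
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)
```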
The personal laptop of an organization executive is stolen from the office, complete with personnel and project records. Which of the following should be done FIRST to mitigate future occurrences?
Encrypt disks on personal laptops.
Issue cable locks for use on personal laptops.
Create policies addressing critical information on personal laptops.
Monitor personal laptops for critical information.
The first step to mitigate future occurrences of personal laptops being stolen from the office with critical information is to create policies addressing this issue. Policies are high-level statements that define the goals and objectives of an organization and provide guidance for decision making. Policies can specify the roles and responsibilities of the users, the acceptable use of personal laptops, the security controls and requirements for protecting critical information, the reporting and response procedures in case of theft or loss, and the sanctions for non-compliance. The other options are possible actions to implement the policies, but they are not the first step. Encrypting disks, issuing cable locks, and monitoring personal laptops are examples of technical, physical, and administrative controls, respectively, that can help prevent or detect unauthorized access to critical information on personal laptops. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 1: Security and Risk Management, p. 51-52; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, p. 29-30.
Which of the following is the MOST significant key management problem due to the number of keys created?
Keys are more difficult to provision and store
Storage of the keys require increased security
Exponential growth when using asymmetric keys
Exponential growth when using symmetric keys
Key management is the process of generating, distributing, storing, using, and destroying cryptographic keys. One of the most significant key management problems is the number of keys created, which affects the complexity, scalability, and security of the cryptographic system. The number of keys created depends on the type of encryption used: symmetric or asymmetric. Symmetric encryption uses the same key for encryption and decryption, while asymmetric encryption uses a pair of keys: a public key for encryption and a private key for decryption. When using symmetric encryption, the number of keys created grows rapidly with the number of users or devices involved. For example, if there are n users or devices that need to communicate securely with each other, then each user or device needs to have a unique key for each other user or device. Therefore, the total number of keys needed is n(n-1)/2, which grows on the order of n² — the rapid, non-linear growth that exam materials describe as exponential. This means that as the number of users or devices increases, the number of keys needed increases dramatically, making it more difficult to provision, store, and protect the keys. When using asymmetric encryption, the number of keys created grows linearly with the number of users or devices involved. For example, if there are n users or devices that need to communicate securely with each other, then each user or device needs to have only one pair of keys: a public key and a private key. Therefore, the total number of keys needed is 2n, which is a linear function of n. This means that as the number of users or devices increases, the number of keys needed increases proportionally, making it easier to provision, store, and protect the keys. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Communication and Network Security, p. 287-288. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 433-434.
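The key-count arithmetic above can be checked directly:

```python
def symmetric_keys(n: int) -> int:
    """Unique pairwise keys: one per pair of parties, n(n-1)/2."""
    return n * (n - 1) // 2

def asymmetric_keys(n: int) -> int:
    """One key pair (public + private) per party: 2n keys in total."""
    return 2 * n

for n in (10, 100, 1000):
    print(n, symmetric_keys(n), asymmetric_keys(n))
# 10 45 20
# 100 4950 200
# 1000 499500 2000
```

At 1000 parties the symmetric scheme already needs almost half a million distinct keys, versus two thousand for the asymmetric scheme, which is why key distribution is the classic weakness of purely symmetric systems.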
What is the PRIMARY benefit of relying on Security Content Automation Protocol (SCAP)?
Save security costs for the organization.
Improve vulnerability assessment capabilities.
Standardize specifications between software security products.
Achieve organizational compliance with international standards.
The primary benefit of relying on Security Content Automation Protocol (SCAP) is to standardize specifications between software security products. SCAP is a suite of specifications that enable the automated and interoperable assessment, measurement, and reporting of the security posture and compliance of systems and networks. SCAP consists of six components: Common Platform Enumeration (CPE), Common Configuration Enumeration (CCE), Common Vulnerabilities and Exposures (CVE), Common Vulnerability Scoring System (CVSS), Extensible Configuration Checklist Description Format (XCCDF), and Open Vulnerability and Assessment Language (OVAL). SCAP enables different software security products, such as scanners, analyzers, or auditors, to use a common language and format to describe and exchange information about the security configuration, vulnerabilities, and risks of systems and networks. This can improve the accuracy, consistency, and efficiency of the security assessment and remediation processes, and reduce the complexity and cost of managing multiple security products. Saving security costs for the organization, improving vulnerability assessment capabilities, and achieving organizational compliance with international standards are also benefits of relying on SCAP, but they are not the primary benefit. Saving security costs for the organization is a benefit of relying on SCAP, as it can reduce the need for manual and labor-intensive security tasks, and increase the reuse and integration of security data and tools. Improving vulnerability assessment capabilities is a benefit of relying on SCAP, as it can provide more comprehensive, timely, and reliable information about the security weaknesses and exposures of systems and networks, and enable more effective and proactive mitigation and response actions. 
Achieving organizational compliance with international standards is a benefit of relying on SCAP, as it can help to demonstrate and verify the alignment of the security policies and practices of the organization with the established benchmarks and baselines, such as the National Institute of Standards and Technology (NIST) Special Publication 800-53 or the International Organization for Standardization (ISO) 27001.
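As a small illustration of the standardized naming SCAP provides, a CPE 2.3 formatted string encodes a product identity in fixed, colon-separated fields. The sketch below splits out the commonly used ones; real CPE parsing must also handle escaped colons, which this deliberately ignores.

```python
def parse_cpe23(cpe: str) -> dict:
    """Split a CPE 2.3 string, e.g. cpe:2.3:a:openssl:openssl:1.0.2k:...

    Fields (simplified): part (a=application, o=OS, h=hardware),
    vendor, product, version. Escaped colons (\\:) are not handled.
    """
    fields = cpe.split(":")
    assert fields[0] == "cpe" and fields[1] == "2.3", "not a CPE 2.3 string"
    return {"part": fields[2], "vendor": fields[3],
            "product": fields[4], "version": fields[5]}
```

Because every SCAP-aware scanner names platforms the same way, a CVE feed and a configuration checker can agree on exactly which product and version a finding applies to.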
What should be the FIRST action to protect the chain of evidence when a desktop computer is involved?
Take the computer to a forensic lab
Make a copy of the hard drive
Start documenting
Turn off the computer
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved. A chain of evidence, also known as a chain of custody, is a process that documents and preserves the integrity and authenticity of the evidence collected from a crime scene, such as a desktop computer. A chain of evidence should include information such as who collected the evidence, when and where it was collected, how it was labeled and stored, and who has handled or transferred it at each step.
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved, because it can ensure that the original hard drive is not altered, damaged, or destroyed during the forensic analysis, and that the copy can be used as a reliable and admissible source of evidence. Making a copy of the hard drive should also involve using a write blocker, which is a device or a software that prevents any modification or deletion of the data on the hard drive, and generating a hash value, which is a unique and fixed identifier that can verify the integrity and consistency of the data on the hard drive.
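The hash-verification step described above can be sketched as follows: hashing both the original and the copy in chunks and comparing digests demonstrates the copy is bit-for-bit identical. The file paths shown are hypothetical placeholders.

```python
import hashlib

def hash_image(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 digest of a disk image, read in 1 MiB chunks
    so arbitrarily large images can be hashed in constant memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Matching digests show the original and the forensic copy are identical:
# assert hash_image("/evidence/original.dd") == hash_image("/evidence/copy.dd")
```

Recording the digest in the chain-of-custody documentation lets anyone later re-hash the copy and confirm it has not been altered since acquisition.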
The other options are not the first actions to protect the chain of evidence when a desktop computer is involved, but rather actions that should be done after or along with making a copy of the hard drive. Taking the computer to a forensic lab is an action that should be done after making a copy of the hard drive, because it can ensure that the computer is transported and stored in a secure and controlled environment, and that the forensic analysis is conducted by qualified and authorized personnel. Starting documenting is an action that should be done along with making a copy of the hard drive, because it can ensure that the chain of evidence is maintained and recorded throughout the forensic process, and that the evidence can be traced and verified. Turning off the computer is an action that should be done after making a copy of the hard drive, because it can ensure that the computer is powered down and disconnected from any network or device, and that the computer is protected from any further damage or tampering.
A continuous information security-monitoring program can BEST reduce risk through which of the following?
Collecting security events and correlating them to identify anomalies
Facilitating system-wide visibility into the activities of critical user accounts
Encompassing people, process, and technology
Logging both scheduled and unscheduled system changes
A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology. A continuous information security monitoring program is a process that involves maintaining the ongoing awareness of the security status, events, and activities of a system or network, by collecting, analyzing, and reporting the security data and information, using various methods and tools. A continuous information security monitoring program can provide several benefits, such as early detection of threats and anomalies, timely response to incidents, and ongoing assurance that security controls remain effective.
A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology, because it can ensure that the program is holistic and comprehensive, and that it covers all the aspects and elements of the system or network security. People, process, and technology are the three pillars of a continuous information security monitoring program: people are the staff who perform and oversee the monitoring, process comprises the procedures and workflows that govern it, and technology includes the tools and systems that collect, analyze, and report the security data.
The other options are not the best ways to reduce risk through a continuous information security monitoring program, but rather specific or partial ways that can contribute to the risk reduction. Collecting security events and correlating them to identify anomalies is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only focuses on one aspect of the security data and information, and it does not address the other aspects, such as the security objectives and requirements, the security controls and measures, and the security feedback and improvement. Facilitating system-wide visibility into the activities of critical user accounts is a partial way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only covers one element of the system or network security, and it does not cover the other elements, such as the security threats and vulnerabilities, the security incidents and impacts, and the security response and remediation. Logging both scheduled and unscheduled system changes is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only focuses on one type of the security events and activities, and it does not focus on the other types, such as the security alerts and notifications, the security analysis and correlation, and the security reporting and documentation.
When is a Business Continuity Plan (BCP) considered to be valid?
When it has been validated by the Business Continuity (BC) manager
When it has been validated by the board of directors
When it has been validated by all threat scenarios
When it has been validated by realistic exercises
A Business Continuity Plan (BCP) is considered to be valid when it has been validated by realistic exercises. A BCP is a part of a BCP/DRP that focuses on ensuring the continuous operation of the organization’s critical business functions and processes during and after a disruption or disaster. A BCP should include various components, such as a business impact analysis, recovery strategies and procedures, roles and responsibilities, and testing, training, and maintenance activities.
A BCP is considered to be valid when it has been validated by realistic exercises, because it can ensure that the BCP is practical and applicable, and that it can achieve the desired outcomes and objectives in a real-life scenario. Realistic exercises are a form of testing and training that involves performing and practicing the BCP with the relevant stakeholders, using simulated or hypothetical scenarios, such as a fire drill, a power outage, or a cyberattack. Realistic exercises can provide several benefits, such as revealing gaps or errors in the plan, familiarizing staff with their roles and responsibilities, and verifying that the recovery objectives can actually be met.
The other options are not the criteria for considering a BCP to be valid, but rather the steps or parties that are involved in developing or approving a BCP. When it has been validated by the Business Continuity (BC) manager is not a criterion for considering a BCP to be valid, but rather a step that is involved in developing a BCP. The BC manager is the person who is responsible for overseeing and coordinating the BCP activities and processes, such as the business impact analysis, the recovery strategies, the BCP document, the testing, training, and exercises, and the maintenance and review. The BC manager can validate the BCP by reviewing and verifying the BCP components and outcomes, and ensuring that they meet the BCP standards and objectives. However, the validation by the BC manager is not enough to consider the BCP to be valid, as it does not test or demonstrate the BCP in a realistic scenario. When it has been validated by the board of directors is not a criterion for considering a BCP to be valid, but rather a party that is involved in approving a BCP. The board of directors is the group of people who are elected by the shareholders to represent their interests and to oversee the strategic direction and governance of the organization. The board of directors can approve the BCP by endorsing and supporting the BCP components and outcomes, and allocating the necessary resources and funds for the BCP. However, the approval by the board of directors is not enough to consider the BCP to be valid, as it does not test or demonstrate the BCP in a realistic scenario. When it has been validated by all threat scenarios is not a criterion for considering a BCP to be valid, but rather an unrealistic or impossible expectation for validating a BCP. 
A threat scenario is a description or a simulation of a possible or potential disruption or disaster that might affect the organization’s critical business functions and processes, such as a natural hazard, a human error, or a technical failure. A threat scenario can be used to test and validate the BCP by measuring and evaluating the BCP’s performance and effectiveness in responding and recovering from the disruption or disaster. However, it is not possible or feasible to validate the BCP by all threat scenarios, as there are too many or unknown threat scenarios that might occur, and some threat scenarios might be too severe or complex to simulate or test. Therefore, the BCP should be validated by the most likely or relevant threat scenarios, and not by all threat scenarios.
What would be the MOST cost effective solution for a Disaster Recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours?
Warm site
Hot site
Mirror site
Cold site
A warm site is the most cost effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours. A DR site is a backup facility that can be used to restore the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster. A DR site can have different levels of readiness and functionality, depending on the organization’s recovery objectives and budget. The main types of DR sites are hot sites, which are fully equipped and kept ready to take over operations almost immediately; warm sites, which have hardware and network connectivity in place but require some configuration and data restoration before use; cold sites, which provide only basic facilities and require substantial time and effort to equip; and mirror sites, which fully duplicate the primary site and stay synchronized in real time.
A warm site is the most cost effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it can provide a balance between the recovery time and the recovery cost. A warm site can enable the organization to resume its critical functions and operations within a reasonable time frame, without spending too much on the DR site maintenance and operation. A warm site can also provide some flexibility and scalability for the organization to adjust its recovery strategies and resources according to its needs and priorities.
The other options are not the most cost effective solutions for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, but rather solutions that are either too costly or too slow for the organization’s recovery objectives and budget. A hot site is a solution that is too costly for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to invest a lot of money on the DR site equipment, software, and services, and to pay for the ongoing operational and maintenance costs. A hot site may be more suitable for the organization’s systems that cannot be unavailable for more than a few hours or minutes, or that have very high availability and performance requirements. A mirror site is a solution that is too costly for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to duplicate its entire primary site, with the same hardware, software, data, and applications, and to keep them online and synchronized at all times. A mirror site may be more suitable for the organization’s systems that cannot afford any downtime or data loss, or that have very strict compliance and regulatory requirements. A cold site is a solution that is too slow for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to spend a lot of time and effort on the DR site installation, configuration, and restoration, and to rely on other sources of backup data and applications. A cold site may be more suitable for the organization’s systems that can be unavailable for more than a few days or weeks, or that have very low criticality and priority.
Which of the following is a PRIMARY advantage of using a third-party identity service?
Consolidation of multiple providers
Directory synchronization
Web based logon
Automated account management
Consolidation of multiple providers is the primary advantage of using a third-party identity service. A third-party identity service is a service that provides identity and access management (IAM) functions, such as authentication, authorization, and federation, for multiple applications or systems, using a single identity provider (IdP). A third-party identity service can offer various benefits, such as single sign-on (SSO) across applications, centralized administration of identities and access policies, and reduced overhead for managing user credentials.
Consolidation of multiple providers is the primary advantage of using a third-party identity service, because it can simplify and streamline the IAM architecture and processes, by reducing the number of IdPs and IAM systems that are involved in managing the identities and access for multiple applications or systems. Consolidation of multiple providers can also help to avoid the issues or risks that might arise from having multiple IdPs and IAM systems, such as the inconsistency, redundancy, or conflict of the IAM policies and controls, or the inefficiency, vulnerability, or disruption of the IAM functions.
The other options are not the primary advantages of using a third-party identity service, but rather secondary or specific advantages for different aspects or scenarios of using a third-party identity service. Directory synchronization is an advantage of using a third-party identity service, but it is more relevant for the scenario where the organization has an existing directory service, such as LDAP or Active Directory, that stores and manages the user accounts and attributes, and wants to synchronize them with the third-party identity service, to enable the SSO or federation for the users. Web based logon is an advantage of using a third-party identity service, but it is more relevant for the aspect where the third-party identity service uses a web-based protocol, such as SAML or OAuth, to facilitate the SSO or federation for the users, by redirecting them to a web-based logon page, where they can enter their credentials or consent. Automated account management is an advantage of using a third-party identity service, but it is more relevant for the aspect where the third-party identity service provides the IAM functions, such as provisioning, deprovisioning, or updating, for the user accounts and access rights, using an automated or self-service mechanism, such as SCIM or JIT.
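The assertion-based flow an IdP provides can be illustrated with a toy HMAC-signed token. This is a deliberate simplification for illustration only: real services exchange SAML assertions or signed JWTs, and real IdPs sign with asymmetric keys rather than a shared secret; every name below is hypothetical.

```python
import hashlib
import hmac

SHARED_KEY = b"idp-sp-shared-secret"  # placeholder; real IdPs use asymmetric signatures

def issue_assertion(user: str, audience: str) -> str:
    """IdP side: bind a user identity to a target service provider and sign it."""
    payload = f"{user}|{audience}"
    sig = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_assertion(token: str, audience: str):
    """SP side: check the signature and the intended audience; return the user,
    or None if the token is forged or meant for a different service."""
    user, aud, sig = token.rsplit("|", 2)
    expected = hmac.new(SHARED_KEY, f"{user}|{aud}".encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected) and aud == audience:
        return user
    return None
```

The audience check mirrors what SAML and JWT validation do: an assertion issued for one service provider must not be accepted by another, even though both trust the same IdP.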
What is the MOST important step during forensic analysis when trying to learn the purpose of an unknown application?
Disable all unnecessary services
Ensure chain of custody
Prepare another backup of the system
Isolate the system from the network
Isolating the system from the network is the most important step during forensic analysis when trying to learn the purpose of an unknown application. An unknown application is an application that is not recognized or authorized by the system or network administrator, and that may have been installed or executed without the user’s knowledge or consent. An unknown application may have various purposes, such as collecting or exfiltrating data, providing remote access to an attacker, or downloading and executing additional malware.
Forensic analysis is a process that involves examining and investigating the system or network for any evidence or traces of the unknown application, such as its origin, nature, behavior, and impact. Forensic analysis can provide several benefits, such as determining what the application actually does, identifying how it was introduced, and supporting any subsequent legal or disciplinary action.
Isolating the system from the network is the most important step during forensic analysis when trying to learn the purpose of an unknown application, because it can ensure that the system is isolated and protected from any external or internal influences or interferences, and that the forensic analysis is conducted in a safe and controlled environment. Isolating the system from the network can also help to prevent the application from communicating with external hosts, spreading to other systems, or receiving commands that could alter or destroy evidence.
The other options are not the most important steps during forensic analysis when trying to learn the purpose of an unknown application, but rather steps that should be done after or along with isolating the system from the network. Disabling all unnecessary services is a step that should be done after isolating the system from the network, because it can ensure that the system is optimized and simplified for the forensic analysis, and that the system resources and functions are not consumed or affected by any irrelevant or redundant services. Ensuring chain of custody is a step that should be done along with isolating the system from the network, because it can ensure that the integrity and authenticity of the evidence are maintained and documented throughout the forensic process, and that the evidence can be traced and verified. Preparing another backup of the system is a step that should be done after isolating the system from the network, because it can ensure that the system data and configuration are preserved and replicated for the forensic analysis, and that the system can be restored and recovered in case of any damage or loss.
Which of the following is the FIRST step in the incident response process?
Determine the cause of the incident
Disconnect the system involved from the network
Isolate and contain the system involved
Investigate all symptoms to confirm the incident
Investigating all symptoms to confirm the incident is the first step in the incident response process. An incident is an event that violates or threatens the security, availability, integrity, or confidentiality of the IT systems or data. An incident response is a process that involves detecting, analyzing, containing, eradicating, recovering, and learning from an incident, using various methods and tools. An incident response can provide several benefits, such as limiting the damage and impact of the incident, restoring normal operations more quickly, and preventing similar incidents in the future.
Investigating all symptoms to confirm the incident is the first step in the incident response process, because it can ensure that the incident is verified and validated, and that the incident response is initiated and escalated. A symptom is a sign or an indication that an incident may have occurred or is occurring, such as an alert, a log, or a report. Investigating all symptoms to confirm the incident involves collecting and analyzing the relevant data and information from various sources, such as the IT systems, the network, the users, or the external parties, and determining whether an incident has actually happened or is happening, and how serious or urgent it is. Investigating all symptoms to confirm the incident can also help to rule out false positives, assess the scope and severity of the incident, and gather the initial evidence needed for the later phases of the response.
The other options are not the first steps in the incident response process, but rather steps that should be done after or along with investigating all symptoms to confirm the incident. Determining the cause of the incident is a step that should be done after investigating all symptoms to confirm the incident, because it can ensure that the root cause and source of the incident are identified and analyzed, and that the incident response is directed and focused. Determining the cause of the incident involves examining and testing the affected IT systems and data, and tracing and tracking the origin and path of the incident, using various techniques and tools, such as forensics, malware analysis, or reverse engineering. Determining the cause of the incident can also help to select the appropriate containment and eradication measures and to prevent the incident from recurring.
Disconnecting the system involved from the network is a step that should be done along with investigating all symptoms to confirm the incident, because it isolates and protects the system from external or internal interference, so that the incident response is conducted in a safe and controlled environment. Disconnection also limits the spread of the incident to other networked systems, though it should be coordinated with the investigation so that volatile evidence and ongoing attacker activity are not lost prematurely.
Isolating and containing the system involved is a step that should be done after investigating all symptoms to confirm the incident, because it ensures that the incident is confined and restricted while the response continues. Isolating and containing the system involves applying and enforcing appropriate security measures and controls to limit or stop the activity and impact of the incident on the IT systems and data, such as firewall rules, access policies, or encryption keys. Containment limits further damage while eradication and recovery proceed.
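As a rough illustration of the ordering above, the confirmation step can be sketched in Python. The corroboration rule (require two independent sources), the source names, and the severity values here are invented for this sketch, not taken from any incident response standard:

```python
from dataclasses import dataclass, field

# Hypothetical severity weights per symptom source -- illustrative only.
SEVERITY_BY_SOURCE = {"ids_alert": 2, "user_report": 1, "av_log": 2, "siem_correlation": 3}

@dataclass
class Symptom:
    source: str    # e.g. "ids_alert", "user_report"
    details: str

@dataclass
class Incident:
    symptoms: list = field(default_factory=list)
    confirmed: bool = False
    severity: int = 0

def investigate(symptoms):
    """First step: examine every symptom before declaring an incident."""
    incident = Incident(symptoms=list(symptoms))
    # Corroborate: confirm only when two or more independent sources agree.
    sources = {s.source for s in symptoms}
    incident.confirmed = len(sources) >= 2
    if incident.confirmed:
        incident.severity = max(SEVERITY_BY_SOURCE.get(s, 1) for s in sources)
    return incident
```

Only after `investigate` confirms an incident would the later steps (determine the cause, isolate and contain) be triggered, which mirrors the ordering the explanation describes.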
A Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide which of the following?
Guaranteed recovery of all business functions
Minimization of the need for decision making during a crisis
Insurance against litigation following a disaster
Protection from loss of organization resources
Minimization of the need for decision making during a crisis is the main benefit that a Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide. A BCP/DRP is a set of policies, procedures, and resources that enable an organization to continue or resume its critical functions and operations in the event of a disruption or disaster. A BCP/DRP provides benefits such as predefined procedures, clearly assigned roles and responsibilities, and prioritized recovery of critical functions.
Minimization of the need for decision making during a crisis is the main benefit that a BCP/DRP will provide, because it ensures that the organization and its staff have clear and consistent guidance and direction on how to respond and act during a disruption or disaster, avoiding any confusion, uncertainty, or inconsistency that might worsen the situation or impact. A BCP/DRP can also help to reduce the stress and pressure on the organization and its staff during a crisis, and increase their confidence and competence in executing the plans.
The other options are not the benefits that a BCP/DRP will provide, but rather unrealistic or incorrect expectations or outcomes of a BCP/DRP. Guaranteed recovery of all business functions is not a benefit that a BCP/DRP will provide, because it is not possible or feasible to recover all business functions after a disruption or disaster, especially if the disruption or disaster is severe or prolonged. A BCP/DRP can only prioritize and recover the most critical or essential business functions, and may have to suspend or terminate the less critical or non-essential business functions. Insurance against litigation following a disaster is not a benefit that a BCP/DRP will provide, because it is not a guarantee or protection that the organization will not face any legal or regulatory consequences or liabilities after a disruption or disaster, especially if the disruption or disaster is caused by the organization’s negligence or misconduct. A BCP/DRP can only help to mitigate or reduce the legal or regulatory risks, and may have to comply with or report to the relevant authorities or parties. Protection from loss of organization resources is not a benefit that a BCP/DRP will provide, because it is not a prevention or avoidance of any damage or destruction of the organization’s assets or resources during a disruption or disaster, especially if the disruption or disaster is physical or natural. A BCP/DRP can only help to restore or replace the lost or damaged assets or resources, and may have to incur some costs or losses.
Which of the following types of business continuity tests includes assessment of resilience to internal and external risks without endangering live operations?
Walkthrough
Simulation
Parallel
White box
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations. Business continuity is the ability of an organization to maintain or resume its critical functions and operations in the event of a disruption or disaster. Business continuity testing is the process of evaluating and validating the effectiveness and readiness of the business continuity plan (BCP) and the disaster recovery plan (DRP) through various methods and scenarios. Business continuity testing validates planning assumptions, trains staff, and reveals gaps before a real disruption occurs.
There are different types of business continuity tests, depending on the scope, purpose, and complexity of the test. Common types include the walkthrough, simulation, parallel, and full-interruption tests, which differ in how realistically they exercise the plan and how much risk they pose to live operations.
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations, because it can simulate various types of risks, such as natural, human, or technical, and assess how the organization and its systems can cope and recover from them, without actually causing any harm or disruption to the live operations. Simulation can also help to identify and mitigate any potential risks that might affect the live operations, and to improve the resilience and preparedness of the organization and its systems.
The other options are not business continuity tests that assess resilience to internal and external risks without endangering live operations. A walkthrough is a review and discussion of the BCP and DRP, without any actual testing or practice, so it does not assess resilience at all. A parallel test does not endanger live operations, but rather maintains them while activating and operating the alternate site or system, instead of simulating risk scenarios. White box is a software testing technique that examines the internal structure of code; it is not a type of business continuity test. A full-interruption test, by contrast, does endanger live operations, because it shuts them down and transfers them to the alternate site or system.
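The comparison of test types can be summarized in a small lookup table. The attribute names and classifications below are an illustrative sketch of the distinctions described above, not terminology from any standard:

```python
# Illustrative classification of business continuity test types.
# "exercises_plan" = does the test actually practice the plan,
# "risk_to_live_ops" = how much it endangers production operations.
BC_TESTS = {
    "walkthrough":       {"exercises_plan": False, "risk_to_live_ops": "none"},
    "simulation":        {"exercises_plan": True,  "risk_to_live_ops": "none"},
    "parallel":          {"exercises_plan": True,  "risk_to_live_ops": "low"},
    "full_interruption": {"exercises_plan": True,  "risk_to_live_ops": "high"},
}

def safe_resilience_tests():
    """Tests that exercise the plan without endangering live operations."""
    return [name for name, t in BC_TESTS.items()
            if t["exercises_plan"] and t["risk_to_live_ops"] == "none"]
```

Under this classification, only the simulation both exercises the plan and poses no risk to live operations, matching the answer above.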
With what frequency should monitoring of a control occur when implementing Information Security Continuous Monitoring (ISCM) solutions?
Continuously without exception for all security controls
Before and after each change of the control
At a rate concurrent with the volatility of the security control
Only during system implementation and decommissioning
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing Information Security Continuous Monitoring (ISCM) solutions. ISCM is a process that involves maintaining ongoing awareness of the security status, events, and activities of a system or network, by collecting, analyzing, and reporting security data and information, using various methods and tools. ISCM provides ongoing visibility into the organization's security posture and supports timely, risk-based decisions.
A security control is a measure or mechanism that is implemented to protect the system or network from the security threats or risks, by preventing, detecting, or correcting the security incidents or impacts. A security control can have various types, such as administrative, technical, or physical, and various attributes, such as preventive, detective, or corrective. A security control can also have different levels of volatility, which is the degree or frequency of change or variation of the security control, due to various factors, such as the security requirements, the threat landscape, or the system or network environment.
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing ISCM solutions, because it can ensure that the ISCM solutions can capture and reflect the current and accurate state and performance of the security control, and can identify and report any issues or risks that might affect the security control. Monitoring of a control at a rate concurrent with the volatility of the security control can also help to optimize the ISCM resources and efforts, by allocating them according to the priority and urgency of the security control.
The other options are not the correct frequencies for monitoring of a control when implementing ISCM solutions, but rather incorrect or unrealistic frequencies that might cause problems or inefficiencies for the ISCM solutions.

Continuously without exception for all security controls is an incorrect frequency, because it is not feasible or necessary to monitor all security controls at the same constant rate, regardless of their volatility or importance. Continuously monitoring all security controls without exception might cause the ISCM solutions to consume excessive or wasteful resources and efforts, and might overwhelm or overload the ISCM solutions with too much or irrelevant data and information.

Before and after each change of the control is an incorrect frequency, because it is not sufficient or timely to monitor the security control only when there is a change, and not during normal operation. Monitoring the security control only before and after each change might cause the ISCM solutions to miss the security status, events, and activities that occur between changes, and might delay the detection of and response to security issues or incidents affecting the control.

Only during system implementation and decommissioning is an incorrect frequency, because it is not appropriate or effective to monitor the security control only during the initial or final stages of the system or network lifecycle, and not during the operational or maintenance stages.
Monitoring the security control only during system implementation and decommissioning might cause the ISCM solutions to neglect or overlook the security status, events, and activities that occur during the regular or ongoing operation of the system or network, and might prevent or limit the ISCM solutions from improving and optimizing the security control.
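The "rate concurrent with volatility" idea can be sketched as a simple scheduling function. The 1-10 volatility scale, the weekly baseline, and the control names below are assumptions made for illustration, not values taken from NIST SP 800-137:

```python
# Sketch: derive a monitoring interval from a control's volatility score.
# Scale assumption: 1 = very stable control, 10 = highly volatile control.
def monitoring_interval_hours(volatility: int, base_hours: float = 168.0) -> float:
    """Higher volatility -> shorter interval -> more frequent monitoring."""
    if not 1 <= volatility <= 10:
        raise ValueError("volatility must be between 1 and 10")
    return base_hours / volatility

# Hypothetical controls: a frequently changed firewall ruleset vs. a
# rarely changed physical access control.
controls = {"firewall_ruleset": 8, "physical_badge_readers": 2}
schedule = {name: monitoring_interval_hours(v) for name, v in controls.items()}
```

The volatile firewall ruleset ends up checked roughly daily (every 21 hours), while the stable badge readers are checked every few days, which is the resource-allocation point the explanation makes.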
An organization is found lacking the ability to properly establish performance indicators for its Web hosting solution during an audit. What would be the MOST probable cause?
Absence of a Business Intelligence (BI) solution
Inadequate cost modeling
Improper deployment of the Service-Oriented Architecture (SOA)
Insufficient Service Level Agreement (SLA)
Insufficient Service Level Agreement (SLA) would be the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit. A Web hosting solution is a service that provides the infrastructure, resources, and tools for hosting and maintaining a website or a web application on the internet, offering benefits such as scalability, availability, and professionally managed infrastructure.
A Service Level Agreement (SLA) is a contract or an agreement that defines the expectations, responsibilities, and obligations of the parties involved in a service, such as the service provider and the service consumer. An SLA typically includes components such as service level indicators and objectives, reporting requirements, and remedies or penalties for non-performance.
Insufficient SLA would be the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it could mean that the SLA does not include or specify the appropriate service level indicators or objectives for the Web hosting solution, or that the SLA does not provide or enforce the adequate service level reporting or penalties for the Web hosting solution. This could affect the ability of the organization to measure and assess the Web hosting solution quality, performance, and availability, and to identify and address any issues or risks in the Web hosting solution.
The other options are not the most probable causes for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, but rather the factors that could affect or improve the Web hosting solution in other ways. Absence of a Business Intelligence (BI) solution is a factor that could affect the ability of the organization to analyze and utilize the data and information from the Web hosting solution, such as the web traffic, behavior, or conversion. A BI solution is a system that involves the collection, integration, processing, and presentation of the data and information from various sources, such as the Web hosting solution, to support the decision making and planning of the organization. However, absence of a BI solution is not the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it does not affect the definition or specification of the performance indicators for the Web hosting solution, but rather the analysis or usage of the performance indicators for the Web hosting solution. Inadequate cost modeling is a factor that could affect the ability of the organization to estimate and optimize the cost and value of the Web hosting solution, such as the web hosting fees, maintenance costs, or return on investment. A cost model is a tool or a method that helps the organization to calculate and compare the cost and value of the Web hosting solution, and to identify and implement the best or most efficient Web hosting solution. 
However, inadequate cost modeling is not the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it does not affect the definition or specification of the performance indicators, but rather the estimation or optimization of the cost and value of the Web hosting solution. Improper deployment of the Service-Oriented Architecture (SOA) is a factor that could affect the ability of the organization to design and develop the Web hosting solution, such as the web services, components, or interfaces. An SOA is a software architecture that involves the modularization, standardization, and integration of the software components or services that provide the functionality or logic of the Web hosting solution, offering benefits such as reusability, flexibility, and easier integration of services.
However, improper deployment of the SOA is not the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it does not affect the definition or specification of the performance indicators for the Web hosting solution, but rather the design or development of the Web hosting solution.
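A minimal sketch of how SLA-defined indicators make performance measurable: the target values and metric names below are hypothetical, chosen only to illustrate comparing measured values against agreed service levels. Without an SLA specifying such targets, there is nothing to measure against, which is the audit finding described above:

```python
# Hypothetical SLA targets for a web hosting service.
SLA_TARGETS = {"uptime_pct": 99.9, "avg_response_ms": 300, "ticket_resolution_hrs": 24}

def evaluate_slo(measured: dict) -> dict:
    """Compare measured values against SLA targets; True means the target is met."""
    return {
        # Uptime must meet or exceed the target.
        "uptime_pct": measured["uptime_pct"] >= SLA_TARGETS["uptime_pct"],
        # Response time and resolution time must not exceed their targets.
        "avg_response_ms": measured["avg_response_ms"] <= SLA_TARGETS["avg_response_ms"],
        "ticket_resolution_hrs": measured["ticket_resolution_hrs"] <= SLA_TARGETS["ticket_resolution_hrs"],
    }
```

An auditor could then report exactly which indicators are met and which are breached, instead of finding that no performance indicators exist.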
Recovery strategies of a Disaster Recovery Plan (DRP) MUST be aligned with which of the following?
Hardware and software compatibility issues
Applications’ criticality and downtime tolerance
Budget constraints and requirements
Cost/benefit analysis and business objectives
Recovery strategies of a Disaster Recovery Plan (DRP) must be aligned with the cost/benefit analysis and business objectives. A DRP is the part of a BCP/DRP that focuses on restoring the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster. A DRP should include components such as recovery strategies, recovery objectives (RTO and RPO), roles and responsibilities, and restoration and testing procedures.
Recovery strategies of a DRP must be aligned with the cost/benefit analysis and business objectives, because this ensures that the DRP is feasible and suitable, and that it can achieve the desired outcomes in a cost-effective and efficient manner. A cost/benefit analysis is a technique that compares the costs and benefits of different recovery strategies, and determines the optimal one that provides the best value for money. A business objective is a goal or target that the organization wants to achieve through its IT systems and infrastructure, such as increasing productivity, profitability, or customer satisfaction. Aligning the recovery strategy with both ensures that recovery spending is proportional to the value of the functions being protected.
The other options are not the factors that the recovery strategies of a DRP must be aligned with, but rather factors that should be considered or addressed when developing or implementing the recovery strategies of a DRP. Hardware and software compatibility issues are factors that should be considered when developing the recovery strategies of a DRP, because they can affect the functionality and interoperability of the IT systems and infrastructure, and may require additional resources or adjustments to resolve them. Applications’ criticality and downtime tolerance are factors that should be addressed when implementing the recovery strategies of a DRP, because they can determine the priority and urgency of the recovery for different applications, and may require different levels of recovery objectives and resources. Budget constraints and requirements are factors that should be considered when developing the recovery strategies of a DRP, because they can limit the availability and affordability of the IT resources and funds for the recovery, and may require trade-offs or compromises to balance them.
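The cost/benefit alignment can be illustrated by choosing the cheapest recovery strategy that still satisfies the business's downtime tolerance. The site types are standard DR options, but the costs and RTO figures below are hypothetical values invented for this sketch:

```python
# Hypothetical recovery strategies with annual cost and achievable RTO.
STRATEGIES = [
    {"name": "cold_site", "annual_cost": 20_000,  "rto_hours": 72},
    {"name": "warm_site", "annual_cost": 80_000,  "rto_hours": 12},
    {"name": "hot_site",  "annual_cost": 250_000, "rto_hours": 1},
]

def select_strategy(max_tolerable_downtime_hours: float):
    """Cost/benefit alignment: the cheapest option whose RTO still meets
    the business objective (maximum tolerable downtime)."""
    feasible = [s for s in STRATEGIES
                if s["rto_hours"] <= max_tolerable_downtime_hours]
    return min(feasible, key=lambda s: s["annual_cost"]) if feasible else None
```

For a business that can tolerate 24 hours of downtime, the warm site wins over the hot site despite the hot site's faster recovery: the extra cost buys no benefit the business objective requires, which is exactly the alignment the answer describes.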
What is the PRIMARY reason for implementing change management?
Certify and approve releases to the environment
Provide version rollbacks for system changes
Ensure that all applications are approved
Ensure accountability for changes to the environment
Ensuring accountability for changes to the environment is the primary reason for implementing change management. Change management is a process that ensures that any changes to the system or network environment, such as the hardware, software, configuration, or documentation, are planned, approved, implemented, and documented in a controlled and consistent manner. Change management reduces the risk of unplanned outages, creates a documented history of changes, and establishes clear accountability for each change.
Ensuring accountability for changes to the environment is the primary reason for implementing change management, because it can ensure that the changes are authorized, justified, and traceable, and that the parties involved in the changes are responsible and accountable for their actions and results. Accountability can also help to deter or detect any unauthorized or malicious changes that might compromise the system or network environment.
The other options are not the primary reasons for implementing change management, but rather secondary or specific reasons for different aspects or phases of change management. Certifying and approving releases to the environment is a reason for implementing change management, but it is more relevant for the approval phase of change management, which is the phase that involves reviewing and validating the changes and their impacts, and granting or denying the permission to proceed with the changes. Providing version rollbacks for system changes is a reason for implementing change management, but it is more relevant for the implementation phase of change management, which is the phase that involves executing and monitoring the changes and their effects, and providing the backup and recovery options for the changes. Ensuring that all applications are approved is a reason for implementing change management, but it is more relevant for the application changes, which are the changes that affect the software components or services that provide the functionality or logic of the system or network environment.
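A minimal sketch of the accountability a change record provides, with hypothetical field names: every change is traceable to a requester and an approver, and unapproved changes are blocked from implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ChangeRecord:
    change_id: str
    description: str
    requested_by: str
    approved_by: Optional[str] = None
    implemented_at: Optional[datetime] = None

    def approve(self, approver: str) -> None:
        # Record who authorized the change -- the accountability trail.
        self.approved_by = approver

    def implement(self) -> None:
        # Enforce the process: no approval, no implementation.
        if self.approved_by is None:
            raise PermissionError("change must be approved before implementation")
        self.implemented_at = datetime.now(timezone.utc)
```

Because the requester, approver, and implementation time are all captured, any change in the environment can be traced back to the responsible parties, which is the accountability the answer identifies as the primary reason for change management.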
TESTED 21 Nov 2024
Copyright © 2014-2024 DumpsTool. All Rights Reserved