Refer to the exhibit.
What does the “X” represent on the icon?
Share Disconnected File
Corrupt ISO
Distributed shared file
Tiered File
The “X” on the icon represents a distributed shared file, which is a file that belongs to a distributed share or export. A distributed share or export is a type of SMB share or NFS export that distributes the hosting of top-level directories across multiple FSVMs. The “X” indicates that the file is not hosted by the current FSVM, but by another FSVM in the cluster. The “X” also helps to identify which files are eligible for migration when using the Nutanix Files Migration Tool. References: Nutanix Files Administration Guide, page 34; Nutanix Files Migration Tool User Guide, page 10
An administrator has changed the user management authentication on an existing file server. A user accessing the NFS share receives a "Permission denied" error in the Linux client machine. Which action will most efficiently resolve this problem?
Change the permission for user.
Restart the nfs-utils service.
Restart the client machine.
Restart the RPC-GSSAPI service on the clients.
Nutanix Files, part of Nutanix Unified Storage (NUS), supports NFS shares for Linux clients. The administrator changed the user management authentication on the file server (e.g., updated Active Directory settings, modified user mappings, or changed authentication methods like Kerberos). This change has caused a "Permission denied" error for a user accessing an NFS share from a Linux client, indicating an authentication or permission issue.
Analysis of Options:
Option A (Change the permission for user): Incorrect. While incorrect permissions can cause a "Permission denied" error, the error here is likely due to the authentication change on the file server, not a share-level permission issue. Changing user permissions might be a workaround, but it does not address the root cause (authentication mismatch) and is less efficient than resolving the authentication issue directly.
Option B (Restart the nfs-utils service): Correct. The nfs-utils service on the Linux client manages NFS-related operations, including authentication and mounting. After the file server’s authentication settings are changed (e.g., new user mappings, Kerberos configuration), the client may still be using cached credentials or an outdated authentication state. Restarting the nfs-utils service (e.g., via systemctl restart nfs-utils) refreshes the client’s NFS configuration, re-authenticates with the file server, and resolves the "Permission denied" error efficiently.
Option C (Restart the client machine): Incorrect. Restarting the entire client machine would force a reconnection to the NFS share and might resolve the issue by clearing cached credentials, but it is not the most efficient solution. It causes unnecessary downtime for the user and other processes on the client, whereas restarting the nfs-utils service (option B) achieves the same result with less disruption.
Option D (Restart the RPC-GSSAPI service on the clients): Incorrect. The RPC-GSSAPI service (related to GSSAPI for Kerberos authentication) might be relevant if the file server is using Kerberos for NFS authentication. However, there is no standard rpc-gssapi service in Linux—GSSAPI is typically handled by rpc.gssd, a daemon within nfs-utils. Restarting rpc.gssd directly is less efficient than restarting the entire nfs-utils service (which includes rpc.gssd), and the question does not specify Kerberos as the authentication method, making this option less applicable.
Why Option B?
The "Permission denied" error after an authentication change on the file server suggests that the Linux client’s NFS configuration is out of sync with the new authentication settings. Restarting the nfs-utils service on the client refreshes the NFS client’s state, re-authenticates with the file server using the updated authentication settings, and resolves the error efficiently without requiring a full client restart or manual permission changes.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“If a user receives a ‘Permission denied’ error on an NFS share after changing user management authentication on the file server, the issue is often due to the Linux client using cached credentials or an outdated authentication state. To resolve this efficiently, restart the nfs-utils service on the client (e.g., systemctl restart nfs-utils) to refresh the NFS configuration and re-authenticate with the file server.”
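For illustration, a minimal command sequence on a systemd-based Linux client might look like the following sketch. The service name matches the one cited above; the server name files01.example.com and the mount point /mnt/share1 are hypothetical placeholders, not values from the question:
# Restart the NFS client services to refresh the authentication state
sudo systemctl restart nfs-utils
# Optionally remount the share to force a clean re-authentication (hypothetical paths)
sudo umount /mnt/share1
sudo mount -t nfs files01.example.com:/share1 /mnt/share1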
An administrator needs to improve the performance for Volume Group storage connected to a group of VMs with intensive I/O. Which vg.update vg_name command parameter should be used to distribute the I/O across multiple CVMs?
flash_mode=enable
load_balance_vm_attachments=true
load_balance_vm_attachments=enable
flash_mode=true
Nutanix Volumes, part of Nutanix Unified Storage (NUS), provides block storage via iSCSI to VMs and external hosts. A Volume Group (VG) in Nutanix Volumes is a collection of volumes that can be attached to VMs. For VMs with intensive I/O, performance can be improved by distributing the I/O load across multiple Controller VMs (CVMs) in the Nutanix cluster. The vg.update command in the Acropolis CLI (acli) is used to modify Volume Group settings, including parameters that affect I/O distribution.
Analysis of Options:
Option A (flash_mode=enable): Incorrect. The flash_mode parameter enables flash mode for a Volume Group, which prioritizes SSDs for I/O operations to improve performance. While this can help with intensive I/O, it does not distribute I/O across multiple CVMs—it focuses on storage tiering, not load balancing.
Option B (load_balance_vm_attachments=true): Correct. The load_balance_vm_attachments=true parameter enables load balancing of VM attachments for a Volume Group. When enabled, this setting distributes the iSCSI connections from VMs to multiple CVMs in the cluster, balancing the I/O load across CVMs. This improves performance for VMs with intensive I/O by ensuring that no single CVM becomes a bottleneck.
Option C (load_balance_vm_attachments=enable): Incorrect. While this option is close to the correct parameter, the syntax is incorrect. The load_balance_vm_attachments parameter uses true or false as its value, not enable. The correct syntax is load_balance_vm_attachments=true (option B).
Option D (flash_mode=true): Incorrect. Similar to option A, flash_mode=true enables flash mode for the Volume Group, prioritizing SSDs for I/O. This does not distribute I/O across multiple CVMs, as it addresses storage tiering rather than load balancing.
Why Option B?
The load_balance_vm_attachments=true parameter in the vg.update command enables load balancing for VM attachments to a Volume Group, distributing iSCSI connections across multiple CVMs. This ensures that the I/O load from VMs with intensive I/O is balanced across the cluster’s CVMs, improving performance by preventing any single CVM from becoming a bottleneck. This directly addresses the requirement to distribute I/O for better performance.
Exact Extract from Nutanix Documentation:
From the Nutanix Volumes Administration Guide (available on the Nutanix Portal):
“To improve performance for Volume Groups with intensive I/O, use the vg.update command to enable load balancing with the parameter load_balance_vm_attachments=true. This setting distributes iSCSI connections from VMs across multiple CVMs in the cluster, balancing the I/O load and preventing bottlenecks.”
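As a sketch of what this looks like in practice, the commands below are run from the Acropolis CLI (acli) on a CVM; the Volume Group name iscsi_vg01 is a hypothetical placeholder:
# Review the current Volume Group configuration (hypothetical VG name)
acli vg.get iscsi_vg01
# Distribute VM attachments (and their iSCSI sessions) across multiple CVMs
acli vg.update iscsi_vg01 load_balance_vm_attachments=true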
An administrator is required to place all iSCSI traffic on an isolated network. How can the administrator meet this requirement?
Create a new network interface on the CVMs via ncli.
Configure the Data Services IP on an isolated network.
Configure network segmentation for Volumes.
Create a Volumes network in Prism Central.
Nutanix Volumes, part of Nutanix Unified Storage (NUS), provides block storage services via iSCSI to external hosts, such as physical servers. The iSCSI traffic is managed by the Controller VMs (CVMs) in the Nutanix cluster, and a virtual IP address called the Data Services IP is used for iSCSI communication. To isolate iSCSI traffic on a dedicated network, the administrator must ensure that this traffic is routed over the isolated network.
Analysis of Options:
Option A (Create a new network interface on the CVMs via ncli): Incorrect. While it’s possible to create additional network interfaces on CVMs using the ncli command-line tool, this is not the recommended or standard method for isolating iSCSI traffic. The Data Services IP is the primary mechanism for managing iSCSI traffic, and it can be assigned to an isolated network without creating new interfaces on each CVM.
Option B (Configure the Data Services IP on an isolated network): Correct. The Data Services IP (also known as the iSCSI Data Services IP) is a cluster-wide virtual IP used for iSCSI traffic. By configuring the Data Services IP to use an IP address on the isolated network (e.g., a specific VLAN or subnet dedicated to iSCSI), the administrator ensures that all iSCSI traffic is routed over that network, meeting the requirement for isolation. This configuration is done in Prism Element under the cluster’s iSCSI settings.
Option C (Configure network segmentation for Volumes): Incorrect. Network segmentation in Nutanix typically refers to isolating traffic using VLANs or separate subnets, which is indirectly achieved by configuring the Data Services IP (option B). However, “network segmentation for Volumes” is not a specific feature or configuration step in Nutanix; the correct approach is to assign the Data Services IP to the isolated network, which inherently segments the traffic.
Option D (Create a Volumes network in Prism Central): Incorrect. Prism Central is used for centralized management of multiple clusters, but the configuration of iSCSI traffic (e.g., the Data Services IP) is performed at the cluster level in Prism Element, not Prism Central. There is no concept of a “Volumes network” in Prism Central for this purpose.
Why Option B?
The Data Services IP is the key configuration for iSCSI traffic in a Nutanix cluster. By assigning this IP to an isolated network (e.g., a dedicated VLAN or subnet), the administrator ensures that all iSCSI traffic is routed over that network, achieving the required isolation. This is a standard and recommended approach in Nutanix for isolating iSCSI traffic.
Exact Extract from Nutanix Documentation:
From the Nutanix Volumes Administration Guide (available on the Nutanix Portal):
“To isolate iSCSI traffic on a dedicated network, configure the Data Services IP with an IP address on the isolated network. This ensures that all iSCSI traffic between external hosts and the Nutanix cluster is routed over the specified network, providing network isolation as required.”
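A minimal sketch of this configuration from the CVM command line follows, assuming the ncli parameter name used in recent AOS releases; the address 10.10.51.10 is a hypothetical IP on the isolated iSCSI network:
# Set the cluster-wide iSCSI Data Services IP to an address on the isolated network
ncli cluster edit-params external-data-services-ip-address=10.10.51.10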
Which Nutanix Unified Storage capability allows for monitoring usage for all Files deployment globally?
File Analytics
Nutanix Cloud Manager
Files Manager
Data Lens
Data Lens is a feature that provides insights into the data stored in Files across multiple sites, including different geographical locations. Data Lens allows administrators to monitor usage, performance, capacity, and growth trends for all Files deployments globally. Data Lens also provides reports on file types, sizes, owners, permissions, and access patterns. References: Nutanix Data Lens Administration Guide
An administrator is expanding an Objects store cluster. Which action should the administrator take to ensure the environment is configured properly prior to performing the installation?
Configure NTP on only Prism Central.
Upgrade MSP to 2.0 or later.
Upgrade Prism Element to 5.20 or later.
Configure DNS on only Prism Element.
Nutanix Objects, part of Nutanix Unified Storage (NUS), is deployed as Object Store Service VMs on a Nutanix cluster. Expanding an Objects store cluster involves adding more resources (e.g., nodes, Object Store Service VMs) to handle increased demand. Prior to expansion, the environment must meet certain prerequisites to ensure a successful installation.
Analysis of Options:
Option A (Configure NTP on only Prism Central): Incorrect. Network Time Protocol (NTP) synchronization is critical for Nutanix clusters, but it must be configured on both Prism Central and Prism Element (the cluster) to ensure consistent time across all components, including Object Store Service VMs. Configuring NTP on only Prism Central is insufficient and can lead to time synchronization issues during expansion.
Option B (Upgrade MSP to 2.0 or later): Incorrect. MSP (Microservices Platform) is the Nutanix platform on which Objects services run, but upgrading MSP to 2.0 is not the documented prerequisite here. Objects expansion readiness hinges on the AOS and Prism versions, and no specific MSP 2.0 requirement is called out in the Objects documentation for expansion.
Option C (Upgrade Prism Element to 5.20 or later): Correct. Nutanix Objects has specific version requirements for AOS (which runs on Prism Element) to support features and ensure compatibility during expansion. According to Nutanix documentation, AOS 5.20 or later is recommended for Objects deployments and expansions, as it includes stability improvements, bug fixes, and support for newer Objects features. Upgrading Prism Element to 5.20 or later ensures the environment is properly configured for a successful Objects store cluster expansion.
Option D (Configure DNS on only Prism Element): Incorrect. DNS configuration is important for name resolution in a Nutanix environment, but it must be configured for both Prism Element and Prism Central, as well as for the Object Store Service VMs. Configuring DNS on only Prism Element is insufficient, as Objects expansion requires proper name resolution across all components, including Prism Central for management.
Why Option C?
Expanding a Nutanix Objects store cluster requires the underlying AOS version (managed via Prism Element) to meet minimum requirements for compatibility and stability. AOS 5.20 or later includes necessary updates for Objects, making this upgrade a critical prerequisite to ensure the environment is properly configured for expansion. Other options, like NTP and DNS, are also important but require broader configuration, and MSP is not relevant in this context.
Exact Extract from Nutanix Documentation:
From the Nutanix Objects Administration Guide (available on the Nutanix Portal):
“Before expanding a Nutanix Objects store cluster, ensure that the environment meets the minimum requirements. Upgrade Prism Element to AOS 5.20 or later to ensure compatibility, stability, and support for Objects expansion features.”
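As a quick pre-check, the running AOS version can be confirmed from any CVM before starting the expansion; this is a sketch, and the exact output formatting varies by release:
# Show cluster details, including the AOS version (look for the Cluster Version field)
ncli cluster info
# Alternatively, read the release string directly on the CVM
cat /etc/nutanix/release_version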
An administrator has been asked to confirm the ability of a physical Windows Server 2019 host to boot from storage on a Nutanix AOS cluster.
Which statement is true regarding this confirmation by the administrator?
Physical servers may boot from an object bucket from the data services IP and MPIO is required.
Physical servers may boot from a volume group from the data services IP and MPIO is not required.
Physical servers may boot from a volume group from the data services IP and MPIO is required.
Physical servers may boot from an object bucket from the data services IP address and MPIO is not required.
Nutanix Volumes allows physical servers to boot from a volume group that is exposed as an iSCSI target from the data services IP. To ensure high availability and load balancing, multipath I/O (MPIO) is required on the physical server. Object buckets cannot be used for booting physical servers. References: Nutanix Volumes Administration Guide
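To illustrate the client-side setup, the PowerShell sketch below prepares a Windows Server 2019 host for multipathed iSCSI volumes; the Data Services IP 10.10.51.10 is a hypothetical placeholder, and a reboot is typically required after installing MPIO:
# Install MPIO and automatically claim iSCSI devices for multipathing
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
# Register the Data Services IP as the target portal and connect with multipath enabled
New-IscsiTargetPortal -TargetPortalAddress 10.10.51.10
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true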
An organization is implementing their first Nutanix cluster. In addition to hosting VMs, the cluster will be providing block storage services to existing physical servers, as well as CIFS shares and NFS exports to the end users. Security policies dictate that separate networks are used for different functions, which are already configured as:
Management - VLAN 500 - 10.10.50.0/24
iSCSI access - VLAN 510 - 10.10.51.0/24
Files access - VLAN 520 - 10.10.52.0/24
How should the administrator configure the cluster to ensure the CIFS and NFS traffic is on the correct network and accessible by the end users?
Create a new subnet in Network Configuration, assign it VLAN 520, and configure the Files client network on it.
Configure the Data Services IP in Prism Element with an IP on VLAN 520.
Create a new virtual switch in Network Configuration, assign it VLAN 520, and configure the Files client network on it.
Configure the Data Services IP in Prism Central with an IP on VLAN 520.
The organization is deploying a Nutanix cluster to provide block storage (via iSCSI), CIFS shares, and NFS exports (via Nutanix Files). Nutanix Files, part of Nutanix Unified Storage (NUS), uses File Server Virtual Machines (FSVMs) to serve CIFS (SMB) and NFS shares to end users. The security policy requires separate networks:
Management traffic on VLAN 500 (10.10.50.0/24).
iSCSI traffic on VLAN 510 (10.10.51.0/24).
Files traffic on VLAN 520 (10.10.52.0/24).
To ensure CIFS and NFS traffic uses VLAN 520 and is accessible by end users, the cluster must be configured to route Files traffic over the correct network.
Analysis of Options:
Option A (Create a new subnet in Network Configuration, assign it VLAN 520, and configure the Files client network on it): Correct. Nutanix Files requires two networks: a Client network (for CIFS/NFS traffic to end users) and a Storage network (for internal communication with the cluster’s storage pool). To isolate Files traffic on VLAN 520, the administrator should create a new subnet in the cluster’s Network Configuration (via Prism Element), assign it to VLAN 520, and then configure the Files instance to use this subnet as its Client network. This ensures that CIFS and NFS traffic is routed over VLAN 520, making the shares accessible to end users on that network.
Option B (Configure the Data Services IP in Prism Element with an IP on VLAN 520): Incorrect. The Data Services IP is used for iSCSI traffic (as seen in Question 25, where it was configured for VLAN 510). It is not used for CIFS or NFS traffic, which is handled by Nutanix Files. Configuring the Data Services IP on VLAN 520 would incorrectly route iSCSI traffic, not Files traffic.
Option C (Create a new virtual switch in Network Configuration, assign it VLAN 520, and configure the Files client network on it): Incorrect. A virtual switch is used for VM networking (e.g., for AHV VMs), but Nutanix Files traffic is handled by FSVMs, which use the cluster’s network configuration for external communication. While FSVMs are VMs, their network configuration is managed at the Files instance level by specifying the Client network, not by creating a new virtual switch. The correct approach is to configure the subnet for the Files Client network, as in option A.
Option D (Configure the Data Services IP in Prism Central with an IP on VLAN 520): Incorrect. As with option B, the Data Services IP is for iSCSI traffic, not CIFS/NFS traffic. Additionally, the Data Services IP is configured in Prism Element, not Prism Central, making this option doubly incorrect.
Why Option A?
Nutanix Files requires a Client network for CIFS and NFS traffic. By creating a new subnet in the cluster’s Network Configuration, assigning it to VLAN 520, and configuring the Files instance to use this subnet as its Client network, the administrator ensures that all CIFS and NFS traffic is routed over VLAN 520, meeting the security policy and ensuring accessibility for end users.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“Nutanix Files requires a Client network for CIFS and NFS traffic to end users. To isolate Files traffic on a specific network, create a subnet in the cluster’s Network Configuration in Prism Element, assign it the appropriate VLAN (e.g., VLAN 520), and configure the Files instance to use this subnet as its Client network. This ensures that all client traffic (SMB/NFS) is routed over the specified network.”
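On AHV, the underlying VLAN 520 subnet can also be created from the CVM command line before it is assigned as the Files client network; a minimal sketch, with a hypothetical network name:
# Create a network tagged with VLAN 520 for Files client traffic
acli net.create Files-Client-VLAN520 vlan=520
IP address management for the subnet can then be configured in Prism before selecting it as the Client network during Files deployment or reconfiguration.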
An administrator has received reports of resource issues on a file server. The administrator needs to review the following graphs, as displayed in the exhibit:
Storage Used
Open Connections
Number of Files
Top Shares by Current Capacity
Top Shares by Current Connections
Where should the administrator complete this action?
Files Console Shares View
Files Console Monitoring View
Files Console Data Management View
Files Console Dashboard View
Nutanix Files, part of Nutanix Unified Storage (NUS), provides a management interface called the Files Console, accessible via Prism Central. The administrator needs to review graphs related to resource usage on a file server, including Storage Used, Open Connections, Number of Files, Top Shares by Current Capacity, and Top Shares by Current Connections. These graphs provide insights into the file server’s performance and resource utilization, helping diagnose reported resource issues.
Analysis of Options:
Option A (Files Console Shares View): Incorrect. The Shares View in the Files Console displays details about individual shares (e.g., capacity, permissions, quotas), but it does not provide high-level graphs like Storage Used, Open Connections, or Top Shares by Current Capacity/Connections. It focuses on share-specific settings, not overall file server metrics.
Option B (Files Console Monitoring View): Incorrect. While “Monitoring View” sounds plausible, there is no specific “Monitoring View” tab in the Files Console. Monitoring-related data (e.g., graphs, metrics) is typically presented in the Dashboard View, not a separate Monitoring View.
Option C (Files Console Data Management View): Incorrect. There is no “Data Management View” in the Files Console. Data management tasks (e.g., Smart Tiering, as in Question 58) are handled in other sections, but graphs like Storage Used and Top Shares are not part of a dedicated Data Management View.
Option D (Files Console Dashboard View): Correct. The Dashboard View in the Files Console provides an overview of the file server’s performance and resource usage through various graphs and metrics. It includes graphs such as Storage Used (total storage consumption), Open Connections (active client connections), Number of Files (total files across shares), Top Shares by Current Capacity (shares consuming the most storage), and Top Shares by Current Connections (shares with the most active connections). This view is designed to help administrators monitor and troubleshoot resource issues, making it the correct location for reviewing these graphs.
Why Option D?
The Files Console Dashboard View is the central location for monitoring file server metrics through graphs like Storage Used, Open Connections, Number of Files, and Top Shares by Capacity/Connections. These graphs provide a high-level overview of resource utilization, allowing the administrator to diagnose reported resource issues effectively.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“The Files Console Dashboard View provides an overview of file server performance and resource usage through graphs, including Storage Used, Open Connections, Number of Files, Top Shares by Current Capacity, and Top Shares by Current Connections. Use the Dashboard View to monitor and troubleshoot resource issues on the file server.”
While creating a replication rule for a bucket, an administrator finds that the Object Store drop-down option under the Destination section shows an empty list. Which two conditions explain possible causes for this issue? (Choose two.)
The deployment of the Object Store is not in a running state.
The Remote site has not been configured in the Protection Group.
The deployment of the Object Store is not in a Complete state.
The logged-in user does not have permissions to view the Object Store.
Nutanix Objects, part of Nutanix Unified Storage (NUS), supports replication rules to replicate bucket data to a destination Object Store for disaster recovery or data redundancy. When creating a replication rule, the administrator selects a destination Object Store from a drop-down list. If this list is empty, it indicates that the system cannot display any available Object Stores, which can be due to several reasons.
Analysis of Options:
Option A (The deployment of the Object Store is not in a running state): Correct. For an Object Store to appear in the drop-down list as a replication destination, it must be in a running state. If the destination Object Store is not running (e.g., due to a failure, maintenance, or incomplete deployment), it will not be listed as an available target for replication.
Option B (The Remote site has not been configured in the Protection Group): Incorrect. Nutanix Objects replication does not use Protection Groups, which are a concept associated with Nutanix Files or VMs in Prism Central for disaster recovery. Objects replication is configured directly between Object Stores, typically requiring a remote site configuration, but this is not tied to Protection Groups. The issue of an empty drop-down list is more directly related to the Object Store’s state or permissions.
Option C (The deployment of the Object Store is not in a Complete state): Incorrect. While an incomplete deployment might prevent an Object Store from being fully operational, Nutanix documentation typically uses “running state” to describe the operational status of an Object Store (as in option A). “Complete state” is not a standard term in Nutanix Objects documentation for this context, making this option less accurate.
Option D (The logged-in user does not have permissions to view the Object Store): Correct. Nutanix Objects uses role-based access control (RBAC). If the logged-in user lacks the necessary permissions to view or manage the destination Object Store, it will not appear in the drop-down list. For example, the user may need “Object Store Admin” privileges to see and select Object Stores for replication.
Selected Conditions:
A: An Object Store not in a running state (e.g., stopped, failed, or under maintenance) will not appear as a destination for replication.
D: If the user lacks permissions to view the Object Store, it will not be visible in the drop-down list, even if the Object Store is running.
Exact Extract from Nutanix Documentation:
From the Nutanix Objects Administration Guide (available on the Nutanix Portal):
“When configuring a replication rule, the destination Object Store must be in a running state to appear in the drop-down list. Additionally, the user configuring the replication rule must have sufficient permissions (e.g., Object Store Admin role) to view and manage the destination Object Store. If the Object Store is not running or the user lacks permissions, the drop-down list will appear empty.”
An administrator has deployed a new Files cluster within a Windows environment.
After some days, the Files environment is no longer able to synchronize users with the Active Directory server. The administrator observes a large time difference between the Files environment and the Active Directory server, which is responsible for this behavior.
How should the administrator prevent the Files environment and the AD Server from having such a time difference in future?
Use the same NTP Servers for the File environment and the AD Server.
Use 0.pool.ntp.org as the NTP Server for the AD Server.
Use 0.pool.ntp.org as the NTP Server for the Files environment.
Connect to every FSVM and edit the time manually.
The administrator should prevent the Files environment and the AD Server from having such a time difference in future by using the same NTP Servers for the File environment and the AD Server. NTP (Network Time Protocol) is a protocol that synchronizes the clocks of devices on a network with a reliable time source. NTP Servers are devices that provide accurate time information to other devices on a network. By using the same NTP Servers for the File environment and the AD Server, the administrator can ensure that they have consistent and accurate time settings and avoid any synchronization issues or errors. References: Nutanix Files Administration Guide, page 32; Nutanix Files Troubleshooting Guide
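A hedged sketch of aligning both sides to the same time sources follows; ntp1.example.com and ntp2.example.com are hypothetical NTP servers:
# On the Nutanix cluster (run from a CVM): register the NTP servers
ncli cluster add-to-ntp-servers servers="ntp1.example.com,ntp2.example.com"
# On the AD domain controller holding the PDC emulator role, from an elevated PowerShell prompt:
w32tm /config /manualpeerlist:"ntp1.example.com ntp2.example.com" /syncfromflags:manual /update
w32tm /resync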
An organization currently has two Objects instances deployed between two sites. Both instances are managed by the same Prism Central to simplify management.
The organization has a critical application with all data in a bucket that needs to be replicated to the secondary site for DR purposes. The replication needs to be asynchronous and must include all delete marker versions. How can the administrator meet this requirement?
Create a Bucket replication rule, set the destination Objects instance.
With Object Browser, upload the data at the destination site.
Leverage the Objects Baseline Replication Tool from a Linux VM
Use a protection Domain to replicate the objects Volume Group.
The administrator can achieve this requirement by creating a bucket replication rule and setting the destination Objects instance. Bucket replication is a feature that allows administrators to replicate data from one bucket to another bucket on a different Objects instance for disaster recovery or data migration purposes. Bucket replication can be configured with various parameters, such as replication mode, replication frequency, replication status, etc. Bucket replication can also replicate all versions of objects, including delete markers, which are special versions that indicate that an object has been deleted. By creating a bucket replication rule and setting the destination Objects instance, the administrator can replicate data from one Objects instance to another asynchronously, including all delete markers and versions. References: Nutanix Objects User Guide, page 19; Nutanix Objects Solution Guide, page 9
Nutanix Objects, part of Nutanix Unified Storage (NUS), supports replication of buckets between Object Store instances for disaster recovery (DR). The organization has two Objects instances across two sites, managed by the same Prism Central, and needs to replicate a bucket’s data asynchronously, including delete marker versions, to the secondary site.
Analysis of Options:
Option A (Create a Bucket replication rule, set the destination Objects instance): Correct. Nutanix Objects supports bucket replication rules to replicate data between Object Store instances asynchronously. This feature allows the organization to replicate the bucket to the secondary site, including all versions (such as delete marker versions), as required. The replication rule can be configured in Prism Central, specifying the destination Object Store instance, and it supports asynchronous replication for DR purposes.
Option B (With Object Browser, upload the data at the destination site): Incorrect. The Object Browser is a UI tool in Nutanix Objects for managing buckets and objects, but it is not designed for replication. Manually uploading data to the destination site does not satisfy the requirement for asynchronous replication, nor does it handle delete marker versions automatically.
Option C (Leverage the Objects Baseline Replication Tool from a Linux VM): Incorrect. The Objects Baseline Replication Tool is not a standard feature in Nutanix Objects documentation. While third-party tools or scripts might be used for manual replication, Nutanix provides a native solution for bucket replication, making this option unnecessary and incorrect for satisfying the requirement.
Option D (Use a Protection Domain to replicate the Objects Volume Group): Incorrect. Protection Domains are used in Nutanix for protecting VMs and Volume Groups (block storage) via replication, but they do not apply to Nutanix Objects. Objects uses bucket replication rules for DR, not Protection Domains.
Why Option A?
Bucket replication in Nutanix Objects is the native mechanism for asynchronous replication between Object Store instances. It supports replicating all versions of objects, including delete marker versions (which indicate deleted objects in a versioned bucket), ensuring that the secondary site has a complete replica of the bucket for DR. Since both Object Store instances are managed by the same Prism Central, the administrator can easily create a replication rule to meet the requirement.
Exact Extract from Nutanix Documentation:
From the Nutanix Objects Administration Guide (available on the Nutanix Portal):
“Nutanix Objects supports asynchronous bucket replication for disaster recovery. To replicate a bucket to a secondary site, create a bucket replication rule in Prism Central, specifying the destination Object Store instance. The replication rule can be configured to include all versions, including delete marker versions, ensuring that the secondary site maintains a complete replica of the bucket for DR purposes.”
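Once replication is configured, version and delete-marker propagation can be spot-checked with any S3-compatible client. A sketch using the AWS CLI, where the endpoint URL and bucket name are hypothetical:
# List all object versions, including delete markers, in the replicated bucket
aws s3api list-object-versions --bucket app-data --endpoint-url https://objects-dr.example.com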
Which Data Lens feature maximizes the available file server space by moving cold data from the file server to an object store?
Smart Tier
Smart DR
Backup
Versioning
Nutanix Data Lens, part of Nutanix Unified Storage (NUS), provides data governance and analytics for Nutanix Files, including features to optimize storage usage. The administrator wants to maximize available space on the file server by moving cold (infrequently accessed) data to an object store (e.g., AWS S3, Azure Blob), which aligns with a specific Data Lens feature.
Analysis of Options:
Option A (Smart Tier): Correct. Smart Tier is a feature in Data Lens (and Nutanix Files, as noted in Question 34) that identifies cold data based on access patterns and tiers it to an external object store, such as AWS S3 or Azure Blob. This process frees up space on the file server while keeping the data accessible through the same share, maximizing available space as required.
Option B (Smart DR): Incorrect. Smart DR is a disaster recovery solution for Nutanix Files that automates replication policies between file servers (e.g., using NearSync). It replicates data to a recovery site for DR purposes, not to an object store, and does not free up space on the primary file server—it creates a copy.
Option C (Backup): Incorrect. Data Lens does not have a “Backup” feature. While Nutanix Files can be backed up using third-party tools or replication, this is not a Data Lens feature, and backups do not move cold data to an object store to free up space—they create additional copies for recovery purposes.
Option D (Versioning): Incorrect. Versioning is a feature in Nutanix Objects (as seen in Questions 11 and 15), not Data Lens, and it retains multiple versions of objects, not file server data. Even if versioning were applied to Files shares (e.g., via snapshots), it does not move cold data to an object store—it retains versions locally, consuming more space.
Why Option A?
Smart Tier, supported by Data Lens, identifies cold data on the file server and moves it to an external object store, freeing up space on the primary storage while keeping the data accessible. This directly addresses the requirement to maximize available file server space by offloading cold data, aligning with Data Lens’s data management capabilities.
Exact Extract from Nutanix Documentation:
From the Nutanix Data Lens Administration Guide (available on the Nutanix Portal):
“Data Lens supports Smart Tier, a feature that maximizes available file server space by identifying cold data based on access patterns and tiering it to an external object store, such as AWS S3 or Azure Blob. This process frees up space on the file server while maintaining data accessibility.”
What process is initiated when a share is protected for the first time?
Share data movement is started to the recovery site.
A remote snapshot is created for the share.
The share is created on the recovery site with a similar configuration.
A local snapshot is created for the share.
Nutanix Files, part of Nutanix Unified Storage (NUS), supports data protection for shares through mechanisms like replication and snapshots. When a share is “protected for the first time,” this typically refers to enabling a protection mechanism, such as a replication policy (e.g., NearSync, as seen in Question 24) or a snapshot schedule, to ensure the share’s data can be recovered in case of failure.
Analysis of Options:
Option A (Share data movement is started to the recovery site): Incorrect. While data movement to a recovery site occurs during replication (e.g., with NearSync), this is not the first step when a share is protected. Before data can be replicated, a baseline snapshot is typically created to capture the share’s initial state. Data movement follows the snapshot creation, not as the first step.
Option B (A remote snapshot is created for the share): Incorrect. A remote snapshot implies that a snapshot is created directly on the recovery site, which is not how Nutanix Files protection works initially. The first step is to create a local snapshot on the primary site, which is then replicated to the remote site as part of the protection process (e.g., via NearSync).
Option C (The share is created on the recovery site with a similar configuration): Incorrect. While this step may occur during replication setup (e.g., the remote site’s file server is configured to host a read-only copy of the share, as seen in the exhibit for Question 24), it is not the first process initiated. The share on the recovery site is created as part of the replication process, which begins after a local snapshot is taken.
Option D (A local snapshot is created for the share): Correct. When a share is protected for the first time (e.g., by enabling a snapshot schedule or replication policy), the initial step is to create a local snapshot of the share on the primary site. This snapshot captures the share’s current state and serves as the baseline for protection mechanisms like replication or recovery. For example, in a NearSync setup, a local snapshot is taken, and then the snapshot data is replicated to the remote site.
Why Option D?
Protecting a share in Nutanix Files typically involves snapshots as the foundation for data protection. The first step is to create a local snapshot of the share on the primary site, which captures the share’s data and metadata. This snapshot can then be used for local recovery (e.g., via Self-Service Restore) or replicated to a remote site for DR (e.g., via NearSync). The question focuses on the initial process, making the creation of a local snapshot the correct answer.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“When a share is protected for the first time, whether through a snapshot schedule or a replication policy, the initial step is to create a local snapshot of the share on the primary site. This snapshot captures the share’s current state and serves as the baseline for subsequent protection operations, such as replication to a remote site or local recovery.”
A team of developers is working on a new processing application and requires a solution where they can upload the … code for testing API calls. Older iterations should be retained as newer code is developed and tested.
Create an SMB Share with Files and enable Previous Version
Provision a Volume Group and connect via iSCSI with MPIO.
Create an NFS Share, mounted on a Linux Server with Files.
Create a bucket in Objects with Versioning enabled.
Nutanix Objects supports versioning, which is a feature that allows multiple versions of an object to be preserved in the same bucket. Versioning can be useful for developers who need to upload their code for testing API calls and retain older iterations as newer code is developed and tested. Versioning can also provide protection against accidental deletion or overwrite of objects. References: Nutanix Objects Administration Guide
The development team needs a solution to upload code via API calls while retaining older versions of the code as newer versions are developed. This use case aligns with versioned object storage, which supports API-based uploads (e.g., S3 APIs) and automatic versioning.
Analysis of Options:
Option A (Create an SMB Share with Files and enable Previous Versions): Incorrect. Nutanix Files supports SMB shares with the Previous Versions feature (via Self-Service Restore), which allows users to access earlier versions of files. However, SMB is not typically accessed via API calls; it is designed for file sharing over a network (e.g., Windows clients). This does not align with the requirement for API-based uploads.
Option B (Provision a Volume Group and connect via iSCSI with MPIO): Incorrect. Nutanix Volumes provides block storage via iSCSI, which is suitable for applications requiring low-level storage access (e.g., databases). However, iSCSI does not support API-based uploads or versioning, making it unsuitable for the developers' needs.
Option C (Create an NFS Share, mounted on a Linux Server with Files): Incorrect. An NFS share in Nutanix Files allows file access over the NFS protocol, which can be mounted on a Linux server. While NFS supports file storage, it does not natively provide versioning, and NFS is not typically accessed via API calls for programmatic uploads.
Option D (Create a bucket in Objects with Versioning enabled): Correct. Nutanix Objects, part of Nutanix Unified Storage (NUS), provides S3-compatible object storage. It supports versioning, which allows multiple versions of an object to be retained when new versions are uploaded. The S3 API is ideal for programmatic uploads via API calls, meeting the developers' requirement to upload code for testing while retaining older iterations.
Why Option D is the Best Solution:
Nutanix Objects with Versioning: Objects supports S3 APIs, which are widely used for programmatic uploads in development workflows. Enabling versioning ensures that older versions of the code are retained automatically when new versions are uploaded, meeting the requirement to retain older iterations.
API Support: The S3 API is a standard for API-based uploads, making it ideal for the developers’ workflow.
Scalability: Objects is designed for scalable object storage, suitable for development and testing environments.
Exact Extract from Nutanix Documentation:
From the Nutanix Objects Administration Guide (available on the Nutanix Portal):
“Nutanix Objects supports versioning for buckets, allowing multiple versions of an object to be retained. When versioning is enabled, uploading a new version of an object preserves the previous versions, which can be accessed or restored via S3 API calls. This feature is ideal for development workflows where older iterations of files need to be retained.”
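For illustration, a minimal S3-compatible workflow with the AWS CLI is sketched below; the endpoint, bucket, and file names are hypothetical placeholders:
# Enable versioning on the bucket (if it was not enabled at creation)
aws s3api put-bucket-versioning --bucket dev-code --versioning-configuration Status=Enabled --endpoint-url https://objects.example.com
# Uploading to the same key now preserves the prior iteration as an older version
aws s3 cp build.zip s3://dev-code/app/build.zip --endpoint-url https://objects.example.com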
An administrator has planned to copy any large files to a Files share through the RoboCopy tool. While moving the data, the copy operation was interrupted due to a network bandwidth issue. Which command option resumes any interrupted copy operation?
robocopy with the /c option
robocopy with the /s option
robocopy with the /z option
robocopy with the /r option
Nutanix Files, part of Nutanix Unified Storage (NUS), provides CIFS (SMB) shares that can be accessed by Windows clients. RoboCopy (Robust File Copy) is a Windows command-line tool commonly used to copy files to SMB shares, such as those provided by Nutanix Files. The administrator is copying large files to a Files share using RoboCopy, but the operation was interrupted due to a network bandwidth issue. The goal is to resume the interrupted copy operation without restarting from scratch.
Analysis of Options:
Option A (robocopy with the /c option): Incorrect. The /c option is not a valid RoboCopy option. RoboCopy options typically start with a forward slash (e.g., /z, /s), and there is no /c option for resuming interrupted copies.
Option B (robocopy with the /s option): Incorrect. The /s option in RoboCopy copies subdirectories (excluding empty ones) but does not provide functionality to resume interrupted copy operations. It is used to define the scope of the copy, not to handle interruptions.
Option C (robocopy with the /z option): Correct. The /z option in RoboCopy enables “restartable mode,” which allows the tool to resume a copy operation from where it left off if it is interrupted (e.g., due to a network issue). This mode is specifically designed for copying large files over unreliable networks, as it checkpoints the progress and can pick up where it stopped, ensuring the copy operation completes without restarting from the beginning.
Option D (robocopy with the /r option): Incorrect. The /r option in RoboCopy specifies the number of retries for failed copies (e.g., /r:3 retries 3 times). While this can help with transient errors, it does not resume an interrupted copy operation from the point of interruption—it retries the entire file copy, which is inefficient for large files.
Why Option C?
The /z option in RoboCopy enables restartable mode, which is ideal for copying large files to a Nutanix Files share over a network that may experience interruptions. This option ensures that if the copy operation is interrupted (e.g., due to a network bandwidth issue), RoboCopy can resume from the point of interruption, minimizing data retransmission and ensuring efficient completion of the copy.
Exact Extract from Microsoft Documentation (RoboCopy):
From the Microsoft RoboCopy Documentation (available on Microsoft Docs):
“/z : Copies files in restartable mode. In restartable mode, if a file copy is interrupted, RoboCopy can resume the copy operation from where it left off, which is particularly useful for large files or unreliable networks.”
Additional Notes:
Since RoboCopy is a Microsoft tool interacting with Nutanix Files SMB shares, the behavior of RoboCopy options is standard and not specific to Nutanix. However, Nutanix documentation recommends using tools like RoboCopy with appropriate options (e.g., /z) for reliable data migration to Files shares.
Nutanix Files supports SMB features like Durable File Handles (as noted in Question 19), which complement tools like RoboCopy by maintaining session state during brief network interruptions, but the /z option directly addresses resuming the copy operation itself.
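A hedged example of a full command follows; the source directory and share name are hypothetical placeholders (run from PowerShell or cmd.exe, with the # lines as annotations):
# Copy the tree to the Files SMB share in restartable mode,
# retrying failed files 3 times with a 10-second wait between retries
robocopy C:\data \\files01.example.com\share1 /E /Z /R:3 /W:10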
Deploying Files instances requires which two minimum resources? (Choose two.)
8 vCPUs per host
8 GiB of memory per host
12 GiB of memory per host
4 vCPUs per host
Nutanix Files instances are deployed using File Server Virtual Machines (FSVMs) that run on the Nutanix cluster’s hypervisor (AHV, ESXi, or Hyper-V). The minimum resource requirements for deploying FSVMs are specified in the Nutanix Files documentation to ensure proper performance. These requirements are typically defined per FSVM, not per host, as FSVMs are virtual machines distributed across the cluster’s hosts.
According to the official Nutanix documentation:
vCPUs: Each FSVM requires a minimum of 4 vCPUs to operate effectively.
Memory: Each FSVM requires a minimum of 12 GiB of memory (RAM).
The question asks for the “minimum resources” required for deploying Files instances, and the options are framed as “per host.” However, in the context of Nutanix Files, resource requirements are specified per FSVM, as FSVMs are the entities consuming these resources. The options likely reflect a misunderstanding in the original question phrasing, but based on the standard Nutanix Files deployment requirements:
4 vCPUs per FSVM (option D) is correct, as this is the minimum vCPU requirement.
12 GiB of memory per FSVM (option C) is correct, as this is the minimum memory requirement.
Options A (8 vCPUs per host) and B (8 GiB of memory per host) do not align with the documented minimum requirements for FSVMs:
8 vCPUs is higher than the minimum requirement of 4 vCPUs per FSVM.
8 GiB of memory is lower than the minimum requirement of 12 GiB per FSVM.
Exact Extract from Nutanix Documentation:
“For a Nutanix Files deployment, each File Server Virtual Machine (FSVM) requires the following minimum resources:
4 vCPUs
12 GiB of RAM
These resources ensure that the FSVM can handle file service operations efficiently.” — Nutanix Files Deployment Guide, Version 4.0, Section: “System Requirements for Nutanix Files”
After configuring Smart DR, an administrator is unable to see the policy in the Policies tab. The administrator has confirmed that all FSVMs are able to connect to Prism Central via port 9440 bidirectionally. What is the possible reason for this issue?
The primary and recovery file servers do not have the same version.
Port 7515 should be open for all External/Client IPs of FSVMs on the Source and Target cluster.
The primary and recovery file servers do not have the same protocols.
Port 7515 should be open for all Internal/Storage IPs of FSVMs on the Source and Target cluster.
Smart DR in Nutanix Files, part of Nutanix Unified Storage (NUS), is a disaster recovery (DR) solution that simplifies the setup of replication policies between file servers (e.g., using NearSync, as seen in Question 24). After configuring a Smart DR policy, the administrator expects to see it in the Policies tab in Prism Central, but it is not visible despite confirmed connectivity between FSVMs and Prism Central via port 9440 (used for Prism communication, as noted in Question 21). This indicates a potential mismatch or configuration issue.
Analysis of Options:
Option A (The primary and recovery file servers do not have the same version): Correct. Smart DR requires that the primary and recovery file servers (source and target) run the same version of Nutanix Files to ensure compatibility. If the versions differ (e.g., primary on Files 4.0, recovery on Files 3.8), the Smart DR policy may fail to register properly in Prism Central, resulting in it not appearing in the Policies tab. This is a common issue in mixed-version environments, as Smart DR relies on consistent features and APIs across both file servers.
Option B (Port 7515 should be open for all External/Client IPs of FSVMs on the Source and Target cluster): Incorrect. Port 7515 is not a standard port for Nutanix Files or Smart DR communication. The External/Client network of FSVMs (used for SMB/NFS traffic) communicates with clients, not between FSVMs or with Prism Central for policy management. Smart DR communication between FSVMs and Prism Central uses port 9440 (already confirmed open), and replication traffic between FSVMs typically uses other ports (e.g., 2009, 2020), but not 7515.
Option C (The primary and recovery file servers do not have the same protocols): Incorrect. Nutanix Files shares can support multiple protocols (e.g., SMB, NFS), but Smart DR operates at the file server level, not the protocol level. The replication policy in Smart DR replicates share data regardless of the protocol, and a protocol mismatch would not prevent the policy from appearing in the Policies tab—it might affect client access, but not policy visibility.
Option D (Port 7515 should be open for all Internal/Storage IPs of FSVMs on the Source and Target cluster): Incorrect. Similar to option B, port 7515 is not relevant for Smart DR or Nutanix Files communication. The Internal/Storage network of FSVMs is used for communication with the Nutanix cluster’s storage pool, but Smart DR policy management and replication traffic do not rely on port 7515. The key ports for replication (e.g., 2009, 2020) are typically already open, and the issue here is policy visibility, not replication traffic.
Why Option A?
Smart DR requires compatibility between the primary and recovery file servers, including running the same version of Nutanix Files. A version mismatch can cause the Smart DR policy to fail registration in Prism Central, preventing it from appearing in the Policies tab. Since port 9440 connectivity is already confirmed, the most likely issue is a version mismatch, which is a common cause of such problems in Nutanix Files DR setups.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“Smart DR requires that the primary and recovery file servers run the same version of Nutanix Files to ensure compatibility. A version mismatch between the source and target file servers can prevent the Smart DR policy from registering properly in Prism Central, resulting in the policy not appearing in the Policies tab.”
Nutanix Objects can use no more than how many vCPUs for each AHV or ESXi node?
12
16
8
10
Nutanix Objects, a component of Nutanix Unified Storage (NUS), provides an S3-compatible object storage solution. It is deployed as a set of virtual machines (Object Store Service VMs) running on the Nutanix cluster’s hypervisor (AHV or ESXi). The resource allocation for these VMs, including the maximum number of vCPUs per node, is specified in the Nutanix Objects documentation to ensure optimal performance and resource utilization.
According to the official Nutanix documentation, each Object Store Service VM is limited to a maximum of 8 vCPUs per node (AHV or ESXi). This constraint ensures that the object storage service does not overburden the cluster’s compute resources, maintaining balance with other workloads.
Option C: Correct. The maximum number of vCPUs for Nutanix Objects per node is 8.
Option A (12), Option B (16), and Option D (10): Incorrect, as they exceed or do not match the documented maximum of 8 vCPUs per node.
Exact Extract from Nutanix Documentation:
From the Nutanix Objects Administration Guide (available on the Nutanix Portal):
“Each Object Store Service VM deployed on an AHV or ESXi node is configured with a maximum of 8 vCPUs to ensure efficient resource utilization and performance. This limit applies per node hosting the Object Store Service.”
Additional Notes:
The vCPU limit is per Object Store Service VM on a given node, not for the entire Objects deployment. Multiple VMs may run across different nodes, but each is capped at 8 vCPUs.
The documentation does not specify different limits for AHV versus ESXi, so the 8 vCPU maximum applies universally.
An administrator has been tasked with creating a distributed share on a single-node cluster, but has been unable to successfully complete the task.
Why is this task failing?
File server version should be greater than 3.8.0
AOS version should be greater than 6.0.
Number of distributed shares limit reached.
Distributed shares require multiple nodes.
A distributed share is a type of SMB share or NFS export that distributes the hosting of top-level directories across multiple FSVMs, which improves load balancing and performance. A distributed share cannot be created on a single-node cluster, because there is only one FSVM available, and distributing the directories requires multiple FSVMs, and therefore multiple nodes in the cluster. As a result, the task of creating a distributed share on a single-node cluster will fail. References: Nutanix Files Administration Guide, page 33; Nutanix Files Solution Guide, page 8
A distributed share in Nutanix Files, part of Nutanix Unified Storage (NUS), is a share that spans multiple File Server Virtual Machines (FSVMs) to provide scalability and high availability. Distributed shares are designed to handle large-scale workloads by distributing file operations across FSVMs.
Analysis of Options:
Option A (File server version should be greater than 3.8.0): Incorrect. While Nutanix Files has version-specific features, distributed shares have been supported since earlier versions (e.g., Files 3.5). The failure to create a distributed share on a single-node cluster is not due to the Files version.
Option B (AOS version should be greater than 6.0): Incorrect. Nutanix AOS (Acropolis Operating System) version 6.0 or later is not a specific requirement for distributed shares. Distributed shares have been supported in earlier AOS versions (e.g., AOS 5.15 and later with compatible Files versions). The issue is related to the cluster's node count, not the AOS version.
Option C (Number of distributed shares limit reached): Incorrect. The question does not indicate that the administrator has reached a limit on the number of distributed shares. The failure is due to the single-node cluster limitation, not a share count limit.
Option D (Distributed shares require multiple nodes): Correct. Distributed shares in Nutanix Files require a minimum of three FSVMs for high availability and load balancing, which in turn requires a cluster with at least three nodes. A single-node cluster cannot support a distributed share because it lacks the necessary nodes to host multiple FSVMs, which are required for the distributed architecture.
Why Option D?
A single-node cluster cannot support a distributed share because Nutanix Files requires at least three FSVMs for a distributed share, and each FSVM typically runs on a separate node for high availability. A single-node cluster can support a non-distributed (standard) share, but not a distributed share, which is designed for scalability across multiple nodes.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“Distributed shares in Nutanix Files require a minimum of three FSVMs to ensure scalability and high availability. This requires a cluster with at least three nodes, as each FSVM is typically hosted on a separate node. Single-node clusters do not support distributed shares due to this requirement.”
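As a purely illustrative pre-flight check, the sketch below queries the file server's FSVM count before attempting to create a distributed share; the endpoint, field names, and credentials are placeholders and assumptions, not confirmed Files API details:
```python
import requests

# Hypothetical pre-flight check: a distributed share needs multiple FSVMs,
# so verify the file server's FSVM count before attempting creation.
# Endpoint and field names below are illustrative assumptions.
PRISM = "https://prism-central.example.com:9440"
AUTH = ("admin", "password")  # use real credential management in practice

def can_create_distributed_share(file_server_uuid: str) -> bool:
    resp = requests.get(
        f"{PRISM}/api/files/v4/file-servers/{file_server_uuid}",
        auth=AUTH, verify=False,
    )
    resp.raise_for_status()
    fsvm_count = resp.json().get("fsvmCount", 1)  # assumed field name
    return fsvm_count >= 3  # distributed shares require at least 3 FSVMs

if not can_create_distributed_share("<file-server-uuid>"):
    print("Cannot create a distributed share: fewer than 3 FSVMs "
          "(e.g., single-node cluster). Use a standard share instead.")
```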
An administrator has received an alert A130357 - VolumeGroupProtectionFailedOnPC with the following details:
Block Serial Number: 16Suxxxxxxxx
Alert Time: Thu Jan 19 2023 20:31:10 GMT-0800 (PST)
Alert Type: VolumeGroupProtectionFailedOnPC
Alert Message: A130357:VolumeGroupProtectionFailedOnPC
Cluster ID: xxxxx
Alert Body: Volume Group protection failed on PC
Which two conditions need to be addressed to allow successful protection of the Volume Group? (Choose two.)
Volume Group is protected in a legacy protection domain.
The Protection Policy applied on Volume Group has an Async snapshot schedule applied.
The Protection Policy applied on Volume Group has a NearSync snapshot schedule applied.
Volume Group is not protected in a legacy protection domain.
The alert A130357 - VolumeGroupProtectionFailedOnPC in a Nutanix environment indicates a failure to protect a Volume Group (VG) in a Protection Domain (PD) managed through Prism Central (PC). Nutanix Volumes, part of Nutanix Unified Storage (NUS), provides block storage via iSCSI, and Volume Groups can be protected using Protection Domains for snapshots and replication. The alert suggests that the protection process failed, likely due to incompatible configurations.
Understanding the Issue:
Volume Group Protection: A Volume Group in Nutanix Volumes can be added to a Protection Domain in Prism Central for data protection (e.g., snapshots, replication).
Protection Failed on PC: The failure occurred during the protection process, managed through Prism Central, indicating an issue with the Protection Domain or policy settings.
Conditions to Address: The failure is likely due to configuration mismatches or unsupported settings in the Protection Domain or policy.
Analysis of Conditions:
Option A (Volume Group is protected in a legacy protection domain): Correct. A “legacy protection domain” refers to an older protection mechanism in Nutanix (e.g., from earlier AOS versions) that may not be fully compatible with newer Prism Central features or Volume Group protection workflows. If the Volume Group is part of a legacy PD, the protection process may fail due to deprecated features or APIs. Addressing this involves migrating the Volume Group to a modern Protection Domain in Prism Central, ensuring compatibility.
Option B (The Protection Policy applied on Volume Group has an Async snapshot schedule applied): Incorrect. An Async (asynchronous) snapshot schedule is a standard and supported configuration for Volume Group protection in a Protection Domain. Async schedules take snapshots at intervals (e.g., hourly, daily) and replicate them to a remote site, and this does not cause protection failures—it’s a valid setup.
Option C (The Protection Policy applied on Volume Group has a NearSync snapshot schedule applied): Correct. NearSync is a near-synchronous replication schedule (e.g., 1-minute RPO, as in Question 24) that is supported for VMs and some Nutanix Files configurations, but it is not supported for Volume Groups in a Protection Domain. If a NearSync schedule is applied to a Volume Group’s Protection Policy, the protection will fail because Volume Groups only support Async schedules. Addressing this involves changing the schedule to an Async policy, which is compatible with Volume Groups.
Option D (Volume Group is not protected in a legacy protection domain): Incorrect. This option suggests that the Volume Group is already in a modern (non-legacy) Protection Domain, which would not cause the failure. The issue lies in specific conditions (e.g., legacy PD or incompatible schedule), so this option does not identify a condition that needs addressing.
Selected Conditions:
A: A legacy Protection Domain can cause compatibility issues, leading to protection failures. Migrating to a modern PD in Prism Central resolves this.
C: A NearSync schedule is not supported for Volume Groups, causing the protection to fail. Switching to an Async schedule ensures compatibility.
Why These Conditions?
Legacy Protection Domain (A): Legacy PDs may use outdated mechanisms that are incompatible with Prism Central’s modern protection workflows for Volume Groups, causing failures.
NearSync Schedule (C): Volume Groups in a Protection Domain only support Async snapshot schedules. A NearSync schedule, designed for low-RPO replication, is not supported and will cause the protection process to fail.
Exact Extract from Nutanix Documentation:
From the Nutanix Prism Alerts Reference Guide (available on the Nutanix Portal):
“Alert A130357 - VolumeGroupProtectionFailedOnPC: This alert is triggered when Volume Group protection fails in a Protection Domain managed through Prism Central. Common causes include:
The Volume Group is protected in a legacy protection domain, which is not fully compatible with modern Prism Central workflows. Migrate the Volume Group to a modern Protection Domain.
The Protection Policy applied to the Volume Group has a NearSync snapshot schedule, which is not supported for Volume Groups. Change the schedule to an Async policy to allow successful protection.”
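For remediation triage, a hedged sketch like the following lists protection rules through the Prism Central v3 API and flags schedules whose RPO falls in the NearSync range (under 15 minutes); the request shape follows the v3 API as an assumption, so verify field names against your PC version:
```python
import requests

# Hedged sketch: list Prism Central protection rules and flag any schedule
# whose RPO is below 15 minutes (the NearSync range), since NearSync
# schedules are not supported for Volume Groups. Field names follow the
# v3 API as an assumption; verify against your Prism Central version.
PC = "https://prism-central.example.com:9440"
AUTH = ("admin", "password")

resp = requests.post(
    f"{PC}/api/nutanix/v3/protection_rules/list",
    json={"kind": "protection_rule"}, auth=AUTH, verify=False,
)
resp.raise_for_status()

for rule in resp.json().get("entities", []):
    name = rule["spec"]["name"]
    for az in rule["spec"]["resources"].get(
            "availability_zone_connectivity_list", []):
        for sched in az.get("snapshot_schedule_list", []):
            rpo = sched.get("recovery_point_objective_secs", 0)
            if 0 < rpo < 900:  # under 15 minutes implies NearSync
                print(f"Policy '{name}' uses a NearSync-range RPO "
                      f"({rpo}s); not supported for Volume Groups.")
```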
Which two platforms are currently supported for Smart Tiering? (Choose two.)
Google Cloud Storage
AWS Standard
Wasabi
Azure Blob
The two platforms currently supported for Smart Tiering are AWS Standard and Azure Blob. Smart Tiering is a feature that allows administrators to tier data from Files to cloud storage based on file age, file size, and file type. Smart Tiering can help reduce storage costs and optimize the performance of Files. AWS Standard and Azure Blob are the currently supported cloud storage targets, with additional platforms planned. References: Nutanix Files Administration Guide, page 99; Nutanix Files Solution Guide, page 11
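To make the selection criteria concrete, here is a simplified local sketch that mimics how files might be chosen by age, size, and type; it is not the actual Files tiering engine, and the thresholds are arbitrary examples:
```python
import os
import time

# Illustrative only: Smart Tiering policies select files by age, size,
# and type. This local directory walk mimics that selection logic; the
# real tiering engine runs inside Nutanix Files, not on a client.
MIN_AGE_DAYS = 90            # tier files not modified in 90 days
MIN_SIZE_BYTES = 64 * 1024   # skip files smaller than 64 KiB
SKIP_EXTENSIONS = {".db", ".lock"}

def tiering_candidates(root: str):
    cutoff = time.time() - MIN_AGE_DAYS * 86400
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if (st.st_mtime < cutoff
                    and st.st_size >= MIN_SIZE_BYTES
                    and os.path.splitext(name)[1] not in SKIP_EXTENSIONS):
                yield path

for candidate in tiering_candidates("/mnt/share"):
    print(candidate)
```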
Which two steps are required for enabling Data Lens? (Choose two.)
In Prism, enable Pulse health monitoring.
Configure a MyNutanix account to access the Data Lens console.
Configure the Data Services IP in Prism Central.
Add File Services VM admin credentials to a MyNutanix account.
Nutanix Data Lens, part of Nutanix Unified Storage (NUS), provides data governance, analytics, and ransomware protection for Nutanix Files. Enabling Data Lens involves setting up access to the Data Lens service, which is a cloud-based service hosted by Nutanix, and integrating it with the on-premises file server.
Analysis of Options:
Option A (In Prism, enable Pulse health monitoring): Incorrect. Pulse is a Nutanix feature that collects telemetry data for health monitoring and support, sending it to Nutanix Insights. While Pulse is recommended for overall cluster health, it is not a required step for enabling Data Lens. Data Lens operates independently of Pulse and focuses on file server analytics, not cluster health monitoring.
Option B (Configure a MyNutanix account to access the Data Lens console): Correct. Data Lens is a cloud-based service, and accessing its console requires a MyNutanix account. The administrator must configure the MyNutanix account credentials in Prism Central to enable Data Lens and access its features, such as the Data Lens dashboard for monitoring file server activity. This is a mandatory step to integrate with the cloud service.
Option C (Configure the Data Services IP in Prism Central): Incorrect. The Data Services IP is used for iSCSI traffic in Nutanix Volumes (as noted in Questions 25 and 31), not for Data Lens. Data Lens communicates with the Nutanix cloud (insights.nutanix.com) over the internet and does not require a Data Services IP configuration.
Option D (Add File Services VM admin credentials to a MyNutanix account): Correct. To enable Data Lens for a file server, the administrator must provide the File Services VM (FSVM) admin credentials, which are used to authenticate and integrate the file server with the Data Lens service. These credentials are added via the MyNutanix account configuration in Prism Central, allowing Data Lens to access the file server for monitoring and analytics.
Selected Steps:
B: Configuring a MyNutanix account is required to access the Data Lens console and enable the service.
D: Adding FSVM admin credentials to the MyNutanix account ensures that Data Lens can authenticate and monitor the file server.
Exact Extract from Nutanix Documentation:
From the Nutanix Data Lens Administration Guide (available on the Nutanix Portal):
“To enable Data Lens, configure a MyNutanix account in Prism Central to access the Data Lens console. Additionally, add the File Services VM admin credentials to the MyNutanix account to allow Data Lens to authenticate with the file server and enable monitoring and analytics features.”
What is a prerequisite for deploying Smart DR?
Open TCP port 7515 on all client network IPs uni-directionally on the source and recovery file servers.
The primary and recovery file servers must have the same domain name.
Requires one-to-many shares.
The Files Manager must have at least three file servers.
Smart DR in Nutanix Files, part of Nutanix Unified Storage (NUS), simplifies disaster recovery (DR) by automating replication policies between file servers (e.g., using NearSync, as seen in Question 24). Deploying Smart DR has specific prerequisites to ensure compatibility and successful replication between the primary and recovery file servers.
Analysis of Options:
Option A (Open TCP port 7515 on all client network IPs uni-directionally on the source and recovery file servers): Incorrect. Port 7515 is not a standard port for Nutanix Files or Smart DR communication. Smart DR replication typically uses ports like 2009 and 2020 for data transfer between FSVMs, and port 9440 for communication with Prism Central (as noted in Question 45). The client network IPs (used for SMB/NFS traffic) are not involved in Smart DR replication traffic, and uni-directional port opening is not a requirement.
Option B (The primary and recovery file servers must have the same domain name): Correct. Smart DR requires that the primary and recovery file servers are joined to the same Active Directory (AD) domain (i.e., same domain name) to ensure consistent user authentication and permissions during failover. This is a critical prerequisite, as mismatched domains can cause access issues when the recovery site takes over, especially for SMB shares relying on AD authentication.
Option C (Requires one-to-many shares): Incorrect. Smart DR does not require one-to-many shares (i.e., a single share replicated to multiple recovery sites). Nutanix Files supports one-to-one replication for shares (e.g., primary to recovery site, as seen in the exhibit for Question 24), and Smart DR does not support one-to-many replication, so it cannot be a prerequisite.
Option D (The Files Manager must have at least three file servers): Incorrect. Files Manager is the Prism Central component used to deploy and manage file servers, and it has no requirement for a minimum of three file servers. Smart DR can be deployed with a single file server on each site (primary and recovery); the three-instance recommendation applies to FSVMs within a file server for high availability, not to the number of file servers.
Why Option B?
Smart DR ensures seamless failover between primary and recovery file servers, which requires consistent user authentication. Both file servers must be joined to the same AD domain (same domain name) to maintain user permissions and access during failover, especially for SMB shares. This is a documented prerequisite for Smart DR deployment to avoid authentication issues.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“A prerequisite for deploying Smart DR is that the primary and recovery file servers must be joined to the same Active Directory domain (same domain name). This ensures consistent user authentication and permissions during failover, preventing access issues for clients.”
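A hedged sketch of checking this prerequisite programmatically might compare the AD domain reported by each file server; the endpoint and field names below are illustrative assumptions, not confirmed Files API details:
```python
import requests

# Hedged sketch: verify the Smart DR prerequisite that both file servers
# are joined to the same AD domain. Endpoint and field names are
# illustrative assumptions.
PC = "https://prism-central.example.com:9440"
AUTH = ("admin", "password")

def ad_domain(file_server_uuid: str) -> str:
    resp = requests.get(
        f"{PC}/api/files/v4/file-servers/{file_server_uuid}",
        auth=AUTH, verify=False,
    )
    resp.raise_for_status()
    return resp.json().get("adDomain", "")  # assumed field name

primary = ad_domain("<primary-fs-uuid>")
recovery = ad_domain("<recovery-fs-uuid>")
if primary.lower() != recovery.lower():
    print(f"Smart DR prerequisite not met: {primary!r} != {recovery!r}")
```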
An administrator wants to monitor their Files environment for suspicious activities, such as mass deletion or access denials.
How can the administrator be alerted to such activities?
Configure Alerts & Events in the Files Console, filtering for Warning severity.
Deploy the File Analytics VM and configure anomaly rules.
Configure Files to use ICAP servers, with monitors for desired activities.
Create a data protection policy in the Files view in Prism Central.
The administrator can monitor their Files environment for suspicious activities, such as mass deletion or access denials, by deploying the File Analytics VM and configuring anomaly rules. File Analytics is a feature that provides insights into the usage and activity of file data stored on Files. File Analytics consists of a File Analytics VM (FAVM) that runs on a Nutanix cluster and communicates with the File Server VMs (FSVMs) that host the file shares. File Analytics can alert the administrator when there is an unusual or suspicious activity on file data, such as mass deletion, encryption, permission change, or access denial. The administrator can configure anomaly rules to define the threshold, time window, and notification settings for each type of anomaly. References: Nutanix Files Administration Guide, page 93; Nutanix File Analytics User Guide
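To make the rule settings concrete, the following is an illustrative representation (not an actual File Analytics API payload) of two anomaly rules covering mass deletion and repeated access denials:
```python
# Illustrative representation of File Analytics anomaly rules. The real
# rules are configured in the File Analytics UI; the keys below mirror
# the settings named in the guide (operation, threshold, time window,
# notification) and are not an actual API payload.
anomaly_rules = [
    {
        "operation": "delete",           # mass deletion
        "threshold_operations": 1000,    # alert if more than 1000 deletes...
        "window_minutes": 15,            # ...within a 15-minute window
        "per_user": True,
        "notify": ["storage-admins@example.com"],
    },
    {
        "operation": "permission_denied",  # repeated access denials
        "threshold_operations": 200,
        "window_minutes": 10,
        "per_user": True,
        "notify": ["security@example.com"],
    },
]
```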
Workload optimization on Files is configured on which entity?
Volume
Share
Container
File Server
Workload optimization in Nutanix Files, part of Nutanix Unified Storage (NUS), involves tuning the Files deployment to handle specific workloads efficiently. This was previously discussed in Question 13, where workload optimization was based on FSVM quantity. The question now asks which entity workload optimization is configured on.
Analysis of Options:
Option A (Volume): Incorrect. Volumes in Nutanix refer to block storage provided by Nutanix Volumes, not Nutanix Files. Workload optimization for Files does not involve Volumes, which are a separate entity for iSCSI-based storage.
Option B (Share): Incorrect. Shares in Nutanix Files are the individual file shares (e.g., SMB, NFS) accessed by clients. While shares can be tuned (e.g., quotas, permissions), workload optimization in Files is not configured at the share level—it applies to the broader file server infrastructure.
Option C (Container): Incorrect. Containers in Nutanix are logical storage pools managed by AOS, used to store data for VMs, Files, and other services. While Files data resides in a container, workload optimization is not configured at the container level—it is specific to the Files deployment.
Option D (File Server): Correct. Workload optimization in Nutanix Files is configured at the File Server level, which consists of multiple FSVMs (as established in Question 13). The File Server is the entity that manages all FSVMs, shares, and resources, and optimization tasks (e.g., scaling FSVMs, adjusting resources) are applied at this level to handle workloads efficiently.
Why Option D?
Workload optimization in Nutanix Files involves adjusting resources and configurations at the File Server level, such as scaling the number of FSVMs (as in Question 13) or tuning memory and CPU for the File Server. The File Server encompasses all FSVMs and shares, making it the entity where optimization is configured to ensure the entire deployment can handle the workload effectively.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“Workload optimization in Nutanix Files is configured at the File Server level. This involves adjusting the number of FSVMs, allocating resources (e.g., CPU, memory), and tuning configurations to optimize the File Server for specific workloads.”
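As a toy illustration of File Server-level optimization, the sketch below derives a recommended FSVM count from a concurrent-connection estimate; the per-FSVM connection figure is a placeholder assumption, not an official sizing rule, so consult the Files sizing documentation for real numbers:
```python
# Illustrative sizing helper: workload optimization is applied at the
# File Server level, e.g., by scaling FSVM count. The per-FSVM figure
# below is a placeholder assumption, not an official sizing rule.
CONNECTIONS_PER_FSVM = 250   # placeholder assumption
MIN_FSVMS = 3                # standard minimum for a file server

def recommended_fsvms(concurrent_connections: int) -> int:
    needed = -(-concurrent_connections // CONNECTIONS_PER_FSVM)  # ceil div
    return max(MIN_FSVMS, needed)

print(recommended_fsvms(1200))  # -> 5 FSVMs under these assumptions
```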
Immediately after creation, the administrator is asked to change the name of an Objects store.
How will the administrator achieve this request?
Enable versioning and then rename the Objects store, disable versioning
The Objects store can only be renamed if hosted on ESXi.
Delete and recreate a new Objects store with the updated name.
Update the name of the Objects store by using a CORS XML file
The administrator can achieve this request by deleting and recreating a new Objects store with the updated name. Objects is a feature that allows users to create and manage object storage clusters on a Nutanix cluster. Objects clusters can provide S3-compatible access to buckets and objects for various applications and users. Objects clusters can be created and configured in Prism Central. However, once an Objects cluster is created, its name cannot be changed or edited. Therefore, the only way to change the name of an Objects cluster is to delete the existing cluster and create a new cluster with the updated name. References: Nutanix Objects User Guide, page 9; Nutanix Objects Solution Guide, page 8
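Because the rename path is delete-and-recreate, any existing data must first be copied out. A minimal sketch using the S3-compatible API that Nutanix Objects exposes (via boto3, with placeholder endpoint and keys) could perform a server-side copy between buckets:
```python
import boto3

# Since an Objects store cannot be renamed, data must be copied into a
# store (or bucket) created with the desired name before the old one is
# deleted. Endpoint and key values below are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",  # Objects endpoint
    aws_access_key_id="<access-key>",
    aws_secret_access_key="<secret-key>",
)

SRC, DST = "old-bucket", "new-bucket"
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=SRC):
    for obj in page.get("Contents", []):
        # Server-side copy; no data flows through the client.
        # (Objects larger than 5 GB require multipart copy instead.)
        s3.copy_object(Bucket=DST,
                       Key=obj["Key"],
                       CopySource={"Bucket": SRC, "Key": obj["Key"]})
```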
An administrator is tasked with deploying a Microsoft Server Failover Cluster for a critical application that uses shared storage.
The failover cluster instance will consist of VMs running on an AHV-hosted cluster and bare metal servers for maximum resiliency.
What should the administrator do to satisfy this requirement?
Create a Bucket with Objects.
Provision a Volume Group with Volumes.
Create an SMB Share with Files.
Provision a new Storage Container.
Nutanix Volumes allows administrators to provision a volume group with one or more volumes that can be attached to multiple VMs or physical servers via iSCSI. This enables the creation of a Microsoft Server Failover Cluster that uses shared storage for a critical application.
Microsoft Server Failover Cluster typically uses shared block storage for its quorum disk and application data. Nutanix Volumes provides this via iSCSI by provisioning a Volume Group, which can be accessed by both the AHV-hosted VMs and bare metal servers. This setup ensures maximum resiliency, as the shared storage is accessible to all nodes in the cluster, allowing failover between VMs and bare metal servers as needed.
Exact Extract from Nutanix Documentation:
From the Nutanix Volumes Administration Guide (available on the Nutanix Portal):
“Nutanix Volumes provides block storage via iSCSI, which is ideal for Microsoft Server Failover Clusters requiring shared storage. To deploy an MSFC with VMs and bare metal servers, provision a Volume Group in Nutanix Volumes and expose it via iSCSI to all cluster nodes, ensuring shared access to the storage for high availability and failover.”
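A hedged sketch of provisioning such a Volume Group through the Prism v2 API follows; the payload fields and endpoint are assumptions based on the v2 API and should be verified in the API Explorer for your AOS version:
```python
import requests

# Hedged sketch: create a shared Volume Group for MSFC quorum/data disks
# via the Prism v2 API, then whitelist each node's iSCSI initiator.
# Payload and endpoint details are assumptions -- verify before use.
PE = "https://prism-element.example.com:9440"
AUTH = ("admin", "password")

vg_spec = {
    "name": "msfc-shared-vg",
    "is_shared": True,  # allow multiple initiators (MSFC requirement)
    "disk_list": [{
        "create_config": {
            "size": 500 * 1024**3,  # 500 GiB data disk
            "storage_container_uuid": "<container-uuid>",
        },
    }],
}

resp = requests.post(f"{PE}/api/nutanix/v2.0/volume_groups",
                     json=vg_spec, auth=AUTH, verify=False)
resp.raise_for_status()
print("Volume Group create task:", resp.json())
# Next step: attach the IQNs of the AHV VMs and bare-metal hosts so all
# MSFC nodes can log in to the target via the Data Services IP.
```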
An administrator has performed an upgrade to Files. After upgrading, the file server cannot reach the given domain name with the specified DNS server list.
Which two steps should the administrator perform to resolve the connectivity issues with the domain controller servers? (Choose two.)
Verify the DNS settings in Prism Element.
Verify the DNS entries for the given domain name.
Verify the DNS settings in Prism Central.
Verify the DNS server addresses of the domain controllers.
The two steps that the administrator should perform to resolve the connectivity issues with the domain controller servers are:
Verify the DNS settings in Prism Element: DNS (Domain Name System) translates domain names into IP addresses, and the DNS settings specify which DNS servers the file server uses for name resolution. Verifying these settings in Prism Element helps identify and correct any incorrect or outdated DNS server addresses or domain names that could prevent the file server from reaching the domain controller servers.
Verify the DNS entries for the given domain name: DNS entries are records that map domain names to IP addresses or other information. Verifying the entries for the given domain name helps detect and update any incorrect or stale records that could cause connectivity issues with the domain controller servers. References: Nutanix Files Administration Guide, page 32; Nutanix Files Troubleshooting Guide
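To verify resolution against each configured DNS server, a quick check with dnspython can query the AD domain-controller SRV records that Files relies on; the server IPs and domain below are placeholders:
```python
import dns.resolver  # pip install dnspython

# Verify that each configured DNS server can resolve the domain's
# domain-controller SRV records -- the lookup performed when contacting
# Active Directory. Server IPs and domain below are placeholders.
DOMAIN = "corp.example.com"
DNS_SERVERS = ["10.0.0.10", "10.0.0.11"]

for server in DNS_SERVERS:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    try:
        answers = resolver.resolve(f"_ldap._tcp.{DOMAIN}", "SRV")
        targets = ", ".join(str(r.target) for r in answers)
        print(f"{server}: OK -> {targets}")
    except Exception as exc:
        print(f"{server}: FAILED -> {exc}")
```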