Conventions
Convention            Description
root@esx# command     The command is executed as the root user in the ESXi shell.
Chapter 1
About Nutanix Complete Cluster
Nutanix Complete Cluster is a converged, scale-out compute and storage system that is purpose-
built to host and store virtual machines. All nodes in a Nutanix cluster converge to deliver a
unified pool of tiered storage and present resources to VMs for seamless access. A global data
system architecture integrates each new node into the cluster, allowing you to scale the solution
to meet the needs of your infrastructure.
Cluster Architecture
The building block for the cluster is a Nutanix Complete Block, a rackable 2U chassis
containing four high-performance servers. Each server runs a standard hypervisor and
contains processors, memory, and local storage (SSDs and hard disks).
Each node hosts a Nutanix Controller VM that enables the pooling of local storage from all nodes
in the cluster.
A Nutanix Complete Block is a 2U rackable chassis with four industry-standard x86 servers, or
nodes. Each node contains the following components:
Hardware Software
Nutanix Networking
Interfaces
Each Nutanix node has three network interfaces: one 10-gigabit Ethernet interface and two 1-
gigabit Ethernet interfaces. A factory-installed Nutanix Complete Block sends all traffic through
a 10-gigabit Ethernet port on each node. The two 1-gigabit ports per node are set up as standby
interfaces.
IP Addresses
All Controller VMs and ESXi hosts have two network interfaces.
Note: The ESXi and CVM interfaces on vSwitch0 cannot use IP addresses in any
subnets that overlap with subnet 192.168.5.0/24.
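As a quick sanity check when planning addresses, overlap with the reserved range can be tested with Python's standard ipaddress module (an illustrative sketch; the candidate subnets shown are examples, not Nutanix defaults):

```python
import ipaddress

# Reserved for the internal link between each ESXi host and its Controller VM.
RESERVED = ipaddress.ip_network("192.168.5.0/24")

def overlaps_reserved(cidr):
    """Return True if the candidate subnet overlaps the reserved range."""
    return ipaddress.ip_network(cidr).overlaps(RESERVED)

print(overlaps_reserved("10.1.1.0/24"))     # False: safe to use
print(overlaps_reserved("192.168.0.0/16"))  # True: contains 192.168.5.0/24
```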
Ports
Nutanix uses a number of ports for internal communication. The following unique ports are
required for external access to Controller VMs in a Nutanix cluster.
Nutanix Complete Cluster Version 2.6
vSwitches
A Nutanix node is configured with two vSwitches:
• vSwitchNutanix is used for local communication between the Controller VM and the ESXi
host. It has no uplinks.
• vSwitch0 is used for all other communication. It has uplinks to the three physical network
interfaces. vSwitch0 has two networks:
• Management Network is used for HA, vMotion, and vCenter communication.
• VM Network is used by all VMs.
Caution: If you need to manage network traffic between VMs with greater control,
create additional port groups on vSwitch0. Do not modify vSwitchNutanix.
System Maximums
The figures listed here are the maximum tested and supported values for entities in a Nutanix
cluster. Nutanix clusters are also subject to the vSphere maximum values documented by
VMware.
Entity Supported Maximum
Factory-Installed Components
The components listed here are configured by the Nutanix manufacturing process. Do not modify
any of these components except under the direction of Nutanix support.
Nutanix Software
• Local datastore name
• Settings and contents of any Controller VM, including the name
Important: If you create vSphere resource pools, Nutanix Controller VMs must have
the top share.
ESXi Settings
• NFS settings
• VM swapfile location
• VM startup/shutdown order
• iSCSI software adapter settings
Hosts read and write data in shared Nutanix datastores as if they were connected to a SAN.
From the perspective of an ESXi host, the only difference is the improved performance that
results from data not traveling across a network. VM data is stored locally and replicated
to other nodes for protection against hardware failure.
When a guest VM submits a write request through ESXi, that request is sent to the Controller
VM on the host. To provide a rapid response to the guest VM, this data is first stored on the
SSD-PCIe device, within a subset of storage called the HOT Cache. This cache is rapidly
distributed across the 10 GigE network to other SSD-PCIe devices in the cluster. HOT Cache
data is periodically transferred to persistent storage within the cluster. Data is written locally for
performance and replicated on multiple nodes for high availability.
When the guest VM sends a read request through ESXi, the Controller VM will read from the
local copy first, if present. If the host does not contain a local copy, then the Controller VM will
read across the network from a host that does contain a copy. As remote data is accessed, it will
be migrated to storage devices on the current host, so that future read requests can be local.
Heat-Optimized Tiering
The Nutanix cluster dynamically manages data based on how frequently it is accessed. When
possible, new data is saved on the SSD tier. Frequently accessed, or "hot," data is kept on
this tier, while "cold" data is migrated to the HDD tier. Cold data that becomes hot again
is moved back to the SSD tier.
This automated data migration also applies to read requests across the network. If a block of data
is repeatedly accessed by a guest VM on a remote host, the cluster will migrate the data to the
SSD device on the remote host. This migration not only reduces network latency, but also ensures
that frequently-accessed data is stored on the fastest storage tier.
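The promote/demote behavior described above can be pictured with a deliberately simplified two-tier model (a conceptual sketch only, not Nutanix's actual migration algorithm; the capacity value and block names are invented):

```python
class TwoTierStore:
    """Toy hot/cold tiering: recently accessed blocks stay on the SSD tier;
    the coldest blocks are demoted to HDD and promoted back when read."""

    def __init__(self, ssd_capacity=2):
        self.ssd_capacity = ssd_capacity
        self.ssd = []     # most recently accessed first
        self.hdd = set()  # demoted ("cold") blocks

    def write(self, block):
        self._touch(block)  # new data lands on the SSD tier when possible

    def read(self, block):
        if block in self.hdd:
            self.hdd.remove(block)  # promote on renewed access
        self._touch(block)
        return block

    def _touch(self, block):
        if block in self.ssd:
            self.ssd.remove(block)
        self.ssd.insert(0, block)
        while len(self.ssd) > self.ssd_capacity:
            self.hdd.add(self.ssd.pop())  # demote the coldest block

store = TwoTierStore(ssd_capacity=2)
for b in ("a", "b", "c"):
    store.write(b)   # "a" goes cold and is demoted when "c" arrives
store.read("a")      # "a" is promoted back to SSD; "b" is demoted
```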
Chapter 2
Cluster Management
Nutanix Command Center is a central location to monitor and configure all entities within the
cluster, including virtual machines, vDisks, and snapshots. You can access Command Center
either through the web-based management console or the Nutanix Command-Line Interface
(nCLI).
Many of the common administrative actions you need to perform can be completed using either
interface. In such cases, it is recommended that you take advantage of the features in the web
console, which provide context and up-to-date information about the cluster.
The web console is also best-suited for monitoring the cluster. The Dashboard page provides
an overview of all Nutanix entities, and allows you to filter the display based on one of these
entities.
Some tasks are only supported in the nCLI. These tasks are not available in the web console for
one of the following reasons:
• The task is a new feature that has not yet been incorporated into the web console.
• The task is part of an advanced feature that most administrators do not need to use.
Web Console
The web console enables you to monitor and manage a Nutanix cluster through an intuitive web
interface.
Context-based Filtering
Throughout the web console, all entities are presented in trays, which dynamically update as
you filter the view. For example, if you select a single VM from the VMs tray, the Hosts tray
updates to display only the node that is currently hosting the selected VM. Removing the VM
filter returns both trays to their previous states.
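Conceptually, the trays behave like collections filtered by the current selection. A minimal sketch, assuming a hypothetical VM-to-host inventory (all names are invented):

```python
# Hypothetical inventory: which node currently hosts each VM.
vm_to_host = {"Win7-01": "node-1", "Win7-02": "node-2", "Linux-01": "node-1"}
all_hosts = ["node-1", "node-2", "node-3", "node-4"]

def hosts_tray(selected_vm=None):
    """No filter: show every host. VM selected: show only the node hosting it."""
    if selected_vm is None:
        return all_hosts
    return [h for h in all_hosts if h == vm_to_host[selected_vm]]

print(hosts_tray("Win7-02"))  # ['node-2']
print(hosts_tray())           # removing the filter restores the full tray
```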
Intuitive Searching
Entities and actions are found not only on their relevant pages, but also through the search field
that is always present in the upper-right corner of the web console.
For example, you can type the name of a virtual machine, such as Win7 or the action edit
Win7. Either text string presents all VMs and other entities that contain Win7 in their name. You
can click any of these options to navigate to the appropriate page of the web console.
Tip: Refer to Default Cluster Credentials on page 3 for the default credentials of all
cluster components.
1. Verify that your system has Java Runtime Environment (JRE) version 5.0 or higher.
To check which version of Java is installed on your system or to download the latest version,
go to http://www.java.com/en/download/installed.jsp.
The procedure to complete this step depends on your operating system. For more information,
go to http://java.com/en/download/help/path.xml.
If you do not set these environment variables, you will need to specify the complete path to
the ncli command when you run it.
If you receive the message Error: Could not connect to Nutanix Gateway, the
cluster is not started. To start the cluster, log on to a Controller VM as the nutanix user and
run the following commands:
nutanix@cvm$ cluster start
If the cluster starts properly, output similar to the following is displayed for each node in the
cluster:
CVM: 172.16.8.191 Up
Medusa UP [22088, 22089, 22090, 22098]
Pithos UP [22331, 22332, 22333, 22334]
Stargate UP [22336, 22341, 22342, 22347]
Chronos UP [22457, 22458, 22459, 22460]
Curator UP [22463, 22466, 22467, 22473]
Prism UP [22472, 22477, 22483, 22501]
AlertManager UP [22502, 22506, 22507, 22537]
Scavenger UP [22536, 22543, 22544, 22564]
StatsAggregator UP [22556, 22557, 22558, 22559]
SysStatCollector UP [22567, 22570, 22571, 22583]
When the cluster is up, exit the nCLI and start it again.
Results. The Nutanix CLI is now in interactive mode. To exit this mode, type exit at the ncli>
prompt.
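When scripting around cluster startup, output in the format shown above can be checked mechanically. A sketch, assuming each service line has the form name, UP/DOWN state, then process IDs (the sample is abridged from the listing above):

```python
sample = """\
CVM: 172.16.8.191 Up
Medusa UP [22088, 22089, 22090, 22098]
Stargate UP [22336, 22341, 22342, 22347]
Prism UP [22472, 22477, 22483, 22501]
"""

def all_services_up(output):
    """Return True only if every service line after the CVM header reports UP."""
    lines = [line for line in output.splitlines()
             if line.strip() and not line.startswith("CVM:")]
    return all(line.split()[1] == "UP" for line in lines)

print(all_services_up(sample))  # True
```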
Command Format
Embedded Help
The nCLI provides assistance on all entities and actions. By typing help at the command line,
you can request additional information at one of three levels of detail.
The nCLI provides additional details at each level. To control the scope of the nCLI help output,
add the detailed parameter, which can be set to either true or false.
For example, type the following command to request a detailed list of all actions and parameters
for the cluster entity.
ncli> cluster help detailed=true
You can also type the following command if you prefer to see a list of parameters for the
cluster edit-params action without descriptions.
ncli> cluster edit-params help detailed=false
Statistics in nCLI
Textual statistics are available in the nCLI with the list-stats action, for example:
ncli> host list-stats id=29
Serviceability
Remote Support
Remote support is enabled during the site installation procedure. Remote support can be managed
in Nutanix Command Center, using either the web console or the nCLI.
Click System Settings > Remote Support to enable, temporarily enable, or disable remote
support. If remote support is temporarily enabled, an icon appears in the header with a
countdown until remote support is disabled.
To start or stop remote support, select the appropriate option in the Remote Support dialog box.
You can also use the nCLI command cluster stop-remote-support. To start remote
support after you have stopped it, use the nCLI command cluster start-remote-support.
Both commands have an optional duration parameter. For example, if you have stopped remote
support and want to enable it only for the next hour, use the following nCLI command.
ncli> cluster start-remote-support duration=60
At the end of 60 minutes, remote support will be disabled.
Email Alerts
Email alerts to Nutanix support are enabled by default. To stop email alerts, use the nCLI
command cluster stop-email-alerts. To start email alerts after you have stopped them,
use the nCLI command cluster start-email-alerts.
Both commands have an optional duration parameter. For example, if email alerts are in effect
and you want to disable them only for the next half hour, use the following nCLI command.
ncli> cluster stop-email-alerts duration=30
At the end of 30 minutes, email alerts will again be sent.
Chapter 3
Storage Management
Nutanix Complete Cluster classifies available storage into separate tiers with distinct
performance capabilities. Storage is managed hierarchically with storage pools, containers,
and vDisks. You can specify the amount of storage contributed from each tier to these storage
entities, and thereby manage their storage performance characteristics.
Storage Tiers
The cluster defines the following tiers, based on the physical storage that is included on each
node.
Note: The cluster also includes an empty tier, named SSD-SATA. This tier will be used
in future product releases, but should be ignored at present.
Storage Pools
Storage pools are groups of physical disks from one or more tiers. Nutanix recommends creating
a single storage pool to hold all disks within the cluster. This configuration, which supports the
majority of use cases, allows the cluster to dynamically optimize the distribution of resources
like capacity and IOPS. Isolating disks into separate storage pools provides physical separation
between VMs, but can also create an imbalance of these resources if the disks are not actively
used.
When you expand your cluster by adding new nodes, the new disks can also be added to the
existing storage pool. This scale-out architecture allows you to build a cluster that grows with
your needs.
2. Click the configuration wheel at the top of the storage pool table to open the Create Storage
Pool pane.
5. Assign disks from one or more nodes by clicking + or - in the relevant rows.
You can also type a number directly in the text fields or check the Select All box.
6. Repeat steps 4 and 5 with any additional tiers that you want to add to the pool.
7. Click Create.
You can also perform this task using the nCLI. For more information, type the
following command:
ncli> storagepool create help
2. Right-click the storage pool that you want to expand and select Edit Storage Pool.
The Update Storage Pool pane appears.
4. Assign disks from one or more nodes by clicking + in the relevant rows.
You can also type a number directly in the text fields.
5. Repeat steps 3 and 4 with any additional tiers that you want to add to the pool.
Nutanix recommends adding all available disks to a single storage pool.
6. Click Update.
You can also perform this task using the nCLI. For more information, type the
following command:
ncli> storagepool update help
Containers
A container is a subset of available storage within a storage pool. Containers hold the virtual
disks (vDisks) used by virtual machines. Selecting a storage pool for a new container defines the
disks where the vDisks are stored.
To Create a Container
2. Click the configuration wheel at the top of the table to open the Create Container pane.
5. Click Create.
A dialog box appears with the following message:
Container (container_name) has been created. Do you want to create
an NFS Datastore on this container?
a. Type a name for the NFS datastore in the Datastore Name field.
b. Click Create.
A message similar to the following is displayed for each host in the cluster:
You can also perform this task using the nCLI. For more information, type the
following command:
ncli> container create help
vDisks
A vDisk is a subset of available storage within a container. If the container is mounted as an
NFS volume, then the creation and management of vDisks within that container is handled
automatically by the cluster. You can view these vDisks within Nutanix Command Center.
It may be necessary to enable iSCSI access for a subset of your VM workloads. To provide iSCSI
access at the VM or host level, you can create vDisks of one of two types:
• RDM, which can be directly attached to a virtual machine as an iSCSI LUN to provide high-
performance storage.
• VMFS, which can be mounted as a VMFS datastore to provide additional shared storage
within the cluster.
Important: VMFS datastores are not recommended for most VM workloads. For
more information, see Datastores on page 30.
vDisk Parameters
vDisks have the following parameters.
To Create a vDisk
Before you begin. Create a container in Nutanix Command Center. See To Create a Container
on page 26.
7. Click Create.
You can also perform this task using the nCLI. For more information, type the
following command:
ncli> vdisk create help
What to do next. If the vDisk is not visible to the host and vmkernel.log shows an error
ChannelID or TargetID is out of range, remove iSCSI targets associated with deleted
vDisks.
The following procedure is only necessary if you plan to attach an iSCSI vDisk to a host (as a
VMFS datastore) or to a VM (as a raw device mapping). If you are using an NFS datastore, the
vDisks are managed completely by the cluster.
4. Right-click the device under iSCSI Software Adapter and select Rescan.
5. Wait until two new tasks (Rescan HBA and Rescan VMFS) show a status of Complete at the
bottom of the vSphere client.
6. Confirm that at least one device with a name that starts with Nutanix iSCSI Disk was added
to the Details pane of the iSCSI adapter.
ESXi has a limit of 256 RDM vDisks per host, as documented in Configuration Maximums
for VMware vSphere 5.0. If a host has detected this number of vDisks, even if they have
subsequently been deleted, new vDisks will not appear. The vmkernel.log shows a message
like this:
To resolve this issue, remove deleted vDisks from the host iSCSI static discovery list in vCenter.
4. Click the iSCSI adapter under iSCSI Software Adapter and click Properties.
6. Locate the deleted vDisk in the list of discovered targets. The target is named
iqn.2010-06.com.nutanix:vdisk_name.
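Deciding which discovered targets are stale comes down to comparing target names against the vDisks that still exist. A sketch based on the naming scheme above (the vDisk names are invented):

```python
IQN_PREFIX = "iqn.2010-06.com.nutanix:"

def stale_targets(discovered, current_vdisks):
    """Return discovered targets whose backing vDisk no longer exists."""
    return [t for t in discovered
            if t.startswith(IQN_PREFIX)
            and t[len(IQN_PREFIX):] not in current_vdisks]

discovered = [
    "iqn.2010-06.com.nutanix:vmfs-data-01",  # vDisk still present
    "iqn.2010-06.com.nutanix:old-rdm-disk",  # vDisk was deleted
]
print(stale_targets(discovered, {"vmfs-data-01"}))
# ['iqn.2010-06.com.nutanix:old-rdm-disk']
```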
Datastores
Nutanix provides choice by supporting both iSCSI and NFS protocols when mounting a storage
volume as a datastore within vSphere. NFS has many performance and scalability advantages
over iSCSI, and is the recommended datastore type.
NFS Datastores
The Nutanix NFS implementation (NDFS) reduces network chatter by localizing the data path
of guest VM traffic to its host. This boosts performance by eliminating the extra hops to
remote storage devices that are common with the pairing of iSCSI and VMFS.
To enable vMotion and related vSphere features, each host in the cluster must mount an NFS
volume using the same datastore name. The Nutanix web console and nCLI both have a function
to create an NFS datastore on multiple hosts in a Nutanix cluster.
VMFS Datastores
VMFS vDisks are exported as iSCSI LUNs that can be mounted as VMFS datastores. The
vDisk name is included in the iSCSI identifier, which helps you identify the correct LUN when
mounting the VMFS volume.
VMFS datastores are not recommended for most VM workloads. To optimize your deployment,
it is recommended that you discuss the needs of all VM workloads with a Nutanix representative
before creating a new VMFS datastore within the cluster.
2. Click the Common Tasks menu toward the top of the dashboard.
4. Type a name for the NFS datastore in the Datastore Name field.
5. If the option is available, select a container from the Container menu. Otherwise, proceed to
the next step.
7. Click Create.
A message similar to the following is displayed for each host in the cluster:
You can also perform this task using the nCLI. For more information, type the
following command:
ncli> datastore create help
2. Select a host in the Nutanix cluster and click the Configuration tab.
6. Select a VMFS vDisk that was previously created in Nutanix Command Center.
Caution: Do not attempt to mount a VMFS datastore on an RDM vDisk. If you are
unsure about the vDisk type, return to Nutanix Command Center and type update
vdisk-name in the web console search field.
9. Type a meaningful name in the Name field, such as NTNX-VMFS, and click Next.
12. Confirm that the new datastore appears in the storage view of all other nodes in the cluster.
If the datastore does not appear, or it is shown as inactive, click Rescan All to rescan the
host's iSCSI adapter.
Appendix A
System Specifications
Hardware Components
Software Components
System Characteristics
Operating Environment
Appendix B
Glossary
block
A set of four Nutanix nodes contained in a single enclosure.
clone
A writeable copy of a vDisk.
cluster
A group of nodes contained in one or more Nutanix blocks.
Controller VM (CVM)
A Nutanix VM that manages storage and other cluster functions on a node.
Command Center
Cluster management tools from Nutanix; includes the web console and nCLI.
container
A subset of available storage within a storage pool.
datastore
A logical container for files necessary for VM operations.
guest VM
A VM running on a Nutanix cluster that executes a workload, such as VDI or Exchange, as
opposed to a VM that is involved in cluster operations, such as the vMA or a Controller VM.
host
An instance of the ESXi hypervisor that runs on a Nutanix node.
node
A physical server contained in a Nutanix block; runs an ESXi host.
snapshot
A read-only copy of the state and data of a VM at a point in time.
storage pool
A group of physical disks from one or more tiers.
tier
A type of physical storage in a Nutanix node. There are two tiers: SSD-PCIe (solid-state drives in
a PCIe slot) and DAS-SATA (hard disk drives on a SATA controller).
vDisk
Data associated with a VM represented as a set of files on a datastore.
vZone
A group of hosts and vDisks to overcome the ESXi limit of 256 RDM vDisks per host.
Abbreviations
CVM
Controller VM
HOT
Heat-optimized tiering
nCLI
Nutanix command-line interface; part of Nutanix Command Center
RDM
Raw device mapping
RF
Replication factor
SVM
Service VM; see Controller VM
VM
Virtual machine
vMA
vSphere Management Assistant
VMFS
Virtual machine file system