A
Acropolis
The service stack on AHV that handles VM lifecycle: create, clone, migrate, snapshot, plus HA (host failure response), Live Migration, and ADS (load balancing). Acropolis is what makes AHV more than just KVM. Functional analog to vCenter's VM-control responsibilities.
See also:AHV · Live Migration · HA · ADS
ADS (Acropolis Dynamic Scheduling)
The AHV equivalent of VMware DRS. Continuously monitors cluster load and rebalances VMs across hosts. Default polling interval 15 minutes (with a 30-minute backoff after a rebalance). Runs automatically; admin-tunable thresholds. Triggered by CPU contention, memory pressure, or storage hotspots. The internal service that runs ADS is named Lazan; you will see the name in logs and acli output. Older docs and the curriculum's earlier draft used "Distributed Scheduler"; the authoritative name is "Dynamic Scheduling."
See also:Acropolis · Live Migration
AHV (Acropolis Hypervisor)
Nutanix's KVM-based hypervisor. Included with NCI / AOS at no additional licensing cost. Runs on every Nutanix node by default. Operationally similar to ESXi for VM admins; technically a Linux+KVM stack underneath. Acropolis services provide the management plane.
See also:Acropolis · KVM · ESXi-on-Nutanix
Anti-ransomware
Real-time ransomware detection capability in Nutanix Files (via on-cluster Files Analytics) and the broader Data Lens product. Detects suspicious write patterns (mass encryption, mass rename, suspicious extension changes) using a 65,000+ signature library plus behavior-based anomaly detection. Alerts, optionally blocks, and snapshots the pre-attack state. One layer in a broader anti-ransomware strategy (complement with endpoint protection, network segmentation, backup hygiene).
See also:Files Analytics · Data Lens · Nutanix Files
AOS (Acropolis Operating System)
The Nutanix platform's core software stack: AHV + DSF + Prism Element. The AOS software is unchanged; the licensing SKU name AOS has been replaced by NCI (Nutanix Cloud Infrastructure). Legacy AOS Pro / AOS Ultimate licenses are no longer available for new sale or renewal; existing AOS customers are being converted to NCI Pro / NCI Ultimate. Per-core subscription. AHV is included at every tier at no extra fee.
See also:NCI · NCI Pro · NCI Ultimate · DSF
AOS Pro
Legacy subscription tier name. Replaced by NCI Pro; no longer available for new sale or renewal. See NCI Pro.
AOS Ultimate
Legacy subscription tier name. Replaced by NCI Ultimate; no longer available for new sale or renewal. See NCI Ultimate.
Application affinity group
A logical grouping of VMs that should be considered together during migration planning, DR orchestration, or policy enforcement. Often defined by application architecture: web tier + app tier + database tier of a single application. Used in Recovery Plans for startup-order definition.
See also:Recovery Plan · Categories
Application-consistent snapshot
Snapshot taken after the application has flushed in-memory state to disk. On Windows, requires VSS (provided by NGT). On Linux, application-level quiesce coordinated through NGT. Compare to crash-consistent (default), which captures disk state without app coordination. Preferred for direct database restoration.
See also:Crash-consistent snapshot · NGT · VSS
Async Replication
Periodic snapshot-based replication between Nutanix clusters. RPO typically 1 hour, configurable down to 15 minutes. Bandwidth-efficient (delta-based). Works over WAN, no strict latency requirements. Default for general-purpose DR. Compare to NearSync (sub-15-min RPO, low-latency) and Metro (zero RPO, metro-area only).
See also:NearSync · Metro Availability · RPO
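The RPO-driven choice among Async, NearSync, and Metro can be sketched as a simple selection rule. This is an illustration using the thresholds quoted in this glossary, not any Nutanix API; the function name is invented:

```python
def pick_replication_mode(rpo_seconds: int, rtt_ms: float) -> str:
    """Map an RPO requirement (and inter-site latency) to a replication
    mode, using the rough thresholds quoted in this glossary."""
    if rpo_seconds == 0:
        # Zero RPO needs synchronous replication: Metro Availability,
        # viable only at metro distance (<5 ms RTT).
        return "Metro Availability" if rtt_ms < 5 else "no zero-RPO option"
    if rpo_seconds < 15 * 60:
        # Sub-15-minute RPO -> NearSync (LWS-based); verify latency budget.
        return "NearSync"
    # 15 minutes and up -> snapshot-based Async; WAN-tolerant.
    return "Async Replication"
```

Useful in discovery conversations: start from the customer's stated RPO, then work backward to the mode and its network prerequisites.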
B
Bond
A Linux/OVS bonding interface aggregating multiple physical NICs into a single logical link. Provides redundancy and (some modes) load balancing. Modes: active-backup (default), balance-slb (no switch config required), balance-tcp / LACP (requires switch coordination). Maps to vSphere NIC teaming policies.
See also:Open vSwitch · LACP
BoM (Bill of Materials)
The complete list of items in a customer proposal. A complete BoM has six sections: hardware, software, services, support, one-time items, recurring items. Hardware-and-license-only quotes are not BoMs; they cause customer pain six months in. BlueAlly discipline: name every item.
See also:TCO
Bridge (br0, br0.local)
OVS bridges in AHV. br0 is the data bridge connected to physical NICs (via a bond), carrying user VM traffic. br0.local is the management bridge for CVM-hypervisor communication. The two-bridge separation is intentional and operationally meaningful.
See also:Open vSwitch
Bucket
The unit of organization in Nutanix Objects (and S3 generally). A bucket holds objects and has access policies, versioning, lifecycle policies, and optionally WORM compliance settings. Backup software targets buckets; cloud-native applications read and write to buckets.
See also:Nutanix Objects · WORM · S3
C
Capex / Opex
Capex (capital expenditure) is hardware, depreciated over 3-5 years, balance-sheet item. Opex (operating expenditure) is subscriptions and services, expensed in period, income-statement impact. Customer accounting preferences vary; ask early. Nutanix subscriptions typically opex; hardware typically capex.
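The P&L difference can be made concrete with two tiny schedules. Dollar figures below are illustrative only, not pricing:

```python
def straight_line_depreciation(capex: float, years: int) -> list[float]:
    """Annual expense recognized for a capex purchase (straight-line)."""
    return [capex / years] * years

def opex_schedule(annual_subscription: float, years: int) -> list[float]:
    """Annual expense for a subscription: fully expensed each period."""
    return [annual_subscription] * years

# Illustrative: $500k of hardware over 5 years vs a $100k/yr subscription
# hit the income statement identically, but the capex sits on the balance
# sheet while the subscription never does.
hw_expense = straight_line_depreciation(500_000, 5)
sub_expense = opex_schedule(100_000, 5)
```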
Cassandra
Distributed metadata store at the heart of DSF. A fork of Apache Cassandra optimized for the access patterns DSF needs. Tracks extent locations and cluster-wide data-placement state; Pithos (vDisk configuration) is built on top of it. Runs in a ring across all CVMs. Stargate consults Cassandra for "where does extent X live?"
Categories
Key-value tags assigned to VMs (and other entities) in Prism Central. Drive policy enforcement: backup policies, DR plans, microsegmentation rules, quotas, reporting. More powerful than vSphere tags because they are first-class policy keys. The integration point for Flow, Protection Policies, and Self-Service automation.
See also:Projects · Protection Policy · Flow Network Security
Cluster Virtual IP (VIP)
The IP address that Prism Element listens on. Floats between CVMs for HA. Customers and admins access Prism Element at https://<VIP>:9440. Distinct from the Data Services IP (used for iSCSI target presentation by Volumes).
See also:Prism Element · Data Services IP
Compression (DSF)
Inline compression applied during OpLog drain to Extent Store using LZ4 (fast, modest ratios, latency-friendly). Cold-data and post-process compression uses LZ4HC (higher compression at higher CPU cost). Inline compression is selective: applies to sequential streams and large I/Os (>64K) to avoid impacting random write performance. Real-world ratios for mixed enterprise workloads: 1.5-2.5x. Configured per Storage Container. Quote ranges, not marketing peaks.
See also:DSF · OpLog · Storage Container
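The selectivity rule reads naturally as a predicate. A sketch of the rule as stated above (treating either condition as qualifying); the function name is invented, not a Nutanix interface:

```python
def inline_compressible(io_size_bytes: int, sequential: bool) -> bool:
    """Sketch of inline (LZ4) compression selectivity: sequential streams
    and large I/Os (>64 KiB) qualify; small random writes are skipped to
    protect random-write latency."""
    return sequential or io_size_bytes > 64 * 1024
```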
Content Cache
The in-memory read cache in CVM RAM. Stargate caches recently-accessed extents here. Cache hits return immediately. The first stop in the read path before checking the local Extent Store or remote nodes.
See also:Stargate · Extent Store · OpLog
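The cache-then-local-then-remote ordering can be sketched as a lookup chain. Names and structures here are invented for illustration; they are not Stargate internals:

```python
def read_extent(extent_id: str, content_cache: dict, local_store: dict,
                fetch_remote) -> bytes:
    """Illustrative read path: Content Cache first, then the local
    Extent Store, then a remote replica; reads found lower in the
    chain are promoted into the cache."""
    if extent_id in content_cache:        # 1. in-memory cache hit
        return content_cache[extent_id]
    if extent_id in local_store:          # 2. local Extent Store
        data = local_store[extent_id]
        content_cache[extent_id] = data   # promote into cache
        return data
    data = fetch_remote(extent_id)        # 3. remote node
    content_cache[extent_id] = data
    return data
```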
Crash-consistent snapshot
The default Nutanix snapshot type. Captures vDisk state at a moment without application-level quiesce. Equivalent to pulling the power cord and rebooting: file systems may need recovery, in-memory data is lost, transactions may roll back. Sufficient for most workloads.
See also:Application-consistent snapshot
Curator
The background scrubbing and rebalancing service in DSF. Runs on every CVM (one master, others followers). Periodic scans walk metadata to identify operations: re-replication after failures, EC conversion, ILM/tiering, compression/dedup post-processing, capacity reclamation. Does not appear in the synchronous data path.
Cutover
The moment when a workload transitions from old platform to new during migration. For Move: source VM shut down, final delta synced, target VM started. Typical cutover downtime per VM: 5-30 minutes. Scheduled in maintenance windows. The brief moment after months of preparation.
See also:Move · Parallel-running · Pilot wave
CVM (Controller VM)
The Nutanix-managed VM that runs on every node, hosting the storage services (Stargate, Cassandra, Curator, Pithos, Zeus) plus management services. Receives I/O from co-located user VMs, replicates across the cluster, manages metadata. The "tax" of HCI: typically 8-16 vCPU and 32-64 GB RAM per node consumed by the CVM. The price of a distributed storage layer running on the same hardware as your VMs.
D
Data Lens
Nutanix's cloud-based (and, with v2.0 GA in 2026, fully on-premises including air-gapped) data governance and ransomware-detection service for unified storage. Evolved from on-cluster Files Analytics into a broader product covering Files, Objects, and (increasingly) other unified-storage targets. Carries a 65,000+ ransomware signature library plus behavior-based anomaly detection. Detect-and-block flow watches for encrypt-at-write, mass-rename, and suspicious-extension patterns.
See also:Files Analytics · Anti-ransomware · Nutanix Files
Data Services IP
The cluster-level IP used for external iSCSI initiators connecting to Volumes. Distinct from the Cluster VIP (which serves Prism Element). External hosts connect to the Data Services IP for iSCSI target discovery.
See also:Nutanix Volumes · iSCSI
Deduplication
DSF capability to detect duplicate data blocks and store one copy with references. Two flavors: cache dedup (in-CVM-RAM, always on for compatible workloads) and on-disk dedup (per-Storage-Container, requires more CVM metadata). Most useful for VDI and similar high-data-similarity workloads. Wrong tool for low-uniqueness workloads.
See also:Compression · DSF · Storage Container
Dependency mapping
Phase 0 migration deliverable identifying which VMs talk to which other VMs, which applications depend on which services, where the firewall and load-balancer dependencies live. Active discovery tools, application owner workshops, network flow analysis. Always takes longer than budgeted.
See also:Pilot wave · Risk register
Decommissioning
The process of removing old infrastructure after migration. Physical removal, contract termination, asset disposal, license reconciliation. Often the customer's responsibility but worth naming in the BoM. Real money in licenses you stop paying and rack space you reclaim.
See also:Hybrid steady-state · BoM
DSF (Distributed Storage Fabric)
The Nutanix software-defined storage layer. Runs across all CVMs in a cluster. Pools local disks of every node, replicates writes (RF2/RF3), caches reads, compresses/dedupes/erasure-codes, tiers data, self-heals. Built on Stargate (data path), Cassandra (metadata), Curator (background work), with Pithos and Zeus completing the service set.
E
EC-X (Erasure Coding)
DSF's erasure coding implementation. Configurable per Storage Container. Common configs: 4+1 (1-failure tolerance, 25% overhead vs RF2's 100%, requires 5+ nodes); 4+2 (2-failure tolerance, 50% overhead vs RF3's 200%, requires 7+ nodes). Trade-off: capacity savings vs write amplification on small random writes. Best for cold/archive workloads; bad for OLTP.
See also:RF · Storage Container · Curator
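The overhead figures above are straightforward arithmetic, worth having at hand in sizing conversations:

```python
def parity_overhead(data_blocks: int, parity_blocks: int) -> float:
    """Capacity overhead of an EC strip, as a fraction of usable data."""
    return parity_blocks / data_blocks

def replica_overhead(rf: int) -> float:
    """Capacity overhead of RF: extra copies per usable copy."""
    return rf - 1

# Figures match the entry above:
assert parity_overhead(4, 1) == 0.25  # EC 4+1: 25% vs RF2's 100%
assert parity_overhead(4, 2) == 0.5   # EC 4+2: 50% vs RF3's 200%
assert replica_overhead(2) == 1       # RF2: 100% overhead
```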
ESXi-on-Nutanix
Running VMware ESXi as the hypervisor on Nutanix hardware, instead of AHV. The CVM still runs DSF; vCenter still manages ESXi; Prism manages the Nutanix-native features. The migration-friendly path: keep ESXi mental model, get DSF benefits, decide on AHV later. Common starting point for VMware-mature customers.
Extent
The 1 MB unit of logically contiguous data in DSF. Each extent has a Cassandra metadata record indicating where it physically lives. vDisks are logical entities backed by extents. Multiple extents can share an Extent Group (the physical allocation unit).
See also:Extent Group · vDisk · Cassandra
Extent Group
The physical allocation unit on disk in DSF holding extents. 4 MB on non-deduplicated containers; 1 MB on deduplicated containers. The unit Stargate writes to the Extent Store. Compression and erasure coding operate at this level.
See also:Extent · Extent Store
Extent Store
The persistent backing storage on each node where DSF writes go after draining from OpLog. The "cold" tier in the data path (though on all-flash nodes everything is fast). Reads check Content Cache first, then local Extent Store, then remote.
See also:OpLog · Content Cache · Stargate
F
File Server
The logical SMB/NFS service in Nutanix Files. Implemented as a cluster of FSVMs (typically 3+). Customers can have multiple File Servers per cluster (one for Production, one for Engineering, etc.). Integrates with AD via Kerberos and ACLs.
See also:Nutanix Files · FSVM
Files Analytics
The on-cluster analytics service that ships inside Nutanix Files. Provides file aging, top users by capacity and I/O, file type breakdown, anomaly detection (anti-ransomware foundation), permission auditing. The 3-minute customer demo that often unlocks tiering decisions. The broader Nutanix product that evolved from Files Analytics is Data Lens (cloud-based or, with v2.0 GA in 2026, fully on-prem); the two coexist but Data Lens has the wider scope.
See also:Data Lens · Nutanix Files · Anti-ransomware
Flow Network Security
Nutanix's distributed firewall and microsegmentation product. Category-driven policy (not IP-based). Stateful rules. Distributed enforcement at OVS flow-rule level on each AHV host. Licensed via NCI Ultimate or via the Security Add-On for NCI Pro (per usable TiB; bundles Flow microsegmentation with Data-at-Rest Encryption). Not an NCM tier feature. Functional comparison to NSX-T's distributed firewall.
See also:Microsegmentation · Categories · NSX-T · Flow Virtual Networking · NCI Ultimate
Flow Virtual Networking (FVN)
Nutanix's overlay networking product. VPC-style virtual networks with internal routing, NAT, BGP integration, service insertion. Younger than NSX-T; less mature for advanced routing patterns; sufficient for most multi-tenant use cases. Increasingly capable in 2024-2026.
See also:Flow Network Security · VPC · Service Insertion
Foundation
Nutanix's bare-metal cluster bootstrapping tool. Takes a set of new nodes and turns them into a working Nutanix cluster: imaging hypervisor, configuring CVMs, forming the cluster, validating networking. Run once per new cluster.
FSVM (File Server VM)
The dedicated VM running the file-services stack as part of a File Server cluster. Three FSVMs typically form a File Server (the logical share-serving entity). Real VMs running on the underlying Nutanix cluster, not abstractions.
See also:File Server · Nutanix Files
H
HA (High Availability)
Acropolis-driven automatic VM restart on surviving hosts when a host fails. The AHV equivalent of vSphere HA. Triggered by host loss; affected VMs restart on cluster nodes with capacity. No additional licensing; built into AOS. Typical recovery time: 30-90 seconds for VM restart after failure detection.
HCI (Hyperconverged Infrastructure)
Architecture that runs compute and software-defined storage on the same x86 nodes, eliminating the separate storage array. The category Nutanix invented commercially. Compare to three-tier (separate compute/storage/network) and CI (preconfigured but still tiered). HCI's wins: simplification, scaling unit, operational consolidation. HCI's gaps: storage-imbalanced workloads, compute-imbalanced workloads.
HCIR (Hyperconverged Infrastructure Ready)
Commodity hardware certified to run Nutanix software. Hardware sourcing option for customers who buy servers separately and license Nutanix software (software-only deployment). Most flexible; multi-vendor support boundaries.
See also:NX appliance · OEM partner
Hybrid steady-state
A successful end-state where 70-90% of workloads run on Nutanix and specific workloads remain on VMware for legitimate reasons (NSX-T routing complexity, NetApp ONTAP-specific workflows, vendor-certified-only-on-ESXi applications, regulatory constraints). Not a project failure; often the architecturally correct outcome.
See also:Cutover · Production wave
I
ILM (Information Lifecycle Management)
DSF's automated data movement: promoting hot data to fast tiers, demoting cold to capacity tiers, migrating for locality after VM moves, rebalancing on cluster expansion. Driven by Curator scans. Continuous background work; not in the synchronous I/O path.
Intelligent Operations
The NCM Pro feature set covering capacity analytics, anomaly detection, what-if planning, runway analysis, advanced reporting. Aria Operations (vROps) functional equivalent. Available at NCM Pro tier and above.
See also:NCM · Prism Central
IPAM (IP Address Management)
Optional per-virtual-network DHCP service in AHV. When enabled, the cluster acts as DHCP server for VMs on that network. Useful for self-contained tenant networks (test/dev, VDI floating pools). Typically disabled for production networks served by corporate DHCP.
See also:Virtual Network
iSCSI
The block-storage protocol used by Nutanix Volumes to expose LUNs to external initiators (physical hosts, non-Nutanix VMs, bare-metal databases). Standard initiator software on consumers; Nutanix cluster acts as iSCSI target. Multi-pathing via multiple portal IPs.
See also:Nutanix Volumes · Volume Group
K
KVM (Kernel-based Virtual Machine)
The Linux kernel hypervisor underlying AHV. Open-source, widely used, well-understood. AHV is KVM with the Acropolis service stack on top providing the management plane. Customers comfortable with KVM transfer their mental model directly to AHV.
L
LACP (Link Aggregation Control Protocol)
The 802.3ad standard for link aggregation. Used in some AHV bond modes (active-active LACP, balance-tcp). Requires switch-side configuration. Provides the highest throughput and load-balancing options when properly configured.
See also:Bond · Open vSwitch
LCM (Life Cycle Manager)
Prism's coordinated upgrade tool for AOS, AHV, BIOS, BMC, NIC firmware, drive firmware. One-click rolling upgrades that respect cluster availability. The integrated equivalent of vSphere LCM but with broader scope (firmware coordination across the stack).
See also:Foundation · NCC
Live Migration
The AHV equivalent of VMware vMotion. Move a running VM between AHV hosts in a cluster with no perceptible downtime. Triggered manually or by ADS rebalancing. Memory pre-copy plus brief switchover.
LWS (Light-Weight Snapshots)
The technical mechanism underlying NearSync replication. Frequent (sub-minute) micro-snapshots at the source cluster that flow continuously to the destination. Achieves 20-second-to-15-minute RPO. Higher cluster overhead than Async snapshots.
See also:NearSync · Async Replication
M
Metro Availability
Synchronous replication between two Nutanix clusters at metro-area distance. Zero RPO. Requires <5 ms RTT between sites. Witness VM at a third site for split-brain protection. Active-standby (typical) or active-active. Highest cost mode; only viable within metro distance. For long-distance DR, layer Async or NearSync to a remote third site.
See also:Witness VM · Async Replication · NearSync · RPO
Microsegmentation
Network security pattern that enforces firewall rules between individual VMs (or VM groups) rather than just at network perimeters. Implemented by Flow Network Security via category-driven policies and OVS flow-rule enforcement. Blocks lateral movement (compromised Web tier reaching DB tier directly).
See also:Flow Network Security · Categories
Move
Nutanix's cross-platform VM migration tool. Supports ESXi-to-AHV, Hyper-V-to-AHV, AWS-to-NC2, Azure-to-NC2, AHV-to-AHV. Initial replication + incremental sync + brief planned cutover (5-30 min per VM). Not vMotion (no zero-downtime). Each migration scheduled in maintenance windows.
See also:Cutover · Pilot wave · Production wave
Multi-tenancy
The capability to run multiple isolated tenants on shared infrastructure. In Prism Central: Projects with quotas and RBAC. In Flow Virtual Networking: separate VPCs per tenant. In Objects: separate object stores per tenant. Together, these give each tenant isolation approaching that of dedicated infrastructure.
See also:Projects · Flow Virtual Networking
N
NC2 (Nutanix Cloud Clusters)
Nutanix software running on AWS or Azure bare-metal infrastructure. From the platform's perspective, an NC2 cluster looks like any other Nutanix cluster. Enables cloud DR without a second physical datacenter, cloud-burst capacity, hybrid-cloud parity. Cloud bare-metal pricing varies; egress fees apply.
See also:Async Replication · NearSync
NCC (Nutanix Cluster Check)
The cluster's built-in health-check tool. Runs automated checks across hardware, software, configuration, performance. Run on demand (ncc health_checks run_all) or on schedule. The first place to look when something is wrong.
See also:LCM · Foundation
NCI (Nutanix Cloud Infrastructure)
The current platform-licensing SKU. Replaces the legacy AOS Pro / AOS Ultimate naming. Tiers: NCI Pro (foundational; AHV, DSF, Prism Element, Async replication, baseline features) and NCI Ultimate (adds NearSync, Metro Availability, Flow Network Security, advanced data services). A separate NCI-Compute (NCI-C) SKU exists for compute-only clusters that don't need DSF storage features. Per-core subscription. AHV is included at every tier. NCI requires AOS 6.1.1 (LTS 6.5+) / Prism Central 2022.4+ / NCC 4.5.0+.
See also:NCI Pro · NCI Ultimate · NCM · NCP
NCI Pro
Foundational NCI tier. Includes AHV, DSF, Prism Element, baseline replication (Async), basic snapshots, network features, baseline security. The right tier for most customers. Per-core subscription. Successor to the legacy "AOS Pro" SKU.
See also:NCI · NCI Ultimate
NCI Ultimate
Higher NCI tier adding NearSync replication, Metro Availability, Flow Network Security (also available as Security Add-On to NCI Pro), advanced storage features, additional security capabilities. For workloads needing advanced replication, microsegmentation, or compliance features. Successor to the legacy "AOS Ultimate" SKU.
See also:NCI · NCI Pro · NearSync · Metro Availability · Flow Network Security
NCM (Nutanix Cloud Manager)
The umbrella name for advanced multi-cluster management features that sit on top of NCI in Prism Central. NCM is licensed separately from NCI: basic Prism Central is included with NCI, but every NCM tier, including Starter, is a paid add-on. Tier structure: NCM Starter (basic Intelligent Operations, low-code automation), NCM Pro (deeper Intelligent Operations: anomaly detection, what-if planning, runway analysis), NCM Ultimate (Self-Service / X-Play / cost governance). Tier contents shift between releases.
See also:NCI · NCP · Prism Central · Intelligent Operations · Self-Service
NCP (Nutanix Cloud Platform)
Bundle SKUs that combine NCI and NCM tiers at matching levels: NCP Starter (NCI Pro + NCM Pro), NCP Pro (NCI Ultimate + NCM Pro), NCP Ultimate (NCI Ultimate + NCM Ultimate). For customers who want everything in one SKU rather than buying NCI and NCM separately.
NearSync
Nutanix's near-synchronous replication mode. Uses Light-Weight Snapshots (LWS) to achieve 20-second-to-15-minute RPO. Requires lower-latency networking than Async (typically <5 ms RTT) and adds cluster overhead at the source. For Tier-1 production with sub-15-minute RPO requirements.
See also:LWS · Async Replication · Metro Availability
NetApp
Major storage incumbent. Customers commonly run NetApp filers (ONTAP), often alongside their VMware compute. Nutanix Files competes with ONTAP for general SMB/NFS workloads; ONTAP retains advantages in specific advanced workflows (FlexClone, FlexCache, advanced quotas, mature snapshot policies). Coexistence is often the right answer for established NetApp shops.
See also:Nutanix Files
NFS (Network File System)
Linux-native file-sharing protocol. Supported by Nutanix Files (v3 and v4). Used by Linux clients and applications, NFS-aware backup tools, and some hypervisor-level integrations.
See also:Nutanix Files · SMB
NGT (Nutanix Guest Tools)
Guest-OS agent installed in VMs running on Nutanix. Enables application-consistent snapshots (VSS coordination on Windows, application quiesce on Linux), self-service file restore, and VM mobility between hypervisors. The Nutanix equivalent of VMware Tools.
See also:Application-consistent snapshot · VSS
NSX-T
VMware's overlay networking and security product. Comparison anchor for Flow Network Security (microsegmentation) and Flow Virtual Networking (overlays). NSX-T is more mature for advanced routing patterns, edge services, L2VPN, and third-party ecosystem. Coexistence pattern (NSX-T-on-Nutanix-on-ESXi) is common for established NSX-T customers.
See also:Flow Network Security · Flow Virtual Networking
Nutanix Files
Scale-out SMB/NFS file storage running on Nutanix. Implemented as a cluster of FSVMs on top of DSF. Replaces dedicated filers (NetApp, Isilon, Pure FlashBlade) for typical enterprise workloads. Includes Files Analytics, anti-ransomware, Self-Service Restore.
See also:FSVM · File Server · Files Analytics · Self-Service Restore
Nutanix Objects
S3-compatible object storage running on Nutanix. Implemented as Object Service VMs on top of DSF. Replaces dedicated object storage appliances (Cloudian, Scality, MinIO, NetApp StorageGRID). Common backup-target consolidation: replaces Data Domain or similar for Veeam/Commvault/Rubrik repositories.
Nutanix Volumes
iSCSI block storage service. Exposes vDisks (grouped into Volume Groups) as LUNs over iSCSI to external initiators: physical Linux/Windows servers, bare-metal databases (Oracle RAC), legacy iSCSI consumers. Not for AHV VMs (they consume vDisks directly).
See also:Volume Group · iSCSI · Data Services IP
NX appliance
Nutanix-branded hardware. Manufactured by Super Micro under the Nutanix brand. Single-vendor support: Nutanix is the throat to choke for both hardware and software. Tightly validated. The choice for customers wanting one vendor relationship.
See also:OEM partner · HCIR
O
Object Store
A multi-tenant container in Nutanix Objects. Holds buckets, has IAM-style users and access policies. Multiple Object Stores enable hard tenant isolation in service-provider scenarios.
See also:Nutanix Objects · Bucket
OEM partner
Hardware vendors that sell Nutanix-validated hardware: Dell EMC XC, Lenovo HX, HPE DX, Cisco UCS. Customers preserve their existing server-vendor relationship while running Nutanix software. Joint Nutanix + OEM support model. Common path for established server-vendor preferences.
See also:NX appliance · HCIR
Open vSwitch (OVS)
The open-source kernel-level software switch in AHV. Same OVS used in OpenStack and many production Linux platforms. Provides the bridge, bonding, and VLAN functionality. Auditable, well-understood, not proprietary.
See also:Bridge · Bond · Flow Network Security
OpLog
The persistent write buffer in DSF. On each node's hot tier (NVMe/SSD). Receives writes from Stargate, replicates to peer node OpLogs (RF2 or RF3), acknowledges to VM after durable. Drains asynchronously to Extent Store. The source of DSF's low-latency write characteristic.
See also:Stargate · Extent Store · RF
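The ack-after-replication sequence can be sketched in a few lines. A toy model of the sequence described above, with invented names; it is not Stargate's actual interface:

```python
def write_with_oplog(data: bytes, local_oplog: list, peer_oplogs: list,
                     rf: int = 2) -> str:
    """Illustrative OpLog write path: persist locally, replicate to RF-1
    peer OpLogs, and acknowledge to the VM only once all RF copies are
    durable."""
    local_oplog.append(data)                 # durable on local hot tier
    peers_needed = rf - 1
    if len(peer_oplogs) < peers_needed:
        raise RuntimeError("not enough peers for requested RF")
    for peer in peer_oplogs[:peers_needed]:  # replicate to peer OpLogs
        peer.append(data)
    return "ack"                             # the VM sees the ack only now
```

The drain to the Extent Store happens later and asynchronously, which is why it is absent from the acknowledged path.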
P
Parallel-running
Period during migration when source and target platforms both operate. Common for risk-managed cutovers (run new platform alongside old, validate, switch). Costs both platforms' operational expenses simultaneously. Plan 1-3 months for typical migrations; longer for complex environments.
See also:Cutover · Hybrid steady-state
Per-core licensing
Subscription model where customers pay per CPU core in their cluster (not per-VM, not per-CPU socket). Used by NCI, NCM, and post-Broadcom VMware. Multi-year terms with discount tiers. True-up provisions for mid-term core additions.
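The model reduces to simple arithmetic; the per-core price and discount below are hypothetical inputs for illustration, not list pricing:

```python
def subscription_cost(cores: int, price_per_core_year: float,
                      years: int, discount: float = 0.0) -> float:
    """Total cost of a per-core subscription term. Price and discount
    are illustrative inputs; real quotes vary by term and volume."""
    return cores * price_per_core_year * years * (1 - discount)

# Hypothetical: 256 cores at $300/core/yr, 3-year term, 10% term discount.
total = subscription_cost(256, 300.0, 3, discount=0.10)
```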
Pilot wave
The first migration wave: 5-20 low-risk VMs (typically internal IT-team-owned, low criticality, representative of common patterns). Validates platform, tooling, processes. Establishes runbook for subsequent waves. Skipping pilot is the common project-failure pattern.
See also:Production wave · Move
Pithos
The DSF service that owns vDisk configuration metadata. Tracks which vDisks exist, their attributes, and their associations with VMs and Storage Containers. A lighter-weight service than Cassandra, built on top of it.
Prism
The umbrella name for the Nutanix management plane. Two products: Prism Element (per-cluster) and Prism Central (multi-cluster). Replaces vCenter, vSphere LCM, Aria Operations, and Aria Automation in functional scope, integrated into one management surface.
See also:Prism Element · Prism Central · NCM
Prism Central (PC)
The multi-cluster Nutanix management product. Deployed as a VM (single or scale-out three-VM). Aggregates multiple Prism Element clusters. Hosts Categories, Projects, Self-Service, X-Play, Intelligent Operations. The platform-wide management surface. Required for advanced features beyond per-cluster operations.
See also:Prism Element · Categories · Projects · NCM
Prism Element (PE)
The per-cluster Nutanix management UI. Runs in-cluster (on the CVMs), no separate appliance to deploy. Accessible at the Cluster VIP on port 9440. Handles VM lifecycle, host management, storage configuration, networking, monitoring for that cluster.
See also:Prism Central · Cluster Virtual IP
Production wave
A migration wave moving production workloads. Three typical waves: Wave 1 (low-tier, Tier-2/Tier-3, 50-200 VMs), Wave 2 (Tier-1, 200-600 VMs), Wave 3 (mission-critical, Tier-0, smallest count, highest scrutiny). Risk-sequenced to build confidence wave by wave.
See also:Pilot wave · Cutover
Projects
First-class multi-tenancy construct in Prism Central. Group of VMs, networks, images, and quotas with assigned users and roles. Richer than vCenter's resource pools. Used for environment separation (Dev/Test/Prod), business-unit isolation, service-provider tenancy.
See also:Categories · Multi-tenancy
Protection Domain (PD)
Legacy DSF construct for protecting groups of VMs/vDisks together. Configured in Prism Element. Manual VM membership. Single replication and snapshot schedule per PD. Continues to work; superseded by Protection Policies for new deployments.
See also:Protection Policy
Protection Policy
Modern protection construct in Prism Central. Category-driven membership (auto-enrollment). Multiple schedules. Tied to Recovery Plans for orchestrated failover. The recommended protection construct for new deployments.
See also:Protection Domain · Recovery Plan · Categories
R
Recovery Plan (Nutanix Disaster Recovery / formerly Leap)
The runbook construct inside Nutanix Disaster Recovery (the current product name; formerly branded Leap). Lives in Prism Central. Defines what VMs are protected (via category match), startup order (groups), network mapping, IP remapping, pre/post checks, and test failover capability. The SRM functional equivalent on Nutanix. "Leap" still appears in older docs and customer vocabulary; the product was renamed but the runbook construct kept the "Recovery Plan" name.
See also:SRM · Protection Policy · Async Replication
RF (Replication Factor)
The per-Storage-Container setting determining how many copies of every write DSF stores. RF2 (two copies, 50% effective capacity, single-failure tolerance), RF3 (three copies, 33% effective capacity, two-failure tolerance). Most workloads run RF2 with backup; mission-critical or compliance-driven workloads may run RF3.
See also:Storage Container · EC-X
Risk register
Living document tracking identified migration risks, mitigations, ownership, and status. Reviewed weekly with project lead, monthly with executive sponsor. New risks added as discovered. Categories: technical, operational, commercial, relationship.
See also:Pilot wave · Dependency mapping
RPO (Recovery Point Objective)
The maximum acceptable data loss, measured in time. An RPO of 1 hour means up to 1 hour of data may be lost in a disaster. Drives the replication-mode choice: zero RPO (Metro Availability), sub-15-minute (NearSync), 1 hour and up (Async).
See also:RTO · Async Replication · NearSync · Metro Availability
RTO (Recovery Time Objective)
The maximum acceptable downtime during recovery. An RTO of 30 minutes means service must be restored within 30 minutes of the disaster. RTO drives orchestration design: Recovery Plan automation, startup order, validation checks. Works alongside RPO; together the two define DR requirements.
See also:RPO · Recovery Plan
S
S3
The de facto standard object storage API, originated by AWS. Nutanix Objects is S3-compatible: standard S3 endpoints, signatures, and requests. Tools that work with AWS S3 work with Objects after endpoint-only changes (Veeam, Commvault, custom applications, AWS SDKs).
See also:Nutanix Objects · Bucket
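The "endpoint-only change" can be illustrated with boto3-style client arguments. A minimal sketch: the endpoint URL below is hypothetical, and the helper only shows where the one change lives.

```python
def s3_client_kwargs(endpoint_url: str = "") -> dict:
    # For AWS S3, no endpoint override is needed; for Nutanix Objects,
    # point the same S3 client at the Objects service endpoint. All
    # other S3 calls (put_object, list_buckets, ...) stay unchanged.
    kwargs = {"service_name": "s3"}
    if endpoint_url:
        kwargs["endpoint_url"] = endpoint_url
    return kwargs

aws_args = s3_client_kwargs()
objects_args = s3_client_kwargs("https://objects.example.internal")  # hypothetical endpoint
```

Passing `objects_args` to `boto3.client(**objects_args)` would target Objects instead of AWS; everything downstream of the client is identical.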
SAML
Security Assertion Markup Language, the standard for identity federation. Prism Central supports SAML 2.0 for SSO with major identity providers (Azure AD/Entra ID, Okta, Ping, etc.). Configured once in Prism Central; applies across the platform.
See also:Prism Central
Self-Service (formerly Calm)
The application blueprint and provisioning automation in NCM Ultimate. Define application stacks as blueprints with parameters; users self-provision through a service catalog. Aria Automation (vRA) functional equivalent. Brand renamed from "Calm" to "Self-Service."
See also:NCM · X-Play
Self-Service Restore (SSR)
The end-user-facing recovery feature in Nutanix Files. Snapshots exposed via the Windows "Previous Versions" tab. Users restore deleted files or earlier versions without IT involvement. Eliminates routine help-desk tickets for file recovery.
See also:Nutanix Files · Application-consistent snapshot
Service Insertion
The Flow Virtual Networking capability to insert third-party network services (firewalls, load balancers, IDS/IPS) into traffic flow. Common pattern: Palo Alto VM-Series, Check Point, Fortinet inserted for deep packet inspection while Flow Network Security handles VM-tier microsegmentation.
See also:Flow Virtual Networking · Flow Network Security
SMB (Server Message Block)
The Windows-native file-sharing protocol. Nutanix Files supports SMB 2.x and 3.x with full Active Directory integration (Kerberos, ACLs, ABE, DFS-N).
See also:Nutanix Files · NFS
SRM (Site Recovery Manager)
VMware's DR orchestration product and the comparison anchor for Recovery Plans. SRM has 15+ years of maturity and deep VMware integration; Recovery Plans are younger but tightly integrated and capable for typical use. A coexistence pattern (SRM on ESXi-on-Nutanix) is common for established SRM deployments.
See also:Recovery Plan · ESXi-on-Nutanix
Stargate
The DSF data-path service. Runs on every CVM. Receives I/O from the local hypervisor, writes to the local OpLog, replicates to peer OpLogs (RF2/RF3), and acknowledges the write back to the VM. Serves reads from the Content Cache, Extent Store, or remote nodes. The "controller" of the distributed storage layer.
See also:Cassandra · Curator · OpLog · Content Cache
Storage Container
The logical "datastore equivalent" carved from the Storage Pool, and the policy boundary in DSF: RF, compression, deduplication, erasure coding, advertised capacity, and reservations are all configured per container. A cluster typically has multiple containers with different policies.
See also:Storage Pool · vDisk · RF
Storage Pool
The aggregate of all physical disks in a cluster. Each cluster has one Storage Pool by default. Storage Containers are carved from the pool. Almost never manipulated directly.
See also:Storage Container · DSF
T
TCO (Total Cost of Ownership)
The complete cost of an infrastructure decision over a time horizon (typically 5 years). Categories: hardware, software, services, support, training, migration, operations, decommissioning. A credible TCO includes all categories, a year-by-year breakdown, stated assumptions, and a sensitivity analysis. CFOs evaluate models against this standard.
See also:BoM · Capex / Opex
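The year-by-year structure can be sketched as a small model. The figures below are invented for illustration; the shape (every category, every year) is what a credible TCO needs.

```python
# Illustrative 5-year TCO skeleton -- the numbers are placeholders,
# not a quote. Each category holds per-year costs.
costs = {
    "hardware":   [400_000, 0, 0, 0, 0],   # upfront purchase (illustrative)
    "software":   [120_000] * 5,           # per-core subscription
    "services":   [80_000, 0, 0, 0, 0],    # migration / deployment
    "support":    [40_000] * 5,
    "operations": [150_000] * 5,           # staff time, power, space
}

# Year-by-year breakdown and the 5-year total.
yearly = [sum(cat[y] for cat in costs.values()) for y in range(5)]
total = sum(yearly)  # year 1: 790,000; years 2-5: 310,000 each
```

A sensitivity analysis would rerun this with the assumptions varied (e.g. subscription price at renewal, staff cost growth).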
True-up
Contractual provision in multi-year subscriptions allowing customers to add cores during the term at agreed pricing. Protects customers from being locked at year-1 sizing for the entire term. Always negotiate true-up clauses for multi-year deals.
See also:Per-core licensing
V
v4 API
The unified Nutanix REST API. JSON-based. URL pattern https://<pc-or-pe-ip>:9440/api/<namespace>/<version>/<path> (e.g., /api/vmm/v4.0/ahv/config/vms). Single authentication, consistent schema across compute, storage, networking, automation. GA shipped with Prism Central 2024.3 / AOS 7.0. Officially supported tooling: a PowerShell module, Python SDKs (ntnx-<namespace>-py-client per namespace), a Terraform provider (nutanix/nutanix), and an Ansible collection (nutanix.ansible). A community-maintained Pulumi provider exists but is not officially supported by Nutanix. Earlier API versions (v0.8, v1, v2, v3) enter deprecation starting Q4 2026.
See also:Prism Central · X-Play
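The documented URL pattern can be composed mechanically. A minimal sketch: the host name below is a placeholder, and the helper simply follows the pattern quoted above.

```python
def v4_url(host: str, namespace: str, version: str, path: str) -> str:
    # Follows the documented v4 pattern:
    # https://<pc-or-pe-ip>:9440/api/<namespace>/<version>/<path>
    return f"https://{host}:9440/api/{namespace}/{version}/{path.lstrip('/')}"

url = v4_url("pc.example.internal", "vmm", "v4.0", "ahv/config/vms")
# -> https://pc.example.internal:9440/api/vmm/v4.0/ahv/config/vms
```

A GET against that URL with valid credentials (via requests, or more typically the official Python SDK) would list AHV VMs.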
vDisk
The virtual disk attached to a VM in Nutanix. A logical entity backed by extents (1 MB units); Cassandra tracks every extent's location. From the VM's perspective, the vDisk looks like a SCSI or virtio block device.
See also:Extent · Extent Group · Cassandra
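Since extents are 1 MB units, the extent count behind a vDisk is simple ceiling arithmetic. A sketch; with thin provisioning, not every extent is necessarily materialized on disk.

```python
def extent_count(vdisk_bytes: int) -> int:
    extent_bytes = 1024 * 1024              # extents are 1 MB units
    return -(-vdisk_bytes // extent_bytes)  # ceiling division

n = extent_count(100 * 1024**3)  # a 100 GiB vDisk -> 102,400 extents
```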
VCF (VMware Cloud Foundation)
VMware's bundled subscription including vSphere, vSAN, NSX-T, and Aria. Post-Broadcom licensing tier that customers must often buy to get vSphere. Per-core subscription. Comparison anchor for the Broadcom math.
See also:VVF · Per-core licensing · NSX-T
Virtual Network
The Nutanix equivalent of a vSphere port group. Carries a VLAN tag (or none), optional IPAM, connection to a bridge (typically br0). VMs attach to virtual networks. Configured in Prism Element or Prism Central.
See also:VLAN · Open vSwitch
VLAN
Standard 802.1Q virtual LAN tagging. Configured per Nutanix Virtual Network. AHV's OVS handles the tagging on br0. Inter-VLAN traffic exits via the bond uplink to the physical network for routing.
See also:Virtual Network · Open vSwitch
Volume Group
The management unit in Nutanix Volumes. Logical grouping of one or more LUNs presented together over iSCSI. Useful for application-aware grouping (e.g., all the disks for a SQL Server cluster instance).
See also:Nutanix Volumes · iSCSI
VPC (Virtual Private Cloud)
Isolated multi-subnet network environment with its own IP space and policies. Provided by Flow Virtual Networking. Concept similar to AWS VPC. Multiple VPCs per cluster enable hard tenant isolation; routing rules enable controlled inter-VPC communication.
See also:Flow Virtual Networking · Multi-tenancy
VSS (Volume Shadow Copy Service)
Microsoft Windows feature for application-consistent snapshots. Coordinates application quiesce (databases, Exchange, AD) so the snapshot captures a consistent state. NGT provides the integration; Nutanix snapshots leverage VSS when application-consistent mode is requested.
See also:Application-consistent snapshot · NGT
VVF (VMware vSphere Foundation)
VMware subscription tier including vSphere and vSAN. The level below VCF (which adds NSX-T and Aria). Post-Broadcom packaging change. Per-core subscription.
See also:VCF · Per-core licensing
W
Witness VM
Small VM at a third failure domain that provides quorum during partition events for Metro Availability and two-node ROBO cluster deployments. Minimum spec: 2 vCPU, 6 GB RAM, 25 GB disk. Network latency between cluster and Witness should be ≤ 500 ms. Witness VMs cannot run on AWS or Azure cloud platforms. When the protected sites lose connectivity, the Witness arbitrates which site continues operation.
See also:Metro Availability
WORM (Write Once Read Many)
Immutability feature in Nutanix Objects. Once written, objects cannot be modified or deleted until the retention period expires. Used for compliance archives: financial records, healthcare data, legal archives. Regulatory-grade protection.
See also:Nutanix Objects · Bucket
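WORM semantics reduce to a single check: deletion and modification are refused until the retention clock expires. A minimal sketch of that rule; the 7-year period is illustrative, not a product default.

```python
from datetime import datetime, timedelta, timezone

def worm_delete_allowed(retain_until: datetime, now: datetime) -> bool:
    # Under WORM, an object may be deleted or modified only after
    # its retention period has expired.
    return now >= retain_until

written = datetime(2025, 1, 1, tzinfo=timezone.utc)
retain_until = written + timedelta(days=7 * 365)  # illustrative 7-year hold
allowed = worm_delete_allowed(retain_until, datetime(2026, 1, 1, tzinfo=timezone.utc))  # False
```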
X
X-Play
Event-driven automation engine in Prism Central (NCM Ultimate tier typically). Playbooks have triggers (alerts, events, schedules) and actions (notifications, API calls, snapshots, external scripts). Common patterns: alert-to-Slack, threshold-to-snapshot, event-to-ServiceNow ticket.
See also:Self-Service · v4 API · NCM
References
The glossary entries are derived from and verified against the per-module References sections. The most cross-cutting authorities used here:
- Nutanix Bible. The single most useful Nutanix-architecture reference; cited in nearly every glossary entry.
- Nutanix Cloud Platform Software Options. Authoritative source for the NCI / NCM / NCP licensing structure that replaced AOS Pro / Ultimate.
- Nutanix Portal Tech Notes (TN series). TN-2027 (data protection), TN-2032 (data efficiency), TN-2041 (Files), TN-2106 (Objects), TN-2094 (Flow) are referenced from many entries.
- Nutanix Developer Portal. v4 API reference, SDKs, automation tooling.
- Nutanix Cloud Infrastructure (NCI) Datasheet. NCI Pro / Ultimate tier comparison; backs the AOS-to-NCI rename throughout.
- Nutanix Cloud Manager (NCM) Datasheet. NCM tier structure and what is NOT bundled with NCI.
- BP-2083 ROBO Deployment · Witness Requirements. Authoritative Witness VM specs (2 vCPU / 6 GB / 25 GB / ≤500 ms / not on AWS or Azure).
Cross-References
- Modules: Each entry's "Module N" reference points back to where the term is taught in depth.
- See also: Internal links walk between related entries within this glossary.
- Comparison Matrix: Appendix B has feature-by-feature comparisons (Nutanix vs VMware, NetApp, AWS S3, NSX-T, SRM, VxRail).
- Objections: Appendix D has response scripts for common customer pushbacks.
- CLI Reference: Appendix G has command syntax for the operational terms (ncli, acli, manage_ovs, curator_cli).