Eight architecture profile cards arranged in a 4-by-2 grid: Small, Medium, Large, VDI on the top row; Hybrid+NC2, ROBO, Greenfield, Compliance on the bottom row. Each card shows node-count silhouette, VM range, and license tier. A three-principle strip at the bottom reads: starting points, run Sizer for binding numbers, name the trade-offs.

Appendix I · Reference Architectures

REFERENCE · 8 architectures · profile + sizing + stack · explicit trade-offs · starting points

Eight reference architectures covering the most common BlueAlly deployment patterns. Each is a starting point; customize per customer. Always run Sizer for the binding numbers in the proposal.

Three principles:

  1. Reference architectures are starting points, not deliverables. Customize per customer.
  2. Always run Sizer for the binding numbers in the proposal.
  3. Name the trade-offs explicitly. Every architecture sacrifices something.

The format for each architecture: customer profile, sizing with cluster shape and hardware recommendations, network topology, software stack (NCI / NCM / Files / Objects / Volumes / Flow), DR approach, design rationale, trade-offs accepted, variations for common customer requests, and cross-references.

1 · Small Reference Architecture

Customer profile

Mid-market, ~150-300 employees in IT-relevant roles, single primary datacenter, secondary site for DR (often a colo or smaller office).

  • VM count: 100-250
  • Storage: 30-100 TB usable
  • Workloads: general-purpose VMs, file shares, perhaps a small VDI deployment
  • Compliance: standard (no PCI Level 1, no FedRAMP)
  • Team: 3-6 person infrastructure team
  • Annual run-rate today: $150-400K across hardware, software, support

Sizing

Tier | Cluster shape | Hardware | Capacity / network
Production | 6-8 nodes | Dell XC, HPE DX, or NX appliances · all-NVMe · 768 GB-1 TB RAM · 32-48 cores per node | 60-120 TB raw → 30-60 TB usable (RF2 + compression) · 25 GbE · 4 NICs per node minimum
DR | 4 nodes | Same node spec for operational consistency | ~70-80% of primary capacity
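
To show the arithmetic behind the capacity column, here is a minimal Python sketch. The CVM/metadata overhead figure is an assumption, compression is treated as upside rather than committed capacity, and Sizer remains the binding tool:

    def usable_tb(raw_tb, rf=2, cvm_overhead=0.08):
        """Conservative usable estimate: raw divided by the replication factor,
        less an assumed overhead for CVMs and metadata. Compression/dedup are
        treated as upside, not committed capacity."""
        return raw_tb / rf * (1 - cvm_overhead)

    def survives_node_loss(raw_tb, nodes, used_tb, rf=2):
        """N+1 check: does the data set still fit after one node is lost,
        so a rebuild can complete?"""
        remaining_raw = raw_tb * (nodes - 1) / nodes
        return used_tb <= usable_tb(remaining_raw, rf)

    print(round(usable_tb(120), 1))                        # ~55.2 TB from 120 TB raw
    print(survives_node_loss(120, nodes=8, used_tb=45))    # True: 45 TB still fits on 7 nodes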

Network topology

Software stack
  • Hypervisor: AHV (default) or ESXi-on-Nutanix (if customer is mid-VMware-migration)
  • NCI: Pro tier (formerly AOS Pro). Add the Security Add-On for NCI Pro if Flow Network Security microsegmentation is in scope (licensed per usable TiB; bundles Data-at-Rest Encryption), or upgrade to NCI Ultimate, where Flow is bundled.
  • NCM: Pro tier (Intelligent Operations: capacity analytics, anomaly detection, runway analysis)
  • Files: sized for existing file workloads (typically 20-50 TB)
  • Objects: sized for backup target consolidation (typically 30-80 TB)
  • Volumes: as needed for any iSCSI consumers
  • Flow Network Security: licensed via the Security Add-On for NCI Pro (or NCI Ultimate); baseline microsegmentation policies
DR approach
  • Async replication to DR site (1-hour RPO typical)
  • Recovery Plans for orchestrated failover
  • Quarterly test failover into isolated network at DR site
  • Backup target replication via Veeam (or equivalent) to Objects at primary, replicated to DR
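
A back-of-envelope way to size the replication WAN for the 1-hour-RPO async approach above; the change rate, burst factor, and link efficiency below are placeholder assumptions, so substitute measured values:

    def replication_mbps(daily_change_gb, rpo_hours, burst_factor=2.0, efficiency=0.7):
        """The change generated inside one RPO window must transfer within that
        window, with headroom for bursty change rates and protocol overhead."""
        change_per_window_gb = daily_change_gb * rpo_hours / 24
        avg_mbps = change_per_window_gb * 8 * 1000 / (rpo_hours * 3600)  # GB -> megabits
        return avg_mbps * burst_factor / efficiency

    # Example: 500 GB/day of changed data, 1-hour RPO
    print(round(replication_mbps(500, rpo_hours=1)))   # ~132 Mbps of replication bandwidth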
Design rationale

Calibrated for the BlueAlly mid-market sweet spot. Captures the consolidation story across compute, storage tiers, DR, and management without overprovisioning for enterprise-scale features the customer doesn't need.

An 8-node primary provides N+1 redundancy with margin during rebuilds; it lets a 150-VM workload run with healthy headroom and supports growth to ~200 VMs comfortably.

NCM Pro tier is the practical sweet spot for management features: it includes Intelligent Operations (capacity analytics, anomaly detection, runway analysis), which most mid-market customers want. NCM Ultimate is overkill unless the customer specifically needs Self-Service or X-Play depth. Flow Network Security is separately licensed: it ships with NCI Ultimate or as a Security Add-On to NCI Pro (per usable TiB; bundles Data-at-Rest Encryption). Don't conflate the NCM management tiers with the NCI-side Flow licensing; that confusion bites in customer pricing conversations.
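
A quick way to frame that licensing conversation is to total both paths for the cluster in question. The sketch below uses placeholder prices only; pull real figures from the current price list before quoting:

    def flow_licensing_paths(usable_tib, cores, addon_per_tib, pro_per_core, ultimate_per_core):
        """Compare NCI Pro + Security Add-On (Flow priced per usable TiB)
        against NCI Ultimate (per core, Flow bundled)."""
        pro_plus_addon = cores * pro_per_core + usable_tib * addon_per_tib
        ultimate = cores * ultimate_per_core
        return pro_plus_addon, ultimate

    # Hypothetical annual list prices for an 8-node, 40-core/node, 50 TiB cluster
    pro_path, ult_path = flow_licensing_paths(
        usable_tib=50, cores=8 * 40,
        addon_per_tib=100, pro_per_core=200, ultimate_per_core=320)
    print(pro_path, ult_path)   # 69000 vs 102400 with these placeholder prices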

Trade-offs accepted
  • No NearSync or Metro (Async-only DR; sub-15-min RPO not in scope)
  • No NCM Ultimate (no Self-Service blueprints, no X-Play automation depth)
  • Single-cluster production (no multi-cluster blast-radius management)
  • 25 GbE rather than 100 GbE (sufficient for the workload; 100 GbE is overprovisioning)
Variations
  • Heavily Linux/Containers: add Kubernetes-on-Nutanix consideration; cluster shape unchanged
  • Tighter RPO: upgrade to NCI Ultimate (formerly AOS Ultimate) for NearSync on Tier-1 workloads. NCI Ultimate also bundles Flow Network Security, which removes the Security Add-On per-TiB cost.
  • Starting smaller (50-100 VMs): scale down to 4-5 node primary

SEE ALSO: C · Scenario 1 (Mid-Market Consolidation) · F · Cluster Sizing Fundamentals · M9 · Pricing Construction

2 · Medium Reference Architecture

Customer profile

Larger mid-market or smaller enterprise, ~500-1,000 employees in IT-relevant roles, primary datacenter plus dedicated DR site, possibly some ROBO sites.

  • VM count: 400-800
  • Storage: 100-300 TB usable
  • Workloads: general-purpose VMs, file shares, VDI (~500-1,000 sessions), databases, possibly small object storage needs
  • Compliance: industry-standard (HIPAA, PCI DSS Level 2, SOC 2)
  • Team: 6-12 person infrastructure team
  • Annual run-rate today: $400K-1M

Sizing

Tier | Cluster shape | Hardware | Capacity / network
Production | 12-16 nodes | OEM partner (Dell XC, HPE DX, Lenovo HX, or Cisco UCS) · all-NVMe · 1-1.5 TB RAM · 48-64 cores per node | 200-400 TB raw → 100-200 TB usable · 25 GbE production · 100 GbE optional spine · 4-6 NICs/node
DR | 8-10 nodes | Same hardware family | ~75% of primary capacity
Files/Objects (optional) | 4-6 nodes | Dedicated cluster for file/object workloads | Tuned independently from general VM workload

Network topology

Software stack
  • Hypervisor: AHV with possible ESXi-on-Nutanix subset
  • NCI: Pro tier for general workloads (NCI Ultimate for tier-1 workloads needing NearSync; NCI Ultimate also bundles Flow Network Security at no incremental per-TiB cost)
  • NCM: Pro tier (Intelligent Operations); Ultimate if Self-Service / X-Play depth is needed
  • Files: dedicated File Server for user shares + application file storage; 50-150 TB
  • Objects: dedicated Object Store for backup targets and on-prem analytics; 100-200 TB
  • Volumes: as needed
  • Flow Network Security: category-based microsegmentation; PCI scope boundary enforcement
  • Flow Virtual Networking: if multi-tenancy is in scope
DR approach
  • Tier-1 (~50-100 VMs): NearSync replication, 15-minute RPO, Recovery Plans orchestration
  • Tier-2 (~200-300 VMs): Async hourly, 1-hour RPO
  • Tier-3 (~150-300 VMs): Async daily or redeploy from gold image
  • Quarterly test failover; SRM coexistence if customer has deep SRM investment
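
A minimal sketch of the tiered scheme above as a planning artifact; the tier names and RPO values mirror the list, but nothing here maps to actual Prism protection-policy objects:

    from dataclasses import dataclass

    @dataclass
    class ProtectionTier:
        name: str
        approx_vms: str
        replication: str
        rpo_minutes: int
        notes: str

    TIERS = [
        ProtectionTier("tier-1", "50-100",  "NearSync", 15,   "Recovery Plans orchestration"),
        ProtectionTier("tier-2", "200-300", "Async",    60,   "hourly snapshots replicated to DR"),
        ProtectionTier("tier-3", "150-300", "Async",    1440, "daily, or redeploy from gold image"),
    ]

    for t in TIERS:
        print(f"{t.name}: {t.replication}, RPO {t.rpo_minutes} min ({t.approx_vms} VMs) - {t.notes}")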
Design rationale

Sized for the consolidation story across compute, file/object/block storage, DR with tiered SLAs, and microsegmentation. The split between general-purpose cluster and dedicated Files/Objects cluster (when warranted) lets each cluster tune for its workload pattern; this is one of Nutanix's strengths over single-cluster-with-everything designs.

NCM Pro is the typical choice; upgrade to Ultimate when the customer needs Self-Service blueprints or X-Play event-driven automation for ServiceNow/SIEM integration.

Tiered DR matches infrastructure investment to workload criticality; not every workload needs NearSync.

Trade-offs accepted
  • Multi-cluster operational complexity (more clusters means more upgrade coordination, more management surface)
  • Some workloads stay on ESXi (an ESXi-on-Nutanix subset for NSX-T-deep or SRM-deep workloads)
  • Not pursuing extreme-scale features (Metro Availability typically not in scope unless metro-area DR is required)
Variations
  • Multi-site Metro Availability needed: add witness VM at third site, tune for synchronous replication, network <5ms RTT
  • Heavy NSX-T retention: plan permanent ESXi-on-Nutanix subset; map workloads to AHV vs ESXi-on-Nutanix
  • Deep SRM customization: keep SRM for SRM-orchestrated workloads, Recovery Plans for new

SEE ALSO: C · Scenario 1 + 2 · M7 · Data Protection

3 · Large Reference Architecture

Customer profile

Enterprise customer, 2,000+ employees in IT-relevant roles, multiple datacenters (often 2+ production, 1+ DR, possibly cloud-extended), significant footprint of mixed workloads.

  • VM count: 1,500-5,000+
  • Storage: 500 TB-2 PB usable
  • Workloads: full enterprise mix; tier-0 mission-critical, multi-tenant business units, VDI at scale, large databases, significant file and object storage
  • Compliance: heavy (SOX, PCI DSS Level 1, HIPAA, FedRAMP, industry-specific)
  • Team: 15-30+ person infrastructure team
  • Annual run-rate today: $1M-5M+

Sizing

Multiple production clusters, workload-aligned. Single-cluster-with-everything fails at this scale due to blast radius, upgrade coordination, and workload-specific tuning needs.

Cluster role | Shape | Notes
General-purpose | 16-32 nodes each (multiple clusters) | Workload-aligned blast-radius management
Database | 8-12 nodes | All-NVMe, sized for database performance specifically
VDI | 12-24 nodes | If VDI is significant; tuned for boot-storm patterns
Files | 6-12 nodes | Dedicated for file workloads
Objects | 6-12 nodes | Dedicated for object storage
DR | Mirrored topology at DR site | Full failover capacity for tier-1 / tier-2; tier-3 redeploys
NC2 cloud DR (optional) | 8-12 NC2 nodes | Cloud-extended DR; compliance-acceptable workloads

Cluster splitting rationale: blast-radius management. Losing a single 32-node cluster takes out far more workload than losing one cluster in a four-by-8-node design. Multiple smaller clusters also enable independent upgrade cadences.
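
The blast-radius point reduces to simple arithmetic; a hedged sketch, assuming VMs spread evenly across workload-aligned clusters:

    def blast_radius_vms(total_vms, clusters):
        """VMs exposed to a single-cluster event when the estate is split
        evenly across `clusters` clusters."""
        return total_vms / clusters

    # 2,000 VMs: one big cluster vs four workload-aligned clusters
    print(blast_radius_vms(2000, 1))   # 2000.0 VMs at risk from one cluster event
    print(blast_radius_vms(2000, 4))   # 500.0 VMs at risk, plus independent upgrade windows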

Network topology

Software stack
  • Hypervisor: mixed AHV + ESXi-on-Nutanix; AHV for new workloads; ESXi for NSX-T-deep / SRM-deep
  • NCI: Pro for general; Ultimate for tier-1 (NearSync, Metro Availability, bundled Flow Network Security). Many large enterprises standardize on NCI Ultimate cluster-wide for the simpler licensing posture and the Flow bundling.
  • NCM: Ultimate (Self-Service blueprints, X-Play, Cost Governance)
  • Prism Central: scale-out (3 VMs) for HA and >10K VM management
  • Files: multiple File Servers for tenant or workload separation
  • Objects: multiple Object Stores for tenant or use-case separation; WORM-enabled buckets for regulatory archives
  • Volumes: as needed for bare-metal databases, legacy iSCSI consumers
  • Flow Network Security: comprehensive category-based microsegmentation; PCI / SOX scope boundary enforcement
  • Flow Virtual Networking: multi-tenant VPC overlays; service insertion for advanced security
DR approach
  • Tier-0 mission-critical: Metro Availability between primary datacenters where applicable; NearSync to remote DR
  • Tier-1 production: NearSync replication, 15-min RPO
  • Tier-2 production: Async hourly, 1-hour RPO
  • Tier-3 / non-production: Async daily or redeploy
  • Quarterly test failover at minimum, with audit attestation
  • SRM coexistence indefinite; Recovery Plans for new
  • Compliance-driven WORM archives in Objects with multi-year retention
Design rationale

Multi-cluster architecture is the differentiator for large enterprise. Single-cluster-with-everything fails at this scale due to blast radius, upgrade coordination, and workload-specific tuning needs. Workload-aligned clusters allow each to be tuned for its pattern (database, VDI, file/object) while the central Prism Central provides the unified management view.

NCM Ultimate enables the platform-level value (Self-Service for tenant onboarding, X-Play for event-driven automation, Cost Governance for chargeback) that large enterprise environments require.

Mixed-hypervisor architecture is the honest answer at this scale. Pure AHV migration is rare for enterprise customers with deep NSX-T / SRM / specific-application investments. Hybrid steady-state is the typical successful end state, possibly indefinite.

Trade-offs accepted
  • Multi-cluster operational complexity is unavoidable; addressed via Prism Central centralization and disciplined upgrade coordination
  • Multi-vendor (Nutanix + remaining VMware) is the steady state; full consolidation rarely achieved
  • Highest licensing cost tier (Ultimate everywhere); justified by enterprise feature needs
  • Significant network investment (spine-leaf, 100 GbE)
Variations
  • NC2 cloud DR in scope: add cloud cluster as third site; align replication topology
  • Financial-services compliance: add HSM integration, more rigorous audit logging, more frequent test failover
  • Multi-region (international): plan cross-region replication carefully; bandwidth and latency become critical design constraints

SEE ALSO: C · Scenario 2 + 4 · M9 · Licensing Tier Selection

4 · VDI Reference Architecture

Customer profile

VDI-centric deployment for healthcare, financial services, education, or contact centers. VDI is the primary workload (not a side use case).

  • Sessions: 1,000-3,000 typical; can extend to 5,000+
  • VDI broker: Citrix CVAD or VMware Horizon
  • Profile: persistent or non-persistent depending on use case
  • Compliance: typically HIPAA (healthcare), PCI (financial), or FERPA (education)
  • Team: dedicated VDI team plus infrastructure support

Sizing

Tier | Shape / per-user sizing | Hardware / capacity notes
VDI primary | 12-24 nodes (by session count) | All-NVMe required (boot-storm I/O); high RAM density (1-1.5 TB/node) for VM density; 25 GbE minimum / 100 GbE for >2K sessions
Capacity (persistent) | 30-80 GB per user before dedup | On-disk dedup typically 2-3x for similar-OS profiles
Capacity (non-persistent) | Smaller per-user footprint | Higher density per node
VDI DR | Smaller than primary | Broker tier first; profiles second; rebuild from gold image
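
A minimal sketch of the persistent-VDI capacity math, assuming the per-user footprint and 2-3x dedup ratio from the table (both should be validated against the customer's actual images):

    def vdi_usable_tb(users, gb_per_user=50, dedup_ratio=2.5, growth_headroom=1.2):
        """Persistent-profile capacity: per-user footprint reduced by an assumed
        dedup ratio for similar-OS images, with growth headroom on top."""
        return users * gb_per_user / dedup_ratio * growth_headroom / 1000

    print(round(vdi_usable_tb(2000), 1))   # ~48.0 TB usable for 2,000 persistent users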

Network topology

Software stack
  • Hypervisor: AHV (Citrix CVAD has good AHV support)
  • NCI: Pro tier (formerly AOS Pro). Add Security Add-On for Flow if VDI-tier microsegmentation is in scope, or upgrade to NCI Ultimate for the bundled Flow + DARE.
  • NCM: Pro tier (Intelligent Operations)
  • Files: dedicated File Server for persistent profile storage if not using broker-managed profile management
  • Volumes: as needed for VDI infrastructure components
  • Flow Network Security: category-based microsegmentation isolating VDI tier from back-end services (licensed via NCI Ultimate or Security Add-On for NCI Pro)
DR approach
  • Broker tier: Async replication, fast recovery (RTO 30-60 min)
  • Persistent profiles: Async replication daily; restore from backup if lost
  • VDI sessions are not preserved across DR; users re-authenticate on DR site
Design rationale

VDI is one of Nutanix's strongest sweet spots. Distributed I/O handles boot storms gracefully across all nodes rather than funneling them into a centralized array bottleneck, on-disk dedup delivers significant capacity savings on similar OS images, and AHV ships with the platform at no separate hypervisor licensing cost. Together, these advantages win VDI deals consistently.

All-NVMe is non-negotiable. Spinning-disk VDI causes boot-storm pain that no amount of caching fully solves.

Trade-offs accepted
  • Higher per-node hardware cost (all-NVMe, high RAM density) than general-purpose clusters
  • Dedicated VDI cluster vs general-purpose blend (purposeful, for tuning)
  • VDI DR is "infrastructure DR" not "session DR"; users restart sessions
Variations
  • Non-persistent / pooled deployment: lower capacity needs; higher density per node
  • Extreme scale (5,000+ sessions): multi-cluster VDI; partition by department or region
  • GPU-enabled (engineering / design): add GPU-equipped nodes; AHV supports GPU passthrough

SEE ALSO: C · Scenario 3 (VDI) · F · VDI Sizing · M5 · DSF Performance

5 · Hybrid Nutanix + NC2 Reference Architecture

Customer profile

Customer with on-prem Nutanix and cloud-extended capabilities via NC2. Use cases: cloud DR (no second physical datacenter), cloud burst capacity, hybrid-cloud workload mobility.

  • On-prem footprint: any size
  • Cloud workload: typically 20-30% of on-prem at steady state; can scale up dramatically during failover or burst
  • Cloud platform: AWS or Azure
  • Compliance: must allow for data in chosen cloud region

Sizing

Tier | Sizing | Notes
On-prem cluster | Per general reference (Small / Medium / Large) | Sized for on-prem workload
NC2 cluster | 4-8 nodes steady state · scales to full failover at DR time | Hibernation strategy if cloud bare-metal pricing supports it
Network | AWS Direct Connect or Azure ExpressRoute | 1-10 Gbps depending on replication / workload volume; IPsec VPN as backup
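
One way to frame the steady-state-versus-failover cost discussion is in node-hours; a hedged sketch using a pilot-light model (hibernation behavior and pricing vary by cloud and contract, so treat the numbers as illustrative):

    def nc2_annual_node_hours(steady_nodes, failover_nodes, test_days_per_year, pilot_light=True):
        """Node-hours per year: a small pilot-light cluster runs continuously and
        the full failover shape runs only during DR tests (and real failovers)."""
        if not pilot_light:
            return failover_nodes * 365 * 24
        steady = steady_nodes * 365 * 24
        tests = (failover_nodes - steady_nodes) * test_days_per_year * 24
        return steady + tests

    # 4-node pilot light, 12-node failover shape, four 3-day tests per year
    print(nc2_annual_node_hours(4, 12, test_days_per_year=12))                       # 37344
    print(nc2_annual_node_hours(4, 12, test_days_per_year=12, pilot_light=False))    # 105120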
Software stack (same on both sides)
  • NCI: Pro tier on both (Ultimate if NearSync to NC2 is in scope or if Flow microsegmentation is needed cluster-wide without the Security Add-On)
  • NCM: Pro or Ultimate; Prism Central manages both clusters as one fleet
  • Files / Objects: can run in either location depending on workload locality
DR approach
  • Async replication (1-hour RPO) for tier-1 and tier-2 from on-prem to NC2
  • NearSync for tier-1 if bandwidth supports
  • Recovery Plans orchestration for failover
  • Quarterly test failover into isolated network at NC2
  • Replicate backup target (Objects) for off-cloud archive copies
Design rationale

NC2 makes cloud DR practical for customers without a second datacenter. The platform parity (same Nutanix on both sides) means runbooks transfer; failover doesn't require translation between platforms; the operational model is consistent.

Hibernation strategies (where supported) reduce steady-state cloud cost; the cluster scales up only when needed for test or actual failover.

Trade-offs accepted
  • Cloud bare-metal cost is higher than on-prem at steady state; the value is in elimination of second-datacenter capex and the on-demand failover capability
  • Cloud egress fees apply depending on replication direction; traffic leaving the cloud (failback, reverse replication, restores) incurs egress charges
  • Compliance constraints around data residency must be satisfied; cloud region selection critical
Variations
  • Burst capacity (not DR): size NC2 cluster for typical burst rather than full failover
  • Multi-cloud (AWS + Azure): complexity increases; NC2 in both with separate replication topologies
  • True cloud-native hybrid (Kubernetes cross-cloud): broader cloud strategy beyond NC2

SEE ALSO: C · Scenario 6 (Cloud DR) · M7 · NC2 · B · NC2 vs VMware Cloud

6 · ROBO Fleet Reference Architecture

Customer profile

Customer with a primary datacenter plus many distributed sites (retail stores, branch offices, manufacturing plants, oil-and-gas remote sites).

  • Sites: 50-1,000+
  • Per-site footprint: small (1-3 servers per site)
  • Per-site workloads: POS, inventory, surveillance, basic application services
  • WAN: variable per site; some on cable internet, some on dedicated MPLS or SD-WAN
  • Critical data: replicated to central; non-critical local-only

Sizing

Tier | Cluster shape | Notes
Per-site cluster | 1-node or 2-node Nutanix (small footprint) | 5-20 VMs per site; OEM hardware; ruggedized for harsh environments
Central cluster (datacenter) | Per Small or Medium reference | Central management hub, central backup target, central replication destination
Network (per site) | 50-200 Mbps WAN | Sized for change rate; SD-WAN for multi-site routing
Network (central) | Aggregated WAN sized for sum of sites | Concentration point
Software stack
  • Hypervisor: AHV everywhere
  • NCI: Pro on edge sites (formerly AOS Pro); NCI Pro or Ultimate on central based on workload and whether Flow Network Security or NearSync are needed centrally
  • Prism Central at central datacenter: manages all sites as one fleet
  • Categories: Site: Store-001 through Site: Store-NNN for fleet-wide policy
  • Async replication: critical data from each site to central
  • Objects (central): backup target for fleet, with appropriate retention
  • Flow Network Security: consistent policy across fleet via category-based rules
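
The category naming above lends itself to simple tooling; a small sketch that generates the Site values for fleet-wide policy (prefix, count, and padding are examples to be matched to the customer's real site naming):

    def site_category_values(prefix="Store", count=250, pad=3):
        """Generate Site category values of the form 'Site: Store-001' ...
        for use in fleet-wide, category-based policies."""
        return [f"Site: {prefix}-{i:0{pad}d}" for i in range(1, count + 1)]

    values = site_category_values(count=250)
    print(values[0], "...", values[-1])   # Site: Store-001 ... Site: Store-250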
DR approach
  • Per-site failure: critical data is at central; redeploy from gold image on replacement hardware; restore site-specific state from central
  • Central failure: business continuity for HQ; remote sites continue operating with cached data; degraded mode for cross-site workflows
  • No site-to-site replication (a full mesh grows with the square of the site count and would be unmanageable)
Design rationale

The central Prism Central pattern is the differentiator. Managing 200+ sites individually is operationally impossible; managing them as a fleet through one Prism Central is workable. Categories-based policy enforcement applies the same security and operational rules across the fleet automatically.

Local-only workloads run on local cluster (POS, inventory cache); critical data replicates to central (transaction logs, customer data); the split optimizes for WAN bandwidth.

Per-site small footprint (1-2 nodes) is cost-efficient; full 3-node clusters at every site are cost-prohibitive at fleet scale.

Trade-offs accepted
  • Single-node sites have no local redundancy (per-site failure means restore from central)
  • 2-node sites have limited redundancy
  • Central WAN aggregation is a single point of concentration
  • LCM upgrade coordination across hundreds of sites is a real operational discipline
Variations
  • Higher per-site availability: scale to 3-node per-site clusters (higher cost)
  • Extreme distributed (1,000+ sites): consider sub-aggregation by region; multiple central hubs

SEE ALSO: C · Scenario 5 (ROBO) · F · Cluster Sizing

7 · Greenfield Reference Architecture

Customer profile

Newly funded company, no existing infrastructure, hybrid-cloud-native development model.

  • Employees: 50-500 in IT-relevant roles, scaling rapidly
  • VM workload: SaaS application backend, internal tools, dev/test environments
  • Cloud presence: significant AWS or Azure footprint
  • Steady-state target: hybrid (on-prem for predictable, cloud for variable)
  • Team: small, often 2-5 infrastructure engineers
  • Cost-sensitivity: high

Sizing

Tier | Cluster shape | Notes
Production | 4-node cluster initially | HCIR commodity hardware for cost flexibility, or OEM partner; all-NVMe; balanced compute/capacity; scales by adding nodes
Dev/test | 2-3 nodes | Smaller node spec acceptable
Network | 25 GbE production | VPN or cloud direct connect to AWS/Azure VPC for hybrid integration; modest WAN
Software stack
  • Hypervisor: AHV
  • NCI: Pro tier (formerly AOS Pro). Add the Security Add-On if Flow Network Security microsegmentation is in scope, or move to NCI Ultimate as scale justifies.
  • NCM: Pro (Ultimate optional based on growth)
  • Files: sized for application file storage and development shares (smaller than enterprise scale)
  • Objects: for steady-state object workloads on-prem complementing cloud S3
  • Volumes: as needed
  • Flow Network Security: baseline microsegmentation (licensed via NCI Ultimate or Security Add-On for NCI Pro)
DR approach
  • Initial: rely on AWS for cloud-resident data; on-prem critical data replicated to cloud Objects via Async
  • As scale grows: consider NC2 cloud DR for full Nutanix-platform DR
  • Avoid building out a second physical datacenter; cloud is the elasticity and the DR
Design rationale

Greenfield deployments should optimize for cost flexibility, growth, and avoiding premature commitments. HCIR or OEM hardware (rather than NX) preserves sourcing flexibility. Initial 4-node cluster is sized for current needs plus 6-month growth; scale by adding nodes as needed.
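
A simple growth projection helps decide when the next node lands; the VM-per-node density and growth rate below are assumptions to replace with the customer's own figures:

    import math

    def nodes_needed(current_vms, vms_per_node, monthly_growth, months, min_nodes=4):
        """Project cluster size at a future month, keeping one node of headroom (N+1)."""
        projected_vms = current_vms * (1 + monthly_growth) ** months
        return max(min_nodes, math.ceil(projected_vms / vms_per_node) + 1)

    # 60 VMs today, ~25 VMs per node, 8% monthly growth
    for m in (0, 6, 12, 18):
        print(m, "months:", nodes_needed(60, 25, 0.08, m), "nodes")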

3-year subscription term (rather than 5-year) preserves flexibility given growth uncertainty. Subscription growth provisions ensure mid-term core additions are pre-priced.

Hybrid-cloud-native is the architecture, not on-prem replacing cloud. Steady-state predictable workloads on-prem; variable / elastic workloads in cloud. The split optimizes both cost and elasticity.

Trade-offs accepted
  • Smaller initial cluster means less headroom for spikes (acceptable for greenfield)
  • Cloud dependency for elasticity (intentional)
  • Less mature operational tooling than enterprise environments (small team)
Variations
  • Purely on-prem strategy: scale on-prem cluster sooner; no cloud burst plan
  • Primarily cloud strategy: smaller on-prem footprint; on-prem only for steady-state

SEE ALSO: C · Scenario 9 (Greenfield) · F · Cluster Sizing · M9 · Subscription Terms

8 · Compliance-Heavy Reference Architecture

Customer profile

Financial services, healthcare provider, or government contractor. Compliance is the central design driver.

  • Compliance: PCI DSS Level 1, HIPAA, FedRAMP, SOX, FFIEC, or similar
  • Workload sensitivity: regulated data alongside general-purpose
  • Audit cadence: frequent (quarterly external, monthly internal)
  • Identity rigor: SSO, MFA, separation of duties, JIT elevation
  • Encryption: at-rest with HSM-backed keys, in-transit
  • Audit retention: 7-year typical for financial; 6-year HIPAA; varies by jurisdiction

Sizing

Tier | Sizing | Notes
Production clusters | Per Large or Medium reference | May split into compliance-scoped vs general for blast-radius and audit-scope management
Encryption | HSM (Vault, customer's HSM, or cloud HSM) | KMIP integration with Nutanix cluster encryption
Audit | SIEM (Splunk, Elastic, Sentinel) | Receives Prism audit logs; 7-year log retention at SIEM tier
WORM storage | Dedicated Objects buckets, WORM-enabled | Retention policies aligned with regulatory requirements
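
The SIEM retention line has a storage cost worth estimating up front; a hedged sketch using placeholder log volume, compression, and growth figures:

    def siem_retention_tb(daily_log_gb, years=7, compression=8, annual_growth=0.10):
        """Total SIEM storage for the retention window: daily audit/log volume,
        grown year over year, compressed at the SIEM tier."""
        total_gb = sum(daily_log_gb * (1 + annual_growth) ** y * 365 for y in range(years))
        return total_gb / compression / 1000

    print(round(siem_retention_tb(20), 1))   # ~8.7 TB for 20 GB/day over 7 years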

Network topology

Software stack
  • Hypervisor: AHV
  • NCI: Ultimate (formerly AOS Ultimate). Bundles NearSync, Metro, Flow Network Security, Data-at-Rest Encryption capabilities; the right tier for compliance environments.
  • NCM: Ultimate (Self-Service with approval workflows, X-Play for compliance-driven automation)
  • Files: with anti-ransomware enabled; SMB encryption mandatory
  • Objects: WORM-enabled buckets for archives
  • Volumes: encrypted; HSM-backed keys
  • Flow Network Security: comprehensive category-driven enforcement; default-deny posture
  • Cluster encryption: enabled with HSM key management
DR approach
  • RF3 for tier-0 financial systems (often mandated by compliance)
  • Metro Availability between primary datacenters where applicable
  • NearSync to remote DR (typically same compliance zone)
  • Recovery Plans with audit-attested test failover quarterly
  • DR runbook documentation maintained per compliance requirements

Identity and access

Design rationale

Compliance is the central design driver, not an overlay. Architecture choices (cluster splitting for audit scope, network segmentation, encryption everywhere, audit logging end-to-end) all flow from compliance requirements.

NCM Ultimate is the typical license tier; the Self-Service approval workflows and X-Play automation are operationally valuable for compliance regimes that require change-control evidence.

WORM Objects buckets are the durable answer for regulatory archives; the multi-year immutability requirement is satisfied by Objects' Object Lock equivalent.
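
Where the deployed Objects version exposes the standard S3 Object Lock API, bucket-level default retention can be set with ordinary S3 tooling. This is a hedged sketch: the endpoint, credentials, and bucket name are placeholders, and Object Lock support should be confirmed for the specific Objects release before relying on it:

    import boto3

    # S3-compatible client pointed at the Objects endpoint (placeholder values)
    s3 = boto3.client(
        "s3",
        endpoint_url="https://objects.example.internal",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Object Lock must be enabled when the bucket is created
    s3.create_bucket(Bucket="regulatory-archive", ObjectLockEnabledForBucket=True)

    # Default retention: compliance mode, 7 years, matching the archive requirement
    s3.put_object_lock_configuration(
        Bucket="regulatory-archive",
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
        },
    )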

Trade-offs accepted
  • Highest licensing cost tier
  • Operational complexity from compliance-scoped cluster splitting
  • HSM integration adds vendor relationship and operational dependency
  • Documentation overhead for compliance evidence is real
Variations
  • FedRAMP: validate Nutanix's specific FedRAMP authorization status; some features may have different availability in FedRAMP environments
  • Multi-region with sovereignty: plan replication topology carefully; cloud DR may not be available if data residency is strict

SEE ALSO: C · Scenario 4 (Compliance) · M7 · Encryption and Compliance · M8 · WORM Objects

Reminder: these are starting points. The binding numbers in any proposal come from the official Nutanix Sizer; Sizer output is the single source of truth for customer-facing sizing.

How to Use These Reference Architectures

For initial proposal sketching

  1. Identify which reference matches the customer's profile (Small / Medium / Large / VDI / Hybrid+NC2 / ROBO / Greenfield / Compliance-Heavy)
  2. Use it as the starting-point architecture
  3. Adjust per customer specifics
  4. Run Sizer for binding numbers before submitting

For architecture review with the customer

  1. Walk through the reference's design rationale ("we typically design like this for customers in your profile because...")
  2. Engage the customer on where their requirements deviate
  3. Document the deviations explicitly in the proposal

For competitive engagements

  1. Use the reference to anchor the conversation in concrete numbers
  2. Explain the trade-offs accepted; this signals you've done this before
  3. Compare to the competitor's likely architecture for the same profile

For cert prep

  1. NCP-MCI tests sizing-and-design knowledge similar to the Small / Medium architectures
  2. NCM-MCI extends to design depth that the Medium / Large / VDI architectures embody
  3. NCX-MCI panel defense often involves architectures like the Compliance-Heavy or Hybrid+NC2 patterns

Common Mistakes with Reference Architectures

  1. Treating reference architectures as the proposal. They're starting points; the actual proposal is customized.
  2. Skipping Sizer. The reference gets you within ~25%; Sizer gets you to the binding number.
  3. Not naming the trade-offs. Every architecture sacrifices something; customers respect explicit acknowledgment.
  4. Forcing a customer into the wrong reference. If the customer profile genuinely doesn't match any of the eight, design from first principles.
  5. Neglecting variations. Each reference has variations for common deviations; use them rather than re-inventing.
  6. Underestimating compliance complexity. Compliance-heavy customers genuinely need the Compliance-Heavy reference; don't try to make a Medium architecture work.

References

The reference architectures consume technical specifications and licensing structure verified in the modules and earlier appendices.

  • Module 9: NCI Pro / Ultimate (replaced AOS Pro / Ultimate), NCM Pro / Ultimate, NCP bundles, and the Security Add-On for NCI Pro that bundles Flow Network Security and Data-at-Rest Encryption (per usable TiB).
  • Module 6: Flow Network Security licensing (NCI Ultimate or Security Add-On for NCI Pro); bond modes and LACP cautions.
  • Module 7: Async / NearSync / Metro Availability characteristics; Recovery Plans (Nutanix Disaster Recovery, formerly Leap); Witness VM specs.
  • Module 5: RF / EC math used in cluster sizing; EC 4+1 needs ≥ 6 nodes.
  • Module 8: Files (FSVM count), Objects (WORM, S3 Object Lock semantics), Volumes.
  • Appendix B: HyperFlex EOL dates and the Cisco-Nutanix partnership product (relevant to ROBO and Cisco-mature accounts).
  • Appendix F: Per-tier Prism Central VM specs and the X-Small PC tier.

Cross-references

  • Modules: Each reference architecture pulls from multiple modules; the cross-references in each section point to specific modules.
  • Glossary: Appendix A defines the terms used.
  • Sizing Rules: Appendix F provides the sizing math behind these architectures.
  • Scenarios: Appendix C has the design-exercise versions of these architectures.
  • POC Playbook: Appendix J has the validation steps for these architectures.
  • Nutanix Sizer: the official tool for binding sizing.