Diagram: Three-tier microsegmentation. Web, App, and DB tier planes connected by allowed cyan traffic arcs (443 inbound, 8080 to App, 5432 to DB) and a severed red lateral arc showing Web-to-DB blocked, all sitting above an Open vSwitch substrate band.

Module 6: Networking and Microsegmentation

~38 min read
Cert coverage: NCA (~10%) · NCP-MCI (~18%) · NCP-NS (~50%, heavy) · NCM-MCI (~12%)
SA toolkit: Objections #16, #17, #20, #25 · Discovery Q-NET-01 through Q-NET-04, Q-SEC-01
Prerequisites
  • Modules 01 through 05
  • vSphere networking (vSS, vDS, port groups, VLAN trunking)
  • Working CE cluster
  • Familiarity with VLAN concepts and IP routing
Key terms
Open vSwitch (OVS) · Bridge (br0, br0.local) · Bond · Virtual Network · VLAN · IPAM · Flow Network Security · Flow Virtual Networking (FVN) · VPC · Microsegmentation · Service Insertion

The Promise

By the end of this module you will:

  1. Configure AHV networking from vSphere muscle memory. AHV uses different vocabulary (Open vSwitch, bridges, bonds, virtual networks) for largely equivalent concepts. By the end of this module the mapping is automatic.
  2. Pass roughly half of NCP-NS (Nutanix Certified Professional, Network and Security). The 7.5 version opened for public scheduling on April 4, 2026, and this module covers roughly half of its blueprint.
  3. Build a microsegmentation policy using categories and explain why category-driven policy is operationally cleaner than the IP-based ACL world.
  4. Defend Flow against an NSX-T-loyal architect. Flow Network Security is competitive with NSX-T's distributed firewall for VM microsegmentation, often simpler. Flow Virtual Networking is younger than NSX-T and behind it for advanced routing. Draw the comparison without flinching.
  5. Make the network economics case. Flow Network Security is licensed via NCI Ultimate or the Security Add-On for NCI Pro; FVN is bundled at the platform level. NSX-T is a separate VMware product with its own licensing.
  6. Diagnose AHV networking issues from the CLI. OVS bridges, bonds, virtual networks, Flow policy state.

The network is the dimension where customers diverge most. Some come from a Cisco-only world and don't trust software-defined networking. Some come from heavy NSX-T deployments and want to know what they lose. Some treat the network as commodity. Read the customer; pick the framing.


Foundation: What You Already Know

Picture a vSphere host. Physical NICs (vmnics) are aggregated into a vSwitch (vSS for standard, vDS for distributed). Port groups define VLAN tagging and policy. Each VM connects to a port group. NIC teaming is configured at the vSwitch level (active-active, active-standby, route-based-on-IP-hash, etc.). vCenter centralizes this on vDS: one configuration deployed across many hosts.

You also know the limits. vSS is per-host configuration. vDS used to require Enterprise Plus (post-Broadcom: bundled into Foundation/Cloud Foundation tiers). NIC teaming policies are well-defined but must be kept consistent across many per-host configurations.

Now switch out what's underneath. AHV networking uses Open vSwitch (OVS): an open-source, production-grade software switch that runs in the Linux kernel of every AHV host. The same OVS used in Red Hat OpenStack, OVN, and many Linux-based virtualization platforms.

vSphere Term → AHV / OVS Term
vSphere Distributed Switch (vDS) → no exact equivalent; per-host config managed centrally via Prism Central
Standard vSwitch (vSS) → Open vSwitch bridge (typically br0)
Port Group → Virtual Network (or "subnet" in Prism)
VLAN trunk on uplinks → VLAN configuration on the bond / bridge
NIC Teaming Policy (active-standby, IP hash) → Bond Mode (active-backup, balance-slb, balance-tcp, LACP)
vmkernel ports (management, vMotion, etc.) → internal interfaces on br0.local (CVM/host management)

Core Content

Open vSwitch: The Substrate

OVS is the software switch underneath every AHV host. It runs in the Linux kernel. Each AHV host has at least two bridges by default:

  • br0: the data bridge. It connects to the physical NICs (through the bond) and carries all user VM traffic.
  • br0.local: the internal management bridge, carrying the CVM/host management interfaces.

Inside br0, OVS handles VLAN tagging, MAC learning, broadcast/multicast, and (for Flow Network Security) flow rules that implement the firewall.

Bonds: NIC Teaming for AHV

A bond in AHV is a Linux bonding interface (or, more recently, an OVS bond) that aggregates multiple physical NICs into a single logical link. Bonds provide redundancy and (for some modes) load balancing.

Bond Mode | What It Does | When to Use
active-backup | One active NIC; others standby. Failover on link loss. | Default. Simplest. Works with any switch.
balance-slb | OVS balances by source MAC. No switch-side configuration required. | Better-than-active-backup throughput without LACP.
balance-tcp | Balances per-flow on TCP/IP headers. Requires LACP on the switch. | Best throughput with proper switch coordination.
LACP (active-active) | 802.3ad link aggregation. Requires LACP on the switch. | Production with proper switch configuration.

The default for new clusters is typically active-backup. Most production deployments upgrade to balance-slb (no switch-side change needed). LACP / balance-tcp is supported but Nutanix's official recommendation has historically leaned away from LACP because misconfigured upstream switches can disable cluster connectivity in ways that are hard to recover from in the field. If you choose LACP, document the switch-side configuration carefully and validate before go-live.
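
To see and change bond state from the CLI, a minimal sketch. The bond name br0-up is an assumption (confirm with show_uplinks), and the exact manage_ovs flags vary by AOS version; validate against current documentation before running update_uplinks on a production cluster.

  # From a CVM: current bridges, bonds, and member NICs
  manage_ovs show_uplinks
  # From the AHV host: live bond status (active member, failover history)
  ovs-appctl bond/show
  # Illustrative: switch br0's bond to balance-slb (bond name assumed)
  manage_ovs --bridge_name br0 --bond_name br0-up --bond_mode balance-slb update_uplinks

Make the change one host at a time and confirm connectivity before moving on; update_uplinks reprograms the uplinks and can briefly interrupt host networking.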

Virtual Networks (the Port Group Equivalent)

A Virtual Network in Nutanix vocabulary is the equivalent of a vSphere port group. It carries:

  • a name (how it appears in Prism and in acli),
  • a VLAN ID (the tag OVS applies at the vNIC's port), and
  • optional IPAM configuration (subnet, gateway, DHCP pool; next section).

When you create a VM and attach it to a Virtual Network, the VM's vNIC lands on br0 with the configured VLAN. Inter-VM traffic on the same VLAN flows at OVS speed (in-host, kernel-level). Inter-VLAN traffic exits via the bond uplink to the physical network.

Naming convention: virtual networks in Prism are commonly named vlan.<id> (e.g., vlan.100), though arbitrary names are allowed.
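
Creating a virtual network is normally a Prism operation, but acli from any CVM does the same job. A minimal sketch following the vlan.<id> convention (names here are illustrative):

  # Create a VLAN-backed virtual network (no IPAM): VLAN 100
  acli net.create vlan.100 vlan=100
  # Verify
  acli net.list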

IPAM: When Nutanix Acts as Your DHCP

For each Virtual Network, you can enable IPAM. When IPAM is enabled, AHV provides DHCP services for VMs on that network. You configure the subnet, default gateway, DNS servers, and DHCP pool range.

This is useful for self-contained tenant networks (test/dev, isolated environments, VDI floating pools) where you don't want to rely on external DHCP. For production environments connected to the corporate network, you typically disable IPAM and let the physical network's DHCP serve VMs.
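
The IPAM variant adds an IP configuration and a DHCP pool at creation time. A sketch under assumed addressing (gateway .1, pool .50-.200); exact acli parameters can vary by AOS version:

  # IPAM-enabled network: AHV serves DHCP on 10.20.30.0/24
  acli net.create vlan.200 vlan=200 ip_config=10.20.30.1/24
  acli net.add_dhcp_pool vlan.200 start=10.20.30.50 end=10.20.30.200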

Diagram: AHV Networking Stack

Whiteboard ready · NCA · NCP-MCI · NCP-NS
From physical NICs up to VMs. Bonds aggregate NICs, bridges carry traffic, virtual networks provide VLAN-tagged port groups. The same OVS used in OpenStack and many Linux platforms.

The Cycle, Frame Two: AHV Networking as Linux Plus OVS

For network engineers from the Linux world, AHV networking is just Linux networking with Open vSwitch as the switch. The CLI tools you would use on any Linux host (ip, ovs-vsctl, ovs-ofctl, ip link) work on the AHV hypervisor.
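
A short orientation pass, run as root on an AHV host, using only standard tooling:

  ovs-vsctl show              # bridges, bonds, and ports in one view
  ovs-vsctl list-br           # expect br0 and br0.local
  ovs-ofctl dump-flows br0    # flow table (Flow Network Security rules land here)
  ip link show                # physical NICs and their link state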

The Cycle, Frame Three: Networking as Policy, Not Topology

In traditional networking, you secure VMs by VLAN segmentation, ACLs on switch ports, and firewall rules at gateways. Topology drives policy: a VM is in vlan.100, and policy is whatever applies to vlan.100. Move the VM, the policy changes.

Flow Network Security inverts this. Policy is defined by categories (Module 4) attached to VMs, not by topology. A VM tagged Tier: Web follows the Web-tier policy regardless of which VLAN it lives in. Move the VM to a different cluster, the policy follows.

The Cycle, Frame Four: What Comes Free vs What Costs

VMware networking scales up by license tier:

  • vSS is included everywhere; vDS used to require Enterprise Plus (post-Broadcom, it is bundled into the Foundation/Cloud Foundation tiers).
  • NSX-T is a separate product with its own licensing for microsegmentation, overlays, and edge services.

Nutanix:

  • OVS, bridges, bonds, and virtual networks ship with AHV in every edition.
  • Flow Network Security requires NCI Ultimate or the Security Add-On for NCI Pro.
  • Flow Virtual Networking is bundled at the platform level.

The customer paying separately for NSX-T microsegmentation will find Flow Network Security at NCI Ultimate (or as the Security Add-On) a meaningful licensing change. Always run the actual numbers; the answer depends on tier, workload size, and whether the Security Add-On's encryption capabilities also displace another vendor's tooling.


Flow Network Security: Microsegmentation Done Differently

Flow Network Security is Nutanix's distributed firewall and microsegmentation product. It runs across all AHV hosts in the cluster (Prism Central drives policy distribution; OVS enforces rules at each host). The canonical example, expressed as category-driven rules:

Web tier (VMs with Tier: Web):
  Inbound from any: 443/tcp, 80/tcp (allow)
  Outbound to App tier: 8080/tcp (allow)
  Default: deny

App tier (VMs with Tier: App):
  Inbound from Web tier: 8080/tcp (allow)
  Outbound to DB tier: 5432/tcp (allow)
  Default: deny

DB tier (VMs with Tier: DB):
  Inbound from App tier: 5432/tcp (allow)
  Outbound: deny
  Default: deny

The result: a three-tier application where each tier can only reach the next; lateral movement (a compromised Web VM directly reaching DB) is blocked at the OVS flow-rule level. No physical network changes. No VLAN reorganization. Categories assigned in Prism Central drive everything.
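
Because enforcement is flow rules, you can inspect the policy on any host. A hedged sketch; flow-table layout and match fields vary by AOS and Flow version, and the port filter is illustrative:

  # On an AHV host: dump the data bridge's flow table
  ovs-ofctl dump-flows br0
  # Illustrative: look for entries matching the DB port from the policy above
  ovs-ofctl dump-flows br0 | grep 5432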

Diagram: Flow Microsegmentation

Whiteboard ready · NCP-NS · NCP-MCI
Three-tier application secured by category-driven Flow Network Security. Lateral movement blocked at the OVS flow-rule level on each host.

Flow Virtual Networking (FVN): The VPC Overlay

Flow Virtual Networking is Nutanix's overlay network product. It provides VPC-style virtual networks: isolated, multi-subnet networks with their own routing, NAT, and policy, decoupled from the underlying physical network topology.

FVN is meaningfully newer than Flow Network Security (introduced in PC 2022+, with substantial maturity gains in 2024-2025). For customers with simple network requirements (flat or VLAN-segmented), FVN is overkill. For customers who want true overlay networking, multi-tenant VPCs, or hybrid-cloud network parity, FVN is the right tool.

Diagram: Flow Virtual Networking Topology

NCP-NS
A multi-VPC FVN deployment with internal routing, NAT, BGP integration, and service insertion for a third-party firewall.

Service Insertion: Third-Party Security Integration

For customers who run third-party network security (Palo Alto VM-Series, Check Point, Fortinet, etc.) and want to insert those services into Flow's traffic flow, FVN supports service insertion:

  1. Deploy the third-party security VM (e.g., a Palo Alto VM-Series instance) on the cluster.
  2. Configure FVN to redirect specific traffic flows through the security VM.
  3. The security VM inspects, applies its own policy, and forwards (or drops).

This lets customers leverage their existing security tooling without giving up Flow Network Security for VM-level microsegmentation. The two layers complement each other: Flow does VM-tier microsegmentation; the third-party service does deep packet inspection, IDS/IPS, and threat prevention.

The honest framing: Flow Network Security is competitive for microsegmentation. FVN provides VPC-style overlays. For deep packet inspection and advanced threat prevention, you typically still want a specialized security product. Service insertion lets you have both.


What Networking Genuinely Lacks Compared to NSX-T

  1. Advanced routing patterns. NSX-T has years of depth in distributed routing, BGP integration, OSPF, dynamic routing protocols. FVN handles common cases well but does not match NSX-T's full routing feature set.
  2. Edge services maturity. NSX-T's edge services (load balancing, VPN, gateway firewall, IDS/IPS) are mature and well-integrated. Nutanix relies more on service insertion for advanced services.
  3. Third-party ecosystem depth. NSX-T has had more time to develop integrations with third-party security and networking products.
  4. Geographic / multi-cluster networking. NSX-T Federation provides multi-site network policy with strong consistency. FVN is increasingly capable here but not yet at NSX-T Federation parity.
  5. Some performance-tuning depth at the network plane for very large, very dense deployments.

For typical mid-market and enterprise general-purpose deployments, none of these are deal-breakers. For customers running deep NSX-T deployments or with hyperscale requirements, the gaps are real.

What Networking Has That Differentiates Nutanix

  1. Open vSwitch as the substrate. Open standard, auditable, well-understood.
  2. Category-driven policy. Operationally cleaner than IP-based ACLs for typical enterprise use.
  3. Bundle-friendly licensing for microsegmentation. Flow Network Security ships with NCI Ultimate or the Security Add-On for NCI Pro; NSX-T microsegmentation is separately licensed.
  4. Single management plane. Network configuration, microsegmentation, and policy all live in Prism Central alongside compute and storage.
  5. Native integration with categories from compute and storage. A single category like Environment: Production drives backup policy, microsegmentation, quotas, and reporting.

Lab Exercise: Build a Microsegmentation Policy

  1. Inventory existing networking. From a CVM:
    manage_ovs show_uplinks
    manage_ovs show_bridges
    manage_ovs show_bonds
  2. Create three Virtual Networks in Prism Element:
    • vlan.100 for Web tier (VLAN 100, IPAM disabled)
    • vlan.200 for App tier (VLAN 200, IPAM enabled, 10.20.30.0/24)
    • vlan.300 for DB tier (VLAN 300, IPAM enabled, 10.20.40.0/24)
  3. Deploy three VMs (one per tier). Use any small Linux image. Boot them.
  4. Define categories in Prism Central: key Tier with values Web, App, and DB.
  5. Apply categories to VMs from Prism Central VM list.
  6. Create a Flow Network Security policy in Prism Central → Policies → Security Policies → Create. Application Type: 3-tier (Web/App/DB). Define rules per the data path; default deny.
  7. Test the policy. Web → App on 8080 should succeed; Web → DB direct should fail; App → DB on 5432 should succeed (a connectivity sketch follows this list).
  8. Inspect OVS flow rules on an AHV host: ovs-ofctl dump-flows br0. You should see flow entries that implement the firewall rules.
  9. Optional: enable monitor mode first to capture traffic without enforcing, then enable enforcement. This is the recommended customer pattern.
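
For step 7, a minimal connectivity check from inside the VMs. The IP addresses are hypothetical; substitute whatever IPAM (or your external DHCP) actually assigned:

  # From the Web-tier VM
  nc -zv 10.20.30.10 8080    # Web -> App: expect "succeeded"
  nc -zv 10.20.40.10 5432    # Web -> DB: expect timeout (default deny)
  # From the App-tier VM
  nc -zv 10.20.40.10 5432    # App -> DB: expect "succeeded"

In monitor mode (step 9) the blocked flow still completes but is logged as a would-be deny; only after you enable enforcement does the Web → DB check actually fail.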

Practice Questions

Twelve questions. Six knowledge MCQ, four scenario MCQ, two NCX-style design questions.

Q1 · NCA · NCP-MCI · NCP-NS

What is the role of br0 in AHV networking?

Why this answer

br0 is the default OVS data bridge. It connects to physical NICs (typically through a bond) and carries all user VM traffic. Virtual networks attach to br0 with VLAN tags.

Why not the others

  • A) That describes br0.local, not br0.
  • C) br0 is the active bridge in normal operation.
  • D) IPAM is per-virtual-network, not a function of any bridge.

The trap

A is the seductive distractor for someone who confuses the data and management bridges. br0 = data; br0.local = management.

Q2 · NCP-MCI · NCP-NS

Which bond mode does NOT require corresponding configuration on the upstream physical switch?

Why this answer

Active-backup uses one NIC at a time and switches over on link failure. It requires no special switch configuration; any switch supporting basic Ethernet works.

Why not the others

  • A) LACP requires the switch to support and be configured for 802.3ad.
  • B) Balance-tcp requires LACP-style aggregation on the switch.
  • D) Both require switch configuration.

The trap

Customers who default to LACP as "the right answer" miss that LACP requires switch-side cooperation. Active-backup is the simpler, switch-agnostic default.

Q3 · NCP-MCI · NCP-NS

Which describes Nutanix's IPAM (IP Address Management)?

Why this answer

IPAM is an optional per-virtual-network feature. When enabled, the cluster provides DHCP for that network's VMs. When disabled (typical for production), VMs use external DHCP from the corporate network.

Why not the others

  • A) IPAM is optional, not required.
  • C) IPAM coexists with corporate DHCP; customers typically disable IPAM where corporate DHCP serves.
  • D) IPAM is included with AHV.

The trap

A reflects a partial understanding. IPAM is a tool, not a requirement.

Q4 · NCP-MCI · NCP-NS

What does Flow Network Security primarily provide?

Why this answer

Flow Network Security is the distributed firewall product. Rules are defined per category and enforced by OVS flow rules on each AHV host.

Why not the others

  • A) That describes Flow Virtual Networking (FVN).
  • C) Flow is not a load balancer.
  • D) Flow has some intrusion-detection capabilities but is not primarily a deep-packet-inspection product.

The trap

Confusion between Flow Network Security (microsegmentation) and Flow Virtual Networking (overlays). Network Security = microsegmentation; Virtual Networking = overlays.

Q5 · NCP-NS

Which is true about Flow Network Security policy enforcement?

Why this answer

Each host enforces rules for the VMs running on it, via OVS flow entries. This avoids the bottleneck of centralized firewalls and allows microsegmentation at scale.

Why not the others

  • A) Centralized enforcement is the older model. Flow specifically does not work this way.
  • C) Flow operates at the OVS layer in AHV, not on physical switches.
  • D) Stateful rules apply in both directions.

The trap

A is what older firewall architectures looked like. Test-writers know this old mental model is sticky.

Q6 · NCA · NCP-MCI

Which is the correct mapping of vSphere networking concepts to AHV/OVS concepts?

Why this answer

vDS port groups map to AHV Virtual Networks. vSwitch maps to OVS bridge. NIC teaming policy maps to bond mode.

Why not the others

  • A) Misaligns multiple terms; IPAM is unrelated to VLAN trunking.
  • C) Conflates Flow products with vSphere base networking.
  • D) Misaligns multiple concepts.

The trap

This question rewards practitioners who have actually built the mental mapping.

Q7 · NCP-NS · sales-relevant

A customer has 50 VMs running a three-tier application on AHV. They want microsegmentation so Web cannot directly reach DB, even though all three tiers share the same VLAN. Recommended approach?

Why this answer

This is exactly the use case Flow Network Security was designed for: VM-tier microsegmentation independent of VLAN topology, driven by categories. Distributed enforcement, no central firewall bottleneck.

Why not the others

  • A) NSX-T deployment is unnecessary; Flow handles this at NCI Ultimate (or Security Add-On for NCI Pro).
  • B) VLAN reorganization is the old way; doesn't scale to many policies.
  • D) Centralized firewall creates a bottleneck.

The trap

A and B work but are operationally inferior. The exam rewards the architecturally correct approach.

Q8 · sales-relevant

Senior network architect: "We've spent 4 years building NSX-T. Why would I look at Flow?" Strongest SA response?

Why this answer

Respects the architect's investment, names the coexistence pattern (NSX-T on ESXi-on-Nutanix continues to work), differentiates Flow Network Security from NSX-T's full feature set, and proposes a discovery conversation.

Why not the others

  • A) Dismissive of years of work.
  • C) Untrue. Flow Network Security is competitive for microsegmentation; NSX-T retains advantages for advanced routing and edge services.
  • D) Speculative and aggressive. Costs you the relationship.

The trap

Confident overclaims (A, C) and competitive negativity (D) all read as sales pressure. The honest answer respects existing architecture.

Q9 · NCP-NS

Which correctly distinguishes Flow Network Security from Flow Virtual Networking?

Why this answer

Network Security = firewall/segmentation. Virtual Networking = overlays/VPCs/routing. Different products solving different problems.

Why not the others

  • A) Reverses the products' purposes.
  • C) Distinct products with different feature sets.
  • D) Both are AHV-native.

The trap

New learners conflate the two Flow products because they share the brand name.

Q10 · NCP-NS · sales-relevant

A 10-node AHV cluster has Flow Network Security enabled. Considering FVN for a new multi-tenant project. Which is correct?

Why this answer

The two products are complementary. FVN provides VPC isolation and overlay networking; Flow Network Security provides VM-level microsegmentation. Many real deployments use both.

Why not the others

  • A) Not true; they coexist by design.
  • C) FVN runs on AHV.
  • D) Different products with different purposes.

The trap

B requires understanding the products are layered, not competing.

Q11 · NCX-MCI prep · NCP-NS prep · sales-relevant

NCX-style design: financial-services compliance network and security architecture.

Scenario: A financial-services customer is deploying a new Nutanix cluster for their core banking application. Three-tier system (Web/App/DB) with sensitive customer data. Compliance requires:

  • VM-level microsegmentation (no lateral movement between tiers).
  • Audit logging of all denied traffic.
  • Network isolation between Production and DR environments.
  • Service insertion of an existing Palo Alto VM-Series for DPI on incoming traffic.
  • Multi-tenant isolation between Banking, Wealth Management, and Internal Apps business units.
  • Integration with corporate identity for policy administration.

Challenge: Walk through your design.

A strong answer covers

  • VPC topology with FVN. Three VPCs, one per business unit. Each VPC has its own IP space and isolation by default. Inter-VPC routing rules for required cross-business shared services.
  • Per-VPC virtual networks. Within each VPC, separate subnets for Web, App, DB tiers. VLAN tagging for routing semantics where physical-network integration is required.
  • Categories. BusinessUnit (Banking/Wealth/InternalApps), Tier (Web/App/DB), Environment (Prod/DR), Compliance (PCI/SOX as applicable).
  • Flow Network Security policies. Web allows inbound 443 from internet (after Palo Alto); outbound 8080 to App. App allows inbound 8080 from Web; outbound 5432 to DB. DB allows inbound 5432 from App only; outbound denied. Cross-business-unit denial by default.
  • Service insertion. Configure FVN service insertion to redirect all internet-facing inbound traffic through the Palo Alto VM-Series before it reaches the Web tier.
  • Audit logging. Configure Flow to log all denies; forward to the customer's SIEM.
  • Production/DR isolation. Use Environment categories. Cross-environment policies are denial-by-default; explicit replication-traffic rules permit Async / NearSync (Module 7).
  • Identity integration. Prism Central with SAML to the customer's IdP. Map identity groups to Prism roles for policy administration.
  • What you still need: specific compliance frameworks (PCI v4.0, GLBA, SOX), Palo Alto VM-Series model and licensing, separate management VPC for PC, corporate-network BGP topology, audit-log retention requirements.

A weak answer misses

  • Defaulting to flat virtual networks without VPCs for multi-tenant isolation.
  • Using IP-based ACLs instead of category-driven Flow rules.
  • Forgetting service insertion for the existing Palo Alto.
  • Not addressing audit logging requirements.
  • Treating Production/DR as the same security zone.

Q12 · NCX-MCI prep · NCP-NS prep · sales-relevant

NCX-style architectural defense: an NSX-T-loyal architect challenges Flow.

Scenario: Customer's senior network and security architect (12 years of NSX-T):

"Flow looks fine for basic microsegmentation, but our environment uses NSX-T's distributed routing, gateway services with stateful firewall, IDS/IPS at the edge, BGP integration with our SD-WAN, L2VPN to remote sites, and a third-party-integration ecosystem we've spent years tuning. Flow doesn't have most of that. Why would we walk away from NSX-T?"

Challenge: Respond. He has named real NSX-T capabilities. Address each.

A strong answer covers

  • Acknowledge the NSX-T capability set is real and substantial. 12 years of investment is not an obstacle to argue with.
  • Reframe each capability:
    • Distributed routing. FVN provides distributed routing for VPCs. NSX-T's routing is more mature for complex topologies.
    • Gateway services / stateful firewall. FVN gateways provide NAT and basic firewall. For deep edge services, retain NSX-T-on-ESXi or insert third-party services via FVN service insertion.
    • IDS/IPS at the edge. Service insertion in FVN: insert a third-party IDS/IPS into the traffic flow.
    • BGP / SD-WAN integration. FVN supports BGP. SD-WAN integration depth varies by SD-WAN vendor.
    • L2VPN to remote sites. Not natively in FVN; this is a legitimate gap. Customers with L2VPN needs typically retain NSX-T or use SD-WAN-overlay alternatives.
    • Third-party ecosystem. NSX-T has more depth here. Service insertion bridges some of this; absolute parity is not yet achieved.
  • The honest reframe. "You don't have to walk away from NSX-T. NSX-T continues to run on Nutanix-on-ESXi, which means the cluster's HCI benefits don't require a network-architecture change. Flow Network Security is genuinely competitive for VM microsegmentation and is an option for new workloads."
  • Concrete proposal. "Let me build a feature-by-feature mapping with you. We'll mark which NSX-T capabilities Flow matches, which it partially matches, which require service insertion, and which require keeping NSX-T."

A weak answer misses

  • Claiming Flow has all NSX-T features.
  • Dismissing NSX-T as outdated.
  • Not acknowledging the L2VPN gap.
  • Missing the NSX-T-on-Nutanix-on-ESXi coexistence pattern.
  • Not proposing the feature-mapping exercise as a concrete next step.

What You Now Have

You can translate vSphere networking vocabulary to AHV/OVS terminology fluently. vDS port group becomes Virtual Network, vSwitch becomes OVS bridge, NIC teaming becomes bond mode.

You know the OVS substrate: br0 for VM data traffic, br0.local for management. Bonds with active-backup, balance-slb, balance-tcp, and LACP. VLAN configuration on virtual networks. IPAM as an optional per-virtual-network DHCP service.

You know Flow Network Security: distributed firewall, category-driven policy, stateful rules, OVS-level enforcement, and the operational pattern of "monitor first, then enforce" for production. You can build a three-tier microsegmentation policy in 90 seconds and demo it.

You know Flow Virtual Networking: VPC-style overlays, internal routing, NAT, BGP, service insertion. You know FVN is younger than NSX-T but increasingly capable.

You have the honest comparison to NSX-T: Flow Network Security is competitive for microsegmentation; FVN is behind on advanced routing, edge services, L2VPN, and third-party ecosystem. NSX-T-on-Nutanix-on-ESXi is the durable answer for customers with established NSX-T investment.

You know the licensing reality: Flow Network Security at NCI Ultimate (or via the Security Add-On for NCI Pro) is included; NSX-T microsegmentation is separately licensed. The economics matter and you can walk through them.

You are now ready for data protection and DR. Module 7 covers Protection Domains, Async/NearSync/Metro replication, Recovery Plans (Nutanix Disaster Recovery, formerly Leap), and the durable DR story that frequently decides enterprise deals.


References

Authoritative sources verified during the technical review pass on this module. Flow licensing in particular is the most volatile area; reverify against the current Nutanix Cloud Platform software-options page before quoting tier-specific costs.

Cross-References

  • Previous: Module 5: DSF Deep Dive
  • Next: Module 7: Data Protection and DR
  • Glossary: Open vSwitch · Bridge · Bond · Virtual Network · VLAN · IPAM · Flow Network Security · Flow Virtual Networking · Microsegmentation · Service Insertion see appendix
  • Comparison Matrix: Networking Row · Microsegmentation Row · Network Overlay Row see appendix
  • Objections: #16 "What about NSX-T?" · #17 "Network performance vs my Cisco fabric" · #20 "We've already invested in microsegmentation" · #25 "Software-defined networking trust" see appendix
  • Discovery Questions: Q-NET-01 through Q-NET-04, Q-SEC-01 (physical topology, NSX-T footprint, microsegmentation requirements, multi-tenancy / VPC, compliance) see appendix
  • Competitive Matrix: Flow vs NSX-T feature comparison see appendix