The Promise
By the end of this module you will:
- Configure AHV networking from vSphere muscle memory. AHV uses different vocabulary (Open vSwitch, bridges, bonds, virtual networks) for largely equivalent concepts. By the end of this module the mapping is automatic.
- Cover roughly half of the NCP-NS (Nutanix Certified Professional, Network and Security) blueprint. The 7.5 version opened for public scheduling on April 4, 2026, and the material in this module maps to roughly half of its exam objectives.
- Build a microsegmentation policy using categories and explain why category-driven policy is operationally cleaner than the IP-based ACL world.
- Defend Flow against an NSX-T-loyal architect. Flow Network Security is competitive with NSX-T's distributed firewall for VM microsegmentation, often simpler. Flow Virtual Networking is younger than NSX-T and behind it for advanced routing. Draw the comparison without flinching.
- Make the network economics case. Flow Network Security is licensed via NCI Ultimate or the Security Add-On for NCI Pro; FVN is bundled at the platform level. NSX-T is a separate VMware product with its own licensing.
- Diagnose AHV networking issues from the CLI. OVS bridges, bonds, virtual networks, Flow policy state.
The network is the dimension where customers diverge most. Some come from a Cisco-only world and don't trust software-defined networking. Some come from heavy NSX-T deployments and want to know what they lose. Some treat the network as commodity. Read the customer; pick the framing.
Foundation: What You Already Know
Picture a vSphere host. Physical NICs (vmnics) are aggregated into a vSwitch (vSS for standard, vDS for distributed). Port groups define VLAN tagging and policy. Each VM connects to a port group. NIC teaming is configured at the vSwitch level (active-active, active-standby, route-based-on-IP-hash, etc.). vCenter centralizes this on vDS: one configuration deployed across many hosts.
You also know the limits. vSS is per-host configuration. vDS used to require Enterprise Plus (post-Broadcom: bundled into Foundation/Cloud Foundation tiers). NIC teaming policies are well-defined but must be kept consistent across many per-host and per-vSwitch configurations.
Now switch underneath. AHV networking uses Open vSwitch (OVS): an open-source production-grade software switch that runs in the Linux kernel of every AHV host. The same OVS used in Red Hat OpenStack, OVN, and many Linux-based virtualization platforms.
| vSphere Term | AHV / OVS Term |
|---|---|
| vSphere Distributed Switch (vDS) | (no exact equivalent; per-host config managed centrally via Prism Central) |
| Standard vSwitch (vSS) | Open vSwitch bridge (typically br0) |
| Port Group | Virtual Network (or "subnet" in Prism) |
| VLAN trunk on uplinks | VLAN configuration on the bond / bridge |
| NIC Teaming Policy (active-standby, IP hash) | Bond Mode (active-backup, balance-slb, balance-tcp, LACP) |
| vmkernel ports (management, vMotion, etc.) | Internal interfaces on br0.local (CVM/host management) |
Core Content
Open vSwitch: The Substrate
OVS is the software switch underneath every AHV host. It runs in the Linux kernel. Each AHV host has at least two bridges by default:
- `br0` (the data bridge). Carries user VM traffic. Connected to the host's physical NICs (typically via a bond). This is where your VMs' NICs land.
- `br0.local` (the local management bridge). Used for CVM-to-hypervisor traffic and internal management. Not exposed to user VMs.
Inside br0, OVS handles VLAN tagging, MAC learning, broadcast/multicast, and (for Flow Network Security) flow rules that implement the firewall.
Bonds: NIC Teaming for AHV
A bond in AHV is an OVS bond: multiple physical NICs aggregated into a single logical uplink on the bridge. Bonds provide redundancy and (for some modes) load balancing.
| Bond Mode | What It Does | When to Use |
|---|---|---|
| active-backup | One active NIC; others standby. Failover on link loss. | Default. Simplest. Works with any switch. |
| balance-slb | OVS balances by source MAC. No switch-side configuration required. | Better-than-active-backup throughput without LACP. |
| balance-tcp | Balances per-flow on TCP/IP headers. Requires LACP on the switch. | Best throughput with proper switch coordination. |
| LACP (active-active) | 802.3ad link aggregation. Requires LACP on the switch. | Production with proper switch configuration. |
The default for new clusters is typically active-backup. Most production deployments upgrade to balance-slb (no switch-side change needed). LACP / balance-tcp is supported but Nutanix's official recommendation has historically leaned away from LACP because misconfigured upstream switches can disable cluster connectivity in ways that are hard to recover from in the field. If you choose LACP, document the switch-side configuration carefully and validate before go-live.
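The core idea behind balance-slb can be sketched in a few lines: OVS pins each source MAC to one bond member, so a single VM's frames never reorder across NICs while different VMs spread across links. The hash and uplink names below are illustrative stand-ins, not the actual OVS implementation.

```python
# Illustrative sketch of balance-slb's source-MAC pinning (NOT real OVS code).
# A deterministic hash of the source MAC selects one bond member, so the
# mapping is stable per VM but distributes different VMs across uplinks.
import hashlib

UPLINKS = ["eth0", "eth1"]  # hypothetical bond members

def uplink_for_mac(src_mac: str) -> str:
    """Deterministically map a source MAC to one bond member."""
    digest = hashlib.sha256(src_mac.lower().encode()).digest()
    return UPLINKS[digest[0] % len(UPLINKS)]

# The same VM (same MAC, any case) always lands on the same uplink:
assert uplink_for_mac("52:54:00:aa:bb:cc") == uplink_for_mac("52:54:00:AA:BB:CC")
assert uplink_for_mac("52:54:00:aa:bb:cc") in UPLINKS
```

This is also why balance-slb needs no switch-side configuration: from the switch's perspective, each MAC simply appears on one port at a time.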
Virtual Networks (the Port Group Equivalent)
A Virtual Network in Nutanix vocabulary is the equivalent of a vSphere port group. It carries:
- VLAN tag (or none, for untagged traffic).
- Optional IPAM: the cluster can act as DHCP for VMs on this network.
- Connection to a bridge (typically `br0`).
When you create a VM and attach it to a Virtual Network, the VM's vNIC lands on br0 with the configured VLAN. Inter-VM traffic on the same VLAN flows at OVS speed (in-host, kernel-level). Inter-VLAN traffic exits via the bond uplink to the physical network.
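The forwarding decision just described can be sketched as a tiny function. This is an illustrative model of the data path, not OVS code; it assumes routing between VLANs happens outside the host.

```python
# Sketch of the AHV data path decision (illustrative, not OVS internals):
# same-VLAN traffic between VMs on the same host is switched inside br0 at
# kernel speed; everything else leaves via the bond uplink.
def forwarding_path(src_host: str, dst_host: str, src_vlan: int, dst_vlan: int) -> str:
    if src_vlan == dst_vlan and src_host == dst_host:
        return "in-host (br0, kernel-level)"
    return "bond uplink to physical network"

assert forwarding_path("host-1", "host-1", 100, 100) == "in-host (br0, kernel-level)"
assert forwarding_path("host-1", "host-2", 100, 100) == "bond uplink to physical network"
assert forwarding_path("host-1", "host-1", 100, 200) == "bond uplink to physical network"
```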
Naming convention: virtual networks in Prism are commonly named vlan.<id> (e.g., vlan.100), though arbitrary names are allowed.
IPAM: When Nutanix Acts as Your DHCP
For each Virtual Network, you can enable IPAM. When IPAM is enabled, AHV provides DHCP services for VMs on that network. You configure the subnet, default gateway, DNS servers, and DHCP pool range.
This is useful for self-contained tenant networks (test/dev, isolated environments, VDI floating pools) where you don't want to rely on external DHCP. For production environments connected to the corporate network, you typically disable IPAM and let the physical network's DHCP serve VMs.
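Conceptually, per-network IPAM is a DHCP pool with stable leases. The sketch below models that behavior with hypothetical names and ranges; AHV's real IPAM is implemented in the Acropolis service, not user code.

```python
# Minimal conceptual model of per-virtual-network IPAM (illustrative only):
# hand out addresses from a configured pool and keep leases stable per VM.
import ipaddress

class IpamPool:
    def __init__(self, subnet, pool_start, pool_end):
        self.subnet = ipaddress.ip_network(subnet)
        self.start = ipaddress.ip_address(pool_start)
        self.end = ipaddress.ip_address(pool_end)
        self.leases = {}  # vm name -> leased address

    def lease(self, vm):
        if vm in self.leases:           # DHCP-style renewal: same address back
            return self.leases[vm]
        taken = set(self.leases.values())
        ip = self.start
        while ip in taken:              # next free address in the pool
            ip += 1
        if ip > self.end:
            raise RuntimeError("pool exhausted")
        self.leases[vm] = ip
        return ip

pool = IpamPool("10.20.30.0/24", "10.20.30.50", "10.20.30.99")
assert str(pool.lease("vm-a")) == "10.20.30.50"
assert str(pool.lease("vm-b")) == "10.20.30.51"
assert str(pool.lease("vm-a")) == "10.20.30.50"  # stable lease on renewal
```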
Diagram: AHV Networking Stack
The Cycle, Frame Two: AHV Networking as Linux Plus OVS
For network engineers from the Linux world, AHV networking is just Linux networking with Open vSwitch as the switch. The CLI tools you would use on any Linux host (`ip`, `ip link`, `ovs-vsctl`, `ovs-ofctl`) work on the AHV hypervisor.
- Standard troubleshooting tools work. `tcpdump` on `br0` shows VM traffic. `ovs-vsctl show` lists bridges and ports. `ovs-appctl` queries OVS internals.
- Standard automation works. Ansible, Terraform, and custom scripts can manipulate OVS configuration where the Nutanix abstraction is insufficient.
- The architecture is auditable. There is no proprietary kernel module hiding what the network does. OVS is open source and well-understood.
The Cycle, Frame Three: Networking as Policy, Not Topology
In traditional networking, you secure VMs by VLAN segmentation, ACLs on switch ports, and firewall rules at gateways. Topology drives policy: a VM is in vlan.100, and policy is whatever applies to vlan.100. Move the VM, the policy changes.
Flow Network Security inverts this. Policy is defined by categories (Module 4) attached to VMs, not by topology. A VM tagged Tier: Web follows the Web-tier policy regardless of which VLAN it lives in. Move the VM to a different cluster, the policy follows.
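The contrast between topology-driven and category-driven policy is easiest to see in a toy model. All names below are hypothetical; the point is the lookup key, not the implementation.

```python
# Toy contrast between the two models (illustrative names throughout).
# Topology-driven: policy keys on the VLAN the VM happens to sit in.
# Category-driven: policy keys on a label carried by the VM itself.
vlan_policy = {100: "web-policy", 200: "app-policy"}   # topology-driven
category_policy = {("Tier", "Web"): "web-policy"}      # category-driven

vm = {"name": "web-01", "vlan": 100, "categories": {"Tier": "Web"}}

def policy_by_vlan(vm):
    return vlan_policy.get(vm["vlan"])

def policy_by_category(vm):
    for kv in vm["categories"].items():
        if kv in category_policy:
            return category_policy[kv]
    return None

assert policy_by_vlan(vm) == "web-policy"
vm["vlan"] = 200                                 # the VM moves VLANs...
assert policy_by_vlan(vm) == "app-policy"        # topology policy silently changed
assert policy_by_category(vm) == "web-policy"    # category policy followed the VM
```

The failure mode the sketch exposes is exactly the operational hazard of topology-driven security: move the workload and the effective policy changes without anyone editing a rule.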
The Cycle, Frame Four: What Comes Free vs What Costs
VMware networking scales up by license tier:
- vSS (standard switch): included.
- vDS (distributed switch): historically Enterprise Plus, now bundled in Foundation tiers.
- NSX-T microsegmentation: separate NSX licensing, per-CPU or per-workload.
- NSX-T routing/edge services: even higher NSX tiers.
Nutanix:
- AHV networking with OVS, virtual networks, IPAM: included with NCI baseline.
- Flow Network Security (microsegmentation): licensed via NCI Ultimate, or via the optional Security Add-On for NCI Pro (per usable TiB; bundles Data-at-Rest Encryption with Flow microsegmentation). It is not an NCM tier feature; do not confuse the NCM management tiers with Flow's NCI-side licensing.
- Flow Virtual Networking (VPC overlay, advanced routing): bundled at the platform level with PC / AOS; specific feature gating depends on PC and AOS version.
The customer paying separately for NSX-T microsegmentation will find Flow Network Security at NCI Ultimate (or as the Security Add-On) a meaningful licensing change. Always run the actual numbers; the answer depends on tier, workload size, and whether the Security Add-On's encryption capabilities also displace another vendor's tooling.
Flow Network Security: Microsegmentation Done Differently
Flow Network Security is Nutanix's distributed firewall and microsegmentation product. It runs across all AHV hosts in the cluster (Prism Central drives policy distribution; OVS enforces rules at each host).
- Define categories on VMs (e.g., `Tier: Web`, `Tier: App`, `Tier: DB`, `Environment: Prod`).
- Define rules that apply to category groups: "VMs with `Tier: Web` may receive traffic from anywhere on port 443; may send traffic to VMs with `Tier: App` on port 8080; may not communicate with `Tier: DB` directly."
- Rules are stateful (return traffic is automatically permitted for established connections).
- Enforcement is distributed: each host enforces rules for VMs running on that host. There is no central firewall bottleneck.
- Default policy is configurable: explicit-allow (whitelist, the secure default) or explicit-deny.
```
Web tier (VMs with Tier: Web):
  Inbound from any: 443/tcp, 80/tcp (allow)
  Outbound to App tier: 8080/tcp (allow)
  Default: deny

App tier (VMs with Tier: App):
  Inbound from Web tier: 8080/tcp (allow)
  Outbound to DB tier: 5432/tcp (allow)
  Default: deny

DB tier (VMs with Tier: DB):
  Inbound from App tier: 5432/tcp (allow)
  Outbound: deny
  Default: deny
```
The result: a three-tier application where each tier can only reach the next; lateral movement (a compromised Web VM directly reaching DB) is blocked at the OVS flow-rule level. No physical network changes. No VLAN reorganization. Categories assigned in Prism Central drive everything.
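The three-tier policy above can be restated as an executable rule table. This is a conceptual model only; real Flow rules are OVS flow entries programmed by Prism Central, not Python.

```python
# Executable restatement of the three-tier policy (illustrative model only;
# real Flow enforcement is OVS flow entries programmed by Prism Central).
# Each entry: (source tier, destination tier) -> allowed TCP ports.
ALLOW = {
    ("any", "Web"): {80, 443},   # inbound to Web from anywhere
    ("Web", "App"): {8080},      # Web may reach App
    ("App", "DB"):  {5432},      # App may reach DB
}

def permitted(src_tier: str, dst_tier: str, port: int) -> bool:
    """Default-deny: traffic passes only if an explicit rule allows it."""
    return (port in ALLOW.get((src_tier, dst_tier), set())
            or port in ALLOW.get(("any", dst_tier), set()))

assert permitted("Web", "App", 8080)        # next tier: allowed
assert permitted("internet", "Web", 443)    # inbound from anywhere
assert not permitted("Web", "DB", 5432)     # lateral movement: blocked
assert not permitted("App", "Web", 22)      # nothing else passes
```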
Diagram: Flow Microsegmentation
Flow Virtual Networking (FVN): The VPC Overlay
Flow Virtual Networking is Nutanix's overlay network product. It provides VPC-style virtual networks: isolated, multi-subnet networks with their own routing, NAT, and policy, decoupled from the underlying physical network topology.
- Virtual Private Clouds (VPCs). Create isolated network environments with their own IP space.
- Routing between VPCs. Internal routing rules, similar to AWS VPC peering.
- NAT. Inbound and outbound NAT for VPC-to-physical traffic.
- External connectivity. Connect VPCs to the physical network via gateway services.
- BGP integration. Some configurations support BGP with the upstream physical network.
- Service insertion. Insert third-party network services (firewalls, load balancers) into VPC traffic flow.
FVN is meaningfully newer than Flow Network Security (introduced in PC 2022+, with substantial maturity gains in 2024-2025). For customers with simple network requirements (flat or VLAN-segmented), FVN is overkill. For customers who want true overlay networking, multi-tenant VPCs, or hybrid-cloud network parity, FVN is the right tool.
Diagram: Flow Virtual Networking Topology
Service Insertion: Third-Party Security Integration
For customers who run third-party network security (Palo Alto VM-Series, Check Point, Fortinet, etc.) and want to insert those services into Flow's traffic flow, FVN supports service insertion:
- Deploy the third-party security VM (e.g., a Palo Alto VM-Series instance) on the cluster.
- Configure FVN to redirect specific traffic flows through the security VM.
- The security VM inspects, applies its own policy, and forwards (or drops).
This lets customers leverage their existing security tooling without giving up Flow Network Security for VM-level microsegmentation. The two layers complement: Flow does VM-tier microsegmentation; the third-party service does deep packet inspection, IDS/IPS, threat prevention.
The honest framing: Flow Network Security is competitive for microsegmentation. FVN provides VPC-style overlays. For deep packet inspection and advanced threat prevention, you typically still want a specialized security product. Service insertion lets you have both.
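The service-insertion flow described above reduces to a simple chain: traffic matching a redirect rule is handed to the security VM, whose verdict decides whether the flow continues or is dropped. Everything below is a hypothetical sketch; the rule shapes and verdict values are invented for illustration.

```python
# Sketch of FVN service insertion as a decision chain (all names hypothetical):
# flows matching the redirect rule pass through the third-party security VM;
# its verdict decides forward vs drop. Non-matching flows bypass the service.
def service_chain(flow: dict, redirect_match, inspect) -> str:
    if redirect_match(flow):
        verdict = inspect(flow)        # third-party VM applies its own policy
        return "forward" if verdict == "clean" else "drop"
    return "forward"                   # bypass: not subject to service insertion

# Example: redirect all internet-facing inbound traffic through the inspector.
redirect = lambda f: f["direction"] == "inbound" and f["src"] == "internet"
inspector = lambda f: "clean" if f["port"] in (80, 443) else "threat"

assert service_chain({"direction": "inbound", "src": "internet", "port": 443},
                     redirect, inspector) == "forward"
assert service_chain({"direction": "inbound", "src": "internet", "port": 4444},
                     redirect, inspector) == "drop"
assert service_chain({"direction": "east-west", "src": "vm", "port": 4444},
                     redirect, inspector) == "forward"
```

Note the last case: east-west traffic bypasses the inspector entirely, which is why Flow Network Security still handles VM-tier microsegmentation in this layered design.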
What Networking Genuinely Lacks Compared to NSX-T
- Advanced routing patterns. NSX-T has years of depth in distributed routing, BGP integration, OSPF, dynamic routing protocols. FVN handles common cases well but does not match NSX-T's full routing feature set.
- Edge services maturity. NSX-T's edge services (load balancing, VPN, gateway firewall, IDS/IPS) are mature and well-integrated. Nutanix relies more on service insertion for advanced services.
- Third-party ecosystem depth. NSX-T has had more time to develop integrations with third-party security and networking products.
- Geographic / multi-cluster networking. NSX-T Federation provides multi-site network policy with strong consistency. FVN is increasingly capable here but not yet at NSX-T Federation parity.
- Performance-tuning depth. NSX-T exposes more network-plane tuning options for very large, very dense deployments.
For typical mid-market and enterprise general-purpose deployments, none of these are deal-breakers. For customers running deep NSX-T deployments or with hyperscale requirements, the gaps are real.
What Networking Has That Differentiates Nutanix
- Open vSwitch as the substrate. Open standard, auditable, well-understood.
- Category-driven policy. Operationally cleaner than IP-based ACLs for typical enterprise use.
- Bundle-friendly licensing for microsegmentation. Flow Network Security ships with NCI Ultimate or the Security Add-On for NCI Pro; NSX-T microsegmentation is separately licensed.
- Single management plane. Network configuration, microsegmentation, and policy all live in Prism Central alongside compute and storage.
- Native integration with categories from compute and storage. A single category like `Environment: Production` drives backup policy, microsegmentation, quotas, and reporting.
Lab Exercise: Build a Microsegmentation Policy
- Inventory existing networking. From a CVM, run `manage_ovs show_uplinks`, `manage_ovs show_bridges`, and `manage_ovs show_bonds`.
- Create three Virtual Networks in Prism Element:
  - `vlan.100` for Web tier (VLAN 100, IPAM disabled)
  - `vlan.200` for App tier (VLAN 200, IPAM enabled, 10.20.30.0/24)
  - `vlan.300` for DB tier (VLAN 300, IPAM enabled, 10.20.40.0/24)
- Deploy three VMs (one per tier). Use any small Linux image. Boot them.
- Define categories in Prism Central: Key `Tier`, Values `Web`, `App`, `DB`.
- Apply categories to VMs from the Prism Central VM list.
- Create a Flow Network Security policy in Prism Central → Policies → Security Policies → Create. Application Type: 3-tier (Web/App/DB). Define rules matching the intended traffic paths; default deny.
- Test the policy. Web → App on 8080 should succeed; Web → DB direct should fail; App → DB on 5432 should succeed.
- Inspect OVS flow rules on an AHV host: `ovs-ofctl dump-flows br0`. You should see flow entries that implement the firewall rules.
- Optional: enable monitor mode first to capture traffic without enforcing, then enable enforcement. This is the recommended customer pattern.
Practice Questions
Twelve questions. Six knowledge MCQ, four scenario MCQ, two NCX-style design questions.
What is the role of br0 in AHV networking?
Why this answer
br0 is the default OVS data bridge. It connects to physical NICs (typically through a bond) and carries all user VM traffic. Virtual networks attach to br0 with VLAN tags.
Why not the others
- A) That describes `br0.local`, not `br0`.
- C) `br0` is the active bridge in normal operation.
- D) IPAM is per-virtual-network, not a function of any bridge.
The trap
A is the seductive distractor for someone who confuses the data and management bridges. br0 = data; br0.local = management.
Which bond mode does NOT require corresponding configuration on the upstream physical switch?
Why this answer
Active-backup uses one NIC at a time and switches over on link failure. It requires no special switch configuration; any switch supporting basic Ethernet works.
Why not the others
- A) LACP requires the switch to support and be configured for 802.3ad.
- B) Balance-tcp requires LACP-style aggregation on the switch.
- D) Both require switch configuration.
The trap
Customers who default to LACP as "the right answer" miss that LACP requires switch-side cooperation. Active-backup is the simpler, switch-agnostic default.
Which describes Nutanix's IPAM (IP Address Management)?
Why this answer
IPAM is an optional per-virtual-network feature. When enabled, the cluster provides DHCP for that network's VMs. When disabled (typical for production), VMs use external DHCP from the corporate network.
Why not the others
- A) IPAM is optional, not required.
- C) IPAM coexists with corporate DHCP; customers typically disable IPAM where corporate DHCP serves.
- D) IPAM is included with AHV.
The trap
A reflects a partial understanding. IPAM is a tool, not a requirement.
What does Flow Network Security primarily provide?
Why this answer
Flow Network Security is the distributed firewall product. Rules are defined per category and enforced by OVS flow rules on each AHV host.
Why not the others
- A) That describes Flow Virtual Networking (FVN).
- C) Flow is not a load balancer.
- D) Flow has some intrusion-detection capabilities but is not primarily a deep-packet-inspection product.
The trap
Confusion between Flow Network Security (microsegmentation) and Flow Virtual Networking (overlays). Network Security = microsegmentation; Virtual Networking = overlays.
Which is true about Flow Network Security policy enforcement?
Why this answer
Each host enforces rules for the VMs running on it, via OVS flow entries. This avoids the bottleneck of centralized firewalls and allows microsegmentation at scale.
Why not the others
- A) Centralized enforcement is the older model. Flow specifically does not work this way.
- C) Flow operates at the OVS layer in AHV, not on physical switches.
- D) Stateful rules apply in both directions.
The trap
A is what older firewall architectures looked like. Test-writers know this old mental model is sticky.
Which is the correct mapping of vSphere networking concepts to AHV/OVS concepts?
Why this answer
vDS port groups map to AHV Virtual Networks. vSwitch maps to OVS bridge. NIC teaming policy maps to bond mode.
Why not the others
- A) Misaligns multiple terms; IPAM is unrelated to VLAN trunking.
- C) Conflates Flow products with vSphere base networking.
- D) Misaligns multiple concepts.
The trap
This question rewards practitioners who have actually built the mental mapping.
A customer has 50 VMs running a three-tier application on AHV. They want microsegmentation so Web cannot directly reach DB, even though all three tiers share the same VLAN. Recommended approach?
Why this answer
This is exactly the use case Flow Network Security was designed for: VM-tier microsegmentation independent of VLAN topology, driven by categories. Distributed enforcement, no central firewall bottleneck.
Why not the others
- A) NSX-T deployment is unnecessary; Flow handles this at NCI Ultimate (or Security Add-On for NCI Pro).
- B) VLAN reorganization is the old way; doesn't scale to many policies.
- D) Centralized firewall creates a bottleneck.
The trap
A and B work but are operationally inferior. The exam rewards the architecturally correct approach.
Senior network architect: "We've spent 4 years building NSX-T. Why would I look at Flow?" Strongest SA response?
Why this answer
Respects the architect's investment, names the coexistence pattern (NSX-T on ESXi-on-Nutanix continues to work), differentiates Flow Network Security from NSX-T's full feature set, and proposes a discovery conversation.
Why not the others
- A) Dismissive of years of work.
- C) Untrue. Flow Network Security is competitive for microsegmentation; NSX-T retains advantages for advanced routing and edge services.
- D) Speculative and aggressive. Costs you the relationship.
The trap
Confident overclaims (A, C) and competitive negativity (D) all read as sales pressure. The honest answer respects existing architecture.
Which correctly distinguishes Flow Network Security from Flow Virtual Networking?
Why this answer
Network Security = firewall/segmentation. Virtual Networking = overlays/VPCs/routing. Different products solving different problems.
Why not the others
- A) Reverses the products' purposes.
- C) Distinct products with different feature sets.
- D) Both are AHV-native.
The trap
New learners conflate the two Flow products because they share the brand name.
A 10-node AHV cluster has Flow Network Security enabled. Considering FVN for a new multi-tenant project. Which is correct?
Why this answer
The two products are complementary. FVN provides VPC isolation and overlay networking; Flow Network Security provides VM-level microsegmentation. Many real deployments use both.
Why not the others
- A) Not true; they coexist by design.
- C) FVN runs on AHV.
- D) Different products with different purposes.
The trap
B requires understanding the products are layered, not competing.
NCX-style design: financial-services compliance network and security architecture.
Scenario: A financial-services customer is deploying a new Nutanix cluster for their core banking application. Three-tier system (Web/App/DB) with sensitive customer data. Compliance requires:
- VM-level microsegmentation (no lateral movement between tiers).
- Audit logging of all denied traffic.
- Network isolation between Production and DR environments.
- Service insertion of an existing Palo Alto VM-Series for DPI on incoming traffic.
- Multi-tenant isolation between Banking, Wealth Management, and Internal Apps business units.
- Integration with corporate identity for policy administration.
Challenge: Walk through your design.
A strong answer covers
- VPC topology with FVN. Three VPCs, one per business unit. Each VPC has its own IP space and isolation by default. Inter-VPC routing rules for required cross-business shared services.
- Per-VPC virtual networks. Within each VPC, separate subnets for Web, App, DB tiers. VLAN tagging for routing semantics where physical-network integration is required.
- Categories. `BusinessUnit` (Banking/Wealth/InternalApps), `Tier` (Web/App/DB), `Environment` (Prod/DR), `Compliance` (PCI/SOX as applicable).
- Flow Network Security policies. Web allows inbound 443 from the internet (after Palo Alto); outbound 8080 to App. App allows inbound 8080 from Web; outbound 5432 to DB. DB allows inbound 5432 from App only; outbound denied. Cross-business-unit traffic denied by default.
- Service insertion. Configure FVN service insertion to redirect all internet-facing inbound traffic through the Palo Alto VM-Series before it reaches the Web tier.
- Audit logging. Configure Flow to log all denies; forward to the customer's SIEM.
- Production/DR isolation. Use `Environment` categories. Cross-environment policies are deny-by-default; explicit replication-traffic rules permit Async / NearSync (Module 7).
- Identity integration. Prism Central with SAML to the customer's IdP. Map identity groups to Prism roles for policy administration.
- What you still need: specific compliance frameworks (PCI v4.0, GLBA, SOX), Palo Alto VM-Series model and licensing, separate management VPC for PC, corporate-network BGP topology, audit-log retention requirements.
A weak answer misses
- Defaulting to flat virtual networks without VPCs for multi-tenant isolation.
- Using IP-based ACLs instead of category-driven Flow rules.
- Forgetting service insertion for the existing Palo Alto.
- Not addressing audit logging requirements.
- Treating Production/DR as the same security zone.
NCX-style architectural defense: an NSX-T-loyal architect challenges Flow.
Scenario: Customer's senior network and security architect (12 years of NSX-T):
"Flow looks fine for basic microsegmentation, but our environment uses NSX-T's distributed routing, gateway services with stateful firewall, IDS/IPS at the edge, BGP integration with our SD-WAN, L2VPN to remote sites, and a third-party-integration ecosystem we've spent years tuning. Flow doesn't have most of that. Why would we walk away from NSX-T?"
Challenge: Respond. He has named real NSX-T capabilities. Address each.
A strong answer covers
- Acknowledge the NSX-T capability set is real and substantial. 12 years of investment is not an obstacle to argue with.
- Reframe each capability:
- Distributed routing. FVN provides distributed routing for VPCs. NSX-T's routing is more mature for complex topologies.
- Gateway services / stateful firewall. FVN gateways provide NAT and basic firewall. For deep edge services, retain NSX-T-on-ESXi or insert third-party services via FVN service insertion.
- IDS/IPS at the edge. Service insertion in FVN: insert a third-party IDS/IPS into the traffic flow.
- BGP / SD-WAN integration. FVN supports BGP. SD-WAN integration depth varies by SD-WAN vendor.
- L2VPN to remote sites. Not natively in FVN; this is a legitimate gap. Customers with L2VPN needs typically retain NSX-T or use SD-WAN-overlay alternatives.
- Third-party ecosystem. NSX-T has more depth here. Service insertion bridges some of this; absolute parity is not yet achieved.
- The honest reframe. "You don't have to walk away from NSX-T. NSX-T continues to run on Nutanix-on-ESXi, which means the cluster's HCI benefits don't require a network-architecture change. Flow Network Security is genuinely competitive for VM microsegmentation and is an option for new workloads."
- Concrete proposal. "Let me build a feature-by-feature mapping with you. We'll mark which NSX-T capabilities Flow matches, which it partially matches, which require service insertion, and which require keeping NSX-T."
A weak answer misses
- Claiming Flow has all NSX-T features.
- Dismissing NSX-T as outdated.
- Not acknowledging the L2VPN gap.
- Missing the NSX-T-on-Nutanix-on-ESXi coexistence pattern.
- Not proposing the feature-mapping exercise as a concrete next step.
What You Now Have
You can translate vSphere networking vocabulary to AHV/OVS terminology fluently. vDS port group becomes Virtual Network, vSwitch becomes OVS bridge, NIC teaming becomes bond mode.
You know the OVS substrate: br0 for VM data traffic, br0.local for management. Bonds with active-backup, balance-slb, balance-tcp, and LACP. VLAN configuration on virtual networks. IPAM as an optional per-virtual-network DHCP service.
You know Flow Network Security: distributed firewall, category-driven policy, stateful rules, OVS-level enforcement, and the operational pattern of "monitor first, then enforce" for production. You can build a three-tier microsegmentation policy in 90 seconds and demo it.
You know Flow Virtual Networking: VPC-style overlays, internal routing, NAT, BGP, service insertion. You know FVN is younger than NSX-T but increasingly capable.
You have the honest comparison to NSX-T: Flow Network Security is competitive for microsegmentation; FVN is behind on advanced routing, edge services, L2VPN, and third-party ecosystem. NSX-T-on-Nutanix-on-ESXi is the durable answer for customers with established NSX-T investment.
You know the licensing reality: Flow Network Security at NCI Ultimate (or via the Security Add-On for NCI Pro) is included; NSX-T microsegmentation is separately licensed. The economics matter and you can walk through them.
You are now ready for data protection and DR. Module 7 covers Protection Domains, Async/NearSync/Metro replication, Recovery Plans (Nutanix Disaster Recovery, formerly Leap), and the durable DR story that frequently decides enterprise deals.
References
Authoritative sources verified during the technical review pass on this module. Flow licensing in particular is the most volatile area; reverify against the current Nutanix Cloud Platform software-options page before quoting tier-specific costs.
- Nutanix Bible · AHV Architecture (networking). Authoritative source for OVS bridges (`br0`, `br0.local`), bond modes, default networking topology.
- Nutanix Bible · AHV Administration. `manage_ovs` CLI reference, bond-mode configuration workflows.
- AHV Bond Modes (Virtual Ramblings). Independent walkthrough of active-backup, balance-slb, balance-tcp, and LACP semantics.
- Changing AHV Bonding Mode to LACP / Balance-TCP (Nutanix Community). Source for Nutanix's caution on LACP and the upstream-switch coordination requirement.
- Flow Network Security Product Page. Current product positioning.
- Nutanix Cloud Platform Software Options. Authoritative source for Flow Network Security licensing: included in NCI Ultimate, or via the Security Add-On for NCI Pro (per-usable-TiB, bundles Data-at-Rest Encryption).
- Nutanix Bible · Flow Network Security. Distributed enforcement at OVS, category-driven policy, stateful rules.
- Nutanix Bible · Flow Virtual Networking. VPC overlay architecture, NAT, BGP, service insertion.
- Flow Virtual Networking Guide (PC 2024.2). Current FVN product documentation.
- Exploring BGP Routing in FVN (Nutanix.dev). FVN BGP integration details.
- NCP-NS 7.5 Open for Scheduling (Nutanix University). Confirms April 4, 2026 public exam launch.
- TN-2094 Flow Network Security Tech Note. Detailed Flow technical reference.
Cross-References
- Previous: Module 5: DSF Deep Dive
- Next: Module 7: Data Protection and DR
- Glossary: Open vSwitch · Bridge · Bond · Virtual Network · VLAN · IPAM · Flow Network Security · Flow Virtual Networking · Microsegmentation · Service Insertion see appendix
- Comparison Matrix: Networking Row · Microsegmentation Row · Network Overlay Row see appendix
- Objections: #16 "What about NSX-T?" · #17 "Network performance vs my Cisco fabric" · #20 "We've already invested in microsegmentation" · #25 "Software-defined networking trust" see appendix
- Discovery Questions: Q-NET-01 through Q-NET-04, Q-SEC-01 (physical topology, NSX-T footprint, microsegmentation requirements, multi-tenancy / VPC, compliance) see appendix
- Competitive Matrix: Flow vs NSX-T feature comparison see appendix