From VMware to IBM Cloud VPC VSI, part 2: VPC network design

See all blog posts in this series:

  1. From VMware to IBM Cloud VPC VSI, part 1: Introduction
  2. From VMware to IBM Cloud VPC VSI, part 2: VPC network design
  3. From VMware to IBM Cloud VPC VSI, part 3: Migrating virtual machines
  4. From VMware to IBM Cloud VPC VSI, part 4: Backup and restore
  5. From VMware to IBM Cloud VPC VSI, part 5: VPC object model
  6. From VMware to IBM Cloud VPC VSI, part 6: Disaster recovery

For a VMware administrator, here are some key things to understand about IBM Cloud VPC networking:

  • The VPC network is a layer 3 software-defined network rather than a layer 2 network. Although your VSIs may appear to be attached to a layer 2 network, this is not entirely the case.
  • Every IP address intended for use by a virtual machine should be represented by a virtual network interface (VNI) assigned to the VSI. The VNI is the linkage between the virtual machine and the IP address. You can assign secondary IP addresses to a VNI, and you can also assign a public “floating IP” to a VNI, which provides both SNAT and DNAT between that specific VSI and the public internet (see the floating IP sketch after this list). Depending on your instance profile, you can also assign more than one VNI to a VSI; each additional VNI is surfaced to the VSI as an additional NIC.
  • For outbound public network traffic (only), you can attach a public gateway to an entire subnet. All subnets in the same zone share the same public gateway IP. This acts as a SNAT to the public internet.
  • It is also possible for a VNI to be the target of routed (private) traffic. To accomplish this, the VNI needs IP spoofing enabled so that it can send outbound traffic from addresses other than its own, and for inbound traffic you need to configure static routes in your VPC that use the VNI as the next hop (see the routing sketch after this list).
  • In addition to floating IPs, IBM recently released support for public address ranges (PARs), which are routed public IPs. You can route an entire public range to a VSI/VNI (again requiring IP spoofing) by means of static routing. You could use this, for example, if you want a firewall or gateway appliance to inspect or regulate public network traffic.
  • There is no simple, reliable mechanism to share a VIP between multiple VSIs. Because of the need for static routing, using a routed IP as a VIP is not viable unless you programmatically automate reconfiguration of the static route on failover. Floating VNIs are supported for VPC bare metal but not for VPC VSIs. VPC offers application and network load balancers, which cover some of the potential use cases for a VIP. If you need a VIP for a firewall or gateway appliance, explore BGP as an alternative, or consider deploying the appliance on a smaller bare metal profile where floating VNIs are supported.
  • VPC offers security groups as a mechanism for network segmentation. You can think of security groups as analogous to a distributed firewall, but they work somewhat differently from a simple enumerated ruleset. You can assign multiple security groups to an interface, and traffic is permitted if any one of them allows it. Also, the rules of a security group can reference the group itself as a way of expressing “members of this group are allowed to exchange this traffic with me” (see the rule sketch after this list). This can be a powerful way of constructing segmentation, but it can also easily lead to great complexity; it is not always obvious which traffic will be permitted to reach a device.
  • IBM Cloud’s transit gateway offering provides a means of connecting networks. You can use it to connect multiple VPCs, and also to connect your VPC to your VMware workload.
    • If your VMware workload lives directly on IBM Cloud classic networks, connect your transit gateway to your classic account.
    • If your VMware workload lives on an NSX overlay on IBM Cloud classic networks, connect your transit gateway to your NSX edges using GRE tunnels.
    • If your VMware workload lives in VCF as a Service (VCFaaS), connect your transit gateway to your VCFaaS edge using GRE tunnels.
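
To make the VNI and floating IP relationship more concrete, here is a minimal sketch in Python that reserves a floating IP and binds it to an existing VNI through the VPC REST API. The endpoint path, query parameters, and request body reflect my reading of the VPC API reference; the region, API version date, IAM token, and VNI ID are placeholders, so check the current API documentation before relying on the exact shapes.

```python
import requests

# Hedged sketch: reserve a public floating IP and bind it to an existing VNI.
# Region, API version date, token, and VNI ID are placeholders; verify the
# request shape against the current VPC API reference.
REGION = "us-south"
API_VERSION = "2024-11-12"                        # any recent API version date
VPC_API = f"https://{REGION}.iaas.cloud.ibm.com/v1"
HEADERS = {"Authorization": "Bearer <IAM_TOKEN>"}  # IAM bearer token

def attach_floating_ip(vni_id: str, name: str) -> dict:
    """Reserve a new floating IP and target it at the given VNI."""
    body = {
        "name": name,
        "target": {"id": vni_id},  # the VNI that will be SNAT/DNATed
    }
    resp = requests.post(
        f"{VPC_API}/floating_ips",
        params={"version": API_VERSION, "generation": "2"},
        headers=HEADERS,
        json=body,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    fip = attach_floating_ip("<vni-id>", "migrated-vm-fip")
    print(fip.get("address"))  # the public IP now mapped to the VSI
```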
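
Similarly, routing private traffic to a VNI (for example, a firewall appliance with IP spoofing enabled) comes down to a custom route whose next hop is an IP owned by that VNI. The sketch below assumes the VPC routing table API shape as I understand it; the VPC ID, routing table ID, zone, and addresses are placeholders.

```python
import requests

# Hedged sketch: add a custom route that delivers a prefix to the private IP of
# a VNI (e.g. a firewall appliance with IP spoofing enabled). Endpoint path and
# body fields reflect my reading of the VPC API reference; IDs, zone, and
# addresses below are placeholders.
REGION = "us-south"
VPC_API = f"https://{REGION}.iaas.cloud.ibm.com/v1"
PARAMS = {"version": "2024-11-12", "generation": "2"}
HEADERS = {"Authorization": "Bearer <IAM_TOKEN>"}

def add_route_to_vni(vpc_id: str, routing_table_id: str,
                     destination: str, next_hop_ip: str, zone: str) -> dict:
    """Create a static route whose next hop is an IP owned by the target VNI."""
    body = {
        "destination": destination,            # e.g. "192.168.100.0/24"
        "zone": {"name": zone},                # routes are zonal, e.g. "us-south-1"
        "action": "deliver",
        "next_hop": {"address": next_hop_ip},  # IP assigned to the VNI
    }
    resp = requests.post(
        f"{VPC_API}/vpcs/{vpc_id}/routing_tables/{routing_table_id}/routes",
        params=PARAMS, headers=HEADERS, json=body,
    )
    resp.raise_for_status()
    return resp.json()
```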
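
And here is a sketch of the self-referencing security group rule mentioned above: a rule whose remote is the security group itself, meaning “other members of this group may send me this traffic.” The request shape follows my reading of the VPC API reference, and the security group ID and port are placeholders.

```python
import requests

# Hedged sketch: a security group rule whose remote is the group itself, i.e.
# "members of this group may send me TCP/443". Endpoint and body fields follow
# my reading of the VPC API reference; the group ID is a placeholder.
REGION = "us-south"
VPC_API = f"https://{REGION}.iaas.cloud.ibm.com/v1"
PARAMS = {"version": "2024-11-12", "generation": "2"}
HEADERS = {"Authorization": "Bearer <IAM_TOKEN>"}

def allow_intra_group_https(sg_id: str) -> dict:
    """Allow inbound TCP/443 from other members of the same security group."""
    rule = {
        "direction": "inbound",
        "protocol": "tcp",
        "port_min": 443,
        "port_max": 443,
        "remote": {"id": sg_id},  # self-reference: the group is its own source
    }
    resp = requests.post(
        f"{VPC_API}/security_groups/{sg_id}/rules",
        params=PARAMS, headers=HEADERS, json=rule,
    )
    resp.raise_for_status()
    return resp.json()
```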

As you plan a VMware migration to VPC VSI, transit gateway will likely provide the interconnectivity between your environments. In most cases, plan to move at least a subnet’s worth of virtual machines at a time, because you will not be able to stretch an individual subnet between your VMware and VPC environments.

You should also be aware that in every subnet, VPC strictly reserves the first address (the network address), the second address (which it uses as the gateway address), the third and fourth addresses, and the last (broadcast) address. You cannot assign these addresses to your VSI VNIs, and thus, even though VPC gives you the freedom to use private networks of your choice, you may still need to plan to re-IP some of your virtual machines on migration. The snippet below enumerates the assignable addresses in a given subnet.
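
As a quick planning aid, here is a minimal sketch, using only the Python standard library, that applies the reservation rule above (the first four addresses plus the last) to list what is actually assignable in a subnet. The example CIDR is arbitrary.

```python
import ipaddress

def vpc_usable_hosts(cidr: str) -> list[str]:
    """Return addresses in `cidr` excluding the ones VPC reserves:
    the first four (network, gateway, two IBM-reserved) and the
    last (broadcast) address."""
    net = ipaddress.ip_network(cidr, strict=True)
    all_addrs = list(net)                          # includes network and broadcast
    reserved = set(all_addrs[:4]) | {all_addrs[-1]}
    return [str(a) for a in all_addrs if a not in reserved]

if __name__ == "__main__":
    usable = vpc_usable_hosts("10.10.20.0/28")
    print(len(usable), "assignable addresses")     # 16 - 5 = 11
    print(usable)
```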

This is just a short list of key items. The VPC documentation is quite good and thorough; you should spend some time reviewing it to familiarize yourself with other concepts such as how Cloud Service Endpoints and Virtual Private Endpoints work, and to look at related offerings like DNSaaS and IBM’s load balancers.

It’s also worth exploring IBM Cloud’s solution library, which contains many VPC patterns. For example, the VPC hub-and-spoke pattern leverages a transit VPC to provide gateway and firewall capabilities for multiple VPCs, whether they are connecting to each other, to an on-premises network, or to the public network.