Traditionally you authenticate with the IBM Cloud SoftLayer “classic infrastructure” API using a SoftLayer or “classic infrastructure” API key. However, IBM Cloud has introduced support for authenticating to these APIs with standardized IAM API keys and identities. At one point IBM implemented a method to exchange IAM credentials for an IMS token, but IBM’s Martin Smolny writes more recently that the classic APIs now “support IAM tokens directly.”
I’ve written a brief script to demonstrate this approach. The script first calls the IAM token API to exchange an API key for an IAM token, and then constructs a SoftLayer API client object that uses this token for authentication. Note that for the Python SDK, some code paths that create an API client use the XML-RPC API endpoint by default rather than the REST API endpoint. The XML-RPC API does not fully support IAM-based authentication. The script therefore uses the REST API endpoint and transport, which does support IAM-based authentication.
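As a rough sketch of the flow, here is what the two steps look like using the requests library directly rather than the SoftLayer SDK. The SoftLayer_Account call is only an example, and this sketch assumes the REST endpoint accepts the IAM access token as a standard bearer token:

import requests

# Step 1: exchange an IBM Cloud IAM API key for an IAM access token
iam = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    data={"grant_type": "urn:ibm:params:oauth:grant-type:apikey",
          "apikey": "YOUR_IAM_API_KEY"},
    headers={"Accept": "application/json"})
iam.raise_for_status()
token = iam.json()["access_token"]

# Step 2: call the classic infrastructure REST API, presenting the IAM token as a bearer token
resp = requests.get(
    "https://api.softlayer.com/rest/v3.1/SoftLayer_Account.json",
    headers={"Authorization": "Bearer " + token})
resp.raise_for_status()
print(resp.json()["companyName"])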
Here are some important resiliency considerations if you are using NFS datastores for your VMware vSphere cluster. You should be aware of these considerations so that you can evaluate the tradeoffs of your NFS version choice in planning your storage architecture.
NFSv3 considerations
For NFSv3 datastores, ESXi supports storage I/O control (SIOC), which allows you to enable congestion control for your NFS datastore. This helps ensure that your hosts do not overrun the storage array’s IOPS allocation for the datastore. Hosts that detect congestion will adaptively back off the operations they are driving. You should test your congestion thresholds to ensure that they are sufficient to detect and react to problems.
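For example, with PowerCLI you can enable SIOC and adjust the congestion threshold for a datastore. The datastore name and the 30 ms threshold here are only illustrative, and you should confirm these Set-Datastore parameters in your PowerCLI version:

Get-Datastore -Name 'nfs3-ds01' | Set-Datastore -StorageIOControlEnabled $true -CongestionThresholdMillisecond 30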
However, NFSv3 does not support multipathing. This is not just a limitation on possible throughput, but a limitation on resiliency. You cannot configure multiple IP addresses for your datastore, and even if your datastore is known by a hostname, you cannot rely on DNS-based load balancing to redirect hosts to a new IP address during interface maintenance at your storage array, because ESXi does not re-resolve the hostname after a connection failure. Thus, with NFSv3 you risk losing the connection to your datastore during interface maintenance on your storage array.
NFSv4.1 considerations
NFSv4.1 datastores have the opposite characteristics for the above issues:
NFSv4.1 supports multipathing, so you are able to configure multiple IP addresses for your datastore connection (see the example commands after this list). This can improve network throughput, but more importantly it helps to ensure that your connection to the datastore remains available if one of those paths is lost.
However, at this time NFSv4.1 does not support SIOC congestion control. Therefore, if you are using NFSv4.1, you run the risk of triggering a disconnection from your datastore if your host (or, worse, multiple hosts) exceeds your storage array’s IOPS allocation for the datastore.
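To illustrate the multipathing difference, here is a sketch of mounting the same export as NFSv3 versus NFSv4.1 with two server addresses using esxcli; the server addresses, export path, and datastore names are placeholders:

esxcli storage nfs add --host=10.20.30.40 --share=/exports/ds01 --volume-name=nfs3-ds01
esxcli storage nfs41 add --hosts=10.20.30.40,10.20.30.41 --share=/exports/ds01 --volume-name=nfs41-ds01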
With the new vSAN Express Storage Architecture (ESA), you may need to carefully plan your migration path from vSAN 7 to vSAN 8. At the moment, VMware only supports greenfield deployments of vSAN ESA. As a result, even if you have a vSAN cluster with NVMe storage, you will need to migrate your workloads to a new cluster to reach vSAN ESA. Furthermore, if you are moving from SSD to NVMe, you’ll need to ensure your order of operations is correct.
The following graph illustrates your possible migration and upgrade paths:
Your fastest path to ESA is to leave your existing cluster at the vSphere 7 level and create a vSphere 8 ESA cluster after upgrading to vCenter 8.
It’s important to consider both your vSphere and vSAN licensing during this process. For one, you will incur dual licensing for the duration of the migration. But you should also be aware that your vSAN license is tied to your vCenter version rather than your vSphere version. KB 80691 documents the fact that after upgrading to vCenter 8, your vSAN cluster will be operating under an evaluation license until you obtain vSAN 8 licenses. You should work with VMware to ensure both proper vSphere and vSAN licensing throughout this transition process.
I’ve expanded my sample IBM Cloud for VMware Solutions API usage to demonstrate how you can remove NFS storage, hosts, clusters, and VCS instances dynamically.
I’ve expanded my sample IBM Cloud for VMware Solutions API calls to demonstrate how you can add file storage dynamically to clusters in your vCenter Server (VCS) instance.
It’s been a while since I first posted sample IBM Cloud for VMware Solutions API calls. Since then, our offering has moved from NSX-V to NSX-T, and to vSphere 7.0. This results in some changes to the structure of the API calls you need to make for ordering instances, clusters, and hosts.
IBM Cloud’s KMIP for VMware offering provides the foundation for cloud-based key management when using VMware vSphere encryption or vSAN encryption. KMIP for VMware is highly available within a single region:
KMIP for VMware and Key Protect are highly available when you configure vCenter connections to both regional endpoints. If any one of the three zones in that region fails entirely, key management continues to be available to your VMware workloads.
KMIP for VMware and Hyper Protect Crypto Services (HPCS) are highly available if you deploy two or more crypto units for your HPCS instance. If you do so and any one of the three zones in that region fails entirely, key management continues to be available to your VMware workloads.
If you need to migrate or fail over your workloads outside of a region, your plan depends on whether you are using vSAN encryption or vSphere encryption:
When you are using vSAN encryption, each site is protected by its own key provider. If you are using vSAN encryption to protect workloads that you replicate between multiple sites, you must create a separate KMIP for VMware instance in each site, each connected to a separate Key Protect or HPCS instance in that site. You must connect your vCenter Server in each site to the local KMIP for VMware instance as its key provider.
When you are using vSphere encryption, most VMware replication and migration techniques today (for example, cross-vCenter vMotion and vSphere Replication) rely on having a common key manager between the two sites. This topology is not supported by KMIP for VMware. Instead, you must create a separate KMIP for VMware instance in each site, each connected to a separate Key Protect or HPCS instance in that site. You must connect your vCenter Server in each site to the local KMIP for VMware instance as its key provider, and then use a replication technology that supports the attachment and replication of decrypted disks.
Veeam Backup and Replication supports this technique; to implement it correctly, follow the steps indicated in the Veeam documentation.
Note that this technique currently does not support the replication of virtual machines with a vTPM device.
We saw previously that we could use PowerCLI to rekey objects to a different key provider. It is much more common that you simply want to rekey objects within the same key provider, perhaps to meet a compliance requirement. We can use the same set of commands without specifying a key provider to perform rekey operations.
The simplest and fastest of the three is a vSAN rekey, which only needs to reissue one root key for each cluster protected by vSAN encryption:
This performs a shallow rekey. You can perform a deep rekey by changing $false to $true. This will take much longer to complete.
We can also rekey each of our VMs that is protected by vSphere encryption, as follows:
PS C:\Users\Administrator> foreach($myvm in Get-VM){
>> if($myvm.KMSserver){
>> echo $myvm.name
>> Set-VMEncryptionKey -VM $myvm
>> }
>> }
scott-test
Type Value
---- -----
Task task-23093
PS C:\Users\Administrator>
This took a couple of minutes to complete for each VM. You can perform a deep rekey, which will take longer to complete, by adding the -Deep parameter to the Set-VMEncryptionKey cmdlet.
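For example, to deep-rekey a single VM object $myvm:

Set-VMEncryptionKey -VM $myvm -Deep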
Finally, if you wish to rekey the host encryption keys used to protect core dumps, you can run the following:
VMware Solutions instances in IBM Cloud are deployed with a built-in Active Directory domain with one or two domain controllers. Recently IBM Cloud changed the domain name requirements to require three qualifiers (e.g., cloud.example.com) rather than two (e.g., example.com); this ensures that you can integrate with your existing domain and forest without conflict. The domain controllers are configured as the SSO provider for vCenter and NSX, and also as the DNS provider for the infrastructure components. IBM Cloud creates an administrator user ID in this domain, which it uses for subsequent operations such as logging into vCenter to add a new host, updating DNS records for that host, and creating utility accounts for add-on services like Veeam.
This Active Directory domain is your responsibility to secure and manage, including backup, patching, group policy, etc.
You have several options for how this domain relates to your existing Active Directory forest, listed here in order from loosest to tightest coupling:
1. No integration
You are free to leverage your instance domain directly for user management within the instance. You can point additional components to the instance’s domain controllers for SSO; for example, the IBM Cloud automation does this for you when it deploys and configures HyTrust Cloud Control. You can join other devices to the domain and also use this for DNS management beyond the instance infrastructure.
2. Additional SSO provider
This option and all of the following options entail some kind of integration between your instance and your existing Active Directory forest. You will first need to establish network connectivity between your instance and your existing Active Directory forest. You might accomplish this with either a VPN connection or a direct link between IBM Cloud and your on-premises environment. As always, you should take great care to secure your domain controllers, so you should explore security measures such as read-only domain controllers, session recording, bastion servers, and gateway firewalls.
You can leverage your own Active Directory domain for SSO purposes by configuring your domain controllers as additional SSO providers for vCenter and NSX manager and by granting your users and groups appropriate permissions. You will need to determine how you configure DNS; some customers manually duplicate the DNS records from their instance domain into their existing Active Directory domain, but it is also possible to establish mutual DNS delegation between the two Active Directory domains.
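For example, a delegation for the instance’s child zone might be created on a DNS server in your existing domain with something like the following; the zone names, name server, and address are placeholders, and mutual delegation requires a corresponding configuration in the other direction:

Add-DnsServerZoneDelegation -Name 'example.com' -ChildZoneName 'cloud' -NameServer 'dc1.cloud.example.com' -IPAddress 10.1.2.3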
This approach may allow you to limit the connections from the cloud to your domain controllers so that you only need to open the LDAPS and DNS ports.
3. One-way trust
You can establish one-way trust from your instance’s Active Directory domain controllers to your existing Active Directory domain. This will enable you to expose and authorize your existing users and groups to vCenter and NSX manager without having to add these directly as SSO providers. You may need to make additional provision for DNS updates, either copying them to your existing domain or establishing DNS delegation to the instance’s domain.
4. Two-way trust
This option requires your existing domain to establish mutual trust with your instance’s domain. If you are comfortable doing this, it could simplify your DNS management between the two domains.
5. Forest merge
I am not aware of any IBM Cloud customers who have done this, and I do not recommend it since it is a disruptive and potentially risky operation. The idea here is to merge the instance’s forest with your existing forest and to configure the instance’s domain as a child domain of your existing domain.
6. Rebuild
IBM Cloud’s VMware Solutions Shared offering implements a variation of this rebuild approach. It deploys VCS instances and builds VMware Cloud Director environments on top of them. This solution leverages an existing internal Active Directory forest and domain. After each new VCS instance is deployed, our process removes the VCS instance from its domain and reconfigures it to point to the existing domain.
A variation of this option is to create a new child domain in your existing forest for your VCS instance, and leverage the controllers for this child domain for use with your VCS instance.
There are a few important points to observe:
You should either deploy your instance with the same domain name that you intend to convert it to, or else accept that your infrastructure components will have host names in a different DNS domain from your Active Directory domain. Changing the DNS domain of infrastructure components is not supported by IBM Cloud automation.
You will need to re-create the IBM Cloud automation user in your existing domain as an administrator and ensure that this user has administrative permissions in vCenter and NSX manager. This user may in the future create additional users or DNS entries. After performing the reconfiguration, you should open a support ticket to the VMware Solutions team asking them to update the automation user’s password in the IBM Cloud database for your instance, and provide the updated password.
Because this process is complex, it is error-prone; you should consider this option only if the options above do not work for you. Additionally, you should practice it on a non-production or pre-production VCS deployment, including testing the addition of a new host to the environment, before you implement it in production.
IBM Cloud offers IBM–managed VMware Cloud Director through its VMware Solutions Shared offering. This offering is currently available in IBM Cloud’s Dallas and Frankfurt multi-zone regions, enabling you to deploy VMware virtual machines across three availability zones in those regions.
IBM Cloud also offers a virtual private cloud (VPC) for deployment of virtual machine and container workloads. Although VMware Cloud Director is operated in IBM Cloud’s “classic infrastructure,” it is still possible to interconnect your Cloud Director workload with your VPC workload using private network endpoints (PNEs) that are visible to your VPC.
In this article we’ll discuss how to implement this solution. This solution allows for bidirectional connectivity, but for illustrative purposes consider the use case of hosting an application in IBM Cloud VPC and a database in VMware Cloud Director:
The load balancer distributes connections to applications running on virtual server instances (VSIs) in our example, or optionally to Kubernetes services. The application is deployed in two zones for high availability.
Each zone in the VPC has a router that will tunnel traffic to and from Cloud Director using BGP over IPsec. For the purposes of this exercise we used a Red Hat Enterprise Linux 8 VSI, but you could deploy virtual gateway appliances from a vendor of your choice.
The VPC routers connect over the private IBM Cloud network through private network endpoints (PNEs) to edge appliances in Cloud Director.
The Cloud Director workload is distributed across three virtual datacenters (VDCs), one in each availability zone. Two edge services gateways (ESGs), one in each of two zones, serve as the ingress and egress points. These operate in active–standby state so that a stateful firewall can be used.
The database is deployed across three zones for high availability.
Caveats
The solution described here uses the IBM Cloud private network. This is a nice feature of the solution, but for reasons that may not be initially obvious, it is also required at the moment. If you wish to connect a single availability zone between VCD and VPC, you could do so using a public VPN connection between your VCD edge and the IBM Cloud VPC VPN gateway service. However, the VPC VPN service currently does not support BGP peering, so it is not possible to create a highly available connection that can fail over to a different VCD edge endpoint.
Also, the solution outlined here deploys only a single router device in each VPC zone. For high availability, you likely want to deploy multiple virtual router appliances, and for routing purposes share a virtual IP address which you reserve in your VPC subnet. At this time, IBM Cloud VPC does not support multicast or protocols other than ICMP, TCP, and UDP. These limitations exclude protocols like HSRP and VRRP; you should ensure that your router’s approach to HA is able to operate using unicast ICMP, TCP, or UDP.
Deploy your VPC resources
Create a VPC in Dallas or Frankfurt. The VPC will automatically generate address prefixes and subnets for you; I recommend you de-select “Create a default prefix for each zone” so that you can choose your own later:
Next, navigate to your VPC and create address prefixes of your choice:
In order to create subnets, you must navigate away from the VPC to the subnets page. In our case, since we are hosting workloads in only two zones, we needed only two subnets:
Next, create four virtual server instances (VSIs), two in each zone. Within each zone, one VSI will serve as the application and the other will serve as a virtual router. For the purposes of this example we use Red Hat Enterprise Linux 8.
You need to modify the router VSI network interfaces, either when you create them or afterwards, to enable IP spoofing. This allows the routers to forward traffic with source and destination addresses other than their own:
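If you prefer the CLI, you can toggle the same setting with the VPC plugin; the instance and interface names here are placeholders, and you should verify the exact flags against the current plugin help:

ibmcloud is instance-network-interface-update smoonen-router1 eth0 --allow-ip-spoofing true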
Be sure to update the operating system packages and reboot each VSI.
Finally, create an IBM Cloud load balancer instance pointing to each of your application VSIs. Because this is a multi-zone load balancer you must use the DNS-based application load balancer:
Deploy your Cloud Director resources
Next create three VMware Solutions Shared virtual data centers (VDCs). Note that while VPC availability zones are named 1, 2, and 3, VDC availability zones are named according to the IBM Cloud classic infrastructure data center names. Thus, we will deploy to Dallas 10, 12, and 13, which correspond to the three VDC zones:
After creating your three virtual data centers, you need to view any one of these VDCs and reset the administrator password to gain access to the single Cloud Director organization for your account. Using this administrator account you can create additional users and optionally integrate with your own SSO provider:
Next, use these credentials to login to the Cloud Director console. We will create a Data Center Group and assign all three of our VDCs to it so that they have a shared stretch network and network egress. Navigate to Data Centers | Data Center Groups and create a new data center group. Ensure that you select the “Create Local Group” option; although the VDCs are actually in different availability zones, they are designated in the same fault domain from a Cloud Director perspective and we will use active-standby routing. There is only one network pool available for you to use:
After creating the data center group, create a stretched network that will be shared by all three VDCs:
Add your DAL10 edge as the active egress point, and your DAL12 edge as the passive egress point:
Next, navigate to each of your VDCs, view the stretched network, and create an IP pool for each VDC that is a subset of your stretched network:
Next, configure your DAL10 and DAL12 edges (see the IBM Cloud docs for details) to allow and to SNAT egress traffic from your VDCs to the IBM Cloud service network (e.g., for DNS and Red Hat Satellite) and to the public network. If you wish to DNAT traffic from the public internet to reach your virtual machines, keep in mind that the DAL10 edge is the active edge and you should not use DAL12 for ingress except in case of DAL10 failure.
Minimally you want your workload to reach the IBM private service network which includes 52.117.132.0/24 and 161.26.0.0/16. Because we are using private network endpoints (PNEs) you also need to permit 166.9.0.0/15; this address range is also used by any other IBM Cloud services offering private endpoints. For this example I simply configured the edge firewalls to permit all outbound traffic to both private and public:
You must configure an SNAT rule for the private service network (note that this rule is created on the service interface):
and, if needed, an SNAT rule for the public network (note that this rule is created on the external interface):
Next, create the virtual machines that will serve as your database, one in each VDC. For the purposes of this example, we deployed RHEL 8 virtual machines from the provided templates and connected them to IBM Cloud’s Satellite server following the directions in the /etc/motd file. There are a few caveats to the deployment:
You should connect the virtual machine interfaces to the stretched network before starting them so that the network customization configures their IP address. Choose an IP address from the pool you created earlier.
At first power-on, you should “power on and force recustomization;” afterwards you can view the root password from the customization properties.
When using a stretched network, customization does not set the DNS settings for your virtual machines. For RHEL we entered the IBM Cloud DNS servers into /etc/sysconfig/network-scripts/ifcfg-ens192 as follows:
DNS1=161.26.0.10
DNS2=161.26.0.11
Configure BGP over IPsec connectivity between VCD and VPC
In order to expose your Cloud Director edges to your VPC using the IBM Cloud private network, you must create private network endpoints (PNEs) for your DAL10 and DAL12 VDCs. First, in the IBM Cloud console, view your VPC details. A panel on that page lists the “Cloud Service Endpoint service addresses.” These addresses are not visible within your VPC itself, but they represent your VPC on the IBM Cloud private network, and you will need to permit them to access your PNEs. Take note of these addresses:
Now, navigate to your DAL10 and DAL12 VDCs in the IBM Cloud console and click “Create a private network endpoint.” Select the device type of your choice and enter the IP addresses you noted above:
The PNE may take some time to create because it is an operator-assisted activity. After it has been created successfully, you will need to create a second PNE in each of the two zones. A second PNE is needed because the PNE hides the source IP address of incoming connections, so we cannot configure policies for two different IPsec tunnels using the same PNE. The IBM Cloud console does not allow you to create a second PNE automatically, so you must open a support ticket to the VMware Solutions team. Phrase your ticket as follows:
Hi, I have already created a PNE for my VCD edges edge-dal10-xxxxxxxx and edge-dal12-yyyyyyyy. Please create a second service IP for each of these edges with an additional PNE for each edge. Please use the same whitelist for the existing PNEs. Thank you!
Note that in our example we are connecting only Dallas 1 and Dallas 2 zones from our VPC to Cloud Director. If you wanted to connect Dallas 3 as well, you would need to request three rather than two PNEs for each of your DAL10 and DAL12 edges.
Now we need to configure each of our two NSX edges and our two VPC routers to have dual BGP over IPsec connections to their peers. You need to select which PNE will be used for each VPC router connection.
On the VCD side, the IPsec VPN site configuration for one of the VPC routers looks as follows. In this case, the 52.x address is the PNE’s “service network IP” and the 166.x address is the PNE’s “private network IP:”
And the corresponding BGP configuration is as follows:
Finally, you must be sure to permit the VCD and VPC interconnectivity in both edge firewalls:
For the purposes of this example we are using RHEL 8 VSIs as simple routers on the VPC side. First of all, we need to modify /etc/sysctl.conf to allow IP forwarding:
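The relevant setting is:

net.ipv4.ip_forward = 1

Apply it with sysctl -p (or reboot).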
Next we installed the libreswan package for IKE/IPsec support, and the frr package for BGP support.
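For example, on RHEL 8:

dnf install -y libreswan frr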
In order to use dynamic routing, the IPsec tunnel must be configured using a virtual tunnel interface (VTI). The IPsec configuration for our Dallas 1 router is as follows. The left and leftid values are the address and identity of the router appliance itself. The right value (obscured) is the address of the VCD edge as known to the router, which is the PNE’s “private network IP.” The rightid value (also obscured) is the identity of the VCD edge, which we previously set to the PNE’s “service network IP:”
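The listing below is a sketch with placeholder addresses; the pre-shared keys live in /etc/ipsec.d/*.secrets, and you must match the IKE version and proposals to your VCD edge configuration:

conn routed-vpn-esg1
    # Dallas 1 router VSI address and identity (placeholder)
    left=192.168.1.4
    leftid=192.168.1.4
    # tunnel-interface address, matching the FRR configuration below
    leftvti=10.10.10.1/30
    leftsubnet=0.0.0.0/0
    # DAL10 edge: right is the PNE "private network IP", rightid the "service network IP" (placeholders)
    right=166.9.0.1
    rightid=52.117.0.1
    rightsubnet=0.0.0.0/0
    authby=secret        # pre-shared key defined in /etc/ipsec.d/*.secrets
    ikev2=insist         # or ikev2=no, to match the IKE version configured on the edge
    mark=5/0xffffffff    # each tunnel needs a unique mark
    vti-interface=vti1
    vti-routing=no       # let FRR manage routing over the VTI
    auto=start

conn routed-vpn-esg2
    left=192.168.1.4
    leftid=192.168.1.4
    leftvti=10.10.10.5/30
    leftsubnet=0.0.0.0/0
    # DAL12 edge PNE addresses (placeholders)
    right=166.9.0.2
    rightid=52.117.0.2
    rightsubnet=0.0.0.0/0
    authby=secret
    ikev2=insist
    mark=6/0xffffffff
    vti-interface=vti2
    vti-routing=no
    auto=start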
Note that each tunnel uses a different mark and VTI interface. Next, in /etc/frr/daemons, enable bgpd:
bgpd=yes
Then define your tunnel interfaces in /etc/frr/zebra.conf; these are the interfaces for our Dallas 1 router:
!
interface vti1
 ip address 10.10.10.1/30
 ipv6 nd suppress-ra
!
interface vti2
 ip address 10.10.10.5/30
 ipv6 nd suppress-ra
Finally, configure BGP in /etc/frr/bgpd.conf:
hostname smoonen-router1
router bgp 64555
bgp router-id 10.10.10.1
network 10.10.10.0/30
network 10.10.10.4/30
network 192.168.1.0/24
neighbor 10.10.10.2 remote-as 65010
neighbor 10.10.10.2 route-map RMAP-IN in
neighbor 10.10.10.2 route-map RMAP-OUT out
neighbor 10.10.10.2 soft-reconfiguration inbound
neighbor 10.10.10.2 weight 2
neighbor 10.10.10.6 remote-as 65010
neighbor 10.10.10.6 route-map RMAP-IN in
neighbor 10.10.10.6 route-map RMAP-OUT out
neighbor 10.10.10.6 soft-reconfiguration inbound
neighbor 10.10.10.6 weight 1
ip prefix-list PRFX-VCD seq 5 permit 172.16.0.0/12 le 32
ip prefix-list PRFX-VPC seq 5 permit 192.168.0.0/16 le 32
route-map RMAP-IN permit 10
match ip address prefix-list PRFX-VCD
route-map RMAP-OUT permit 10
match ip address prefix-list PRFX-VPC
log file /var/log/frr/bgpd.log debug
Taken together, we have configured:
the Cloud Director data center group to use DAL10 as the active egress and DAL12 as standby
the Cloud Director edges to advertise the entire stretched network (172.16.1.0/24) to the VPC routers
each VPC router to prefer the DAL10 edge
each VPC router to advertise its own zone’s subnet (192.168.1.0/24 or 192.168.2.0/24) to the Cloud Director edges
Now enable IPsec and FRR:
systemctl start ipsec
systemctl enable ipsec
ipsec auto --add routed-vpn-esg1
ipsec auto --add routed-vpn-esg2
ipsec auto --up routed-vpn-esg1
ipsec auto --up routed-vpn-esg2
chown frr:frr /etc/frr/bgpd.conf
chown frr:frr /etc/frr/staticd.conf
systemctl start frr
systemctl enable frr
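Once FRR is running, you can verify that both BGP sessions are established and that routes are being exchanged:

vtysh -c 'show ip bgp summary'
vtysh -c 'show ip route bgp'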
Finally, you need to visit the IBM Cloud console and find the route table configuration for your VPC:
Modify the route table configuration to direct the VCD networks to your router VSI in each zone. Remember that for this example we are hosting applications only in two zones:
After the tunnel is up and the initial BGP exchange is complete, you should have bidirectional connectivity between the two environments. Here is a ping from one of our application VSIs:
[root@smoonen-application1 ~]# ping -c 3 -I 192.168.1.5 172.16.1.10
PING 172.16.1.10 (172.16.1.10) from 192.168.1.5 : 56(84) bytes of data.
64 bytes from 172.16.1.10: icmp_seq=1 ttl=61 time=3.21 ms
64 bytes from 172.16.1.10: icmp_seq=2 ttl=61 time=2.34 ms
64 bytes from 172.16.1.10: icmp_seq=3 ttl=61 time=2.87 ms
--- 172.16.1.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 2.344/2.809/3.210/0.356 ms
[root@smoonen-application1 ~]#
We have not tuned BGP, but even so, if we disable BGP on the DAL10 edge (this effectively severs both its connection to the stretched network and its connection to the VPC), connectivity from the VPC fails over to the DAL12 edge; the gap in ICMP sequence numbers below shows how long the failover took:
64 bytes from 172.16.1.10: icmp_seq=16 ttl=61 time=2.51 ms
64 bytes from 172.16.1.10: icmp_seq=17 ttl=61 time=16.9 ms
64 bytes from 172.16.1.10: icmp_seq=18 ttl=61 time=2.63 ms
64 bytes from 172.16.1.10: icmp_seq=137 ttl=61 time=8.52 ms
64 bytes from 172.16.1.10: icmp_seq=138 ttl=61 time=6.06 ms
64 bytes from 172.16.1.10: icmp_seq=139 ttl=61 time=5.07 ms
Conclusion
We have successfully established bidirectional connectivity over the IBM Cloud private network between VMware Cloud Director and IBM Cloud VPC using BGP over IPsec.
As described above, it is possible to extend this solution by deploying a router appliance in the third VPC availability zone, in which case you would need to deploy two more PNEs, one for each of your VCD edges. Also, you will need additional PNEs if you deploy more than one router appliance into each zone for HA. Thus, you could require up to twelve PNEs (two router appliances in each of three zones, each of which has a connection to two VCD edges).
Many thanks to Mike Wiles and Jim Robbins for their assistance in developing this solution.