I was lucky to experience QNX 4 as a high schooler in the early 1990s thanks to the kindness and generosity of my mentor.
QNX is truly an amazing platform. So I was excited to see this recent article: The QNX Operating System. A blast from the past!
In this post we will leverage the OpenShift APIs for Data Protection (OADP), using Velero and Kopia, to back up and restore our virtual machines.
RedHat’s OpenShift APIs for Data Protection (OADP) leverages Velero to provide backup capabilities. In my OpenShift web console, I visited the OperatorHub and installed the RedHat OADP Operator.


I then created an IBM Cloud Object Storage bucket, created a service ID, and created HMAC credentials for it.
Following the OADP instructions, I created a credentials-velero file holding the HMAC credentials and a default secret based on it.
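For reference, the credentials-velero file and secret look something like the following; the HMAC values are placeholders, and the secret name and key match what the DataProtectionApplication below expects:

cat > credentials-velero <<EOF
[default]
aws_access_key_id=<HMAC access_key_id>
aws_secret_access_key=<HMAC secret_access_key>
EOF
oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero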
I then created and applied a DataProtectionApplication YAML modeled from the OADP instructions and including my bucket details. Some noteworthy points:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  namespace: openshift-adp
  name: dpa-scotts-cos-bucket
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
        - csi
        - kubevirt
    nodeAgent:
      enable: true
      uploaderType: kopia
  backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: smoonen-oadp-xy123d
          prefix: velero
        config:
          insecureSkipTLSVerify: 'true'
          profile: default
          region: us-south
          s3ForcePathStyle: 'true'
          s3Url: https://s3.direct.us-south.cloud-object-storage.appdomain.cloud
        credential:
          key: cloud
          name: cloud-credentials
I followed the steps to verify that this was deployed properly.
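A minimal check looks like this; the DataProtectionApplication creates a BackupStorageLocation whose phase should report Available:

oc get pods -n openshift-adp
oc get backupstoragelocations -n openshift-adp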

Next I created a schedule to run an hourly backup of the default namespace in which my new and migrated VMs live. I could have chosen to provide a selector to back up specific VMs, but for now I am not doing so. Notice that the defaultVolumesToFsBackup parameter is commented out; I had originally believed it should be specified, but read on for some confirmation that it is not needed, at least for ODF-backed virtual machines. Note also that this is a similar format to what is needed for a point-in-time backup, except that much of the configuration here is nested under template.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: smoonen-hourly-backup
  namespace: openshift-adp
spec:
  schedule: 30 * * * *
  template:
    hooks: {}
    includedNamespaces:
      - default
    storageLocation: dpa-scotts-cos-bucket-1
    snapshotMoveData: true
    #defaultVolumesToFsBackup: true
    ttl: 720h0m0s
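Once the schedule is applied, you can list it and the backups it creates; for example:

oc get schedules.velero.io -n openshift-adp
oc get backups.velero.io -n openshift-adp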
I found that my backup was “PartiallyFailed.”

Browsing the controller logs, it appears that there were failures related to pods not being in a running state. This was the case for me because I had some prior migration attempts that failed for various reasons, such as lack of access to the VDDK image.
I then installed the Velero CLI to see what additional insight it would give me. It seems to automatically integrate with oc. It is able to provide some insights, but interestingly, it attempts to extract some data from IBM Cloud Object Storage, which it is unable to do because I am attempting to access it using the direct URL from outside of a VPC.
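For reference, these are the kinds of commands I was running; the backup name shown here is illustrative:

velero backup get
velero backup describe smoonen-hourly-backup-20250924233017 --details
velero backup logs smoonen-hourly-backup-20250924233017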

So I switched to running oc and velero on my VPC VSI jump server. When doing this, I discovered that the reason direct access to the COS storage was working for me at all was because ROKS had already automatically created a VPE in my VPC for COS direct access. I had to expand the security group for this VPE to allow my jump server to connect.

After doing so, the commands are now successful. Most of the errors and warnings were as I expected, but there were also warnings for block volumes for my two virtual machines that caused me to second-guess the use of FS backup as noted above.

Therefore I updated my schedule to remove the FS backup as noted above. This significantly reduced my errors. I also identified and cleaned up a leftover PVC from a failed migration attempt. Digging into the PVCs also led me to archive and delete my migration plan and migration pod in order to free up the PVC from the successful migration.

My next backup completed without error.

Kopia seems to be appropriately processing snapshots incrementally; or if not, it is doing an amazing job at deduplication and compression. For my two VMs, with a total storage of 55GB, my COS bucket storage increased by 0.1GB between two successful backups. Collecting a longer series of backups, the storage increase reported by COS seems to be around 0.17GB per increment.

I next attempted to restore one of these backups to a new namespace.
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: test-restore
  namespace: openshift-adp
spec:
  backupName: smoonen-hourly-backup-20250924233017
  restorePVs: true
  namespaceMapping:
    default: test-restore-application
In this case the persistent volumes were restored, but the VMs were not re-created due to an apparent MAC address conflict with the existing VMs.

I learned that the velero.kubevirt.io/clear-mac-address label is commonly used when restoring virtual machines alongside their originals, so I added it to my restore definition.
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: test-restore
  namespace: openshift-adp
  labels:
    velero.kubevirt.io/clear-mac-address: "true"
spec:
  backupName: smoonen-hourly-backup-20250924233017
  restorePVs: true
  namespaceMapping:
    default: test-restore-application2
This restore was successful.
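To confirm, you can describe the restore with the Velero CLI and list the restored virtual machines in the target namespace; for example:

velero restore describe test-restore
oc get vm -n test-restore-application2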

The virtual machines are reconstituted in the new namespace.

Because this was a backup and restore of the entire namespace, even my ALB was reconstituted!

Thus I am able to SSH to that endpoint / VM.

In this post, we will install the OpenShift Migration Toolkit for Virtualization and use it to migrate a VMware virtual machine to OpenShift Virtualization.
In the OpenShift web UI, navigate to Operators | OperatorHub and search for “migration.” Select the “Migration Toolkit for Virtualization Operator” then click “Install.” I didn’t customize any of the parameters.

Afterwards this prompted me to create a custom resource for the ForkliftController.

In time a Migration for Virtualization menu item appears in the web UI.

I deployed an Ubuntu VM into an overlay network in an IBM Cloud “VCS” instance (AKA “VCF on Classic Automated”) and connected my classic account to my VPC using an IBM Cloud Transit Gateway. This particular VCS instance was leveraging NFS storage.

Interestingly, VMware disables changed block tracking (CBT) by default for virtual machines. I found later in my testing that the migration provider warned me that CBT was disabled. I followed Broadcom’s instructions to manually enable it, although this required me to reboot my VM.
In order to create a migration provider, RedHat recommends you create a “VDDK image.” Recent versions of the Migration operator will build this for you, and all you need to do is provide the VDDK toolkit downloaded from Broadcom. See RedHat’s instructions.
Although the migration provider is able to connect to vCenter by IP address rather than hostname, the final migration itself will attempt to connect to a vSphere host by its hostname. Therefore we need to prepare the environment to delegate the VCS instance’s domain to its domain controllers. I followed the RedHat instructions to configure a forwarding zone in the cluster DNS operator. Here is the clause that I added.
servers:
  - forwardPlugin:
      policy: Random
      upstreams:
        - 10.50.200.3
        - 10.50.200.4
    name: vcs-resolver
    zones:
      - smoonen.example.com
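For reference, this clause goes under spec.servers in the cluster’s default DNS configuration, which you can edit with a command like:

oc edit dns.operator/default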
I then went into the Providers view in the OCP web UI and created a VMware provider. Be sure to add /sdk to the end of your vCenter URL as shown below. Note also that the migration operator automatically creates a “host” provider for you, representing your OCP cluster, in the openshift-mtv project. In order to meaningfully migrate your VMs to this provider, it is best to create your VMware provider in the same project.

In the OpenShift web console I created a migration plan.

Then I selected my virtual machine.

Then I created a network mapping. The only currently supported network mapping in IBM Cloud ROKS is the pod network.

Then I created a storage mapping, being sure to select the ODF storage.

Then I chose a warm migration.

The preservation of static IPs is not currently supported in ROKS with the Calico provider.

I chose not to create migration hooks. You could use these, for example, to reconfigure the network configuration.
In my migration plan I chose to migrate the VM to the default project. My migration plan actually failed to initialize because it could not retrieve the VDDK image that had been built for me. Either before or after creating the migration plan, run the following command to ensure that it can access the cluster’s image registry:
oc adm policy add-cluster-role-to-user registry-viewer system:serviceaccount:default:default
Then I clicked to start the migration.

The migration created a snapshot and left my VM running.

After this completed, the VM remained running on the VMware side and was not yet instantiated on the ROKS side. The migration plan appeared in a “paused” state.

Next I performed the cutover. I had a choice to run it immediately or schedule it for a future time.

The cutover resulted in the stopping of my VM on the VMware side, the removal of the snapshot, and the creation and removal of an additional snapshot; I presume this represented the replication of the remaining data as signaled by CBT.

It then created and started a VM on the ROKS side.

In order to establish network connectivity for this VM, it was necessary to reconfigure its networking. The static IP must be exchanged for DHCP. In my case I also found that the device name changed.

For completeness I also installed qemu-guest-agent but it appears this is not strictly necessary. I then edited /boot/efi/loader/loader.conf to force the loading of virtio modules per Ubuntu instructions. After doing so, it appears that they are in use.

In theory, MTV should have triggered the installation of both qemu-guest-agent and the virtio drivers. I observed that on first boot it did attempt to install the agent, but understandably failed because the network connection was not yet established.
I had some initial difficulties creating a virtual machine from the OpenShift web console UI in the Virtualization | Catalog page, but later this worked okay. Here is a screenshot of that page, but in this post I will document a command-line approach.

For my command-line approach, I first used ssh-keygen to create an SSH key pair, and then created a secret based on the public key:
oc create secret generic smoonen-rsakey --from-file=rhel-key.pub -n=default
I then created a YAML file, referencing this secret, and with the help of the example YAML generated by the OpenShift console UI. Here is my configuration:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rhel-10-smoonen5
  namespace: default
spec:
  dataVolumeTemplates:
    - metadata:
        name: rhel-10-smoonen5-volume
      spec:
        sourceRef:
          kind: DataSource
          name: rhel10
          namespace: openshift-virtualization-os-images
        storage:
          resources:
            requests:
              storage: 30Gi
  instancetype:
    name: u1.large
  preference:
    name: rhel.10
  runStrategy: Always
  template:
    metadata:
      labels:
        network.kubevirt.io/headlessService: headless
    spec:
      domain:
        devices:
          autoattachPodInterface: false
          disks: []
          interfaces:
            - masquerade: {}
              name: default
      networks:
        - name: default
          pod: {}
      subdomain: headless
      volumes:
        - dataVolume:
            name: rhel-10-smoonen5-volume
          name: rootdisk
        - cloudInitNoCloud:
            userData: |
              #cloud-config
              chpasswd:
                expire: false
              password: xxxx-xxxx-xxxx
              user: rhel
              runcmd: []
          name: cloudinitdisk
      accessCredentials:
        - sshPublicKey:
            propagationMethod:
              noCloud: {}
            source:
              secret:
                secretName: smoonen-rsakey
I applied this by running the command oc apply -f virtual-machine.yaml.
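You can then confirm that the virtual machine and its running instance exist; for example:

oc get vm -n default
oc get vmi -n default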
I relied on this blog post which describes several methods for connecting to a virtual machine.
I chose to use virtctl/SSH. The steps are to download virtctl for your platform and then to log in with oc so that virtctl can connect to the cluster. Here you can see me connecting to my virtual machine.
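My connection looked something like the following; the exact virtctl ssh syntax may vary by version, and rhel-key is the private key I generated earlier:

virtctl ssh -i rhel-key -n default rhel@vm/rhel-10-smoonen5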

Be sure to read Neil Taylor’s blog posts referenced in the first post in this series, which explain why this has an address of 10.0.2.2.
As it stands, the VM can reach out to the public network, since I configured a public gateway on the worker nodes’ subnet. Although I believe I have entitlement to run RHEL on these workers, the VM is not initially connected to a Satellite server or to any repositories. I wanted to run a quick iperf3 test, but this made it not as simple as a yum install. I was eventually able to snag the libsctp and iperf3 RPMs and ran a simple test. Compared to a VMware VM running on VPC bare metal, the ROKS VM gets comparable throughput on iperf3 tests to public servers.
As I receive more insight into the RHEL entitlement I will document this.
NLB (layer 4) does not currently support bare metal members. Therefore we need to create an ALB (layer 7). I created a public one just to see how that works. I’m reasoning through what I need to build based on Neil’s blog and IBM Cloud documentation.
Here is the YAML I constructed:
apiVersion: v1
kind: Service
metadata:
  name: smoonen-rhel-vpc-alb-3
  annotations:
    service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: "public"
    # Restrict inbound to my IPs
    service.kubernetes.io/ibm-load-balancer-cloud-provider-vpc-security-group: "smoonen-jump-sg"
spec:
  type: LoadBalancer
  selector:
    vm.kubevirt.io/name: rhel-10-smoonen5
  ports:
    - port: 22
      protocol: TCP
      targetPort: 22
Importantly, you should not specify the service.kubernetes.io/ibm-load-balancer-cloud-provider-vpc-lb-name annotation, which creates what IBM Cloud calls a persistent load balancer. A persistent load balancer reuses an existing load balancer of that name if one exists. So, for example, if you are testing the restore of an application into a new and temporary namespace, the restored service will hijack the load balancer for your running application.
After provisioning this, I was able to successfully SSH into my VM with the load balancer resource that was created.
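For reference, you can retrieve the hostname assigned to the ALB from the service and connect through it; a sketch assuming the names used above and the namespace in which the service was created:

oc get service smoonen-rhel-vpc-alb-3 -n default
ssh -i rhel-key rhel@<ALB hostname from the EXTERNAL-IP column>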

In this article we will work through the steps of creating a ROKS cluster, deploying and configuring prerequisites for OpenShift Virtualization, and installing OpenShift Virtualization.
Click on the IBM Cloud hamburger menu and select Containers | Clusters. Click Create cluster. Ensure that RedHat OpenShift and VPC are selected. Choose your VPC and select the region and zone(s) of interest. For the purpose of my testing I am creating a single zone.

Select OCP licensing as you require. In my case I needed to purchase a license.

Take care in your selection of worker nodes. Currently virtualization is supported only with bare metal worker nodes. In my case I selected three bare metal workers, each with some amount of extra storage that I will use for Ceph/ODF software-defined storage.

If you wish, encrypt your worker node storage using Key Protect.

I chose to attach a Cloud Object Storage instance for image registry.

I thought at first that I would enable outbound traffic protection to learn how to make use of it. However, the OpenShift Virtualization operator documentation indicates that you should disable it.

I selected cluster encryption as well.

At present I chose not to leverage ingress secrets management or custom security groups.

Enable activity tracking, logging, and monitoring as needed, then click Create.
Note: it is wise to open a ticket asking IBM Cloud support to check for bare metal capacity in your chosen VPC region and zone. In my case my first deployment attempt failed because insufficient bare metals of my selected flavor were available in the zone; this is why I have a jump server in zone 1 but workers in zone 3. Although my second deployment had one host fail, this was not due to capacity but apparently to an incidental error; redeploying a new worker in its place worked fine. It’s difficult to assess the total deployment time in light of these errors, but I would guess it was somewhere between 2 and 3 hours.
At the time of this writing, recent CoreOS kernel versions appear to have a bug where several NVMe drives are not properly mounted. After the cluster is provisioned, log in to the OpenShift web console and use the Terminal feature on each host to check whether your system has all of its NVMe disks. For example, the profile I deployed should have 8 disks. If there are missing disks, follow the steps in the screenshot below to rediscover them, using the IDs from the error messages.

Once your drives are all present, you can proceed to install OpenShift Data Foundation (ODF), which is currently a requirement for OpenShift Virtualization.
ODF is a convenient wrapper for software-defined storage based on Ceph. It is the OpenShift equivalent of VMware vSAN. In this case I’m deploying a single zone / failure domain, with a default configuration of 3-way mirroring, but ODF is able to provide other configurations including multiple zonal fault domains.
Because it must be licensed and in order to provide other custom integrations with IBM Cloud, the ODF installation is driven from the IBM Cloud UI rather than from the OpenShift OperatorHub. In the IBM Cloud UI, on your cluster’s Overview tab, scroll down and click Install on the OpenShift Data Foundation card.

Below is an example of the input parameters I used. Note that I did not enable volume encryption because the integration with KP and HPCS was not clear to me. Most importantly, you should be careful with the pod configuration. For local storage, ignore the fact that the pod size appears to be 1GiB. This simply indicates the minimum claim that ODF will attempt; in reality it will be greedy and will make use of your entire NVMe drive. Also for the number of pods, specify the number of NVMe disks on each host that you want to consume. Although I have three hosts, I have 8 NVMe disks on each host and wish to use all of them. For this reason I specified a pod count of 8.

Note that it takes some time to install, deploy, and configure all components.
After the ODF installation completes, you need to install the OpenShift Virtualization operator using the OpenShift CLI (oc). Although the IBM Cloud CLI has an “oc” command, it is not a proxy for the oc CLI but rather an alias for IBM’s ks plugin. I performed the following steps:

First, in the IBM Cloud UI, click through to the OpenShift web console. In the top-right corner, click the ? icon and choose Command Line Tools. Download the tool appropriate to your workstation.

In my case, on macOS, I had to override the security checks for downloaded software. I attempted to run oc and received an error. I then opened the System Settings app, selected Privacy & Security, scrolled to the bottom, and selected “Open Anyway” for oc.

Then, in the IBM Cloud UI, I clicked through to the OpenShift web console. In the top-right corner I clicked on my userid and then selected Copy login command. Then I ran the login command on my workstation.
Finally, I followed the IBM Cloud instructions for installing the OpenShift Virtualization operator. Because I intend to use ODF/Ceph storage rather than block or file, I performed the step to mark block as non-default, but I did not install or configure file storage.
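For reference, marking the VPC Block storage class as non-default uses a standard Kubernetes annotation; something along these lines, where the storage class name depends on which class is currently the default in your cluster (check with oc get storageclass):

oc get storageclass
oc patch storageclass ibmc-vpc-block-10iops-tier -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'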
I have some thoughts on what the upgrade process might look like for ODF / Ceph when upgrading my cluster and worker nodes. I’m waiting for a new supported release of ODF to test these out and will post my experience once I’ve had a chance to test it.
IBM Cloud offers the opportunity to create virtual private clouds, which are software-defined network bubbles where you provision cloud resources and infrastructure into a network address space allocated and managed by you. For some more background, read and watch “What is a virtual private cloud?”
Our OpenShift resources will be provisioned into this VPC space. So first we need to create a VPC, and choose the network addressing. In addition, because this is a private network space, we will need to gain access to it. There are two common modes of access: VPN, and jump server. For the purposes of my experiment I created a jump server, which will also help to introduce us to some VPC concepts.
In this article I show you how to create an IBM Cloud VPC and jump server VSI (virtual server instance; i.e., virtual machine) using the IBM Cloud UI. Of course, you can also use the IBM Cloud CLI, APIs, or SDKs to do this. I have samples of Python code on GitHub to create a VPC and to create a jump server.
After logging in to your IBM Cloud account, click the “hamburger menu” button in the top-left, then select Infrastructure | Network | VPCs.

From the Region drop-down, select the region of your choice, and then click Create.

As it works currently, if you allow the VPC to create a default address prefix for you, the prefix is selected automatically and you cannot modify it. I prefer to choose my own address prefix, and therefore I deselect this checkbox before clicking the Create button.

After creating your VPC, view the list of VPCs and click on your new VPC to display its details. Select the Address prefixes tab. For each zone where you plan to create resources or run workloads, create an address prefix. For example, I created a VSI in zone 1 and OpenShift worker nodes in zone 3, so I have address prefixes created in these two zones.

Interestingly, the address prefix is not itself a usable subnet in a zone. Instead, it is a broader construct that represents an address range out of which you can create one or more usable subnets in that zone. Therefore, you need to go to Infrastructure | Network | Subnets and create a subnet in each zone where you will be creating resources or running workloads. Note carefully that you choose the region and name of your subnet before you choose the VPC in which to create it. At that point you can choose which address prefix it should draw from. In my case I used up the entire address prefix for each of my subnets.
For your convenience, I also recommend that you choose to attach a public gateway to your subnet. The public gateway allows resources on the subnet to communicate with public networks, but only in the outbound direction.
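As noted above, the same steps can be scripted with the IBM Cloud CLI. Here is a rough sketch; the resource names and CIDRs are illustrative, and you should check ibmcloud is help for the exact flags:

# Create the VPC without a default address prefix
ibmcloud is vpc-create smoonen-vpc --address-prefix-management manual
# Create an address prefix and a subnet in each zone you plan to use
ibmcloud is vpc-address-prefix-create zone1-prefix smoonen-vpc us-south-1 10.100.1.0/24
ibmcloud is subnet-create zone1-subnet smoonen-vpc --ipv4-cidr-block 10.100.1.0/24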

First you should create a security group to restrict access to the jump server. Navigate to Infrastructure | Network | Security groups and click Create. Ensure that your new VPC is selected, create one or more rules to represent the allowed inbound connections, and then create an outbound rule allowing all traffic.

Next, navigate to Infrastructure | Compute | Virtual server instances and click Create.
Select the zone and your new VPC. Note that the VPC selection is far down the page so it is easy to miss this. Choose your preferred operating system image; e.g., Windows Server 2025. Customize the VSI profile if you need more or different horsepower for your VM.
Unless you already have an SSH key, create a new one as part of this flow. The UI will save the private key to your system. Be sure to hold on to this for later.
It is fine to take most of the default settings for network and storage unless you prefer to select a specific IP from your subnet. However, you do need to edit the network attachment and select the security group you created above instead of the VPC default group. You’ll notice that the creation of your VSI results in the creation of something called a virtual network interface, or VNI. The VNI is an independent object that mediates the VSI’s attachment to an IP address in your subnet. VNIs serve as an abstract model for such attachments and can be attached to other resources such as file storage and bare metal servers. You could elect to allow spoofing on your VNI (which would be necessary if you wanted your VSI to share a VIP with other VSIs or to route traffic for additional IPs and networks), and also to allow the VNI to continue to exist even after the VSI is deleted.

Click Create virtual server.
If you created a Linux jump server, you can use the SSH private key created earlier to connect to your jump server using SSH. However, if you created a Windows jump server, the Administrator password is encrypted using the SSH key you created earlier. Here is how you can decrypt the Administrator password using this key. Select your VSI. On the instance details panel, copy the VSI instance id.

Click the IBM Cloud Shell icon in the top right corner of the IBM Cloud UI. This will open a new tab in your browser. Ensure that your region of choice is selected.

Within the IBM Cloud Shell in your browser, run a common editor to create a new privkey.txt file in the cloud shell; e.g., vi privkey.txt or nano privkey.txt. Locate the private key file that was downloaded to your system, copy its contents, paste them into the cloud shell editor, and save the file. Then run the following command in the Cloud Shell, substituting the VSI instance ID which is visible in the VSI details page:
ibmcloud is instance-initialization-values 0717_368f7ea8-0879-465f-9ab3-02ede6549b6c --private-key @privkey.txt
For example:

The last thing we need to do is assign a public IP to our jump server. Navigate to Infrastructure | Network | Floating IPs and click Reserve.
Select the appropriate zone, then select the jump server as the resource to bind to. Click Reserve. Note that we did not have to apply our security group at this point because it has already previously been applied to the VSI interface.
Note the IP that was created for you. You can now connect to your jump server using this IP and either the SSH key or password from earlier in this procedure.

In the VMware world, there is presently a lot of interest in alternative virtualization solutions such as RedHat’s OpenShift Virtualization. In the past I’ve used RedHat Virtualization, or RHEV. RedHat has discontinued their RHEV offering and is focusing their virtualization efforts and investment in OpenShift Virtualization instead. In order to become familiar with OpenShift Virtualization I resolved to experiment with it via IBM Cloud’s managed OpenShift offering, RedHat OpenShift on IBM Cloud, affectionately known as “ROKS” (RedHat OpenShift Kubernetes Service) in my circles.
My colleague Neil Taylor was tremendously helpful in providing background information to familiarize myself with the technology for the purposes of my experiment. He has written a series of blog posts with the purpose of familiarizing VMware administrators like myself with OpenShift Virtualization and specifically the form it takes in IBM Cloud’s managed offering. If you are interested in following along with my experiment, you should read his articles first:
I expect that in the future we will see IBM Cloud ROKS adopting the new user-defined networking capabilities that are coming to OpenShift Virtualization soon, but I expect it will take some time to operationalize these capabilities in the IBM Cloud virtual private cloud (VPC) environment. In the meantime I’m content to experiment with virtualization within the limits of Calico networking.

I mentioned previously that PowerCLI allows you to rekey VM and VMHost objects natively without needing to use community-supported extensions. As far as I can tell, rekeying vSAN clusters still requires you to work in the UI or to use the community-supported extensions.
Examining the code for these extensions, I was able to put together a brief way to display the current key manager in use by each object. This way you can verify your rekeying is successful! Here is an example:
# Display the key provider in use by each virtual machine
$vmlist = @()
foreach ($vm in Get-VM) {
  $vmlist += [pscustomobject]@{ vm = $vm.name; provider = $vm.ExtensionData.Config.KeyId.ProviderId.Id }
}
$vmlist | Format-Table

# Display the key provider in use by each ESXi host
$hostlist = @()
foreach ($vmhost in Get-VMHost) {
  $vmhostview = Get-View $vmhost
  $hostlist += [pscustomobject]@{ host = $vmhost.name; provider = $vmhostview.Runtime.CryptoKeyId.ProviderId.Id }
}
$hostlist | Format-Table

# Display the key provider in use by each vSAN cluster's data-at-rest encryption
$clusterlist = @()
$vsanclusterconfig = Get-VsanView -Id "VsanVcClusterConfigSystem-vsan-cluster-config-system"
foreach ($cluster in Get-Cluster) {
  $encryption = $vsanclusterconfig.VsanClusterGetConfig($cluster.ExtensionData.MoRef).DataEncryptionConfig
  $clusterlist += [pscustomobject]@{ cluster = $cluster.name; provider = $encryption.KmsProviderId.Id }
}
$clusterlist | Format-Table