
I created a clock that visualizes local solar time.
Have a look at the clock here.
I describe some of my motivation and some details on the clock’s behavior on my other blog.

In IBM Cloud’s virtual server instance (VSI) API, the VSI object does not itself identify the creator of the VSI. However, the creator is known to the cloud resource controller, so you can use the cloud resource API to determine the creator of a VSI.
The path to do this is to:

1. Retrieve the list of users in your account.
2. For each region, list the VSIs in that region.
3. Look up each VSI in the resource controller to find the ID of its creator.
4. Match the creator's ID against the user list to find their email address.

Here is some sample code using the ibmcloud CLI to do this:
#!/bin/zsh
# Identify the owners of all VPC VSIs in an IBM Cloud account
users=$(ibmcloud account users --output json)
for region in $(ibmcloud regions --output json | jq -r '.[]|.Name')
do
  ibmcloud target -r "$region" -q > /dev/null
  for instance in $(ibmcloud is instances --output json | jq -r '.[]|.name')
  do
    # VSI names might be reused in multiple regions; filter by region
    user=$(ibmcloud resource service-instance "$instance" --output json 2> /dev/null | jq -r ".[]|select(.region_id==\"$region\")|.created_by")
    email=$(printf '%s' "$users" | jq -r ".[]|select(.ibmUniqueId==\"$user\")|.email")
    echo "VSI $instance in region $region was deployed by $email"
  done
done
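The jq filters in the script above perform a two-step join: the VSI's created_by ID is matched against the account user list to recover an email address. The same join can be sketched in Python; the JSON below is an illustrative stand-in for the ibmcloud CLI output, not real API data:

```python
import json

# Illustrative stand-ins for `ibmcloud account users --output json`
# and `ibmcloud resource service-instance ... --output json`
users_json = '[{"ibmUniqueId": "IBMid-123", "email": "alice@example.com"}]'
instances_json = '[{"region_id": "us-east", "created_by": "IBMid-123"}]'

users = json.loads(users_json)
instances = json.loads(instances_json)

def creator_email(region):
    # Equivalent of: select(.region_id==$region)|.created_by
    user_id = next(i["created_by"] for i in instances if i["region_id"] == region)
    # Equivalent of: select(.ibmUniqueId==$user)|.email
    return next(u["email"] for u in users if u["ibmUniqueId"] == user_id)

print(creator_email("us-east"))  # alice@example.com
```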
See all blog posts in this series:
If you consider the VPC object model, it is clear that to deploy and manage a large-scale environment, you need to consider the use of automation. Your operational challenge is further multiplied if you are planning for disaster recovery and intend to replicate or re-create your VPC environment from one region to another.
I’ve created a relatively simple set of Terraform modules with the goal of demonstrating how to deploy a two-tier application in IBM Cloud VPC, replicate its storage to a second region, and re-create it there upon failover.
My two-tier application is a toy application. It consists of a load-balanced tier of application server VSIs and a tier of PostgreSQL database VSIs.
The “application” running in the first tier is simply SSH. The VSIs share a common SSH host key, much like you would share a certificate among the servers of a web application.
The database is PostgreSQL configured in streaming replication mode. The “application” connection to the database is simply by means of the psql command. The database is configured to allow direct connection from the application VSIs without password.
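Passwordless access like this is typically granted in PostgreSQL's pg_hba.conf. The entry below is an illustrative sketch (the subnet is an assumption, and "trust" should be scoped as narrowly as possible):

```
# pg_hba.conf -- allow application-tier hosts to connect as appuser without a password
# 10.0.0.0/8 is an illustrative subnet, not taken from the actual automation
host  testdb  appuser  10.0.0.0/8  trust
```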
Apply complete! Resources: 76 added, 0 changed, 0 destroyed.
Outputs:
region1_lb_hostname = "2e3d3210-us-east.lb.appdomain.cloud"
region2_lb_hostname = "5532cedf-ca-tor.lb.appdomain.cloud"
smoonen@laptop ibmcloud-vpc-automation % ssh root@2e3d3210-us-east.lb.appdomain.cloud
Welcome to Ubuntu 24.04.3 LTS (GNU/Linux 6.8.0-1041-ibm x86_64)
. . .
root@smoonen-tier1-tlha2hqzy6-lrioe:~# psql -h db-primary.example.com testdb appuser
psql (16.11 (Ubuntu 16.11-0ubuntu0.24.04.1))
testdb=> \dt
List of relations
Schema | Name | Type | Owner
--------+------------------+-------+----------
public | test_replication | table | postgres
(1 row)
Failover within the primary region of the primary database server is beyond the scope of this test. You would need to develop your own automation or administrative process to manage the PostgreSQL failover and the DNS reassignment.
The following diagram illustrates the application topology as well as the storage replication that is established to a secondary region:

As we discussed previously, this automation leverages block storage snapshots and cross-region copies as a simple approach to replication. This imposes some limitations, including a lack of write-order consistency between volumes, and RPO constraints. This simple example has volumes that can be copied at hourly intervals, but a real-world example is likely to have a longer RPO.
Because of the lack of write-order consistency, in this model you would need to assess which of the two databases had won the race and should be reconstituted as the primary database server. If you were storing and replicating application data (for example, transaction logs stored on IBM Cloud VPC file storage which is also being replicated to the secondary region) you would need to perform a similar analysis of consistency before completing the recovery process.
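Deciding which database "won the race" usually comes down to comparing WAL positions. A hedged sketch, assuming you have captured an LSN (for example from pg_last_wal_replay_lsn) for each candidate; PostgreSQL LSNs are strings of the form "X/Y" in hexadecimal:

```python
def lsn_to_int(lsn: str) -> int:
    """Convert a PostgreSQL LSN like '0/3000148' into a comparable integer."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def pick_primary(candidates: dict) -> str:
    """Return the name of the candidate with the most advanced WAL position."""
    return max(candidates, key=lambda name: lsn_to_int(candidates[name]))

# Illustrative replay positions captured from each region's database
positions = {"us-east": "0/3000148", "ca-tor": "0/2FFFFA0"}
print(pick_primary(positions))  # us-east
```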
In this example, since the application servers are stateless, their storage is not replicated to the secondary region. They can be re-created purely from their definition.
You can see from the diagram above that no running infrastructure other than the load balancer exists in the secondary region during steady-state replication. Upon failover, this example leverages an additional Terraform module to identify the most recent copied storage snapshot and re-create the instances and instance groups for the application and database servers.
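The "most recent copied snapshot" selection can be sketched as follows; the record fields here are assumptions loosely modeled on the VPC snapshot API, not the actual module's logic:

```python
from datetime import datetime

# Illustrative snapshot records; created_at follows RFC 3339 as in the VPC API
snapshots = [
    {"name": "db-copy-1", "created_at": "2025-06-01T10:00:00Z"},
    {"name": "db-copy-2", "created_at": "2025-06-01T12:00:00Z"},
    {"name": "db-copy-3", "created_at": "2025-06-01T11:00:00Z"},
]

def newest(snaps):
    """Return the snapshot record with the latest creation timestamp."""
    def ts(s):
        return datetime.fromisoformat(s["created_at"].replace("Z", "+00:00"))
    return max(snaps, key=ts)

print(newest(snapshots)["name"])  # db-copy-2
```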
Refer to the documentation in the GitHub repository for additional instructions and considerations.
Here’s how I transcribe audio on macOS. To prepare:
Install the tools: brew install ffmpeg whisper.cpp
Download a Whisper model; I used the ggml-large-v3.bin model with good results.
Then, to convert a file:
# Convert MP3 file to WAV
ffmpeg -i lecture01.mp3 -ar 16000 lecture01.wav
# Run Whisper model to transcribe audio
whisper-cli --language en --max-context 0 --max-len 65 --split-on-word --output-json --model ~/Downloads/ggml-large-v3.bin --file lecture01.wav --output-file lecture01
# Convert JSON output to text (see explanation and sample script below)
python3 json2text.py lecture01.json > lecture01.txt
# Cleanup
rm lecture01.wav lecture01.json
Some notes on the above:
The appropriate file extension is appended automatically to the --output-file specification.
The model can get stuck repeating itself partway through a transcription. Setting --max-context 0 is recommended to help with this, and it did help in my case. Other people recommend switching to the ggml-large-v2 model, or restarting the transcription near that point in the recording using the --offset-t parameter (note that this takes input in milliseconds).
Although whisper-cli prints timestamps to stdout, it does not include them in an output text file. Therefore I’ve chosen to output JSON and convert it to text myself. I use the simple script below to do so.
# json2text.py
import json, sys
j = json.load(open(sys.argv[1]))
for i in j['transcription'] :
print(f"[{i['timestamps']['from']} --> {i['timestamps']['to']}] {i['text']}")
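To illustrate what the script produces, here is the same formatting applied to a small hand-made stand-in for whisper-cli's JSON output (the structure mirrors the transcription/timestamps layout the script expects):

```python
# A minimal stand-in for whisper-cli's --output-json structure
sample = {
    "transcription": [
        {"timestamps": {"from": "00:00:00,000", "to": "00:00:04,500"},
         "text": "Welcome to the lecture."},
    ]
}

lines = []
for i in sample["transcription"]:
    # Same format string as json2text.py above
    lines.append(f"[{i['timestamps']['from']} --> {i['timestamps']['to']}] {i['text']}")

print("\n".join(lines))
```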
As a test case I transcribed a Biblical theology lecture, and I was pleased to find that the model had no difficulty with names such as Hittites, Ephraim, Manasseh, Melchizedek, and Eber. It also had a decent sense of capitalization of titles and of acronyms. My test case was also a relatively poor-quality recording and this did not seem to pose a problem for the model.
I found on an M1 MacBook Air this 45-minute lecture took about 20 minutes to transcribe. It successfully leveraged the GPU. This was much faster than using the faster-whisper tool (about 120 minutes, using CPU only). I also attempted the insanely-fast-whisper tool, but this took even longer as well as having difficulty using the GPU. I confess I did no tuning, but in spite of using a batch-size of 4 (as recommended) it failed after many hours with a GPU allocation error. So I am quite pleased with the performance of whisper.cpp!
By comparison, an M4 MacBook Pro was able to process the same 45-minute lecture in 3 minutes.
Merry Christmas!
See all blog posts in this series:
In this article we will briefly consider the native capabilities of IBM Cloud VPC VSI that you could use to build a disaster recovery solution and compare this with alternative approaches.
As we saw previously, the IBM Cloud Backup for VPC backup policies allow you not only to schedule the creation of snapshots for your VSI volumes, but you can also schedule the copying of these snapshots to another IBM Cloud region. You could use this approach to perform periodic replication of all of your VSI data to another region for the purpose of disaster recovery. This approach has a number of limitations that you should take into consideration:
The policies use crontab-style expressions to schedule the snapshot and copy.
Note that in principle your snapshots and copies within a given region exist in a space-efficient chain. However, the size of your volumes will affect the time that it takes to perform the initial full copy from region to region. Furthermore, for performance reasons you will need to back off your snapshot and copy frequency based on your volume size if you want the cross-region copy to be incremental; see this reference table. For example, in my testing I had a 250GB boot volume and needed to set my snapshot and copy frequency to 2 hours.
Restoring from a snapshot causes cloud-init to re-run; you should be prepared for its side effects such as resetting your root password and authorized SSH keys.
Depending on your application and requirements, you may be able to work with these limitations. If not, you will need to devise an alternate approach.
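The size-based backoff can be encoded as a simple lookup. The thresholds below are illustrative placeholders, not the values from IBM's reference table; consult that table for real numbers:

```python
# Illustrative (NOT official) mapping of volume size to minimum snapshot/copy
# interval for incremental cross-region copies.
THRESHOLDS = [  # (max volume size in GB, minimum interval in hours)
    (100, 1),
    (250, 2),
    (1000, 4),
]

def min_interval_hours(volume_gb: int) -> int:
    """Smallest snapshot/copy interval that keeps cross-region copies incremental."""
    for max_gb, hours in THRESHOLDS:
        if volume_gb <= max_gb:
            return hours
    raise ValueError("volume too large for incremental cross-region copy")

print(min_interval_hours(250))  # 2 -- matches the 250GB boot volume example above
```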
It is well known that you need to move up the stack—or invest in solutions that stretch across layers of the stack—to achieve more stringent BCDR goals. For example, you may be able to leverage storage array replication for highly efficient replication with low RPOs, but you will need to pair this with a solution that is able to quiesce your file system or your database if you want your replicas to be transactionally consistent rather than merely crash consistent.
Thus, enterprise architectures often need to leverage agent-based tools or application- and database-specific methods either to perform the replication or at least to orchestrate it. Such approaches are highly dependent on your solution architecture, including your choice of operating systems, application software, and messaging and database software.
Because of this, you need to investigate and evaluate which tools and techniques are suitable for your solution architecture and your disaster recovery objectives. For example, if you are using DB2, you might consider Q replication or SQL replication to replicate your database between different IBM Cloud regions. Use of OS agents tends to be more common in the backup realm than in the disaster recovery realm, but this may be a viable option for you depending on your RPO objectives. However, for agent-based backups you will need to investigate whether your recovery options are limited due to the current lack of support for booting a VSI from an ISO image.
Approaches like this typically depend on having active infrastructure running in both your production and DR locations. This complicates some aspects of planning and execution; for example, your replicated infrastructure will likely not have the same IP addressing as your original infrastructure, and you will likely use DNS updates to hide this from your application users. On the other hand, it simplifies other aspects of your planning and execution, because you will have pre-created most of the necessary resources instead of needing to create them during the failover.
See all blog posts in this series:
We interrupt our normal VPC VSI programming to briefly discuss the VPC object model, which is relevant not only to your VPC design and your automation strategy, but also to your backup and especially your DR scenarios. For BCDR, you need to make plans to reconstitute all of these resources. As we have already discussed, it is not sufficient to backup your VSI volumes; you need to be prepared to reconstitute the VSI itself, including details such as the instance profile, IP address, floating IP, and security groups.
Here is my rough attempt to diagram the VPC object model to help you think about your VPC design as well as your BCDR design and planning. Afterwards I will list some caveats.

Some resources I have designated by abbreviation (e.g., VPE, LB, FIP, PAR, VNI). I have attempted to specify some cardinality based on my understanding, but it is likely that I have made some mistakes. I’ve also set aside some related resources to the side as a kind of appendix rather than attempting to include them and all of their possible relations in the main diagram. Because the security group impact is so extensive, I have used blue color coding to highlight the extent of its influence.
I’ve included some loosely coupled resources (such as DNS service), but not all such resources (for example, you may be leveraging Cloud Object Storage, IBM Cloud Databases, or Backup and Recovery resources connected through a VPE).
There are other considerations you will need to make such as IBM Cloud IAM permissions; these apply to every single resource. Minimally you need to consider which resource groups each resource is placed in and which users, access groups, and services should have access to them. If you are using the metadata service or allowing your VSIs to access trusted profiles and other resources, you will also need to consider the appropriate IAM configuration for this as well.
You may also need to consider quota and capacity management.
See all blog posts in this series:
IBM Cloud provides two backup offerings that are relevant to the backup of your applications running on IBM Cloud VSI:
Because of these offerings’ complementary focus on volume backup versus file backup, you will need to combine the two of them to cover all failure scenarios. Let’s consider their capabilities and limitations in turn.

If you navigate the IBM Cloud “hamburger menu” to Infrastructure | Storage | Backup policies, you can create backup policies. These backup policies allow you to select one or more volumes (block or file) by volume tagging criteria, or to select the volumes for one or more VSIs by VSI tagging criteria. You can define up to four schedules for the snapshots, meaning that you could, for example, have a daily schedule with 7 days of retention plus a weekly schedule with 90 days of retention. IBM Cloud maintains the snapshots in a space-efficient chain. Unlike VMware snapshots, IBM Cloud’s block storage snapshots exist in a separate chain from the VSI volumes and the VSI boot image. IBM Cloud’s snapshots remain intact even if the VSI is deleted. There are some important things to be aware of:
Backup policies do not currently support sdp volume profiles. I hope for this to change over time, but for now I recommend against using them.
Whole-volume and whole-VSI backup and restore is quite heavy-handed for some backup scenarios such as recovery of deleted files. For your convenience, you may wish to complement the Backup for VPC capabilities by also using Backup and Recovery.
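The daily-plus-weekly example above works out to a predictable steady-state snapshot count; a small sketch of the arithmetic:

```python
def steady_state_count(schedules):
    """Snapshots retained at steady state: one per scheduled run still within retention.

    schedules: list of (interval_days, retention_days) tuples.
    """
    return sum(retention // interval for interval, retention in schedules)

# Daily schedule with 7 days of retention, plus weekly with 90 days of retention
print(steady_state_count([(1, 7), (7, 90)]))  # 7 + 12 = 19 snapshots retained
```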

The Backup and Recovery offering leverages agents running on your VSIs (although it calls them “physical servers”) to backup files, folders, and certain databases. IBM publishes a list of currently supported operating system and database versions.
The steps you will follow to leverage this service are:
As always, be sure to thoroughly review the documentation to familiarize yourself with other considerations such as alerting.
Among the VM to VSI migration techniques I considered, one possible option was to boot the destination VSI using a GHOST-like ISO tool. This is difficult to accomplish because IBM Cloud does not currently support booting a VSI from an ISO. What I did instead was to craft a qcow2 disk image that loaded this ISO. Then, any VSI booted using this image would initially load the tool. Afterwards the tool could be used to transfer the actual disk to be used for the VM.
There are some limitations for this approach. Most importantly, the use of the tool as an image template entirely obscures the actual underlying OS for any VSI booted from that image. It seems to me an abuse of the idea of an image template. Furthermore, given the limitations of many small Linux distributions, customizing the image so that it has all of the packages you need is potentially a tedious process. For example, I found that out of the box TinyCore Linux did not have openssh, qemu, nor did it have virtio drivers. Combined with the fact that I also wanted to leverage libguestfs in many cases, this was a challenging limitation.
As a result, I rejected this approach as one of the viable migration paths. However, it was still a fun experiment to build the boot image and boot my VSI using it. Here are the steps I took to create a TinyCore Linux boot image:
1. Download the TinyCore Linux core.iso.
2. Create an empty raw disk image: qemu-img create -f raw core.img 64M
3. Partition the image:
parted core.img --script mklabel gpt
parted core.img --script mkpart ESP fat32 1MiB 100%
parted core.img --script set 1 boot on
4. Attach the image to a loop device: LOOPDEV=$(losetup --find --show --partscan core.img)
5. Create a filesystem: mkfs.vfat "${LOOPDEV}p1"
6. Mount the filesystem:
mkdir -p /mnt/esp
mount "${LOOPDEV}p1" /mnt/esp
7. Install GRUB: grub-install --target=x86_64-efi --efi-directory=/mnt/esp --boot-directory=/mnt/esp/boot --removable --recheck --no-nvram
8. Create /mnt/esp/boot/grub/grub.cfg:
menuentry "Boot Core" {
set isofile="/boot/core.iso"
loopback loop $isofile
linux (loop)/boot/vmlinuz quiet
initrd (loop)/boot/core.gz
}
9. Copy the ISO into the image: cp core.iso /mnt/esp/boot
10. Unmount and detach: umount /mnt/esp; losetup -d "$LOOPDEV"
11. Convert the image to qcow2 format: qemu-img convert -f raw -O qcow2 core.img core.qcow2
Once booted, you can install the packages you need, such as openssh (tce-load -wi openssh) or qemu tools (tce-load -wi qemu).
See all blog posts in this series:
In this blog post we’ll consider your options for a simple lift-and-shift migration of entire virtual machines from VMware to IBM Cloud VPC VSI. Although this is a one-size-fits-all approach, it may not be the only option depending on your situation. For example, if you have a well-established practice of automated deployment, you should consider retooling your deployment process (eventually you will need to do this anyway) so that you can deploy entirely new virtual machines in IBM Cloud and migrate your data, rather than migrating entire virtual machines.
There are no readily available warm migration approaches to migrate VMware workloads to IBM Cloud VPC VSI. You should plan for a sufficient outage window that includes stopping the original virtual machine, possibly exporting its disks, and transferring the disks at least once to the final destination.
Updated 2025-12-03: Change guidance on use of cloud-init; add notes on RedHat considerations; reorganize Windows considerations.
Updated 2026-01-23: Link to newly available additional resources.
Currently you cannot create a VSI with more than 12 disks, nor can your VSI have a boot disk smaller than 10GB or larger than 250GB. If your boot disk is larger than 250GB you will have to restructure your VM before migrating it.
VPC VSI does not support shared block volumes. For some shared storage use cases, you may be able to leverage VPC file storage and attach it to multiple virtual machines (but note that IBM Cloud File Storage for VPC currently does not support Windows clients). This blog post does not address migration of such shared files to VPC file storage. If you have a need for shared block storage for use as a clustered file system, you could take the approach of deploying your own VSI and using it to expose an iSCSI target to other VSIs.
Using FSTRIM for your VSI is harmless but currently it does not have any effect.
Broadly, you should prepare your system by (1) uninstalling VMware tools, (2) installing virtio drivers, (3) installing cloud-init, and (4) resetting the network configuration. IBM Cloud has existing documentation on migrating from classic VSI to VPC VSI which covers many of these points.
Note that because the initial setup of your VSI depends on cloud-init, this means that you should be prepared for it to modify certain parts of your system configuration as if it were a first-time boot even though this is not a true first-boot situation. For example, this could result in the resetting of your root or Administrator password, the re-generation of your authorized SSH keys, the reconfiguration of your SSHD settings, and the re-generation of host keys. You should carefully examine, customize, and test the cloud-init configuration and its side effects so that you are prepared for these.
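Some of these side effects can be tamed through cloud-init's configuration. The keys below are standard cloud-init settings, but treat this as an illustrative starting point rather than a recommended configuration for your migration:

```
# /etc/cloud/cloud.cfg.d/99-migration.cfg (illustrative)
ssh_deletekeys: false     # keep existing SSH host keys instead of regenerating them
ssh_pwauth: true          # do not disable password authentication
disable_root: false       # leave the root account enabled
preserve_hostname: true   # do not overwrite the hostname
```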
Installation of virtio is simpler on Linux than it is on Windows, to the degree that you could do so manually, but I still recommend that you use the virt-v2v tool in the steps described below.
If you are using RHEL and if you choose to obtain your license from IBM Cloud rather than to bring your own license (see further discussion below), the IBM Cloud VSI automation will expect to find your system registered with the IBM Cloud subscription and using the expected system UUID. You should check to be sure that you do not have a file /etc/rhsm/facts/uuid_override.facts that overrides the system’s UUID. Remove this file if it exists.
Your selected network configuration will be primed by a combination of cloud-init and DHCP, and you may also find that interface names change. Stale network configuration data can prevent the network configuration from fully initializing; for example, it could prevent your system from acquiring a default network route. You should clean out as much of the network configuration as possible. For example, on a typical RHEL 9 system, you should:
Remove the files under /etc/sysconfig/network-scripts
Remove the connection profiles under /etc/NetworkManager/system-connections
Check /etc/sysconfig/network and make sure that no GATEWAYDEV is specified
If your system is unable to establish network connectivity including a default route at the time of first boot, it’s possible that the cloud-init registration process will fail.
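The GATEWAYDEV check can be scripted; this sketch runs against an inline sample rather than a live /etc/sysconfig/network file:

```python
def find_gatewaydev(config_text: str):
    """Return the GATEWAYDEV value if one is set in sysconfig-style text, else None."""
    for line in config_text.splitlines():
        line = line.strip()
        if line.startswith("GATEWAYDEV="):
            return line.split("=", 1)[1].strip('"')
    return None

# Illustrative contents of /etc/sysconfig/network on a system needing cleanup
sample = "NETWORKING=yes\nGATEWAYDEV=ens192\n"
print(find_gatewaydev(sample))  # ens192 -- stale; remove before migration
```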
For Windows there are a number of important considerations related to installation of virtio drivers. First, you must source the drivers from RedHat. One way to do so is to deploy a RHEL VSI, install the virtio-win package, and copy the ISO file installed with this package, which includes various operating system drivers. You can find some instructions here. I copied the ISO to my Windows VM, mounted it as a drive, and ran the virtio-win-gt-x64 and virtio-win-guest-tools programs from the ISO.
Second, it is not sufficient to install the drivers. Even if you install the virtio drivers into your Windows VM, the drivers are typically bound to the device and you will not simply be able to boot your VM as a VSI successfully. There are two possible approaches:
Use the sysprep tool to generalize your virtual machine immediately prior to migrating it. IBM Cloud’s VSI documentation suggests this approach. This ensures that driver assignment is released, but it also has many side effects and limitations that you should review and be aware of. You can control and limit some of this behavior if you use the Windows System Image Manager to generate an unattended answer file directing sysprep‘s execution.
Use the libguestfs toolkit, described in detail below, to prepare the image. This toolkit is the basis for RedHat’s Migration Toolkit for Virtualization (MTV) that we saw used to migrate virtual machines to RedHat OpenShift Virtualization, and it is capable of injecting virtio drivers and also forcing Windows to make use of them. There are some important caveats to using the libguestfs tools outside of an MTV context, for which see below. If you take this approach, be sure to shut down your Windows system cleanly; the virt-v2v tool will not process a Windows VM that was not stopped cleanly.
I have had success using both of these approaches to transfer Windows VMs to VSI. I prefer the latter approach.
Third, you need to be sure to install the drivers in both your boot disk and your recovery image; note especially that the virt-v2v tool will only help with your boot disk. The IBM Cloud documentation provides some notes on the recovery image. In my own testing, I found some additional caveats:
Although reagentc reported that the recovery image was on a recovery volume, that volume was empty and I found the image in C:\Windows\system32\Recovery instead.
In diskpart I needed to use list volume and select volume instead of list partition and select partition.
Instead of using the ids 07 and 27 to mark the partition as data versus system, you will need to set the id first to ebd0a0a2-b9e5-4433-87c0-68b6b72699c7, and afterwards to c12a7328-f81f-11d2-ba4b-00a0c93ec93b.
Fourth, you should note that IBM Cloud VSI has a special rule that causes it to present a Windows boot disk as a virtio SCSI device, while presenting all other volumes as virtio block devices. This is in contrast with non-Windows VSIs, all of whose volumes are presented as block devices. What this means to you is that if you use the libguestfs approach to install the virtio drivers, you must add a special parameter to force the boot drive to be SCSI: --block-driver virtio-scsi.
Fifth, note that RedHat provides virtio drivers for only the following versions of Windows:
In addition to virtio considerations, ensure that you install cloudbase-init. Note that I have had fewer difficulties with network configuration on Windows compared to Linux.
When you create a VSI, a boot volume for that VSI is created based on an existing image template. A boot volume is a special kind of storage volume that has some attributes indicating its intended processor architecture, operating system, etc. The boot volume also exists as a kind of space-efficient linked clone of the original image. There are some variations of this boot process where you could base your boot volume on alternate images (e.g., using a custom image, or using a snapshot of another boot volume as the image template), or even choose to use an existing boot volume that is not already attached to an existing VSI. Note that currently it is not possible to boot a VSI using an ISO image.
The combination of these capabilities gives us several possible approaches to importing your virtual machine’s disks:
There are four broad approaches to migrating your virtual machine to VSI:
1. Export your VM disks and import the boot disk as a custom image (optionally using virt-v2v to prepare the image), and boot your VSI
2. Export your VM disks and write them directly to VPC volumes (optionally using virt-v2v to prepare the image), then boot your VSI
3. Boot your source VM from an ISO and transfer its disks over the network directly to VPC volumes (optionally using virt-v2v to prepare the image), then boot your VSI. Spoiler alert: this is my preferred method.
4. Connect directly to vCenter using the virt-v2v VDDK capability to copy your disks to IBM Cloud volumes, and boot your VSI
The following image illustrates these approaches:

Understandably, there are a few caveats that you should be aware of. First we’ll discuss a few general caveats and then work through the various methods.
Many of these migration approaches use the libguestfs toolkit, a powerful migration toolkit which includes the following capabilities:
The virt-v2v tool is able to transform virtual machine images on your local disk, including the installation of virtio drivers.
Using the nbdkit VDDK plugin, the virt-v2v tool supports an efficient direct connection to vCenter and your vSphere hosts to extract your image and transform it to your local disk.
The virt-p2v tool can be used as one of the ISO options when booting your source VM to connect to the location where the VM will be processed and copied to local disk.
However, there are some important caveats to be aware of:
The libguestfs tools leverage qemu-kvm to run some of their logic in a virtual machine context with the disk(s) attached to that virtual machine. If you are running them on an IBM Cloud VSI, you should note that nested virtualization is not formally supported. I have not encountered any problems using it in my testing. You could also leverage a VPC bare metal server as your conversion worker if you prefer.
The virtio-win package that virt-v2v uses to install virtio drivers is available only on RHEL. You will need to do your work on RHEL or else copy the /usr/share/virtio-win tree from a RHEL system to your work location.
The RHEL build of virt-v2v does not support the --block-driver virtio-scsi option, which is required to prepare drivers for Windows systems in IBM Cloud. You will either need to build libguestfs yourself, or else run virt-v2v on a system other than RHEL (e.g., Ubuntu).
The RHEL build of libguestfs includes the nbdkit VDDK plugin, but the Ubuntu build does not. If you use Ubuntu you will either be unable to use the VDDK approach, or you will need to build libguestfs yourself.
The Ubuntu build provides the virt-v2v-in-place command but RHEL does not. This command can be useful for some scenarios to avoid excess copying.
The virt-v2v command usage only allows you to designate a destination directory for a VM’s disks, rather than destination files. So it does not naturally allow you to directly write the output to a /dev/vdX device. It is possible to trick it using symbolic links. So, for example, knowing that the virtual machine boot disk for smoonen-win will be named smoonen-win-sda, I can run the following:
ln -fs /dev/vdb /tmp/smoonen-win-sda
virt-v2v -i disk smoonen-win.img -o disk -os /tmp --block-driver virtio-scsi
Not all of the methods we will discuss require you to export your virtual machine. But if you are exporting your virtual machine, there are some important considerations to be aware of.
If you are exporting a virtual machine from VCFaaS, you will need to stop your vApp and “download” it. This will initiate the download of an OVA file. The OVA file is in ZIP format and its contents include an OVF descriptor for your virtual machine(s) as well as VMDK files for the VM disks. Extract the VMDK files for use in subsequent steps.
If you are exporting a virtual machine from vCenter, you will need to stop the virtual machine. Although the datastore browser allows you to download the VMDK file directly from the VM folder, it seems to me that this approach ends up with a thick-provisioned VMDK. Instead I recommend using Actions | Template | Export OVF Template, which seems to preserve thin provisioning.
If your virtual machine has only one disk, a naive approach is to create an image template from your VMDK file and then boot a new VSI using this image. This approach is relatively simple and the VPC VSI documentation discusses how to do it. For a VMDK file, the steps are as follows:
1. Convert the VMDK file to qcow2 format: qemu-img convert -f vmdk -O qcow2 smoonen-ubuntu-1.vmdk smoonen-ubuntu-1.qcow2
2. Upload the qcow2 file to IBM Cloud Object Storage.
3. Create a custom image (for example, smoonen-ubuntu-migrated) from the uploaded file.
4. Boot a new VSI using this image.
There are, however, some caveats and downsides to this approach. As mentioned above, this only migrates a single disk, so you will need to use one of the techniques below for secondary disks. More importantly, this process is abusing the notion of an image, which is intended to serve as a reusable template. Instead, this approach creates a single image template for every single virtual machine. This is relatively inefficient and wasteful; William of Ockham would not approve.
Instead of uploading your disk as a VPC image, you could write out your VM disks directly to volumes by temporarily attaching them to another VSI to perform this work. This process is slightly convoluted because you have to create and delete an ephemeral VSI just to create a boot volume in the first place. An optional first step in this process allows you to take advantage of linked clone space efficiency if you choose to upload your own virtual machine template as a custom VSI image. Here are the steps:
1. (Optional) Upload your virtual machine template as a custom VSI image so that boot volumes created from it are space-efficient linked clones.
2. Create an ephemeral VSI to obtain a boot volume, then delete the VSI while retaining its boot volume.
3. Attach the volume to your worker VSI. Use the general-purpose storage profile.
4. Confirm the volume’s size with blockdev --getsize64 /dev/vdb.
5. Write your exported disk directly to the volume: qemu-img convert -f vmdk -O raw smoonen-ubuntu-1.vmdk /dev/vdb
6. If you need virt-v2v-in-place (or virt-v2v with a second copy) to transform your image (for example, to install virtio drivers), run it now.
7. Verify the partition structure with fdisk -l /dev/vdb. Note that if you have resized the boot disk, this may rewrite the backup GPT to the appropriate location.
8. Flush outstanding writes: blockdev --flushbufs /dev/vdb
9. Detach the volume from your worker VSI and use it to create your new VSI, being prepared for the side effects of cloud-init.
As a variation on the previous method, instead of exporting your virtual machine disks, you could boot your virtual machine using an ISO that is capable of reading the disks and transferring them to your worker VSI that will process them and copy them to VPC volumes. This approach is inspired by the old GHOST tool.
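Before writing the raw image onto the volume, it is worth confirming the destination is at least as large as the image's virtual size; a sketch using illustrative numbers in place of blockdev and qemu-img output:

```python
def fits(volume_bytes: int, image_virtual_bytes: int) -> bool:
    """True if the raw image will fit on the destination volume."""
    return volume_bytes >= image_virtual_bytes

# e.g., values you might read from `blockdev --getsize64 /dev/vdb`
# and from the "virtual-size" field of `qemu-img info --output json <file>`
volume = 100 * 1024**3  # 100 GiB destination boot volume
image = 100 * 1024**3   # 100 GiB virtual disk
assert fits(volume, image)
print("image fits on destination volume")
```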
In order to do this, you will likely need to create an IBM Cloud Transit Gateway to connect your source environment (whether in IBM Cloud classic or in IBM Cloud VCFaaS) to the destination VPC where your worker VSI lives. This enables direct network connectivity between the environments.
One approach, noted above, is to use the virt-p2v tool to generate a boot disk from which you will initiate a network connection to your virt-v2v worker to transfer your virtual machine disks.
You could also boot your virtual machine using your preferred (ideally tiny) Linux distribution such as TinyCore Linux, or using a tool such as G4L. However, note that the smaller the distribution, the more likely it is that you would need to customize it or connect it to public repositories to include needed tools. (For example, I found that TinyCore Linux was missing openssh and qemu packages out of the box.) In my case, I had an Ubuntu install ISO handy, and so I attached that to my original virtual machine and booted into it. For the Ubuntu install ISO, if you select the Help button you will find an Enter shell option that allows you to run commands.
The approach I took was to use the dd command to read and write the disks, combined with the gzip command to help with network throughput, combined with the netcat command to transfer over the network. On the destination worker VSI, I ran the following:
nc -l 192.168.100.5 8080 | gunzip | dd of=/dev/vdd bs=16M status=progress
fdisk -l /dev/vdd
blockdev --flushbufs /dev/vdd
On the source side, I had to configure networking, and then ran the following:
# Note that network device name may vary, e.g., depending on BIOS vs. UEFI
ip addr add 10.50.200.3/26 dev ens192
ip route add 0.0.0.0/0 via 10.50.200.1
dd if=/dev/sda bs=16M | gzip | nc -N -v 192.168.100.5 8080
After transferring the disk you could use virt-v2v-in-place or virt-v2v to further transform the disk. Then, as with method 2, you should detach the volumes from your worker VSI and create the VSI that will make actual use of them.
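The outage window for a transfer like this can be roughly estimated from disk size, compression ratio, and link bandwidth. All of the numbers here are illustrative assumptions:

```python
def transfer_hours(disk_gb: float, compression_ratio: float, link_mbps: float) -> float:
    """Rough wall-clock estimate for a compressed network disk copy."""
    bytes_on_wire = disk_gb * 1e9 / compression_ratio
    seconds = bytes_on_wire * 8 / (link_mbps * 1e6)
    return seconds / 3600

# 250GB disk, 2:1 compressible data, 1 Gbps link (illustrative assumptions)
print(round(transfer_hours(250, 2.0, 1000), 2))  # 0.28 hours
```

In practice the pipeline is also bounded by dd read/write speed and gzip throughput, so treat this as a lower bound and measure a test run.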
This method is my favorite method, partly because of its efficiency (export of VMDK and OVA is inefficient) and partly because of its flexibility.
As noted above, it is possible to leverage virt-v2v together with the VMware VDDK toolkit to connect to vCenter and vSphere and directly fetch the virtual machine disks to your worker VSI as well as performing other virt-v2v processing such as installation of virtio drivers. This is quite convoluted due to competing RHEL and Ubuntu limitations, and so it is not currently my preferred method, but it is possible to get it working. This method is available only if you have access to vCenter; it is not applicable to VCFaaS.
You may need to input your vCenter and vSphere hostnames into /etc/hosts to ensure this works. You will also need to know or discover the specific host on which your virtual machine is running. Here is an example command invocation. Note that your vCenter password is specified in a file, and your userid needs to be expressed in domain\user form. You’ll also need to determine the vCenter certificate thumbprint.
virt-v2v -ic vpx://vsphere.local\%5cAdministrator\@smoonen-vc.smoonen.example.com/IBMCloud/cluster1/host000.smoonen.example.com\?no_verify=1 \
smoonen-win \
-ip passwd \
-o disk -os /tmp \
-it vddk \
-io vddk-libdir=vmware-vix-disklib-distrib \
-io vddk-thumbprint=A2:41:6A:FA:81:CA:4B:06:AE:EB:C4:1B:0F:FE:23:22:D0:E8:89:02 \
--block-driver virtio-scsi
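The vddk-thumbprint value is the colon-separated SHA-1 fingerprint of the vCenter certificate. A sketch of computing that form from DER certificate bytes (a placeholder byte string stands in for the real certificate here):

```python
import hashlib

def thumbprint(der_bytes: bytes) -> str:
    """Colon-separated uppercase SHA-1 fingerprint, the form virt-v2v expects."""
    digest = hashlib.sha1(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Placeholder bytes; in practice fetch the server's DER certificate first,
# e.g. via ssl.get_server_certificate and ssl.PEM_cert_to_DER_cert
print(thumbprint(b"example certificate bytes"))
```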
The processes outlined above are somewhat tedious. One implication of this is that you will need to carefully develop and test your process around this. This will also enable you to form an estimate of how long the process will take based on network and disk copy times. Portions of this process can be automated, and you can also perform migrations in parallel.
You may also want or need help in executing this. For this purpose, you could reach out to IBM Consulting. IBM Cloud also has partnerships with PrimaryIO and Wanclouds who can provide consulting services in this space.
My colleague Shinobu Yasuda has written a step-by-step VSI migration guide, including screenshots, demonstrating how he successfully migrated a variety of RHEL and Windows releases from vCenter to VPC VSI.
Previously, IBM Cloud published documentation exclusively recommending the image import method; IBM Cloud has recently published new documentation focused on direct migration to boot volumes, including the migration of VMDK files, the direct network transfer of VM disks, and direct connection to vCenter using VDDK.