Among the VM to VSI migration techniques I considered, one possible option was to boot the destination VSI using a GHOST-like ISO tool. This is difficult to accomplish because IBM Cloud does not currently support booting a VSI from an ISO. What I did instead was to craft a qcow2 disk image that loaded this ISO. Any VSI booted from this image would initially load the tool, which could then be used to transfer the actual disk contents for the VM.
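As an illustration of that last step, once the tool image is up, the transfer could be as simple as streaming the source disk over SSH onto the VSI's target volume. The sketch below is hypothetical: it assumes an SSH server is running in the tool environment (TinyCore's default user is `tc`), that the destination volume is attached as `/dev/vda`, and the source path and IP address are placeholders.

```
# Hypothetical transfer, run from the source hypervisor after the VSI has
# booted the tool image. Assumes sshd is running on the VSI and the VM's
# destination volume is attached as /dev/vda (names are placeholders).
dd if=/var/lib/libvirt/images/source-vm.img bs=4M status=progress \
  | ssh tc@192.0.2.10 "sudo dd of=/dev/vda bs=4M"
```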
There are some limitations to this approach. Most importantly, using the tool as an image template entirely obscures the actual underlying OS of any VSI booted from that image; it strikes me as an abuse of the idea of an image template. Furthermore, given the limitations of many small Linux distributions, customizing the image so that it has all of the packages you need is potentially a tedious process. For example, I found that out of the box TinyCore Linux had no openssh, no qemu, and no virtio drivers. Combined with the fact that I also wanted to leverage libguestfs in many cases, this was a challenging limitation.
As a result, I rejected this approach as a viable migration path. However, it was still a fun experiment to build the boot image and boot my VSI with it. Here are the steps I took to create a TinyCore Linux boot image:
- Download the Core ISO from http://www.tinycorelinux.net/16.x/x86/release/ and rename it to `core.iso`.
- Create a raw disk image that will be our UEFI boot image:

  ```
  qemu-img create -f raw core.img 64M
  ```

- Create the EFI partition:

  ```
  parted core.img --script mklabel gpt
  parted core.img --script mkpart ESP fat32 1MiB 100%
  parted core.img --script set 1 boot on
  ```

- Attach the image as a loopback device:

  ```
  LOOPDEV=$(losetup --find --show --partscan core.img)
  ```

- Format the partition:

  ```
  mkfs.vfat "${LOOPDEV}p1"
  ```

- Install GRUB:

  ```
  mkdir -p /mnt/esp
  mount "${LOOPDEV}p1" /mnt/esp
  grub-install --target=x86_64-efi --efi-directory=/mnt/esp \
    --boot-directory=/mnt/esp/boot --removable --recheck --no-nvram
  ```

- Edit `/mnt/esp/boot/grub/grub.cfg`:

  ```
  menuentry "Boot Core" {
      set isofile="/boot/core.iso"
      loopback loop $isofile
      linux (loop)/boot/vmlinuz quiet
      initrd (loop)/boot/core.gz
  }
  ```

- Copy the ISO:

  ```
  cp core.iso /mnt/esp/boot
  ```

- Cleanup:

  ```
  umount /mnt/esp
  losetup -d "$LOOPDEV"
  ```
- Convert to qcow2 (a quick local boot test is sketched after this list):

  ```
  qemu-img convert -f raw -O qcow2 core.img core.qcow2
  ```

- Upload this to COS and create an image from it, as in our migration method #1 (see the CLI sketch after this list). I chose a generic OS.
- Deploy a VSI using this image. Configure profile, volumes, and network as needed.
- Display the VNC console for your VSI. TinyCore booted with the network properly configured using DHCP. At this point, if you had a public gateway attached to the VSI's subnet, you could even install extensions like openssh (`tce-load -wi openssh`) or qemu tools (`tce-load -wi qemu`).
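Before uploading, it can save a round trip to verify that the qcow2 image actually boots under UEFI locally. The sketch below assumes QEMU and OVMF firmware are installed on your workstation; the OVMF path varies by distribution, so treat it as a placeholder.

```
# Optional local smoke test of the UEFI boot image before uploading.
# The OVMF firmware path is distro-specific.
qemu-system-x86_64 -m 1024 \
  -bios /usr/share/ovmf/OVMF.fd \
  -drive file=core.qcow2,format=qcow2
```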
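For the upload and image-creation step, something like the following should work with the IBM Cloud CLI (the COS plugin and the VPC infrastructure plugin). The bucket, region, image name, and `--os-name` value are all placeholders here; check the current flags with `ibmcloud is image-create --help`.

```
# Upload the qcow2 to a COS bucket (bucket and key names are examples):
ibmcloud cos upload --bucket my-migration-bucket --key core.qcow2 --file core.qcow2

# Create a custom VPC image from the COS object; the --os-name value is a
# placeholder for whatever (generic) OS you select.
ibmcloud is image-create tinycore-boot \
  --file cos://us-south/my-migration-bucket/core.qcow2 \
  --os-name ubuntu-20-04-amd64
```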


