I like to use IBM Cloud Object Storage to transfer large files (e.g., an OVA file) into the IBM Cloud infrastructure private network. Here’s how I do it:
- Order an instance of Cloud Object Storage if you don’t already have one
- Create a storage bucket with the region and storage class of your choice if you don’t already have one (see the bucket-creation sketch after this list)
- Create a COS service credential. To ensure interoperability with standard S3 tools, create an HMAC-style credential by adding an `{"HMAC":true}` configuration parameter when creating the credential (see the credential sketch after this list).
- Download the S3 tool of your choice. I like to use the `awscli` tool:

  ```
  pip install awscli
  ```
- Edit the file `~/.aws/credentials` to specify your credentials created above:

  ```
  [default]
  aws_access_key_id=...
  aws_secret_access_key=...
  ```
- Now you can use the `aws` tool to copy a file to your bucket and to generate a presigned URL that you can use to download it:

  ```
  aws --endpoint=https://s3-api.us-geo.objectstorage.softlayer.net s3 cp filename s3://bucketname/
  aws --endpoint=https://s3-api.us-geo.objectstorage.softlayer.net s3 presign s3://bucketname/filename --expires-in 31536000
  # returns a URL that you can then use with curl
  ```
- You can use this URL within the IBM Cloud private network to download your file. For example, I can SSH to an ESXi host and use `wget` to download an OVA file directly to my vSAN datastore (see the download sketch after this list). Be sure to adjust the URL to use the correct private endpoint for your storage region.
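If you'd rather create the bucket from the command line than from the console, something like the following should work once the `aws` CLI is configured as described above. The bucket name and the `us-standard` LocationConstraint are examples of my own; the valid values (which select region and storage class on COS) are listed in the COS documentation:

```
# Create a bucket on the us-geo cross-region endpoint.
# On COS the LocationConstraint value selects the region/storage class
# combination ("us-standard" is just an example value).
aws --endpoint=https://s3-api.us-geo.objectstorage.softlayer.net \
  s3api create-bucket --bucket bucketname \
  --create-bucket-configuration LocationConstraint=us-standard
```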
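For the credential step, here is a sketch of one way to create an HMAC credential from the command line with the `ibmcloud` CLI; the key name and instance name below are placeholders I've invented:

```
# Create a Writer service key with HMAC keys on the COS instance
# ("cos-hmac-key" and "My COS Instance" are example names).
ibmcloud resource service-key-create cos-hmac-key Writer \
  --instance-name "My COS Instance" \
  --parameters '{"HMAC":true}'

# The access_key_id / secret_access_key pair appears under cos_hmac_keys
# in the output; you can re-display it later with:
#   ibmcloud resource service-key cos-hmac-key
```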
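To make the last step concrete, here is a minimal sketch of the download from the ESXi side. The datastore path and the `<private-endpoint>` hostname are placeholders; the actual private endpoint for your region is listed in the COS endpoints documentation:

```
# Run from an SSH session on the ESXi host (inside the private network).
# <private-endpoint> stands in for your region's private COS endpoint, and
# the query string placeholder is the signature portion of the presigned URL.
wget "https://<private-endpoint>/bucketname/filename.ova?<signature-query-from-presign>" \
  -O /vmfs/volumes/vsanDatastore/filename.ova
```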
For large file uploads, it is now also possible to use Aspera for even faster transfers: https://cloud.ibm.com/docs/services/cloud-object-storage/basics?topic=cloud-object-storage-aspera