Large file transfers into the IBM Cloud

I like to use IBM Cloud Object Storage to transfer large files (e.g., an OVA file) into the IBM Cloud infrastructure private network. Here’s how I do it:

  1. Order an instance of Cloud Object Storage if you don’t already have one
  2. Create a storage bucket with the region and storage class of your choice if you don’t already have one
  3. Create a COS service credential. To ensure interoperability with standard S3 tools, create an HMAC-style credential by adding an {"HMAC": true} configuration parameter when creating the credential (see the CLI sketch after this list).
  4. Download the S3 tool of your choice. I like to use the awscli tool:
      1. pip install awscli
      2. Edit the file ~/.aws/credentials to specify your credentials created above:
        [default]
        aws_access_key_id=...
        aws_secret_access_key=...
  5. Now you can use the aws tool to copy a file to your bucket and to generate a presigned URL that you can use to download it:
    aws --endpoint-url=https://s3-api.us-geo.objectstorage.softlayer.net s3 cp filename s3://bucketname/
    aws --endpoint-url=https://s3-api.us-geo.objectstorage.softlayer.net s3 presign s3://bucketname/filename --expires-in 31536000
    # returns a URL that you can then use with curl
  6. You can use this URL within the IBM Cloud private network to download your file. For example, I can SSH to an ESXi host and use wget to download an OVA file directly to my vSAN datastore. Be sure to adjust the URL to use the correct private endpoint for your storage region (see the sketch below).
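
If you prefer to create the HMAC credential from the command line (step 3), something like the following should work with the IBM Cloud CLI; the key name and instance name here are placeholders, so substitute your own:

    ibmcloud resource service-key-create my-hmac-key Writer \
        --instance-name my-cos-instance \
        --parameters '{"HMAC": true}'
    # the credential output includes cos_hmac_keys with an access_key_id and
    # secret_access_key, which go into ~/.aws/credentials in step 4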
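
For step 6, the download on the ESXi host looks roughly like the sketch below. The bucket name, file name, and datastore path are illustrative, and the private endpoint shown is the legacy us-geo one, so check the endpoint list for your region:

    # swap the public hostname in the presigned URL for the region's private
    # endpoint, then pull the file straight onto the datastore
    wget "https://s3-api.us-geo.objectstorage.service.networklayer.com/bucketname/filename.ova?<presigned query string>" \
        -O /vmfs/volumes/vsanDatastore/filename.ova

Alternatively, since presign only signs the URL locally and never contacts the service, you can pass the private endpoint to --endpoint-url when generating the URL and skip the hostname swap entirely.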
