How to move your Linux EBS backed EC2 AMI to a new region

There are lots of ways to do this out there, and hardly any of them work properly. This is really a reminder to myself of how to do things, but hey, let's share.

Keep in mind that things at Amazon change, so what works today (8 March 2012) may not work tomorrow.

Things you must understand:

A snapshot – it's a copy of an instance's volume at a point in time. When you request one, Amazon makes a carbon copy of the volume (disk) that the instance is using; to get a consistent copy of a root volume, stop the instance first, then fire it up again afterwards.

Copying images of running AMIs – don't do it! The root image will likely be inconsistent, since the OS will be writing things to it. Even if you stop everything and make a copy that appears to work, some time down the line you might find that something doesn't work properly.

ec2-ami-tools – remember that ec2-bundle-vol and ec2-unbundle are asymmetric, i.e. the bundle process takes a file system and bundles it up, while the un-bundling process takes a bundle and creates a file system IMAGE, not the file system itself. This means you then need to write the file system image to a device using something like the dd command.

10 G – no matter what you start with (always < 10G) you'll end up with a 10G file system. Don't ask me why; I transferred an 8G image and ended up with 10G on the other side.

Keys – remember to get your keys. This is a pig and confusing, but I guess that's security for you.

The process is:

1. Take a snapshot of the current AMI

2. Make a volume from the snapshot you just took

3. Mount the volume on your source instance at /source

4. Create, attach, mkfs and mount another volume on your source instance at /target

5. Bundle up /source into /target

6. Migrate the manifest. This will map the right kernel info into your bundle

7. Upload the bundles sitting in /target to a local bucket

8. Migrate the bundles over to a bucket in your target region

At your target region

9. Start an instance at your target site and create and attach 3 volumes to it: /source, /image and /target. /source and /image need a file system on them (mkfs -t ext4 /dev/what-ever); /target can remain raw

10. Download the bundles from your target region bucket to /source

11. Unbundle /source to /image

12. Use dd to write /image to /target

13. Sync and unmount /target

14. Snapshot /target

15. Create an AMI out of it

16. Pop a bottle of champagne

Here is a list of commands to run, in order, to make it work! The example is for moving a USA instance to Singapore using the ec2-ami-tools, which you'll have to install on your instance if they're not already there.

 

On the source site

Create the volume you want to migrate

You can do this using the AWS console – make sure you identify the correct snapshot. Remember the following:

For EBS backed AMI the AMI is associated to a snapshot

When an instance of the AMI is started, a volume is created from the snapshot. This is essentially a copy of the snapshot.

So to identify the right snapshot you need to look at the instance and see what AMI it belongs to, then in the AMI look at what volume is attached, and then look at the volume and see what snapshot it was created from.

 

Create a Volume from the Snapshot belonging to the instance you just stopped

You can tell which snapshot belongs to which AMI by looking at the description field.

Create a volume of the same size as the snapshot, from the snapshot itself, making sure it's in the same zone as the instance you are going to restart.

Restart your instance

Attach the volume you just created. You can identify it by looking at the Snapshot field; the ID there should be the same as the snapshot from which you created it, and of course the snapshot you created must belong to the AMI you are trying to migrate.

Mount the volume on any convenient mount point say /source

[root@domU-12-31-39-14-3A-96 keys]# fdisk -l

 

Disk /dev/xvda1: 8589 MB, 8589934592 bytes

255 heads, 63 sectors/track, 1044 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Disk /dev/xvda1 doesn’t contain a valid partition table

[root@domU-12-31-39-14-3A-96 keys]# mkdir /source /target

[root@domU-12-31-39-14-3A-96 keys]# mount /dev/xvda1 /source

 

Now create another volume of the same size

Attach the 2nd volume

[root@domU-12-31-39-14-3A-96 keys]# fdisk -l

 

Disk /dev/xvda1: 8589 MB, 8589934592 bytes

255 heads, 63 sectors/track, 1044 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Disk /dev/xvda1 doesn’t contain a valid partition table

 

Disk /dev/xvdg: 8589 MB, 8589934592 bytes

255 heads, 63 sectors/track, 1044 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Create a file system on it

[root@domU-12-31-39-14-3A-96 keys]# mkfs -t ext4 /dev/xvdg

mke2fs 1.41.12 (17-May-2010)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=0 blocks, Stripe width=0 blocks

524288 inodes, 2097152 blocks

104857 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=2147483648

64 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 33 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

 

Mount it on a convenient mount point say /target

mount /dev/xvdg /target

Install ruby, ec2-ami-tools, and ec2-api-tools

Set up the environments as required

Get your ACCESS_KEY, SECRET_KEY, EC2_PRIVATE_KEY and EC2_CERT from the security credentials screen
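Setting up "the environments as required" deserves a sketch. Something along these lines worked for me, but every path and filename below is a placeholder – adjust to wherever you installed the tools and dropped your credentials:

```shell
# Environment setup sketch -- all paths and key filenames are placeholders.
export EC2_HOME=/opt/ec2-api-tools          # ec2-api-tools install dir (assumed path)
export EC2_AMITOOL_HOME=/opt/ec2-ami-tools  # ec2-ami-tools install dir (assumed path)
export PATH=$PATH:$EC2_HOME/bin:$EC2_AMITOOL_HOME/bin
export JAVA_HOME=/usr/lib/jvm/jre           # the api tools need Java
export EC2_PRIVATE_KEY=/root/keys/pk-XXXX.pem   # X.509 private key file
export EC2_CERT=/root/keys/cert-XXXX.pem        # X.509 certificate file
export ACCESS_KEY=your-access-key-id
export SECRET_KEY=your-secret-key
```

The variable names on the left are what the commands further down expect; the values are yours to fill in from the security credentials screen.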

Create a volume bundle 

 

ec2-bundle-vol -v /source -d /target --all -k $EC2_PRIVATE_KEY -c $EC2_CERT -u 5978-4310-1532 -r i386 -p ExportAMI

Copying /source/ into the image file /target/ExportAMI…

Excluding:

/dev

/media

/mnt

/proc

/sys

1+0 records in

1+0 records out

1048576 bytes (1.0 MB) copied, 0.00169074 s, 620 MB/s

mke2fs 1.41.12 (17-May-2010)

warning: Unable to get device geometry for /target/ExportAMI

Bundling image file…

Splitting /target/ExportAMI.tar.gz.enc…

Created ExportAMI.part.00

Created ExportAMI.part.01

Created ExportAMI.part.02

Created ExportAMI.part.03

Created ExportAMI.part.04

Created ExportAMI.part.05

Created ExportAMI.part.06

Created ExportAMI.part.07

Created ExportAMI.part.08

……………………………………….

……………………………………….

………………………………………

Created ExportAMI.part.87

Created ExportAMI.part.88

Created ExportAMI.part.89

Created ExportAMI.part.90

Created ExportAMI.part.91

Created ExportAMI.part.92

Created ExportAMI.part.93

Created ExportAMI.part.94

Created ExportAMI.part.95

Created ExportAMI.part.96

Generating digests for each part…

Digests generated.

Unable to read instance meta-data for ancestor-ami-ids

Unable to read instance meta-data for ramdisk-id

Unable to read instance meta-data for product-codes

Creating bundle manifest…

ec2-bundle-vol complete.
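Those numbered parts are nothing magic – the bundle is just the (compressed, encrypted) image split into roughly 10 MB chunks, and concatenating them in order restores the original stream. A quick local sketch of the idea, with a plain file and no encryption:

```shell
# Demonstrate that split parts concatenate back to the original file.
split_demo() {
  local d
  d=$(mktemp -d)
  head -c 300000 /dev/urandom > "$d/bundle"   # stand-in for ExportAMI.tar.gz.enc
  ( cd "$d" && split -b 65536 bundle part. )  # chunks: part.aa, part.ab, ...
  cat "$d"/part.* > "$d/rejoined"             # lexical glob order = original order
  cmp -s "$d/bundle" "$d/rejoined" && echo "parts rejoin cleanly"
  rm -rf "$d"
}
split_demo
```

This is also why the download step later insists on fetching every single part before the unbundle can work.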

 

Migrate the manifest – this sets up the proper kernel and ram disk for the new region.

ec2-migrate-manifest -m /source/ExportAMI.manifest.xml -c $EC2_CERT -k $EC2_PRIVATE_KEY -a $ACCESS_KEY -s $SECRET_KEY --region ap-southeast-1

[root@ip-10-128-90-250 keys]# ec2-migrate-manifest -m /source/ExportAMI.manifest.xml -c $EC2_CERT -k $EC2_PRIVATE_KEY -a $ACCESS_KEY -s $SECRET_KEY --region ap-southeast-1

Backing up manifest…

warning: peer certificate won’t be verified in this SSL session

warning: peer certificate won’t be verified in this SSL session

warning: peer certificate won’t be verified in this SSL session

Successfully migrated /source/ExportAMI.manifest.xml

It is now suitable for use in ap-southeast-1.

 

 

Upload your bundle to a local bucket

ec2-upload-bundle -b localbucket -m /target/ExportAMI.manifest.xml -a $ACCESS_KEY -s $SECRET_KEY

 

[root@domU-12-31-39-14-3A-96 keys]# ec2-upload-bundle -b localbucket -m /target/ExportAMI.manifest.xml  -a $ACCESS_KEY -s $SECRET_KEY

Creating bucket…

Uploading bundled image parts to the S3 bucket localbucket …

Uploaded ExportAMI.part.00

Uploaded ExportAMI.part.01

Uploaded ExportAMI.part.02

Uploaded ExportAMI.part.03

Uploaded ExportAMI.part.04

Uploaded ExportAMI.part.05

Uploaded ExportAMI.part.06

………………………………………….

………………………………………….

………………………………………….

Uploaded ExportAMI.part.90

Uploaded ExportAMI.part.91

Uploaded ExportAMI.part.92

Uploaded ExportAMI.part.93

Uploaded ExportAMI.part.94

Uploaded ExportAMI.part.95

Uploaded ExportAMI.part.96

Uploading manifest …

Uploaded manifest.

Bundle upload completed.

 

Migrate the bundle to your target region

ec2-migrate-bundle -c $EC2_CERT -k $EC2_PRIVATE_KEY -m ExportAMI.manifest.xml -l ap-southeast-1 -b localbucket -d mysingbucket -a $ACCESS_KEY -s $SECRET_KEY

 

Note that some names are forbidden, e.g. remotebucket. If you get an ERROR: Server.AccessDenied(403): Access Denied message, try changing the name. Also avoid any characters other than plain letters [A-Za-z] unless you want to take your chances.

 

 

[root@domU-12-31-39-14-3A-96 keys]# ec2-migrate-bundle -c $EC2_CERT -k $EC2_PRIVATE_KEY -m ExportAMI.manifest.xml -l ap-southeast-1 -b localbucket -d mysingbucket -a $ACCESS_KEY -s $SECRET_KEY

Region not provided, guessing from S3 location: ap-southeast-1

Downloading manifest ExportAMI.manifest.xml from localbucket to /tmp/ami-migration-ExportAMI.manifest.xml/temp-migration.manifest.xml …

warning: peer certificate won’t be verified in this SSL session

warning: peer certificate won’t be verified in this SSL session

Copying ‘ExportAMI.part.00’…

Copying ‘ExportAMI.part.01’…

Copying ‘ExportAMI.part.02’…

Copying ‘ExportAMI.part.03’…

Copying ‘ExportAMI.part.04’…

Copying ‘ExportAMI.part.05’…

Copying ‘ExportAMI.part.06’…

………………………………………….

………………………………………….

………………………………………….

Copying ‘ExportAMI.part.90’…

Copying ‘ExportAMI.part.91’…

Copying ‘ExportAMI.part.92’…

Copying ‘ExportAMI.part.93’…

Copying ‘ExportAMI.part.94’…

Copying ‘ExportAMI.part.95’…

Copying ‘ExportAMI.part.96’…

 

Your new bundle is in S3 at the following location:

mysingbucket/ExportAMI.manifest.xml

Please register it using your favorite EC2 client.

 

 

Now you have 2 options: you can either

  1. fire up any AMI in your target location, or
  2. register the one you have just transferred over and fire that up.

 

Option 2 – once this is done you have to register your new AMI in the new region, and you can do that from the source instance. This creates an instance-store backed AMI. The next command is part of the ec2-api-tools set, so you need to change the environment before you call it.

 

ec2-register mysingbucket/ExportAMI.manifest.xml --region ap-southeast-1

[root@domU-12-31-39-14-3A-96 keys]# ec2-register mysingbucket/ExportAMI.manifest.xml -n ExportAMI --region ap-southeast-1

IMAGE   ami-8cbdf9de

 

On the target site now

You can now go to the AWS console and fire up the AMI you have just registered; however, what we want to end up with is an EBS backed AMI. To get there, follow on.

 

From the AWS console create 2 volumes of the same size as your original AMI

 

Attach them to the instance you have just started

 

Create a file system on each one and mount one on /source and the other on /image

[root@ip-10-128-90-250 ~]# fdisk -l

 

Disk /dev/xvda1: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Disk /dev/xvda1 doesn’t contain a valid partition table

 

Disk /dev/xvda2: 160.1 GB, 160104972288 bytes

255 heads, 63 sectors/track, 19464 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Disk /dev/xvda2 doesn’t contain a valid partition table

 

Disk /dev/xvda3: 939 MB, 939524096 bytes

255 heads, 63 sectors/track, 114 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Disk /dev/xvda3 doesn’t contain a valid partition table

 

Disk /dev/xvdf: 8589 MB, 8589934592 bytes

255 heads, 63 sectors/track, 1044 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Disk /dev/xvdf doesn’t contain a valid partition table

 

Create the file system

 

[root@ip-10-128-90-250 ~]# mkfs -t ext4 /dev/xvdf

mke2fs 1.41.12 (17-May-2010)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=0 blocks, Stripe width=0 blocks

524288 inodes, 2097152 blocks

104857 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=2147483648

64 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 25 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

 

Mount what will be the source

[root@ip-10-128-90-250 ~]# mount /dev/xvdf /source

 

Now do the same for the image volume. First attach it to the instance using the AWS console,

then identify it in your instance, again with fdisk -l

 

[root@ip-10-128-90-250 ~]# fdisk -l

 

Disk /dev/xvda1: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Disk /dev/xvda1 doesn’t contain a valid partition table

 

Disk /dev/xvda2: 160.1 GB, 160104972288 bytes

255 heads, 63 sectors/track, 19464 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

Disk /dev/xvda2 doesn’t contain a valid partition table

Disk /dev/xvda3: 939 MB, 939524096 bytes

255 heads, 63 sectors/track, 114 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Disk /dev/xvda3 doesn’t contain a valid partition table

Disk /dev/xvdf: 8589 MB, 8589934592 bytes

255 heads, 63 sectors/track, 1044 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Disk /dev/xvdf doesn’t contain a valid partition table

 

Disk /dev/xvdg: 8589 MB, 8589934592 bytes

255 heads, 63 sectors/track, 1044 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Disk /dev/xvdg doesn’t contain a valid partition table

 

Create a file system on it

[root@ip-10-128-90-250 ~]# mkfs -t ext4 /dev/xvdg

mke2fs 1.41.12 (17-May-2010)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=0 blocks, Stripe width=0 blocks

524288 inodes, 2097152 blocks

104857 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=2147483648

64 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

 

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 39 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

Mount it on your image mount point – remember which volume is which; you can match them up by looking at the output of the mount command.

[root@ip-10-128-90-250 ~]# mount

/dev/xvda1 on / type ext4 (rw)

none on /proc type proc (rw)

none on /sys type sysfs (rw)

none on /dev/pts type devpts (rw,gid=5,mode=620)

none on /dev/shm type tmpfs (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

/dev/xvdf on /source type ext4 (rw)

/dev/xvdg on /image type ext4 (rw)

[root@ip-10-128-90-250 ~]#

 

And on the AWS console you can look at the Attachment information:

i-30907964:/dev/sdf (attached) – the last letter is the only identifier; in this case this one is the source

i-30907964:/dev/sdg (attached) – and this one is our image mount

Now source your ec2-ami-tools environment again and download your bundle to the source mount on your target location. Also copy over any *.pem key files from your source system to your new instance, since the bundling process leaves them out for security.

Download bundle to a local volume

ec2-download-bundle -b mysingbucket -m ExportAMI.manifest.xml -a $ACCESS_KEY -s $SECRET_KEY -k $EC2_PRIVATE_KEY -d /source

[root@ip-10-128-90-250 keys]# ec2-download-bundle -b mysingbucket -m ExportAMI.manifest.xml -a $ACCESS_KEY -s $SECRET_KEY -k $EC2_PRIVATE_KEY -d /source

Downloading manifest ExportAMI.manifest.xml from mysingbucket to /source/ExportAMI.manifest.xml …

Downloading part ExportAMI.part.00 to /source/ExportAMI.part.00 …

Downloaded ExportAMI.part.00 from mysingbucket

Downloading part ExportAMI.part.01 to /source/ExportAMI.part.01 …

Downloaded ExportAMI.part.01 from mysingbucket

Downloading part ExportAMI.part.02 to /source/ExportAMI.part.02 …

———————————————-

———————————————-

———————————————-

Downloading part ExportAMI.part.94 to /source/ExportAMI.part.94 …

Downloaded ExportAMI.part.94 from mysingbucket

Downloading part ExportAMI.part.95 to /source/ExportAMI.part.95 …

Downloaded ExportAMI.part.95 from mysingbucket

Downloading part ExportAMI.part.96 to /source/ExportAMI.part.96 …

Downloaded ExportAMI.part.96 from mysingbucket

 

Unbundle.

Now you need to unbundle your image into your image volume

ec2-unbundle -k $EC2_PRIVATE_KEY -m /source/ExportAMI.manifest.xml -s /source -d /image

 

Note the bundle and unbundle commands are asymmetric. The bundle command takes a file system as its input, while unbundle outputs a file system image. You have to write the image to a device before you can use it.


Write image to a volume.

Now you have a file on your image disk that contains an image of your original file system, and in order to be able to boot it you need to write the image to a volume. This you can do using the Linux dd command.

After creating and attaching a 3rd volume of 10G:

dd if=/image/ExportAMI of=/dev/xvdh

Note: no need to create a file system on this volume

Where xvdh is the target device.
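dd does a raw byte-for-byte copy, so a quick sanity check is to hash what you read back against the image file. Here's a local sketch of that idea, with plain files standing in for /image/ExportAMI and /dev/xvdh so you can try it without a spare volume:

```shell
# dd copies raw bytes; source image and destination should hash identically.
dd_verify_demo() {
  local d a b
  d=$(mktemp -d)
  head -c 524288 /dev/urandom > "$d/ExportAMI"           # stand-in for the unbundled image
  dd if="$d/ExportAMI" of="$d/volume" bs=64k 2>/dev/null # stand-in for /dev/xvdh
  sync
  a=$(md5sum < "$d/ExportAMI" | cut -d' ' -f1)
  b=$(md5sum < "$d/volume"    | cut -d' ' -f1)
  [ "$a" = "$b" ] && echo "write verified"
  rm -rf "$d"
}
dd_verify_demo
```

On the real volume you'd read back the same number of bytes as the image (dd with count=) before hashing, since the device is bigger than the image.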

Identify your target volume, detach it from the instance and take a snapshot of it.

Now from the snapshot you can register an EBS backed AMI – when you create the instance, make sure it's got the right kernel.

 

Hell, that was a long-winded way of doing things! If you think that was a waste of time, then you can do all of the above as a one-liner using scp, gunzip & dd… will tell you later how 😉
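For the impatient: my guess at the shape of that one-liner is in the comment below – the host and device names are placeholders, and it's untested, so take it as a sketch. The pipeline itself is easy to convince yourself of locally, with files standing in for the devices and no ssh hop:

```shell
# On real hosts the one-liner would presumably be something like (untested,
# placeholder host and devices):
#   ssh root@SOURCE-HOST 'dd if=/dev/xvdf | gzip -c' | gunzip | dd of=/dev/xvdg
# Local round-trip of the same pipeline, with files instead of devices:
pipe_demo() {
  local d
  d=$(mktemp -d)
  head -c 1048576 /dev/urandom > "$d/disk.img"   # fake 1 MiB "volume"
  gzip -c "$d/disk.img" | gunzip | dd of="$d/copy.img" 2>/dev/null
  cmp -s "$d/disk.img" "$d/copy.img" && echo "images match"
  rm -rf "$d"
}
pipe_demo
```

The same consistency caveat from the top of this post applies: stream a volume only while nothing is writing to it.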

 
