Deep Dive: Retrieving OpenStack Nova Instance Console URLs with XVP and XenAPI/XenServer

This post is a deep dive into what happens in Nova (and where in the code) when a console URL is retrieved via the nova API for a Nova configuration backed by XVP and XenServer/XenAPI. Hopefully the methods used in Nova’s code will not change over time, and this guide will remain a good starting point.

Example nova client call:

nova get-vnc-console [uuid] xvpvnc

And the call returns:

+--------+-------------------------------------------------------------------------------------------------------+
| Type   | Url                                                                                                   |
+--------+-------------------------------------------------------------------------------------------------------+
| xvpvnc | https://URL:PORT/console?token=TOKEN |
+--------+-------------------------------------------------------------------------------------------------------+
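
The same call can also be made programmatically through python-novaclient. Here is a minimal sketch using the 2012-era v1_1 client; the credentials, endpoint, and instance UUID are placeholders:

# Minimal python-novaclient sketch; credentials, endpoint, and UUID are placeholders.
from novaclient.v1_1 import client

nova = client.Client('myuser', 'mypassword', 'mytenant',
                     'https://keystone.example.com:5000/v2.0/')

server = nova.servers.get('00000000-0000-0000-0000-000000000000')
# Returns a dict such as:
# {'console': {'type': 'xvpvnc', 'url': 'https://URL:PORT/console?token=TOKEN'}}
console = server.get_vnc_console('xvpvnc')
print console['console']['url']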

One thing I particularly enjoy about the console URL call in Nova is that it is synchronous and has to reach all the way down to the VM level. Most calls in Nova are asynchronous, so console is a wonderful test of your cloud’s plumbing. If the call takes longer than rpc_response/rpc_cast_timeout (60/30 sec respectively), a 500 will bubble up to the user.

It helps to understand how XenServer consoles work in general.

  • XVP is an open source project which serves as a proxy to hypervisor console sessions. Interestingly enough, XVP is no longer used in Nova. The underpinnings of Console were changed in vnc-console-cleanup but the code is still around (console/xvp.py).
  • A XenServer VM has a console attribute associated with it. Console is an object in XenAPI.

This Deep Dive has two major sections:

  1. Generation of the console URL
  2. Accessing the console URL

How is the console URL generated?

[Figure: console_url]

 1) nova-api receives and validates the console request, and then makes a request to the compute API.

  • api/openstack/compute/contrib/consoles.py
  • def get_vnc_console

2) The compute API receives the request and does two things: (2a) calls the compute RPC API to gather connection information and (2b) calls the console authentication service.

  • compute/api.py
  • def get_vnc_console

2a) The compute manager receives the RPC call from (2). An authentication token is generated. For XVP consoles, a URL is built from FLAGS.xvpvncproxy_base_url and the generated token (sketched below). driver.get_vnc_console is called.

  • compute/manager.py
  • def get_vnc_console
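
Roughly, the XVP branch of that method looks like this (my paraphrase of the Folsom-era code, not a verbatim copy; FLAGS is Nova’s global config object):

# Paraphrased sketch of compute/manager.py get_vnc_console for xvpvnc.
import uuid

def get_vnc_console(self, context, console_type, instance):
    token = str(uuid.uuid4())
    if console_type == 'xvpvnc':
        access_url = '%s?token=%s' % (FLAGS.xvpvncproxy_base_url, token)
    else:
        # the novnc branch builds the URL from novncproxy_base_url instead
        access_url = '%s?token=%s' % (FLAGS.novncproxy_base_url, token)

    # Ask the virt driver (XenAPI here) where the console actually lives.
    connect_info = self.driver.get_vnc_console(instance)
    connect_info['token'] = token
    connect_info['access_url'] = access_url
    return connect_info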

2a1) driver is an abstraction over the configured virt library, xenapi in this case. This just calls vmops’ get_vnc_console. XenAPI information about the instance is retrieved, and a Xen console URL local to the hypervisor is generated and returned.

  • virt/xenapi/driver.py
  • def get_vnc_console
  • virt/xenapi/vmops.py
  • def get_vnc_console

2b) Taking the details from (2a1), the consoleauth RPC API is called. The token generated in (2a) is added to memcache with a TTL of CONF.console_token_ttl (sketched below).

  • consoleauth/manager.py
  • def authorize_console
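
In other words, authorize_console serializes the connection details and stores them in memcache under the token, expiring after CONF.console_token_ttl seconds. A paraphrased sketch (field names slightly simplified; time and jsonutils are assumed imported at module level, as in Nova):

# Paraphrased sketch of consoleauth/manager.py authorize_console.
# self.mc is the manager's memcache client; CONF is nova's config object.
def authorize_console(self, context, token, console_type, host, port,
                      internal_access_path, instance_uuid=None):
    token_dict = {'token': token,
                  'instance_uuid': instance_uuid,
                  'console_type': console_type,
                  'host': host,
                  'port': port,
                  'internal_access_path': internal_access_path,
                  'last_activity_at': time.time()}
    data = jsonutils.dumps(token_dict)
    self.mc.set(token.encode('UTF-8'), data, CONF.console_token_ttl)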

What happens when the console URL is accessed?

[Figure: console_access]

1) The request reaches nova-xvpvncproxy, and a call to validate the token is made on the consoleauth RPC API.

  • vnc/xvp_proxy.py
  • def __call__

2) The token in the request is checked against the token stored in the previous section (2b). Compute’s RPC API is called to validate the console’s port against the token’s port (sketched below).

  • consoleauth/manager.py
  • def check_token
  • def _validate_token
  • compute/manager.py
  • def validate_console_port
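
A paraphrased sketch of the validation side, which mirrors the write from (2b); helper and field names are slightly simplified:

# Paraphrased sketch of consoleauth/manager.py check_token/_validate_token.
def check_token(self, context, token):
    token_str = self.mc.get(token.encode('UTF-8'))
    if token_str is None:
        return None                      # expired, or never issued
    token_dict = jsonutils.loads(token_str)
    if self._validate_token(context, token_dict):
        return token_dict
    return None

def _validate_token(self, context, token):
    instance = self.db.instance_get_by_uuid(context, token['instance_uuid'])
    # Ask the compute node that owns the instance whether this host/port
    # still maps to that instance's console.
    return self.compute_rpcapi.validate_console_port(
        context, instance, token['port'], token['console_type'])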

3) nova-xvpvncproxy proxies a connection to the console on the hypervisor (sketched below).

  • vnc/xvp_proxy.py
  • def proxy_connection
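
Conceptually, proxying boils down to pumping bytes in both directions between the client socket and the hypervisor’s console socket until each side closes. A simplified, eventlet-flavored sketch in the spirit of proxy_connection (the real code lives inside the proxy’s WSGI app and handles the token/URL handshake first):

# Conceptual sketch of a bidirectional proxy loop.
import eventlet

def _one_way(source, dest):
    while True:
        data = source.recv(32 * 1024)
        if not data:
            break
        dest.sendall(data)

def proxy_connection(client_sock, console_sock):
    pool = eventlet.GreenPool(2)
    pool.spawn(_one_way, client_sock, console_sock)
    pool.spawn(_one_way, console_sock, client_sock)
    pool.waitall()          # wait for both directions to finish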

Deep Dive: OpenStack Nova Snapshot Image Creation with XenAPI/XenServer and Glance

Based on currently available code (nova: a77c0c50166aac04f0707af25946557fbd43ad44 2012-11-02/python-glanceclient: 16aafa728e4b8309b16bcc120b10bc20372883f4 2012-11-07/glance: 9dae32d60fc285d03fdb5586e3368d229485fdb4)

This is a deep dive into what happens (and where in the code) during image creation with a Nova/Glance configuration that is backed by XenServer/XenAPI. Hopefully the methods used in Glance/Nova’s code will not change over time, and this guide will remain a good starting point.

Disclaimer: I am _not_ a developer, and these are really just best guesses. Corrections are welcome.

 

1) nova-api receives an imaging request. The request is validated, checking for a name and making sure the request is within quotas. Instance data is retrieved, as well as block device mappings. If the instance is volume backed, a separate compute API call is made to snapshot (self.compute_api.snapshot_volume_backed). For this deep dive, we’ll assume there is no block device mapping. self.compute_api.snapshot is called. The newly created image UUID is returned.

  • nova/api/openstack/compute/servers.py
  • def _action_create_image

2) The compute API gets the request and calls _create_image. The instance’s task state is set to IMAGE_SNAPSHOT. Notifications are created for the state change. Several properties are collected about the image, including the minimum RAM, customer, and base image ref. The non-inheritable instance system metadata is also collected. (2a, 2b, 2c) self.image_service.create and (3) self.compute_rpcapi.snapshot_instance are called.

  • nova/compute/api.py
  • def snapshot
  • def _create_image

2a) The collected metadata from (2) is put into a glance-friendly format and sent to glance. The glance client’s create is called.

  • nova/image/glance.py
  • def create

2b) Glance (client) sends a POST to the glance server at /v1/images with the image metadata gathered in (2a).

  • glanceclient/v1/images.py
  • def create

2c) Glance (server) receives the POST. Per the code comments:

Upon a successful save of the image data and metadata, a response
containing metadata about the image is returned, including its
opaque identifier.

  • glance/api/v1
  • def create
  • def _handle_source

3) Compute RPC API casts a message to the queue for the instance’s compute node.

  • nova/compute/rpcapi.py
  • def snapshot_instance

4) The instance’s power state is read and updated. (4a) The XenAPI driver’s snapshot() is called. Notifications are created for the snapshot’s start and end.

  • nova/compute/manager.py
  • def snapshot_instance

4a) The vmops snapshot is called (4a1).

  • nova/virt/xenapi/driver.py
  • def snapshot

4a1) The snapshot is created in XenServer via (4a1i) vm_utils, and (4a1ii) uploaded to glance. The code’s comments say this:

Steps involved in a XenServer snapshot:

1. XAPI-Snapshot: Snapshotting the instance using XenAPI. This
creates: Snapshot (Template) VM, Snapshot VBD, Snapshot VDI,
Snapshot VHD
2. Wait-for-coalesce: The Snapshot VDI and Instance VDI both point to
a ‘base-copy’ VDI. The base_copy is immutable and may be chained
with other base_copies. If chained, the base_copies
coalesce together, so, we must wait for this coalescing to occur to
get a stable representation of the data on disk.
3. Push-to-glance: Once coalesced, we call a plugin on the XenServer
that will bundle the VHDs together and then push the bundle into
Glance.

  • nova/virt/xenapi/vmops.py
  • def snapshot

4a1i) The instance’s root disk is recorded and its VHD parent is also recorded. The SR is recorded. The instance’s root VDI is snapshotted. Operations are blocked until a coalesce completes in _wait_for_vhd_coalesce (4a1i-1).

  • nova/virt/xenapi/vm_utils.py
  • def snapshot_attached_here

4a1i-1) The end result of this process is outlined in the code comments:

Before coalesce:

* original_parent_vhd
    * parent_vhd
        snapshot

After coalesce:

* parent_vhd
    snapshot

In (4a1i) the original parent UUID was recorded. In a nutshell, the code ensures the desired layout above is reached before allowing the snapshot to continue: it polls up to CONF.xenapi_vhd_coalesce_max_attempts times, sleeping CONF.xenapi_vhd_coalesce_poll_interval between attempts. On each attempt the SR is scanned and original_parent_uuid is compared to the current parent_uuid; if they don’t match, the coalescing has not completed yet, so we wait a while and check again (sketched below).

  • nova/virt/xenapi/vm_utils.py
  • def _wait_for_vhd_coalesce
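
The loop looks roughly like this (my paraphrase of the Folsom-era code; _scan_sr and _get_vhd_parent_uuid are the vm_utils helpers that rescan the SR and read a VDI’s parent, and greenthread comes from eventlet):

# Paraphrased sketch of vm_utils._wait_for_vhd_coalesce.
def _wait_for_vhd_coalesce(session, instance, sr_ref, vdi_ref,
                           original_parent_uuid):
    max_attempts = CONF.xenapi_vhd_coalesce_max_attempts
    for i in xrange(max_attempts):
        _scan_sr(session, sr_ref)
        parent_uuid = _get_vhd_parent_uuid(session, vdi_ref)
        if original_parent_uuid and (parent_uuid != original_parent_uuid):
            # Still chained through an intermediate VHD; give XenServer time
            # to coalesce, then re-scan and check again.
            greenthread.sleep(CONF.xenapi_vhd_coalesce_poll_interval)
        else:
            return parent_uuid
    raise exception.NovaException(
        _('VHD coalesce attempts exceeded (%d)') % max_attempts)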

4a1ii) The glance API servers are retrieved from configuration. The glance upload_vhd XenAPI plugin is called.

  • nova/virt/xenapi/vm_utils.py
  • def upload_image

4a2) A staging area is created, prepared, and _upload_tarball is called.

  • plugins/xenserver/xenapi/etc/xapi.d/plugins/glance
  • def upload_vhd

4a3) The staging area is prepared. This basically links the snapshot VHDs into a temporary folder in the SR (sketched below).

  • plugins/xenserver/xenapi/etc/xapi.d/plugins/utils.py
  • def prepare_staging_area
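
A paraphrased sketch of that step; whether the plugin hard-links or symlinks has varied across versions, but either way the VHDs are never copied, only exposed under predictable #.vhd names:

# Paraphrased sketch of prepare_staging_area in the XenAPI 'utils' plugin.
import os

def prepare_staging_area(sr_path, staging_path, vdi_uuids, seq_num=0):
    for vdi_uuid in vdi_uuids:
        source = os.path.join(sr_path, "%s.vhd" % vdi_uuid)
        link_name = os.path.join(staging_path, "%d.vhd" % seq_num)
        os.link(source, link_name)       # link, don't copy
        seq_num += 1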

4a4) The comments say it best:

Create a tarball of the image and then stream that into Glance
using chunked-transfer-encoded HTTP.

A URL is constructed and a connection is opened to it. The image meta properties (like status) are collected and added as HTTP headers. The tarball is created and streamed to glance in CHUNK_SIZE increments. The HTTP stream is then terminated, the response is checked for an OK from glance, and the result is reported accordingly. (A sketch follows the file list below.)

  • plugins/xenserver/xenapi/etc/xapi.d/plugins/glance
  • def _upload_tarball
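
Conceptually, the plugin does something like the following simplified sketch (Python 2, as in the plugin itself; the header names, tar invocation, and error handling are approximations rather than the plugin’s exact code):

# Conceptual sketch of _upload_tarball: stream a gzipped tarball of the
# staging area to Glance using chunked transfer-encoding.
import httplib
import subprocess

CHUNK_SIZE = 8192

def upload_tarball(staging_path, image_id, glance_host, glance_port, headers):
    conn = httplib.HTTPConnection(glance_host, glance_port)
    conn.putrequest('PUT', '/v1/images/%s' % image_id)
    headers['transfer-encoding'] = 'chunked'
    for header, value in headers.items():
        conn.putheader(header, value)
    conn.endheaders()

    # tar the staging directory and feed its stdout to the socket in chunks
    tar_proc = subprocess.Popen(['tar', '-zc', '-C', staging_path, '.'],
                                stdout=subprocess.PIPE)
    while True:
        chunk = tar_proc.stdout.read(CHUNK_SIZE)
        if not chunk:
            break
        conn.send("%x\r\n%s\r\n" % (len(chunk), chunk))
    conn.send("0\r\n\r\n")               # terminate the chunked stream

    resp = conn.getresponse()
    if resp.status != httplib.OK:
        raise Exception("Image upload failed: %s" % resp.read())
    conn.close()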

(Glance Server)

5) I’ve removed some of the obvious URL routing functions in glance to get down to the meat of this process. Basically, the PUT request goes to the glance API. The API interacts with the registry again, but this time there is data to be uploaded. The image’s metadata is validated for activation, and then _upload_and_activate is called. _upload_and_activate is basically a way to call _upload and, if it succeeds, activate the image. _upload checks to see if we’re copying, but we’re not. It also checks that the HTTP request is application/octet-stream. Then, an object store like swift is inferred from the request or taken from the glance configuration (self.get_store_or_400). Finally, the image is added to the object store, its checksum is verified, and the glance registry is updated. Notifications are also sent for image.upload.

  • glance/api/v1/images.py
  • def update
  • def _handle_source
  • def _upload_and_activate
  • def _upload

Deep Dive: OpenStack Nova Rescue Mode with XenAPI/XenServer

Based on currently available code (a77c0c50166aac04f0707af25946557fbd43ad44 2012-11-02)

This is a deep dive into what happens (and where in the code) during a rescue/unrescue scenario with a Nova configuration that is backed by XenServer/XenAPI. Hopefully the methods used in Nova’s code will not change over time, and this guide will remain a good starting point.

Rescue

1) nova-api receives a rescue request. A new admin password is generated via utils.generate_password, meeting the FLAGS.password_length length requirement. The API calls rescue on the compute API.

  • nova/api/openstack/compute/contrib/rescue.py
  • def _rescue

2) The compute API updates the vm_state to RESCUING, and calls the compute rpcapi rescue_instance with the same details.

  • nova/compute/api.py
  • def rescue

3) The RPC API casts a rescue_instance message to the compute node’s message queue.

  • nova/compute/rpcapi.py
  • def rescue_instance

4) nova-compute consumes the message in the queue containing the rescue request. The admin password is retrieved; if one was not passed this far, one is generated via utils.generate_password with the same flags as in step 1. It then records details about the source instance, like networking and image details. The compute driver’s rescue function is called. After that (4a-4c) completes, the instance’s vm_state is updated to rescued.

  • nova/compute/manager.py
  • def rescue_instance

4a) This abstraction was skipped over in the last two deep dives, but for the sake of completeness: Driver.rescue is called. This just calls _vmops.rescue, where the real work happens.

  • nova/virt/xenapi/driver.py
  • def rescue

4b) Checks are performed to ensure the instance isn’t in rescue mode already. The original instance is shut down via XenAPI. The original instance is bootlocked. A new instance is spawned with -rescue in the name-label.

  • nova/virt/xenapi/vmops.py
  • def rescue

4c) A new VM is created just like any other VM, with the source VM’s metadata. The root volume from the instance we are rescuing is attached as a secondary disk. The instance’s networking is the same; however, the new hostname is RESCUE-hostname.

  • nova/virt/xenapi/vmops.py
  • def spawn -> attach_disks_step rescue condition

Unrescue

1) nova-api receives an unrescue request.

  • nova/api/openstack/compute/contrib/rescue.py
  • def _unrescue

2) The compute API updates the vm_state to UNRESCUING, and calls the compute rpcapi unrescue_instance with the same details.

  • nova/compute/api.py
  • def unrescue

3) The RPC API casts an unrescue_instance message to the compute node’s message queue.

  • nova/compute/rpcapi.py
  • def unrescue_instance

4) The compute manager receives the unrescue_instance message and calls the driver’s unrescue method.

  • nova/compute/manager.py
  • def unrescue_instance

4a)  Driver.unrescue is called. This just calls _vmops.unrescue, where the real work happens.

  • nova/virt/xenapi/driver.py
  • def unrescue

4b) The rescue VM is found. Checks are done to ensure the VM is in rescue mode. The original VM is found. _destroy_rescue_instance is performed on the rescue instance (4b1). After that completes, the source VM’s bootlock is released and the VM is started.

  • nova/virt/xenapi/vmops.py
  • def unrescue

4b1) A hard shutdown is issued on the rescue instance. Via XenAPI, the root disk of the original instance is found. All VDIs attached to the rescue instance are destroyed, omitting the original instance’s root. The rescue VM is destroyed. (A sketch follows the file list below.)

  • nova/virt/xenapi/vmops.py
  • def _destroy_rescue_instance
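
A paraphrased sketch of that cleanup; the helpers used to find VDIs here are illustrative (the real code goes through vm_utils), but the XenAPI calls are the ones described above:

# Paraphrased sketch of vmops._destroy_rescue_instance.
def _destroy_rescue_instance(self, rescue_vm_ref, original_vm_ref):
    # Hard-shutdown the rescue VM.
    self._session.call_xenapi('VM.hard_shutdown', rescue_vm_ref)

    # Destroy every VDI attached to the rescue VM except the original
    # instance's root disk, which must survive the unrescue.
    original_root_vdi = self._find_root_vdi_ref(original_vm_ref)
    for vdi_ref in self._find_vdi_refs(rescue_vm_ref):
        if vdi_ref != original_root_vdi:
            self._session.call_xenapi('VDI.destroy', vdi_ref)

    # Finally remove the rescue VM record itself.
    self._session.call_xenapi('VM.destroy', rescue_vm_ref)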

 

Deep Dive: OpenStack Nova Resize Down with XenAPI/XenServer

Based on the currently available code (commit 114109dbf4094ae6b6333d41c84bebf6f85c4e48 – 2012-09-13)

This is a deep dive into what happens (and where in the code) during a resize down (e.g., flavor 4 to flavor 2) with a Nova configuration that is backed by XenServer/XenAPI. Hopefully the methods used in Nova’s code will not change over time, and this guide will remain a good starting point.

Steps 1-6a are identical to my previous entry, “Deep Dive: OpenStack Nova Resize Up with XenAPI/XenServer”. This deep dive will examine the divergence between resize up and resize down in Nova, as there are a few key differences.

6b) The instance resize progress gets an update. The VM is shutdown via XenAPI.

  • ./nova/virt/xenapi/vmops.py
  • def _migrate_disk_resizing_down

6c) The source VDI is copied on the hypervisor via XenAPI VDI.copy. Then a new VDI is created, along with a VBD, and plugged into the compute node. The partition and filesystem of the new disk are resized via _resize_part_and_fs, using e2fsck, tune2fs, resize2fs, and parted. The source VDI copy is also attached to nova-compute. Via _sparse_copy, which is configurable but enabled by default, nova-compute temporarily takes ownership of both devices (source for reading, destination for writing) and performs a block-level copy, omitting zeroed blocks (sketched after the file list below).

  • ./nova/virt/xenapi/vm_utils.py
  • def _resize_part_and_fs
  • def _sparse_copy
  • nova/utils.py
  • def temporary_chown
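
The sparse copy itself is conceptually simple: read the source block by block and seek (rather than write) whenever a block is entirely zeroes, so unused space is never written to the destination. A simplified sketch of the idea, not the verbatim vm_utils code:

# Conceptual sketch of the sparse copy performed by vm_utils._sparse_copy.
def sparse_copy(src_path, dst_path, virtual_size, block_size=4096):
    zeroes = '\0' * block_size
    left = virtual_size
    with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
        while left > 0:
            data = src.read(min(block_size, left))
            if not data:
                break
            if data == zeroes[:len(data)]:
                dst.seek(len(data), 1)    # hole: skip zeroed blocks entirely
            else:
                dst.write(data)
            left -= len(data)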

6d) Progress is again updated. The devices that were attached are unplugged, and the VHDs are transferred in the same fashion as steps 6a1i-6b2 from the deep dive on resizing up, aside from 6b2.

Deep Dive: OpenStack Nova Resize Up with XenAPI/XenServer

Nova is the Compute engine of the OpenStack project.

Based on the currently available code (commit 114109dbf4094ae6b6333d41c84bebf6f85c4e48 – 2012-09-13)

This is a deep dive into what happens (and where in the code) during a resize up (e.g., flavor 2 to flavor 4) with a Nova configuration that is backed by XenServer/XenAPI. Hopefully the methods used in Nova’s code will not change over time, and this guide will remain a good starting point.

Some abstractions such as go-between RPC calls and basic XenAPI calls have been deliberately ignored.

Disclaimer: I am _not_ a developer, and this is just my best guess through an overly-caffeinated code dive. Corrections are welcome.

1) API receives a resize request.

2) Request validations are performed.

3) Quota verifications are performed.

  • ./nova/compute/api.py
  • def resize

4) The scheduler is notified to prepare the resize request. A target is selected for the resize and notified.

  • ./nova/scheduler/filter_scheduler.py
  • def schedule_prep_resize

5) Usage notifications are sent as the resize is preparing. A migration entry is created in the nova database with the status pre-migrating.  resize_instance (6) is fired.

  • ./nova/compute/manager.py
  • def prep_resize

6) The migration record is updated to migrating. The instance’s task_state is updated from resize preparation to resize migrating. Usages receive notification that the resize has started. The instance is renamed on the source to have a suffix of -orig. (6a) migrate_disk_and_power_off is invoked. The migration record is updated to post-migrating. The instance’s task_state is updated to resize migrated. (6b) finish_resize is called.

  • ./nova/compute/manager.py
  • def resize_instance
  • ./nova/virt/xenapi/vmops.py
  • def migrate_disk_and_power_off

6a) migrate_disk_and_power_off, where the work begins… Progress is zeroed out. A resize up or down is detected. We’re taking the resize up code path in 6a1.

  • ./nova/virt/xenapi/driver.py -> ./nova/virt/xenapi/vmops.py
  • def migrate_disk_and_power_off

6a1) Snapshot the instance. Via migrate_vhd, transfer the immutable VHDs; this is the base copy, or parent VHD, belonging to the instance. The instance’s resize progress is updated. Power down the instance. Again via migrate_vhd (steps 6a1i-6a1v), transfer the COW VHD, i.e. the changes which have occurred since the snapshot was taken.

  • ./nova/virt/xenapi/vmops.py
  • def _migrate_disk_resizing_up

6a1i) Call the XenAPI plugin on the hypervisor to transfer_vhd

  • ./nova/virt/xenapi/vmops.py
  • def _migrate_vhd

6a1ii) Make a staging area on the source server to prepare the VHD transfer.

  • ./plugins/xenserver/xenapi/etc/xapi.d/plugins/migration
  • def transfer_vhd

6a1iii) Hard link the VHD(s) transferred to the staging area.

  • ./plugins/xenserver/xenapi/etc/xapi.d/plugins/utils.py
  • def prepare_staging_area

6a1iv) rsync the VHDs to the destination (sketched below). The destination path is /images/instance-<instance_uuid>.

  • ./plugins/xenserver/xenapi/etc/xapi.d/plugins/migration
  • def _rsync_vhds
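
A simplified sketch of that step; the flags and ssh options here are illustrative, not the plugin’s exact command line:

# Conceptual sketch of _rsync_vhds from the XenAPI 'migration' plugin.
import subprocess

def rsync_vhds(instance_uuid, dest_host, staging_path):
    dest_path = '/images/instance-%s/' % instance_uuid
    rsync_cmd = ['rsync', '-av', '--progress',
                 '-e', 'ssh -o StrictHostKeyChecking=no',
                 staging_path + '/',
                 'root@%s:%s' % (dest_host, dest_path)]
    if subprocess.call(rsync_cmd) != 0:
        raise Exception('rsync of VHDs to %s failed' % dest_host)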

6a1v) Clean up the staging area which was created in (6a1ii).

  • ./plugins/xenserver/xenapi/etc/xapi.d/plugins/utils.py
  • def cleanup_staging_area

6b) (6b1) Set up the newly transferred disk and turn on the instance on the destination host. Make the tenant’s quota usage reflect the newly resized instance.

  • ./nova/compute/manager.py
  • def finish_resize

6b1) The instance record is updated with the new instance_type (RAM, CPU, disk, etc.). Networking is set up on the destination (a topic for another day; best guess: nova-network, quantum, and the configured quantum plugin(s) are notified of the networking changes). The instance’s task_state is set to resize finish. Usages are notified of the beginning of the end of the resize process. (6b1i) finish_migration is invoked. The instance record is updated to resized. The migration record is set to finished. Usages are notified that the resize has completed.

  • ./nova/compute/manager.py
  • def _finish_resize

6b1i) (6b1ii) move_disks is called. (6b2) _resize_instance is called. The destination VM is created and started via XenAPI. The resize’s progress is updated.

  • ./nova/virt/xenapi/driver.py -> ./nova/virt/xenapi/vmops.py
  •  def finish_migration

6b1ii) (6b1iii) The XenAPI plugin is called to move_vhds_into_sr. The SR is scanned. The Root VDI’s name-label and name-description are set to reflect the instance’s details.

  • ./nova/virt/xenapi/vm_utils.py
  • def move_disks

6b1iii) Remember the VHD destination from step 6a1iv? I thought so! =) (6b1iv) Call import_vhds on the VHDs in that destination. Clean up this staging area, just like in (6a1v).

  • ./plugins/xenserver/xenapi/etc/xapi.d/plugins/migration
  • def move_vhds_into_sr

6b1iv) Move the VHDs from the staging area to the SR. Staging areas are used because, if the SR is unaware of VHDs contained within it, it could delete our data. This function assembles the VHD chain’s order and assigns UUIDs to the VHDs; they are then renamed from #.vhd to UUID.vhd. The VHDs are then linked to each other, starting from the parent and moving down the tree; this is done via calls to vhd-util modify (sketched after the file list below). The VDI chain is validated. Swap files are given a similar treatment and appended to the list of files to move into the SR. All of these files are renamed (python os.rename) from the staging area into the host’s SR.

  • ./plugins/xenserver/xenapi/etc/xapi.d/plugins/utils.py
  • def import_vhds
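
Here is the re-parenting pass sketched on its own; the surrounding ordering, renaming, and validation logic is omitted, and the helper name is illustrative, but vhd-util modify -p is what stitches the chain back together:

# Conceptual sketch of the chain re-linking inside import_vhds.
import os
import subprocess

def link_vhd_chain(sr_path, ordered_vhd_uuids):
    # ordered_vhd_uuids[0] is the leaf VHD; the last entry is the base copy.
    for child_uuid, parent_uuid in zip(ordered_vhd_uuids,
                                       ordered_vhd_uuids[1:]):
        child = os.path.join(sr_path, '%s.vhd' % child_uuid)
        parent = os.path.join(sr_path, '%s.vhd' % parent_uuid)
        subprocess.check_call(
            ['vhd-util', 'modify', '-n', child, '-p', parent])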

6b2) The instance’s root disk is resized. The current size of the root disk is retrieved from XenAPI (virtual_size). XenAPI VDI.resize_online (XenServer > 5) is called to resize the disk to its new size as defined by the instance_type.
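
Paraphrased, that resize looks like this; session.call_xenapi is how Nova’s XenAPI code issues raw XenAPI calls, and the function name here is illustrative:

# Paraphrased sketch of the online root-disk resize in vm_utils.
def resize_root_disk(session, instance, vdi_ref, instance_type):
    new_size = instance_type['root_gb'] * 1024 * 1024 * 1024
    virtual_size = int(session.call_xenapi('VDI.get_virtual_size', vdi_ref))
    if virtual_size < new_size:
        # XenServer > 5 supports growing a VDI while the VM is running.
        session.call_xenapi('VDI.resize_online', vdi_ref, str(new_size))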

UPDATE: (2012-10-28) Added to step 6 the renaming to the instance-uuid-orig on the source.