
Difference between revisions of "Ironic/blueprints/cinder-integration"

* Blueprints:  
** Ironic:
*** https://blueprints.launchpad.net/ironic/+spec/cinder-integration
** Cinder:
*** TBD

Revision as of 19:52, 13 December 2013



Currently, Ironic has no integration with Cinder, OpenStack's block storage project. Ironic assumes that the bare metal (BM) host has its own block storage device onto which Ironic deploys a bootable image. Cinder is the block storage project that knows how to provision volumes and create bootable volumes from images that live in Glance. What we would like is for Ironic to use Cinder bootable volumes as the block storage root device on bare metal.

User Story

A user wants to deploy a bare metal box as a compute instance from a volume that they have provisioned using Cinder. The user creates the bootable volume using Cinder's create-volume-from-image operation, then asks Ironic to deploy a bare metal host using the newly created Cinder bootable volume.

Detailed Design

A high-level overview of what is needed to integrate with Cinder. The idea is for Ironic to orchestrate with Cinder, just as Nova does, to boot a bare metal node from a Cinder-provisioned bootable volume.


  • Cinder has been deployed, configured and is running.
  • User/Admin creates a Cinder bootable volume and can obtain the volume's UUID.
  • Ironic has been deployed, configured and is running.
  • Ironic has been setup with nodes that can be deployed.
  • Ironic can ask the BM BIOS to configure itself to boot from SAN (iSCSI/FC)


  • The BM node can have its BIOS configured remotely
  • Ironic has an abstracted BIOS communication layer.
  • The volume created in Cinder has a bootable OS on it. It's outside the scope of Ironic to verify this.

Work Flow for Boot from Cinder Volume

  • User asks Ironic to boot a node from a Cinder volume (UUID)
  • Ironic gathers the BM connection information
    • iSCSI
      • Ironic creates a new iSCSI initiator name (iqn) for the BM BIOS
    • Fibre Channel
      • Ironic calls the BIOS layer to fetch the BM's Fibre Channel HBA WWNs (world wide names)
  • Ironic calls Cinder to attach the volume to the host, passing in the connection information
  • Cinder calls initialize_connection() on the volume driver associated with the volume
    • This tells the Cinder backend array to export the volume to the BM host.
    • Cinder returns the target connection information to Ironic
      • driver_volume_type ('fibre_channel', 'iscsi')
      • iSCSI
        • target_iqn
        • target_lun
        • target_portal
          • ip
          • port
        • auth_method (CHAP - optional)
        • auth_username
        • auth_password
      • Fibre Channel
        • target_lun
        • target_wwn (list of world wide names for the target)
  • Ironic calls the BIOS layer to configure the BIOS to boot from SAN
    • passing the connection information based upon the volume protocol type (iSCSI/FC)
      • for iSCSI volumes, the BIOS will also need the BM connection information
        • initiator iSCSI name (iqn)
  • Ironic powers on the BM
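The attach handshake above can be sketched in Python. The helper names (`build_bm_connector`, `parse_connection_info`) are illustrative assumptions, not Ironic APIs; the dictionary keys mirror the iSCSI and Fibre Channel fields listed in the workflow.

```python
import uuid


def build_bm_connector(protocol, wwpns=None):
    """Build the connector dict Ironic would pass to Cinder (illustrative).

    protocol: 'iscsi' or 'fibre_channel'.
    """
    if protocol == 'iscsi':
        # Ironic generates a fresh initiator name (iqn) for the BM BIOS.
        return {'initiator': 'iqn.2013-12.org.openstack:%s' % uuid.uuid4().hex}
    elif protocol == 'fibre_channel':
        # WWNs are fetched from the BIOS layer for the node's FC HBAs.
        return {'wwpns': wwpns or []}
    raise ValueError('unsupported protocol: %s' % protocol)


def parse_connection_info(connection_info):
    """Pull out the fields the BIOS layer needs for boot-from-SAN."""
    data = connection_info['data']
    if connection_info['driver_volume_type'] == 'iscsi':
        return {
            'target_iqn': data['target_iqn'],
            'target_lun': data['target_lun'],
            'target_portal': data['target_portal'],      # 'ip:port'
            'auth_method': data.get('auth_method'),      # CHAP, optional
            'auth_username': data.get('auth_username'),
            'auth_password': data.get('auth_password'),
        }
    # fibre_channel
    return {
        'target_lun': data['target_lun'],
        'target_wwn': data['target_wwn'],  # list of target world wide names
    }
```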


  • The BM BIOS needs an iSCSI initiator name (iqn) for iSCSI boot from SAN
    • The ironic node can create a new initiator iqn using the open-iscsi util iscsi-iname and use that for the initiator
      • Once the host OS is up and running on the BM, it may get its own iSCSI initiator iqn for volume attaches, which may differ from the entry in the BIOS.
      • This shouldn't be a problem. For example, on 3PAR arrays, a single initiator host can declare that it uses multiple initiator iqns.
  • Ironic needs to talk to the BM BIOS for a few things
    • List of Fibre Channel HBA World wide names, if any
    • Configure the BIOS to boot from SAN (iSCSI/FC) remotely.
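A minimal sketch of generating an initiator name the way the open-iscsi `iscsi-iname` utility does: a fixed prefix plus a random hex suffix. The prefix and function name here are assumptions; any reverse-domain, date-qualified prefix is valid iqn form.

```python
import random


def generate_initiator_iqn(prefix='iqn.2013-12.org.openstack.ironic'):
    """Generate a unique iSCSI initiator name (iqn) for the BM BIOS.

    Mimics the open-iscsi `iscsi-iname` utility's output shape:
    '<prefix>:<random hex>'. The prefix is an assumption for this sketch.
    """
    suffix = '%012x' % random.getrandbits(48)
    return '%s:%s' % (prefix, suffix)
```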

Implementation Details

  • Implement a new Ironic deploy driver that does the Cinder orchestration
  • driver.deploy would implement the workflow as described above
  • driver would use the BIOS abstraction layer to query and configure the BIOS
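The proposed deploy driver could take roughly this shape. Everything here beyond the `deploy` entry point is hypothetical: `cinder` stands in for a Cinder client and `bios` for the proposed BIOS abstraction layer, with interfaces assumed for the sketch.

```python
class CinderBootDeploy(object):
    """Hypothetical sketch of an Ironic deploy driver that boots a bare
    metal node from a Cinder bootable volume.

    `cinder` and `bios` are stand-ins for a Cinder client and the proposed
    BIOS abstraction layer; both interfaces are assumptions.
    """

    def __init__(self, cinder, bios):
        self.cinder = cinder
        self.bios = bios

    def deploy(self, node, volume_id):
        # 1. Gather the BM connection info (iSCSI iqn or FC WWNs).
        connector = self.bios.get_connector(node)
        # 2. Ask Cinder to export the volume to this host.
        connection_info = self.cinder.initialize_connection(volume_id,
                                                            connector)
        # 3. Configure the BIOS to boot from SAN with the target details.
        self.bios.set_boot_from_san(node, connection_info)
        # 4. Power on the bare metal node.
        self.bios.power_on(node)
        return connection_info
```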