Ironic/Drivers/iLODrivers/master
 
=== Overview ===

iLO drivers enable taking advantage of features of the iLO management engine in HPE ProLiant servers. iLO drivers target HPE ProLiant Gen8 and Gen9 systems, which have the [https://www.hpe.com/us/en/servers/integrated-lights-out-ilo.html iLO 4 management engine]. From the '''Pike''' release, iLO drivers also support ProLiant Gen10 systems, which have the [https://www.hpe.com/us/en/servers/integrated-lights-out-ilo.html#innovations iLO 5 management engine]. iLO 5 conforms to the [https://www.dmtf.org/standards/redfish Redfish] API, so the hardware type [https://docs.openstack.org/ironic/latest/admin/drivers/redfish.html '''redfish'''] is also an option for this kind of hardware, although it lacks the iLO-specific features.
  
For enabling Gen10 systems, and for detailed information on Gen10 feature support in Ironic, see [https://wiki.openstack.org/wiki/Ironic/Drivers/iLODrivers/master#Enabling_ProLiant_Gen10_systems_in_Ironic Enabling ProLiant Gen10 systems in Ironic].

ProLiant hardware is supported by the '''ilo''' hardware type and the following classic drivers:
  
 
* [[Ironic/Drivers/iLODrivers/master#iscsi_ilo_driver|iscsi_ilo]]
* [[Ironic/Drivers/iLODrivers/master#agent_ilo_driver|agent_ilo]]
* [[Ironic/Drivers/iLODrivers/master#pxe_ilo_driver|pxe_ilo]]

    '''Note''': From the '''Rocky''' release, iLO drivers also support the '''ilo5''' hardware type, for ProLiant Gen10 and later systems only.

    '''Note''': All HPE ProLiant servers support the reference hardware type [https://docs.openstack.org/ironic/latest/admin/drivers/ipmitool.html '''ipmi''']. HPE ProLiant Gen10 servers also support the hardware type [https://docs.openstack.org/ironic/latest/admin/drivers/redfish.html '''redfish'''].
  
 
The '''iscsi_ilo''' and '''agent_ilo''' drivers provide security-enhanced, PXE-less deployment by using iLO virtual media to boot the bare metal node. These drivers send management information over the management channel and keep it separate from the data channel used for deployment.
  
 
The '''pxe_ilo''' driver uses PXE/iSCSI for deployment (just like the normal PXE driver) and deploys from the ironic conductor. Additionally, it supports automatic setting of the requested boot mode from nova. This driver does not require an iLO Advanced license.

The hardware type '''ilo''' and the iLO-based classic drivers support HPE server features such as:

* UEFI secure boot
* Certificate-based validation of iLO
* Hardware-based secure disk erase using the Smart Storage Administrator (SSA) CLI
* Out-of-band discovery of server attributes through hardware inspection
* In-band RAID configuration
* Firmware configuration and secure firmware update

Apart from the above features, '''ilo5''' also supports the following:

* Out-of-band RAID support

=== Hardware Interfaces ===

The '''ilo''' hardware type supports all the standard '''deploy''', '''network''', '''rescue''', '''raid''' and '''storage''' interfaces.

Apart from the standard interfaces, the '''ilo''' hardware type supports the following iLO interfaces:

* '''bios''': '''ilo''' and '''no-bios'''. They can be enabled via the '''enabled_bios_interfaces''' option in '''ironic.conf''':
    [DEFAULT]
    enabled_hardware_types = ilo
    enabled_bios_interfaces = ilo,no-bios

    '''Note''': '''ilo''' is the default '''bios''' interface for the '''ilo''' hardware type.

* '''boot''': '''ilo-virtual-media''' and '''ilo-pxe'''. They can be enabled via the '''enabled_boot_interfaces''' option in '''ironic.conf''':
    [DEFAULT]
    enabled_hardware_types = ilo
    enabled_boot_interfaces = ilo-virtual-media,ilo-pxe

    '''Note''': '''ilo-virtual-media''' is the default '''boot''' interface for the '''ilo''' hardware type.

* '''console''': '''ilo''' and '''no-console'''. The default is '''ilo'''. They can be enabled via the '''enabled_console_interfaces''' option in '''ironic.conf''':
    [DEFAULT]
    enabled_hardware_types = ilo
    enabled_console_interfaces = ilo,no-console

    '''Note''': To use the '''ilo''' console interface, you need to enable the iLO feature 'IPMI/DCMI over LAN Access' on the iLO 4 or iLO 5 management engine.

* '''inspect''': '''ilo''' and '''inspector'''. They can be enabled via the '''enabled_inspect_interfaces''' option in '''ironic.conf''':
    [DEFAULT]
    enabled_hardware_types = ilo
    enabled_inspect_interfaces = ilo,inspector

    '''Note''': '''ilo''' is the default '''inspect''' interface for the '''ilo''' hardware type. [https://docs.openstack.org/ironic-inspector/latest Ironic Inspector] needs to be configured in order to use '''inspector''' as the inspect interface.

* '''management''': '''ilo'''. It can be enabled via the '''enabled_management_interfaces''' option in '''ironic.conf''':
    [DEFAULT]
    enabled_hardware_types = ilo
    enabled_management_interfaces = ilo

* '''power''': '''ilo'''. It can be enabled via the '''enabled_power_interfaces''' option in '''ironic.conf''':
    [DEFAULT]
    enabled_hardware_types = ilo
    enabled_power_interfaces = ilo

The '''ilo5''' hardware type supports all the standard and iLO interfaces supported by the '''ilo''' hardware type, except the standard '''raid''' interface. The details of the raid interface are as follows:

* '''raid''': '''ilo5''' and '''no-raid'''. It can be enabled via the '''enabled_raid_interfaces''' option in '''ironic.conf''':
    [DEFAULT]
    enabled_hardware_types = ilo5
    enabled_raid_interfaces = ilo5,no-raid

    '''Note''': '''ilo5''' is the default '''raid''' interface for the '''ilo5''' hardware type.

The following command can be used to enroll a ProLiant node with the '''ilo''' hardware type:
    openstack baremetal node create --os-baremetal-api-version=1.38 \
            --driver ilo \
            --deploy-interface direct \
            --raid-interface agent \
            --driver-info ilo_address=<ilo-ip-address> \
            --driver-info ilo_username=<ilo-username> \
            --driver-info ilo_password=<ilo-password> \
            --driver-info ilo_deploy_iso=<glance-uuid-of-deploy-iso>

The following command can be used to enroll a ProLiant node with the '''ilo5''' hardware type:
    openstack baremetal node create \
            --driver ilo5 \
            --deploy-interface direct \
            --raid-interface ilo5 \
            --driver-info ilo_address=<ilo-ip-address> \
            --driver-info ilo_username=<ilo-username> \
            --driver-info ilo_password=<ilo-password> \
            --driver-info ilo_deploy_iso=<glance-uuid-of-deploy-iso> \
            --driver-info ilo_rescue_iso=<glance-uuid-of-rescue-iso>

Please refer to [https://docs.openstack.org/ironic/latest/install/enabling-drivers.html Enabling drivers and hardware types] for a detailed explanation of hardware types.

To enable the same feature set as provided by all iLO classic drivers, apply the following configuration in '''ironic.conf''':
    [DEFAULT]
    enabled_hardware_types = ilo
    enabled_bios_interfaces = ilo
    enabled_boot_interfaces = ilo-virtual-media,ilo-pxe
    enabled_power_interfaces = ilo
    enabled_console_interfaces = ilo
    enabled_raid_interfaces = agent
    enabled_management_interfaces = ilo
    enabled_inspect_interfaces = ilo

The following commands can be used to enroll a node with the same feature set as one of the classic drivers, but using the '''ilo''' hardware type:

* '''iscsi_ilo''':
        openstack baremetal node create --os-baremetal-api-version=1.31 \
            --driver ilo \
            --deploy-interface iscsi \
            --boot-interface ilo-virtual-media \
            --driver-info ilo_address=<ilo-ip-address> \
            --driver-info ilo_username=<ilo-username> \
            --driver-info ilo_password=<ilo-password> \
            --driver-info ilo_deploy_iso=<glance-uuid-of-deploy-iso>

* '''pxe_ilo''':
        openstack baremetal node create --os-baremetal-api-version=1.31 \
            --driver ilo \
            --deploy-interface iscsi \
            --boot-interface ilo-pxe \
            --driver-info ilo_address=<ilo-ip-address> \
            --driver-info ilo_username=<ilo-username> \
            --driver-info ilo_password=<ilo-password> \
            --driver-info deploy_kernel=<glance-uuid-of-pxe-deploy-kernel> \
            --driver-info deploy_ramdisk=<glance-uuid-of-deploy-ramdisk>

* '''agent_ilo''':
        openstack baremetal node create --os-baremetal-api-version=1.31 \
            --driver ilo \
            --deploy-interface direct \
            --boot-interface ilo-virtual-media \
            --driver-info ilo_address=<ilo-ip-address> \
            --driver-info ilo_username=<ilo-username> \
            --driver-info ilo_password=<ilo-password> \
            --driver-info ilo_deploy_iso=<glance-uuid-of-deploy-iso>
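When enrolling many nodes from a script, the long command lines above are easy to get wrong. The following Python sketch is purely illustrative (the helper function is hypothetical and not part of Ironic or python-ironicclient); it assembles the argument vector for the '''agent_ilo'''-equivalent enrollment shown above, which can then be handed to subprocess.run():

```python
# Hypothetical helper (not part of Ironic): build the argv for the
# "openstack baremetal node create" call shown above for the agent_ilo
# feature set, so it can be invoked programmatically when enrolling
# many nodes from a script.
def build_enroll_args(ilo_address, ilo_username, ilo_password, deploy_iso):
    return [
        "openstack", "baremetal", "node", "create",
        "--os-baremetal-api-version=1.31",
        "--driver", "ilo",
        "--deploy-interface", "direct",
        "--boot-interface", "ilo-virtual-media",
        "--driver-info", "ilo_address=%s" % ilo_address,
        "--driver-info", "ilo_username=%s" % ilo_username,
        "--driver-info", "ilo_password=%s" % ilo_password,
        "--driver-info", "ilo_deploy_iso=%s" % deploy_iso,
    ]

# The address, credentials and ISO UUID below are placeholders.
args = build_enroll_args("10.0.0.5", "admin", "secret", "deploy-iso-uuid")
print(args[4])  # --os-baremetal-api-version=1.31
```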
  
 
=== Prerequisites ===

* '''proliantutils''' is a Python package containing a set of modules for managing HPE ProLiant hardware. Install the [https://pypi.python.org/pypi/proliantutils proliantutils] module on the Ironic conductor node. The minimum version required is 2.4.0.

   $ pip install "proliantutils>=2.4.0"
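Since older proliantutils releases (for example 2.1.5) no longer satisfy the drivers' requirements, a quick sanity check of an installed version string can be useful in deployment tooling. This is a minimal sketch only, assuming simple dotted numeric versions; the helper is not part of proliantutils or Ironic:

```python
# Illustrative only (not part of proliantutils/Ironic): compare a dotted
# version string against the 2.4.0 minimum required by the iLO drivers.
def meets_minimum(installed, minimum="2.4.0"):
    def parse(version):
        # "2.4.0" -> (2, 4, 0), so tuples compare numerically
        return tuple(int(part) for part in version.split("."))
    return parse(installed) >= parse(minimum)

print(meets_minimum("2.4.0"))  # True: meets the minimum
print(meets_minimum("2.1.5"))  # False: too old for current iLO drivers
```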
  
  
 
* The '''ipmitool''' command must be present on the service node(s) where '''ironic-conductor''' is running. On most Linux distributions, it is provided as part of the '''ipmitool''' package. Source code is available at http://ipmitool.sourceforge.net/.

===Different Configuration for iLO Drivers===
 
* Using an SSL termination proxy. For more information, refer to http://docs.openstack.org/security-guide/content/tls-proxies-and-http-services.html.

* Using native SSL support in Swift (recommended by Swift only for testing purposes):
** Create a self-signed certificate for SSL using the following commands:
       cd /etc/swift
       openssl req -new -x509 -nodes -out cert.crt -keyout cert.key
** Add the following lines to /etc/swift/proxy-server.conf under [DEFAULT]:
       bind_port = 443
       cert_file = /etc/swift/cert.crt
       key_file = /etc/swift/cert.key
** Restart the Swift proxy server.
  
 
====Web server configuration for Standalone iLO Drivers====
 
* Users who have concerns about the PXE driver's security issues and want a security-enhanced, PXE-less deployment mechanism: the PXE driver passes management information in clear text to the bare metal node. However, if the Swift proxy server has an HTTPS endpoint (see [[Ironic/Drivers/iLODrivers/master#Enabling HTTPS in Swift|Enabling HTTPS in Swift]] for more information), the '''iscsi_ilo''' driver provides enhanced security by passing management information to and from the Swift endpoint over HTTPS. The management information and boot image are retrieved over the encrypted management network via iLO virtual media.
  
===== Tested Platforms =====
This driver should work on HPE ProLiant Gen7 servers with iLO 3, Gen8 and Gen9 servers with iLO 4, and Gen10 servers with iLO 5.

It has been tested with the following servers:
* ProLiant DL380 G7
* ProLiant SL230s Gen8
* ProLiant DL320e Gen8
* ProLiant DL380 Gen9 UEFI
* ProLiant BL460c Gen9
* ProLiant XL450 Gen9 UEFI
* ProLiant DL360 Gen10
* ProLiant DL325 Gen10 Plus
* ProLiant DL385 Gen10 Plus
  
 
===== Features =====
 
===== Requirements =====

* An '''iLO 4''' or '''iLO 5 Advanced License''' needs to be installed on the iLO to enable the Virtual Media Boot feature.
* '''Swift Object Storage Service or an HTTP(S) web server on the conductor''': the iLO driver uses either Swift or an HTTP(S) web server on the conductor node to store temporary FAT images as well as boot ISO images.
* '''Glance Image Service with Swift configured as its backend, or an HTTP(S) web server''': when using the '''iscsi_ilo''' driver, the image containing the deploy ramdisk is retrieved from Swift or the HTTP(S) web server directly by the iLO.
  
 
'''''NOTE''''':
* To update SSL certificates into iLO, you can refer to http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=c04530504 . You can use the iLO hostname or IP address as the 'Common Name (CN)' while generating the Certificate Signing Request (CSR). Use the same value as ilo_address while enrolling the node to the Bare Metal service, to avoid SSL certificate validation errors related to hostname mismatch.
* If configuration values for '''ca_file''', '''client_port''' and '''client_timeout''' are not provided in the '''driver_info''' of the node, the corresponding config variables defined under the '''[ilo]''' section in ironic.conf will be used.

For example, you could run a command like the one below to enroll the ProLiant node:

   ironic node-create -d iscsi_ilo -i ilo_address=<ilo-ip-address> -i ilo_username=<ilo-username> -i ilo_password=<ilo-password> -i ilo_deploy_iso=<glance-uuid-of-deploy-iso>
  
 
===== Boot modes =====
  
 
===== Tested Platforms =====
This driver should work on HPE ProLiant Gen7 servers with iLO 3, Gen8 and Gen9 servers with iLO 4, and Gen10 servers with iLO 5.

It has been tested with the following servers:
* ProLiant DL380 G7
* ProLiant SL230s Gen8
* ProLiant DL320e Gen8
* ProLiant DL380 Gen9 UEFI
* ProLiant BL460c Gen9
* ProLiant XL450 Gen9 UEFI
* ProLiant DL360 Gen10
* Synergy 480 Gen9
* ProLiant DL325 Gen10 Plus
* ProLiant DL385 Gen10 Plus
  
 
===== Features =====

* PXE-less deploy with virtual media using Ironic Python Agent.
* Remote Console (based on IPMI)
* HW Sensors
* Automatic detection of current boot mode.
* UEFI Boot
* UEFI Secure Boot
* Supports booting the instance from virtual media as well as booting locally from disk.
* Supports deployment of whole disk images and partition images.
* Local boot (both BIOS and UEFI)
* Segregates management info from the data channel.
* Support for out-of-band hardware inspection.
* Node cleaning.
* Standalone iLO drivers.
* Supports tenant network isolation for node instances provisioned on VLAN networks.
  
 
===== Requirements =====
* An '''iLO 4''' or '''iLO 5 Advanced License''' needs to be installed on the iLO to enable the Virtual Media Boot feature.
* '''Swift Object Storage Service or an HTTP(S) web server on the conductor''': the iLO driver uses either Swift or an HTTP(S) web server on the conductor node to store temporary FAT images as well as boot ISO images.
* '''Glance Image Service with Swift configured as its backend, or an HTTP(S) web server''': when using the '''agent_ilo''' driver, the image containing the agent is retrieved from Swift or the HTTP(S) web server directly by the iLO.
 
* '''client_timeout''': (optional) Timeout for iLO operations. The default timeout is 60 seconds.
* '''console_port''': (optional) Node's UDP port for console access. Any unused port on the Ironic conductor node may be used.

'''''NOTE''''':
* To update SSL certificates into iLO, you can refer to http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=c04530504 . You can use the iLO hostname or IP address as the 'Common Name (CN)' while generating the Certificate Signing Request (CSR). Use the same value as ilo_address while enrolling the node to the Bare Metal service, to avoid SSL certificate validation errors related to hostname mismatch.
* If configuration values for '''ca_file''', '''client_port''' and '''client_timeout''' are not provided in the '''driver_info''' of the node, the corresponding config variables defined under the '''[ilo]''' section in ironic.conf will be used.

For example, you could run a command like the one below to enroll the ProLiant node:

  ironic node-create -d agent_ilo -i ilo_address=<ilo-ip-address> -i ilo_username=<ilo-username> -i ilo_password=<ilo-password> -i ilo_deploy_iso=<glance-uuid-of-deploy-iso>
  
 
===== Boot modes =====
  
 
===== Tested Platforms =====
This driver should work on HPE ProLiant Gen7 servers with iLO 3, Gen8 and Gen9 servers with iLO 4, and Gen10 servers with iLO 5.

It has been tested with the following servers:
* ProLiant DL380 G7
* ProLiant SL230s Gen8
* ProLiant DL320e Gen8
* ProLiant DL380 Gen9 UEFI
* ProLiant BL460c Gen9
* ProLiant XL450 Gen9 UEFI
* ProLiant DL360 Gen10
* ProLiant DL325 Gen10 Plus
* ProLiant DL385 Gen10 Plus
  
 
===== Features =====

* Automatic detection of current boot mode.
* Automatic setting of the required boot mode, if UEFI boot mode is requested by the nova flavor's extra spec.
* Remote Console (based on IPMI)
* HW Sensors
* UEFI Boot
* UEFI Secure Boot
* Local boot (both BIOS and UEFI)
* Supports deployment of whole disk images and partition images.
* Supports booting the instance from PXE as well as booting locally from disk.
* Segregates management info from the data channel.
* Support for out-of-band hardware inspection.
* Node cleaning
* Standalone iLO drivers.
  
 
===== Requirements =====
 
* '''client_timeout''': (optional) Timeout for iLO operations. The default timeout is 60 seconds.
* '''console_port''': (optional) Node's UDP port for console access. Any unused port on the Ironic conductor node may be used.

'''''NOTE''''':
* To update SSL certificates into iLO, you can refer to http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=c04530504 . You can use the iLO hostname or IP address as the 'Common Name (CN)' while generating the Certificate Signing Request (CSR). Use the same value as ilo_address while enrolling the node to the Bare Metal service, to avoid SSL certificate validation errors related to hostname mismatch.
* If configuration values for '''ca_file''', '''client_port''' and '''client_timeout''' are not provided in the '''driver_info''' of the node, the corresponding config variables defined under the '''[ilo]''' section in ironic.conf will be used.

For example, you could run a command like the one below to enroll the ProLiant node:

  ironic node-create -d pxe_ilo -i ilo_address=<ilo-ip-address> -i ilo_username=<ilo-username> -i ilo_password=<ilo-password> -i deploy_kernel=<glance-uuid-of-pxe-deploy-kernel> -i deploy_ramdisk=<glance-uuid-of-deploy-ramdisk>
  
 
===== Boot modes =====
 
* For the pxe_ilo driver, in the case of deploying a partition image, ensure that the signed grub2 bootloader used during deploy can validate the digital signature of the kernel in the instance partition image. If the signed grub2 cannot validate the kernel in the instance partition image, boot will fail.

* Ensure the public key of the signed image is loaded into the bare metal node to deploy signed images. For HPE ProLiant Gen9 servers, one can enroll a public key using the iLO System Utilities UI. Please refer to the section '''Accessing Secure Boot options''' in the [https://support.hpe.com/hpsc/doc/public/display?docId=c04398276 HP UEFI System Utilities User Guide]. One can also refer to the white paper on [https://h50146.www5.hpe.com/products/software/oe/linux/mainstream/support/whitepaper/pdfs/4AA5-4496ENW.pdf Secure Boot for Linux on HP Proliant servers] for additional details.
 

===UEFI Boot from iSCSI volume support===

With '''Gen9 (UEFI firmware version 1.40 or higher)''' and '''Gen10''' HPE ProLiant servers, the driver supports firmware-based UEFI boot of an iSCSI cinder volume.

This feature requires the node to be configured to boot in UEFI boot mode, the user image to be UEFI-bootable, and '''PortFast''' to be enabled in the switch configuration for immediate spanning-tree forwarding state, so that setting the iSCSI target as a persistent device does not take long.

The driver does not support this functionality in BIOS boot mode. If the node is configured with the '''ilo-pxe''' boot interface and the boot mode configured on the bare metal is BIOS, iSCSI boot from volume is performed using iPXE. See [https://docs.openstack.org/ironic/latest/admin/boot-from-volume.html#iscsi-configuration Boot From Volume] for more details.

To use this feature, configure the boot mode of the bare metal node to UEFI and configure the corresponding ironic node using the steps given in [https://docs.openstack.org/ironic/latest/admin/boot-from-volume.html#iscsi-configuration Boot From Volume]. In a cloud environment with nodes configured to boot in both BIOS and UEFI boot modes, the virtual media driver supports only UEFI boot mode; attempting iSCSI boot with a BIOS volume at the same time will result in an error.
  
 
===Hardware Inspection===
  
 
NOTE:
* The disk size is returned by RIBCL/RIS only when RAID is preconfigured on the storage. If the storage is Direct Attached Storage, RIBCL/RIS fails to get the disk size.
** SNMPv3 inspection gets the disk size for all types of storage. If RIBCL/RIS is unable to get the disk size and SNMPv3 inspection is requested, proliantutils performs SNMPv3 inspection to get the disk size. If proliantutils is unable to get the disk size, it raises an error. This feature is available in proliantutils release versions >= 2.2.0.
** The iLO must be updated with SNMPv3 authentication details. Refer to the section 'SNMPv3 Authentication' in http://h20566.www2.hpe.com/hpsc/doc/public/display?docId=c03334051 for setting up authentication details on the iLO. The following parameters are mandatory in driver_info for SNMPv3 inspection:
*** ``snmp_auth_user``: The SNMPv3 user.
*** ``snmp_auth_prot_password``: The auth protocol pass phrase.
*** ``snmp_auth_priv_password``: The privacy protocol pass phrase.
** The following parameters are optional for SNMPv3 inspection:
*** ``snmp_auth_protocol``: The auth protocol. The valid values are "MD5" and "SHA". The iLO default value is "MD5".
*** ``snmp_auth_priv_protocol``: The privacy protocol. The valid values are "AES" and "DES". The iLO default value is "DES".
* The '''iLO firmware version''' should be '''2.10''' or above for '''nic_capacity''' to be discovered.
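The mandatory/optional split of the SNMPv3 parameters above can be checked before enrolling a node. The sketch below is purely illustrative (the validator is hypothetical, not an Ironic or proliantutils API); it checks a driver_info dict against the rules listed above:

```python
# Hypothetical validator (not an Ironic API): check SNMPv3 inspection
# parameters in a node's driver_info against the rules described above.
REQUIRED = ("snmp_auth_user", "snmp_auth_prot_password",
            "snmp_auth_priv_password")
VALID_AUTH = ("MD5", "SHA")   # iLO default: MD5
VALID_PRIV = ("AES", "DES")   # iLO default: DES

def validate_snmpv3(driver_info):
    """Return a list of problems; an empty list means the dict is usable."""
    errors = []
    for key in REQUIRED:
        if key not in driver_info:
            errors.append("missing mandatory parameter: %s" % key)
    # Optional parameters fall back to the iLO defaults when absent.
    if driver_info.get("snmp_auth_protocol", "MD5") not in VALID_AUTH:
        errors.append("snmp_auth_protocol must be one of %s" % (VALID_AUTH,))
    if driver_info.get("snmp_auth_priv_protocol", "DES") not in VALID_PRIV:
        errors.append("snmp_auth_priv_protocol must be one of %s" % (VALID_PRIV,))
    return errors

ok = {"snmp_auth_user": "inspector",
      "snmp_auth_prot_password": "auth-pass",
      "snmp_auth_priv_password": "priv-pass",
      "snmp_auth_protocol": "SHA"}
print(validate_snmpv3(ok))  # []
```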
  
Line 742: Line 941:
 
* '''reset_ilo''': Resets the iLO. By default, this step is disabled.

Additionally, the '''agent_ilo''' driver supports the inband automated cleaning step '''erase_devices''', which performs a Sanitize Erase on disks in HPE ProLiant servers. Sanitize disk erase is supported only if the ramdisk was created using diskimage-builder from the Ocata release or later; otherwise this step falls back to the '''erase_devices''' implementation available in Ironic Python Agent. By default, this step is disabled. See [[Ironic/Drivers/iLODrivers/master#Sanitize Disk Erase Support|Sanitize Disk Erase Support]] for more information.

You may also need to configure a '''Cleaning Network'''. To disable or change the priority of a particular automated clean step, update the respective configuration options in ironic.conf.
 
     [ilo]
     clean_priority_reset_ilo=0
     clean_priority_clear_secure_boot_keys=0
     clean_priority_reset_ilo_credential=30
     [deploy]
     erase_devices_priority=0
 
To disable a particular automated clean step, update the priority of the step to 0. For more information on node automated cleaning, see [http://docs.openstack.org/developer/ironic/deploy/cleaning.html#automated-cleaning Automated cleaning]
 
:Activates the iLO Advanced license. This is an out-of-band manual cleaning step associated with the management interface. Please note that this operation cannot be performed using virtual media based drivers like iscsi_ilo and agent_ilo, as they need this type of advanced license already active in order to boot from virtual media and start the cleaning operation. Virtual media is an advanced feature. If an advanced license is already active and the user wants to overwrite the current license key, for example in case of a multi-server activation key delivered with a flexible-quantity kit or after completing an Activation Key Agreement (AKA), then these drivers can still be used for executing this cleaning step.

:See [https://wiki.openstack.org/wiki/Ironic/Drivers/iLODrivers/mitaka#Activating_iLO_Advanced_license_as_manual_clean_step Activating iLO Advanced license as manual clean step] for user guidance on usage.

    '''Note:''' This feature is not applicable to Synergy 480 Gen9 machines because the Synergy machines are installed with the Advanced iLO license by default.
 
* '''update_firmware''':

:Updates the firmware of the devices. This is also an out-of-band step associated with the management interface. The supported devices for firmware update are: '''ilo''', '''cpld''', '''power_pic''', '''bios''' and '''chassis'''. Some device firmware cannot be updated via this method, such as storage controllers, host bus adapters, disk drive firmware, network interfaces and Onboard Administrator (OA). Refer to the table below for the above components' commonly used descriptions.
     $ md5sum image.rpm
     66cdb090c80b71daa21a67f06ecd3f33  image.rpm
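The same checksum verification can be scripted; a small Python sketch using only the standard library (the file name is just the example above):

```python
# Verify a firmware image's MD5 checksum before handing it to the
# update_firmware clean step. Standard-library only.
import hashlib

def md5_of(path, chunk_size=64 * 1024):
    """Return the hex MD5 digest of a file, read in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example usage (expected value taken from a trusted source):
# assert md5_of("image.rpm") == "66cdb090c80b71daa21a67f06ecd3f33"
```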
=====RAID Support=====

The inband RAID functionality is supported by iLO drivers. See http://docs.openstack.org/developer/ironic/deploy/raid.html#raid for more information. The Bare Metal service updates the node with the following information after successful configuration of RAID:

* Node '''properties/local_gb''' is set to the size of the root volume.
* Node '''properties/root_device''' is filled with the '''wwn''' details of the root volume. It is used by iLO drivers as a root device hint during provisioning.
* The RAID level of the root volume is added as the '''raid_level''' capability to the node's '''capabilities''' parameter within the '''properties''' field. The operator can specify the '''raid_level''' capability in the nova flavor for the node to be selected for scheduling:

    nova flavor-key ironic-test set capabilities:raid_level="1+0"
    nova boot --flavor ironic-test --image test-image instance-1
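Node capabilities are stored as a comma-separated string of key:value pairs; purely as an illustration (the parsing helper below is hypothetical, not Ironic's code), reading back the raid_level capability might look like:

```python
# Parse a node's properties/capabilities string (comma-separated key:value
# pairs) and read back the raid_level set after RAID configuration.
# Illustrative helper only, not Ironic's own implementation.

def parse_capabilities(caps):
    result = {}
    for item in caps.split(","):
        if ":" in item:
            key, _, value = item.partition(":")
            result[key.strip()] = value.strip()
    return result

caps = "boot_mode:uefi,raid_level:1+0"
print(parse_capabilities(caps))  # → {'boot_mode': 'uefi', 'raid_level': '1+0'}
```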
=====Sanitize Disk Erase Support=====

Sanitize disk erase is an inband clean step supported by iLO drivers. It is supported when the agent ramdisk contains the '''Proliant Hardware Manager''' from proliantutils version 2.2.0 or higher. If Sanitize Erase is not supported, the step falls back to the '''erase_devices''' implementation available in Ironic Python Agent. This clean step is performed as part of automated cleaning and is disabled by default.

This inband clean step requires the '''ssacli''' utility, version '''2.60-19.0''' or later, to perform the erase on physical disks. See the [http://h20566.www2.hpe.com/hpsc/doc/public/display?docId=c03909334 ssacli documentation] for more information on the ssacli utility.

    '''Note''': The inband clean steps for RAID configuration and disk erase on HPE P/E-Class SR Gen10 controllers and above require ssacli version 3.10-3.0 or above.

To create an agent ramdisk with '''Proliant Hardware Manager''', use the '''proliant-tools''' element in DIB:

    disk-image-create -o proliant-agent-ramdisk ironic-agent fedora proliant-tools

See [http://docs.openstack.org/developer/diskimage-builder/elements/proliant-tools/README.html proliant-tools] for more information on creating an agent ramdisk with the '''proliant-tools''' element in DIB.
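As an aside, ssacli version strings like those above compare naturally as numeric tuples; a small sketch of such a minimum-version check (the helper is illustrative, not part of any shipped tool):

```python
# Compare ssacli version strings such as "2.60-19.0" numerically, to check
# whether an installed version meets a documented minimum. Illustrative only.
import re

def version_tuple(version):
    """Split '2.60-19.0' into (2, 60, 19, 0) for tuple comparison."""
    return tuple(int(part) for part in re.split(r"[.\-]", version))

MIN_SANITIZE = version_tuple("2.60-19.0")   # sanitize erase support
MIN_GEN10_SR = version_tuple("3.10-3.0")    # Gen10 P/E-Class SR controllers

print(version_tuple("3.10-3.0") >= MIN_SANITIZE)  # → True
```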
=====BIOS configuration support=====

The '''ilo''' and '''ilo5''' hardware types support the '''ilo''' BIOS interface. The support includes the manual clean steps '''apply_configuration''' and '''factory_reset''' to manage supported BIOS settings on the node.

    '''Note''': Prior to the Stein release, the user is required to reboot the node manually in order for the settings to take effect. Starting with the Stein release, iLO drivers reboot the node after running the clean steps related to BIOS configuration. The BIOS settings are cached and the clean step is marked as success only if all the requested settings are applied without any failure. If application of any of the settings fails, the clean step is marked as failed and the settings are not cached.

=====Configuration=====

The following are the supported BIOS settings with a brief description of each. For a detailed description please refer to the ''HPE Integrated Lights-Out REST API Documentation'' <https://hewlettpackard.github.io/ilo-rest-api-docs>.
* '''AdvancedMemProtection''': Configure additional memory protection with ECC (Error Checking and Correcting). Allowed values are ''AdvancedEcc'', ''OnlineSpareAdvancedEcc'', ''MirroredAdvancedEcc''.

* '''AutoPowerOn''': Configure the server to automatically power on when AC power is applied to the system. Allowed values are ''AlwaysPowerOn'', ''AlwaysPowerOff'', ''RestoreLastState''.

* '''BootMode''': Select the boot mode of the system. Allowed values are ''Uefi'', ''LegacyBios''.

* '''BootOrderPolicy''': Configure how the system attempts to boot devices per the Boot Order when no bootable device is found. Allowed values are ''RetryIndefinitely'', ''AttemptOnce'', ''ResetAfterFailed''.

* '''CollabPowerControl''': Enables the Operating System to request processor frequency changes even if the Power Regulator option on the server is configured for Dynamic Power Savings Mode. Allowed values are ''Enabled'', ''Disabled''.

* '''DynamicPowerCapping''': Configure when the System ROM executes power calibration during the boot process. Allowed values are ''Enabled'', ''Disabled'', ''Auto''.

* '''DynamicPowerResponse''': Enable the System BIOS to control processor performance and power states depending on the processor workload. Allowed values are ''Fast'', ''Slow''.

* '''IntelligentProvisioning''': Enable or disable the Intelligent Provisioning functionality. Allowed values are ''Enabled'', ''Disabled''.

* '''IntelPerfMonitoring''': Exposes certain chipset devices that can be used with the Intel Performance Monitoring Toolkit. Allowed values are ''Enabled'', ''Disabled''.

* '''IntelProcVtd''': A hypervisor or operating system supporting this option can use hardware capabilities provided by Intel's Virtualization Technology for Directed I/O. Allowed values are ''Enabled'', ''Disabled''.

* '''IntelQpiFreq''': Set the QPI Link frequency to a lower speed. Allowed values are ''Auto'', ''MinQpiSpeed''.

* '''IntelTxt''': Option to modify Intel TXT support. Allowed values are ''Enabled'', ''Disabled''.

* '''PowerProfile''': Set the power profile to be used. Allowed values are ''BalancedPowerPerf'', ''MinPower'', ''MaxPerf'', ''Custom''.

* '''PowerRegulator''': Determines how to regulate the power consumption. Allowed values are ''DynamicPowerSavings'', ''StaticLowPower'', ''StaticHighPerf'', ''OsControl''.

* '''ProcAes''': Enable or disable the Advanced Encryption Standard Instruction Set (AES-NI) in the processor. Allowed values are ''Enabled'', ''Disabled''.

* '''ProcCoreDisable''': Disable processor cores using Intel's Core Multi-Processing (CMP) Technology. Allowed values are integers ranging from ''0'' to ''24''.

* '''ProcHyperthreading''': Enable or disable Intel Hyperthreading. Allowed values are ''Enabled'', ''Disabled''.

* '''ProcNoExecute''': Protect your system against malicious code and viruses. Allowed values are ''Enabled'', ''Disabled''.

* '''ProcTurbo''': Enables the processor to transition to a higher frequency than the processor's rated speed using Turbo Boost Technology if the processor has available power and is within temperature specifications. Allowed values are ''Enabled'', ''Disabled''.

* '''ProcVirtualization''': Enables or disables a hypervisor or operating system supporting this option to use hardware capabilities provided by Intel's Virtualization Technology. Allowed values are ''Enabled'', ''Disabled''.

* '''SecureBootStatus''': The current state of the Secure Boot configuration. Allowed values are ''Enabled'', ''Disabled''.
    '''Note:''' This setting is read-only and can't be modified with the ''apply_configuration'' clean step.

* '''Sriov''': If enabled, SR-IOV support enables a hypervisor to create virtual instances of a PCI-express device, potentially increasing performance. If enabled, the BIOS allocates additional resources to PCI-express devices. Allowed values are ''Enabled'', ''Disabled''.

* '''ThermalConfig''': Select the fan cooling solution for the system. Allowed values are ''OptimalCooling'', ''IncreasedCooling'', ''MaxCooling''.

* '''ThermalShutdown''': Control the reaction of the system to caution level thermal events. Allowed values are ''Enabled'', ''Disabled''.

* '''TpmState''': Current TPM device state. Allowed values are ''NotPresent'', ''PresentDisabled'', ''PresentEnabled''.
    '''Note:''' This setting is read-only and can't be modified with the ''apply_configuration'' clean step.

* '''TpmType''': Current TPM device type. Allowed values are ''NoTpm'', ''Tpm12'', ''Tpm20'', ''Tm10''.
    '''Note:''' This setting is read-only and can't be modified with the ''apply_configuration'' clean step.

* '''UefiOptimizedBoot''': Enables or disables the System BIOS boot using native UEFI graphics drivers. Allowed values are ''Enabled'', ''Disabled''.

* '''WorkloadProfile''': Change the Workload Profile to accommodate your desired workload. Allowed values are ''GeneralPowerEfficientCompute'', ''GeneralPeakFrequencyCompute'', ''GeneralThroughputCompute'', ''Virtualization-PowerEfficient'', ''Virtualization-MaxPerformance'', ''LowLatency'', ''MissionCritical'', ''TransactionalApplicationProcessing'', ''HighPerformanceCompute'', ''DecisionSupport'', ''GraphicProcessing'', ''I/OThroughput'', ''Custom''.
    '''Note:''' This setting is only applicable to ProLiant Gen10 servers with iLO 5 management systems.
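As a sketch of how such settings can be passed to the ''apply_configuration'' manual clean step, the following Python snippet builds a JSON clean-steps payload. The chosen settings and values are examples only, and the exact arguments accepted by the iLO BIOS interface should be confirmed against the Ironic documentation; the payload shape below follows the generic Ironic manual-cleaning format.

```python
# Build a JSON payload for a manual clean that applies a few of the BIOS
# settings listed above. Settings and values here are examples only.
import json

clean_steps = [{
    "interface": "bios",
    "step": "apply_configuration",
    "args": {
        "settings": [
            {"name": "BootMode", "value": "Uefi"},
            {"name": "ProcHyperthreading", "value": "Enabled"},
            {"name": "WorkloadProfile", "value": "LowLatency"},
        ]
    },
}]

payload = json.dumps(clean_steps)
print(payload)
# The payload is then passed on the command line, e.g.:
#   openstack baremetal node clean <node> --clean-steps '<payload>'
```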
===Rescue mode support===

The hardware type '''ilo''' supports rescue functionality. The rescue operation can be used to boot nodes into a rescue ramdisk so that the rescue user can access the node. Please refer to [https://docs.openstack.org/ironic/stein/admin/rescue.html Rescue Mode] for more details.
===Inject NMI support===

The management interface '''ilo''' supports injection of a non-maskable interrupt (NMI) to a bare metal node. The following command can be used to inject an NMI on a server:

  openstack baremetal node inject nmi <node>

The following command can be used to inject an NMI via the Compute service:

  openstack server dump create <server>
===Soft power operation support===

The power interface '''ilo''' supports soft power off and soft reboot operations on a bare metal node. The following command can be used to perform soft power operations on a server:

  openstack baremetal node reboot --soft [--power-timeout <power-timeout>] <node>

    '''Note''': The configuration '''[conductor]soft_power_off_timeout''' is used as the default timeout value when no timeout is provided while invoking hard or soft power operations.

    '''Note''': Server POST state is used to track the power status of HPE ProLiant Gen9 servers and beyond.
===Out of Band RAID Support===

With Gen10 and later HPE ProLiant servers, the '''ilo5''' hardware type supports firmware based RAID configuration as a clean step. This feature requires the node to be configured with the '''ilo5''' hardware type and its raid interface to be '''ilo5'''. See [https://docs.openstack.org/ironic/stein/admin/raid.html#raid RAID Configuration] for more information.

After a successful RAID configuration, the Bare Metal service will update the node with the following information:

* Node '''properties/local_gb''' is set to the size of the root volume.
* Node '''properties/root_device''' is filled with the '''wwn''' details of the root volume. It is used by the iLO driver as a root device hint during provisioning.

Later, the RAID level of the root volume can be encoded in a resource class such as '''baremetal-with-RAID10''' (RAID10 for RAID level 10). The flavor then needs to be updated to request that resource class so the server is created using the selected node:

  openstack baremetal node set test_node --resource-class baremetal-with-RAID10
  openstack flavor set --property resources:CUSTOM_BAREMETAL_WITH_RAID10=1 test-flavor
  openstack server create --flavor test-flavor --image test-image instance-1

    '''Note''': Supported RAID levels for the '''ilo5''' hardware type are: 0, 1, 5, 6, 10, 50, 60
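The clean step consumes the node's target RAID configuration; as a hedged sketch, a minimal RAID 10 root volume definition in the generic Ironic RAID configuration format might look like the following. The size is a placeholder, and the exact raid_level strings accepted by the ilo5 interface should be checked against the Ironic RAID documentation.

```python
# Minimal target_raid_config for a RAID 10 root volume, in the generic
# Ironic RAID configuration format. Size and raid_level are examples.
import json

target_raid_config = {
    "logical_disks": [
        {
            "size_gb": 100,        # placeholder size
            "raid_level": "10",    # one of the levels listed above
            "is_root_volume": True,
        }
    ]
}

print(json.dumps(target_raid_config))
# Applied with, e.g.:
#   openstack baremetal node set test_node --target-raid-config raid.json
```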
===IPv6 support===

With the IPv6 support in proliantutils>=2.8.0, nodes can be enrolled into the Bare Metal service using iLO IPv6 addresses.

  openstack baremetal node create --driver ilo --deploy-interface direct --driver-info ilo_address=2001:0db8:85a3:0000:0000:8a2e:0370:7334 \
      --driver-info ilo_username=test-user --driver-info ilo_password=test-password --driver-info ilo_deploy_iso=test-iso --driver-info ilo_rescue_iso=test-iso

    '''Note''': No configuration changes (e.g. in ironic.conf) are required in order to support IPv6.
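As a side note on the address format, the standard-library ipaddress module can validate and canonicalize an iLO IPv6 address before it is passed as ilo_address (a sketch; the address is the example above):

```python
# Validate and canonicalize an iLO IPv6 address before passing it as
# ilo_address. Standard library only.
import ipaddress

raw = "2001:0db8:85a3:0000:0000:8a2e:0370:7334"
addr = ipaddress.ip_address(raw)

assert addr.version == 6              # reject IPv4 input early
print(addr.compressed)                # → 2001:db8:85a3::8a2e:370:7334
```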
===Enabling ProLiant Gen10 systems in Ironic===

[https://www.hpe.com/us/en/servers/gen10-servers.html HPE Gen10 Servers] render a new compute experience. Gen10 servers are key to infrastructure modernization, accelerating business insights across a hybrid world of traditional IT, public and private cloud.

These servers conform to the [https://www.dmtf.org/standards/redfish Redfish API]. The '''proliantutils''' library uses the Redfish protocol to communicate with this hardware. The minimum version of the proliantutils library required for communicating with Gen10 servers is '''2.4.0'''. All iLO drivers and their features are supported on this hardware. Since Gen10 systems are [https://www.dmtf.org/standards/redfish Redfish] compliant, the reference hardware type [https://docs.openstack.org/ironic/latest/admin/drivers/redfish.html '''redfish'''] works with Gen10 systems as well, but it will lack the iLO specific features.

    '''Note''': The S-Class software RAID is not supported by Linux on Gen10. Therefore, while provisioning a Gen10 node, make sure the controller is set to AHCI mode.

For a list of known issues in Gen10 systems, refer to the [https://wiki.openstack.org/wiki/Ironic/Drivers/iLODrivers/master#Gen10_Known_Issues_and_workarounds_.28if_available.29 Gen10 known issues] section.
  
 
===Instance Images===
 
| When SSL is enabled in the OpenStack environment and images to be attached to iLO virtual media are based on 'https', iLO is unable to read/boot using such images.
| The iLO firmware version may not support the ciphers enabled at the SSL server hosting the images. Please refer to the iLO firmware documentation to ensure that the ciphers being used are supported: http://h10032.www1.hp.com/ctg/Manual/c03334051. It is also recommended to refer to the 'Release Notes' of the iLO firmware version being used for more details.
|-
| 6
| NA
| When read-only settings or invalid values for the allowed settings are provided while running the manual clean step ''apply_configuration'', iLO simply ignores them and the clean step is not marked as failed.
| For more information, refer to and track the issue at <https://bugs.launchpad.net/silva/+bug/1783944>.
|}
===== Gen10 Known Issues and workarounds (if available) =====

{| class="wikitable"
|-
! scope="col" | Sr No
! scope="col" | Components (with Firmware Version)
! scope="col" | Known Issues
! scope="col" | Resolutions
|-
| 1
| ---
| ---
| ---
|-
| 2
| ---
| ---
| ---
|}

Latest revision as of 05:57, 6 October 2020

iLO drivers (master branch)

Overview

iLO drivers enable operators to take advantage of the features of the iLO management engine in HPE ProLiant servers. iLO drivers target HPE ProLiant Gen8 and Gen9 systems, which have the iLO 4 management engine. Starting with the Pike release, iLO drivers also support ProLiant Gen10 systems, which have the iLO 5 management engine. iLO 5 conforms to the Redfish API, and hence the hardware type redfish is also an option for this kind of hardware, but it will lack the iLO specific features.

For enabling Gen10 systems and getting detailed information on Gen10 feature support in Ironic, please see Enabling ProLiant Gen10 systems in Ironic.

ProLiant hardware is supported by the ilo hardware type and the following classic drivers:

   Note: Starting with the Rocky release, iLO drivers also support the ilo5 hardware type, for ProLiant Gen10 and later systems only.
   Note: All HPE ProLiant servers support the reference hardware type ipmi. HPE ProLiant Gen10 servers also support the hardware type redfish.

The iscsi_ilo and agent_ilo drivers provide security enhanced PXE-less deployment by using iLO virtual media to boot up the bare metal node. These drivers send management information through a management channel, separate from the data channel used for deployment.

The iscsi_ilo and agent_ilo drivers use a deployment ramdisk built from diskimage-builder. The iscsi_ilo driver deploys from the ironic conductor and supports both net-boot and local-boot of the instance. agent_ilo deploys from the bare metal node and supports both net-boot and local-boot of the instance.

The pxe_ilo driver uses PXE/iSCSI for deployment (just like the normal PXE driver) and deploys from the ironic conductor. Additionally it supports automatic setting of the requested boot mode from nova. This driver doesn't require an iLO Advanced license.

The hardware type ilo and iLO-based classic drivers support HPE server features like:

  • UEFI secure boot
  • Certificate based validation of iLO
  • Hardware based secure disk erase using Smart Storage Administrator (SSA) CLI
  • Out-of-band discovery of server attributes through hardware inspection
  • In-band RAID configuration
  • Firmware configuration and secure firmware update


Apart from the above features, ilo5 also supports following features:

  • Out of Band RAID Support

Hardware Interfaces

The ilo hardware type supports all the standard deploy, network, rescue, raid and storage interfaces.

Apart from the standard interfaces, ilo hardware type supports following ilo interfaces:

  • bios - ilo and no-bios. They can be enabled via the enabled_bios_interfaces option in ironic.conf:
   [DEFAULT]
   enabled_hardware_types = ilo
   enabled_bios_interfaces = ilo,no-bios
    Note: ilo is the default bios interface for ilo hardware type.
  • boot - ilo-virtual-media and ilo-pxe. They can be enabled via the enabled_boot_interfaces option in ironic.conf:
   [DEFAULT]
   enabled_hardware_types = ilo
   enabled_boot_interfaces = ilo-virtual-media,ilo-pxe
    Note: ilo-virtual-media is the default boot interface for ilo hardware type.
  • console - ilo and no-console. The default is ilo. They can be enabled via the enabled_console_interfaces option in ironic.conf:
   [DEFAULT]
   enabled_hardware_types = ilo
   enabled_console_interfaces = ilo,no-console
    Note: To use ilo console interface you need to enable iLO feature 'IPMI/DCMI over LAN Access' on iLO4 and iLO5 management engine.
  • inspect - ilo and inspector. They can be enabled via the enabled_inspect_interfaces option in ironic.conf:
   [DEFAULT]
   enabled_hardware_types = ilo
   enabled_inspect_interfaces = ilo,inspector
   Note: ilo is the default inspect interface for ilo hardware type. Ironic Inspector needs to be configured to use inspector as the inspect interface.
  • management - ilo. It can be enabled via the enabled_management_interfaces option in ironic.conf:
   [DEFAULT]
   enabled_hardware_types = ilo
   enabled_management_interfaces = ilo
  • power - ilo. It can be enabled via the enabled_power_interfaces option in ironic.conf:
   [DEFAULT]
   enabled_hardware_types = ilo
   enabled_power_interfaces = ilo

The ilo5 hardware type supports all the standard and ilo interfaces supported by the ilo hardware type as mentioned above, except the raid standard interface. The following are the details of the raid interface:

  • raid - ilo5 and no-raid. It can be enabled via the enabled_raid_interfaces option in ironic.conf:
   [DEFAULT]
   enabled_hardware_types = ilo5
   enabled_raid_interfaces = ilo5,no-raid
  
    Note: ilo5 is the default raid interface for ilo5 hardware type.

The following command can be used to enroll a ProLiant node with ilo hardware type:

   openstack baremetal node create --os-baremetal-api-version=1.38 \
           --driver ilo \
           --deploy-interface direct \
           --raid-interface agent \
           --driver-info ilo_address=<ilo-ip-address> \
           --driver-info ilo_username=<ilo-username> \
           --driver-info ilo_password=<ilo-password> \
           --driver-info ilo_deploy_iso=<glance-uuid-of-deploy-iso>

The following command can be used to enroll a ProLiant node with ilo5 hardware type:

   openstack baremetal node create \
           --driver ilo5 \
           --deploy-interface direct \
           --raid-interface ilo5 \
           --driver-info ilo_address=<ilo-ip-address> \
           --driver-info ilo_username=<ilo-username> \
           --driver-info ilo_password=<ilo-password> \
           --driver-info ilo_deploy_iso=<glance-uuid-of-deploy-iso> \
           --driver-info ilo_rescue_iso=<glance-uuid-of-rescue-iso>

Please refer to Enabling drivers and hardware types for detailed explanation of hardware type.

To enable the same feature set as provided by all iLO classic drivers, apply the following configuration in ironic.conf:

   [DEFAULT]
   enabled_hardware_types = ilo
   enabled_bios_interfaces = ilo
   enabled_boot_interfaces = ilo-virtual-media,ilo-pxe
   enabled_power_interfaces = ilo
   enabled_console_interfaces = ilo
   enabled_raid_interfaces = agent
   enabled_management_interfaces = ilo
   enabled_inspect_interfaces = ilo

The following commands can be used to enroll a node with the same feature set as one of the classic drivers, but using the ilo hardware type:

  • iscsi_ilo:
       openstack baremetal node create --os-baremetal-api-version=1.31 \
           --driver ilo \
           --deploy-interface iscsi \
           --boot-interface ilo-virtual-media \
           --driver-info ilo_address=<ilo-ip-address> \
           --driver-info ilo_username=<ilo-username> \
           --driver-info ilo_password=<ilo-password> \
           --driver-info ilo_deploy_iso=<glance-uuid-of-deploy-iso>
  • pxe_ilo:
       openstack baremetal node create --os-baremetal-api-version=1.31 \
           --driver ilo \
           --deploy-interface iscsi \
           --boot-interface ilo-pxe \
           --driver-info ilo_address=<ilo-ip-address> \
           --driver-info ilo_username=<ilo-username> \
           --driver-info ilo_password=<ilo-password> \
           --driver-info deploy_kernel=<glance-uuid-of-pxe-deploy-kernel> \
           --driver-info deploy_ramdisk=<glance-uuid-of-deploy-ramdisk>
  • agent_ilo:
       openstack baremetal node create --os-baremetal-api-version=1.31 \
           --driver ilo \
           --deploy-interface direct \
           --boot-interface ilo-virtual-media \
           --driver-info ilo_address=<ilo-ip-address> \
           --driver-info ilo_username=<ilo-username> \
           --driver-info ilo_password=<ilo-password> \
           --driver-info ilo_deploy_iso=<glance-uuid-of-deploy-iso>

Prerequisites

  • proliantutils is a python package which contains a set of modules for managing HPE ProLiant hardware. Install proliantutils module on the Ironic conductor node. Minimum version required is 2.4.0.


  $ pip install "proliantutils>=2.4.0"


  • ipmitool command must be present on the service node(s) where ironic-conductor is running. On most Linux distributions, this is provided as part of the ipmitool package. Source code is available at http://ipmitool.sourceforge.net/.

Different Configuration for iLO Drivers

Configure Glance Image Service

1. Configure the Glance image service with its storage backend as Swift. See [4]_ for configuration instructions.

2. Set a temp-url key for the Glance user in Swift. For example, if you have configured Glance with user glance-swift and tenant service, then run the below command:

   swift --os-username=service:glance-swift post -m temp-url-key:mysecretkeyforglance

3. Fill in the required parameters in the [glance] section in /etc/ironic/ironic.conf. Normally you would be required to fill in the following details:

   [glance]
   swift_temp_url_key=mysecretkeyforglance
   swift_endpoint_url=https://10.10.1.10:8080
   swift_api_version=v1
   swift_account=AUTH_51ea2fb400c34c9eb005ca945c0dc9e1
   swift_container=glance

The details can be retrieved by running the below command:

  $ swift --os-username=service:glance-swift stat -v | grep -i url
  StorageURL:     http://10.10.1.10:8080/v1/AUTH_51ea2fb400c34c9eb005ca945c0dc9e1
  Meta Temp-Url-Key: mysecretkeyforglance
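Under the hood, temporary URLs signed with this key follow the standard Swift temp-url scheme (an HMAC-SHA1 over the method, expiry time and object path). A minimal Python sketch, reusing the key and account from the example above; the object name is a placeholder:

```python
# Sketch of how a Swift temporary URL is signed with the temp-url key,
# per the standard Swift temp-url scheme. Key/account from the example
# above; the object name "deploy-image" is a placeholder.
import hmac
import time
from hashlib import sha1

key = b"mysecretkeyforglance"
method = "GET"
expires = int(time.time()) + 3600   # URL valid for one hour
path = "/v1/AUTH_51ea2fb400c34c9eb005ca945c0dc9e1/glance/deploy-image"

body = "%s\n%s\n%s" % (method, expires, path)
sig = hmac.new(key, body.encode("utf-8"), sha1).hexdigest()

url = "https://10.10.1.10:8080%s?temp_url_sig=%s&temp_url_expires=%s" % (path, sig, expires)
print(url)
```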

4. Swift must be accessible with the same admin credentials configured in Ironic. For example, if Ironic is configured with the below credentials in /etc/ironic/ironic.conf:

   [keystone_authtoken]
   admin_password = password
   admin_user = ironic
   admin_tenant_name = service

Ensure auth_version in keystone_authtoken is set to 2. Then, the below command should work:

   $ swift --os-username ironic --os-password password --os-tenant-name service --auth-version 2 stat
                        Account: AUTH_22af34365a104e4689c46400297f00cb
                     Containers: 2
                        Objects: 18
                          Bytes: 1728346241
   Objects in policy "policy-0": 18
     Bytes in policy "policy-0": 1728346241
              Meta Temp-Url-Key: mysecretkeyforglance
                    X-Timestamp: 1409763763.84427
                     X-Trans-Id: tx51de96a28f27401eb2833-005433924b
                   Content-Type: text/plain; charset=utf-8
                  Accept-Ranges: bytes

5. Restart the Ironic conductor service:

   $ service ironic-conductor restart

Web server configuration on conductor

The HTTP(S) web server can be configured in many ways. For apache web server on Ubuntu, refer here

Following config variables need to be set in /etc/ironic/ironic.conf:

use_web_server_for_images in the [ilo] section:

  [ilo]
  use_web_server_for_images = True

http_url and http_root in the [deploy] section:

  [deploy]
  # Ironic compute node's http root path. (string value)
  http_root=/httpboot
  # Ironic compute node's HTTP server URL. Example:
  # http://192.1.2.3:8080 (string value)
  http_url=http://192.168.0.2:8080

use_web_server_for_images: If this variable is set to false, iscsi_ilo and agent_ilo use Swift containers to host the intermediate floppy image and the boot ISO. If it is set to true, these drivers use the local web server for hosting the intermediate files. The default value of use_web_server_for_images is False.

http_url: The value of this variable is prefixed to the generated intermediate files to form the URL which is attached in the virtual media.

http_root: It is the directory location to which ironic conductor copies the intermediate floppy image and the boot ISO.

Note: HTTPS is strongly recommended over an HTTP web server configuration, as a security enhancement. iscsi_ilo and agent_ilo will send the instance's configdrive over an encrypted channel if the web server is HTTPS enabled.

Enabling HTTPS in Swift

iLO drivers iscsi_ilo and agent_ilo use Swift for storing boot images and management information (information for Ironic conductor to provision bare metal hardware). By default, HTTPS is not enabled in Swift. HTTPS is required to encrypt all communication between Swift and Ironic conductor and Swift and bare metal (via Virtual Media). It can be enabled in one of the following ways:

   • Generate a self-signed certificate:
      cd /etc/swift
      openssl req -new -x509 -nodes -out cert.crt -keyout cert.key
   • Add the following lines to /etc/swift/proxy-server.conf under [DEFAULT]:
      bind_port = 443
      cert_file = /etc/swift/cert.crt
      key_file = /etc/swift/cert.key
   • Restart the Swift proxy server.

Web server configuration for Standalone iLO Drivers

  • Set up the web server that serves the deploy ramdisks, outside of the ironic-conductor host. This web server should be accessible to the conductor nodes.
  • Upload the deploy ramdisk images such that the web server in above step can serve them properly.
  • Set up a web server on each conductor. This step is required only for agent_ilo and iscsi_ilo.


Images must be created (see the documentation on building DIB-based deploy ramdisks) and made available for download via an HTTP(S) URL. This document does not describe the installation or configuration of HTTP(S) servers; however:

  • If using [i]PXE, then the network boot loader must be able to initiate a request to download the kernel and ramdisk images from "http_url", and the ironic-conductor must be able to write files to "http_root" that will be served from "http_url".
  • The deployment agent must be able to initiate a request to download the instance image from "http_url".

Requirements for Standalone iLO Drivers

  • iLO 4 Advanced License needs to be installed on iLO to enable Virtual Media feature.
  • Local web server on conductor - the ilo driver uses a web server on the conductor node to store temporary FAT images as well as boot ISO images. It needs to be configured on each conductor node.
  • HTTP(s) web server - When using the ilo driver, the image containing the agent/deploy ramdisk is retrieved directly by iLO from the HTTP(s) web server. This web server need not be on the conductor node.
  • See Web server configuration for Standalone iLO Drivers above.

Configure Standalone iLO Drivers

1. Add http_url and http_root in the [deploy] section in /etc/ironic/ironic.conf. For example:

   http_url = https://10.10.1.10:8080/httpboot/
   http_root = /opt/stack/data/ironic/httpboot/

These determine how the web server on the conductor serves images. http_url is the URL prefix which is used for serving images. http_root is the path on disk that the web server is serving at http_url.
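As a quick sanity check on these two settings, a file placed under http_root must be reachable at the corresponding http_url. The sketch below uses illustrative paths and Python's built-in HTTP server in place of the real conductor web server:

```shell
# Sanity check: a file under http_root must be downloadable via http_url.
# Paths and port are illustrative; substitute your configured values.
HTTP_ROOT=/tmp/demo-httpboot
mkdir -p "$HTTP_ROOT"
echo "ironic-httpboot-ok" > "$HTTP_ROOT/probe.txt"
python3 -m http.server 8080 --directory "$HTTP_ROOT" >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1
curl -s http://127.0.0.1:8080/probe.txt   # prints: ironic-httpboot-ok
kill "$SERVER_PID"
```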

2. Restart the Ironic conductor service:

   $ service ironic-conductor restart

Requirements with Glance Image Service

  • iLO 4 Advanced License needs to be installed on iLO to enable Virtual Media feature.
  • Swift Object Storage Service - ilo driver uses Swift to store temporary FAT images/boot iso.
  • Glance Image Service with Swift configured as its backend - When using ilo drivers, the image containing the agent/deploy ramdisk is retrieved from Swift directly by the iLO.

Drivers

iscsi_ilo driver

Overview

iscsi_ilo driver was introduced as an alternative to the pxe_ipmitool and pxe_ipminative drivers for HPE ProLiant servers. iscsi_ilo uses the virtual media feature in iLO to boot up the bare metal node instead of using PXE or iPXE.

Target Users
  • Users who do not want to use PXE/TFTP protocol on their data centres.
  • Users who have concerns about the PXE driver's security issues and want a security-enhanced, PXE-less deployment mechanism - The PXE driver passes management information in clear text to the baremetal node. However, if the Swift proxy server has an HTTPS endpoint (see Enabling HTTPS in Swift for more information), the iscsi_ilo driver provides enhanced security by passing management information to and from the Swift endpoint over HTTPS. The management information and boot image will be retrieved over the encrypted management network via iLO virtual media.
Tested Platforms

This driver should work on HPE ProLiant Gen7 servers with iLO 3, Gen8 and Gen9 servers with iLO 4 and Gen10 servers with iLO 5.

It has been tested with the following servers:

  • ProLiant DL380 G7
  • ProLiant SL230s Gen8
  • ProLiant DL320e Gen8
  • ProLiant DL380e Gen8
  • ProLiant DL580e Gen8
  • ProLiant BL460c Gen8
  • ProLiant DL180 Gen9 UEFI
  • ProLiant DL360 Gen9 UEFI
  • ProLiant DL380 Gen9 UEFI
  • ProLiant BL460c Gen9
  • ProLiant XL450 Gen9 UEFI
  • ProLiant DL360 Gen10
  • ProLiant DL325 Gen10 Plus
  • ProLiant DL385 Gen10 Plus
Features
  • PXE-less deployment with virtual media.
  • Automatic detection of current boot mode.
  • Automatic setting of the required boot mode if UEFI boot mode is requested by the nova flavor's extra spec.
  • Supports booting the instance from virtual media as well as booting locally from disk. Default is booting from virtual media.
  • UEFI Boot
  • UEFI Secure Boot
  • Passing management information via secure, encrypted management network (virtual media) if Swift proxy server has an HTTPS endpoint. See Enabling HTTPS in Swift for more info. Provisioning is done using iSCSI over data network, so this driver has the benefit of security enhancement with the same performance. It segregates management info from data channel.
  • Remote Console (based on IPMI)
  • HW Sensors
  • Works well for machines with resource constraints (less memory).
  • Local boot (both BIOS and UEFI)
  • Supports deployment of whole disk image.
  • Support for out-of-band hardware inspection.
  • Node cleaning.
  • Standalone iLO drivers.
Requirements
  • iLO 4 or iLO 5 Advanced License needs to be installed on iLO to enable Virtual Media Boot feature.
  • Swift Object Storage Service Or HTTP(s) web server on conductor - iLO driver uses either Swift/HTTP(s) web server on the conductor node to store temporary FAT images as well as boot ISO images.
  • Glance Image Service with Swift configured as its backend Or HTTP(s) web server - When using iscsi_ilo driver, the image containing the deploy ramdisk is retrieved from Swift/HTTP(s) web server directly by the iLO.
Deploy Process
  • Admin configures the Proliant baremetal node for iscsi_ilo driver. The Ironic node configured will have the ilo_deploy_iso property in its driver_info. This will contain the Glance UUID or HTTP(s) location of the ISO deploy ramdisk image.
  • Ironic gets a request to deploy a Glance/HTTP(s) image on the baremetal node.
  • iscsi_ilo driver powers off the baremetal node.
  • If ilo_deploy_iso is a Glance UUID, the driver generates a swift-temp-url for the deploy ramdisk image and attaches it as Virtual Media CDROM on the iLO. If ilo_deploy_iso is a HTTP(s) URL, the driver attaches it directly as Virtual Media CDROM on the iLO.
  • The driver creates a small FAT32 image containing parameters to the deploy ramdisk. This image is uploaded to Swift/HTTP(s) web server and its swift-temp-url/HTTP(s) URL is attached as Virtual Media Floppy on the iLO.
  • The driver sets the node to boot one-time from CDROM.
  • The driver powers on the baremetal node.
  • The deploy kernel/ramdisk is booted on the baremetal node. The ramdisk exposes the local disk over iSCSI and requests Ironic conductor to complete the deployment.
  • The driver on the Ironic conductor writes the glance/HTTP(s) image to the baremetal node's disk.
  • If local-boot is requested, Ironic conductor asks the deployment ramdisk to install the boot loader.
  • If it's a netboot (default), the driver bundles the boot kernel/ramdisk for the deploy image into an ISO and then uploads it to Swift/HTTP(s) web server. This ISO image will be used for booting the deployed instance.
  • The driver reboots the node.
  • For netboot, on the first and subsequent reboots iscsi_ilo driver attaches this boot ISO image in Swift/HTTP(s) as Virtual Media CDROM and then sets iLO to boot from it. If boot_option was set to local, then the instance is booted from disk.
Configuring and Enabling the driver

Note: The steps to create an HTTP(s) web server and upload images to it are out of scope for Ironic.

1. Prepare an ISO deploy ramdisk image from diskimage-builder. This can be done by adding the iso element to the ramdisk-image-create command. This command creates the deploy kernel/ramdisk as well as a bootable ISO image containing them. The below commands create files named deploy-ramdisk.kernel, deploy-ramdisk.initramfs and deploy-ramdisk.iso in the current working directory:

   pip install "diskimage-builder"
   ramdisk-image-create -o deploy-ramdisk ubuntu deploy-ironic iso

2. Upload this image to Glance:

   glance image-create --name deploy-ramdisk.iso --disk-format iso --container-format bare < deploy-ramdisk.iso

3. Add iscsi_ilo to the list of enabled_drivers in /etc/ironic/ironic.conf. For example:

   enabled_drivers = fake,pxe_ssh,pxe_ipmitool,iscsi_ilo

If using HTTP(s) web server:

4. Add http_url and http_root in the [deploy] section in /etc/ironic/ironic.conf. For example:

   http_url = http://10.10.1.10:8080/httpboot/
   http_root = /opt/stack/data/ironic/httpboot/

If using Glance image service with its storage backend as Swift:

5. Configure the Glance image service with its storage backend as Swift. See the Glance documentation for configuration instructions.

6. Set a temp-url key for the Glance user in Swift. For example, if you have configured Glance with user glance-swift and tenant as service, then run the below command:

   swift --os-username=service:glance-swift post -m temp-url-key:mysecretkeyforglance
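For background, Swift temp URLs are signed with an HMAC-SHA1 over the HTTP method, expiry time, and object path, using the key set above. The sketch below shows how such a signature is derived; the key, account, container and object name are illustrative values, not real objects:

```shell
# HMAC-SHA1 signature as used by Swift temp URLs (illustrative values).
KEY=mysecretkeyforglance
OBJECT_PATH=/v1/AUTH_51ea2fb400c34c9eb005ca945c0dc9e1/glance/my-image-object
EXPIRES=$(( $(date +%s) + 3600 ))          # URL valid for one hour
SIG=$(printf 'GET\n%s\n%s' "$EXPIRES" "$OBJECT_PATH" \
      | openssl dgst -sha1 -hmac "$KEY" | awk '{print $NF}')
echo "temp_url_sig=$SIG&temp_url_expires=$EXPIRES"
```

The drivers perform this computation internally; it is shown here only to clarify why the temp-url-key must match between Swift and the [glance] section of ironic.conf.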

7. Fill the required parameters in the [glance] section in /etc/ironic/ironic.conf. Normally you would be required to fill in the following details:

   [glance]
   swift_temp_url_key=mysecretkeyforglance
   swift_endpoint_url=http://10.10.1.10:8080
   swift_api_version=v1
   swift_account=AUTH_51ea2fb400c34c9eb005ca945c0dc9e1
   swift_container=glance

The details can be retrieved by running the below command:

  $ swift --os-username=service:glance-swift stat -v | grep -i url
  StorageURL:     http://10.10.1.10:8080/v1/AUTH_51ea2fb400c34c9eb005ca945c0dc9e1
  Meta Temp-Url-Key: mysecretkeyforglance


8. Swift must be accessible with the same admin credentials configured in Ironic. For example, if Ironic is configured with the below credentials in /etc/ironic/ironic.conf:

   [keystone_authtoken]
   admin_password = password
   admin_user = ironic
   admin_tenant_name = service
   auth_version = 2

Then, the below command should work:

   $ swift --os-username ironic --os-password password --os-tenant-name service --auth-version 2 stat
                        Account: AUTH_22af34365a104e4689c46400297f00cb
                     Containers: 2
                        Objects: 18
                          Bytes: 1728346241
   Objects in policy "policy-0": 18
     Bytes in policy "policy-0": 1728346241
              Meta Temp-Url-Key: mysecretkeyforglance
                    X-Timestamp: 1409763763.84427
                     X-Trans-Id: tx51de96a28f27401eb2833-005433924b
                   Content-Type: text/plain; charset=utf-8
                  Accept-Ranges: bytes

Finally:

9. Restart the Ironic conductor service:

   $ service ironic-conductor restart
Registering Proliant node in Ironic

Nodes configured for iLO driver should have the driver property set to iscsi_ilo. The following configuration values are also required in driver_info:

  • ilo_address: IP address or hostname of the iLO.
  • ilo_username: Username for the iLO with administrator privileges.
  • ilo_password: Password for the above iLO user.
  • ilo_deploy_iso: The Glance UUID or HTTP(s) URL of the deploy ramdisk ISO image.
  • ca_file: (optional) CA certificate file to validate iLO.
  • client_port: (optional) Port to be used for iLO operations if you are using a custom port on the iLO. Default port used is 443.
  • client_timeout: (optional) Timeout for iLO operations. Default timeout is 60 seconds.
  • console_port: (optional) Node's UDP port for console access. Any unused port on the Ironic conductor node may be used.


NOTE:

  • To update SSL certificates into iLO, you can refer to http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=c04530504 . You can use iLO hostname or IP address as a 'Common Name (CN)' while generating Certificate Signing Request (CSR). Use the same value as ilo_address while enrolling node to Bare Metal service to avoid SSL certificate validation errors related to hostname mismatch.
  • If configuration values for ca_file, client_port and client_timeout are not provided in the driver_info of the node, the corresponding config variables defined under [ilo] section in ironic.conf will be used.
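For example, conductor-wide defaults for these options could look like the following fragment of ironic.conf (the certificate path is illustrative):

```ini
[ilo]
# Used when ca_file is absent from a node's driver_info (illustrative path)
ca_file = /etc/ironic/ilo-ca.crt
# Defaults matching the per-node options described above
client_port = 443
client_timeout = 60
```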


For example, you could run a similar command like below to enroll the ProLiant node:

 ironic node-create -d iscsi_ilo -i ilo_address=<ilo-ip-address> -i ilo_username=<ilo-username> -i ilo_password=<ilo-password>  -i ilo_deploy_iso=<glance-uuid-of-deploy-iso>
Boot modes

iscsi_ilo driver supports automatic detection of boot mode (Legacy BIOS or UEFI) and setting of boot mode from BIOS to UEFI. Please see Note below for details.

  • When no boot mode setting is provided, iscsi_ilo driver preserves the current boot mode of the bare metal on the deployed instance.
  • A requirement of a specific boot mode may be provided by adding boot_mode:bios or boot_mode:uefi to capabilities property within the properties field of an Ironic node. iscsi_ilo driver will then deploy and configure the instance in the specified boot mode.

For example, to make a Proliant baremetal node boot always in UEFI mode, run the following command::

  ironic node-update <node-id> add properties/capabilities='boot_mode:uefi'

NOTE:

  • We recommend setting the boot_mode property on systems that support both UEFI and legacy modes if the user wants Nova to be able to choose a baremetal node with the appropriate boot mode. This applies to Gen8 (ProLiant DL580 only) and Gen9 systems.
  • iscsi_ilo driver automatically sets boot mode from BIOS to UEFI, if the requested boot mode in nova boot is UEFI. However, users will need to pre-configure boot mode to Legacy on Gen8 (ProLiant DL580 only) and Gen9 servers if they want to deploy the node in legacy mode.
  • The automatic boot ISO creation for UEFI boot mode has been enabled in Kilo. The manual creation of boot ISO for UEFI boot mode is also supported. For the latter, the boot ISO for the deploy image needs to be built separately and the deploy image's boot_iso property in Glance should contain the Glance UUID of the boot ISO. For building boot ISO, add the iso element after adding the baremetal element while building disk images with diskimage-builder
   disk-image-create ubuntu baremetal iso
  • From nova, a specific boot mode may be requested by using the ComputeCapabilitiesFilter. For example, it can be set in a flavor like below:
  nova flavor-key ironic-test-3 set capabilities:boot_mode="uefi"
  nova boot --flavor ironic-test-3 --image test-image instance-1

agent_ilo driver

Overview

agent_ilo driver was introduced as an alternative to the agent_ipmitool and agent_ipminative drivers for HPE ProLiant servers. agent_ilo driver uses the virtual media feature in HPE ProLiant baremetal servers to boot up the Ironic Python Agent (IPA) on the baremetal node instead of using PXE. For more information on IPA, refer to https://wiki.openstack.org/wiki/Ironic-python-agent.

Target Users
  • Users who do not want to use PXE/TFTP protocol on their data centres.
Tested Platforms

This driver should work on HPE ProLiant Gen7 servers with iLO 3, Gen8 and Gen9 servers with iLO 4 and Gen10 servers with iLO 5.

It has been tested with the following servers:

  • ProLiant DL380 G7
  • ProLiant SL230s Gen8
  • ProLiant DL320e Gen8
  • ProLiant DL380e Gen8
  • ProLiant DL580e Gen8
  • ProLiant BL460c Gen8
  • ProLiant DL180 Gen9 UEFI
  • ProLiant DL360 Gen9 UEFI
  • ProLiant DL380 Gen9 UEFI
  • ProLiant BL460c Gen9
  • ProLiant XL450 Gen9 UEFI
  • ProLiant DL360 Gen10
  • Synergy 480 Gen9
  • ProLiant DL325 Gen10 Plus
  • ProLiant DL385 Gen10 Plus
Features
  • PXE-less deploy with virtual media using Ironic Python Agent.
  • Remote Console (based on IPMI)
  • HW Sensors
  • Automatic detection of current boot mode.
  • Automatic setting of the required boot mode if UEFI boot mode is requested by the nova flavor's extra spec.
  • UEFI Boot
  • UEFI Secure Boot
  • Supports booting the instance from virtual media as well as booting locally from disk.
  • Supports deployment of whole disk image and partition image.
  • Local boot (both BIOS and UEFI)
  • Segregates management info from data channel.
  • Support for out-of-band hardware inspection.
  • Node cleaning.
  • Standalone iLO drivers.
  • Supports tenant network isolation for node instances provisioned for vlan type networks.
Requirements
  • iLO 4 or iLO 5 Advanced License needs to be installed on iLO to enable Virtual Media Boot feature.
  • Swift Object Storage Service Or HTTP(s) web server on conductor - iLO driver uses either Swift/HTTP(s) web server on the conductor node to store temporary FAT images as well as boot ISO images.
  • Glance Image Service with Swift configured as its backend Or HTTP(s) web server - When using agent_ilo driver, the image containing the agent is retrieved from Swift/HTTP(s) web server directly by the iLO.
Deploy Process
  • Admin configures the Proliant baremetal node for agent_ilo driver. The Ironic node configured will have the ilo_deploy_iso property in its driver_info. This will contain the Glance UUID/HTTP(s) URL of the ISO deploy agent image containing the agent.
  • Ironic gets a request to deploy a Glance/HTTP(s) image on the baremetal node.
  • Driver powers off the baremetal node.
  • If ilo_deploy_iso is a Glance UUID, the driver generates a swift-temp-url for the deploy agent image and attaches it as Virtual Media CDROM on the iLO. If ilo_deploy_iso is a HTTP(s) URL, the driver attaches it directly as Virtual Media CDROM on the iLO.
  • Driver creates a small FAT32 image containing parameters to the agent ramdisk. This image is uploaded to Swift/HTTP(s) and its swift-temp-url/HTTP(s) URL is attached as Virtual Media Floppy on the iLO.
  • Driver sets the node to boot one-time from CDROM.
  • Driver powers on the baremetal node.
  • The deploy kernel/ramdisk containing the agent is booted on the baremetal node. The agent ramdisk talks to the Ironic conductor, downloads the image directly from Swift/HTTP(s) and writes the image to chosen disk on the node.
  • Driver sets the node to permanently boot from disk and then reboots the node.
Configuring and Enabling the driver

1. Prepare an ISO deploy image containing the Ironic Python Agent. This can be done with diskimage-builder using the ironic-agent and iso elements. The below commands create a file named ipa-ramdisk.iso in the current working directory:

   $ pip install "diskimage-builder"
   $ disk-image-create -o ipa-ramdisk fedora ironic-agent iso

2. Upload the IPA ramdisk image to Glance:

   glance image-create --name ipa-ramdisk.iso --disk-format iso --container-format bare < ipa-ramdisk.iso

3. Configure the Glance image service with its storage backend as Swift. See the Glance documentation for configuration instructions.

4. Set a temp-url key for the Glance user in Swift. For example, if you have configured Glance with user glance-swift and tenant as service, then run the below command:

   swift --os-username=service:glance-swift post -m temp-url-key:mysecretkeyforglance

5. Fill the required parameters in the [glance] section in /etc/ironic/ironic.conf. Normally you would be required to fill in the following details:

   [glance]
   swift_temp_url_key=mysecretkeyforglance
   swift_endpoint_url=http://10.10.1.10:8080
   swift_api_version=v1
   swift_account=AUTH_51ea2fb400c34c9eb005ca945c0dc9e1
   swift_container=glance

The details can be retrieved by running the below command:
  $ swift --os-username=service:glance-swift stat -v | grep -i url
  StorageURL:     http://10.10.1.10:8080/v1/AUTH_51ea2fb400c34c9eb005ca945c0dc9e1
  Meta Temp-Url-Key: mysecretkeyforglance

6. Swift must be accessible with the same admin credentials configured in Ironic. For example, if Ironic is configured with the below credentials in /etc/ironic/ironic.conf:

   [keystone_authtoken]
   admin_password = password
   admin_user = ironic
   admin_tenant_name = service
   auth_version = 2

Then, the below command should work:

   $ swift --os-username ironic --os-password password --os-tenant-name service --auth-version 2 stat
                        Account: AUTH_22af34365a104e4689c46400297f00cb
                     Containers: 2
                        Objects: 18
                          Bytes: 1728346241
   Objects in policy "policy-0": 18
     Bytes in policy "policy-0": 1728346241
              Meta Temp-Url-Key: mysecretkeyforglance
                    X-Timestamp: 1409763763.84427
                     X-Trans-Id: tx51de96a28f27401eb2833-005433924b
                   Content-Type: text/plain; charset=utf-8
                  Accept-Ranges: bytes


7. Add agent_ilo to the list of enabled_drivers in /etc/ironic/ironic.conf. For example:

   enabled_drivers = fake,pxe_ssh,pxe_ipmitool,agent_ilo

8. Restart the Ironic conductor service:

   $ service ironic-conductor restart
Registering Proliant node in Ironic

Nodes configured for iLO driver should have the driver property set to agent_ilo. The following configuration values are also required in driver_info:

  • ilo_address: IP address or hostname of the iLO.
  • ilo_username: Username for the iLO with administrator privileges.
  • ilo_password: Password for the above iLO user.
  • ilo_deploy_iso: The Glance UUID or HTTP(s) URL of the deploy ISO image containing the agent.
  • ca_file: (optional) CA certificate file to validate iLO.
  • client_port: (optional) Port to be used for iLO operations if you are using a custom port on the iLO. Default port used is 443.
  • client_timeout: (optional) Timeout for iLO operations. Default timeout is 60 seconds.
  • console_port: (optional) Node's UDP port for console access. Any unused port on the Ironic conductor node may be used.


NOTE:

  • To update SSL certificates into iLO, you can refer to http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=c04530504 . You can use iLO hostname or IP address as a 'Common Name (CN)' while generating Certificate Signing Request (CSR). Use the same value as ilo_address while enrolling node to Bare Metal service to avoid SSL certificate validation errors related to hostname mismatch.
  • If configuration values for ca_file, client_port and client_timeout are not provided in the driver_info of the node, the corresponding config variables defined under [ilo] section in ironic.conf will be used.

For example, you could run a similar command like below to enroll the ProLiant node:

 ironic node-create -d agent_ilo -i ilo_address=<ilo-ip-address> -i ilo_username=<ilo-username> -i ilo_password=<ilo-password>  -i ilo_deploy_iso=<glance-uuid-of-deploy-iso>
Boot modes

agent_ilo driver supports automatic detection of boot mode (Legacy BIOS or UEFI) and setting of boot mode from BIOS to UEFI. Please see Note below for details.

  • When no boot mode setting is provided, agent_ilo driver preserves the current boot mode on the deployed instance.
  • A requirement of a specific boot mode may be provided by adding boot_mode:bios or boot_mode:uefi to capabilities property within the properties field of an Ironic node. Then agent_ilo driver will deploy and configure the instance in the appropriate boot mode.

For example, to make a Proliant baremetal node boot in UEFI mode, run the following command::

  ironic node-update <node-id> add properties/capabilities='boot_mode:uefi'

NOTE:

  • We recommend setting the boot_mode property on systems that support both UEFI and legacy modes if the user wants Nova to be able to choose a baremetal node with the appropriate boot mode. This applies to ProLiant DL580 Gen8 and Gen9 systems.
  • agent_ilo driver automatically sets the boot mode from BIOS to UEFI if the requested boot mode in nova boot is UEFI. However, users will need to pre-configure the boot mode to Legacy on Gen8 (ProLiant DL580 only) and Gen9 servers if they want to deploy the node in legacy mode.
  • From nova, a specific boot mode may be requested by using the ComputeCapabilitiesFilter. For example, it can be set in a flavor like below:
  nova flavor-key ironic-test-3 set capabilities:boot_mode="uefi"
  nova boot --flavor ironic-test-3 --image test-image instance-1

pxe_ilo driver

Overview

pxe_ilo driver uses PXE/iSCSI (just like the pxe_ipmitool driver) to deploy the image, and uses iLO to do all management operations on the baremetal node (instead of using IPMI).

Target Users
  • Users who want to use PXE/iSCSI for deployment in their environment or who don't have Advanced License in their iLO.
  • Users who don't want to configure boot mode and boot device manually on the baremetal node.
  • Users who want to use iLO driver value-add features such as boot mode management, out-of-band node cleaning and hardware introspection.
Tested Platforms

This driver should work on HPE ProLiant Gen7 servers with iLO 3, Gen8 and Gen9 servers with iLO 4 and Gen10 servers with iLO 5.

It has been tested with the following servers:

  • ProLiant DL380 G7
  • ProLiant SL230s Gen8
  • ProLiant DL320e Gen8
  • ProLiant DL380e Gen8
  • ProLiant DL580e Gen8
  • ProLiant BL460c Gen8
  • ProLiant DL180 Gen9 UEFI
  • ProLiant DL360 Gen9 UEFI
  • ProLiant DL380 Gen9 UEFI
  • ProLiant BL460c Gen9
  • ProLiant XL450 Gen9 UEFI
  • ProLiant DL360 Gen10
  • ProLiant DL325 Gen10 Plus
  • ProLiant DL385 Gen10 Plus
Features
  • Automatic detection of current boot mode.
  • Automatic setting of the required boot mode if UEFI boot mode is requested by the nova flavor's extra spec.
  • Remote Console (based on IPMI)
  • HW Sensors
  • UEFI Boot
  • UEFI Secure Boot
  • Local boot (both BIOS and UEFI)
  • Supports deployment of whole disk image and partition image.
  • Supports booting the instance from PXE as well as booting locally from disk.
  • Segregates management info from data channel.
  • Support for out-of-band hardware inspection.
  • Node cleaning
  • Standalone iLO drivers.
Requirements

None.

Configuring and Enabling the driver

1. Prepare a deploy ramdisk image from diskimage-builder. The below command creates files named deploy-ramdisk.kernel and deploy-ramdisk.initramfs in the current working directory:

   ramdisk-image-create -o deploy-ramdisk ubuntu deploy-ironic

2. Upload this image to Glance:

   glance image-create --name deploy-ramdisk.kernel --disk-format aki --container-format aki < deploy-ramdisk.kernel
   glance image-create --name deploy-ramdisk.initramfs --disk-format ari --container-format ari < deploy-ramdisk.initramfs

3. Add pxe_ilo to the list of enabled_drivers in /etc/ironic/ironic.conf. For example:

   enabled_drivers = fake,pxe_ssh,pxe_ipmitool,pxe_ilo

4. Restart the Ironic conductor service:

   service ironic-conductor restart
Registering Proliant node in Ironic

Nodes configured for iLO driver should have the driver property set to pxe_ilo. The following configuration values are also required in driver_info:

  • ilo_address: IP address or hostname of the iLO.
  • ilo_username: Username for the iLO with administrator privileges.
  • ilo_password: Password for the above iLO user.
  • pxe_deploy_kernel: The Glance UUID of the deployment kernel.
  • pxe_deploy_ramdisk: The Glance UUID of the deployment ramdisk.
  • ca_file: (optional) CA certificate file to validate iLO.
  • client_port: (optional) Port to be used for iLO operations if you are using a custom port on the iLO. Default port used is 443.
  • client_timeout: (optional) Timeout for iLO operations. Default timeout is 60 seconds.
  • console_port: (optional) Node's UDP port for console access. Any unused port on the Ironic conductor node may be used.


NOTE:

  • To update SSL certificates into iLO, you can refer to http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=c04530504 . You can use iLO hostname or IP address as a 'Common Name (CN)' while generating Certificate Signing Request (CSR). Use the same value as ilo_address while enrolling node to Bare Metal service to avoid SSL certificate validation errors related to hostname mismatch.
  • If configuration values for ca_file, client_port and client_timeout are not provided in the driver_info of the node, the corresponding config variables defined under [ilo] section in ironic.conf will be used.


For example, you could run a similar command like below to enroll the ProLiant node:

 ironic node-create -d pxe_ilo -i ilo_address=<ilo-ip-address> -i ilo_username=<ilo-username> -i ilo_password=<ilo-password>
 -i pxe_deploy_kernel=<glance-uuid-of-pxe-deploy-kernel> -i pxe_deploy_ramdisk=<glance-uuid-of-deploy-ramdisk>
Boot modes

pxe_ilo driver supports automatic detection of boot mode (Legacy BIOS or UEFI) and setting of boot mode from BIOS to UEFI. Please see Note below for details.

  • When no boot mode setting is provided, pxe_ilo driver preserves the current boot mode on the deployed instance.
  • A requirement of a specific boot mode may be provided by adding boot_mode:bios or boot_mode:uefi to the capabilities property within the properties field of an Ironic node. Then pxe_ilo driver will deploy and configure the instance in the appropriate boot mode:
  ironic node-update <NODE-ID> add properties/capabilities='boot_mode:uefi'

NOTE:

  • We recommend setting the boot_mode property on systems that support both UEFI and legacy modes if the user wants Nova to be able to choose a baremetal node with the appropriate boot mode. This applies to ProLiant DL580 Gen8 and Gen9 systems.
  • pxe_ilo driver automatically sets the boot mode from BIOS to UEFI if the requested boot mode in nova boot is UEFI. However, users will need to pre-configure the boot mode to Legacy on DL580 Gen8 and Gen9 servers if they want to deploy the node in legacy mode.
  • From nova, a specific boot mode may be requested by using the ComputeCapabilitiesFilter. For example, it can be set in a flavor like below:
  nova flavor-key ironic-test-3 set capabilities:boot_mode="uefi"
  nova boot --flavor ironic-test-3 --image test-image instance-1

UEFI Secure Boot support

  • The following drivers support UEFI secure boot deploy:
    • iscsi_ilo
    • agent_ilo
    • pxe_ilo


Tested Platforms: This feature is available on HPE ProLiant Gen9 servers and above with iLO 4. It has been tested with the following servers:

  • ProLiant DL360 Gen9 UEFI
  • ProLiant DL380 Gen9 UEFI


The UEFI secure boot mode can be configured in Ironic by adding the secure_boot parameter to the capabilities parameter within the properties field of an Ironic node.

secure_boot is a boolean parameter and takes the value true or false.

To enable secure_boot on a node, add it to capabilities as below:

 ironic node-update <node-uuid> add properties/capabilities='secure_boot:true'

or, alternatively use hardware inspection to populate the secure boot capability.

Nodes having secure_boot set to true may be requested by adding an extra_spec to the Nova flavor::

 nova flavor-key ironic-test-3 set capabilities:secure_boot="true"
 nova boot --flavor ironic-test-3 --image test-image instance-1

If capabilities is used in extra_spec as above, the Nova scheduler (ComputeCapabilitiesFilter) will match only Ironic nodes which have secure_boot set appropriately in properties/capabilities. It will filter out the rest of the nodes.

The above facility for matching in Nova can be used in heterogeneous environments where there is a mix of machines supporting and not supporting UEFI secure boot, and the operator wants to give the user a choice regarding secure boot. If the flavor doesn't contain secure_boot, then the Nova scheduler will not consider secure boot mode as a placement criterion; hence the user may get a secure-boot-capable machine matching the specified flavor, but the deployment would not use its secure boot capability. A secure boot deploy happens only when it is explicitly specified through the flavor.

Use the ubuntu-signed or fedora element to build signed ubuntu deploy ISOs and user images with diskimage-builder. The below command creates files named deploy-ramdisk.kernel, deploy-ramdisk.initramfs and deploy-ramdisk.iso in the current working directory:

   pip install "diskimage-builder"
   ramdisk-image-create -o deploy-ramdisk ubuntu-signed deploy-ironic iso

The below command creates files named cloud-image-boot.iso, cloud-image.initrd, cloud-image.vmlinuz and cloud-image.qcow2 in the current working directory:

   disk-image-create -o cloud-image ubuntu-signed baremetal iso

NOTE:

  • UEFI secure boot is enabled when instance image is getting booted. The bare metal deploy happens in UEFI boot mode.
  • In UEFI secure boot, digitally signed bootloader should be able to validate digital signatures of kernel during boot process. This requires that the bootloader contains the digital signatures of the kernel. For iscsi_ilo driver, it is recommended that boot_iso property for user image contains the Glance UUID of the boot ISO. If boot_iso property is not updated in Glance for the user image, it would create the boot_iso using bootloader from the deploy iso. This boot_iso will be able to boot the user image in UEFI secure boot environment only if the bootloader is signed and can validate digital signatures of user image kernel.
  • For pxe_ilo driver, in case of deploy of partition image, ensure that the signed grub2 bootloader used during deploy can validate digital signature of the kernel in the instance partition image. If signed grub2 cannot validate kernel in the instance partition image, boot will fail for the same.

UEFI Boot from iSCSI volume support

With Gen9 (UEFI firmware version 1.40 or higher) and Gen10 HPE ProLiant servers, the driver supports firmware based UEFI boot of an iSCSI cinder volume.

This feature requires the node to be configured to boot in UEFI boot mode, the user image to be UEFI bootable, and PortFast to be enabled in the switch configuration so that the port moves immediately to the spanning tree forwarding state and setting the iSCSI target as a persistent device does not take long.

The driver does not support this functionality when in BIOS boot mode. In case the node is configured with ilo-pxe boot interface and the boot mode configured on the bare metal is BIOS, the iscsi boot from volume is performed using ipxe. See Boot From Volume for more details.

To use this feature, configure the boot mode of the bare metal to UEFI and configure the corresponding Ironic node using the steps given in Boot From Volume. In a cloud environment with a mix of nodes configured for BIOS and UEFI boot modes, the virtual media driver supports only UEFI boot mode; attempting to boot a BIOS volume from iSCSI with it results in an error.

Hardware Inspection

Hardware inspection is supported by the following drivers:

  • pxe_ilo
  • iscsi_ilo
  • agent_ilo

The inspection can be initiated by using the following commands:

  • Move the node to manageable state:
   ironic node-set-provision-state <node_UUID> manage
  • Initiate inspection:
   ironic node-set-provision-state <node_UUID> inspect

NOTE:

  • The disk size is returned by RIBCL/RIS only when RAID is preconfigured on the storage. If the storage is Direct Attached Storage, then RIBCL/RIS fails to get the disk size.
    • The SNMPv3 inspection gets the disk size for all types of storage. If RIBCL/RIS is unable to get the disk size and SNMPv3 inspection is requested, proliantutils performs SNMPv3 inspection to get the disk size. If proliantutils is unable to get the disk size, it raises an error. This feature is available in proliantutils release version >= 2.2.0.
    • The iLO must be updated with SNMPv3 authentication details. Refer to the section "SNMPv3 Authentication" in http://h20566.www2.hpe.com/hpsc/doc/public/display?docId=c03334051 for setting up authentication details on iLO. The following parameters are mandatory in driver_info for SNMPv3 inspection:
      • ``snmp_auth_user`` : The SNMPv3 user.
      • ``snmp_auth_prot_password`` : The auth protocol pass phrase.
      • ``snmp_auth_priv_password`` : The privacy protocol pass phrase.
    • The following parameters are optional for SNMPv3 inspection:
      • ``snmp_auth_protocol`` : The Auth Protocol. The valid values are "MD5" and "SHA". The iLO default value is "MD5".
      • ``snmp_auth_priv_protocol`` : The Privacy protocol. The valid values are "AES" and "DES". The iLO default value is "DES".
  • The iLO firmware version should be 2.10 or above for nic_capacity to be discovered.
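
The SNMPv3 parameters above can be sanity-checked before enrolling the node. A minimal sketch (the helper and its error messages are illustrative, not part of proliantutils):

```python
# Mandatory and optional driver_info keys for SNMPv3 inspection,
# as listed above; the validation helper itself is illustrative.
MANDATORY = ('snmp_auth_user', 'snmp_auth_prot_password',
             'snmp_auth_priv_password')
VALID_AUTH = ('MD5', 'SHA')
VALID_PRIV = ('AES', 'DES')

def validate_snmpv3(driver_info):
    missing = [k for k in MANDATORY if not driver_info.get(k)]
    if missing:
        raise ValueError('Missing SNMPv3 parameters: %s' % ', '.join(missing))
    # Optional parameters default to the iLO defaults (MD5/DES).
    if driver_info.get('snmp_auth_protocol', 'MD5') not in VALID_AUTH:
        raise ValueError('Invalid snmp_auth_protocol')
    if driver_info.get('snmp_auth_priv_protocol', 'DES') not in VALID_PRIV:
        raise ValueError('Invalid snmp_auth_priv_protocol')
    return True
```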

The inspection process will discover the following essential properties (properties required for scheduling deployment):

  • memory_mb: memory size
  • cpus: number of cpus
  • cpu_arch: cpu architecture
  • local_gb: disk size


Inspection can also discover the following extra capabilities for iLO drivers:

  • ilo_firmware_version: iLO firmware version
  • rom_firmware_version: System ROM firmware version
  • secure_boot: secure boot is supported or not. The possible values are 'true' or 'false'. The value is returned as 'true' if secure boot is supported by the server.
  • server_model: server model
  • pci_gpu_devices: number of gpu devices connected to the baremetal.
  • nic_capacity: the max speed of the embedded NIC adapter.


The operator can specify these capabilities in nova flavor for node to be selected for scheduling:

 nova flavor-key my-baremetal-flavor set capabilities:server_model="<in> Gen8"
 nova flavor-key my-baremetal-flavor set capabilities:pci_gpu_devices="> 0"
 nova flavor-key my-baremetal-flavor set capabilities:nic_capacity="10Gb"
 nova flavor-key my-baremetal-flavor set capabilities:ilo_firmware_version="<in> 2.10"
 nova flavor-key my-baremetal-flavor set capabilities:secure_boot="true"

The above are just examples of using capabilities in a nova flavor.
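
Conceptually, the ComputeCapabilitiesFilter compares each flavor extra_spec against the node capability value, honouring operators such as '<in>' and '>'. A much simplified sketch (the real filter supports more operators and edge cases):

```python
def matches(node_value, required):
    """Evaluate a flavor extra_spec against a node capability value.

    Supports a subset of the operators used above: '<in>' (substring),
    '>' (numeric greater-than) and plain equality.
    """
    if required.startswith('<in>'):
        return required[4:].strip() in node_value
    if required.startswith('>'):
        return float(node_value) > float(required[1:].strip())
    return node_value == required

print(matches('ProLiant DL380 Gen8', '<in> Gen8'))  # True
print(matches('2', '> 0'))                          # True
print(matches('true', 'true'))                      # True
```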

Enabling HTTPS in Swift

iLO drivers iscsi_ilo and agent_ilo use Swift for storing boot images and management information. By default, HTTPS is not enabled in Swift. HTTPS is required to encrypt all communication between the Ironic conductor and the Swift proxy server, thereby preventing eavesdropping of network packets. It can be enabled as follows:

    • Create a self-signed certificate:
   cd /etc/swift
   openssl req -new -x509 -nodes -out cert.crt -keyout cert.key
    • Add the following lines to /etc/swift/proxy-server.conf under [DEFAULT]:
 bind_port = 443
 cert_file = /etc/swift/cert.crt
 key_file = /etc/swift/cert.key
    • Restart the Swift proxy server.
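
The proxy-server.conf edit above can also be scripted. A sketch using Python's configparser (illustrative only: configparser rewrites the file and normalizes its formatting, so editing by hand is preferable on a real system):

```python
import configparser

def enable_https(conf_path, cert='/etc/swift/cert.crt',
                 key='/etc/swift/cert.key'):
    """Set the [DEFAULT] options from the steps above in proxy-server.conf."""
    cfg = configparser.ConfigParser()
    cfg.read(conf_path)
    cfg['DEFAULT']['bind_port'] = '443'
    cfg['DEFAULT']['cert_file'] = cert
    cfg['DEFAULT']['key_file'] = key
    # Rewrites the whole file; comments and ordering are not preserved.
    with open(conf_path, 'w') as f:
        cfg.write(f)
```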

Node Cleaning Support

  • The following drivers support node cleaning:
    • pxe_ilo
    • iscsi_ilo
    • agent_ilo

Ironic provides two modes for node cleaning: automated and manual. Automated cleaning is performed automatically before the first workload is assigned to a node and when hardware is recycled from one workload to another, whereas manual cleaning must be invoked by the operator.

Automated cleaning

Node automated cleaning is enabled by default. This setting can be changed in ironic.conf. (Prior to Mitaka, this option was named ‘clean_nodes’)

   [conductor]
   automated_clean=true

OR

   [conductor]
   automated_clean=false

Nodes are set to cleaning state in either of the following -

During deletion of an existing instance, i.e. when the node moves from ACTIVE -> AVAILABLE state
   ironic node-set-provision-state <node-uuid> deleted
Or while moving the node from MANAGEABLE -> AVAILABLE state
   ironic node-set-provision-state <node-uuid> provide

Currently, supported out-of-band iLO automated cleaning operations are:

  • reset_bios_to_default: Resets system ROM / BIOS Settings to default. This clean step is supported only on Gen9 and above servers. By default, enabled with priority 10.
  • reset_secure_boot_keys_to_default: Resets secure boot keys to manufacturer’s defaults. This step is supported only on Gen9 and above servers. By default, enabled with priority 20.
  • reset_ilo_credential: Resets the iLO password, if ‘ilo_change_password’ is specified as part of node’s driver_info. By default, enabled with priority 30.
  • clear_secure_boot_keys: Clears all secure boot keys. This step is supported only on Gen9 and above servers. By default, this step is disabled.
  • reset_ilo: Resets the iLO. By default, this step is disabled.


Additionally, the agent_ilo driver supports the inband automated cleaning operation erase_devices, which performs a Sanitize Erase on disks in HPE ProLiant servers. Sanitize disk erase is performed only if the ramdisk was created using diskimage-builder from the Ocata release or later; otherwise the erase_devices implementation available in Ironic Python Agent is used. By default, this step is disabled. See Sanitize Disk Erase Support for more information.

You may also need to configure a Cleaning Network. To disable a particular automated clean step or change its priority, update the respective configuration options in ironic.conf:

   [ilo]
   clean_priority_reset_ilo=0
   clean_priority_reset_bios_to_default=10
   clean_priority_reset_secure_boot_keys_to_default=20
   clean_priority_clear_secure_boot_keys=0
   clean_priority_reset_ilo_credential=30
   [deploy]
   erase_devices_priority=0
   

To disable a particular automated clean step, update the priority of that step to 0. For more information on node automated cleaning, see Automated cleaning.
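
A clean step whose priority option is 0 is disabled. The following sketch (not part of Ironic) reads options like those above and lists the enabled steps in the order they would run, highest priority first:

```python
import configparser

# Sample configuration, copied from the ironic.conf snippet above.
ILO_CONF = """
[ilo]
clean_priority_reset_ilo=0
clean_priority_reset_bios_to_default=10
clean_priority_reset_secure_boot_keys_to_default=20
clean_priority_clear_secure_boot_keys=0
clean_priority_reset_ilo_credential=30
[deploy]
erase_devices_priority=0
"""

def enabled_steps(conf_text):
    """Return the priority options > 0, highest priority first."""
    cfg = configparser.ConfigParser()
    cfg.read_string(conf_text)
    steps = {}
    for section in cfg.sections():
        for opt, val in cfg.items(section):
            if 'priority' in opt:
                steps[opt] = int(val)
    # Priority 0 means disabled; higher priority runs first.
    return sorted((s for s, p in steps.items() if p > 0),
                  key=lambda s: -steps[s])

print(enabled_steps(ILO_CONF))
```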

Manual cleaning

When initiating a manual clean, the operator specifies the cleaning steps to be performed. Manual cleaning can only be performed when a node is in the MANAGEABLE state, and once it is finished the node is put back in the MANAGEABLE state. The REST API request to initiate manual cleaning is available in API version 1.15 and higher, so from the command line you need to do:

   ironic --ironic-api-version 1.15 node-set-provision-state --clean-steps input_manual_clean_steps.json <node-uuid> clean

Currently, supported out-of-band iLO manual cleaning operations are:

  • activate_license:
Activates the iLO Advanced license. This is an out-of-band manual cleaning step associated with the management interface. Please note that this operation cannot be performed using virtual media based drivers like iscsi_ilo and agent_ilo, as they need this type of advanced license already active to boot via virtual media and start the cleaning operation (virtual media is itself an advanced feature). If an advanced license is already active and the user wants to overwrite the current license key, for example in the case of a multi-server activation key delivered with a flexible-quantity kit or after completing an Activation Key Agreement (AKA), then these drivers can still be used for executing this cleaning step.
See Activating iLO Advanced license as manual clean step for user guidance on usage.
   Note: This feature is not applicable to Synergy 480 Gen9 machines because the Synergy machines are installed with the Advanced iLO license by default.
  • update_firmware:
Updates the firmware of the devices. This is also an out-of-band step associated with the management interface. The supported devices for firmware update are: ilo, cpld, power_pic, bios and chassis. Some device firmware cannot be updated via this method, such as storage controllers, host bus adapters, disk drive firmware, network interfaces and Onboard Administrator (OA). Refer to the table below for commonly used descriptions of the above components.
Device     Description
ilo        BMC for HPE ProLiant servers
cpld       System programmable logic device
power_pic  Power management controller
bios       HPE ProLiant System ROM
chassis    System chassis device
See Initiating firmware update as manual clean step for user guidance on usage.

And, for more information on node manual cleaning, see Manual cleaning

Activating iLO Advanced license as manual clean step

iLO drivers can activate the iLO Advanced license key as a manual cleaning step. Any manual cleaning step can only be initiated when a node is in the MANAGEABLE state, and once the manual cleaning is finished, the node is put back in the MANAGEABLE state. Users can follow the steps from Manual cleaning to initiate a manual cleaning operation on a node. The following illustrates executing the iLO Advanced license activation as a manual clean step via the ironic client:

   ironic node-set-provision-state <node-uuid> manage
   ironic --ironic-api-version latest node-set-provision-state --clean-steps /home/deray/license_activation_clean_step.json <node-uuid> clean

An example of a manual clean step with activate_license as the only clean step (a typical content of the license_activation_clean_step.json file):

   [{
     "interface": "management",
     "step": "activate_license",
     "args": {
       "ilo_license_key": "ABC12-XXXXX-XXXXX-XXXXX-YZ345"
     }
   }]

The attributes of the activate_license clean step are as follows:

Attribute             Description
interface             Interface of the clean step; here management
step                  Name of the clean step; here activate_license
args                  Keyword-argument entry (<name>: <value>) being passed to the clean step
args.ilo_license_key  iLO Advanced license key to activate enterprise features. This is mandatory.

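
The clean-steps JSON shown above can also be built programmatically; a small sketch (the license key value is just a placeholder):

```python
import json

def activate_license_step(license_key):
    """Build the clean-steps list for the activate_license step above."""
    return [{
        'interface': 'management',
        'step': 'activate_license',
        'args': {'ilo_license_key': license_key},
    }]

# Print the JSON that would be saved to license_activation_clean_step.json.
print(json.dumps(activate_license_step('ABC12-XXXXX-XXXXX-XXXXX-YZ345'),
                 indent=2))
```
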
Initiating firmware update as manual clean step

iLO drivers can invoke secure firmware update as a manual cleaning step. Any manual cleaning step can only be initiated when a node is in the MANAGEABLE state, and once the manual cleaning is finished, the node is put back in the MANAGEABLE state. Users can follow the steps from Manual cleaning to initiate a manual cleaning operation on a node. The following illustrates executing an iLO based firmware update as a manual clean step via the ironic client:

   ironic node-set-provision-state <node-uuid> manage
   ironic --ironic-api-version latest node-set-provision-state --clean-steps /home/deray/firmware_update_clean_step.json <node-uuid> clean

An example of a manual clean step with update_firmware as the only clean step (a typical content of the firmware_update_clean_step.json file):

   [{
     "interface": "management",
     "step": "update_firmware",
     "args": {
       "firmware_update_mode": "ilo",
       "firmware_images":[
         {
           "url": "file:///firmware_images/ilo/1.5/CP024444.scexe",
           "checksum": "a94e683ea16d9ae44768f0a65942234d",
           "component": "ilo"
         },
         {
           "url": "swift://firmware_container/cpld2.3.rpm",
           "checksum": "<md5-checksum-of-this-file>",
           "component": "cpld"
         },
         {
           "url": "http://my_address:port/firmwares/bios_vLatest.scexe",
           "checksum": "<md5-checksum-of-this-file>",
           "component": "bios"
         },
         {
           "url": "https://my_secure_address_url/firmwares/chassis_vLatest.scexe",
           "checksum": "<md5-checksum-of-this-file>",
           "component": "chassis"
         },
         {
           "url": "file:///home/ubuntu/firmware_images/power_pic/pmc_v3.0.bin",
           "checksum": "<md5-checksum-of-this-file>",
           "component": "power_pic"
         }
       ]
     }
   }]

The attributes of the update_firmware clean step are as follows:

Attribute                  Description
interface                  Interface of the clean step; here management
step                       Name of the clean step; here update_firmware
args                       Keyword-argument entry (<name>: <value>) being passed to the clean step
args.firmware_update_mode  Mode (or mechanism) of out-of-band firmware update. Supported value is ilo. This is mandatory.
args.firmware_images       Ordered list of dictionaries of images to be flashed. This is mandatory.

Each firmware image block is represented by a dictionary (JSON), in the form:

   {
     "url": <url of firmware image file>,
     "checksum": <md5 checksum of firmware image file to verify the image>,
     "component": <device on which firmware image will be flashed>
   }

All the fields in the firmware image block are mandatory.

  • The different types of firmware url schemes supported are: file, http, https and swift.
Note: This feature assumes that while using file url scheme the file path is on the conductor controlling the node.
Note: The swift url scheme assumes the swift account of the service project. The service project (tenant) is a special project created in the Keystone system designed for the use of the core OpenStack services. When Ironic makes use of Swift for storage purpose, the account is generally service and the container is generally ironic and ilo drivers use a container named ironic_ilo_container for their own purpose.
Note: While using firmware files with a .rpm extension, make sure the commands rpm2cpio and cpio are present on the conductor, as they are utilized to extract the firmware image from the package.
  • The firmware components that can be updated are: ilo, cpld, power_pic, bios and chassis.
  • The firmware images will be updated in the order given by the operator. If there is any error during the processing of any of the firmware images in the list, none of the firmware updates will occur. A processing error can happen during image download, image checksum verification or image extraction. The logic is to process each of the firmware files and update them on the devices only if all the files are processed successfully. If, during the update (uploading and flashing) process, an update fails, then the remaining updates, if any, in the list are aborted. It is recommended to triage and fix the failure and re-attempt the manual clean step update_firmware for the aborted firmware_images.
The devices whose firmware has been updated successfully start functioning with the newly updated firmware.
  • For troubleshooting the complete process, check the Ironic conductor logs carefully for any firmware processing or update related errors; these help in root-causing or understanding where things were left off or failed. You can then fix or work around the problem and try again. A common cause of update failure is an HPE Secure Digital Signature check failure for the firmware image file.
  • To compute the md5 checksum of your image file, use the following command:
   $ md5sum image.rpm
   66cdb090c80b71daa21a67f06ecd3f33  image.rpm
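
The rules above for a firmware image block can be expressed as a small validation helper; the Python below also shows an md5 computation equivalent to the md5sum command (the helper itself is illustrative, not part of the driver):

```python
import hashlib
from urllib.parse import urlparse

# Supported url schemes and firmware components, as listed above.
VALID_SCHEMES = ('file', 'http', 'https', 'swift')
VALID_COMPONENTS = ('ilo', 'cpld', 'power_pic', 'bios', 'chassis')

def validate_image(image):
    """Validate one firmware image block; all three fields are mandatory."""
    for field in ('url', 'checksum', 'component'):
        if field not in image:
            raise ValueError('Missing field: %s' % field)
    if urlparse(image['url']).scheme not in VALID_SCHEMES:
        raise ValueError('Unsupported url scheme')
    if image['component'] not in VALID_COMPONENTS:
        raise ValueError('Unsupported component')

def md5_of(path):
    """Equivalent of the md5sum command shown above."""
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(65536), b''):
            h.update(chunk)
    return h.hexdigest()
```
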
RAID Support

The inband RAID functionality is supported by iLO drivers. See http://docs.openstack.org/developer/ironic/deploy/raid.html#raid for more information. The Bare Metal service updates the node with the following information after successful configuration of RAID:

  • Node properties/local_gb is set to the size of root volume.
  • Node properties/root_device is filled with wwn details of root volume. It is used by iLO drivers as root device hint during provisioning.
  • The raid level of the root volume is added as a raid_level capability to the node's capabilities parameter within the properties field. The operator can specify the raid_level capability in a nova flavor for the node to be selected for scheduling:
   nova flavor-key ironic-test set capabilities:raid_level="1+0"
   nova boot --flavor ironic-test --image test-image instance-1
Sanitize Disk Erase Support

Sanitize disk erase is an inband clean step supported by iLO drivers. It is supported when the agent ramdisk contains the Proliant Hardware Manager from proliantutils version 2.2.0 or higher. If Sanitize Erase is not supported, it falls back to the erase_devices implementation available in Ironic Python Agent. This clean step is performed as part of automated cleaning and is disabled by default.

This inband clean step requires ssacli utility starting from version 2.60-19.0 to perform the erase on physical disks. See the ssacli documentation for more information on ssacli utility.

   Note: Inband clean step for RAID configuration and Disk Erase on HPE P/E-Class SR Gen10 controllers and above will require ssacli version to be ssacli-3.10-3.0 and above.

To create an agent ramdisk with Proliant Hardware Manager, use the proliant-tools element in DIB:

   disk-image-create -o proliant-agent-ramdisk ironic-agent fedora proliant-tools

See the proliant-tools element documentation for more information on creating an agent ramdisk with the proliant-tools element in DIB.

BIOS configuration support

The ilo and ilo5 hardware types support the ilo BIOS interface. The support includes the manual clean steps "apply_configuration" and "factory_reset" to manage supported BIOS settings on the node.

   Note: Prior to the Stein release the user is required to reboot the node manually in order for the settings to take into effect. Starting with the Stein
   release, iLO drivers reboot the node after running the clean steps related to BIOS configuration. The BIOS settings are cached and the clean step 
   is marked as success only if all the requested settings are applied without any failure. If application of any of the settings fails, the clean step is
   marked as failed and the settings are not cached.
Configuration

The following are the supported BIOS settings with a brief description of each. For a detailed description, please refer to the HPE Integrated Lights-Out REST API Documentation <https://hewlettpackard.github.io/ilo-rest-api-docs>.

  • AdvancedMemProtection: Configure additional memory protection with ECC (Error Checking and Correcting). Allowed values are AdvancedEcc, OnlineSpareAdvancedEcc, MirroredAdvancedEcc.
  • AutoPowerOn: Configure the server to automatically power on when AC power is applied to the system. Allowed values are AlwaysPowerOn, AlwaysPowerOff, RestoreLastState.
  • BootMode: Select the boot mode of the system. Allowed values are Uefi, LegacyBios
  • BootOrderPolicy: Configure how the system attempts to boot devices per the Boot Order when no bootable device is found. Allowed values are RetryIndefinitely, AttemptOnce, ResetAfterFailed.
  • CollabPowerControl: Enables the operating system to request processor frequency changes even if the Power Regulator option on the server is configured for Dynamic Power Savings Mode. Allowed values are Enabled, Disabled.
  • DynamicPowerCapping: Configure when the System ROM executes power calibration during the boot process. Allowed values are Enabled, Disabled, Auto.
  • DynamicPowerResponse: Enable the System BIOS to control processor performance and power states depending on the processor workload. Allowed values are Fast, Slow.
  • IntelligentProvisioning: Enable or disable the Intelligent Provisioning functionality. Allowed values are Enabled, Disabled.
  • IntelPerfMonitoring: Exposes certain chipset devices that can be used with the Intel Performance Monitoring Toolkit. Allowed values are Enabled, Disabled.
  • IntelProcVtd: Hypervisor or operating system supporting this option can use hardware capabilities provided by Intel's Virtualization Technology for Directed I/O. Allowed values are Enabled, Disabled.
  • IntelQpiFreq: Set the QPI Link frequency to a lower speed. Allowed values are Auto, MinQpiSpeed.
  • IntelTxt: Option to modify Intel TXT support. Allowed values are Enabled, Disabled.
  • PowerProfile: Set the power profile to be used. Allowed values are BalancedPowerPerf, MinPower, MaxPerf, Custom.
  • PowerRegulator: Determines how to regulate the power consumption. Allowed values are DynamicPowerSavings, StaticLowPower, StaticHighPerf, OsControl.
  • ProcAes: Enable or disable the Advanced Encryption Standard Instruction Set (AES-NI) in the processor. Allowed values are Enabled, Disabled.
  • ProcCoreDisable: Disable processor cores using Intel's Core Multi-Processing (CMP) Technology. Allowed values are Integers ranging from 0 to 24.
  • ProcHyperthreading: Enable or disable Intel Hyperthreading. Allowed values are Enabled, Disabled.
  • ProcNoExecute: Protect your system against malicious code and viruses. Allowed values are Enabled, Disabled.
  • ProcTurbo: Enables the processor to transition to a higher frequency than the processor's rated speed using Turbo Boost Technology if the processor has available power and is within temperature specifications. Allowed values are Enabled, Disabled.
  • ProcVirtualization: Enables or Disables a hypervisor or operating system supporting this option to use hardware capabilities provided by Intel's Virtualization Technology. Allowed values are Enabled, Disabled.
  • SecureBootStatus: The current state of Secure Boot configuration. Allowed values are Enabled, Disabled.
   Note: This setting is read-only and can't be modified with apply_configuration clean step.
 
  • Sriov: If enabled, SR-IOV support enables a hypervisor to create virtual instances of a PCI-express device, potentially increasing performance. If enabled, the BIOS allocates additional resources to PCI-express devices. Allowed values are Enabled, Disabled.
  • ThermalConfig: Select the fan cooling solution for the system. Allowed values are OptimalCooling, IncreasedCooling, MaxCooling.
  • ThermalShutdown: Control the reaction of the system to caution level thermal events. Allowed values are Enabled, Disabled.
  • TpmState: Current TPM device state. Allowed values are NotPresent, PresentDisabled, PresentEnabled.
   Note: This setting is read-only and can't be modified with apply_configuration clean step.
  • TpmType: Current TPM device type. Allowed values are NoTpm, Tpm12, Tpm20, Tm10.
   Note: This setting is read-only and can't be modified with apply_configuration clean step.
  • UefiOptimizedBoot: Enables or Disables the System BIOS boot using native UEFI graphics drivers. Allowed values are Enabled, Disabled.
  • WorkloadProfile: Change the Workload Profile to accommodate your desired workload. Allowed values are GeneralPowerEfficientCompute, GeneralPeakFrequencyCompute, GeneralThroughputCompute, Virtualization-PowerEfficient, Virtualization-MaxPerformance, LowLatency, MissionCritical, TransactionalApplicationProcessing, HighPerformanceCompute, DecisionSupport, GraphicProcessing, I/OThroughput, Custom.
 Note: This setting is only applicable to ProLiant Gen10 servers with iLO 5 management systems.
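
As an illustration, a pre-flight check of a settings dictionary against a few of the allowed values above might look as follows (only a handful of settings are included; the helper is not part of Ironic):

```python
# Subset of the settings listed above; allowed values copied from the text.
ALLOWED = {
    'BootMode': ('Uefi', 'LegacyBios'),
    'BootOrderPolicy': ('RetryIndefinitely', 'AttemptOnce',
                        'ResetAfterFailed'),
    'ProcHyperthreading': ('Enabled', 'Disabled'),
    'PowerProfile': ('BalancedPowerPerf', 'MinPower', 'MaxPerf', 'Custom'),
}
# Settings noted above as read-only; apply_configuration cannot change them.
READ_ONLY = ('SecureBootStatus', 'TpmState', 'TpmType')

def check_settings(settings):
    """Reject read-only settings and values outside the allowed lists."""
    for name, value in settings.items():
        if name in READ_ONLY:
            raise ValueError('%s is read-only' % name)
        if name in ALLOWED and value not in ALLOWED[name]:
            raise ValueError('%s: invalid value %s' % (name, value))
    return True
```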

Rescue mode support

The hardware type ilo supports rescue functionality. Rescue operation can be used to boot nodes into a rescue ramdisk so that the rescue user can access the node. Please refer to Rescue Mode for more details.

Inject NMI support

The management interface ilo supports injection of a non-maskable interrupt (NMI) to a bare metal node. The following command can be used to inject an NMI on a server:

 openstack baremetal node inject nmi <node>

Following command can be used to inject NMI via Compute service:

 openstack server dump create <server>

Soft power operation support

The power interface ilo supports soft power off and soft reboot operations on a bare metal. Following commands can be used to perform soft power operations on a server:

 openstack baremetal node reboot --soft [--power-timeout <power-timeout>] <node>
   Note: The configuration [conductor]soft_power_off_timeout is used as a default timeout value when no timeout is provided while invoking hard or soft
   power operations.
   Note: Server POST state is used to track the power status of HPE ProLiant Gen9 servers and beyond.

Out of Band RAID Support

With Gen10 and later HPE ProLiant servers, the ilo5 hardware type supports firmware based RAID configuration as a clean step. This feature requires the node to be configured with the ilo5 hardware type and its raid interface set to ilo5. See RAID Configuration for more information.

After a successful RAID configuration, the Bare Metal service will update the node with the following information:

  • Node properties/local_gb is set to the size of root volume.
  • Node properties/root_device is filled with wwn details of root volume. It is used by iLO driver as root device hint during provisioning.

Later, the raid level of the root volume can be encoded in a resource class name, e.g. baremetal-with-RAID10 (RAID10 for raid level 10). The flavor then needs to be updated to request that resource class so the server is created using the selected node:

 openstack baremetal node set test_node --resource-class baremetal-with-RAID10
 openstack flavor set --property resources:CUSTOM_BAREMETAL_WITH_RAID10=1 test-flavor
 openstack server create --flavor test-flavor --image test-image instance-1
   Note: Supported raid levels for ilo5 hardware type are: 0, 1, 5, 6, 10, 50, 60
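
The CUSTOM_BAREMETAL_WITH_RAID10 property name follows the Placement custom resource class naming rule: the resource class name is uppercased, non-alphanumeric characters become underscores, and a CUSTOM_ prefix is added. A sketch:

```python
import re

def custom_resource_name(resource_class):
    """Map an Ironic resource class to its Placement custom resource name."""
    return 'CUSTOM_' + re.sub(r'[^A-Za-z0-9]', '_', resource_class).upper()

print(custom_resource_name('baremetal-with-RAID10'))
# CUSTOM_BAREMETAL_WITH_RAID10
```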

IPv6 support

With the IPv6 support in proliantutils>=2.8.0, nodes can be enrolled into the baremetal service using iLO IPv6 addresses.

 openstack baremetal node create --driver ilo  --deploy-interface direct --driver-info ilo_address=2001:0db8:85a3:0000:0000:8a2e:0370:7334 \
     --driver-info ilo_username=test-user --driver-info ilo_password=test-password --driver-info ilo_deploy_iso=test-iso --driver-info ilo_rescue_iso=test-iso
   Note: No configuration changes (in e.g. ironic.conf) are required in order to support IPv6.
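
The ilo_address value can be checked for a valid IPv4 or IPv6 form with the standard library before enrolling (illustrative):

```python
import ipaddress

def is_valid_ilo_address(addr):
    """Accept either an IPv4 or IPv6 iLO address."""
    try:
        ipaddress.ip_address(addr)
        return True
    except ValueError:
        return False

print(is_valid_ilo_address('2001:0db8:85a3:0000:0000:8a2e:0370:7334'))  # True
```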

Enabling ProLiant Gen10 systems in Ironic

HPE Gen10 Servers render a new compute experience. Gen10 Servers are key to infrastructure modernization, accelerating business insights across a hybrid world of traditional IT, public and private cloud.

These servers conform to Redfish API. The proliantutils library uses Redfish protocol to communicate to this hardware. Minimum version of proliantutils library required for communicating to Gen10 servers is 2.4.0. All iLO drivers and their features are supported on this hardware. Since Gen10 systems are Redfish compliant, the reference hardware type redfish works with Gen10 systems as well but it will lack the iLO specific features.

   Note: The S-Class software RAID is not supported by Linux in Gen10. Therefore, while provisioning a Gen10 node, make sure the controller is set to AHCI mode.

For a list of known issues in Gen10 systems, refer to the Gen10 known issues section.

Instance Images

All iLO drivers support deployment of whole disk images. The whole disk images could be one of following types:

1. BIOS only image. An image having only an MBR partition; it will boot only in BIOS boot mode.

2. UEFI only image. An image having a GPT partition; it will boot only in UEFI boot mode.

3. Hybrid image. An image that has both GPT and MBR partitions and will boot in both BIOS and UEFI boot modes.

4. Signed UEFI image. A UEFI image whose bootloader and kernel are signed, which can be used in a UEFI secure boot environment.

A few Linux distros provide whole disk images. Examples are:

1. Ubuntu - https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-uefi1.img

2. CoreOS - http://stable.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2

3. OpenSuse - https://susestudio.com (It lets you build the image through the browser)

The following table summarizes the whole disk image capabilities:

Image Type   Boot Mode      Config Drive    UEFI Secure Boot
BIOS only    BIOS           Yes             NA
UEFI only    UEFI           Yes             No
UEFI Signed  UEFI           Yes             Yes
Hybrid       BIOS and UEFI  See note below  Yes, if signed

Note: The Config Drive feature of Ironic may not work on all whole disk images, especially hybrid images, where partition information may get lost when the config drive partition is created, leading to failure during provisioning or to an instance that does not boot.

Not all Linux distributions support hybrid images (a single image that can boot in both BIOS and UEFI boot modes). If the image can be booted only in a specific boot mode, then the user needs to add the 'boot_mode' capability to the nova flavor's extra_spec. From nova, a specific boot mode may be requested by using the ComputeCapabilitiesFilter. For example:

 nova flavor-key ironic-test-3 set capabilities:boot_mode="uefi"
 nova boot --flavor ironic-test-3 --image test-image instance-1

For the pxe_ilo driver, to deploy a whole disk image in UEFI boot mode, the user needs to add the boot_option="local" capability to the nova flavor's extra_spec. For example:

 nova flavor-key ironic-test-3 set capabilities:boot_mode="uefi" capabilities:boot_option="local" 
 nova boot --flavor ironic-test-3 --image test-image instance-1

Known Issues

1. BIOS System ROM version 1.20: Deploy on Gen9 servers fails because iLO does not honour the one-time boot device setting and tries to boot from the persistent boot device. This is caused by a defect in the BIOS System ROM. The fix is available in firmware version 1.32_03-05-2015 (13 May 2015) onward.

2. Smart Array SAS Driver v8.03: The Fedora based IPA deploy ramdisk ISO fails to boot with the error "error: can't allocate initrd" if a P220 based smart array controller is attached to the ProLiant server. This is a Fishman driver issue in the firmware for P220 based smart arrays. The defect has been filed against the Fishman firmware; a driver patch would be made available shortly.

3. iLO version 2.20: Deploy using any of the iLO drivers can fail on Gen9 servers with the error "Invalid Device Choice" in the conductor logs while setting the persistent boot device. This happens only when Gen9 servers run iLO firmware version 2.20: if RIBCL is used to update persistent boot devices in UEFI boot mode on Gen9 servers, it fails with the error message above. The issue can be resolved by one of the following methods:

A. Downgrade the iLO firmware version to 2.10, or upgrade it to a version higher than 2.20.

B. Upgrade the python package 'proliantutils' to version 2.1.3 or greater. This issue has been fixed in 'proliantutils' by enhancing it to use the HP REST interface to update persistent boot devices on Gen9 servers.

  $ sudo pip install "proliantutils>=2.1.3"

4. NA: The OpenStack documentation (http://docs.openstack.org/developer/ironic/) does not document Enabling HTTPS in Swift. Refer to the Enabling HTTPS in Swift section of the iLO driver wiki (https://wiki.openstack.org/wiki/Ironic/Drivers/iLODrivers/Kilo#Enabling_HTTPS_in_Swift) for information and steps to enable it.

5. NA: When SSL is enabled in the OpenStack environment and the images to be attached to iLO virtual media are 'https' based, iLO is unable to read or boot such images. The iLO firmware version may not support the ciphers enabled on the SSL server hosting the images. Please refer to the iLO firmware documentation (http://h10032.www1.hp.com/ctg/Manual/c03334051) to ensure the ciphers in use are supported, and to the 'Release Notes' of the iLO firmware version being used for more details.
5 NA When SSL is enabled in OpenStack environment and images to be attached to iLO virtual media are based on 'https', iLO is unable to read/boot using such images. iLO firmware version may not support the ciphers being enabled at the SSL server hosting the images. Please refer to iLO firmware documentation to ensure that the ciphers being used are supported http://h10032.www1.hp.com/ctg/Manual/c03334051. It is also recommended to refer to 'Release Notes' of iLO firmware version being used for more details.
6 NA When read-only settings or invalid values for the allowed settings are provided while running manual clean step apply_configuration, iLO simply ignores and the clean step is not marked as failed. For more information, refer and track the issue here <https://bugs.launchpad.net/silva/+bug/1783944>_ .
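The proliantutils fix for issue 3 above requires at least version 2.1.3. The snippet below is one way to verify what is installed on the conductor node; it is a sketch that assumes `pip` and GNU `sort -V` are available, and the version number is taken from the table above:

```shell
# Check whether the installed proliantutils meets the 2.1.3 minimum
# needed for the Gen9 persistent-boot-device fix (issue 3 above).
required="2.1.3"
installed="$(pip show proliantutils 2>/dev/null | awk '/^Version:/ {print $2}')"
# sort -V sorts version strings numerically; if the smallest of the two
# is the required version, the installed one is new enough.
lowest="$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)"
if [ -n "$installed" ] && [ "$lowest" = "$required" ]; then
  echo "proliantutils $installed is new enough"
else
  echo "upgrade needed: sudo pip install 'proliantutils>=2.1.3'"
fi
```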
=== Gen10 Known Issues and workarounds (if available) ===

{| class="wikitable"
! Sr No !! Components (with Firmware Version) !! Known Issues !! Resolutions
|-
| 1 || --- || --- || ---
|-
| 2 || --- || --- || ---
|}