https://wiki.openstack.org/w/api.php?action=feedcontributions&user=Zohar.cloud&feedformat=atom
OpenStack - User contributions [en]
2024-03-29T09:06:50Z
User contributions
MediaWiki 1.28.2
https://wiki.openstack.org/w/index.php?title=ThirdPartySystems/KIOXIA_CI&diff=180213
ThirdPartySystems/KIOXIA CI
2021-12-15T09:20:11Z
<p>Zohar.cloud: </p>
<hr />
<div>{{ThirdPartySystemInfo|name=KIOXIA CI<br />
|account=charlespiercey<br />
|contact=chuck.piercey@kioxia.com, Amarjit.Singh@kioxia.com, Sachin.More@kioxia.com<br />
|intent=Test KIOXIA Kumoscale NVMeOF volume driver<br />
|structure=python server with shell scripts running devstack and tempest<br />
|method=trigger on patchsets and comments to stack with ref branch and config to use storage backend then run tempest volume api tests and upload results<br />
|programs=Cinder<br />
|status=non-voting}}</div>
Zohar.cloud
https://wiki.openstack.org/w/index.php?title=ThirdPartySystems/KIOXIA_CI&diff=177403
ThirdPartySystems/KIOXIA CI
2021-01-21T12:03:49Z
<p>Zohar.cloud: </p>
<hr />
<div>{{ThirdPartySystemInfo|name=KIOXIA CI<br />
|account=charlespiercey, zohar<br />
|contact=chuck.piercey@kioxia.com, zohar.cloud@gmail.com<br />
|intent=Test KIOXIA Kumoscale NVMeOF volume driver<br />
|structure=python server with shell scripts running devstack and tempest<br />
|method=trigger on patchsets and comments to stack with ref branch and config to use storage backend then run tempest volume api tests and upload results<br />
|programs=Cinder<br />
|status=non-voting}}</div>
Zohar.cloud
https://wiki.openstack.org/w/index.php?title=ThirdPartySystems/KIOXIA_CI&diff=177402
ThirdPartySystems/KIOXIA CI
2021-01-21T11:44:28Z
<p>Zohar.cloud: </p>
<hr />
<div>{{ThirdPartySystemInfo|name=KIOXIA CI<br />
|account=charlespiercey, zohar<br />
|contact=chuck.piercey@kioxia.com, zohar.cloud@gmail.com<br />
|intent=Test KIOXIA Kumoscale NVMeOF volume driver<br />
|structure=python server with bash scripts running devstack and tempest<br />
|method=trigger on patchsets and comments to stack with ref branch and config to use storage backend then run tempest volume api tests and upload results<br />
|programs=Cinder<br />
|status=non-voting}}</div>
Zohar.cloud
https://wiki.openstack.org/w/index.php?title=ThirdPartySystems/KIOXIA_CI&diff=177401
ThirdPartySystems/KIOXIA CI
2021-01-21T11:43:59Z
<p>Zohar.cloud: </p>
<hr />
<div>{{ThirdPartySystemInfo|name=KIOXIA CI<br />
|account=charlespiercey, zohar<br />
|contact=chuck.piercey@kioxia.com, zohar.cloud@gmail.com<br />
|intent=Test KIOXIA Kumoscale NVMeOF volume driver<br />
|structure=python server with bash scripts running devstack and tempest<br />
|method=trigger on patchsets and comments to stack with ref branch and config to use backend then run tempest volume api tests and upload results<br />
|programs=Cinder<br />
|status=non-voting}}</div>
Zohar.cloud
https://wiki.openstack.org/w/index.php?title=ThirdPartySystems/KIOXIA_CI&diff=177400
ThirdPartySystems/KIOXIA CI
2021-01-21T11:40:28Z
<p>Zohar.cloud: </p>
<hr />
<div>{{ThirdPartySystemInfo|name=KIOXIA CI<br />
|account=charlespiercey, zohar<br />
|contact=chuck.piercey@kioxia.com, zohar.cloud@gmail.com<br />
|intent=Test KIOXIA Kumoscale NVMeOF volume driver<br />
|structure=python server with bash scripts running devstack and tempest<br />
|method=trigger on patchsets and comments to stack with ref branch and config to use backend and run tempest and upload results<br />
|programs=Cinder<br />
|status=non-voting}}</div>
Zohar.cloud
https://wiki.openstack.org/w/index.php?title=ThirdPartySystems/KIOXIA_CI&diff=177399
ThirdPartySystems/KIOXIA CI
2021-01-21T11:34:55Z
<p>Zohar.cloud: </p>
<hr />
<div>{{ThirdPartySystemInfo|name=KIOXIA CI<br />
|account=charlespiercey, zohar<br />
|contact=chuck.piercey@kioxia.com, zohar.cloud@gmail.com<br />
|intent=Test KIOXIA Kumoscale NVMeOF volume driver<br />
|structure=python server with bash scripts running devstack and tempest<br />
|method=trigger on patchsets and comments to stack with ref branch and config to use backend and run tempest and upload results<br />
|programs=touched by this system<br />
|status=non-voting}}</div>
Zohar.cloud
https://wiki.openstack.org/w/index.php?title=ThirdPartySystems/KIOXIA_CI&diff=177398
ThirdPartySystems/KIOXIA CI
2021-01-21T11:34:33Z
<p>Zohar.cloud: </p>
<hr />
<div>{{ThirdPartySystemInfo|name=Name of 3rd party system<br />
|account=charlespiercey, zohar<br />
|contact=chuck.piercey@kioxia.com, zohar.cloud@gmail.com<br />
|intent=Test KIOXIA Kumoscale NVMeOF volume driver<br />
|structure=python server with bash scripts running devstack and tempest<br />
|method=trigger on patchsets and comments to stack with ref branch and config to use backend and run tempest and upload results<br />
|programs=touched by this system<br />
|status=non-voting}}</div>
Zohar.cloud
https://wiki.openstack.org/w/index.php?title=ThirdPartySystems/KIOXIA_CI&diff=177397
ThirdPartySystems/KIOXIA CI
2021-01-21T11:33:31Z
<p>Zohar.cloud: </p>
<hr />
<div>{{ThirdPartySystemInfo|name=Name of 3rd party system<br />
|account=charlespiercey, zohar<br />
|contact=zohar.cloud@gmail.com<br />
|intent=Test KIOXIA Kumoscale NVMeOF volume driver<br />
|structure=python server with bash scripts running devstack and tempest<br />
|method=trigger on patchsets and comments to stack with ref branch and config to use backend and run tempest and upload results<br />
|programs=touched by this system<br />
|status=non-voting}}</div>
Zohar.cloud
https://wiki.openstack.org/w/index.php?title=ThirdPartySystems/KIOXIA_CI&diff=177396
ThirdPartySystems/KIOXIA CI
2021-01-21T11:33:04Z
<p>Zohar.cloud: </p>
<hr />
<div>{{ThirdPartySystemInfo|name=Name of 3rd party system<br />
|account=charlespiercey<br />
|contact=zohar.cloud@gmail.com<br />
|intent=Test KIOXIA Kumoscale NVMeOF volume driver<br />
|structure=python server with bash scripts running devstack and tempest<br />
|method=trigger on patchsets and comments to stack with ref branch and config to use backend and run tempest and upload results<br />
|programs=touched by this system<br />
|status=non-voting}}</div>
Zohar.cloud
https://wiki.openstack.org/w/index.php?title=ThirdPartySystems/KIOXIA_CI&diff=177395
ThirdPartySystems/KIOXIA CI
2021-01-21T11:27:18Z
<p>Zohar.cloud: Created page with "{{subst:ThirdPartySystemInfoSubst}}"</p>
<hr />
<div>{{ThirdPartySystemInfo|name=Name of 3rd party system<br />
|account=gerrit account<br />
|contact=contact info for people taking responsibility for this system<br />
|intent=of this system, why do you have it<br />
|structure=of this system, what tools are you using<br />
|method=what you are actually doing<br />
|programs=touched by this system<br />
|status=of this system, in production, testing, non-voting, voting, disabled}}</div>
Zohar.cloud
https://wiki.openstack.org/w/index.php?title=ThirdPartySystems&diff=177394
ThirdPartySystems
2021-01-21T11:26:36Z
<p>Zohar.cloud: /* Third Party CI Systems */</p>
<hr />
<div>== Third Party CI Systems ==<br />
<br />
{| border="1" cellpadding="2" cellspacing="0" class="wikitable"<br />
|+<br />
!colspan="3"|3rd Party CI Systems<br />
|-<br />
!Name<br />
!Link<br />
!Comments<br />
|-<br />
{{ThirdPartySystemTableEntry|6WIND Networking CI}}|<br />
{{ThirdPartySystemTableEntry|A10 Networks CI |}}|<br />
{{ThirdPartySystemTableEntry|Arista-CI}}|<br />
{{ThirdPartySystemTableEntry|ATT Airship-CI}}|<br />
{{ThirdPartySystemTableEntry|ATT Airship2.0-CI}}|<br />
{{ThirdPartySystemTableEntryDown|Blockbridge EPS CI}}|<br />
{{ThirdPartySystemTableEntry|Brocade OpenStack CI}}|<br />
{{ThirdPartySystemTableEntry|Big Switch CI}}|<br />
{{ThirdPartySystemTableEntry|Canonical Charm CI}}|<br />
{{ThirdPartySystemTableEntry|Cisco Ironic CI}}|<br />
{{ThirdPartySystemTableEntry|Cisco ml2 CI}}|<br />
{{ThirdPartySystemTableEntry|Cisco N1kV CI}}|<br />
{{ThirdPartySystemTableEntry|Cisco UCSm CI}}|<br />
{{ThirdPartySystemTableEntry|cisco_pnr_ci}}|<br />
{{ThirdPartySystemTableEntry|Cisco Tail-f CI}}|<br />
{{ThirdPartySystemTableEntry|Cisco ZM CI}}|<br />
{{ThirdPartySystemTableEntry|Citrix NetScaler CI}}|<br />
{{ThirdPartySystemTableEntry|Cloudbase Cinder SMB3 CI}}|<br />
{{ThirdPartySystemTableEntryDown|Cloudbase Compute Hyper-V CI}}|<br />
{{ThirdPartySystemTableEntryDown|Cloudbase Manila SMB3 CI}}|<br />
{{ThirdPartySystemTableEntry|Cloudbase Neutron Hyper-V CI}}|<br />
{{ThirdPartySystemTableEntry|Cloudbase Nova Hyper-V CI}}|<br />
{{ThirdPartySystemTableEntry|CloudByte CI}}|<br />
{{ThirdPartySystemTableEntryDown|CloudFounders OpenvStorage CI}}|<br />
{{ThirdPartySystemTableEntry|Coho Data CI}}|<br />
{{ThirdPartySystemTableEntry|Coraid CI}}|<br />
{{ThirdPartySystemTableEntry|DataCore CI}}|<br />
{{ThirdPartySystemTableEntry|datera-ci}}|<br />
{{ThirdPartySystemTableEntry|DB Datasets CI}}|<br />
{{ThirdPartySystemTableEntryDown|Designate CI}}|<br />
{{ThirdPartySystemTableEntry|DellEMC Ironic CI}}|<br />
{{ThirdPartySystemTableEntry|DellEMC SC CI}}|<br />
{{ThirdPartySystemTableEntry|DellEMC PowerMAX CI}}|<br />
{{ThirdPartySystemTableEntry|DellEMC VNX CI}}|<br />
{{ThirdPartySystemTableEntry|DellEMC XtremIO CI}}|<br />
{{ThirdPartySystemTableEntry|DellEMC PowerFlex CI}}|<br />
{{ThirdPartySystemTableEntry|DellEMC Unity CI}}|<br />
{{ThirdPartySystemTableEntry|DellEMC PowerStore CI}}|<br />
{{ThirdPartySystemTableEntry|DellEMC PowerVault ME CI}}|<br />
{{ThirdPartySystemTableEntry|FalconStor CI}}|<br />
{{ThirdPartySystemTableEntry|Freescale CI}}|<br />
{{ThirdPartySystemTableEntry|Fuel CI}}|<br />
{{ThirdPartySystemTableEntry|Fuel Packaging CI}}|<br />
{{ThirdPartySystemTableEntry|Fujitsu C-Fabric CI}}|<br />
{{ThirdPartySystemTableEntry|Fujitsu ETERNUS CI}}|<br />
{{ThirdPartySystemTableEntry|Fujitsu iRMC CI}}|<br />
{{ThirdPartySystemTableEntry|Fujitsu ISM CI}}|<br />
{{ThirdPartySystemTableEntry|Hedvig CI}}|<br />
{{ThirdPartySystemTableEntry|HGST Solutions CI}}|<br />
{{ThirdPartySystemTableEntry|Hitachi HBSD CI}}|<br />
{{ThirdPartySystemTableEntry|Hitachi HBSD2 CI}}|<br />
{{ThirdPartySystemTableEntry|Hitachi HNAS CI}}|<br />
{{ThirdPartySystemTableEntry|Hitachi Manila HNAS CI}}|<br />
{{ThirdPartySystemTableEntry|Hitachi Manila HSP CI}}|<br />
{{ThirdPartySystemTableEntry|HP Octavia Sonar CI}}|<br />
{{ThirdPartySystemTableEntry|HPE Storage CI}}|<br />
{{ThirdPartySystemTableEntry|HP Networking CI}}|<br />
{{ThirdPartySystemTableEntry|HPE Proliant iLO drivers CI}}|<br />
{{ThirdPartySystemTableEntry|HPMSA CI}}|<br />
{{ThirdPartySystemTableEntry|Huawei FusionCompute CI}}|<br />
{{ThirdPartySystemTableEntry|Huawei FusionStorage CI}}|<br />
{{ThirdPartySystemTableEntry|Huawei Ironic CI}}|<br />
{{ThirdPartySystemTableEntry|Huawei Manila CI}}|<br />
{{ThirdPartySystemTableEntry|Huawei ML2 CI}}|<br />
{{ThirdPartySystemTableEntry|Huawei volume CI}}|<br />
{{ThirdPartySystemTableEntry|IBMPowerKVMCI}}|<br />
{{ThirdPartySystemTableEntryDown|IBM FlashSystem CI}}| See [[ThirdPartySystems/IBM Storage CI|IBM Storage CI]]<br />
{{ThirdPartySystemTableEntry|IBM GPFS CI}}|<br />
{{ThirdPartySystemTableEntry|IBM PowerVM CI}}|<br />
{{ThirdPartySystemTableEntry|IBM NAS CI}}|<br />
{{ThirdPartySystemTableEntry|IBM SDN-VE CI}}|<br />
{{ThirdPartySystemTableEntry|IBM Storage CI}}|<br />
{{ThirdPartySystemTableEntryDown|IBM STORWIZE CI}}| See [[ThirdPartySystems/IBM Storage CI|IBM Storage CI]]<br />
{{ThirdPartySystemTableEntryDown|IBM XIV-DS8K CI}}| See [[ThirdPartySystems/IBM Storage CI|IBM Storage CI]]<br />
{{ThirdPartySystemTableEntry|IBM xCAT CI}}|<br />
{{ThirdPartySystemTableEntryDown|IBM zKVM CI}}|<br />
{{ThirdPartySystemTableEntryDown|IBM z/VM CI}}|<br />
{{ThirdPartySystemTableEntry|INFINIDAT CI}}|<br />
{{ThirdPartySystemTableEntry|Infortrend Storage CI}}|<br />
{{ThirdPartySystemTableEntry|Inspur CI}}|<br />
<br />
<br />
<br />
<br />
{{ThirdPartySystemTableEntry|Intel-PCI-CI}}|Jenkins<br />
{{ThirdPartySystemTableEntry|Intel-SRIOV-CI}}|Jenkins rewrite to Zuulv3<br />
{{ThirdPartySystemTableEntry|Intel-PMEM-CI}}|ZuulV3<br />
{{ThirdPartySystemTableEntryDown |Intel NUMA }}|Jenkins rewrite to Zuulv3 <br />
{{ThirdPartySystemTableEntryDown|Intel-IPMI}}|Jenkins rewrite to Zuulv3 <br />
{{ThirdPartySystemTableEntryDown|Intel-SRIOV-CI-TaaS}}|Zuulv3 -moving to new server<br />
{{ThirdPartySystemTableEntryDown|Intel-OVS-DPDK}}|Zuulv3 - New Job WIP<br />
<br />
{{ThirdPartySystemTableEntryDown|ITRI DISCO CI}}|<br />
{{ThirdPartySystemTableEntryDown|ITRI Peregrine CI}}|<br />
{{ThirdPartySystemTableEntry|Kaminario K2 CI}}|<br />
{{ThirdPartySystemTableEntry|KIOXIA CI}}|<br />
{{ThirdPartySystemTableEntry|KEMPtechnologies CI}}|<br />
{{ThirdPartySystemTableEntryDown|Linaro CI}}|<br />
{{ThirdPartySystemTableEntry|Lenovo LXCA CI}}|<br />
{{ThirdPartySystemTableEntry|Lenovo Storage CI}}|<br />
{{ThirdPartySystemTableEntry|Limestone Networks CI}}|<br />
{{ThirdPartySystemTableEntry|LINBIT LINSTOR CI}}|WIP<br />
{{ThirdPartySystemTableEntry|MacroSAN Volume CI}}|<br />
{{ThirdPartySystemTableEntry|Maxta CI}}|<br />
{{ThirdPartySystemTableEntry|Mellanox CI}}|<br />
{{ThirdPartySystemTableEntry|Metaplugin CI}}|<br />
{{ThirdPartySystemTableEntryDown|Midokura CI}}|<br />
{{ThirdPartySystemTableEntry|MapR-FS Manila CI}}|<br />
{{ThirdPartySystemTableEntry|Mirantis Rally CI}}|<br />
{{ThirdPartySystemTableEntry|murano-ci}}|<br />
{{ThirdPartySystemTableEntryDown|NEC CI}}|<br />
{{ThirdPartySystemTableEntry|NEC Cinder CI}}|<br />
{{ThirdPartySystemTableEntry|NetApp CI}}|<br />
{{ThirdPartySystemTableEntry|NetApp SolidFire CI}}|<br />
{{ThirdPartySystemTableEntry|Networking-spp Integration CI}}|<br />
{{ThirdPartySystemTableEntry|Nexenta CI}}|<br />
{{ThirdPartySystemTableEntry|Nexenta Manila CI}}|<br />
{{ThirdPartySystemTableEntry|Nimble Storage CI}}|<br />
{{ThirdPartySystemTableEntry|Nokia Airframe CI}}|<br />
{{ThirdPartySystemTableEntry|NTT SystemFault MasakariIntegration CI}}|<br />
{{ThirdPartySystemTableEntry|Nuage CI}}|<br />
{{ThirdPartySystemTableEntryDown|OpenDaylight CI}}|<br />
{{ThirdPartySystemTableEntry|Open-E JovianDSS CI}}|<br />
{{ThirdPartySystemTableEntryDown|OPNFV CI}}|<br />
{{ThirdPartySystemTableEntry|Oracle ZFSSA CI}}|<br />
{{ThirdPartySystemTableEntryDown|PLUMgrid CI}}|<br />
{{ThirdPartySystemTableEntry|Pengyun Cinder CI}}|<br />
{{ThirdPartySystemTableEntry|ProphetStor CI}}|<br />
{{ThirdPartySystemTableEntry|PhazrIO CI}}|<br />
{{ThirdPartySystemTableEntry|Pure Storage CI}}|<br />
{{ThirdPartySystemTableEntry|QNAP CI}}|<br />
{{ThirdPartySystemTableEntry|Quantastor CI}}|<br />
{{ThirdPartySystemTableEntry|Quobyte CI}}|<br />
{{ThirdPartySystemTableEntry|Rackspace CloudDNS CI}}|<br />
{{ThirdPartySystemTableEntry|Rackspace GolangSwift CI}}|<br />
{{ThirdPartySystemTableEntry|Radware CI}}|<br />
{{ThirdPartySystemTableEntry|RDO Third Party CI}}|<br />
{{ThirdPartySystemTableEntry|RedHat CI}}|<br />
{{ThirdPartySystemTableEntry|RedHat GlusterFS CI}}|<br />
{{ThirdPartySystemTableEntryDown|RedHat NFVPE CI}}|<br />
{{ThirdPartySystemTableEntry|RedHat RDO CI}}|<br />
{{ThirdPartySystemTableEntry|Reduxio HX550 CI}}|<br />
{{ThirdPartySystemTableEntry|SandStone Storage CI}}|<br />
{{ThirdPartySystemTableEntry|Scality CI}}|<br />
{{ThirdPartySystemTableEntryDown|Seagate CI}}|WIP<br />
{{ThirdPartySystemTableEntry|Snabb-NFV CI}}|<br />
{{ThirdPartySystemTableEntry|Software Factory CI}}|ZuulV3|<br />
{{ThirdPartySystemTableEntry|SolidFire CI}}|<br />
{{ThirdPartySystemTableEntry|StorPool distributed storage CI}}|<br />
{{ThirdPartySystemTableEntry|SUSE Cloud CI}}|<br />
{{ThirdPartySystemTableEntry|SwiftStack Cluster CI}}|<br />
{{ThirdPartySystemTableEntry|Synology DSM CI}}|<br />
{{ThirdPartySystemTableEntry|Tail-f NCS CI}}|<br />
{{ThirdPartySystemTableEntry|Tegile Storage CI}}|<br />
{{ThirdPartySystemTableEntry|Tintri CI}}|<br />
{{ThirdPartySystemTableEntry|TOYOU ACS5000 CI}}|<br />
{{ThirdPartySystemTableEntry|UFCG OneView CI}}|<br />
{{ThirdPartySystemTableEntry|Vanilla Stack CI}}|<br />
{{ThirdPartySystemTableEntry|vArmour CI}}|<br />
{{ThirdPartySystemTableEntry|Vedams SCST CI}}|<br />
{{ThirdPartySystemTableEntry|Veritas HyperScale CI}}|<br />
{{ThirdPartySystemTableEntry|Veritas Access CI}}|<br />
{{ThirdPartySystemTableEntry|Violin Memory CI}}|<br />
{{ThirdPartySystemTableEntry|Virtuozzo CI}}|<br />
{{ThirdPartySystemTableEntryDown|Virtuozzo Storage CI}}|<br />
{{ThirdPartySystemTableEntry|VMware CI}}|<br />
{{ThirdPartySystemTableEntry|Wherenow.org CI}}|<br />
{{ThirdPartySystemTableEntry|XenProject CI}}|<br />
{{ThirdPartySystemTableEntry|XenServer CI}}|<br />
{{ThirdPartySystemTableEntry|X-IO technologies CI}}|<br />
{{ThirdPartySystemTableEntryDown|XP Storage CI}}|<br />
{{ThirdPartySystemTableEntry|ZadaraStorage VPSA CI}}|<br />
{{ThirdPartySystemTableEntry|ZTE cinder2 CI}}|<br />
{{ThirdPartySystemTableEntry|Example}}|<br />
|-<br />
|}<br />
<br />
<br />
Instructions on how to add a new system to the above table:<br />
* Add an '''alphabetical''' entry in the above table: <code><nowiki>{{ThirdPartySystemTableEntry|Example}}|Comment</nowiki></code> where Example is the name of your system and Comment (optional) is a free text comment about your system <br />
* Save the page and click on the link to the new page<br />
* Select the "edit the page" option and paste <code><nowiki>{{subst:ThirdPartySystemInfoSubst}}</nowiki></code> into your new page and then save it. This will expand to a table. Edit the table replacing the placeholder values with the correct values for your system<br />
* If your system is going down or having problems, change the entry to <code><nowiki>{{ThirdPartySystemTableEntryDown|<your ci system name>}}</nowiki></code><br />
----<br />
<br />
Do you have a Gerrit CI account created for you by the Infrastructure team and you want to update it? [https://wiki.openstack.org/wiki/OldtoNewGerritCIAccount Read how here].<br />
<br />
[[Category:ThirdPartySystems]]</div>
Zohar.cloud
https://wiki.openstack.org/w/index.php?title=Cinder/Specs/NVMEMDHealingAgent&diff=176979
Cinder/Specs/NVMEMDHealingAgent
2020-11-29T17:17:19Z
<p>Zohar.cloud: Created page with "=== OpenStack Healing Agent === ===== init ===== host_uuid host_nqn <other params such as version> Get host uuid and nqn, schedule main method to run every X second..."</p>
<hr />
<div>=== OpenStack Healing Agent ===<br />
<br />
===== init =====<br />
host_uuid<br />
host_nqn<br />
<other params such as version><br />
<br />
Get host uuid and nqn, schedule main method to run every X seconds<br />
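The init step above (capture the host uuid and nqn, then schedule the main method every X seconds) could be sketched as follows; this is a minimal illustration, not the actual agent, and the class/attribute names are assumptions.<br />

```python
import threading
import uuid


class HealingAgent:
    """Sketch of the agent init described above; names are illustrative."""

    def __init__(self, host_nqn, interval_seconds=30):
        self.host_uuid = str(uuid.uuid4())   # in practice, the host's stable UUID
        self.host_nqn = host_nqn             # e.g. read from /etc/nvme/hostnqn
        self.interval = interval_seconds     # the "X seconds" from the spec

    def start(self):
        # Kick off the periodic schedule of the main method.
        self._run_main_once()

    def _run_main_once(self):
        try:
            self.main()                      # hostprobe + monitor_host + self_healing
        finally:
            t = threading.Timer(self.interval, self._run_main_once)
            t.daemon = True
            t.start()

    def main(self):
        pass                                 # filled in by the sub-methods below
```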
<br />
<br />
===== main method =====<br />
<br />
This will be scheduled to run every X seconds on the connector host with the following sub-methods:<br />
<br />
===== hostprobe =====<br />
<br />
Call storage provisioner /hostprobe API with stored info:<br />
host_nqn<br />
host_uuid<br />
host_name<br />
client_type<br />
duration<br />
version<br />
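The /hostprobe call with the fields listed above might be shaped like this; the endpoint path, field names, and payload schema are assumptions for illustration, not a published KumoScale API.<br />

```python
import json
from urllib import request


def hostprobe(provisioner_url, host_nqn, host_uuid, host_name,
              client_type="agent", duration=-1, version="1.0"):
    """Build a /hostprobe request carrying the fields from the spec above.
    Field names and endpoint shape are illustrative assumptions."""
    payload = {
        "hostNqn": host_nqn,
        "hostId": host_uuid,
        "hostName": host_name,
        "clientType": client_type,
        "duration": duration,
        "version": version,
    }
    return request.Request(
        provisioner_url + "/hostprobe",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )  # the caller would submit this with request.urlopen(...)
```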
<br />
<br />
===== monitor_host =====<br />
<br />
Query storage provisioner for metadata on all volumes belonging to this host (uuid)<br />
Inspect all KS volume NVMe connections / hook into their events<br />
Inspect every KS replicated volume's host MD for the state of its legs<br />
<br />
<br />
Call self_healing spec below with provisioner metadata + inspected host volume devices info:<br />
<br />
===== self_healing =====<br />
If storage provisioner metadata shows a different set of legs for the volume than what was inspected on the host, reconcile the volume’s MD state:<br />
1. Connect to targets of new replicas if not already connected<br />
2. Remove replica legs from MD that the provisioner says are no longer part of the volume<br />
3. Re-assemble MD with provisioner replicas info of the volume<br />
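The reconcile steps above reduce to a set difference between the legs the provisioner reports and the legs observed on the host. A minimal sketch, with illustrative names:<br />

```python
def plan_reconcile(provisioner_legs, host_legs):
    """Given the replica legs the provisioner reports for a volume and the
    legs observed in the host's MD array, decide which targets to connect
    (step 1) and which legs to remove (step 2)."""
    to_connect = sorted(set(provisioner_legs) - set(host_legs))  # step 1
    to_remove = sorted(set(host_legs) - set(provisioner_legs))   # step 2
    # Step 3 would then re-assemble the MD array from provisioner_legs.
    return to_connect, to_remove

# Example: provisioner reports legs a,b,c; host MD currently has b,c,d
# -> connect to a's target, remove d from the array.
```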
<br />
===== Active self healing: =====<br />
If the host MD shows one of its legs as failed, but metadata from the storage provisioner says it is supposed to be available, report the failed/missing leg to the provisioner (and vice versa for an available leg that the provisioner says is supposed to be failed/missing).<br />
<br />
If the volume has maxDownTime>0, and the provisioner reports a leg as missing for more than maxDownTime, and the volume is not being migrated, try to replace the leg:<br />
1. Call provisioner add_replica (with node’s host uuid / topology)<br />
2. Publish the replica and connect to it<br />
3. If successful, call provisioner delete_replica for the missing leg<br />
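The replacement condition above (maxDownTime &gt; 0, leg down longer than maxDownTime, volume not migrating) can be sketched as a small predicate; the function name and parameters are illustrative.<br />

```python
def should_replace_leg(max_down_time, down_seconds, migrating):
    """Decide whether to replace a missing leg, per the spec above: only when
    maxDownTime > 0, the leg has been down longer than maxDownTime, and the
    volume is not being migrated. If True, the agent would then:
      1. call provisioner add_replica (with the node's host uuid / topology)
      2. publish the replica and connect to it
      3. on success, call provisioner delete_replica for the missing leg"""
    return max_down_time > 0 and down_seconds > max_down_time and not migrating
```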
<br />
<br />
Also, in monitor_host, report to the provisioner any of the detected events below:<br />
<br />
Target connect / disconnect<br />
Replicated volume degraded / healed<br />
Replicated volume started / finished sync<br />
NVMe session established / closed<br />
<br />
(This is for monitoring/telemetry purposes)</div>
Zohar.cloud
https://wiki.openstack.org/w/index.php?title=Cinder/Specs/NVMEConnectorMDSupport&diff=176978
Cinder/Specs/NVMEConnectorMDSupport
2020-11-29T17:13:14Z
<p>Zohar.cloud: Created page with "=== NVMeoF+MD Connector === ===== get_connector_properties ===== Get host uuid Get host initiator nqn /etc/nvme/hostnqn - If nqn not generated yet, generate it. return: d..."</p>
<hr />
<div>=== NVMeoF+MD Connector ===<br />
<br />
===== get_connector_properties =====<br />
Get host uuid<br />
Get host initiator nqn<br />
Read /etc/nvme/hostnqn; if the NQN has not been generated yet, generate it.<br />
<br />
return:<br />
dict:<br />
uuid<br />
nqn<br />
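The property gathering above could look roughly like this. The fallback NQN uses the NVMe spec's UUID-based form; the source of the host uuid and the helper's shape are assumptions.<br />

```python
import os
import uuid

HOSTNQN_PATH = "/etc/nvme/hostnqn"


def get_connector_properties(path=HOSTNQN_PATH):
    """Read the host NQN from /etc/nvme/hostnqn, generating one if the file
    does not exist, and return the {uuid, nqn} dict from the spec above."""
    if os.path.exists(path):
        with open(path) as f:
            nqn = f.read().strip()
    else:
        # NVMe UUID-based NQN form; a real connector would persist this.
        nqn = "nqn.2014-08.org.nvmexpress:uuid:" + str(uuid.uuid4())
    return {"uuid": str(uuid.uuid4()), "nqn": nqn}  # host uuid source assumed
```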
<br />
<br />
===== connect_volume =====<br />
connection_properties:<br />
volume_replicas:<br />
target_nqn<br />
portals<br />
vol_uuid<br />
alias<br />
writable<br />
<flat volume properties if not replicated><br />
<br />
Check if healing agent is running, if not, launch it by calling its init method<br />
<br />
NVMe connect to portals of volume replicas for targets that are not connected.<br />
If the volume is not replicated, return the path to the bare NVMe device after connecting to the portals<br />
If the target was already connected, an async re-scan should already have been initiated by the driver's create_volume call to provisioner publish (see the driver spec)<br />
nvme connect -a <portal_address> -s <portal_port> -t <portal_transport> -n <target_nqn> -Q 128 -l -1<br />
<br />
For all of the above replicas (the host should now be connected to all their targets), get the host device path:<br />
Scan through ALL host NVMe devices and match by target_nqn<br />
Then match by volume uuid in all devices from above target controller<br />
<br />
Create a RAID array from the devices found above, one member per replica<br />
mdadm -C [-o] <device_name> -R [-N <name>] --level <raid_type> --raid-devices=<num_drives> --bitmap=internal --homehost=any --failfast --assume-clean <drive1 … driveN><br />
<br />
return:<br />
type='block'<br />
path=<device path><br />
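Building the mdadm command line quoted above could be sketched as follows. RAID level 1 is assumed here (the spec leaves `<raid_type>` open), and the helper's name is illustrative; a real connector would run the result via a rootwrap/privsep executor.<br />

```python
def build_mdadm_create(md_name, drives, read_only=False):
    """Assemble the mdadm -C argv from the spec above for joining the
    replica legs into one MD array. Level 1 (mirror) is an assumption."""
    cmd = ["mdadm", "-C"]
    if read_only:
        cmd.append("-o")
    cmd += [md_name, "-R",
            "--level", "1",                       # <raid_type> assumed RAID1
            "--raid-devices=%d" % len(drives),
            "--bitmap=internal", "--homehost=any",
            "--failfast", "--assume-clean"]
    cmd += drives                                  # <drive1 ... driveN>
    return cmd
```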
<br />
<br />
===== disconnect_volume =====<br />
connection_properties:<br />
device_path<br />
volume_replicas<br />
device_info:<br />
path<br />
<br />
Destroy RAID on device path if replicated<br />
After disconnect from last remaining NVMe device on a target: `nvme disconnect`<br />
<br />
<br />
===== extend_volume =====<br />
connection_properties:<br />
device_path<br />
volume_replicas<br />
<br />
Grow RAID array to new size<br />
mdadm --grow /dev/mdX --size <new_size></div>
Zohar.cloud
https://wiki.openstack.org/w/index.php?title=Cinder/Specs/KumoScaleVolumeDriver&diff=176977
Cinder/Specs/KumoScaleVolumeDriver
2020-11-29T16:56:52Z
<p>Zohar.cloud: Created page with "=== KumoScale Cinder Volume Driver === ===== create_volume ===== volume: display_name name size availability_zone Call provisioner create_volu..."</p>
<hr />
<div>=== KumoScale Cinder Volume Driver ===<br />
<br />
===== create_volume =====<br />
volume:<br />
display_name<br />
name<br />
size<br />
availability_zone<br />
<br />
Call provisioner create_volume with StorageClass and VolumeCreate entities<br />
entities.StorageClass(self.num_replicas, None, None, zone_list, self.block_size, self.max_iops_per_gb, self.desired_iops_per_gb, self.max_bw_per_gb, self.desired_bw_per_gb, self.same_rack_allowed, self.max_replica_down_time, None, self.span_allowed)<br />
entities.VolumeCreate(volume_name, volume_size, storage_class, self.provisioning_type, self.vol_reserved_space_percentage, 'NVMeoF', volume_uuid)<br />
kumoscale.create_volume(ks_volume)<br />
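The mapping from the Cinder volume fields above into the provisioner request might look like this. The dict field names here are illustrative stand-ins for the entities.StorageClass / entities.VolumeCreate constructors, not the actual KumoScale REST schema.<br />

```python
def build_ks_volume(volume, conf):
    """Sketch of the request the driver would pass to kumoscale.create_volume.
    `volume` carries the Cinder fields from the spec; `conf` carries the
    driver's configured StorageClass parameters."""
    storage_class = {                        # mirrors entities.StorageClass(...)
        "numReplicas": conf["num_replicas"],
        "zones": [volume["availability_zone"]],
        "blockSize": conf["block_size"],
    }
    return {                                 # mirrors entities.VolumeCreate(...)
        "name": volume["name"],
        "sizeGb": volume["size"],
        "storageClass": storage_class,
        "protocol": "NVMeoF",
    }

# The driver would then call kumoscale.create_volume(build_ks_volume(...)).
```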
<br />
<br />
===== delete_volume =====<br />
volume:<br />
name<br />
<br />
Call provisioner delete_volume<br />
kumoscale.delete_volume(volume_uuid)<br />
<br />
<br />
===== create_snapshot =====<br />
snapshot:<br />
display_name<br />
name<br />
volume_id<br />
<br />
Call provisioner create_snapshot with SnapshotCreate entity<br />
ks_snapshot = entities.SnapshotCreate(snapshot_name, volume_uuid, self.snap_reserved_space_percentage, snapshot_uuid)<br />
kumoscale.create_snapshot(ks_snapshot)<br />
<br />
<br />
===== delete_snapshot =====<br />
snapshot:<br />
name<br />
<br />
Call provisioner delete_snapshot<br />
kumoscale.delete_snapshot(snapshot_uuid)<br />
<br />
<br />
===== create_volume_from_snapshot =====<br />
volume:<br />
display_name<br />
name<br />
snapshot:<br />
name<br />
<br />
Call provisioner create_snapshot_volume with SnapshotVolumeCreate entity<br />
entities.SnapshotVolumeCreate(volume_name, snapshot_uuid, self.writable, reserved_space_percentage, volume_uuid, self.max_iops_per_gb, self.max_bw_per_gb, 'NVMeoF', self.snap_vol_span_allowed)<br />
kumoscale.create_snapshot_volume(ks_snapshot_volume)<br />
<br />
<br />
===== extend_volume =====<br />
volume:<br />
name<br />
new_size<br />
<br />
Call provisioner extend volume API<br />
kumoscale.<extend volume>(volume_uuid, new_size)<br />
<br />
<br />
===== initialize_connection =====<br />
volume:<br />
display_name<br />
name<br />
connector:<br />
uuid<br />
nqn<br />
<br />
Call provisioner host probe (to register the initiator host for the first time)<br />
(alternatively, first check whether the host is already registered)<br />
kumoscale.<hostprobe>(connector.uuid, connector.nqn, [-1 interval])<br />
<br />
Call provisioner publish<br />
kumoscale.publish(host_uuid, volume_uuid)<br />
<br />
Now, build connection info dict with replica info as per return value spec (at end of this method spec)<br />
<br />
Query provisioner for volume dict<br />
kumoscale.get_volumes_by_id(volume_uuid)<br />
Expected volume dict: (use first element of result)<br />
uuid<br />
location:<br />
[<br />
uuid<br />
backend:<br />
persistentID<br />
]<br />
writeable<br />
<br />
Query provisioner for targets for the volume uuid<br />
kumoscale.get_targets(None, volume_uuid)<br />
<br />
For each target, query provisioner for its backend and add the backend portals to a list<br />
kumoscale.get_backend_by_id(persistent_id)<br />
Expected backend dict: (use first (and only) element)<br />
pi #backend persistent id<br />
portals:<br />
[<br />
ip<br />
port<br />
transport<br />
]<br />
<br />
<br />
Also, for each target, loop through the volume replicas:<br />
For each replica whose backend persistentID matches the target's backend persistentID:<br />
Match the replica's persistentID against the portals list above, and add the matching portals to the replica dict<br />
replica dict:<br />
vol_uuid<br />
target_nqn<br />
alias<br />
writable<br />
portals<br />
<br />
return:<br />
driver_volume_type<br />
data:<br />
volume_replicas:<br />
[<br />
vol_uuid<br />
alias<br />
writable<br />
target_nqn<br />
portals<br />
]<br />
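The target/replica/portal matching above can be sketched as a small join over the dict shapes the spec lists; the lookup key `backendPersistentId` on the target dict is an assumption made for illustration.<br />

```python
def build_volume_replicas(volume, targets, backends):
    """For each target, find the volume locations (replicas) whose backend
    persistentID matches that target's backend, and attach the backend's
    portals, producing the volume_replicas list from the return spec above."""
    replicas = []
    for target in targets:
        backend = backends[target["backendPersistentId"]]  # key name assumed
        for loc in volume["location"]:
            if loc["backend"]["persistentID"] == backend["pi"]:
                replicas.append({
                    "vol_uuid": loc["uuid"],
                    "target_nqn": target["nqn"],
                    "alias": volume.get("alias"),
                    "writable": volume["writeable"],
                    "portals": backend["portals"],
                })
    return replicas
```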
<br />
<br />
===== terminate_connection =====<br />
volume:<br />
display_name<br />
name<br />
connector:<br />
uuid<br />
<br />
Call provisioner unpublish<br />
kumoscale.unpublish(host_uuid, volume_uuid)<br />
<br />
<br />
===== get_volume_stats =====<br />
Populate static/constant values…<br />
Call provisioner get_tenants<br />
kumoscale.get_tenants()<br />
Use “default tenant” (0) stats for total capacity and free capacity<br />
<br />
return:<br />
dict:<br />
volume_backend_name<br />
vendor_name<br />
driver_version<br />
storage_protocol<br />
consistencygroup_support<br />
thin_provisioning_support<br />
multiattach<br />
total_capacity_gb<br />
free_capacity_gb</div>
Zohar.cloud