
= Manila + DevStack setup on Fedora 20 =

Goal
Document the steps needed to set up DevStack with OpenStack Manila on F20.

Prerequisites
Fedora 20 (F20) installed on a VM or physical system.

In this document I am using a VM as the F20 system, so my DevStack is hosted inside a VM, and the instances (aka guests) created by DevStack are VMs inside a VM (aka nested KVM).

It's good to create the F20 VM with at least 4 GB RAM, 4 vCPUs and sufficient disk space (50 GB in my case).

Disable SELinux or put it in permissive mode.
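A quick sketch of that SELinux step. The edit is demonstrated on a temporary copy of the config so it runs unprivileged; on the real system you would apply the same `sed` to `/etc/selinux/config` with sudo, plus `sudo setenforce 0` to switch the running kernel to permissive immediately:

```shell
# Hypothetical helper: flip the SELINUX=... line to permissive in a config file
set_selinux_permissive() {
    sed -i 's/^SELINUX=.*/SELINUX=permissive/' "$1"
}

# Demonstrate on a scratch copy (the real file is /etc/selinux/config)
cfg=$(mktemp)
printf 'SELINUXTYPE=targeted\nSELINUX=enforcing\n' > "$cfg"
set_selinux_permissive "$cfg"
grep '^SELINUX=' "$cfg"   # SELINUX=permissive
```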

Install and run DevStack (for Kilo and later versions of DevStack)
From Kilo onwards, Manila can be configured in DevStack using the DevStack plugin mechanism. Follow the steps mentioned in KiloDevstack to get Manila up and running in DevStack.
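For reference, enabling Manila via the plugin mechanism boils down to a line or two in DevStack's local.conf. This is only a minimal sketch — the repo URL is an assumption here; treat the KiloDevstack page as authoritative:

```ini
[[local|localrc]]
# Assumed upstream location of the Manila DevStack plugin
enable_plugin manila https://github.com/openstack/manila
```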

Sanity check &amp; Troubleshooting
In case ./stack.sh didn't succeed for you, try to figure out what the script is complaining about from the last error seen. Sometimes it errors out because some of the system services were not started. Ideally DevStack should start all the needed system services, but there can be corner cases. In such cases use systemctl to start the needed service, e.g.:

[stack@devstack-large-vm ~]$ sudo systemctl start rabbitmq-server.service

and so on. I have also seen that just re-running ./stack.sh sometimes works! Otherwise google :) or ask on openstack-dev@lists.openstack.org with [DevStack] as the tag in the subject of the mail, so that it can get the attention of the right folks. One other way is to ask your question on the #openstack-dev channel hosted on irc.freenode.net.

Assuming ./stack.sh succeeded for you, the next step is to do some basic sanity checks and set up the development shell environment. DevStack (by default) arranges all the services' consoles in a multi-window screen session, which can be accessed by doing:

[stack@devstack-large-vm ~]$ screen -x stack

This brings up the screen session with each service and its corresponding window listed at the bottom of the screen. Press Ctrl-a " to bring up the service selection window, select the service whose console you want and press Enter.

Ensure that the service you disabled earlier is NOT listed in the service selection window.

Ensure that all other services are running fine, by going to the console of each one of them.

DevStack sets up `admin` and `demo` tenants, hence apart from the terminal hosting the screen window, I generally open 2 more terminals to my DevStack VM and run the below scripts to get `admin` and `demo` shells, which can be used in future to quickly run OpenStack commands with `admin` or `demo` tenants' privileges.

For the terminal with `admin` privileges:

[root@devstack-large-vm ~]# su - stack

[stack@devstack-large-vm ~]$ cat ~/mytools/setenv_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=abc123
export OS_AUTH_URL=http://192.168.122.219:5000/v2.0/
export PS1=$PS1\[admin\]\ 
 * Source this file to set the environment, then run OpenStack commands like `cinder list` etc.

[stack@devstack-large-vm ~]$ source ~/mytools/setenv_admin
[stack@devstack-large-vm ~]$ [admin]

For the terminal with `demo` privileges:

[root@devstack-large-vm ~]# su - stack

[stack@devstack-large-vm ~]$ cat ~/mytools/setenv_demo
export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=abc123
export OS_AUTH_URL=http://192.168.122.219:5000/v2.0/
export PS1=$PS1\[demo\]\ 
 * Source this file to set the environment, then run OpenStack commands like `cinder list` etc.
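The credential files above can be generated in one go. A sketch for the demo one — the IP and password are this walkthrough's example values, substitute your own; the PS1 tweak from the transcript is omitted to keep the snippet runnable anywhere:

```shell
mkdir -p ~/mytools
# Write the demo tenant's credentials (values are this walkthrough's examples)
cat > ~/mytools/setenv_demo <<'EOF'
export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=abc123
export OS_AUTH_URL=http://192.168.122.219:5000/v2.0/
EOF

# Source it and confirm the environment took effect
. ~/mytools/setenv_demo
echo "$OS_USERNAME"   # demo
```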

[stack@devstack-large-vm ~]$ source ~/mytools/setenv_demo
[stack@devstack-large-vm ~]$ [demo]

Now do a sanity check in your `admin` and `demo` shells using some basic OpenStack commands.

[stack@devstack-large-vm ~]$ [demo] cinder list
+----+--------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+----+--------+--------------+------+-------------+----------+-------------+
+----+--------+--------------+------+-------------+----------+-------------+

[stack@devstack-large-vm ~]$ [demo] nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

[stack@devstack-large-vm ~]$ [demo] manila list
+----+------+------+-------------+--------+-----------------+
| ID | Name | Size | Share Proto | Status | Export location |
+----+------+------+-------------+--------+-----------------+
+----+------+------+-------------+--------+-----------------+

[stack@devstack-large-vm ~]$ [demo] glance image-list
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
| ID                                   | Name                            | Disk Format | Container Format | Size      | Status |
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
| c3c32496-0b90-4520-a9d4-b9341afa5993 | cirros-0.3.2-x86_64-uec         | ami         | ami              | 25165824  | active |
| 62a9b748-352f-41a6-9081-25dd48319da8 | cirros-0.3.2-x86_64-uec-kernel  | aki         | aki              | 4969360   | active |
| e7591991-8bc8-470a-a15c-723031e7b809 | cirros-0.3.2-x86_64-uec-ramdisk | ari         | ari              | 3723817   | active |
| 5d470fc2-39e3-461d-a4ea-b1b5de795604 | ubuntu_1204_nfs_cifs            | qcow2       | bare             | 318701568 | active |
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+

NOTE: the ubuntu_1204_nfs_cifs image is added by the Manila scripts to DevStack.

Sometimes I have seen that the manila share service errors out. From its console window you can try to debug the exception/error it died with. Most of the time I have seen it erroring out due to some exception related to networking. This can happen due to a race between the time the networking service and the manila share service were started. Just restarting the manila share service worked for me almost all the time.

To restart a failed service, go to its service console window, bring back the last run command by pressing the Up-arrow key once (and just once!) and hit Enter. In general, it's a good idea to restart a failed service to check if it works fine before concluding that it's really a failure of the service that needs further debugging.

One other issue you might encounter is the Cinder volume service (c-vol) giving warnings about being unable to initialize the default LVM iSCSI driver. This typically happens because the loop device needed as the PV for the stack-volumes VG isn't created. Follow the steps below to create the loop device PV:

[stack@devstack-large-vm ~]$ [admin] sudo pvs
  PV         VG           Fmt  Attr PSize PFree
  /dev/loop0 stack-shares lvm2 a--  8.20g 8.20g

[stack@devstack-large-vm ~]$ [admin] sudo losetup -f --show /opt/stack/data/stack-volumes-backing-file
/dev/loop1

[stack@devstack-large-vm ~]$ [admin] losetup -a
/dev/loop0: []: (/opt/stack/data/stack-shares-backing-file)
/dev/loop1: []: (/opt/stack/data/stack-volumes-backing-file)

[stack@devstack-large-vm ~]$ [admin] sudo vgs
  VG            #PV #LV #SN Attr   VSize  VFree
  stack-shares    1   0   0 wz--n-  8.20g  8.20g
  stack-volumes   1   0   0 wz--n- 10.01g 10.01g

Now go to the c-vol service window in the screen session, kill the service by pressing Ctrl-c and then restart it by running the last command (which can be recalled using the Up-arrow key). Now c-vol should not complain about the un-initialized driver.
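Checks like the one above can be scripted. A sketch that parses `losetup -a` output to decide whether a backing file is already attached as a loop device — the function is shown against a captured sample string so it runs without root; in practice you would feed it the output of `sudo losetup -a`:

```shell
# Hypothetical helper: is the given backing file already attached as a loop device?
#   $1 = captured `losetup -a` output, $2 = backing file path
loop_attached() {
    printf '%s\n' "$1" | grep -qF "($2)"
}

# Sample output as seen in the transcript above
sample='/dev/loop0: []: (/opt/stack/data/stack-shares-backing-file)'

if loop_attached "$sample" /opt/stack/data/stack-volumes-backing-file; then
    echo "attached"
else
    echo "missing"   # this is where you would run: sudo losetup -f --show <backing-file>
fi
```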

Re-run / Re-join DevStack
(Post a reboot/restart of your DevStack VM/system)

In case you reboot and/or restart the DevStack VM/host, you can re-join the same DevStack setup by doing (assuming you logged in afresh as root):

[root@devstack-large-vm ~]# su - stack
[stack@devstack-large-vm ~]$ cd devstack

NOTE: In general, it's a good idea to check if the stack-volumes VG is present and, if not, create it before rejoining. This ensures that you won't hit the c-vol problem stated in the Sanity check &amp; Troubleshooting section above.

[stack@devstack-large-vm ~]$ sudo losetup -f --show /opt/stack/data/stack-volumes-backing-file

Now run ./rejoin-stack.sh to recreate &amp; join your existing DevStack setup:

[stack@devstack-large-vm ~]$ ./rejoin-stack.sh

Before you do the above, it's good to check if some of the important system services are running; if not, use the systemctl start command to start them. Use the openstack-status command to get a glimpse of the system services:

[stack@devstack-large-vm devstack]$ openstack-status

Support services
mysqld:                                inactive  (disabled on boot)
libvirtd:                              active
openvswitch:                           active
dbus:                                  active
rabbitmq-server:                       active

[stack@devstack-large-vm devstack]$ sudo systemctl start mysqld.service

[stack@devstack-large-vm devstack]$ openstack-status

Support services
mysqld:                                active    (disabled on boot)
libvirtd:                              active
openvswitch:                           active
dbus:                                  active
rabbitmq-server:                       active

NOTE: You can use the systemctl enable command to ensure that these services are auto-started on system boot, but for some reason it doesn't work for the mysqld service.

Again, here too you may end up seeing some OpenStack services not starting properly, which could happen due to races between the invocation of different services and/or some dependencies like the VG not being present. See the Troubleshooting section above for resolution. The rejoin is successful once all the OpenStack services in the screen session are working fine without any errors. As always, sanity check that the DevStack setup is successful by following the steps mentioned in the Sanity check section above.

Create a Nova instance
We will create a Nova instance (aka VM / guest) using the ubuntu image present in glance image-list. Before we do that, some changes are needed in nova.conf, as below:

For some reason (maybe nested KVM is broken on F20) the Nova instance can hang during boot; if it does, ensure the CPU mode setting in the [libvirt] section of nova.conf is adjusted accordingly.

Sometimes I have seen that nova-scheduler doesn't pick the DevStack host, possibly due to the low memory/CPUs available, due to which instance creation errors out. Since ours is an all-in-one (AIO) development setup, we want to make sure that our DevStack VM/host is always selected (aka filtered) by nova-scheduler in spite of low memory/CPUs. To achieve this, append the appropriate scheduler filter setting in the [DEFAULT] section.

Don't forget to restart the nova services for the above nova.conf changes to take effect.

Switch to the `demo` tenant's shell and create a Nova instance using the commands below.

[stack@devstack-large-vm ~]$ [demo] nova keypair-add --pub_key ~/.ssh/id_rsa.pub mykey
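The exact nova.conf options are elided on this page. Purely as an illustration, the kind of edits described in the step above might look like the following — both option names and values here are assumptions, not the author's verified settings; check your release's Nova configuration reference:

```ini
[DEFAULT]
# Hypothetical: a filter that accepts every host, so the scheduler always
# picks this all-in-one node regardless of free RAM/CPU
scheduler_default_filters = AllHostsFilter

[libvirt]
# Hypothetical: avoid nested-KVM boot hangs by not passing through the host CPU
cpu_mode = none
```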

[stack@devstack-large-vm ~]$ [demo] nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

[stack@devstack-large-vm ~]$ [demo] nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

[stack@devstack-large-vm ~]$ [demo] nova boot --flavor m1.micro --image ubuntu_1204_nfs_cifs --key-name mykey --security-groups default myvm_ubuntu

Wait for the instance to get into ACTIVE/Running state and then ssh into it as a sanity check:

[stack@devstack-large-vm ~]$ [demo] nova list
+--------------------------------------+-------------+--------+------------+-------------+------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks         |
+--------------------------------------+-------------+--------+------------+-------------+------------------+
| f92c51fd-de36-402f-b072-a0e515116892 | myvm_ubuntu | ACTIVE | -          | Running     | private=10.0.0.4 |
+--------------------------------------+-------------+--------+------------+-------------+------------------+

OpenStack sets up Nova instances in a private subnet using neutron services. As you can see, the instance IP is 10.x.x.x, which is a different subnet compared to your DevStack VM/host subnet. The private subnet is created by neutron using a combination of network namespaces, Linux bridges &amp; Open vSwitch bridges. Thus one can't get to the instances using just ssh, but needs to use the network namespace and ssh from within that namespace, as shown below:

[stack@devstack-large-vm ~]$ [demo] ip netns
qrouter-7587cea0-4015-4a18-a191-20ce7be410e4
qdhcp-26f7e398-39e7-465f-8997-43062a825c27

[stack@devstack-large-vm ~]$ [demo] sudo ip netns exec qdhcp-26f7e398-39e7-465f-8997-43062a825c27 ssh ubuntu@10.0.0.4

If everything is set up as expected, you should be able to successfully ssh into the instance with the above command.

NOTE: log in with the `ubuntu` user's password for the ubuntu_1204_nfs_cifs image used here. One can use sudo inside the instance to run commands as root.

Create Manila share and access from Nova instance
Create a new share network in Manila for use by the tenant. List the tenant's private net-id and subnet-id and create the Manila share network using them.

NOTE: A share network is a private L2 subnet for the Manila share, created using neutron services and associated with the tenant's private subnet in order to achieve multi-tenancy using L2-level isolation.

[stack@devstack-large-vm ~]$ [demo] neutron net-list
+--------------------------------------+---------+--------------------------------------------------+
| id                                   | name    | subnets                                          |
+--------------------------------------+---------+--------------------------------------------------+
| 8031f472-2b64-430c-8131-7aad456ebfbb | private | 77343a5f-f553-4e20-af42-698890d8a269 10.0.0.0/24 |
| b5f39b46-6d75-4df2-a2d0-eaa410b184fd | public  | 6bb017ac-bfbf-425d-803b-31b297c4604c             |
+--------------------------------------+---------+--------------------------------------------------+

[stack@devstack-large-vm ~]$ [demo] neutron subnet-list
+--------------------------------------+----------------+-------------+--------------------------------------------+
| id                                   | name           | cidr        | allocation_pools                           |
+--------------------------------------+----------------+-------------+--------------------------------------------+
| 77343a5f-f553-4e20-af42-698890d8a269 | private-subnet | 10.0.0.0/24 | {"start": "10.0.0.2", "end": "10.0.0.254"} |
+--------------------------------------+----------------+-------------+--------------------------------------------+

[stack@devstack-large-vm ~]$ [demo] manila share-network-create --neutron-net-id 8031f472-2b64-430c-8131-7aad456ebfbb --neutron-subnet-id 77343a5f-f553-4e20-af42-698890d8a269 --name share_network_for_10xxx --description "Share network for 10.0.0.0/24 subnet"

[stack@devstack-large-vm ~]$ [demo] manila share-network-list
+--------------------------------------+-------------------------+--------+
|                  id                  |           name          | status |
+--------------------------------------+-------------------------+--------+
| 085c596f-feac-4539-97cd-393279e99098 | share_network_for_10xxx | None   |
+--------------------------------------+-------------------------+--------+

Create a new Manila share (aka export). Manila by default uses the GenericShareDriver, which uses Cinder services to create a new Cinder volume, exports it as a block device, runs mkfs on it and exports the filesystem as an NFS share. All of this happens transparently in a service VM that's created and managed by Manila!

[stack@devstack-large-vm ~]$ [demo] grep share_driver /etc/manila/manila.conf
share_driver = manila.share.drivers.generic.GenericShareDriver

[stack@devstack-large-vm ~]$ [demo] manila create --name cinder_vol_share_using_nfs --share-network-id 085c596f-feac-4539-97cd-393279e99098 NFS 1

[stack@devstack-large-vm ~]$ [demo] manila list
+--------------------------------------+----------------------------+------+-------------+-----------+---------------------------------------------------------------+
|                  ID                  |            Name            | Size | Share Proto |   Status  |                        Export location                        |
+--------------------------------------+----------------------------+------+-------------+-----------+---------------------------------------------------------------+
| 1edf541e-5fc5-49c4-8931-6eb8ecaed7c3 | cinder_vol_share_using_nfs | 1    |     NFS     | available | 10.254.0.3:/shares/share-1edf541e-5fc5-49c4-8931-6eb8ecaed7c3 |
+--------------------------------------+----------------------------+------+-------------+-----------+---------------------------------------------------------------+

NOTE: 10.254.0.3 is the IP of the service VM and /shares/share-1edf541e-5fc5-49c4-8931-6eb8ecaed7c3 is the export path.

Allow access to the share for the Nova instance:

[stack@devstack-large-vm ~]$ [demo] manila access-allow 1edf541e-5fc5-49c4-8931-6eb8ecaed7c3 ip 10.0.0.4

NOTE: As part of access-allow, Manila ensures that the service VM exports the export path for the specified tenant IP only.

Log in to the Nova instance and mount the share (use the `ubuntu` user's password):

[stack@devstack-large-vm ~]$ [demo] sudo ip netns exec qdhcp-26f7e398-39e7-465f-8997-43062a825c27 ssh ubuntu@10.0.0.4

ubuntu@ubuntu:~$ sudo mount -t nfs -o vers=4 10.254.0.3:/shares/share-1edf541e-5fc5-49c4-8931-6eb8ecaed7c3 /mnt
ubuntu@ubuntu:~$ df -h
Filesystem                                                     Size  Used Avail Use% Mounted on
/dev/vda1                                                      1.4G  524M  793M  40% /
udev                                                            56M  4.0K   56M   1% /dev
tmpfs                                                           24M  360K   23M   2% /run
none                                                           5.0M     0  5.0M   0% /run/lock
none                                                            59M     0   59M   0% /run/shm
10.254.0.3:/shares/share-1edf541e-5fc5-49c4-8931-6eb8ecaed7c3 1008M   34M  924M   4% /mnt

NOTE: If all goes well, you should be able to successfully mount the Manila share in your Nova instance, as seen above.

Good luck!

Troubleshooting
The manila share service erroring out due to an exception in create_share. The manila share service log shows the below exception:

Traceback (most recent call last):
  File "/opt/stack/manila/manila/openstack/common/rpc/amqp.py", line 433, in _process_data
    **args)
  File "/opt/stack/manila/manila/openstack/common/rpc/dispatcher.py", line 148, in dispatch
    return getattr(proxyobj, method)(ctxt, **kwargs)
  File "/opt/stack/manila/manila/share/manager.py", line 165, in create_share
    self.db.share_update(context, share_id, {'status': 'error'})
  File "/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/opt/stack/manila/manila/share/manager.py", line 159, in create_share
    context, share_ref, share_server=share_server)
  File "/opt/stack/manila/manila/share/drivers/generic.py", line 132, in create_share
    volume = self._attach_volume(self.admin_context, share, server, volume)
  File "/opt/stack/manila/manila/share/drivers/service_instance.py", line 112, in wrapped_func
    return f(self, *args, **kwargs)
  File "/opt/stack/manila/manila/share/drivers/generic.py", line 198, in _attach_volume
    % volume['id'])
ManilaException: Failed to attach volume 2a5bf78f-313d-463e-9b07-bb7a98080ce1

The Cinder volume service log, at the same time, has the below exception:
2014-06-16 16:39:38.440 DEBUG cinder.openstack.common.processutils [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf tgt-admin --update iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda from (pid=4623) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:142
2014-06-16 16:39:38.981 DEBUG cinder.openstack.common.processutils [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095] Result was 107 from (pid=4623) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:167
2014-06-16 16:39:38.981 WARNING cinder.brick.iscsi.iscsi [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095] Failed to create iscsi target for volume id:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda: Unexpected error while running command.
Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf tgt-admin --update iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
Exit code: 107
Stdout: 'Command:\n\ttgtadm -C 0 --lld iscsi --op new --mode target --tid 1 -T iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda\nexited with code: 107.\n'
Stderr: 'tgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected\ntgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected\ntgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected\ntgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected\n'
2014-06-16 16:39:38.982 ERROR oslo.messaging.rpc.dispatcher [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095] Exception during message handling: Failed to create iscsi target for volume volume-2a5bf78f-313d-463e-9b07-bb7a98080ce1.
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/manager.py", line 783, in initialize_connection
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     volume)
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/drivers/lvm.py", line 524, in create_export
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     return self._create_export(context, volume)
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/drivers/lvm.py", line 533, in _create_export
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     data = self.target_helper.create_export(context, volume, volume_path)
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/iscsi.py", line 53, in create_export
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     chap_auth)
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/brick/iscsi/iscsi.py", line 219, in create_iscsi_target
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     raise exception.ISCSITargetCreateFailed(volume_id=vol_id)
2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher ISCSITargetCreateFailed: Failed to create iscsi target for volume volume-2a5bf78f-313d-463e-9b07-bb7a98080ce1.

As we can see, Cinder is unable to create an iSCSI target for the Cinder volume, hence Manila is unable to attach the Cinder volume to the Manila service VM.

The solution is to check if tgtd.service is running and, if not, start it:

[root@devstack-large-vm ~]# systemctl status tgtd.service
tgtd.service - tgtd iSCSI target daemon
   Loaded: loaded (/usr/lib/systemd/system/tgtd.service; disabled)
   Active: inactive (dead)

[root@devstack-large-vm ~]# systemctl start tgtd.service
[root@devstack-large-vm ~]# chkconfig tgtd on
Note: Forwarding request to 'systemctl enable tgtd.service'.
ln -s '/usr/lib/systemd/system/tgtd.service' '/etc/systemd/system/multi-user.target.wants/tgtd.service'
[root@devstack-large-vm ~]#

[root@devstack-large-vm ~]# systemctl status tgtd.service
tgtd.service - tgtd iSCSI target daemon
   Loaded: loaded (/usr/lib/systemd/system/tgtd.service; enabled)
   Active: active (running) since Tue 2014-06-17 05:50:42 UTC; 29s ago
 Main PID: 10623 (tgtd)
   CGroup: /system.slice/tgtd.service
           └─10623 /usr/sbin/tgtd -f

Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: librdmacm: Warning: couldn't read ABI version.
Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: librdmacm: Warning: assuming: 4
Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: librdmacm: Fatal: unable to get RDMA device list
Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: tgtd: iser_ib_init(3355) Failed to initialize RDMA; load kernel modules?
Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: tgtd: work_timer_start(146) use timer_fd based scheduler
Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: tgtd: bs_init_signalfd(271) could not open backing-store module directory /usr/lib64/tgt/backing-store
Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: tgtd: bs_init(390) use signalfd notification
Jun 17 05:50:42 devstack-large-vm.localdomain systemd[1]: Started tgtd iSCSI target daemon.

Now manila create should go through!

Deleting the last Manila share doesn't shut down the service VM, in spite of the corresponding option being set in manila.conf. You can delete a Manila share by doing:

[stack@devstack-large-vm ~]$ [demo] manila delete de45c4db-aa89-4887-ab3c-153d7b909708

[stack@devstack-large-vm ~]$ [demo] manila list
+----+------+------+-------------+--------+-----------------+
| ID | Name | Size | Share Proto | Status | Export location |
+----+------+------+-------------+--------+-----------------+
+----+------+------+-------------+--------+-----------------+

List all the tenants' VMs using `admin` privileges:

[stack@devstack-large-vm ~]$ [admin] nova list --all-tenants
+--------------------------------------+------------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+
| ID                                   | Name                                                                   | Status  | Task State | Power State | Networks                          |
+--------------------------------------+------------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+
| 1317b8e6-0d02-4e6b-934a-225752dd809c | manila_service_instance_backend1_8c2fc21d-3dd8-42a2-8363-cacc726df9fa  | ACTIVE  | -          | Running     | manila_service_network=10.254.0.3 |
| 28c1aeff-ed98-4d53-b9ba-36028558ebf3 | myvm_ubuntu                                                            | SHUTOFF | -          | Shutdown    | private=10.0.0.3                  |
+--------------------------------------+------------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+

As can be seen, the service VM (manila_service_instance_backend1_...) is still running. The service VM is created using tenant `service` and user `nova`, so switch to those credentials and turn off the service VM as below.

NOTE: It's a good idea to create a new source file for this:

[stack@devstack-large-vm ~]$ cat ~/mytools/setenv_service
export OS_USERNAME=nova
export OS_TENANT_NAME=service
export OS_PASSWORD=abc123
export OS_AUTH_URL=http://192.168.122.219:5000/v2.0/
export PS1=$PS1\[service\]\ 

 * Source this file to get the service tenant's privileges.

[stack@devstack-large-vm ~]$ source ~/mytools/setenv_service

[stack@devstack-large-vm ~]$ [service] nova stop 1317b8e6-0d02-4e6b-934a-225752dd809c

Now switch to `admin` and check the status of the service VM:

[stack@devstack-large-vm ~]$ [admin] nova list --all-tenants
+--------------------------------------+------------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+
| ID                                   | Name                                                                   | Status  | Task State | Power State | Networks                          |
+--------------------------------------+------------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+
| 1317b8e6-0d02-4e6b-934a-225752dd809c | manila_service_instance_backend1_8c2fc21d-3dd8-42a2-8363-cacc726df9fa  | SHUTOFF | -          | Shutdown    | manila_service_network=10.254.0.3 |
| 28c1aeff-ed98-4d53-b9ba-36028558ebf3 | myvm_ubuntu                                                            | SHUTOFF | -          | Shutdown    | private=10.0.0.3                  |
+--------------------------------------+------------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+

The service VM doesn't restart post ./rejoin-stack.sh, hence creating new shares errors out. As part of rejoining DevStack, the service VM should be automatically re-started if there is at least 1 active share in the Manila DB. Sometimes this doesn't happen and we need to manually restart the service VM for manila create and other APIs to work properly.

Check if the service VM is started:

[stack@devstack-large-vm ~]$ [admin] nova list --all-tenants
+--------------------------------------+------------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+
| ID                                   | Name                                                                   | Status  | Task State | Power State | Networks                          |
+--------------------------------------+------------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+
| 1317b8e6-0d02-4e6b-934a-225752dd809c | manila_service_instance_backend1_8c2fc21d-3dd8-42a2-8363-cacc726df9fa  | SHUTOFF | -          | Shutdown    | manila_service_network=10.254.0.3 |
| 28c1aeff-ed98-4d53-b9ba-36028558ebf3 | myvm_ubuntu                                                            | SHUTOFF | -          | Shutdown    | private=10.0.0.3                  |
+--------------------------------------+------------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+

Use the right credentials to start the service VM:

[stack@devstack-large-vm ~]$ source ~/mytools/setenv_service

[stack@devstack-large-vm ~]$ [service] nova start 1317b8e6-0d02-4e6b-934a-225752dd809c

[stack@devstack-large-vm ~]$ [service] nova list
+--------------------------------------+------------------------------------------------------------------------+--------+------------+-------------+-----------------------------------+
| ID                                   | Name                                                                   | Status | Task State | Power State | Networks                          |
+--------------------------------------+------------------------------------------------------------------------+--------+------------+-------------+-----------------------------------+
| 1317b8e6-0d02-4e6b-934a-225752dd809c | manila_service_instance_backend1_8c2fc21d-3dd8-42a2-8363-cacc726df9fa  | ACTIVE | -          | Running     | manila_service_network=10.254.0.3 |
+--------------------------------------+------------------------------------------------------------------------+--------+------------+-------------+-----------------------------------+

Now manila create and other operations should work.