Manila + DevStack setup on Fedora 20

Goal

Document the steps needed to set up DevStack with OpenStack Manila on F20.

Prerequisites

F20 installed on a VM or physical system.

In this document, I am using a VM as the F20 system, so my DevStack is hosted inside a VM, and the instances (aka guests) created by DevStack are VMs inside a VM (aka nested KVM).

It's good to create an F20 VM with at least 4G RAM, 4 vCPUs, and sufficient disk space (50G in my case).
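
One way to create such a VM from the host is sketched below (the virt-install flags are standard, but the VM name and ISO path are illustrative; adjust them to your environment):

    [root@host ~]# virt-install --name devstack-large-vm --ram 4096 --vcpus 4 \
          --disk size=50 --os-variant fedora20 --cdrom /path/to/Fedora-20-x86_64-DVD1.iso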

Disable SELinux or put it in permissive mode.
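
For example (a sketch of the usual commands, run as root):

    [root@devstack-large-vm ~]# setenforce 0                                                    # permissive mode for the running system
    [root@devstack-large-vm ~]# sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config  # persist across reboots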

Install and run DevStack (for Kilo and later versions of DevStack)

From Kilo onwards, Manila can be configured in DevStack using the DevStack plugin mechanism. Follow the steps mentioned in KiloDevstack to get Manila up and running in DevStack.
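
For reference, the plugin mechanism boils down to a line like the one below in your local.conf (a minimal sketch; the repo URL is an assumption, so treat the KiloDevstack page as authoritative):

    [[local|localrc]]
    enable_plugin manila https://github.com/openstack/manila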

Sanity check & Troubleshooting

  1. In case ./stack.sh didn't succeed for you, try to figure out what the script is complaining about from the last error seen. Sometimes it errors out because some of the system services were not started. Ideally DevStack should start all the needed system services, but there can be corner cases. In such cases, use systemctl to start the needed service. For example:

    [stack@devstack-large-vm ~]$ sudo systemctl start rabbitmq-server.service

    and so on. I have also seen that just re-doing ./stack.sh sometimes works!

  2. Otherwise, google :) or ask on openstack-dev@lists.openstack.org with [DevStack] as the tag in the mail subject, so that it gets the attention of the right folks.

  3. One other way is to ask your question on the #openstack-dev channel hosted on irc.freenode.net.

  4. Assuming ./stack.sh succeeded for you, the next step is to do some basic sanity checks and set up the development shell environment. DevStack (by default) arranges all the service consoles in a multi-window screen session, which can be accessed by doing:

    [stack@devstack-large-vm ~]$ screen -x stack

    This brings up the screen session with each service and its corresponding window at the bottom of the screen.

    Press Ctrl-A " to bring up the service selection window, select the service whose console you want to go to, and press Enter.
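
    A few other standard screen key bindings come in handy here:

    Ctrl-A n    switch to the next service window
    Ctrl-A p    switch to the previous service window
    Ctrl-A d    detach from the session (services keep running; re-attach with screen -x stack)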

  5. Ensure that the n-net service is NOT listed in the service selection window, since we disabled nova-network.

  6. Ensure that all other services are running fine by going to the console of each one of them.

  7. DevStack sets up admin and demo tenants, so apart from the terminal hosting the screen session, I generally open two more terminals to my DevStack VM and run the scripts below to get admin and demo shells, which can be used later to quickly run OpenStack commands with the admin or demo tenant's privileges.

    For a terminal with admin privileges:

    [root@devstack-large-vm ~]# su - stack
    
    [stack@devstack-large-vm ~]$ cat ~/mytools/setenv_admin
    # source this file to set env and then run os cmds like `cinder list` etc
    export OS_USERNAME=admin
    export OS_TENANT_NAME=admin
    export OS_PASSWORD=abc123
    export OS_AUTH_URL=http://192.168.122.219:5000/v2.0/
    export PS1=$PS1\[admin\]\ 
    
    [stack@devstack-large-vm ~]$ source ~/mytools/setenv_admin
    [stack@devstack-large-vm ~]$ [admin]

    For a terminal with demo privileges:

    [root@devstack-large-vm ~]# su - stack
    
    [stack@devstack-large-vm ~]$ cat ~/mytools/setenv_demo
    # source this file to set env and then run os cmds like `cinder list` etc
    export OS_USERNAME=demo
    export OS_TENANT_NAME=demo
    export OS_PASSWORD=abc123
    export OS_AUTH_URL=http://192.168.122.219:5000/v2.0/
    export PS1=$PS1\[demo\]\ 
    
    [stack@devstack-large-vm ~]$ source ~/mytools/setenv_demo
    [stack@devstack-large-vm ~]$ [demo]
  8. Now do a sanity check in your admin and demo shells using some basic OpenStack commands.

    [stack@devstack-large-vm ~]$ [demo] cinder list
    +----+--------+--------------+------+-------------+----------+-------------+
    | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
    +----+--------+--------------+------+-------------+----------+-------------+
    +----+--------+--------------+------+-------------+----------+-------------+
    
    [stack@devstack-large-vm ~]$ [demo] nova list
    +----+------+--------+------------+-------------+----------+
    | ID | Name | Status | Task State | Power State | Networks |
    +----+------+--------+------------+-------------+----------+
    +----+------+--------+------------+-------------+----------+
    
    [stack@devstack-large-vm ~]$ [demo] manila list
    +----+------+------+-------------+--------+-----------------+
    | ID | Name | Size | Share Proto | Status | Export location |
    +----+------+------+-------------+--------+-----------------+
    +----+------+------+-------------+--------+-----------------+
    
    [stack@devstack-large-vm ~]$ [demo] glance image-list
    +--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
    | ID                                   | Name                            | Disk Format | Container Format | Size      | Status |
    +--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
    | c3c32496-0b90-4520-a9d4-b9341afa5993 | cirros-0.3.2-x86_64-uec         | ami         | ami              | 25165824  | active |
    | 62a9b748-352f-41a6-9081-25dd48319da8 | cirros-0.3.2-x86_64-uec-kernel  | aki         | aki              | 4969360   | active |
    | e7591991-8bc8-470a-a15c-723031e7b809 | cirros-0.3.2-x86_64-uec-ramdisk | ari         | ari              | 3723817   | active |
    | 5d470fc2-39e3-461d-a4ea-b1b5de795604 | ubuntu_1204_nfs_cifs            | qcow2       | bare             | 318701568 | active |
    +--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
    • NOTE: The ubuntu_1204_nfs_cifs image is added to DevStack by the Manila scripts.
  9. Sometimes I have seen that the Manila share service (m-shr) errors out. From its console window you can try to debug the exception/error it failed with. Most of the time I have seen it erroring out due to some networking-related exception, which can happen due to a race between the start of q-svc and the start of m-shr. Just restarting m-shr worked for me almost every time.

  10. To restart a failed service, go to its service console window, get the last run command by pressing the Up-arrow key once (and just once!) and hit Enter. In general, it's a good idea to restart a failed service to check whether it works fine before concluding that it really failed and needs further debugging.

  11. One other issue you might encounter is the Cinder volume service (c-vol) giving warnings that it is unable to initialize the default LVM iSCSI driver. This typically happens because the loop device needed as the PV for the stack-volumes VG isn't created. Follow the steps below to create the loop device PV for c-vol.

    [stack@devstack-large-vm ~]$ [admin] sudo pvs
    PV         VG           Fmt  Attr PSize PFree
    /dev/loop0 stack-shares lvm2 a--  8.20g 8.20g
    
    [stack@devstack-large-vm ~]$ [admin] sudo losetup -f --show /opt/stack/data/stack-volumes-backing-file
    /dev/loop1
    
    [stack@devstack-large-vm ~]$ [admin] losetup -a
    /dev/loop0: []: (/opt/stack/data/stack-shares-backing-file)
    /dev/loop1: []: (/opt/stack/data/stack-volumes-backing-file)
    
    [stack@devstack-large-vm ~]$ [admin] sudo vgs
    VG            #PV #LV #SN Attr   VSize  VFree
    stack-shares    1   0   0 wz--n-  8.20g  8.20g
    stack-volumes   1   0   0 wz--n- 10.01g 10.01g

    Now go to the c-vol service window in the screen session, kill the c-vol service by pressing Ctrl-C, and then restart it by running the last command (which can be accessed using the Up-arrow key). Now c-vol should not complain about the uninitialized driver.

Re-run / Re-join DevStack

(After a reboot/restart of your DevStack VM/system)

  1. In case you reboot and/or restart the DevStack VM/host, you can rejoin the same DevStack setup by doing the following (assuming you logged in afresh as root).

    [root@devstack-large-vm ~]# su - stack
    [stack@devstack-large-vm ~]$ cd devstack
    • NOTE: In general, it's a good idea to check whether the stack-volumes VG is present and, if not, create it before doing rejoin-stack.sh. This ensures that you won't hit the problem stated in #11 in the Troubleshooting section above.
    [stack@devstack-large-vm ~]$ sudo losetup -f --show /opt/stack/data/stack-volumes-backing-file

    Now run rejoin-stack.sh to recreate and join your existing DevStack setup.

    [stack@devstack-large-vm ~]$ ./rejoin-stack.sh
  2. Before you do the above, it's good to check whether some of the important system services are running and, if not, use the systemctl command to start them. Use the openstack-status command to get a glimpse of the system services.

    [stack@devstack-large-vm devstack]$ openstack-status
    == Support services ==
    mysqld:                                 inactive  (disabled on boot)
    libvirtd:                               active
    openvswitch:                            active
    dbus:                                   active
    rabbitmq-server:                        active
    
    [stack@devstack-large-vm devstack]$ sudo systemctl start mysqld.service
    
    [stack@devstack-large-vm devstack]$ openstack-status
    == Support services ==
    mysqld:                                 active    (disabled on boot)
    libvirtd:                               active
    openvswitch:                            active
    dbus:                                   active
    rabbitmq-server:                        active
    • NOTE: You can use the chkconfig command to ensure that these services are auto-started on system boot, but for some reason it doesn't work for the mysqld service.
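
    For example (a sketch, using rabbitmq-server; on F20 chkconfig simply forwards to systemctl):

    [stack@devstack-large-vm devstack]$ sudo chkconfig rabbitmq-server on
    Note: Forwarding request to 'systemctl enable rabbitmq-server.service'.
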
  3. Again, here too you may end up seeing some OpenStack services not starting properly, which can happen due to races between the invocation of different services and/or missing dependencies such as an absent VG. See the Troubleshooting section above for resolution.

    ./rejoin-stack.sh is successful once all the OpenStack services in the screen session are working fine without any errors. As always, sanity-check whether the DevStack setup is successful by following the steps mentioned in the Sanity check section above.

Create a Nova instance

  1. We will create a Nova instance (aka VM / guest) using the ubuntu image present in glance image-list. Before we do that, some changes are needed in /etc/nova/nova.conf, as below:

    For some reason (maybe nested KVM is broken on F20), if the Nova instance hangs during boot, remove virt_type = kvm. To do that:

    in the [libvirt] section, ensure virt_type = qemu

    Sometimes I have seen that nova-scheduler doesn't pick the DevStack host, possibly due to the low memory/CPUs available, due to which instance creation errors out. Since ours is an all-in-one (AIO) development setup, we want to make sure that our DevStack VM/host is always selected (aka filtered in) by nova-scheduler in spite of the low memory/CPUs. To achieve this, it's preferred to do:

    in the [DEFAULT] section, add scheduler_default_filters = AllHostsFilter

    Don't forget to restart the n-cpu and n-sch services for the above nova.conf changes to take effect.
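
    Putting the two changes together, the relevant /etc/nova/nova.conf fragments look like this (a sketch; keep your other existing settings in place):

    [DEFAULT]
    scheduler_default_filters = AllHostsFilter

    [libvirt]
    virt_type = qemu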

  2. Switch to the demo tenant's shell and create a Nova instance using the commands below.

    [stack@devstack-large-vm ~]$ [demo] nova keypair-add --pub_key ~/.ssh/id_rsa.pub mykey
    
    [stack@devstack-large-vm ~]$ [demo] nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
    +-------------+-----------+---------+-----------+--------------+
    | IP Protocol | From Port | To Port | IP Range  | Source Group |
    +-------------+-----------+---------+-----------+--------------+
    | tcp         | 22        | 22      | 0.0.0.0/0 |              |
    +-------------+-----------+---------+-----------+--------------+
    
    [stack@devstack-large-vm ~]$ [demo] nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
    +-------------+-----------+---------+-----------+--------------+
    | IP Protocol | From Port | To Port | IP Range  | Source Group |
    +-------------+-----------+---------+-----------+--------------+
    | icmp        | -1        | -1      | 0.0.0.0/0 |              |
    +-------------+-----------+---------+-----------+--------------+
    
    
    [stack@devstack-large-vm ~]$ [demo] nova boot --flavor m1.micro --image ubuntu_1204_nfs_cifs --key-name mykey --security-groups default myvm_ubuntu
  3. Wait for the instance to get into the ACTIVE/Running state and then ssh into it as a sanity check.

    [stack@devstack-large-vm ~]$ [demo] nova list
     +--------------------------------------+-------------+--------+------------+-------------+------------------+
     | ID                                   | Name        | Status | Task State | Power State | Networks         |
     +--------------------------------------+-------------+--------+------------+-------------+------------------+
     | f92c51fd-de36-402f-b072-a0e515116892 | myvm_ubuntu | ACTIVE | -          | Running     | private=10.0.0.4 |
     +--------------------------------------+-------------+--------+------------+-------------+------------------+

    OpenStack sets up Nova instances in a private subnet using Neutron services. As you can see, the instance IP is 10.x.x.x, which is a different subnet compared to your DevStack VM/host subnet.

    The private subnet is created by Neutron using a combination of network namespaces, Linux bridges, and Open vSwitch bridges. Thus, one can't get to the instances using plain ssh; you need to enter the network namespace and ssh from within it, as shown below:

    [stack@devstack-large-vm ~]$ [demo] ip netns
    qrouter-7587cea0-4015-4a18-a191-20ce7be410e4
    qdhcp-26f7e398-39e7-465f-8997-43062a825c27  
    
    [stack@devstack-large-vm ~]$ [demo] sudo ip netns exec qdhcp-26f7e398-39e7-465f-8997-43062a825c27 ssh ubuntu@10.0.0.4

    If everything is set up as expected, you should be able to successfully ssh into the instance with the above command.

    • NOTE: The password for user ubuntu is ubuntu for the ubuntu_1204_nfs_cifs image used here. One can use sudo inside the instance to run commands as root.

Create a Manila share and access it from the Nova instance

  1. Create a new share network in Manila for use by the tenant. List the tenant's private net-id and subnet-id and create a Manila share network using them.

    • NOTE: A share network is a private L2 subnet for the Manila share, created using Neutron services and associated with the tenant's private subnet in order to achieve multi-tenancy using L2-level isolation.

      [stack@devstack-large-vm ~]$ [demo] neutron net-list
       +--------------------------------------+---------+--------------------------------------------------+
       | id                                   | name    | subnets                                          |
       +--------------------------------------+---------+--------------------------------------------------+
       | 8031f472-2b64-430c-8131-7aad456ebfbb | private | 77343a5f-f553-4e20-af42-698890d8a269 10.0.0.0/24 |
        | b5f39b46-6d75-4df2-a2d0-eaa410b184fd | public  | 6bb017ac-bfbf-425d-803b-31b297c4604c             |
       +--------------------------------------+---------+--------------------------------------------------+
      
      
      
      [stack@devstack-large-vm ~]$ [demo] neutron subnet-list 
       +--------------------------------------+----------------+-------------+--------------------------------------------+
       | id                                   | name           | cidr        | allocation_pools                           |
       +--------------------------------------+----------------+-------------+--------------------------------------------+
       | 77343a5f-f553-4e20-af42-698890d8a269 | private-subnet | 10.0.0.0/24 | {"start": "10.0.0.2", "end": "10.0.0.254"} |
       +--------------------------------------+----------------+-------------+--------------------------------------------+
      
      
       [stack@devstack-large-vm ~]$ [demo] manila share-network-create --neutron-net-id 8031f472-2b64-430c-8131-7aad456ebfbb  --neutron-subnet-id 77343a5f-f553-4e20-af42-698890d8a269 --name share_network_for_10xxx --description "Share network for 10.0.0.0/24 subnet"
      
      
       [stack@devstack-large-vm ~]$ [demo] manila share-network-list
       +--------------------------------------+-------------------------+--------+
       |                  id                  |           name          | status |
       +--------------------------------------+-------------------------+--------+
       | 085c596f-feac-4539-97cd-393279e99098 | share_network_for_10xxx |  None  |
       +--------------------------------------+-------------------------+--------+
  2. Create a new Manila share (aka export).

    Manila by default uses the GenericShareDriver, which uses Cinder services to create a new Cinder volume, attach it as a block device, mkfs it, and export the filesystem as an NFS share. All of this happens transparently in a service VM that's created and managed by Manila!

    [stack@devstack-large-vm ~]$ [demo] grep share_driver /etc/manila/manila.conf
     share_driver = manila.share.drivers.generic.GenericShareDriver
    
    [stack@devstack-large-vm ~]$ [demo] manila create --name cinder_vol_share_using_nfs --share-network-id  085c596f-feac-4539-97cd-393279e99098  NFS 1
    
    [stack@devstack-large-vm ~]$ [demo] manila list
     +--------------------------------------+----------------------------+------+-------------+-----------+---------------------------------------------------------------+
     |                  ID                  |            Name            | Size | Share Proto |   Status  |                        Export location                        |
     +--------------------------------------+----------------------------+------+-------------+-----------+---------------------------------------------------------------+
     | 1edf541e-5fc5-49c4-8931-6eb8ecaed7c3 | cinder_vol_share_using_nfs |  1   |     NFS     | available | 10.254.0.3:/shares/share-1edf541e-5fc5-49c4-8931-6eb8ecaed7c3 |
     +--------------------------------------+----------------------------+------+-------------+-----------+---------------------------------------------------------------+
    • NOTE: 10.254.0.3 is the IP of the service VM and /shares/share-1edf541e-5fc5-49c4-8931-6eb8ecaed7c3 is the export path.
  3. Allow the Nova instance access to the share.

    [stack@devstack-large-vm ~]$ [demo] manila access-allow 1edf541e-5fc5-49c4-8931-6eb8ecaed7c3 ip 10.0.0.4 
    • NOTE: As part of access-allow, Manila ensures that the service VM exports the export path only to the specified tenant IP.
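
    To double-check that the rule was applied, you can list the share's access rules (access-list is a standard manila client command; the exact output columns vary by release):

    [stack@devstack-large-vm ~]$ [demo] manila access-list 1edf541e-5fc5-49c4-8931-6eb8ecaed7c3
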
  4. Login to the Nova instance and mount the share.

    [stack@devstack-large-vm ~]$ [demo] sudo ip netns exec qdhcp-26f7e398-39e7-465f-8997-43062a825c27 ssh ubuntu@10.0.0.4
    • NOTE: password is ubuntu.

      ubuntu@ubuntu:~$ sudo mount -t nfs -o vers=4  10.254.0.3:/shares/share-1edf541e-5fc5-49c4-8931-6eb8ecaed7c3 /mnt
      ubuntu@ubuntu:~$ df -h
       Filesystem                                                     Size  Used Avail Use% Mounted on
       /dev/vda1                                                      1.4G  524M  793M  40% /
       udev                                                            56M  4.0K   56M   1% /dev
       tmpfs                                                           24M  360K   23M   2% /run
       none                                                           5.0M     0  5.0M   0% /run/lock
       none                                                            59M     0   59M   0% /run/shm
       10.254.0.3:/shares/share-1edf541e-5fc5-49c4-8931-6eb8ecaed7c3 1008M   34M  924M   4% /mnt
    • NOTE: If all goes as expected, you should be able to successfully mount the Manila share in your Nova instance, as seen above.

    Good luck!

Troubleshooting

  1. manila create ... errors out due to an exception in _attach_volume.

    The m-shr log shows the exception below.

     Traceback (most recent call last):
       File "/opt/stack/manila/manila/openstack/common/rpc/amqp.py", line 433, in _process_data
         **args)
       File "/opt/stack/manila/manila/openstack/common/rpc/dispatcher.py", line 148, in dispatch
         return getattr(proxyobj, method)(ctxt, **kwargs)
       File "/opt/stack/manila/manila/share/manager.py", line 165, in create_share
         self.db.share_update(context, share_id, {'status': 'error'})
       File "/usr/lib64/python2.7/contextlib.py", line 24, in exit
         self.gen.next()
       File "/opt/stack/manila/manila/share/manager.py", line 159, in create_share
         context, share_ref, share_server=share_server)
       File "/opt/stack/manila/manila/share/drivers/generic.py", line 132, in create_share
         volume = self._attach_volume(self.admin_context, share, server, volume)
       File "/opt/stack/manila/manila/share/drivers/service_instance.py", line 112, in wrapped_func
         return f(self, *args, **kwargs)
       File "/opt/stack/manila/manila/share/drivers/generic.py", line 198, in _attach_volume
         % volume['id'])
     ManilaException: Failed to attach volume 2a5bf78f-313d-463e-9b07-bb7a98080ce1

    The c-vol log at the same time has the exception below.

    2014-06-16 16:39:38.440 DEBUG cinder.openstack.common.processutils [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c
    795095] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf tgt-admin --update iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdd
    a from (pid=4623) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:142
    2014-06-16 16:39:38.981 DEBUG cinder.openstack.common.processutils [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c
    795095] Result was 107 from (pid=4623) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:167
    2014-06-16 16:39:38.981 WARNING cinder.brick.iscsi.iscsi [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095] Fa
    iled to create iscsi target for volume id:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda: Unexpected error while running command.
    Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf tgt-admin --update iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
    Exit code: 107
    Stdout: 'Command:\n\ttgtadm -C 0 --lld iscsi --op new --mode target --tid 1 -T iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda\nexited with code: 107.\n'
    Stderr: 'tgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected\ntgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected\ntgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected\ntgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected\n'
    2014-06-16 16:39:38.982 ERROR oslo.messaging.rpc.dispatcher [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2 
    b65a066f32df4aca80fa9a6d5c795095] Exception during message handling: Failed to create iscsi target for volume volume-2a5bf78f-313d-463e-9b07-bb7a98080ce1.
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/manager.py", line 783, in initialize_connection
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     volume)
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/drivers/lvm.py", line 524, in create_export
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     return self._create_export(context, volume)
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/drivers/lvm.py", line 533, in _create_export
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     data = self.target_helper.create_export(context, volume, volume_path)
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/iscsi.py", line 53, in create_export
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     chap_auth)
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/brick/iscsi/iscsi.py", line 219, in create_iscsi_target
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     raise exception.ISCSITargetCreateFailed(volume_id=vol_id)
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher ISCSITargetCreateFailed: Failed to create iscsi target for volume volume-2a5bf78f-313d-463e-9b07-bb7a98080ce1.
    • As we can see, tgt-admin is unable to create an iSCSI target for the Cinder volume, and hence the Cinder volume cannot be attached to the Manila service VM.

    • The solution is to check whether tgtd.service is running and, if not, start it.

       [root@devstack-large-vm ~]# systemctl status tgtd.service
      tgtd.service - tgtd iSCSI target daemon
          Loaded: loaded (/usr/lib/systemd/system/tgtd.service; disabled)
          Active: inactive (dead)
      
      
      [root@devstack-large-vm ~]# systemctl start tgtd.service
      [root@devstack-large-vm ~]# chkconfig tgtd on
      Note: Forwarding request to 'systemctl enable tgtd.service'.
      ln -s '/usr/lib/systemd/system/tgtd.service' '/etc/systemd/system/multi-user.target.wants/tgtd.service'
      [root@devstack-large-vm ~]# 
      
      
      [root@devstack-large-vm ~]# systemctl status tgtd.service
      tgtd.service - tgtd iSCSI target daemon
         Loaded: loaded (/usr/lib/systemd/system/tgtd.service; enabled)
         Active: active (running) since Tue 2014-06-17 05:50:42 UTC; 29s ago
       Main PID: 10623 (tgtd)
         CGroup: /system.slice/tgtd.service
                 └─10623 /usr/sbin/tgtd -f
      Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: librdmacm: Warning: couldn't read ABI version.
      Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: librdmacm: Warning: assuming: 4
      Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: librdmacm: Fatal: unable to get RDMA device list
      Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: tgtd: iser_ib_init(3355) Failed to initialize RDMA; load kernel modules?
      Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: tgtd: work_timer_start(146) use timer_fd based scheduler
      Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: tgtd: bs_init_signalfd(271) could not open backing-store module directory /usr/lib64/tgt/backing-store
      Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: tgtd: bs_init(390) use signalfd notification
      Jun 17 05:50:42 devstack-large-vm.localdomain systemd[1]: Started tgtd iSCSI target daemon.
    • Now manila create ... should go through!

  2. Deleting the last Manila share doesn't shut down the service VM, in spite of delete_share_server_with_last_share=True being set in /etc/manila/manila.conf.

    You can delete a Manila share by doing:

    [stack@devstack-large-vm ~]$ [demo] manila delete de45c4db-aa89-4887-ab3c-153d7b909708
    
    [stack@devstack-large-vm ~]$ [demo] manila list
    +----+------+------+-------------+--------+-----------------+
    | ID | Name | Size | Share Proto | Status | Export location |
    +----+------+------+-------------+--------+-----------------+
    +----+------+------+-------------+--------+-----------------+

    List all tenants' VMs using admin privileges:

    [stack@devstack-large-vm ~]$ [admin] nova list --all-tenants
    +--------------------------------------+-----------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+
    | ID                                   | Name                                                                  | Status  | Task State | Power State | Networks                          |
    +--------------------------------------+-----------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+
    | 1317b8e6-0d02-4e6b-934a-225752dd809c | manila_service_instance_backend1_8c2fc21d-3dd8-42a2-8363-cacc726df9fa | ACTIVE  | -          | Running     | manila_service_network=10.254.0.3 |
    | 28c1aeff-ed98-4d53-b9ba-36028558ebf3 | myvm_ubuntu                                                           | SHUTOFF | -          | Shutdown    | private=10.0.0.3                  |
    +--------------------------------------+-----------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+

    As can be seen, the service VM (manila_service_instance_xxxx) is still running. The service VM is created using the service tenant and the nova user, so switch to those credentials and turn off the service VM as below:

    • NOTE: It's a good idea to create a new source file for this.

      [stack@devstack-large-vm ~]$ cat ~/mytools/setenv_service
      # source this file to set the service tenant's privileges
      export OS_USERNAME=nova
      export OS_TENANT_NAME=service
      export OS_PASSWORD=abc123
      export OS_AUTH_URL=http://192.168.122.219:5000/v2.0/
      
      export PS1=$PS1\[service\]\ 
      
      [stack@devstack-large-vm ~]$ source ~/mytools/setenv_service
      
      [stack@devstack-large-vm ~]$ [service] nova stop 1317b8e6-0d02-4e6b-934a-225752dd809c

    Now switch to admin and check the status of the service VM:

    [stack@devstack-large-vm ~]$ [admin] nova list --all-tenants
    +--------------------------------------+-----------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+
    | ID                                   | Name                                                                  | Status  | Task State | Power State | Networks                          |
    +--------------------------------------+-----------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+
    | 1317b8e6-0d02-4e6b-934a-225752dd809c | manila_service_instance_backend1_8c2fc21d-3dd8-42a2-8363-cacc726df9fa | SHUTOFF | -          | Shutdown    | manila_service_network=10.254.0.3 |
    | 28c1aeff-ed98-4d53-b9ba-36028558ebf3 | myvm_ubuntu                                                           | SHUTOFF | -          | Shutdown    | private=10.0.0.3                  |
    +--------------------------------------+-----------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+
  3. The service VM doesn't restart after rejoin-stack.sh, hence creating new shares errors out.

    As part of rejoining DevStack, the service VM should be automatically restarted if there is at least 1 active share in the Manila DB. Sometimes this doesn't happen, and we need to manually restart the service VM for manila create and other APIs to work properly.

    Check if the service VM is started:

    [stack@devstack-large-vm ~]$ [admin] nova list --all-tenants
    +--------------------------------------+-----------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+
    | ID                                   | Name                                                                  | Status  | Task State | Power State | Networks                          |
    +--------------------------------------+-----------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+
    | 1317b8e6-0d02-4e6b-934a-225752dd809c | manila_service_instance_backend1_8c2fc21d-3dd8-42a2-8363-cacc726df9fa | SHUTOFF | -          | Shutdown    | manila_service_network=10.254.0.3 |
    | 28c1aeff-ed98-4d53-b9ba-36028558ebf3 | myvm_ubuntu                                                           | SHUTOFF | -          | Shutdown    | private=10.0.0.3                  |
    +--------------------------------------+-----------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+

    Use the right credentials to start the service VM:

    [stack@devstack-large-vm ~]$ source ~/mytools/setenv_service
    
    [stack@devstack-large-vm ~]$ [service] nova start 1317b8e6-0d02-4e6b-934a-225752dd809c
    
    [stack@devstack-large-vm ~]$ [service] nova list
    +--------------------------------------+-----------------------------------------------------------------------+--------+------------+-------------+-----------------------------------+
    | ID                                   | Name                                                                  | Status | Task State | Power State | Networks                          |
    +--------------------------------------+-----------------------------------------------------------------------+--------+------------+-------------+-----------------------------------+
    | 1317b8e6-0d02-4e6b-934a-225752dd809c | manila_service_instance_backend1_8c2fc21d-3dd8-42a2-8363-cacc726df9fa | ACTIVE | -          | Running     | manila_service_network=10.254.0.3 |
    +--------------------------------------+-----------------------------------------------------------------------+--------+------------+-------------+-----------------------------------+

    Now manila create ... and other operations should work.