
SharedFS

  • Launchpad Entry: nova-sharedfs
  • Created: 26 Feb 2012
  • Contributors: Andrew Bogott

Summary

Provide an API for creating shareable filesystems, and an API for managing which instances may access those filesystems. Filesystems will be managed by a filesystem-specific driver. How instances will discover and mount filesystems is yet to be determined.

Release Note

Rationale

OpenStack currently lacks any system for managing filesystem-level storage (e.g. Lustre, NFS, Gluster). Important uses of this feature will include:

- Adding storage to an instance using file-level rather than block-level resources.

- Creating shared /home directories among all instances in a given project.

- Just about any application that requires access to files shared among systems.

User stories

  • A user of an instance realizes he needs more storage. He creates a new, gigantic Gluster volume and attaches it to his instance using the dashboard.
  • The manager of a project wants free rein to create and destroy instances without destroying user data. She creates a big shared NFS filesystem and configures instances to mount /home on that volume.
  • Agents running on multiple instances need to collaborate on a gigantic data set. The data set is stored on a shared filesystem that is visible to all the instances.

Assumptions

Instances will need filesystem-specific support (e.g. an NFS or Gluster client). The installation of clients or drivers on instances is outside the scope of this design.

Design

Each filesystem type will require a custom driver that knows how to create, destroy, and export volumes, and how to manage any necessary security settings.

It may also be necessary to provide filesystem-specific drivers that run on each instance.

Filesystems can be created and destroyed via API calls. Upon creation, a filesystem is given a scope of 'instance', 'project', or 'global'. The API call to create a filesystem will also require information about where the filesystem should be mounted within instances.

(Magic stuff that will probably be post-Folsom:

A project-wide FS is automatically attached to and mounted on any instances that are created in that project.

If not project-wide, an FS can be attached to an instance or instances via an API call. Upon attachment the FS will appear within the instance's local filesystem.)

Implementation


Create a file system

        PUT /v1.1/<tenant_id>/os-filesystem/homeforproject1

    # Sample body (project-wide):
    {'fs_entry' :
        {'size': '4Gb',
         'scope': 'project',
         'project' : 'project1'}
    }

    # Sample response (project-wide):
    {'fs_entry' :
        {'name': 'homeforproject1',
         'size': '4Gb',
         'scope': 'project',
         'project' : 'project1'}
    }

        PUT /v1.1/<tenant_id>/os-filesystem/project2storage

    # Sample body (instance):
    {'fs_entry' :
        {'size': '80Gb',
         'scope': 'instance'}
    }

    # Sample response (instance):
    {'fs_entry' :
        {'name': 'project2storage',
         'size': '80Gb',
         'scope': 'instance'}
    }
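
For illustration only, a client might drive this call like the sketch below. The requests library, the endpoint URL, and the auth token are assumptions; the payload mirrors the sample body above.

    import json
    import requests

    # Hypothetical endpoint and token, for illustration only.
    endpoint = 'http://nova-api:8774/v1.1/project1'
    headers = {'X-Auth-Token': '<token>', 'Content-Type': 'application/json'}

    # Create a project-wide filesystem, mirroring the sample body above.
    body = {'fs_entry': {'size': '4Gb',
                         'scope': 'project',
                         'project': 'project1'}}
    resp = requests.put(endpoint + '/os-filesystem/homeforproject1',
                        headers=headers, data=json.dumps(body))
    print resp.status_code, resp.text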


Get list of available file systems

        GET /v1.1/<tenant_id>/os-filesystem

    # Sample response:
    {'fs_entries': [
        {'name': 'project2storage',
         'size': '80Gb',
         'scope': 'instance'},
        {'name': 'homeforproject1',
         'size': '4Gb',
         'scope': 'project',
         'project' : 'project1'}
        ]
    }


Delete a file system

        DELETE /v1.1/<tenant_id>/os-filesystem/project2storage

    Normal Response Code: 202
    Failure Response Code: 404 (FS to be deleted not found.)
    Failure Response Code: 403 (Insufficient permissions to delete.)


List instances connected to a file system

        GET /v1.1/<tenant_id>/os-filesystem/homeforproject1/instances

    # Sample response:
    {'instance_entries': [
        {'id': 'instance00001'},
        {'id': 'instance00002'},
        {'id': 'instance00003'}
        ]
    }


Connect an instance to a file system

        PUT /v1.1/<tenant_id>/os-filesystem/homeforproject1/instances/<instance_id>

    # Sample response:
    {'instance_entry':
        {'id': 'instance00001'}
    }


Remove an instance from a file system

        DELETE /v1.1/<tenant_id>/os-filesystem/homeforproject1/instances/<instance_id>

    Normal Response Code: 202
    Failure Response Code: 404 (Instance or FS not found.)
    Failure Response Code: 403 (Insufficient permission)
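
Continuing the illustrative client sketch from the create example above (same hypothetical endpoint, headers, and requests import), attaching and then detaching an instance would look roughly like this:

    # Attach instance00001 to the project's shared filesystem, then detach it.
    url = endpoint + '/os-filesystem/homeforproject1/instances/instance00001'
    resp = requests.put(url, headers=headers)
    print resp.status_code, resp.text

    resp = requests.delete(url, headers=headers)
    print resp.status_code   # expect 202 on success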



The FS driver interface will be similar to the existing nova-volume driver interface. It will require the following methods:

    do_setup(self, context)
    check_for_setup_error()
    create_fs(fs_name)
    delete_fs(fs_name)
    list_fs()
    attach(fs_name, ip_list)
    unattach(fs_name, ip_list)
    list_attachments(fs_name)
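
As a sketch only, an NFS-style driver implementing that interface might look like the following. The class name and method bodies are hypothetical; only the method signatures come from the list above.

    class ExampleNFSDriver(object):
        """Illustrative skeleton of a SharedFS driver; not real Nova code."""

        def do_setup(self, context):
            # Read configuration and connect to the backing NFS server.
            pass

        def check_for_setup_error(self):
            # Raise if the backing store is unreachable or misconfigured.
            pass

        def create_fs(self, fs_name):
            # Carve out and export a new filesystem named fs_name.
            pass

        def delete_fs(self, fs_name):
            # Remove the export and reclaim its space.
            pass

        def list_fs(self):
            # Return the names of all filesystems managed by this driver.
            return []

        def attach(self, fs_name, ip_list):
            # Grant the given instance IPs access to the export.
            pass

        def unattach(self, fs_name, ip_list):
            # Revoke access for the given instance IPs.
            pass

        def list_attachments(self, fs_name):
            # Return the IPs currently allowed to mount fs_name.
            return []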

UI Changes

Commands to support these functions will most likely be part of the same command-line tool that manages volumes. That's the 'nova' tool today, but it is likely to be something volume-specific in Folsom.

Code Changes

For the most part this will be a stand-alone API. It will probably need to query the Nova databases in order to manage attaching and unattaching.

In order to support project-wide shares, we'll need to hook the creation and deletion of instances somehow. That may or may not involve modifying Nova code; most likely it will be handled via queue notifications, as sketched below.
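
One possible shape for that hook, consuming Nova's instance lifecycle notifications with kombu, is sketched below. The queue name, broker URL, and the decision to attach or unattach in response to each event are assumptions, not settled design.

    from kombu import Connection, Exchange, Queue

    # Nova publishes notifications to a topic exchange named 'nova' with the
    # routing key 'notifications.info'; the queue name here is made up.
    nova_exchange = Exchange('nova', type='topic', durable=False)
    notifications = Queue('sharedfs_notifications', exchange=nova_exchange,
                          routing_key='notifications.info')

    def handle_notification(body, message):
        event = body.get('event_type')
        if event == 'compute.instance.create.end':
            pass  # look up project-wide filesystems and call driver.attach()
        elif event == 'compute.instance.delete.end':
            pass  # call driver.unattach() for the instance's IPs
        message.ack()

    with Connection('amqp://guest:guest@localhost//') as conn:
        with conn.Consumer(notifications, callbacks=[handle_notification]):
            while True:
                conn.drain_events()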

Migration

Test/Demo Plan

Unresolved issues

It's unclear how to get filesystems mounted on instances, and even less clear how to do so dynamically rather than only at instance creation. EC2 metadata can be used for startup configuration but can't currently be changed at runtime. Possible approaches:

1) Push things from the server side via ssh

2) Assume the presence of an agent on the instance which polls for filesystem changes

3) Push a trivial agent using cloud-init

Someday there will be an established standard for installing and communicating with guest agents. That is clearly the right solution. In the meantime we'll probably limit ourselves to Ubuntu clients and use option 3.
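
For option 2 or 3, the agent could be as small as a loop that polls for the instance's current attachment list and mounts anything new. The metadata URL and entry format below are purely hypothetical, since no runtime-updatable source of this information exists yet.

    import json
    import subprocess
    import time
    import urllib2

    # Hypothetical, runtime-updatable source of filesystem attachments.
    METADATA_URL = 'http://169.254.169.254/openstack/latest/shared_fs.json'
    mounted = set()

    while True:
        try:
            entries = json.load(urllib2.urlopen(METADATA_URL))
        except (IOError, ValueError):
            entries = []
        for entry in entries:
            if entry['name'] not in mounted:
                # e.g. {'name': 'homeforproject1',
                #       'export': 'nfsserver:/homeforproject1',
                #       'mountpoint': '/home'}
                subprocess.call(['mount', entry['export'], entry['mountpoint']])
                mounted.add(entry['name'])
        time.sleep(60)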

BoF agenda and discussion