
Difference between revisions of "LocalStorageVolume"

 
== Glossary ==
 
  
'''Qcow2''': the QEMU copy-on-write (version 2) disk image format used to back volumes in this design.

'''Qcow2 Image''': a disk image stored in the qcow2 format.

'''Incremental Snapshot''': a snapshot that records only the blocks changed since the previous snapshot.

'''Pointer Table''': the lookup table in a qcow2 image that maps guest blocks to their locations in the image file.
  
 
== Summary ==
 
The goal of this blueprint is to add a first-class, customer-facing service for the management of network infrastructure within an OpenStack cloud.  This will allow service providers to offer "Networking as a Service" (NaaS) to their customers.
This blueprint discusses goals, use cases, requirements and design ideas for features and capabilities to enable in openstack-NaaS in order to be able to create and manage networks intended as ''collection of virtual ports with shared connectivity'', which provide VM instances with Layer-2 and possibly Layer-3 connectivity.


Higher-layer services, such as Firewall, NAT, VPN, and Load Balancing, will instead be provided by distinct services communicating with NaaS through exposed APIs. L4/L7 services are discussed at this [http://wiki.openstack.org/HigherLayerNetworkServices wiki page].


The points added in this revision for the local storage volume work are:

* This is just a driver for cinder
* Implement high-performance volumes
* Reduce network load
* Improve system reliability
* Users can download their snapshots from swift
* A region can create a volume based on a snapshot taken in another region
* Incremental snapshots
* Online (live) snapshots
* Other features can be taken from AWS EBS
  
 
== Rationale ==
 
The main aim of NaaS is to provide Openstack users with a service for providing Layer-2 networking to their VM instances; a network created with NaaS can be indeed regarded as a virtual network switch, together with related network devices attached to it, which potentially spans over all the compute nodes in the cloud.
Apart from providing Layer-2 services, NaaS also aims at providing Layer-3 networking, intended as IP configuration management and IP routing configuration.


NaaS APIs should be decoupled from the implementation of the network service, which should be provided through plugins.
This implies that NaaS does not mandate any specific model for created networks, either at Layer-2 (e.g.: VLANs, IP tunnels), or Layer-3 (e.g.: file-system based injection, DHCP).


The rationale added in this revision for the local storage volume driver:

* Cost-effective
* Reliable and secure
* Reduces network load
* High performance
* High reliability
  
 
== Goals ==
 
Goals added in this revision for the local storage volume driver:

'''Goal 1''': Be consistent with AWS EBS.

'''Goal 2''': Incremental snapshots.

'''Goal 3''': High performance.


The original NaaS goals follow.

'''Goal 1''': Allow customers and CSPs to create, delete, and bridge networks. Networks can either be private, i.e.: available only to a specific customer, or shared. Networks shared only among a specific group of customers can also be considered.
 
 
'''Goal 2''': Allow customers and CSPs to manage virtual ports for their networks, and attach instances or other network appliances (physical or virtual) available in the cloud to them.
 
 
 
'''Goal 3''': Allow customers and CSPs to extend their networks from the cloud to a remote site, by attaching a bridging device within the cloud to their networks; the bridging device would then bridge to the appropriate remote site.
 
 
 
'''Goal 4''': Allow customers and CSPs to manage IP configuration for their networks. Both IPv4 and IPv6 should be supported. Zero, one, or more IP subnets can be associated with a network.
 
  
'''Goal 5''': Allow customers and CSPs to define IP routes among subnets. Routes can be defined either across subnets in the same virtual network, or across IP subnets in distinct virtual networks owned by the same tenant.
  
'''Goal 6''': Allow customers and CSPs to monitor networks by making available statistics such as the total number of bytes transmitted and received per network, port or VIF. IP-based statistics should also be available.
 
 
'''Goal 7''': Allow customers and CSPs to securely configure network policies for networks, ports, and devices attached to them. These policies can include, for instance, port security policies, access control lists, high availability, or QoS policies (which are typically available on physical network switches). Only a basic set of configuration options will be supported; the remaining network policies should be assumed as plugin-specific and always be configured through API extension mechanisms.
 
 
 
'''Goal 8''': Allow CSPs to register and configure the plugins providing the actual implementation of the network service. CSPs should be able to select and plug in third-party technologies as appropriate.  This may be for extended features, improved performance, or reduced complexity or cost.
 
  
 
== Use cases ==
 
  
=== Use case 1: Create private network and attach instance to it ===
''Related topology:'' Each tenant has an isolated network. Tenants are free to do whatever they want on their network, because it is completely isolated from those of other tenants. This is the NASA Nebula model as of today, implemented using one VLAN per tenant. Note that although this same model could equally be implemented with different technologies (e.g. GRE tunnels instead of VLANs for isolation), this would not change the nature of the model itself.

# Customer uses the NaaS API to create a Network;
# On success, NaaS API returns a unique identifier for the newly created network;
# Customer uses NaaS API for configuring a logical port on the network;
# Customer invokes Cloud Controller API to run an instance, specifying network and virtual port for it;
# Cloud Controller API dispatches request to compute service;
# The compute service creates VM and VIFs. For each VIF, it asks NaaS to plug it into the port and network specified by Customer in (4).

=== Use case 1: Create volume ===
# cinder-api creates a new volume DB item;
# cinder-driver does nothing.

''NOTE'': The qcow2 image backing the volume is only created when the volume is attached.
 
  
=== Use case 2: Attach instance to default public network ===
''Related topology:'' Similar to the 'Flat' mode currently supported by nova network. Instances from different customers are all deployed on the same virtual network. In this case, the Core NaaS service can provide port isolation policy in order to ensure VM security.
 
  
# Customer uses NaaS API to retrieve public networks;
# On success, NaaS API returns a list of unique network identifiers; Customer selects a network from this list;
 
# Customer uses NaaS API for configuring a logical port on the network;
 
# Customer invokes Cloud Controller API to run an instance, specifying network and virtual port for it;
 
# Cloud Controller API dispatches request to compute service;
 
# The Nova Compute service creates VM and VIFs. For each VIF, it asks NaaS to plug it into the port and network specified by Customer in (4).
 
  
The main difference between this use case and the previous one is that the Customer uses a pre-configured network instead of creating it. Another point that needs to be discussed is whether customers should be allowed to manage ports for public networks. Alternatively, the compute service can implicitly create a port when it contacts the NaaS API.
=== Use case 2: Create volume from snapshot ===
# cinder-api creates a new volume DB item and sets volume['snapshot_id'];
# cinder-driver does nothing.
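
To make the two create use cases concrete, the following is a minimal sketch of the driver side. The class name and method bodies are illustrative assumptions that only mirror the cinder volume-driver calling convention; the blueprint itself says the driver does no work at create time, because the qcow2 image is created at attach time.

<pre>
# Minimal sketch only -- class and method names are assumptions that mirror
# the cinder volume driver interface; the blueprint states that creation is
# deferred, so both calls are effectively no-ops.


class LocalQcow2Driver(object):
    """Hypothetical cinder driver backing volumes with local qcow2 images."""

    def create_volume(self, volume):
        # Use case 1: cinder-api has already created the volume DB item.
        # Nothing to do here; the qcow2 image is created at attach time.
        pass

    def create_volume_from_snapshot(self, volume, snapshot):
        # Use case 2: cinder-api has already set volume['snapshot_id'].
        # Nothing to do here either; the snapshot is downloaded from swift
        # and written into the qcow2 image when the volume is attached.
        pass
</pre>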
  
=== Use case 3: Register bridge for connecting cloud network to other site ===
''Related topology:'' Customer on-premise data centre extending into the cloud, interconnecting networks in distinct clouds. Although the actual implementation can be provided in several ways, we are interested in the abstract model: a single connectivity domain spanning two or more networks in distinct administrative domains.
 
  
# Customer uses NaaS API to register a bridge for its network;
# On success the NaaS API returns the bridge identifier;
# Customer uses NaaS API to provide bridge configuration (e.g.: remote endpoint, port, credentials);
 
# Customer uses NaaS API to create a virtual port on the network for the bridge device;
 
# Customer uses NaaS API to plug the bridge device into the network.

=== Use case 3: Delete volume ===
# cinder-driver checks the status of the volume's snapshots;
# cinder-api destroys the volume DB item.
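
A sketch of the driver-side check in this use case. How the snapshot records are obtained is not specified by the blueprint, so here they are simply passed in as a list of dicts, and the status values are assumptions.

<pre>
# Sketch of use case 3: refuse to delete the volume while any of its
# snapshots is still being created or uploaded.


def check_snapshots_before_delete(snapshots):
    busy = [s['id'] for s in snapshots
            if s['status'] not in ('available', 'deleted', 'error')]
    if busy:
        raise RuntimeError("snapshots still in progress: %s" % busy)
    # Otherwise the driver returns and cinder-api destroys the volume DB item.
</pre>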
 
  
=== Use case 4: Retrieve statistics for a network ===
  
# Customer uses NaaS API to retrieve a specific network;
# On success the NaaS API returns the network identifier;
# Customer uses NaaS API to retrieve statistics for the network. Filters can be used to specify which data should be retrieved (e.g.: total bytes RX/TX), and for which components of the network they should be retrieved (whole network, specific port(s), specific VIF(s))
# NaaS invokes the implementation plugin to retrieve required information;
 
# NaaS API returns statistics data and the customer processes them.
 
  
''NOTE:'' the actor for this use case can either be a customer or a higher-layer monitoring or billing service, e.g.: a service which charges customers according to their network usage.
=== Use case 4: Create snapshot ===
# cinder-driver uses a qemu monitor command to create a snapshot in the image;
# cinder-driver uploads the incremental snapshot to swift;
# cinder-driver deletes the old snapshot in the image.

''NOTE'': A snapshot only needs to be created while the volume is attached. While the volume is detached there are no changes to it, so no snapshot needs to be created.
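
A rough sketch of this flow, assuming the driver talks to the qemu monitor over a QMP unix socket and uploads to swift with python-swiftclient. The two monitor command names and the device name "virtio0" are hypothetical placeholders: the blueprint only says that two new monitor commands would be added to qemu.

<pre>
import json
import socket

from swiftclient import client as swift_client   # python-swiftclient


def qmp_command(qmp_socket_path, command, arguments=None):
    """Send one command over the qemu QMP monitor socket and return the reply."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(qmp_socket_path)
    chan = sock.makefile("rw")
    chan.readline()                                             # QMP greeting
    chan.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
    chan.flush()
    chan.readline()                                             # capabilities ack
    chan.write(json.dumps({"execute": command,
                           "arguments": arguments or {}}) + "\n")
    chan.flush()
    reply = json.loads(chan.readline())
    sock.close()
    return reply


def create_snapshot(qmp_socket_path, volume_id, snapshot_id, swift_conn):
    # swift_conn: an authenticated swiftclient.client.Connection, e.g.
    # swift_client.Connection(authurl=AUTH_URL, user=USER, key=KEY).
    dump_path = "/var/lib/cinder/%s.incr" % snapshot_id

    # 1. Ask qemu to cut a live snapshot and dump the blocks changed since the
    #    previous snapshot to a local file (hypothetical monitor command).
    qmp_command(qmp_socket_path, "incremental-snapshot",
                {"device": "virtio0", "file": dump_path})

    # 2. Upload the incremental snapshot to swift.
    swift_conn.put_container("volume-%s" % volume_id)
    with open(dump_path, "rb") as dump:
        swift_conn.put_object("volume-%s" % volume_id, snapshot_id, dump)

    # 3. Drop the now-obsolete old snapshot from the image
    #    (second hypothetical monitor command).
    qmp_command(qmp_socket_path, "delete-old-snapshot", {"device": "virtio0"})
</pre>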
  
=== Use case 5: Configure network policies ===
  
# Customer uses NaaS API to retrieve a specific network;
# On success the NaaS API returns the network identifier;
 
# Customer uses NaaS API to enforce a policy on the network; for instance a policy can specify a maximum bit rate on a specific port or block a specific protocol over the whole network;
 
# NaaS invokes the implementation plugin to enforce the policy;
 
# NaaS API informs the user about the result of the operation.

=== Use case 5: Delete snapshot ===
# cinder-driver deletes the snapshot in swift.
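
Correspondingly, deleting a snapshot is a single swift call in this sketch; the container/object naming follows the convention assumed in the create-snapshot sketch above.

<pre>
def delete_snapshot(swift_conn, volume_id, snapshot_id):
    # swift_conn: authenticated swiftclient.client.Connection
    swift_conn.delete_object("volume-%s" % volume_id, snapshot_id)
</pre>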
 
  
=== Use case 6: Configure an IP subnet and attach instances to it ===
  
# Customer uses NaaS API to retrieve a specific network;
# On success the NaaS API returns the network identifier;
# Customer uses NaaS API to create an IP subnet, by specifying CIDR (or alternatively network address and netmask), and the gateway;
# On success NaaS API returns a unique identifier for the newly created subnet;
# Customer invokes NaaS API in order to attach a VIF already attached to one of his L2 networks to the newly created subnet;
# NaaS verifies sanity of input data (e.g.: VIF attached to the appropriate network);
 
# NaaS invokes the IP configuration management plugin to provide the supplied VIF with the appropriate configuration;

=== Use case 6: Attach volume ===
# cinder-driver creates the qcow2 image of the volume;
# if the volume is a new volume, go to (5);
# if the volume is a new volume created from a snapshot, download the snapshot, write it to the qcow2 image, and go to (5);
# if the volume is an existing volume, download the latest snapshot and write it to the qcow2 image;
# update volume['host'] to the new host where the instance resides; the volume is then attached to the instance.
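
A sketch of the attach flow. qemu-img is invoked directly to create the backing image; how an incremental snapshot is written into that image is not specified by the blueprint, so apply_incremental() and the last_snapshot_id key are hypothetical placeholders, and volume is assumed to be a plain dict.

<pre>
import subprocess


def apply_incremental(image_path, snapshot_bytes):
    # Placeholder: depends on the (unspecified) incremental snapshot format.
    raise NotImplementedError


def attach_volume(swift_conn, volume, host, image_path, size_gb):
    # 1. Create the qcow2 image that will back the volume on the new host.
    subprocess.check_call(["qemu-img", "create", "-f", "qcow2",
                           image_path, "%dG" % size_gb])

    # 2. A brand-new volume has no data to restore.
    snap_id = volume.get("snapshot_id") or volume.get("last_snapshot_id")
    if snap_id:
        # 3./4. Volume created from a snapshot, or an existing volume being
        # moved: download the snapshot from swift and write it into the image.
        _headers, body = swift_conn.get_object("volume-%s" % volume["id"], snap_id)
        apply_incremental(image_path, body)

    # 5. Record the host where the instance resides; the image is then
    #    handed to the hypervisor for attachment.
    volume["host"] = host
    return image_path
</pre>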
 
  
=== Use case 7: Configure a route between two IP subnets ===
 
 
For this use case, we assume that the customer already has the identifiers (URIs, UUIDs) of the two subnets that should be routed.
 
  
 
# Customer invokes NaaS API to create a route between the two subnets;
# NaaS API creates appropriate routes using the CIDRs and gateway addresses for the two subnets;
 
# NaaS returns a unique identifier for the newly created route.
 
 
The way in which the IP route is created (e.g.: manipulating route tables on instances, manipulating routing tables in hypervisors, configuring a router virtual appliance, etc.) is plugin-specific.
 
Routing attributes, such as distance, cost, and weight, should also be part of extension APIs as they are not required to provide the basic functionality.


=== Use case 7: Detach volume ===
 
 
=== Plug-in use cases ===
 
 
In this section, we give examples of the technologies that a service provider may wish to plug into NaaS to provide L2/L3 networking.
 
Some of these technologies are commercial in nature. 
 
Where this is the case, this blueprint will include bindings for an open-source alternative, so that the NaaS API is completely supported by open-source software.
 
It is a goal of this blueprint that a service provider may select different implementation technologies.
 
 
{| border="1" cellpadding="2" cellspacing="0"
 
|  Category
 
|-
 
|  Distributed virtual switches
 
|-
 
|  Inter-datacenter tunnels
 
|-
 
|  IP address management
 
|}
 
  
 
== Requirements ==
 
  
R1. Add a first-class, customer-facing service for management and configuration of network infrastructure via a RESTful API.  This service shall be known as openstack-NaaS, and the API that it exposes shall be known as the OpenStack NaaS API.
  
R2. Modify nova-compute to obtain network details by calling openstack-NaaS through its public API, rather than calling nova-network as it does today.
  
 
... lots more coming here don't worry!
 
Requirements added in this revision for the local storage volume driver:

R1. Support qcow2.

R2. Modify qemu to add two new monitor commands.
  
 
== Design Ideas ==
 
At this stage any attempt to provide a design for NaaS would be quite unrealistic. This section should be therefore regarded as a first attempt to define general design guidelines.
 
 
At a very high-level view the Network Service and its interactions with other entities (compute service, database, plugins) can be summarized as follows:
 
  
 
[[Image:LocalStorageVolume$Naas-Core-Overview.png]]
 
  
* The Nova Compute service should not have any knowledge of the hypervisor's networking stack and uses NaaS API for plugging VIFs into networks (this is slightly different from current nova design where part of the network manager's code - namely setup_compute_network - is used by the compute service as well);
* Distinct plugins for Layer-2 networking and Layer-3 networking should be allowed; also multiple plugins should be expected at least for Layer-3 networking; for instance one plugin could provide IP configuration through DHCP, whereas another plugin could use agent-based configuration.
* Layer-3 networking, although part of NaaS, is not mandatory. In its most simple form, NaaS can provide Layer-2 connectivity only;
* Both Layer-2 and Layer-3 plugins can be attached to NaaS; however, if no Layer-3 plugin is provided, NaaS should raise a [[NotImplemented]] error for every L3 API request.
 
* NaaS API serves requests from both customers and the compute service. In particular, the responsibilities of the compute services w.r.t. NaaS are the following:
 
** Plugging/Unplugging virtual interfaces in virtual networks managed by NaaS;
 
** Attach/Detach virtual interfaces to IP subnets configured in NaaS.
 
* The "Plugin dispatcher" component is in charge of dispatching API requests to the appropriate plugin (which is not part of NaaS);
 
* For Layer-2 networking, the plugin enforces network/port configuration on hypervisors using proprietary mechanisms;
 
* Similarly, Layer-3 networking plugins enforce IP configuration and routing using proprietary mechanisms;
 
* Although the diagram supposes a plugin has a 'manager' component on the NaaS node and an 'agent' component, this is not a requirement, as NaaS should be completely agnostic w.r.t. plugin implementations; the 'plugin network agent' is not mandatory. The design of the plugin providing the implementation of the Core NaaS service is outside the scope of this blueprint;
 
* NaaS stores information about networks and IP subnets in its own DB. The NaaS data model is discussed in more detail in the rest of this section.


Design ideas added in this revision for the local storage volume driver (see the sketch below):

* Use qemu's qcow2 snapshot mechanism;
* Support online (live) snapshots by adding 2 monitor commands to qemu;
* Implement incremental snapshots in swift.
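
The first bullet can already be exercised offline with stock qemu-img; the two new monitor commands would provide the live equivalent while the guest keeps running. A minimal sketch of the offline operations (not the blueprint's live mechanism):

<pre>
import subprocess


def qcow2_snapshot_create(image_path, name):
    # Create an internal qcow2 snapshot (offline analogue of the live path).
    subprocess.check_call(["qemu-img", "snapshot", "-c", name, image_path])


def qcow2_snapshot_delete(image_path, name):
    # Delete an internal qcow2 snapshot.
    subprocess.check_call(["qemu-img", "snapshot", "-d", name, image_path])


def qcow2_snapshot_list(image_path):
    # Return the raw snapshot listing for the image.
    return subprocess.check_output(["qemu-img", "snapshot", "-l", image_path])
</pre>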
 
 
 
=== Data Model for NaaS ===
 
 
 
Currently nova-network uses the nova database, storing information in the ''networks'' table. It also uses a few other tables, such as ''fixed_ips'' and ''floating_ips''; each project is associated with a network.
 
In order to achieve complete separation between NaaS and the other nova services, then NaaS should have its own database.
 
In the nova DB, tables will still be associated with networks; however the network identifier will represent the unique identifier for a network created by NaaS rather than the primary key of a row in the ''networks'' table. There could be other elements cached in Nova, such as the IPs associated with instances, but Nova would no longer be the system in which network information is recorded.
 
 
 
Moreover, it might be worth thinking about a 1:N association between networks and projects, which is already required by the [[NovaSpec]]:nova-multi-nic blueprint, or even an N:M association, if we want to support networks shared among different projects.
 
 
 
The following diagram reports a high-level view of the NaaS data model. Entities related to Layer-2 networking are reported in green, whereas Layer-3 entities are reported in blue.
 
 
 
[[Image:LocalStorageVolume$NaaS-Data-Model.png]]
 
 
 
The ''Network'' entity defines Layer-2 networks; its most important attributes are: a unique identifier (which could be a URI or a UUID), the owner, and the name for the network. Details (such as the VLAN ID associated with the network) pertain to the specific implementation provided by the plugin and should therefore not be in the NaaS DB. Plugins (or the systems they connect to) will have their own database.
 
A ''Network'' can be associated with several logical ports. Apart from the port number, the ''Port'' entity can define the port's administrative status and cache statistic information about traffic flowing through the port itself.
 
Ports are associated in a 1:1 relationship with VIFs. The ''VIF'' table tracks attachments of virtual networks interfaces to ports. Although VIFs are not created by NaaS it is important to know which VIFs are attached where.
 
 
 
As regards L3 entities, ''IP_Subnet'' is the most important one; its attributes could be the subnet CIDR (or network address/netmask), and the default gateway, which should be optional.
 
The IP configuration schemes used for the subnet should also be part of this entity, assuming that NaaS can use several schemes at the same time.
 
Each entry in the ''IP_Subnet'' entity can be associated with one or more IP addresses. These are the addresses which are currently assigned for a given network (either via a DHCP lease or a static mechanism), and are associated with a VIF. While an IP address can obviously be associated with a single VIF, a VIF can instead be associated with multiple IP addresses.

Finally, the ''IP Routes'' entity links subnets for which an IP route is configured. Attributes of the IP route, such as cost and weight, are plugin-specific and should not therefore be part of this entity, which could probably be reduced to a self-association on the ''IP_Subnet'' table.
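
The entities and relationships described above could be captured roughly as follows. This is an illustrative SQLAlchemy sketch; the column names are assumptions, since the blueprint only fixes the entities and how they relate.

<pre>
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Network(Base):                      # Layer-2 entity
    __tablename__ = "networks"
    id = Column(String(36), primary_key=True)        # UUID or URI
    owner = Column(String(255))
    name = Column(String(255))


class Port(Base):                         # logical port on a network
    __tablename__ = "ports"
    id = Column(Integer, primary_key=True)
    network_id = Column(String(36), ForeignKey("networks.id"))
    admin_state = Column(String(16))


class VIF(Base):                          # 1:1 attachment of a VIF to a port
    __tablename__ = "vifs"
    id = Column(String(36), primary_key=True)
    port_id = Column(Integer, ForeignKey("ports.id"), unique=True)


class IPSubnet(Base):                     # Layer-3 entity
    __tablename__ = "ip_subnets"
    id = Column(String(36), primary_key=True)
    network_id = Column(String(36), ForeignKey("networks.id"))
    cidr = Column(String(64))
    gateway = Column(String(64), nullable=True)


class IPAddress(Base):                    # address currently assigned to a VIF
    __tablename__ = "ip_addresses"
    address = Column(String(64), primary_key=True)
    subnet_id = Column(String(36), ForeignKey("ip_subnets.id"))
    vif_id = Column(String(36), ForeignKey("vifs.id"))


class IPRoute(Base):                      # self-association between subnets
    __tablename__ = "ip_routes"
    id = Column(Integer, primary_key=True)
    source_subnet_id = Column(String(36), ForeignKey("ip_subnets.id"))
    destination_subnet_id = Column(String(36), ForeignKey("ip_subnets.id"))
</pre>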
 
 
 
=== NaaS APIs ===
 
 
 
Draft document: [[attachment:NaaS_API_spec-draft-v0.1.docx]]
 
 
 
=== Possible IP Configuration Strategies ===
 
 
 
Each VIF will need an IP address (IPv4 or IPv6);  these are given to the VM by one of the schemes described here, which can be implemented by different plugins:
 
 
 
'''Scheme 1: Agent''': Use an agent inside the VM to set the IP address details.  The agent will receive the configuration through a virtualization-specific scheme (XenStore, VMCI, etc) or through a more generic scheme such as a virtual CD-ROM.  This scheme of course requires installation of the agent in the VM image.
 
 
 
'''Scheme 2: DHCP''': Configure the VM to use DHCP (usually the default anyway) and configure a DHCP server before booting the VM.  This may be a private DHCP server, visible only to that VM, or it could be a server that is shared more widely.
 
 
 
'''Scheme 3: Filesystem modification''': Before booting the VM, set the IP configuration by directly modifying its filesystem.
 
  
 
== Pre-requisites ==
 
 
'''Multiple VIFs per VM'''.  Not in OpenStack in Cactus, but expected to be added to Nova through [[NovaSpec]]:multi-nic and [[NovaSpec]]:multinic-libvirt for  Diablo.  This is required for all supported virtualization technologies (currently KVM/libvirt, XenAPI, Hyper-V, ESX).
 
  
 
== Development Resources ==
 
 
No commitments have been made yet, but development resources have been offered by Citrix, Grid Dynamics, NTT, Midokura, and Rackspace.
 
 
We will sort out how to share the development burden when this specification is nearer completion.
 
  
 
== Work in Progress ==
 
 
The following blueprints concerning Network Services for Openstack have been registered:
 
 
* [http://wiki.openstack.org/NetworkServicePOC Network Service POC], registered by [https://launchpad.net/~ishii-hisaharu Hisaharu Ishii] from [https://launchpad.net/~ntt-pf-lab NTT-PF Lab]. There is also some POC code being worked on at lp:~ntt-pf-lab/nova/network-service
 
* [[NovaSpec]]:netcontainers, registered by [https://launchpad.net/~dramesh Ram Durairaj] from [https://launchpad.net/~cisco-openstack Cisco]
 
* [[NovaSpec]]:naas-project, registered by [https://launchpad.net/~dendrobates Rick Clark], which can be regarded as an attempt to merge all these blueprints in a single specification for NaaS.
 
 
Also:
 
* [https://launchpad.net/~erik-carlin Erik Carlin] is working on a draft spec for the OpenStack Networking API.
 
* As already mentioned, work on supporting multiple virtual network cards per instance is already in progress. [[NovaSpec]]:nova-multi-nic
 
* [https://launchpad.net/~ilyaalekseyev Ilya Alekseyev] has registered the [[NovaSpec]]:distros-net-injection blueprint in order to support file-system-based IP configuration in injection for a number of linux distros (nova now supports debian-based distros only). [https://launchpad.net/~berendt Christian Berendt] also registered a similar blueprint, [[NovaSpec]]:injection
 
* [https://launchpad.net/~danwent Dan Wendlandt] has registered [[NovaSpec]]:openvswitch-network-plugin for a NaaS plugin based on [http://www.openvswitch.org Open vSwitch]
 
  
 
== Discussion ==
 
 
Erik Carlin has created an Etherpad for general NaaS discussion.
 
Please add your comments [http://etherpad.openstack.org/6LJFVsQAL7 here]
 
 
Other discussion resources:
 
 
Etherpad from discussion session at Bexar design summit: http://etherpad.openstack.org/i5aSxrDeUU
 
 
Etherpad from alternative discussion session at Bexar design summit: http://etherpad.openstack.org/6tvrm3aEBt
 
 
Slide deck from discussion session at Bexar design summit: http://www.slideshare.net/danwent/bexar-network-blueprint
 
  
 
----
 
 
[[Category:Spec]]
 
