Tricircle provides networking automation across Neutron in multi-region OpenStack deployments.

==Use Cases==
To understand the motivation of the Tricircle project, it helps to look at its end-user use cases; for more detail, refer to this presentation: https://docs.google.com/presentation/d/1Zkoi4vMOGN713Vv_YO0GP6YLyjLpQ7fRbHlirpq6ZK4/
  
==== 0. Telecom application cloud level redundancy in OPNFV Beijing Summit ====

Shared Networks to Support VNF High Availability Across OpenStack Multi Region Deployment: these slides were presented at the OPNFV Beijing Summit, 2017.

Slides: https://docs.google.com/presentation/d/1WBdra-ZaiB-K8_m3Pv76o_jhylEqJXTTxzEZ-cu8u2A/

Video: https://www.youtube.com/watch?v=tbcc7-eZnkY

OPNFV Summit vIMS Multisite Demo (YouTube): https://www.youtube.com/watch?v=zS0wwPHmDWs

Video Conference High Availability across multiple OpenStack clouds (YouTube): https://www.youtube.com/watch?v=nK1nWnH45gI
  
==== 1. Application high availability running over cloud ====

In telecom and other industries (for example, a MySQL Galera cluster), applications tend to be designed in Active-Active, Active-Passive, or N-to-1 node configurations, because they need to be as available as possible. When such an application is migrated to cloud infrastructure, its active instance(s) are deployed in one OpenStack instance first; passive or additional active instances are then deployed in other OpenStack instance(s) to achieve 99.999% availability, where required.
  
The reason this deployment style is required is that a general cloud system only achieves 99.99% availability [1][2].
  
To achieve the required high availability, the network architecture (especially Layer 2 and Layer 3) needs to be designed across Neutron for application state replication or heartbeat.

<br />

[[File:Tricircle_usecase2.png|frameless|center|600px|Tricircle usecase2]]

<br />

The picture above shows an application backed by a Galera DB cluster that is geographically distributed across multiple OpenStack instances.
  
[1] https://aws.amazon.com/cn/ec2/sla/<br />
[2] https://news.ycombinator.com/item?id=2470298<br />
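
The following is a minimal sketch of this pattern using openstacksdk, assuming a Tricircle deployment where the central Neutron endpoint is registered as a region named "CentralRegion" and the two local OpenStack instances as "RegionOne" and "RegionTwo"; these names, the clouds.yaml entry "tricircle-cloud", the availability-zone names, and the image/flavor IDs are all illustrative rather than taken from this page:

<pre>
import openstack

IMAGE_ID = 'replace-with-image-uuid'    # illustrative placeholder
FLAVOR_ID = 'replace-with-flavor-id'    # illustrative placeholder

# Ask the central Neutron for one tenant network that may stretch
# across both local OpenStack instances; the availability-zone hints
# tell the central plugin where the network is allowed to live.
central = openstack.connect(cloud='tricircle-cloud',
                            region_name='CentralRegion')
net = central.network.create_network(
    name='galera-ha-net',
    availability_zone_hints=['az1', 'az2'])  # illustrative AZ names
central.network.create_subnet(
    network_id=net.id, name='galera-ha-subnet',
    ip_version=4, cidr='10.0.1.0/24')

# Boot one cluster member per region on that same network; Tricircle
# mirrors the network into each local Neutron on demand.
for region, server_name in (('RegionOne', 'galera-1'),
                            ('RegionTwo', 'galera-2')):
    local = openstack.connect(cloud='tricircle-cloud', region_name=region)
    local.compute.create_server(
        name=server_name, image_id=IMAGE_ID, flavor_id=FLAVOR_ID,
        networks=[{'uuid': net.id}])
</pre>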
  
==== 2. Dual ISPs Load Balancing for internet link ====

Applications are deployed in separate OpenStack instances with dual ISPs when internet link redundancy, load balancing, and east-west traffic isolation for data/state replication are needed.

<br />

[[File:Tricircle_usecase5.png|frameless|center|600px|Tricircle usecase5]]

<br />
  
==== 3. Isolation of East-West traffic ====

In the financial industry, more than one OpenStack instance is deployed: some OpenStack instances are put in the DMZ zone and others in the trust zone, so that an application, or part of it, can be placed in the appropriate security zone. Although the tenant's resources are provisioned in different OpenStack instances, east-west traffic among those resources should be isolated.

<br />

[[File:Tricircle_usecase3.png|frameless|center|600px|Tricircle usecase3]]

<br />
Bandwidth-sensitive, heavy-load applications like CAD modeling call for a cloud close to the end user in distributed edge sites for a better user experience; multiple OpenStack instances are deployed into edge sites, but east-west communication with isolation between the tenant's resources is also required.

<br />

[[File:Tricircle_usecase4.png|frameless|center|600px|Tricircle usecase4]]

<br />
Similar requirements can also be found in the ops session of the OpenStack Barcelona summit, "Control Plane Design (multi-region)", lines 25~26 and 47~50: https://etherpad.openstack.org/p/BCN-ops-control-plane-design
  
==== 4. Cross Neutron L2 network for NFV area ====

In the NFV (Network Function Virtualization) area, network functions like routers, NAT, and load balancers are virtualized. The cross-Neutron L2 networking capabilities provided by Tricircle, such as IP address space management, IP allocation, and global management of L2 network segments, help inter-connect VNFs (virtualized network functions) across sites. For example, with vRouter1 in site1 and vRouter2 in site2, these two VNFs can share one L2 network that spans both sites.
  
==== 5. Cloud Capacity Expansion ====

After an OpenStack cloud has been running for a while and many resources have been provisioned, the capacity of one of its OpenStack deployments may no longer be enough, so a new OpenStack deployment needs to be added to the cloud. Tenants, however, still want to add VMs to their existing networks; they don't want to create a new network, router, or other required resources.
[[File:Tricircle_usecase1.png|frameless|center|600px|Tricircle usecase1]]
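
As a hedged sketch of what this looks like for the tenant (same illustrative region and cloud names as in the earlier openstacksdk example; "RegionThree" stands for the newly added deployment):

<pre>
import openstack

# The tenant's network already exists in the central Neutron;
# look it up there instead of creating anything new.
central = openstack.connect(cloud='tricircle-cloud',
                            region_name='CentralRegion')
net = central.network.find_network('existing-tenant-net')  # illustrative name

# Boot the additional VM in the newly added OpenStack instance.
# No new network or router has to be created by the tenant, though the
# network's availability-zone hints must allow the new deployment.
new_region = openstack.connect(cloud='tricircle-cloud',
                               region_name='RegionThree')
new_region.compute.create_server(
    name='scale-out-vm',
    image_id='replace-with-image-uuid',
    flavor_id='replace-with-flavor-id',
    networks=[{'uuid': net.id}])
</pre>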
 
 
 
==Architecture==
Tricircle is now dedicated to networking automation across Neutron in multi-region OpenStack deployments. The design blueprint, with ongoing improvements, is developed at https://docs.google.com/document/d/1zcxwl8xMEpxVCqLTce2-dUOtB-ObmzJTbV1uSQ6qTsY/
  
From the control plane view (the cloud management view), Tricircle makes the Neutron servers in multi-region OpenStack clouds work as one cluster, and enables the creation of global abstract networking resources such as networks and routers across multiple OpenStack clouds. From the data plane view (the end user's resource view), all VMs (or bare metal servers, or containers) are provisioned in different clouds but can be inter-connected via those global abstract networking resources, with tenant-level isolation.
  
[[File:Tricircle_view.png|frameless|center|x400px|Tricircle_view]]
 
<br />

The software design is as follows:
<br />

[[File:Tricircle_architecture.png|frameless|center|x400px|Tricircle_architecture]]
<br />

A shared Keystone (deployed centrally or distributed) or federated Keystones can be used for identity management across Tricircle and the multiple OpenStack instances.
  
* Tricircle Local Neutron Plugin
# It runs inside the Neutron server process, like the OVN or Dragonflow Neutron plugins.
# It triggers cross-Neutron networking automation, acting as a shim layer between the real core plugin and the Neutron API server (a configuration sketch follows this component list).
# In the Tricircle context, the Tricircle Local Neutron Plugin is called the "local plugin" for short.
# The Neutron server with the Tricircle Local Neutron Plugin installed is also called the "local Neutron" or "local Neutron server". The Nova and Cinder services that work together with a local Neutron can likewise be called local Nova and local Cinder, as opposed to the central Neutron.
  
* Tricircle Central Neutron Plugin
# It runs inside the Neutron server process, like the OVN or Dragonflow Neutron plugins.
# It provides tenant-level L2/L3 networking automation across multiple OpenStack instances.
# In the Tricircle context, the Tricircle Central Neutron Plugin is called the "central plugin" for short.
# The Neutron server with the Tricircle Central Neutron Plugin installed is also called the "central Neutron" or "central Neutron server".
  
* Admin API
# Manages the mappings between OpenStack instances and availability zones.
# Retrieves object UUID routing.
# Exposes an API for maintenance.
 
* XJob
# Receives and processes cross-OpenStack functionalities and other asynchronous jobs from the Admin API or the Tricircle Central Neutron Plugin.
# For example, when the first instance in a project is booted, the router, security group rules, FIP, and other resources may not yet exist in the bottom OpenStack instance; these resources can be created asynchronously to accelerate the response to the first boot request. This differs from networks, subnets, and security groups, which must be created before an instance boots.
# The Admin API and the Tricircle Central Neutron Plugin send asynchronous jobs to XJob through the message bus, using the RPC API provided by XJob.
  
* Database
# Tricircle has its own database to store pods, jobs, and the resource routing tables used by the Tricircle Central Neutron Plugin, the Admin API, and XJob.
# The Tricircle Central Neutron Plugin also reuses the central Neutron server's database for global management of tenant resources such as IPs, MACs, networks, and routers.
# The Tricircle Local Neutron Plugin is a shim layer between the real core plugin and the Neutron API server, so the Neutron DB is still there for the Neutron API server and the real core plugin.
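
As referenced in the component list above, here is a configuration sketch showing how the two plugins are typically wired into Neutron. Consult the installation guide for the release you deploy: option names and module paths may differ between releases, and the URL below is a placeholder:

<pre>
# neutron.conf on the central Neutron server
[DEFAULT]
core_plugin = tricircle.network.central_plugin.TricirclePlugin

# neutron.conf on each local Neutron server: the local plugin is the
# shim, and it delegates to the real core plugin (here ML2) while
# talking to the central Neutron for cross-Neutron automation.
[DEFAULT]
core_plugin = tricircle.network.local_plugin.TricirclePlugin

[tricircle]
real_core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
central_neutron_url = http://central-neutron-host:9696
</pre>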
  
For the Glance deployment, there are several choices:

* Shared Glance, if all OpenStack instances are located inside one high-bandwidth, low-latency site.
* Shared Glance with a distributed back-end, if the OpenStack instances are located in several sites.
* Distributed Glance deployment: the Glance service is deployed across multiple sites with a distributed back-end.
* Separate Glance deployments: each site is installed with its own Glance instance and back-end, when no cross-site image sharing is needed.
  
==Value==

The motivation for developing the Tricircle open source project is to meet the demands of the use cases mentioned above:
  
* Leverage the Neutron API for cross-Neutron networking automation, so that the ecosystem around it (CLI, SDK, Heat, Murano, Magnum, etc.) is preserved seamlessly.
* Support modularized capacity expansion in a large-scale cloud: just add more OpenStack instances, inter-connected at the tenant level.
* L2/L3 networking automation across OpenStack instances.
* Tenants' VMs communicate with each other via L2 or L3 networking across OpenStack instances.
* Security groups applied across OpenStack instances.
* Tenant-level IP/MAC address management to avoid conflicts across OpenStack instances.
* Tenant-level quota control across OpenStack instances.
  
==Installation and Play==

Refer to the documentation at https://docs.openstack.org/developer/tricircle/ for single-node and multi-node setup and the networking guide.
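
For a quick single-node try-out, the standard devstack plugin mechanism can be used. The following local.conf fragment is only a sketch, assuming the usual devstack plugin layout in the Tricircle repository; see the documentation above for the authoritative steps:

<pre>
[[local|localrc]]
# Enable the Tricircle devstack plugin; devstack then installs and
# configures the Tricircle services alongside the base OpenStack.
enable_plugin tricircle https://github.com/openstack/tricircle master
</pre>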
  
==How to read the source code==

A guide for reading the source code is available at: https://wiki.openstack.org/wiki/TricircleHowToReadCode
  
==Resources==

* Design documentation: [https://docs.google.com/document/d/1zcxwl8xMEpxVCqLTce2-dUOtB-ObmzJTbV1uSQ6qTsY/ Tricircle Design Blueprint]
* Wiki: https://wiki.openstack.org/wiki/tricircle
* Documentation (installation, configuration, networking guide): https://docs.openstack.org/tricircle/latest/
* Source: https://github.com/openstack/tricircle
* Bugs: http://bugs.launchpad.net/tricircle
* Blueprints: https://launchpad.net/tricircle
* Review board: https://review.openstack.org/#/q/project:openstack/tricircle
* Announcements: https://wiki.openstack.org/wiki/Meetings/Tricircle#Announcements
* Weekly meeting IRC channel: #openstack-meeting on irc.freenode.net, every Wednesday from UTC 1:00 to UTC 2:00
* Weekly meeting IRC log: https://wiki.openstack.org/wiki/Meetings/Tricircle#Meeting_minutes_and_logs
* Tricircle project IRC channel: #openstack-tricircle on irc.freenode.net
* Tricircle project IRC channel log: http://eavesdrop.openstack.org/irclogs/%23openstack-tricircle/
* Mailing list: openstack-dev@lists.openstack.org, with [openstack-dev][tricircle] in the mail subject
* New contributor's guide: http://docs.openstack.org/infra/manual/developers.html
* Documentation: http://docs.openstack.org/developer/tricircle
* Tricircle discuss zone: https://wiki.openstack.org/wiki/TricircleDiscuzZone
  
* Tricircle big-tent application defense: https://review.openstack.org/#/c/338796 (many comments and discussions that explain Tricircle from several angles)
  
Tricircle is designed to use the same tools for submission and review as other OpenStack projects; as such, we follow the [http://docs.openstack.org/infra/manual/developers.html#development-workflow OpenStack development workflow]. New contributors should follow the [http://docs.openstack.org/infra/manual/developers.html#getting-started getting started] steps before proceeding, as a Launchpad ID and a signed contributor license are required to add new entries.
 
 
==Video Resources==

* Tricircle project update (Pike, OpenStack Sydney Summit), video: https://www.youtube.com/watch?v=baSu-eoUE1E
* Tricircle project update (Pike, OpenStack Sydney Summit), slides: https://docs.google.com/presentation/d/1JlGaMPDvnv42QV5isUl7Y1JHuTjBmAqR5fGzIcizlvg
* Move mission critical application to multi-site, what we learned: https://www.youtube.com/watch?v=l4Q2EoblDnY
* Shared Networks to Support VNF High Availability Across OpenStack Multi Region Deployment, slides: https://docs.google.com/presentation/d/1WBdra-ZaiB-K8_m3Pv76o_jhylEqJXTTxzEZ-cu8u2A/ video: https://www.youtube.com/watch?v=tbcc7-eZnkY
* OPNFV Summit vIMS Multisite Demo (YouTube): https://www.youtube.com/watch?v=zS0wwPHmDWs
* Video Conference High Availability across multiple OpenStack clouds (YouTube): https://www.youtube.com/watch?v=nK1nWnH45gI
  
==History==

During the big-tent application of Tricircle (https://review.openstack.org/#/c/338796/), it was proposed to move the API-gateway part out of Tricircle and form two independent, decoupled projects:<br />
  
Tricircle: dedicated to cross-Neutron networking automation in multi-region OpenStack deployments; runs with or without Trio2o.<br />
  
Trio2o: dedicated to providing an API gateway for those who need a single Nova/Cinder API endpoint in a multi-region OpenStack deployment; runs with or without Tricircle.<br />
  
Splitting blueprint: https://blueprints.launchpad.net/tricircle/+spec/make-tricircle-dedicated-for-networking-automation-across-neutron
  
The wiki for Tricircle before the splitting is here: https://wiki.openstack.org/wiki/tricircle_before_splitting
  
 
==Meeting minutes and logs==

All meeting logs and minutes can be found at:<br />
2017: http://eavesdrop.openstack.org/meetings/tricircle/2017/<br />
 
2016: http://eavesdrop.openstack.org/meetings/tricircle/2016/<br />
2015: http://eavesdrop.openstack.org/meetings/tricircle/2015/
  
==To do list==

Queens:<br />
# Queens Etherpad: https://etherpad.openstack.org/p/tricircle-queens-ptg
  
Pike:<br />
# Pike Etherpad: https://etherpad.openstack.org/p/tricircle-pike-design-topics
# Boston on-boarding Etherpad: https://etherpad.openstack.org/p/BOS-forum-tricircle-onboarding
  
Ocata:<br />
# Ocata to do list: https://etherpad.openstack.org/p/ocata-tricircle-work-session
  
Newton:<br />
# Newton to do list: https://etherpad.openstack.org/p/TricircleToDo
# Splitting Tricircle into two projects: https://etherpad.openstack.org/p/TricircleSplitting
  
==Team Member==

Contact team members in the IRC channel: #openstack-tricircle

===Current active contributors===
  
You can find the active contributors on http://stackalytics.com:

Reviews: http://stackalytics.com/?release=all&module=tricircle-group&metric=marks<br />
Lines of code: http://stackalytics.com/?release=all&module=tricircle-group&metric=loc<br />
Emails: http://stackalytics.com/?release=all&module=tricircle-group&metric=emails<br />
