Difference between revisions of "TricircleBigTentQnA"


Revision as of 03:05, 13 July 2016


Questions and Answers in the Tricircle Big Tent project application

How much of the API specifics need to be reimplemented in the Cinder/Nova APIGW components? How much maintenance is needed there in case of changes in the bottom APIs?

For VM/volume-related APIs (VM, volume, backup, snapshot, and so on), nothing needs to be re-implemented in the Cinder/Nova APIGW; the Tricircle just forwards the request. APIs that manage common attributes such as Cinder volume types, Nova flavors and quotas, which are only objects in the database, do need to be re-implemented. The maintenance burden for changes in the bottom APIs is quite small. The Tricircle reuses the Tempest test cases for Nova/Cinder/Neutron, so if a change in the bottom APIs affects the Tricircle implementation, the check/gate test for each patch submitted to the Tricircle will fail, and a contributor can correct the Tricircle in time. The check and gate tests have just been added in this patch (https://review.openstack.org/#/c/339332/), and more test cases will be enabled to cover the features coming in the Tricircle.
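The forward-versus-re-implement split described above can be sketched as a small dispatch decision. This is a hypothetical illustration, not the actual Tricircle code; the resource names are taken from the examples in the answer.

```python
# Hypothetical sketch: how an API gateway can decide between forwarding a
# request to a bottom OpenStack instance and handling it locally.
# Resources whose state lives only in the top-level database (flavors,
# volume types, quotas) must be re-implemented by the gateway; everything
# else is just forwarded.
LOCAL_RESOURCES = {"flavors", "os-volume-types", "os-quota-sets"}

def dispatch(resource):
    """Return 'local' for re-implemented resources, 'forward' otherwise."""
    return "local" if resource in LOCAL_RESOURCES else "forward"
```

With this split, a change in a bottom API only affects the thin forwarding path, which is exactly what the reused Tempest jobs exercise.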

You're using an independent release model, which means you do not follow the OpenStack development cycles. Nova, Cinder, Neutron and Keystone follow a cycle-based release model. How does the Tricircle release map to supported releases in bottom instances? How does it map to the supported Keystone/Neutron implementations running in the top Tricircle instance?

The Tricircle will release in the same cycle-based model, and will branch accordingly when Nova, Cinder, Neutron and Keystone cut a new branch. In this patch the "independent" release model is configured only temporarily, because the Tricircle is at an early stage of development; currently we want to develop more features rather than strictly follow milestones like Newton-1, Newton-2, Newton-3. The "independent" release model may last one or two releases; once most of the basic features are ready, we will use the same release model as Nova, Cinder, Neutron and Keystone. What is your suggestion for the Tricircle release model?

The milestone model is actually less strict than the other release models, because the milestones are picked based on the schedule rather than the stability of the code. If you follow an independent or cycle-with-intermediary model then you are telling your users that all releases are ready to be used in production. At this point, we are past the second milestone, and so I don't think Tricircle would be considered part of Newton anyway just because of the timing. That said, if you intend to try to follow the release cycle, choosing one of those models instead of independent will help users understand that.

Thank you for your comment. OK, we will use the cycle-with-intermediary release model instead. The comment about a concise mission statement will be addressed in the next patch.

I don't like how this attempts to re-implement our core APIs: https://github.com/openstack/tricircle/blob/master/tricircle/nova_apigw/controllers/server.py#L121 The above shows many "expected" 500 errors, which is something we explicitly call a bug in OpenStack APIs. I am curious if DefCore tests pass when using Tricircle. It certainly fails the requirement for using certain upstream code sections.

The Tricircle does not yet have proper error handling; this needs to be fixed, and thank you for pointing out this "500" error-handling issue. The check and gate test that reuses the Nova, Cinder and Neutron Tempest test cases has just been added to the Tricircle in this patch (https://review.openstack.org/#/c/339332/). Because the job was only merged last week, currently only volume list/get related test cases are enabled to test the Tricircle (https://github.com/openstack/tricircle/blob/master/tricircle/tempestplugin/tempest_volume.sh):

ostestr --regex '(tempest.api.volume.test_volumes_list|tempest.api.volume.test_volumes_get)'

Server-related and other test cases will be added to the job step by step. If the Tempest test cases pass, then the DefCore tests should also pass. The Tricircle needs its own error-handling mechanism, otherwise errors could be mishandled, but its output will be kept consistent with Nova, Cinder and Neutron. This is because its main feature is to handle remote resources running on independent OpenStack instances, which means the Tricircle has to know what is happening on the remote site in order to stay consistent.
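The consistent-output requirement above amounts to translating bottom-site failures into the structured error bodies the native APIs use, instead of letting them surface as bare 500s. A minimal sketch of that idea follows; the exception names and status codes are illustrative assumptions, not Tricircle's actual classes.

```python
# Hypothetical sketch: map failures from a bottom OpenStack instance to
# Nova-style structured error responses rather than a raw 500.

class BottomSiteNotFound(Exception):
    """Illustrative: the resource does not exist on the remote site."""

class BottomSiteConflict(Exception):
    """Illustrative: the remote site rejected a conflicting request."""

# Known failure types and the (status, error title) they should produce.
_ERROR_MAP = {
    BottomSiteNotFound: (404, "itemNotFound"),
    BottomSiteConflict: (409, "conflictingRequest"),
}

def to_api_error(exc):
    """Return a (status, body) pair in the Nova error-body style.

    Anything not in the map still becomes a 500, but as a structured
    'computeFault' body rather than an unhandled traceback.
    """
    status, title = _ERROR_MAP.get(type(exc), (500, "computeFault"))
    return status, {title: {"code": status, "message": str(exc)}}
```

The point is that a 500 should only ever mean "genuinely unexpected", which is also what the reused Tempest cases implicitly check.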

We have discussed cascading at a previous design summit session, and on the ML. There were questions from that session around use cases that were never answered; in particular, why exposing the geographical regions and AZs was not acceptable. The cases where a proxy approach seemed to be required didn't appear to be the target use cases. I don't like how this dilutes the per-project efforts around Federation, multi-Region support and scaling patterns like Cells v2 and Routed Networks. It would be better if as a community there is a single way to consume large collections of OpenStack clouds. Federation appears to be the current approach, although there is still work needed around Quotas and having an integrated network service to bridge remote isolated networks.

There are four use cases describing why we need the Tricircle project in the reference material [1]: https://docs.google.com/presentation/d/1Zkoi4vMOGN713Vv_YO0GP6YLyjLpQ7fRbHlirpq6ZK4/edit?usp=sharing, and exposing independent geographical regions and AZs was not enough. Should I describe all the use cases in the commit message? That would make the commit message quite long. In short, here is one use case: in an OpenStack-based public cloud, for one region spanning one site or multiple sites, the end user only wants to see one endpoint. But one OpenStack instance will eventually reach its capacity limit, so we have to add more OpenStack instances into the cloud for capacity expansion; how do we then expose one endpoint to the end user? The end user still wants to add new virtual machines to the same network, and security groups should work for virtual machines in different OpenStack instances. Other use cases are described in the communication material mentioned above. In the financial sector, applications are often deployed across two sites and three data centers for high reliability, availability and durability, so one cloud region often has to support multiple data centers and multiple sites. Besides the four use cases in the material [1], there is another use case reported at the OpenStack Austin summit: https://www.openstack.org/videos/video/distributed-nfv-and-openstack-challenges-and-potential-solutions

Federation and multi-region are good solutions, but they don't provide a single endpoint exposed to the end user, which is one requirement in the use cases mentioned above; they also provide no networking automation (for example tenant-level L2/L3 networking automation and security handling) and no quota control across OpenStack instances. Cells is a good enhancement for Nova scalability, but it has some deployment limitations: 1) only Nova supports cells; 2) using RPC for inter-data-center communication makes inter-DC troubleshooting and maintenance difficult, as there is no CLI, RESTful API or other tool to manage a child cell directly, and if the link between the API cell and a child cell is broken, the child cell in the remote site is unmanageable. An analysis and comparison of these candidate solutions is also provided in the material [1].
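One of the gaps named above, quota control across OpenStack instances, can be sketched as an admission check that sums usage across all bottom instances before admitting a request against a single tenant-wide limit. This is a hypothetical illustration of the idea; the function and its arguments are not part of any actual Tricircle API.

```python
# Hypothetical sketch of cross-instance quota control: per-site usage is
# aggregated before a request is admitted, so the tenant-wide limit holds
# no matter which bottom OpenStack instance the resource lands on.

def quota_allows(usages, limit, requested):
    """Admit a request only if total usage across instances stays within limit.

    usages    -- per-bottom-instance usage of a resource, e.g. [3, 4]
    limit     -- the tenant-wide quota for that resource
    requested -- how much the new request would consume
    """
    return sum(usages) + requested <= limit
```

With per-instance quotas instead (as in plain multi-region), each site would enforce its own limit and a tenant could exceed the intended global cap by spreading resources across sites.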

The Tricircle is just applying to be a big-tent project, to be a member of and complement to the OpenStack ecosystem. The Tricircle will not require any modification to existing components; it will make use of existing or updated features of those components, and re-use Tempest test cases to ensure API compliance and consistency. No conflict will happen. As you noted, multi-region, cells and federation are possible ways to address these use cases, although some requirements are still not fulfilled. So there are many options for cloud operators, and there is no harm in the Tricircle providing one more.

If the intent is to hide orchestration complexity from the user, it feels like this would be better as extensions to Heat.

Heat doesn't provide the Nova, Cinder and Neutron APIs to the end user; instead Heat provides its own APIs, but the end user or software still wants to use the CLIs, APIs or SDKs of Nova, Cinder and Neutron. Especially in public clouds, some PaaS platforms talk to the Nova, Cinder and Neutron APIs directly.

Is Tricircle planning to be gateway for every OpenStack project?

No: Nova, Cinder and Neutron only, at most plus Glance and Ceilometer. No more.

How can we verify the API's exposed by Tricircle are indeed identical to the service's ?

As explained in the commit message and comments many times: the Tricircle reuses the Tempest test cases of these services for its own testing.

How does this impact DefCore?

If the Tempest tests pass, then the DefCore tests should also pass.

What happens if a cloud exposes Tricircle instead of exposing, say, nova directly?

It adds cross-OpenStack scheduling and networking automation capability.