Auto-scaling SIG

Status: In review

Original proposal: http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001656.html

Mission

This SIG aims to improve the experience of developing, operating, and using auto-scaling and its related features (like metering, cluster scheduling, and life-cycle management), and to coordinate efforts across projects and communities (like k8s cluster auto-scaling on OpenStack). The SIG also provides a central place for tests, documentation, and even common libraries for auto-scaling features.

The SIG is expected to focus more on auto-scaling user workloads; however, work on auto-scaling infrastructure is also welcome, especially considering that user workloads in an undercloud are actually infrastructure in the corresponding overcloud.

Background

OpenStack provides multiple methods to auto-scale your cluster (like using a Heat AutoScalingGroup, a Senlin Cluster, etc.). However, without general coordination across projects, it may not be easy for users and ops to achieve auto-scaling on OpenStack. Developers tend to focus on individual projects rather than cross-project integration. Most of the components required for auto-scaling already exist within OpenStack, but we need to provide a simpler way for users and ops to adopt auto-scaling, and allow developers to coordinate with each other instead of implementing the same things all over again.
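As a rough illustration of one of those existing methods, the sketch below builds a tiny Heat template containing an OS::Heat::AutoScalingGroup and writes it to a file that can then be launched with the standard openstack stack create CLI. This is only an example under assumed values: the image, flavor, network, and file names are placeholders, not configurations recommended by the SIG.

# Minimal sketch (assumed names, not an official SIG example): generate a Heat
# template with an OS::Heat::AutoScalingGroup of 1-3 Nova servers and save it
# to disk so it can be created with:
#   openstack stack create -t autoscaling-demo.yaml autoscaling-demo
import yaml  # PyYAML

template = {
    'heat_template_version': '2018-08-31',
    'resources': {
        'asg': {
            'type': 'OS::Heat::AutoScalingGroup',
            'properties': {
                'min_size': 1,   # never shrink below one server
                'max_size': 3,   # never grow beyond three servers
                'resource': {
                    'type': 'OS::Nova::Server',
                    'properties': {
                        'image': 'cirros',                     # placeholder image name
                        'flavor': 'm1.small',                  # placeholder flavor name
                        'networks': [{'network': 'private'}],  # placeholder network name
                    },
                },
            },
        },
    },
}

# Write the template so it can be passed to the CLI with the -t option.
with open('autoscaling-demo.yaml', 'w') as f:
    yaml.safe_dump(template, f, default_flow_style=False)

Scaling policies (for example OS::Heat::ScalingPolicy resources triggered by alarms) would be layered on top of such a group; Senlin offers an analogous cluster-based approach.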

Goals

  • Set up the SIG: initial documentation, meetings, and mailing list discussions
  • Create Forum and PTG sessions for the Auto-scaling SIG

Getting Involved

We're currently working on a structure to make it easier for everyone to get involved. Our initial git repository, documentation, and other information will be up soon.

Community Infrastructure / Resources

  • Wiki: this page
  • SIG StoryBoard (an authoritative list of all ongoing work within the SIG): https://storyboard.openstack.org/#!/project/openstack/auto-scaling-sig
  • openstack-discuss mailing list (use the [auto-scaling] subject tag): http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
  • IRC channel: #openstack-auto-scaling on Freenode IRC
  • Regular IRC meetings have not been arranged yet
  • Patch reviews (Gerrit): https://review.openstack.org/#/q/project:openstack/auto-scaling-sig

SIG Chairs

  • Rico Lin
  • Duc Truong
  • Zane Bitter

We welcome anyone who is willing to join and help, and we are certainly willing to re-elect chairs as soon as our initial setup is done.

Upcoming events

TBD

Past events

Project liaisons

This is a good idea, but we need some more discussion on who the liaisons would be and how we can set this up.