
System Info API

STATUS: DRAFT - please help define response buckets / values

OpenStack services have many configuration options and run under a wide range of environmental conditions (operating system versions, hardware information, ...).

Currently we expose minimal information about the service. This proposal is to add an admin API that exposes service information in a standard way (a collection sketch follows the list):

  • version - api (2.0), code (sha?), release (essex-1)
  • hardware info - arch, memory, cpu,
  • system info - os, environment variables, versions of libraries used by the service?
  • service config
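
Most of these buckets could be assembled from the standard library. A minimal sketch in Python, assuming a Nova-style service; the flags argument and the version values are placeholders, not an existing interface:

import os
import platform

def build_system_info(flags, api_version, code_sha, release):
    # Illustrative only: "flags" is assumed to be the service's parsed config
    # options (e.g. nova FLAGS); the version/sha/release come from the build.
    return {
        "version": {"api": api_version, "code": code_sha, "release": release},
        "system": {
            "arch": platform.machine(),
            # platform.processor() can be empty on Linux; /proc/cpuinfo and
            # /proc/meminfo are more complete sources for cpu/memory fields.
            "cpu": platform.processor(),
            "os": {
                "name": platform.system(),      # "Linux"
                "version": platform.version(),  # "#20-Ubuntu SMP ..."
                "kernel": platform.release(),   # "3.0.0-12-generic"
            },
        },
        "environment": dict(os.environ),
        "config": dict(flags),
    }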

TODO:

  • define minimum set of recommended response pairs
  • prepare example responses for projects
  • should config be multiple sections? defaults, settings, ...

API

REQUEST:


GET /systemInfo


RESPONSE (JSON or XML):


{ "version": {
    "api": "2.0",
    "code": "8230533824fd170498e51b43dd2f20e6af410c53",
    "release": "essex-1"},
  "system": {
    "arch": "x86_64",
    "memory": "74371184",
    "cpu": "Intel(R) Xeon(R) CPU           E5645  @ 2.40GHz",
    "disk": "an array or does this belong?",
    "os": {
      "arch": "amd64",
      "name": "Linux",
      "version": "#20-Ubuntu SMP Fri Oct 7 14:56:25 UTC 2011",
      "kernel": "3.0.0-12-generic",
    }
  },
  "environment": {
    "USER": "stack",
    "HOME": "/opt/stack",
    "PWD": "/opt/stack/nova/bin",
    "SHELL": "/bin/bash"
  },
  "config": {
     "storage_availability_zone": "nova",
     "vc_image_name": "vc_image",
     "ec2_dmz_host": "50.56.12.197",
     "fixed_range": "10.4.96.0/20",
     "compute_topic": "compute",
     "vsa_topic": "vsa",
     "fixed_range_v6": "fd00::/48",
     "glance_api_servers": ["50.56.12.197:9292"],
     "user_cert_subject": "/C=US/ST=California/L=MountainView/O=AnsoLabs/OU=NovaDev/CN=%s-%s-%s",
     "s3_dmz": "50.56.12.197",
     "quota_ram": "51200",
     "find_host_timeout": "30",
  }
}
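
A possible client call, sketched with Python 2's urllib2. Only GET /systemInfo comes from this proposal; the host, port, and admin token are placeholders, and where the path mounts under versioned URLs is still open:

import json
import urllib2

# Placeholder endpoint and token; port 8774 just follows common nova-api deployments.
req = urllib2.Request("http://nova-api.example.com:8774/systemInfo",
                      headers={"X-Auth-Token": "ADMIN_TOKEN",
                               "Accept": "application/json"})
info = json.loads(urllib2.urlopen(req).read())
print(info["version"]["release"])        # e.g. "essex-1"
print(info["system"]["os"]["kernel"])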


Questions

What about all the workers (compute hosts, swift nodes, ...)?

  • does the solution need to scale to 1000s of workers if we expose information for workers?
  • perhaps an API call to get a list of "resources" (workers/hosts/nodes) and then the ability to dynamically query the status of an individual node? (see the sketch below)
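
A rough client-side sketch of that two-step pattern, again in Python 2; both endpoints (/systemInfo/resources and /systemInfo/resources/<name>) are hypothetical, as are the host, port, and token:

import json
import urllib2

BASE = "http://nova-api.example.com:8774"   # placeholder host/port
HEADERS = {"X-Auth-Token": "ADMIN_TOKEN", "Accept": "application/json"}

def get_json(path):
    req = urllib2.Request(BASE + path, headers=HEADERS)
    return json.loads(urllib2.urlopen(req).read())

# Hypothetical endpoints: one cheap listing call, then per-node detail on
# demand, so the API never has to return 1000s of full reports at once.
names = get_json("/systemInfo/resources")["resources"]
detail = get_json("/systemInfo/resources/" + names[0])
print(detail["system"]["os"]["kernel"])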

Existing Projects

For inspiration and ideas for what works...

  • jenkins /systemInfo (screenshot: http://c213515.r15.cf1.rackcdn.com/30f5135662fa83e380a17e2c53f735ae.png)