
Difference between revisions of "Solum/CLI"

Solum Command Line Interface
  
== Overview ==
As of 2014-01-13, Solum does not yet have a CLI. One is required for our initial release, and efforts to code one are underway. A number of options for developing the tool are under discussion. This wiki page identifies and details the options under consideration, and describes our intended approach.
  
== Goals ==
# Produce a prototype to satisfy the [https://blueprints.launchpad.net/solum/+spec/solum-minimal-cli solum-minimal-cli] blueprint
# Produce a complete CLI for Solum that allows simple interaction with the Solum API service, satisfying the [https://blueprints.launchpad.net/solum/+spec/solum-cli solum-cli] blueprint
# Adjust the CLI (if needed) to conform to the prevailing OpenStack CLI tools and underlying resources, such as any prevailing SDK resources and client libraries.
 
  
== Blueprints ==
* https://blueprints.launchpad.net/solum/+spec/solum-cli
* https://blueprints.launchpad.net/solum/+spec/solum-minimal-cli
  
See the [https://www.dropbox.com/s/ejnu2iginqr6lau/Screenshot%202014-01-13%2014.29.47.png diagram showing how the CLI relates to the rest of the ecosystem]
  
== Mailing List Threads ==
* http://lists.openstack.org/pipermail/openstack-dev/2013-November/018505.html
  
== Minimal CLI Options ==
=== Argparse ===
 
* WIP pull request - https://review.openstack.org/#/c/66617
 
==== Pro ====
 
* Fast-track to completing an M1 CLI
 
* Relatively simple code based on Python's standard argparse library (a minimal sketch follows the Con list below)
 
* No external (potentially changing) code dependencies
 
* Modeled after Trove architecture recommendations
 
  
==== Con ====
* May require the largest porting effort of any of the options when moving to the eventual OpenStack client
* No authentication handling built in (will depend on [https://github.com/openstack/python-keystoneclient python-keystoneclient] library)
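
The sketch below illustrates the general shape of an argparse-based client: a top-level parser with one subparser per command. The command names, arguments, and handler bodies are illustrative assumptions for this page, not the code in the WIP pull request.

# Hypothetical sketch of an argparse-based 'solum' entry point.
# Command and argument names are assumptions, not the actual WIP code.
import argparse
import sys


def app_create(args):
    # A real client would POST the plan file to the Solum API here.
    print("registering plan from %s" % args.plan_file)


def assembly_create(args):
    print("creating assembly %s from plan %s" % (args.assembly, args.plan_name))


def main(argv=None):
    parser = argparse.ArgumentParser(prog='solum')
    subparsers = parser.add_subparsers(dest='command')

    create = subparsers.add_parser('app-create', help='register an application plan')
    create.add_argument('plan_file')
    create.set_defaults(func=app_create)

    asm = subparsers.add_parser('assembly-create', help='create an assembly from a plan')
    asm.add_argument('plan_name')
    asm.add_argument('--assembly', required=True)
    asm.set_defaults(func=assembly_create)

    args = parser.parse_args(argv)
    if getattr(args, 'func', None) is None:
        parser.print_help()
        return 1
    args.func(args)
    return 0


if __name__ == '__main__':
    sys.exit(main())

As noted in the Con list, authentication is not part of this sketch and would still have to come from the python-keystoneclient library.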
  
=== OSC ===
* WIP pull request/hack - https://review.openstack.org/#/c/64703/
 
* OSC now uses [https://cliff.readthedocs.org/en/latest/ Cliff]; see the Cliff [https://github.com/dreamhost/cliff/tree/master/demoapp demo app], and the sketch of a Cliff command after the Con list below.
 
* Note: dtroyer is the technical lead for OSC
 
  
==== Pro ====
* Considered by many to be the best path for integration into the future OpenStack CLI architecture
 
* Integrates authentication already
 
* dtroyer agreed to implement two Solum plugin features to provide an example
 
  
==== Con ====
* Not complete - some APIs may change, especially around authentication
* The plugin documentation is not complete - https://github.com/dtroyer/python-oscplugin
* No other project has implemented the OSC plugin yet (core OpenStack projects add their features directly into the OSC without plugins)
* There are discussions about potential OSC architectural changes within OpenStack; we want to verify that we will not be chasing large code changes
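
For comparison, here is the rough shape of a Cliff command like those in the Cliff demo app. The class, its arguments, and the output are illustrative assumptions, not Solum's actual plugin code; how the commands are wired up as an OSC plugin (typically setuptools entry points) is what the python-oscplugin example is meant to demonstrate.

# Hypothetical sketch of a Cliff command, modeled on the Cliff demo app.
# Names and behavior are assumptions, not Solum's actual plugin code.
import logging

from cliff.command import Command


class AppCreate(Command):
    """Register an application plan with the Solum API."""

    log = logging.getLogger(__name__)

    def get_parser(self, prog_name):
        parser = super(AppCreate, self).get_parser(prog_name)
        parser.add_argument('plan_file', help='path to the plan YAML file')
        return parser

    def take_action(self, parsed_args):
        # A real command would call the Solum API here, reusing whatever
        # authenticated client the host application (OSC) provides.
        self.log.info('registering plan from %s', parsed_args.plan_file)
        self.app.stdout.write('registered %s\n' % parsed_args.plan_file)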
  
== Proposed Plan ==
* Develop a prototype based on [https://review.openstack.org/66065 simple argparse pull request] to satisfy the [https://blueprints.launchpad.net/solum/+spec/solum-minimal-cli solum-minimal-cli] blueprint as part of M1 (short term, considered disposable).
* Develop a complete client based on [https://review.openstack.org/#/c/64703/ WIP OSC pull request] that can integrate with OSC to satisfy the [https://blueprints.launchpad.net/solum/+spec/solum-cli solum-cli] blueprint
* The "prototype" client will be completed first, and will only implement the minimum needed to address [https://blueprints.launchpad.net/solum/+spec/solum-minimal-cli solum-minimal-cli].
* The OSC client will obviate the argparse client, implementing everything in [https://blueprints.launchpad.net/solum/+spec/solum-cli solum-cli] once it is done.

Revision as of 20:55, 9 May 2014

Solum CLI

Installing

To install the solum CLI on any host that has Python and pip:

pip install python-solumclient
solum -h

Using

To build and run an application with Solum, you can register your app using a YAML file called a plan file. Example:

name: ex1
description: Nodejs express.
artifacts:
- name: nodeus
  artifact_type: application.heroku
  content:
    href: https://github.com/paulczar/example-nodejs-express.git

For demonstration purposes, I have saved this in a file named ex1.yaml. First register the application:

$ solum app create ex1.yaml 
+-------------+---------------------------------------------------------------------+
| Property    | Value                                                               |
+-------------+---------------------------------------------------------------------+
| description | Nodejs express.                                                     |
| uri         | http://10.0.2.15:9777/v1/plans/e7e6aaea-146d-494d-b2eb-3da8a648b87e |
| uuid        | e7e6aaea-146d-494d-b2eb-3da8a648b87e                                |
| name        | ex1                                                                 |
+-------------+---------------------------------------------------------------------+

Now you can start that application as many times as you want from that plan by creating an application assembly from it. The following command shows the creation of an assembly named 'ex1' from the plan named 'ex1' registered in the previous step.

$ solum assembly create ex1 --assembly ex1
Note: using plan_uri=http://10.0.2.15:9777/v1/plans/e7e6aaea-146d-494d-b2eb-3da8a648b87e
+-----------------+--------------------------------------------------------------------------+
| Property        | Value                                                                    |
+-----------------+--------------------------------------------------------------------------+
| status          | None                                                                     |
| description     | None                                                                     |
| application_uri | None                                                                     |
| name            | ex1                                                                      |
| trigger_uri     | http://10.0.2.15:9777/v1/public/triggers/4009c664-710b-4521-a468-cc24f04 |
|                 | 04e6b                                                                    |
| uuid            | 050ff625-d32a-483b-8df4-715ed623b8af                                     |
+-----------------+--------------------------------------------------------------------------+

Now you can watch the build process traverse the various states of BUILDING, DEPLOYING, and finally READY.

If something goes wrong, the assembly will show an ERROR status. Here is how to find out what happened:

$ solum assembly show ex1
+-----------------+--------------------------------------------------------------------------+
| Property        | Value                                                                    |
+-----------------+--------------------------------------------------------------------------+
| status          | ERROR                                                                    |
| description     | None                                                                     |
| application_uri | None                                                                     |
| name            | ex1                                                                      |
| trigger_uri     | http://10.0.2.15:9777/v1/public/triggers/4009c664-710b-4521-a468-cc24f04 |
|                 | 04e6b                                                                    |
| uuid            | 050ff625-d32a-483b-8df4-715ed623b8af                                     |
+-----------------+--------------------------------------------------------------------------+

We can look at the associated Heat stack:

$ heat stack-list
+--------------------------------------+------------+---------------+----------------------+
| id                                   | stack_name | stack_status  | creation_time        |
+--------------------------------------+------------+---------------+----------------------+
| ba6f1ecf-77f8-434f-b4ff-4555d1b71d2e | ex1        | CREATE_FAILED | 2014-05-09T20:30:26Z |
+--------------------------------------+------------+---------------+----------------------+
$ heat stack-show ex1 | grep stack_status
| stack_status         | CREATE_FAILED                                                                                             |
| stack_status_reason  | Resource CREATE failed: Error: Creation of server ex1                                                     |
|                      | failed.                                                                                                   |

Now we can look at the event history for that stack:

$ heat event-list ba6f1ecf-77f8-434f-b4ff-4555d1b71d2e
+-----------------+--------------------------------------+---------------------------------------+--------------------+----------------------+
| resource_name   | id                                   | resource_status_reason                | resource_status    | event_time           |
+-----------------+--------------------------------------+---------------------------------------+--------------------+----------------------+
| compute         | 09876afc-7547-4268-bd19-2b908f768ad9 | Error: Creation of server ex1 failed. | CREATE_FAILED      | 2014-05-09T20:30:41Z |
| compute         | ae7dc18f-5a63-48d4-af98-469e45aae52d | state changed                         | CREATE_IN_PROGRESS | 2014-05-09T20:30:27Z |
| external_access | 7a22de89-509d-457e-bfd9-e518cba6b9f2 | state changed                         | CREATE_IN_PROGRESS | 2014-05-09T20:30:26Z |
| external_access | f421fd64-6b71-495a-8fbc-9e29148f500b | state changed                         | CREATE_COMPLETE    | 2014-05-09T20:30:27Z |
+-----------------+--------------------------------------+---------------------------------------+--------------------+----------------------+

This shows that the compute service failed to create a compute instance (server).

So, let's look at that particular event:

$ heat event-show ba6f1ecf-77f8-434f-b4ff-4555d1b71d2e compute 09876afc-7547-4268-bd19-2b908f768ad9 | grep physical_resource_id
| physical_resource_id  | b282f2b9-88e2-4666-85bf-a5fd86c9979a |

Now we can look at that individual nova instance to find out why it is in ERROR state.

$ nova show b282f2b9-88e2-4666-85bf-a5fd86c9979a | grep fault
| fault | {"message": "No valid host was found. ", "code": 500, "created": "2014-05-09T20:30:40Z"} |

This indicates that the scheduler cannot find any compute nodes that have room for the new assembly.