https://wiki.openstack.org/w/api.php?action=feedcontributions&user=Alexei-kornienko&feedformat=atomOpenStack - User contributions [en]2024-03-28T22:14:57ZUser contributionsMediaWiki 1.28.2https://wiki.openstack.org/w/index.php?title=Meetings/Oslo&diff=57222Meetings/Oslo2014-07-02T10:53:31Z<p>Alexei-kornienko: /* Agenda for Next Meeting */</p>
<hr />
<div>Oslo will hold IRC meetings weekly at the time scheduled below.<br />
<br />
If there's an Oslo topic you think warrants a project meeting, please add it to the agenda section below and notify the [http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev openstack-dev@lists.openstack.org] mailing list. Please give everyone at least 24 hours notice.<br />
<br />
== Agenda for Next Meeting ==<br />
<br />
''Please include your IRC handle along with any agenda items you add, so we know who to call on in the meeting.''<br />
<br />
Date: 4 July 2014<br />
<br />
* oslo.messaging performance and actions needed to improve it - Alexei_987<br />
<br />
<br />
Date: 27 June 2014<br />
<br />
* Review action items from previous meeting<br />
** dhellmann clarify language about when to use _() vs. _LE() (see the example after this list)<br />
** Project liaisons review app-agnostic-logging-parameters: https://review.openstack.org/95281<br />
** Project liaisons review oslo-config-generator: https://review.openstack.org/100946<br />
* Some Oslo hacking at [[Sprints/ParisJuno2014]]<br />
* Red flags for/from liaisons<br />
* Adoption status<br />
** https://etherpad.openstack.org/p/juno-oslo-adoption-status<br />
* Spec status<br />
** config filter - https://review.openstack.org/#/c/97228/<br />
** oslo.serialization - https://review.openstack.org/#/c/97315/<br />
** remove context adapter - https://review.openstack.org/#/c/95870/<br />
* Graduation status<br />
** oslo.i18n<br />
* Release review<br />
* review priorities for this week<br />
** db migration test issues (devananda)<br />
*** https://bugs.launchpad.net/ironic/+bug/1327397 fixed in nova and ironic, not oslo.<br />
*** https://bugs.launchpad.net/nova/+bug/1328997 partial fix in nova; incorrectly closed in oslo.db.<br />
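<br />
''For reference, a minimal hypothetical sketch of the convention under discussion: _() marks user-facing strings such as exception messages, while _LE() marks strings destined for error-level log records (the exact import depends on the project's own i18n module):''<br />
<pre><br />
import logging<br />
<br />
from nova.i18n import _, _LE  # hypothetical; each project exposes its own i18n module<br />
<br />
LOG = logging.getLogger(__name__)<br />
<br />
<br />
def do_resize(instance_id, flavor):<br />
    # placeholder for the real work; fails to illustrate the error path<br />
    raise ValueError(flavor)<br />
<br />
<br />
def resize_instance(instance_id, flavor):<br />
    try:<br />
        do_resize(instance_id, flavor)<br />
    except ValueError:<br />
        # _LE() is only for translated log messages at error level<br />
        LOG.error(_LE('Failed to resize instance %s'), instance_id)<br />
        # _() is for messages that reach the user (exceptions, API responses)<br />
        raise RuntimeError(_('Instance %s could not be resized') % instance_id)<br />
</pre><br />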
<br />
== General Information ==<br />
=== Regular Meeting Schedule ===<br />
* What day: Friday<br />
* What time: [http://www.timeanddate.com/worldclock/converted.html?iso=20140425T16&p1=0&p2=2133&p3=195&p4=224&p5=43 1600 UTC]<br />
* Where: #openstack-meeting-alt on freenode<br />
* Who: All are welcome to participate<br />
<br />
=== Notes from Previous Meetings ===<br />
<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-20-16.01.html Jun 20, 2014] - topics: oslo.db initial release; oslo.messaging good progress in neutron; alpha releases of 5 libraries next week; oslo.db test bugs reported by devananda<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-13-16.00.html Jun 13, 2014] - topics: oslo.db alpha release; db migration bug; <br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-06-16.00.html Jun 06, 2014] - topics: juno specs, spec approval process<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-05-30-16.00.html May 30, 2014]<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-05-23-16.01.html May 23, 2014] - topics: osprofile (postponed), run_test.sh, juno specs, oslo.test issue in tempest<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-05-09-16.02.html May 09, 2014] - topics: oslo-specs, oslo.messaging, summit prep, oslo.db, oslo.i18n<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-04-25-16.00.html April 25, 2014] - topics: oslotest, oslo.db, oslo.i18n, creating a specs repo<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-02-28-14.00.html Feb 28, 2014] - topics: icehouse feature freeze; syncing cinder & nova; uuidutils<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-02-14-14.01.html Feb 14, 2014] - topics: oslo.db, icehouse-3<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-01-31-14.01.html Jan 31, 2014] - topics: translation, deprecation policy, adopting taskflow, stevedore, and cliff<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-11-15-14.01.html Nov 15, 2013] - topics: translation, pecan/wsme common code, icehouse scheduling<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-10-25-14.00.html Oct 25, 2013] - topics: deprecated decorator and delayed translation implementation plan<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-10-11-14.00.html Oct 11, 2013] - topics: delayed translations<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-08-16-14.00.html Aug 16, 2013] - topic was new messaging API, message security and reject/requeue/ack<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-07-19-14.00.html July 19, 2013] - topic was new messaging API, message security, qpid/proton messaging driver and removing logging dependency on eventlet<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-06-07-14.00.html June 7, 2013] - topic was new messaging API and message security<br />
* [http://eavesdrop.openstack.org/meetings/oslo/2013/oslo.2013-05-03-14.01.html May 3, 2013] - topic was new messaging API and message security<br />
<br />
(In case the list of notes is not up to date, please consult http://eavesdrop.openstack.org/meetings/oslo/)</div>Alexei-kornienkohttps://wiki.openstack.org/w/index.php?title=Rally&diff=29838Rally2013-09-10T09:56:02Z<p>Alexei-kornienko: /* Setup */</p>
<hr />
<div>= Introduction =<br />
Rally is a Benchmark-as-a-Service project for OpenStack. <br />
<br />
Rally is intended to provide the community with a benchmarking tool capable of performing '''specific''', '''complicated''' and '''reproducible''' test cases against '''real deployment''' scenarios.<br />
<br />
In the OpenStack ecosystem there are currently several tools that are helpful in carrying out the benchmarking process for an OpenStack deployment. To name a few, there are DevStack and FUEL, which are intended for deploying and managing OpenStack clouds, the Tempest testing framework that validates OpenStack APIs, some tracing facilities like ''Tomograph'' with ''Zipkin'', and so on. The challenge, however, is to compile all these tools together on a reproducible basis. That can be a rather difficult task, since the number of compute nodes in a practical deployment can be really huge and also because one may be willing to use lots of different deployment strategies that pursue different goals (e.g., while benchmarking the Nova Scheduler, one usually does not care about virtualization details, but is more concerned with the infrastructure topologies; while in other specific cases it may be the virtualization technology that matters). Compiling a bunch of already existing benchmarking facilities into one project, making it flexible to user requirements and ensuring the reproducibility of test results, is exactly what Rally does.<br />
<br />
= Architecture =<br />
<br />
Rally is basically split into 4 main components:<br />
<br />
# Deployment Engine, which is responsible for processing and deploying VM images (using DevStack or FUEL according to user’s preferences). The engine can do one of the following:<br />
#* deploying an OS on already existing VMs;<br />
#* starting VMs from a VM image with pre-installed OS and OpenStack;<br />
#* deploying multiple VMs, each hosting an OpenStack compute node, based on a VM image.<br />
# VM Provider, which interacts with cloud provider-specific interfaces to load and destroy VM images;<br />
# Benchmarking Tool, which carries out the benchmarking process in several stages:<br />
#* runs Tempest tests, reduced to a 5-minute run (to save the usually expensive computing time);<br />
#* runs the user-defined test scenarios (using the Rally testing framework);<br />
#* collects all the test results and processes them with the Zipkin tracer;<br />
#* puts together a benchmarking report and stores it on the machine Rally was launched on.<br />
# Orchestrator, which is the central component of the system. It uses the Deployment Engine to run control and compute nodes and to launch an OpenStack distribution and, after that, calls the Benchmarking Tool to start the benchmarking process.</div>Alexei-kornienkohttps://wiki.openstack.org/w/index.php?title=Rally/HowTo&diff=29837Rally/HowTo2013-09-10T09:55:57Z<p>Alexei-kornienko: /* HowTo Rally */</p>
<hr />
<div>= HowTo Rally =<br />
<br />
== Installing Rally on a fresh Ubuntu ==<br />
<br />
Install these requirements: <br />
<br />
sudo apt-get update<br />
sudo apt-get install git-core python-dev python-pip libevent-dev libssl-dev<br />
sudo pip install virtualenv<br />
<br />
<br />
Clone rally: <br />
<br />
git clone https://github.com/stackforge/rally.git<br />
<br />
Install rally requirements in venv:<br />
<br />
virtualenv rally/.venv<br />
. rally/.venv/bin/activate<br />
pip install -r rally/requirements.txt<br />
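<br />
With the requirements in place, Rally itself can then be installed into the same virtualenv in development mode (a minimal sketch, assuming the standard setup.py/pbr layout used by OpenStack projects):<br />
<br />
cd rally<br />
pip install -e .<br />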
<br />
<br />
= Setup =<br />
<br />
<br />
== Version 0.1 ==<br />
The initial version has many configuration options hardcoded, and some things need to be done manually. Configuration and setup will be improved in future versions.<br />
<br />
Currently Rally is tested on 64-bit Ubuntu 12.04 Server.<br />
=== Install data collector node ===<br />
[http://twitter.github.io/zipkin/index.html Zipkin] is used to collect profiling information.<br />
<br />
Zipkin uses a lot of RAM, so the collector node should have at least 6 GB of RAM. Later we may choose an alternative collector/visualization solution.<br />
<br />
Install with: <br />
$ git clone https://github.com/twitter/zipkin.git<br />
<br />
Zipkin uses SQLite by default, but it doesn't hold up under production load. Currently we use [http://cassandra.apache.org/ Cassandra] to store the collected data.<br />
Install Cassandra DB:<br />
$ wget http://mirror.metrocast.net/apache/cassandra/2.0.0/apache-cassandra-2.0.0-bin.tar.gz<br />
$ tar xvzf apache-cassandra-2.0.0-bin.tar.gz<br />
$ sudo mkdir /var/lib/cassandra<br />
$ sudo chmod a+rw /var/lib/cassandra<br />
$ sudo mkdir /var/log/cassandra<br />
$ sudo chmod a+rw /var/log/cassandra<br />
$ apache-cassandra-2.0.0/bin/cassandra &> cassandra-out<br />
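<br />
Optionally, check that Cassandra has started before creating the schema (nodetool ships in the same tarball):<br />
$ apache-cassandra-2.0.0/bin/nodetool status<br />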
<br />
Create DB schema for zipkin:<br />
$ apache-cassandra-2.0.0/bin/cassandra-cli -host localhost -port 9160 -f zipkin/zipkin-cassandra/src/schema/cassandra-schema.txt<br />
<br />
Zipkin needs 3 services running. The first time, start each one separately and wait for it to load completely so that it can download all of its dependencies:<br />
$ bin/collector<br />
$ bin/query<br />
$ bin/web<br />
Later you can run them all together via screen:<br />
$ screen -dmS zipkin-collector bin/collector cassandra<br />
$ screen -dmS zipkin-query bin/query cassandra<br />
$ screen -dmS zipkin-web bin/web<br />
<br />
The collector node needs to be reachable from all cloud servers, and its IP should be defined in the deploy section of the configuration file:<br />
'collectors': {<br />
'zipkin': '#data_collector_ip#'<br />
}<br />
<br />
<br />
== Work with Rally ==</div>Alexei-kornienkohttps://wiki.openstack.org/w/index.php?title=TransactionManager&diff=24294TransactionManager2013-06-17T07:05:16Z<p>Alexei-kornienko: TrasactionManager utility that should simplify error handling</p>
<hr />
<div>In OpenStack we have a lot of code that creates/allocates resources (networks, volumes, quotas, etc.), but we don't have a clear way to release those resources in case of failure.<br />
Release is currently done manually in an "except" block. This approach leads to more complicated error handling and potential resource leaks. Nova and other projects could use such a utility to improve error handling in many places.<br />
Examples:<br />
https://bugs.launchpad.net/nova/+bug/1173413<br />
https://bugs.launchpad.net/nova/+bug/1161657<br />
<br />
We could implement a utility that helps us manage such resources, allowing us to simplify the existing error handling and to fix resource leak issues.<br />
I propose implementing a "transaction" system that manages (releases) such resources in a transparent way:<br />
<br />
1) Create a decorator that marks functions that operate on such resources.<br />
Example:<br />
<pre><br />
@managed_resource(rollback=deallocate_network)<br />
def allocate_network(...<br />
</pre><br />
Such a decorator (called ''rollback_required'' in the draft below) will make sure that the decorated function is only called inside a function marked with the second decorator, ''@transactional''.<br />
<br />
2) The ''transactional'' decorator will automatically call the rollback function for each resource if an exception is raised in the decorated function.<br />
<br />
Such a transaction system could also be used to manage DB transactions explicitly, as the sketch below illustrates.<br />
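<br />
For illustration, a minimal hypothetical sketch of that idea, reusing the decorators from the draft below together with a SQLAlchemy-style session object:<br />
<pre><br />
# Hypothetical example: the DB write registers session.rollback() as its<br />
# compensating action, so a failure later in the same transaction also<br />
# undoes the rows that were flushed here.<br />
def rollback_db_write(session, instance):<br />
    session.rollback()<br />
<br />
<br />
@rollback_required(rollback_db_write)<br />
def save_instance(session, instance):<br />
    session.add(instance)<br />
    session.flush()<br />
<br />
<br />
@transactional()<br />
def boot_instance(session, instance):<br />
    save_instance(session, instance)<br />
    allocate_stuff('network', 'foo', 'bar')  # if this raises, session.rollback() runs<br />
</pre><br />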
<br />
Please see implementation draft below:<br />
<pre><br />
#!/usr/bin/env python<br />
<br />
import functools<br />
<br />
<br />
class TransactionManager(object):<br />
    """Collects rollback actions registered while a transaction is active."""<br />
<br />
    # NOTE: a simple module-level stack of active contexts; a real<br />
    # implementation would need to make this thread/greenthread local.<br />
    CONTEXTS = []<br />
<br />
    def __init__(self, rollback_for, no_rollback_for):<br />
        self._rollbacks = []<br />
        self._rollback_for = rollback_for<br />
        self._no_rollback_for = no_rollback_for or set()<br />
<br />
    def __enter__(self):<br />
        TransactionManager.CONTEXTS.append(self)<br />
        return self<br />
<br />
    def __exit__(self, exc_type, exc_value, traceback):<br />
        try:<br />
            if (exc_value is None or exc_type in self._no_rollback_for or<br />
                    (self._rollback_for is not None and<br />
                     not isinstance(exc_value, tuple(self._rollback_for)))):<br />
                return  # no exception, or one we do not roll back for<br />
            # an exception we care about escaped: run the registered rollbacks<br />
            for rollback, args, kwargs in self._rollbacks:<br />
                rollback(*args, **kwargs)<br />
        except Exception as e:<br />
            # do not let a failing rollback mask the original error<br />
            print e<br />
        finally:<br />
            TransactionManager.CONTEXTS.pop()<br />
<br />
    def add_rollback(self, rollback, args, kwargs):<br />
        self._rollbacks.append((rollback, args, kwargs))<br />
<br />
    @staticmethod<br />
    def current():<br />
        if not TransactionManager.CONTEXTS:<br />
            raise ValueError('Trying to call method without transaction context')<br />
        return TransactionManager.CONTEXTS[-1]<br />
<br />
<br />
def transactional(rollback_for=None, no_rollback_for=None):<br />
    """Run the decorated function inside a new transaction context."""<br />
    def decorator(fn):<br />
        @functools.wraps(fn)<br />
        def wrapper(*args, **kwargs):<br />
            with TransactionManager(rollback_for, no_rollback_for):<br />
                return fn(*args, **kwargs)<br />
        return wrapper<br />
    return decorator<br />
<br />
<br />
def rollback_required(rollback):<br />
    """Register 'rollback' (called with the same arguments as the decorated<br />
    function) in the current transaction before invoking the function."""<br />
    def decorator(fn):<br />
        @functools.wraps(fn)<br />
        def wrapper(*args, **kwargs):<br />
            TransactionManager.current().add_rollback(rollback, args, kwargs)<br />
            return fn(*args, **kwargs)<br />
        return wrapper<br />
    return decorator<br />
<br />
<br />
def deallocate_stuff(name, *args):<br />
    print 'Deallocating stuff - %s' % name<br />
<br />
<br />
@rollback_required(deallocate_stuff)<br />
def allocate_stuff(name, other, stuff):<br />
    print 'Allocating important stuff named - %s' % name<br />
<br />
<br />
@transactional()<br />
def run_instance(name):<br />
    print 'Running %s' % name<br />
    allocate_stuff('network', 'foo', 'bar')<br />
    raise TypeError<br />
    print 'End run'  # never reached; the rollback above runs instead<br />
<br />
<br />
try:<br />
    run_instance('World')<br />
except Exception as e:<br />
    pass<br />
<br />
</pre></div>Alexei-kornienko