https://wiki.openstack.org/w/api.php?action=feedcontributions&user=David.b.kinder&feedformat=atomOpenStack - User contributions [en]2024-03-28T23:45:45ZUser contributionsMediaWiki 1.28.2https://wiki.openstack.org/w/index.php?title=StarlingX/Docs_and_Infra&diff=164364StarlingX/Docs and Infra2018-08-28T22:29:32Z<p>David.b.kinder: /* Team members */</p>
<hr />
<div>=== Documentation and Infrastructure Sub-project ===<br />
<br />
Welcome to the Docs and Infra sub-project!<br />
<br />
==== Team members ====<br />
<br />
* Project Lead: '''Bruce Jones''' <bruce.e.jones@intel.com><br />
* Technical Lead: '''Abraham Arce Moreno''' <abraham.arce.moreno@intel.com><br />
* Contributors: '''Bruce Jones''' <bruce.e.jones@intel.com>; '''Dean Troyer''' <dtroyer@gmail.com>; '''Michael Tullis''' <michael.l.tullis@intel.com>; '''Scott Rifenbark''' <scottx.rifenbark@intel.com>; '''Hazzim Anaya Casas''' <hazzim.i.anaya.casas@intel.com>; '''Fernando Hernandez Gonzalez''' <fernando.hernandez.gonzalez@intel.com>; '''Greg Waines''' <Greg.Waines@windriver.com><br />
<br />
==== Weekly call ====<br />
<br />
We will hold a weekly team call on Wednesdays at 12:30 PDT / 1930 UTC. All are welcome.<br />
<br />
Call details<br />
<br />
'''Zoom link: https://zoom.us/j/342730236'''<br />
<br />
Dialing in from a phone:<br />
* Dial (for higher quality, use a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923<br />
* Meeting ID: 342 730 236<br />
* International numbers available: https://zoom.us/u/ed95sU7aQ<br />
<br />
Agenda and meeting minutes [https://etherpad.openstack.org/p/stx-documentation are in this Etherpad].<br />
<br />
==== Work items ====<br />
* All Storyboard stories created for this team should use the tag "stx.docs" and the prefix [Doc].<br />
* The work items for this team can be found in Storyboard [https://storyboard.openstack.org/#!/story/list?status=active&project_group_id=86&tags=stx.docs here].<br />
* The bugs open against the Docs and Infra project can be found in Storyboard [https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.bug&tags=stx.docs&project_group_id=86 here] or in Launchpad [https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.docs here].<br />
* The EtherCalc sheet containing a previous version of the Story list (now mostly obsolete) is [https://ethercalc.openstack.org/lwq0516fx2q4 here].<br />
<br />
==== Infrastructure ====<br />
* Our documentation is hosted in the stx-docs repo.<br />
* The source for the https://starlingx.io home page is at https://github.com/iamweswilson/starling-landing<br />
<br />
===== General Documentation =====<br />
<br />
* [[StarlingX/Documentation]]<br />
<br />
===== API Documentation =====<br />
<br />
* [[StarlingX/Developer_Guide/API_Documentation]]<br />
<br />
===== ToDo =====<br />
<br />
* What are we missing? What documents are needed? What other infra needs to be built?</div>David.b.kinderhttps://wiki.openstack.org/w/index.php?title=StarlingX&diff=162605StarlingX2018-07-09T18:51:42Z<p>David.b.kinder: </p>
<hr />
<div>__NOTOC__<br />
<center><br />
<br />
== Welcome to the StarlingX Project ==<br />
</center><br />
StarlingX is a fully featured, high-performance Edge Cloud software stack based on the [https://www.windriver.com/products/titanium-cloud/ Wind River® Titanium Cloud] R5 product. Intel and Wind River have jointly open-sourced this software, and we invite you to download, build, install, and run it.<br />
<br />
Wind River Titanium Cloud was originally built on open source components that were then extended and hardened to meet critical infrastructure requirements, including high availability, fault management, and performance management. This software provides numerous features and capabilities to enable 24/7 operation of mission-critical applications.<br />
<br />
The StarlingX project opens all of these enhancements to the open source community. Please join us as we build the infrastructure stack for Edge Computing.<br />
----<br />
<!-- the rest of the page is a two-column table --><br />
{| style="border-collapse: separate; border-spacing: 25px;"<br />
|style="vertical-align:top; width:50%;" |<br />
<!-- left column contents --><br />
== Documentation ==<br />
<br />
These three documents will help get you started building, installing, and validating your installation of StarlingX:<br />
* [[StarlingX/Developer Guide|Developer Guide]]<br />
* [[StarlingX/Installation Guide|Installation Guide]]<br />
* [[StarlingX/Validation Guide|Validation Guide]]<br />
<br />
== Code ==<br />
The StarlingX project uses Gerrit as its web-based code change management and review tool.<br />
* [https://git.openstack.org/cgit/openstack/stx Gerrit repositories] maintain the StarlingX code; build instructions are in the [[StarlingX/Developer Guide]]<br />
* [https://review.openstack.org/#/admin/projects/?filter=stx StarlingX Gerrit Projects] and [https://review.openstack.org/#/q/status:open+AND+project:%255Eopenstack/stx-%2540 Open StarlingX project reviews]<br />
* [https://review.openstack.org/#/dashboard/?foreach=(project:openstack/stx-clients%20OR%20project:openstack/stx-config%20OR%20project:openstack/stx-fault%20OR%20project:openstack/stx-gplv2%20OR%20project:openstack/stx-gplv3%20OR%20project:openstack/stx-gui%20OR%20project:openstack/stx-ha%20OR%20project:openstack/stx-integ%20OR%20project:openstack/stx-manifest%20OR%20project:openstack/stx-metal%20OR%20project:openstack/stx-nfv%20OR%20project:openstack/stx-root%20OR%20project:openstack/stx-tis-repo%20OR%20project:openstack/stx-tools%20OR%20project:openstack/stx-update%20OR%20project:openstack/stx-upstream%20OR%20project:openstack/stx-utils)%20status:open%20NOT%20owner:self%20NOT%20label:Workflow%3C=-1%20label:Verified%3E=1,zuul%20NOT%20reviewedby:self&title=StarlingX%20Review%20Inbox&Needs%20final%20%202=label:Code-Review%3E=2%20limit:50%20NOT%20label:Code-Review%3C=-1,self&Passed%20Zuul,%20No%20Negative%20Feedback%20(Small%20Fixes)=NOT%20label:Code-Review%3E=2%20NOT%20label:Code-Review%3C=-1,starlingx-core%20delta:%3C=10&Passed%20Zuul,%20No%20Negative%20Feedback=NOT%20label:Code-Review%3E=2%20NOT%20label:Code-Review%3C=-1,starlingx-core%20delta:%3E10&Needs%20Feedback%20(Changes%20older%20than%205%20days%20that%20have%20not%20been%20reviewed%20by%20anyone)=NOT%20label:Code-Review%3C=-1%20NOT%20label:Code-Review%3E=1%20age:5d&You%20are%20a%20reviewer,%20but%20haven't%20voted%20in%20the%20current%20revision=NOT%20label:Code-Review%3C=-1,self%20NOT%20label:Code-Review%3E=1,self%20reviewer:self&Wayward%20Changes%20(Changes%20with%20no%20code%20review%20in%20the%20last%202days)=NOT%20is:reviewed%20age:2d StarlingX Gerrit Review Dashboard]<br />
<br />
== Bug Tracking ==<br />
<br />
* [https://storyboard.openstack.org/#!/worklist/354 StarlingX Bug List]<br />
** This list is sorted manually by drag and drop. <br />
** Please create bugs in Storyboard for any issues found, filed against one of the stx-* projects. If you can't find the right project, use stx-integ.<br />
** After you create the bug, please add it to the Bug Worklist (link above). <br />
** If you don't see the "add a card" button on the Worklist, you need to be added as a user of the Worklist. Please contact bruce.e.jones@intel.com to be added. <br />
<br />
== References ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-notes The list of repos] and other things<br />
<br />
== OpenStack Documentation ==<br />
<br />
These are references to general OpenStack material:<br />
<br />
* [https://docs.openstack.org/infra/manual/developers.html Developer's Guide]<br />
* [https://docs.openstack.org/infra/manual/creators.html Project Creator's Guide]<br />
* [https://wiki.openstack.org/wiki/How_To_Contribute The Contributors Guide] (the older wiki page)<br />
* [https://governance.openstack.org/tc/reference/project-testing-interface.html Project Testing Interface]<br />
<br />
|style="vertical-align:top; width:50%;" |<br />
<!-- right column contents --><br />
== Culture ==<br />
<br />
We are proud to be an OpenStack Foundation project!<br />
* We support and adhere to the [https://www.openstack.org/legal/community-code-of-conduct/ OpenStack community Code of Conduct]<br />
* We support and fully embrace the [https://governance.openstack.org/tc/reference/opens.html Four Opens]<br />
* [[StarlingX/Contribution Guidelines|Contribution Guidelines]]<br />
<br />
== Meetings ==<br />
<br />
Weekly call every Wednesday at 7am PDT / 1400 UTC<br />
<br />
==== Next Meeting: Wednesday (July 11) at 7am PDT / 1400 UTC ====<br />
<br />
==== Call details ==== <br />
<br />
* ''' Zoom link: https://zoom.us/j/342730236 '''<br />
* ''' Dialing in from a phone: '''<br />
** Dial (for higher quality, use a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923<br />
** Meeting ID: 342 730 236<br />
** International numbers available: https://zoom.us/u/ed95sU7aQ<br />
<br />
=== Agenda ===<br />
<br />
Please feel free to add your topic to the agenda, along with your name, so we know whom to ping during the meeting.<br />
<br />
* PTG planning<br />
* Documentation<br />
** Project map<br />
<br />
== Status and Planning ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-status Status Meeting Notes]<br />
* [https://storyboard.openstack.org/#!/project_group/86 StarlingX Storyboards]<br />
* [https://etherpad.openstack.org/p/stx-planning StarlingX Planning] (contains sub-projects and Worklists)<br />
* [https://etherpad.openstack.org/p/stx-PTG-agenda StarlingX Denver PTG Agenda]<br />
<br />
== Upstream Status ==<br />
<br />
The StarlingX code base contains a number of out-of-tree patches against other open source components. One of our highest priorities is to contribute those changes to their upstream communities.<br />
<br />
TODO: Add a link to our Dashboard showing the status of upstream submissions<br />
|}</div>David.b.kinderhttps://wiki.openstack.org/w/index.php?title=StarlingX&diff=162221StarlingX2018-06-21T17:13:50Z<p>David.b.kinder: </p>
<hr />
<div>__NOTOC__<br />
<center><br />
[[File:Starlingx-logo-300w.png|center]]<br />
<br />
== Welcome to the StarlingX Project ==<br />
</center><br />
StarlingX is a fully featured, high-performance Edge Cloud software stack based on the [https://www.windriver.com/products/titanium-cloud/ Wind River® Titanium Cloud] R5 product. Intel and Wind River have jointly open-sourced this software, and we invite you to download, build, install, and run it.<br />
<br />
Wind River Titanium Cloud was originally built on open source components that were then extended and hardened to meet critical infrastructure requirements, including high availability, fault management, and performance management. This software provides numerous features and capabilities to enable 24/7 operation of mission-critical applications.<br />
<br />
The StarlingX project opens all of these enhancements to the open source community. Please join us as we build the infrastructure stack for Edge Computing.<br />
----<br />
<!-- the rest of the page is a two-column table --><br />
{| style="border-collapse: separate; border-spacing: 25px;"<br />
|style="vertical-align:top; width:50%;" |<br />
<!-- left column contents --><br />
== Documentation ==<br />
<br />
These three documents will help get you started building, installing, and validating your installation of StarlingX:<br />
* [[StarlingX/Developer Guide|Developer Guide]]<br />
* [[StarlingX/Installation Guide|Installation Guide]]<br />
* [[StarlingX/Validation Guide|Validation Guide]]<br />
<br />
== Code ==<br />
The StarlingX project uses Gerrit as its web-based code change management and review tool.<br />
* [https://git.openstack.org/cgit/openstack/stx Gerrit repositories] maintain the StarlingX code; build instructions are in the [[StarlingX/Developer Guide]]<br />
* [https://review.openstack.org/#/admin/projects/?filter=stx StarlingX Gerrit Projects] and [https://review.openstack.org/#/q/status:open+AND+project:%255Eopenstack/stx-%2540 Open StarlingX project reviews]<br />
* [https://review.openstack.org/#/dashboard/?foreach=(project:openstack/stx-clients%20OR%20project:openstack/stx-config%20OR%20project:openstack/stx-fault%20OR%20project:openstack/stx-gplv2%20OR%20project:openstack/stx-gplv3%20OR%20project:openstack/stx-gui%20OR%20project:openstack/stx-ha%20OR%20project:openstack/stx-integ%20OR%20project:openstack/stx-manifest%20OR%20project:openstack/stx-metal%20OR%20project:openstack/stx-nfv%20OR%20project:openstack/stx-root%20OR%20project:openstack/stx-tis-repo%20OR%20project:openstack/stx-tools%20OR%20project:openstack/stx-update%20OR%20project:openstack/stx-upstream%20OR%20project:openstack/stx-utils)%20status:open%20NOT%20owner:self%20NOT%20label:Workflow%3C=-1%20label:Verified%3E=1,zuul%20NOT%20reviewedby:self&title=StarlingX%20Review%20Inbox&Needs%20final%20%202=label:Code-Review%3E=2%20limit:50%20NOT%20label:Code-Review%3C=-1,self&Passed%20Zuul,%20No%20Negative%20Feedback%20(Small%20Fixes)=NOT%20label:Code-Review%3E=2%20NOT%20label:Code-Review%3C=-1,starlingx-core%20delta:%3C=10&Passed%20Zuul,%20No%20Negative%20Feedback=NOT%20label:Code-Review%3E=2%20NOT%20label:Code-Review%3C=-1,starlingx-core%20delta:%3E10&Needs%20Feedback%20(Changes%20older%20than%205%20days%20that%20have%20not%20been%20reviewed%20by%20anyone)=NOT%20label:Code-Review%3C=-1%20NOT%20label:Code-Review%3E=1%20age:5d&You%20are%20a%20reviewer,%20but%20haven't%20voted%20in%20the%20current%20revision=NOT%20label:Code-Review%3C=-1,self%20NOT%20label:Code-Review%3E=1,self%20reviewer:self&Wayward%20Changes%20(Changes%20with%20no%20code%20review%20in%20the%20last%202days)=NOT%20is:reviewed%20age:2d StarlingX Gerrit Review Dashboard]<br />
<br />
== Bug Tracking ==<br />
<br />
* [https://storyboard.openstack.org/#!/worklist/354 StarlingX Bug List]<br />
** This list is sorted manually by drag and drop. <br />
** Please create bugs in Storyboard for any issues found, filed against one of the stx-* projects. If you can't find the right project, use stx-integ.<br />
** After you create the bug, please add it to the Bug Worklist (link above). <br />
** If you don't see the "add a card" button on the Worklist, you need to be added as a user of the Worklist. Please contact bruce.e.jones@intel.com to be added. <br />
<br />
|style="vertical-align:top; width:50%;" |<br />
<!-- right column contents --><br />
== Culture ==<br />
<br />
We are proud to be an OpenStack Foundation project!<br />
* We support and adhere to the [https://www.openstack.org/legal/community-code-of-conduct/ OpenStack community Code of Conduct]<br />
* We support and fully embrace the [https://governance.openstack.org/tc/reference/opens.html Four Opens]<br />
* [[StarlingX/Contribution Guidelines|Contribution Guidelines]]<br />
<br />
== Status and Planning ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-status Status Meeting Notes]<br />
* [https://storyboard.openstack.org/#!/project_group/86 StarlingX Storyboards]<br />
* [https://etherpad.openstack.org/p/stx-planning StarlingX Planning] (contains sub-projects and Worklists)<br />
* [https://etherpad.openstack.org/p/stx-PTG-agenda StarlingX Denver PTG Agenda]<br />
<br />
== Upstream Status ==<br />
<br />
The StarlingX code base contains a number of out-of-tree patches against other open source components. One of our highest priorities is to contribute those changes to their upstream communities.<br />
<br />
TODO: Add a link to our Dashboard showing the status of upstream submissions<br />
<br />
== References ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-notes The list of repos] and other things<br />
<br />
== OpenStack Documentation ==<br />
<br />
These are references to general OpenStack material:<br />
<br />
* [https://docs.openstack.org/infra/manual/developers.html Developer's Guide]<br />
* [https://docs.openstack.org/infra/manual/creators.html Project Creator's Guide]<br />
* [https://wiki.openstack.org/wiki/How_To_Contribute The Contributors Guide] (the older wiki page)<br />
* [https://governance.openstack.org/tc/reference/project-testing-interface.html Project Testing Interface]<br />
|}</div>David.b.kinderhttps://wiki.openstack.org/w/index.php?title=StarlingX/Developer_Guide&diff=162212StarlingX/Developer Guide2018-06-21T14:41:34Z<p>David.b.kinder: </p>
<hr />
<div>This section contains the steps for building a StarlingX ISO.<br />
<br />
== Requirements ==<br />
<br />
The recommended minimum requirements are:<br />
<br />
=== Hardware Requirements ===<br />
<br />
A workstation computer with:<br />
<br />
* Processor: x86_64 (the only supported architecture)<br />
* Memory: at least 32 GB RAM<br />
* Hard Disk: 100 GB HDD<br />
* Network: network adapter with an active Internet connection<br />
<br />
=== Software Requirements ===<br />
<br />
A workstation computer with:<br />
<br />
* Operating System: Ubuntu 16.04 LTS 64-bit<br />
* Docker<br />
* Android Repo Tool<br />
* Proxy Settings Configured (If Required)<br />
* Public SSH Key<br />
<br />
== Development Environment Setup ==<br />
<br />
This section describes how to set up a StarlingX development system on a workstation computer. After completing these steps, you will be able to build a StarlingX ISO image on the following Linux distribution:<br />
<br />
* Ubuntu 16.04 LTS 64-bit<br />
<br />
=== Update Your Operating System ===<br />
<br />
Before proceeding with the build, ensure your OS is up to date. First, update the local package index:<br />
<br />
<source lang="sh">$ sudo apt-get update<br />
</source><br />
=== Installation Requirements and Dependencies ===<br />
<br />
<ol start="1"><li>Install the required packages on the Ubuntu host system:<br />
<br />
<source lang="sh">$ sudo apt-get install git<br />
</source></li><br />
<li><p>Install the required Docker CE packages on the Ubuntu host system. See [https://docs.docker.com/install/ Get Docker] for more information.</p></li><br />
<li><p>Install the Android Repo Tool on the Ubuntu host system. See [https://source.android.com/setup/build/downloading#installing-repo Installing Repo] for more information.</p></li></ol><br />
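Because the build assumes git, Docker, and the repo tool are all on your PATH before you start, a quick check up front can save a failed build later. The following is a hypothetical helper script, not part of stx-tools:

```shell
# check_build_deps.sh (hypothetical helper, not part of stx-tools):
# confirm the tools this guide requires are installed before building.
status=0
for tool in git docker repo; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "ok: $tool"
    else
        echo "missing: $tool"
        status=1
    fi
done
if [ "$status" -eq 0 ]; then
    echo "all build prerequisites found"
else
    echo "some prerequisites are missing; install them before continuing"
fi
```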
<br />
=== Install Public SSH Key===<br />
<br />
# Follow these instructions on GitHub to [https://help.github.com/articles/connecting-to-github-with-ssh Generate a Public SSH Key] and then upload your public key to your GitHub and Gerrit account profiles:<br />
#* [https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account Upload to Github]<br />
#* [https://review.openstack.org/#/settings/ssh-keys Upload to Gerrit]<br />
<br />
=== Install stx-tools project ===<br />
<br />
<ol start="1"><li>Clone the &lt;stx-tools&gt; project<br />
<br />
<source lang="sh">$ git clone git://git.openstack.org/openstack/stx-tools<br />
</source></li></ol><br />
<br />
=== Create a Workspace Directory ===<br />
<br />
<ol start="1"><li>Create a ''starlingx'' workspace directory on your workstation computer. Usually, you’ll want to create it somewhere under your user’s home directory.<br />
<br />
<source lang="sh">$ mkdir -p $HOME/starlingx/<br />
</source></li></ol><br />
<br />
== Build the CentOS Mirror Repository ==<br />
<br />
This section describes how to build the CentOS Mirror Repository.<br />
<br />
=== Setup Repository Docker Container ===<br />
<br />
<ol start="1"><li>Navigate to the ''&lt;stx-tools&gt;/centos-mirror-tools'' project directory:<br />
<br />
<source lang="sh">$ cd stx-tools/centos-mirror-tools/<br />
</source></li><br />
<li>If required, set the http/https proxy variables in your Dockerfile before building the Docker image:<br />
<br />
<source lang="sh">ENV http_proxy "http://your.actual_http_proxy.com:your_port"<br />
ENV https_proxy "https://your.actual_https_proxy.com:your_port"<br />
ENV ftp_proxy "http://your.actual_ftp_proxy.com:your_port"<br />
RUN echo "proxy=http://your-proxy.com:port" >> /etc/yum.conf<br />
</source></li><br />
<li>Build your ''&lt;name&gt;:&lt;tag&gt;'' base container image, '''e.g.''' ''aarcemor:centos-mirror-repository'':<br />
<br />
<source lang="sh">$ docker build -t aarcemor:centos-mirror-repository -f Dockerfile .<br />
</source></li><br />
<li>Launch a ''&lt;name&gt;'' Docker container from the base image ''&lt;name&gt;:&lt;tag&gt;'' created above, '''e.g.''' ''aarcemor-centos-mirror-repository''. Because /localdisk is defined as the workdir of the container, the current folder is mounted there as the volume. Run the container from the directory where the other scripts are stored; it will populate logs and output folders in this directory.<br />
<br />
<source lang="sh">$ docker run -itd --name aarcemor-centos-mirror-repository -v $(pwd):/localdisk aarcemor:centos-mirror-repository bash<br />
</source></li><br />
<li>Open a shell in the ''&lt;name&gt;'' container, '''e.g.''' ''aarcemor-centos-mirror-repository'':<br />
<br />
<source lang="sh">$ docker exec -it aarcemor-centos-mirror-repository bash<br />
</source></li></ol><br />
<br />
=== Import GPG Keys ===<br />
<br />
<ol start="1"><li>Inside the docker container, import the keys into the local GPG keyring and query public key information:<br />
<br />
<source lang="none"># rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*<br />
# rpm -qi gpg-pubkey-\*<br />
</source></li></ol><br />
<br />
=== Download Packages ===<br />
<br />
<ol start="1"><li>Enter the following command to download the required packages to populate the CentOS Mirror Repository:<br />
<br />
<source lang="none"># bash download_mirror.sh<br />
</source></li><br />
<li>Monitor the download until it completes, at which point the following message is displayed:<br />
<br />
<source lang="none">totally 17 files are downloaded!<br />
step #3: done successfully<br />
IMPORTANT: The following 3 files are just bootstrap versions. Based on them, the workable images<br />
for StarlingX could be generated by running "update-pxe-network-installer" command after "build-iso"<br />
- out/stx-r1/CentOS/pike/Binary/LiveOS/squashfs.img<br />
- out/stx-r1/CentOS/pike/Binary/images/pxeboot/initrd.img<br />
- out/stx-r1/CentOS/pike/Binary/images/pxeboot/vmlinuz<br />
</source></li></ol><br />
<br />
=== Verify Packages ===<br />
<br />
<ol start="1"><li>Verify there are no missing or failed packages:<br />
<br />
<source lang="none"># cat output/3rd_rpms_missing_L1.txt output/3rd_srpms_missing_L1.txt output/centos_rpms_missing_L1.txt output/centos_srpms_missing_L1.txt<br />
# cat output/3rd_rpms_fail_move_L1.txt output/3rd_srpms_fail_move_L1.txt output/centos_rpms_fail_move_L1.txt output/centos_srpms_fail_move_L1.txt<br />
</source></li><br />
<li><p>If any packages are missing or failed due to network instability or timeouts, download them manually to ensure you get all RPMs listed in &quot;rpms_from_3rd_parties.lst&quot; and &quot;rpms_from_centos_repo.lst&quot;.</p></li><br />
<li><p>After all packages have been successfully downloaded, remove all i686 RPM packages and change the ''output'' directory ownership:</p><br />
<br />
<source lang="none"># find ./output -name "*.i686.rpm" | xargs rm -f<br />
# chown 751:751 -R ./output<br />
</source></li></ol><br />
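The two ''cat'' commands above can be turned into a pass/fail check. This is a hypothetical snippet, not part of stx-tools; it only counts lines in the lists that ''download_mirror.sh'' writes under ''output/'':

```shell
# Count entries in the "missing" and "fail_move" lists under output/.
OUTDIR=${OUTDIR:-output}
problems=0
for list in "$OUTDIR"/*_missing_L1.txt "$OUTDIR"/*_fail_move_L1.txt; do
    [ -f "$list" ] || continue    # skip patterns that matched no file
    n=$(wc -l < "$list")
    if [ "$n" -gt 0 ]; then
        echo "$list: $n package(s) need manual attention"
        problems=$((problems + n))
    fi
done
echo "packages to re-fetch: $problems"
```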
<br />
=== Create CentOS Mirror Repository ===<br />
<br />
<ol start="1"><li>From a console of the workstation, create a ''mirror/CentOS'' directory under your ''starlingx'' workspace directory:<br />
<br />
<source lang="sh">$ mkdir -p $HOME/starlingx/mirror/CentOS/<br />
</source></li><br />
<li>Copy the CentOS Mirror Repository built under ''&lt;stx-tools&gt;/centos-mirror-tools'' to the ''$HOME/starlingx/mirror/CentOS'' workspace directory.<br />
<br />
<source lang="sh">$ cp -r stx-tools/centos-mirror-tools/output/stx-r1/ $HOME/starlingx/mirror/CentOS/<br />
</source></li></ol><br />
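As an optional sanity check (an assumption on our part, not a step from the guide), you can confirm the copied mirror matches the build output before moving on:

```shell
# Compare the copied mirror against the original build output.
SRC=stx-tools/centos-mirror-tools/output/stx-r1
DST=$HOME/starlingx/mirror/CentOS/stx-r1
if diff -r "$SRC" "$DST" >/dev/null 2>&1; then
    echo "mirror copy verified"
else
    echo "mirror copy differs or is missing"
fi
```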
<br />
= Work in Progress... =</div>David.b.kinderhttps://wiki.openstack.org/w/index.php?title=StarlingX&diff=162197StarlingX2018-06-20T21:24:20Z<p>David.b.kinder: </p>
<hr />
<div>__NOTOC__<br />
<center><br />
[[File:Starlingx-logo-300w.png|center]]<br />
<br />
== Welcome to the StarlingX Project ==<br />
</center><br />
StarlingX is a fully featured, high-performance Edge Cloud software stack based on the [https://www.windriver.com/products/titanium-cloud/ Wind River® Titanium Cloud] R5 product. Intel and Wind River have jointly open-sourced this software, and we invite you to download, build, install, and run it.<br />
<br />
Wind River Titanium Cloud was originally built on open source components that were then extended and hardened to meet critical infrastructure requirements, including high availability, fault management, and performance management. This software provides numerous features and capabilities to enable 24/7 operation of mission-critical applications.<br />
<br />
The StarlingX project opens all of these enhancements to the open source community. Please join us as we build the infrastructure stack for Edge Computing.<br />
----<br />
<!-- the rest of the page is a two-column table --><br />
{| style="border-collapse: separate; border-spacing: 25px;"<br />
|style="vertical-align:top; width:50%;" |<br />
<!-- left column contents --><br />
== Documentation ==<br />
<br />
These three documents will help get you started building, installing, and validating your installation of StarlingX:<br />
* [[StarlingX/Developer Guide|Developer Guide]]<br />
* [[StarlingX/Installation Guide|Installation Guide]]<br />
* [[StarlingX/Validation Guide|Validation Guide]]<br />
<br />
== Code ==<br />
The StarlingX project uses Gerrit as its web-based code change management and review tool.<br />
* [https://git.openstack.org/cgit/openstack/stx Gerrit repositories] maintain the StarlingX code; build instructions are in the [[StarlingX/Developer Guide]]<br />
* [https://review.openstack.org/#/admin/projects/?filter=stx StarlingX Gerrit Projects] and [https://review.openstack.org/#/q/status:open+AND+project:%255Eopenstack/stx-%2540 Open StarlingX project reviews]<br />
* [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fstx%2Dclients+OR+project%3Aopenstack%2Fstx%2Dconfig+OR+project%3Aopenstack%2Fstx%2Dfault+OR+project%3Aopenstack%2Fstx%2Dgplv2+OR+project%3Aopenstack%2Fstx%2Dgplv3+OR+project%3Aopenstack%2Fstx%2Dgui+OR+project%3Aopenstack%2Fstx%2Dha+OR+project%3Aopenstack%2Fstx%2Dinteg+OR+project%3Aopenstack%2Fstx%2Dmanifest+OR+project%3Aopenstack%2Fstx%2Dmetal+OR+project%3Aopenstack%2Fstx%2Dnfv+OR+project%3Aopenstack%2Fstx%2Droot+OR+project%3Aopenstack%2Fstx%2Dtis%2Drepo+OR+project%3Aopenstack%2Fstx%2Dtools+OR+project%3Aopenstack%2Fstx%2Dupdate+OR+project%3Aopenstack%2Fstx%2Dupstream+OR+project%3Aopenstack%2Fstx%2Dutils%29+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1%2Czuul+NOT+reviewedby%3Aself&title=StarlingX+Review+Inbox&Needs+final+%2B2=label%3ACode%2DReview%3E%3D2+limit%3A50+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cself&Passed+Zuul%2C+No+Negative+Feedback+%28Small+Fixes%29=NOT+label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cstarlingx%2Dcore+delta%3A%3C%3D10&Passed+Zuul%2C+No+Negative+Feedback=NOT+label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cstarlingx%2Dcore+delta%3A%3E10&Needs+Feedback+%28Changes+older+than+5+days+that+have+not+been+reviewed+by+anyone%29=NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D1+age%3A5d&You+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=NOT+label%3ACode%2DReview%3C%3D%2D1%2Cself+NOT+label%3ACode%2DReview%3E%3D1%2Cself+reviewer%3Aself&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+2days%29=NOT+is%3Areviewed+age%3A2d StarlingX Gerrit Review Dashboard]<br />
** Also in [http://paste.openstack.org/show/723397/ StarlingX Gerrit Review Dashboard] (copy the URL from there into a browser bookmark)<br />
<br />
== Bug Tracking ==<br />
<br />
* [https://storyboard.openstack.org/#!/worklist/354 StarlingX Bug List]<br />
** This list is sorted manually by drag and drop. <br />
** Please create bugs in Storyboard for any issues found, filed against one of the stx-* projects. If you can't find the right project, use stx-integ.<br />
** After you create the bug, please add it to the Bug Worklist (link above)<br />
<br />
|style="vertical-align:top; width:50%;" |<br />
<!-- right column contents --><br />
== Culture ==<br />
<br />
We are proud to be an OpenStack Foundation project!<br />
* We support and adhere to the [https://www.openstack.org/legal/community-code-of-conduct/ OpenStack community Code of Conduct]<br />
* We support and fully embrace the [https://governance.openstack.org/tc/reference/opens.html Four Opens]<br />
<br />
== Status and Planning ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-status Status Meeting Notes]<br />
* [https://storyboard.openstack.org/#!/project_group/86 StarlingX Storyboards]<br />
* [https://etherpad.openstack.org/p/stx-planning StarlingX Planning] (contains sub-projects and Worklists)<br />
* [https://etherpad.openstack.org/p/stx-PTG-agenda StarlingX Denver PTG Agenda]<br />
<br />
== Upstream Status ==<br />
<br />
The StarlingX code base contains a number of out-of-tree patches against other open source components. One of our highest priorities is to contribute those changes to their upstream communities.<br />
<br />
TODO: Add a link to our Dashboard showing the status of upstream submissions<br />
<br />
== References ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-notes The list of repos] and other things<br />
<br />
== OpenStack Documentation ==<br />
<br />
These are references to general OpenStack material:<br />
<br />
* [https://docs.openstack.org/infra/manual/developers.html Developer's Guide]<br />
* [https://docs.openstack.org/infra/manual/creators.html Project Creator's Guide]<br />
* [https://wiki.openstack.org/wiki/How_To_Contribute The Contributors Guide] (the older wiki page)<br />
* [https://governance.openstack.org/tc/reference/project-testing-interface.html Project Testing Interface]<br />
|}</div>David.b.kinderhttps://wiki.openstack.org/w/index.php?title=StarlingX&diff=162189StarlingX2018-06-20T20:42:19Z<p>David.b.kinder: /* Code */</p>
<hr />
<div>__NOTOC__<br />
[[File:Starlingx-logo-300w.png|center]]<br />
<br />
{| style="border-collapse: separate; border-spacing: 25px;"<br />
| style="vertical-align:top; width:75%;" | <br />
== Welcome to the StarlingX Project ==<br />
<br />
<br />
StarlingX is a fully featured, high-performance Edge Cloud software stack based on the [https://www.windriver.com/products/titanium-cloud/ Wind River® Titanium Cloud] R5 product. Intel and Wind River have jointly open-sourced this software, and we invite you to download, build, install, and run it.<br />
<br />
Wind River® Titanium Cloud was originally built on open source components that were then extended and hardened to meet critical infrastructure requirements: high availability, fault management, and performance management. This software provides numerous features and capabilities to enable 24/7 operation of mission-critical applications.<br />
<br />
The StarlingX project opens all of these enhancements to the open source community. Please join us as we build the infrastructure stack for Edge Computing.<br />
<br />
|}<br />
----<br />
{| style="border-collapse: separate; border-spacing: 25px;"<br />
|style="vertical-align:top; width:50%;" |<br />
<br />
== Documentation ==<br />
<br />
These three documents will help get you started building, installing, and validating your installation of StarlingX:<br />
* [[StarlingX/Developer Guide|Developer Guide]]<br />
* [[StarlingX/Installation Guide|Installation Guide]]<br />
* [[StarlingX/Validation Guide|Validation Guide]]<br />
<br />
== Code ==<br />
The StarlingX project uses Gerrit as its web-based code change management and review tool.<br />
* [https://git.openstack.org/cgit/openstack/stx Gerrit repositories] maintain the StarlingX code; build instructions are in the [[StarlingX/Developer Guide]]<br />
* [https://review.openstack.org/#/admin/projects/?filter=stx StarlingX Gerrit Projects] and [https://review.openstack.org/#/q/status:open+AND+project:%255Eopenstack/stx-%2540 Open StarlingX project reviews]<br />
* [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fstx%2Dclients+OR+project%3Aopenstack%2Fstx%2Dconfig+OR+project%3Aopenstack%2Fstx%2Dfault+OR+project%3Aopenstack%2Fstx%2Dgplv2+OR+project%3Aopenstack%2Fstx%2Dgplv3+OR+project%3Aopenstack%2Fstx%2Dgui+OR+project%3Aopenstack%2Fstx%2Dha+OR+project%3Aopenstack%2Fstx%2Dinteg+OR+project%3Aopenstack%2Fstx%2Dmanifest+OR+project%3Aopenstack%2Fstx%2Dmetal+OR+project%3Aopenstack%2Fstx%2Dnfv+OR+project%3Aopenstack%2Fstx%2Droot+OR+project%3Aopenstack%2Fstx%2Dtis%2Drepo+OR+project%3Aopenstack%2Fstx%2Dtools+OR+project%3Aopenstack%2Fstx%2Dupdate+OR+project%3Aopenstack%2Fstx%2Dupstream+OR+project%3Aopenstack%2Fstx%2Dutils%29+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1%2Czuul+NOT+reviewedby%3Aself&title=StarlingX+Review+Inbox&Needs+final+%2B2=label%3ACode%2DReview%3E%3D2+limit%3A50+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cself&Passed+Zuul%2C+No+Negative+Feedback+%28Small+Fixes%29=NOT+label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cstarlingx%2Dcore+delta%3A%3C%3D10&Passed+Zuul%2C+No+Negative+Feedback=NOT+label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cstarlingx%2Dcore+delta%3A%3E10&Needs+Feedback+%28Changes+older+than+5+days+that+have+not+been+reviewed+by+anyone%29=NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D1+age%3A5d&You+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=NOT+label%3ACode%2DReview%3C%3D%2D1%2Cself+NOT+label%3ACode%2DReview%3E%3D1%2Cself+reviewer%3Aself&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+2days%29=NOT+is%3Areviewed+age%3A2d StarlingX Gerrit Review Dashboard]<br />
** Also in [http://paste.openstack.org/show/723397/ StarlingX Gerrit Review Dashboard] (copy the URL from there into a browser bookmark)<br />
<br />
== Bug Tracking ==<br />
<br />
* [https://storyboard.openstack.org/#!/worklist/354 StarlingX Bug List]<br />
** This list is sorted manually by drag and drop. <br />
** Please create bugs in Storyboard for any issues you find, filed against one of the stx-* projects. If you can't find the right project, use stx-integ<br />
** After you create the bug, please add it to the Bug Worklist (link above)<br />
<br />
|style="vertical-align:top; width:50%;" |<br />
== Culture ==<br />
<br />
We are proud to be an OpenStack Foundation project!<br />
* We support and adhere to the [https://www.openstack.org/legal/community-code-of-conduct/ OpenStack community Code of Conduct]<br />
* We support and fully embrace the [https://governance.openstack.org/tc/reference/opens.html Four Opens]<br />
<br />
== Status and Planning ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-status Status Meeting Notes]<br />
* [https://storyboard.openstack.org/#!/project_group/86 StarlingX Storyboards]<br />
* [https://etherpad.openstack.org/p/stx-planning StarlingX Planning] (contains sub-projects and Worklists)<br />
<br />
== Upstream Status ==<br />
<br />
The StarlingX code base contains a number of out-of-tree patches against other open source components. One of our highest priorities is to contribute those changes to their upstream communities.<br />
<br />
TODO: Add a link to our Dashboard showing the status of upstream submissions<br />
<br />
== References ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-notes The list of repos] and other things<br />
<br />
== OpenStack Documentation ==<br />
<br />
These are references to general OpenStack material:<br />
<br />
* [https://docs.openstack.org/infra/manual/developers.html Developer's Guide]<br />
* [https://docs.openstack.org/infra/manual/creators.html Project Creator's Guide]<br />
* [https://wiki.openstack.org/wiki/How_To_Contribute The Contributors Guide] (the older wiki page)<br />
* [https://governance.openstack.org/tc/reference/project-testing-interface.html Project Testing Interface]<br />
|}</div>David.b.kinderhttps://wiki.openstack.org/w/index.php?title=StarlingX&diff=162185StarlingX2018-06-20T20:26:56Z<p>David.b.kinder: /* Documentation */</p>
<hr />
<div>__NOTOC__<br />
[[File:Starlingx-logo-300w.png|center]]<br />
<br />
{| style="border-collapse: separate; border-spacing: 25px;"<br />
| style="vertical-align:top; width:75%;" | <br />
== Welcome to the StarlingX Project ==<br />
<br />
<br />
StarlingX is a fully featured, high-performance Edge Cloud software stack based on the [https://www.windriver.com/products/titanium-cloud/ Wind River® Titanium Cloud] R5 product. Intel and Wind River have jointly open sourced this software, and we invite you to download, build, install, and run it. <br />
<br />
Wind River® Titanium Cloud was originally built on open source components, which were then extended and hardened to meet critical infrastructure requirements: high availability, fault management, and performance management. This software provides numerous features and capabilities to enable 24/7 operation of mission critical applications.<br />
<br />
The StarlingX project opens all of these enhancements to the open source community. Please join us as we build the infrastructure stack for Edge Computing.<br />
<br />
|}<br />
----<br />
{| style="border-collapse: separate; border-spacing: 25px;"<br />
|style="vertical-align:top; width:50%;" |<br />
<br />
== Documentation ==<br />
<br />
These three documents will help get you started building, installing, and validating your installation of StarlingX:<br />
* [[StarlingX/Developer Guide|Developer Guide]]<br />
* [[StarlingX/Installation Guide|Installation Guide]]<br />
* [[StarlingX/Validation Guide|Validation Guide]]<br />
<br />
== Code ==<br />
<br />
* [https://git.openstack.org/cgit/openstack/stx Gerrit repositories]<br />
** Instructions for how to download and build the code are in progress<br />
* [https://review.openstack.org/ Gerrit Web UI]<br />
* [https://review.openstack.org/#/admin/projects/?filter=stx StarlingX Gerrit Projects]<br />
** Hint: for a quick list of open StarlingX reviews, enter the search query "status:open AND project:^openstack/stx-@" in the Gerrit search box<br />
* [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fstx%2Dclients+OR+project%3Aopenstack%2Fstx%2Dconfig+OR+project%3Aopenstack%2Fstx%2Dfault+OR+project%3Aopenstack%2Fstx%2Dgplv2+OR+project%3Aopenstack%2Fstx%2Dgplv3+OR+project%3Aopenstack%2Fstx%2Dgui+OR+project%3Aopenstack%2Fstx%2Dha+OR+project%3Aopenstack%2Fstx%2Dinteg+OR+project%3Aopenstack%2Fstx%2Dmanifest+OR+project%3Aopenstack%2Fstx%2Dmetal+OR+project%3Aopenstack%2Fstx%2Dnfv+OR+project%3Aopenstack%2Fstx%2Droot+OR+project%3Aopenstack%2Fstx%2Dtis%2Drepo+OR+project%3Aopenstack%2Fstx%2Dtools+OR+project%3Aopenstack%2Fstx%2Dupdate+OR+project%3Aopenstack%2Fstx%2Dupstream+OR+project%3Aopenstack%2Fstx%2Dutils%29+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1%2Czuul+NOT+reviewedby%3Aself&title=StarlingX+Review+Inbox&Needs+final+%2B2=label%3ACode%2DReview%3E%3D2+limit%3A50+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cself&Passed+Zuul%2C+No+Negative+Feedback+%28Small+Fixes%29=NOT+label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cstarlingx%2Dcore+delta%3A%3C%3D10&Passed+Zuul%2C+No+Negative+Feedback=NOT+label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cstarlingx%2Dcore+delta%3A%3E10&Needs+Feedback+%28Changes+older+than+5+days+that+have+not+been+reviewed+by+anyone%29=NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D1+age%3A5d&You+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=NOT+label%3ACode%2DReview%3C%3D%2D1%2Cself+NOT+label%3ACode%2DReview%3E%3D1%2Cself+reviewer%3Aself&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+2days%29=NOT+is%3Areviewed+age%3A2d StarlingX Gerrit Review Dashboard]<br />
** Also in [http://paste.openstack.org/show/723397/ StarlingX Gerrit Review Dashboard] (copy the URL from there into a browser bookmark)<br />
<br />
== Bug Tracking ==<br />
<br />
* [https://storyboard.openstack.org/#!/worklist/354 StarlingX Bug List]<br />
** This list is sorted manually by drag and drop. <br />
** Please create bugs in Storyboard for any issues you find, filed against one of the stx-* projects. If you can't find the right project, use stx-integ<br />
** After you create the bug, please add it to the Bug Worklist (link above)<br />
<br />
|style="vertical-align:top; width:50%;" |<br />
== Culture ==<br />
<br />
We are proud to be an OpenStack Foundation project!<br />
* We support and adhere to the [https://www.openstack.org/legal/community-code-of-conduct/ OpenStack community Code of Conduct]<br />
* We support and fully embrace the [https://governance.openstack.org/tc/reference/opens.html Four Opens]<br />
<br />
== Status and Planning ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-status Status Meeting Notes]<br />
* [https://storyboard.openstack.org/#!/project_group/86 StarlingX Storyboards]<br />
* [https://etherpad.openstack.org/p/stx-planning StarlingX Planning] (contains sub-projects and Worklists)<br />
<br />
== Upstream Status ==<br />
<br />
The StarlingX code base contains a number of out-of-tree patches against other open source components. One of our highest priorities is to contribute those changes to their upstream communities.<br />
<br />
TODO: Add a link to our Dashboard showing the status of upstream submissions<br />
<br />
== References ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-notes The list of repos] and other things<br />
<br />
== OpenStack Documentation ==<br />
<br />
These are references to general OpenStack material:<br />
<br />
* [https://docs.openstack.org/infra/manual/developers.html Developer's Guide]<br />
* [https://docs.openstack.org/infra/manual/creators.html Project Creator's Guide]<br />
* [https://wiki.openstack.org/wiki/How_To_Contribute The Contributors Guide] (the older wiki page)<br />
* [https://governance.openstack.org/tc/reference/project-testing-interface.html Project Testing Interface]<br />
|}</div>David.b.kinderhttps://wiki.openstack.org/w/index.php?title=StarlingX&diff=162184StarlingX2018-06-20T20:24:43Z<p>David.b.kinder: /* Documentation */</p>
<hr />
<div>__NOTOC__<br />
[[File:Starlingx-logo-300w.png|center]]<br />
<br />
{| style="border-collapse: separate; border-spacing: 25px;"<br />
| style="vertical-align:top; width:75%;" | <br />
== Welcome to the StarlingX Project ==<br />
<br />
<br />
StarlingX is a fully featured, high-performance Edge Cloud software stack based on the [https://www.windriver.com/products/titanium-cloud/ Wind River® Titanium Cloud] R5 product. Intel and Wind River have jointly open sourced this software, and we invite you to download, build, install, and run it. <br />
<br />
Wind River® Titanium Cloud was originally built on open source components, which were then extended and hardened to meet critical infrastructure requirements: high availability, fault management, and performance management. This software provides numerous features and capabilities to enable 24/7 operation of mission critical applications.<br />
<br />
The StarlingX project opens all of these enhancements to the open source community. Please join us as we build the infrastructure stack for Edge Computing.<br />
<br />
|}<br />
----<br />
{| style="border-collapse: separate; border-spacing: 25px;"<br />
|style="vertical-align:top; width:50%;" |<br />
<br />
== Documentation ==<br />
<br />
* [[StarlingX/Developer Guide|Developer Guide]]<br />
* [[StarlingX/Installation Guide|Installation Guide]]<br />
* [[StarlingX/Validation Guide|Validation Guide]]<br />
<br />
== Code ==<br />
<br />
* [https://git.openstack.org/cgit/openstack/stx Gerrit repositories]<br />
** Instructions for how to download and build the code are in progress<br />
* [https://review.openstack.org/ Gerrit Web UI]<br />
* [https://review.openstack.org/#/admin/projects/?filter=stx StarlingX Gerrit Projects]<br />
** Hint: for a quick list of open StarlingX reviews, enter the search query "status:open AND project:^openstack/stx-@" in the Gerrit search box<br />
* [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fstx%2Dclients+OR+project%3Aopenstack%2Fstx%2Dconfig+OR+project%3Aopenstack%2Fstx%2Dfault+OR+project%3Aopenstack%2Fstx%2Dgplv2+OR+project%3Aopenstack%2Fstx%2Dgplv3+OR+project%3Aopenstack%2Fstx%2Dgui+OR+project%3Aopenstack%2Fstx%2Dha+OR+project%3Aopenstack%2Fstx%2Dinteg+OR+project%3Aopenstack%2Fstx%2Dmanifest+OR+project%3Aopenstack%2Fstx%2Dmetal+OR+project%3Aopenstack%2Fstx%2Dnfv+OR+project%3Aopenstack%2Fstx%2Droot+OR+project%3Aopenstack%2Fstx%2Dtis%2Drepo+OR+project%3Aopenstack%2Fstx%2Dtools+OR+project%3Aopenstack%2Fstx%2Dupdate+OR+project%3Aopenstack%2Fstx%2Dupstream+OR+project%3Aopenstack%2Fstx%2Dutils%29+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1%2Czuul+NOT+reviewedby%3Aself&title=StarlingX+Review+Inbox&Needs+final+%2B2=label%3ACode%2DReview%3E%3D2+limit%3A50+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cself&Passed+Zuul%2C+No+Negative+Feedback+%28Small+Fixes%29=NOT+label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cstarlingx%2Dcore+delta%3A%3C%3D10&Passed+Zuul%2C+No+Negative+Feedback=NOT+label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cstarlingx%2Dcore+delta%3A%3E10&Needs+Feedback+%28Changes+older+than+5+days+that+have+not+been+reviewed+by+anyone%29=NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D1+age%3A5d&You+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=NOT+label%3ACode%2DReview%3C%3D%2D1%2Cself+NOT+label%3ACode%2DReview%3E%3D1%2Cself+reviewer%3Aself&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+2days%29=NOT+is%3Areviewed+age%3A2d StarlingX Gerrit Review Dashboard]<br />
** Also in [http://paste.openstack.org/show/723397/ StarlingX Gerrit Review Dashboard] (copy the URL from there into a browser bookmark)<br />
<br />
== Bug Tracking ==<br />
<br />
* [https://storyboard.openstack.org/#!/worklist/354 StarlingX Bug List]<br />
** This list is sorted manually by drag and drop. <br />
** Please create bugs in Storyboard for any issues you find, filed against one of the stx-* projects. If you can't find the right project, use stx-integ<br />
** After you create the bug, please add it to the Bug Worklist (link above)<br />
<br />
|style="vertical-align:top; width:50%;" |<br />
== Culture ==<br />
<br />
We are proud to be an OpenStack Foundation project!<br />
* We support and adhere to the [https://www.openstack.org/legal/community-code-of-conduct/ OpenStack community Code of Conduct]<br />
* We support and fully embrace the [https://governance.openstack.org/tc/reference/opens.html Four Opens]<br />
<br />
== Status and Planning ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-status Status Meeting Notes]<br />
* [https://storyboard.openstack.org/#!/project_group/86 StarlingX Storyboards]<br />
* [https://etherpad.openstack.org/p/stx-planning StarlingX Planning] (contains sub-projects and Worklists)<br />
<br />
== Upstream Status ==<br />
<br />
The StarlingX code base contains a number of out-of-tree patches against other open source components. One of our highest priorities is to contribute those changes to their upstream communities.<br />
<br />
TODO: Add a link to our Dashboard showing the status of upstream submissions<br />
<br />
== References ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-notes The list of repos] and other things<br />
<br />
== OpenStack Documentation ==<br />
<br />
These are references to general OpenStack material:<br />
<br />
* [https://docs.openstack.org/infra/manual/developers.html Developer's Guide]<br />
* [https://docs.openstack.org/infra/manual/creators.html Project Creator's Guide]<br />
* [https://wiki.openstack.org/wiki/How_To_Contribute The Contributors Guide] (the older wiki page)<br />
* [https://governance.openstack.org/tc/reference/project-testing-interface.html Project Testing Interface]<br />
|}</div>David.b.kinderhttps://wiki.openstack.org/w/index.php?title=StarlingX/Testing_Guide&diff=162183StarlingX/Testing Guide2018-06-20T20:24:11Z<p>David.b.kinder: Created page with "This document contains the steps for validating a StarlingX System has been installed correctly. == Requirements == The recommended minimum requirements include: === System..."</p>
<hr />
<div>This document contains the steps for validating that a StarlingX system has been installed correctly.<br />
<br />
== Requirements ==<br />
<br />
The recommended minimum requirements include:<br />
<br />
=== System Requirements ===<br />
<br />
* A StarlingX System<br />
<br />
== Launch an Instance ==<br />
<br />
=== Download CirrOS Image ===<br />
<br />
Download a CirrOS image in QCOW2 format from the [http://download.cirros-cloud.net/ CirrOS download page]:<br />
<br />
<source lang="sh">$ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img<br />
</source><br />
<br />
Transfer the CirrOS QCOW2 image to the StarlingX System:<br />
<br />
<source lang="sh">$ scp cirros-0.4.0-x86_64-disk.img wrsroot@10.10.10.3:~/<br />
</source><br />
<br />
=== Acquire administrative privileges ===<br />
<br />
On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<source lang="sh">controller-0:~$ source /etc/nova/openrc<br />
</source><br />
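Sourcing the rc file exports Keystone credentials into the shell environment. Before running "openstack" commands, you can confirm the expected variables are set. The following is a hedged sketch, not part of StarlingX: the variable names follow the usual OpenStack rc-file convention, and "check_openrc" is a made-up helper name.

```shell
# Sketch: confirm the Keystone environment variables commonly exported
# by an OpenStack rc file are present. Variable names are assumptions
# based on the usual rc-file convention; check_openrc is a made-up name.
check_openrc() {
  for v in OS_USERNAME OS_AUTH_URL OS_PROJECT_NAME; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "missing: $v" >&2
      return 1
    fi
  done
  echo "OpenStack credentials look set"
}
```

If a variable is reported missing, re-run "source /etc/nova/openrc" in the same shell before continuing.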
<br />
=== Create OpenStack Images ===<br />
<br />
<source lang="sh">~(keystone_admin)]$ openstack image create --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --public cirros<br />
</source><br />
=== Create OpenStack Flavors ===<br />
<br />
<source lang="sh">~(keystone_admin)]$ openstack flavor create --id 1 --ram 64 --disk 1 --vcpus 1 --public flavor.nano<br />
~(keystone_admin)]$ openstack flavor create --id 2 --ram 128 --disk 2 --vcpus 1 --public flavor.micro<br />
</source><br />
=== Create OpenStack Network ===<br />
<br />
<source lang="sh">~(keystone_admin)]$ openstack network create network.one<br />
</source><br />
=== Create OpenStack Subnet ===<br />
<br />
<source lang="sh">~(keystone_admin)]$ openstack subnet create --network network.one --ip-version 4 --subnet-range 192.168.1.0/24 --dhcp subnet.one<br />
</source><br />
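The "openstack server create" commands in the next section return as soon as the request is accepted, while the instance boots in the background. A small polling helper can wait until a server reaches ACTIVE. This is a sketch that assumes a configured "openstack" CLI; the helper name is hypothetical.

```shell
# Sketch: poll a server's status until it becomes ACTIVE or fails.
# wait_for_active is a hypothetical helper name; "-f value -c status"
# makes the CLI print only the status field.
wait_for_active() {
  name=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    status=$(openstack server show "$name" -f value -c status)
    [ "$status" = "ACTIVE" ] && return 0
    [ "$status" = "ERROR" ] && return 1
    i=$((i + 1))
    sleep 10
  done
  return 1
}
```

For example: wait_for_active server.nano && echo "server.nano is up".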
=== Create OpenStack Servers ===<br />
<br />
<source lang="sh">~(keystone_admin)]$ openstack server create --flavor flavor.nano --image cirros --nic net-id=network.one server.nano<br />
~(keystone_admin)]$ openstack server create --flavor flavor.micro --image cirros --nic net-id=network.one server.micro<br />
</source></div>David.b.kinderhttps://wiki.openstack.org/w/index.php?title=StarlingX&diff=162179StarlingX2018-06-20T20:06:02Z<p>David.b.kinder: /* Welcome to the StarlingX project */</p>
<hr />
<div>__NOTOC__<br />
[[File:Starlingx-logo-300w.png|center]]<br />
<br />
{| style="border-collapse: separate; border-spacing: 25px;"<br />
| style="vertical-align:top; width:75%;" | <br />
== Welcome to the StarlingX Project ==<br />
<br />
<br />
StarlingX is a fully featured, high-performance Edge Cloud software stack based on the [https://www.windriver.com/products/titanium-cloud/ Wind River® Titanium Cloud] R5 product. Intel and Wind River have jointly open sourced this software, and we invite you to download, build, install, and run it. <br />
<br />
Wind River® Titanium Cloud was originally built on open source components, which were then extended and hardened to meet critical infrastructure requirements: high availability, fault management, and performance management. This software provides numerous features and capabilities to enable 24/7 operation of mission critical applications.<br />
<br />
The StarlingX project opens all of these enhancements to the open source community. Please join us as we build the infrastructure stack for Edge Computing.<br />
<br />
|}<br />
----<br />
{| style="border-collapse: separate; border-spacing: 25px;"<br />
|style="vertical-align:top; width:50%;" |<br />
<br />
== Documentation ==<br />
<br />
* [[StarlingX/Developer Guide|Developer Guide]]<br />
* [[StarlingX/Installation Guide|Installation Guide]]<br />
<br />
== Code ==<br />
<br />
* [https://git.openstack.org/cgit/openstack/stx Gerrit repositories]<br />
** Instructions for how to download and build the code are in progress<br />
* [https://review.openstack.org/ Gerrit Web UI]<br />
* [https://review.openstack.org/#/admin/projects/?filter=stx StarlingX Gerrit Projects]<br />
** Hint: for a quick list of open StarlingX reviews, enter the search query "status:open AND project:^openstack/stx-@" in the Gerrit search box<br />
* [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fstx%2Dclients+OR+project%3Aopenstack%2Fstx%2Dconfig+OR+project%3Aopenstack%2Fstx%2Dfault+OR+project%3Aopenstack%2Fstx%2Dgplv2+OR+project%3Aopenstack%2Fstx%2Dgplv3+OR+project%3Aopenstack%2Fstx%2Dgui+OR+project%3Aopenstack%2Fstx%2Dha+OR+project%3Aopenstack%2Fstx%2Dinteg+OR+project%3Aopenstack%2Fstx%2Dmanifest+OR+project%3Aopenstack%2Fstx%2Dmetal+OR+project%3Aopenstack%2Fstx%2Dnfv+OR+project%3Aopenstack%2Fstx%2Droot+OR+project%3Aopenstack%2Fstx%2Dtis%2Drepo+OR+project%3Aopenstack%2Fstx%2Dtools+OR+project%3Aopenstack%2Fstx%2Dupdate+OR+project%3Aopenstack%2Fstx%2Dupstream+OR+project%3Aopenstack%2Fstx%2Dutils%29+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1%2Czuul+NOT+reviewedby%3Aself&title=StarlingX+Review+Inbox&Needs+final+%2B2=label%3ACode%2DReview%3E%3D2+limit%3A50+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cself&Passed+Zuul%2C+No+Negative+Feedback+%28Small+Fixes%29=NOT+label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cstarlingx%2Dcore+delta%3A%3C%3D10&Passed+Zuul%2C+No+Negative+Feedback=NOT+label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cstarlingx%2Dcore+delta%3A%3E10&Needs+Feedback+%28Changes+older+than+5+days+that+have+not+been+reviewed+by+anyone%29=NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D1+age%3A5d&You+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=NOT+label%3ACode%2DReview%3C%3D%2D1%2Cself+NOT+label%3ACode%2DReview%3E%3D1%2Cself+reviewer%3Aself&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+2days%29=NOT+is%3Areviewed+age%3A2d StarlingX Gerrit Review Dashboard]<br />
** Also in [http://paste.openstack.org/show/723397/ StarlingX Gerrit Review Dashboard] (copy the URL from there into a browser bookmark)<br />
<br />
== Bug Tracking ==<br />
<br />
* [https://storyboard.openstack.org/#!/worklist/354 StarlingX Bug List]<br />
** This list is sorted manually by drag and drop. <br />
** Please create bugs in Storyboard for any issues you find, filed against one of the stx-* projects. If you can't find the right project, use stx-integ<br />
** After you create the bug, please add it to the Bug Worklist (link above)<br />
<br />
|style="vertical-align:top; width:50%;" |<br />
== Culture ==<br />
<br />
We are proud to be an OpenStack Foundation project!<br />
* We support and adhere to the [https://www.openstack.org/legal/community-code-of-conduct/ OpenStack community Code of Conduct]<br />
* We support and fully embrace the [https://governance.openstack.org/tc/reference/opens.html Four Opens]<br />
<br />
== Status and Planning ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-status Status Meeting Notes]<br />
* [https://storyboard.openstack.org/#!/project_group/86 StarlingX Storyboards]<br />
* [https://etherpad.openstack.org/p/stx-planning StarlingX Planning] (contains sub-projects and Worklists)<br />
<br />
== Upstream Status ==<br />
<br />
The StarlingX code base contains a number of out-of-tree patches against other open source components. One of our highest priorities is to contribute those changes to their upstream communities.<br />
<br />
TODO: Add a link to our Dashboard showing the status of upstream submissions<br />
<br />
== References ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-notes The list of repos] and other things<br />
<br />
== OpenStack Documentation ==<br />
<br />
These are references to general OpenStack material:<br />
<br />
* [https://docs.openstack.org/infra/manual/developers.html Developer's Guide]<br />
* [https://docs.openstack.org/infra/manual/creators.html Project Creator's Guide]<br />
* [https://wiki.openstack.org/wiki/How_To_Contribute The Contributors Guide] (the older wiki page)<br />
* [https://governance.openstack.org/tc/reference/project-testing-interface.html Project Testing Interface]<br />
|}</div>David.b.kinderhttps://wiki.openstack.org/w/index.php?title=StarlingX&diff=162178StarlingX2018-06-20T20:04:26Z<p>David.b.kinder: /* Welcome to the StarlingX project!!! */</p>
<hr />
<div>__NOTOC__<br />
[[File:Starlingx-logo-300w.png|center]]<br />
<br />
{| style="border-collapse: separate; border-spacing: 25px;"<br />
| style="vertical-align:top; width:75%;" | <br />
== Welcome to the StarlingX project ==<br />
<br />
<br />
StarlingX is a fully featured, high-performance Edge Cloud software stack based on the [https://www.windriver.com/products/titanium-cloud/ Wind River® Titanium Cloud] R5 product. Intel and Wind River have jointly open sourced this software, and we invite you to download, build, install, and run it. <br />
<br />
Wind River® Titanium Cloud was originally built on open source components, which were then extended and hardened to meet critical infrastructure requirements: high availability, fault management, and performance management. This software provides numerous features and capabilities to enable 24/7 operation of mission critical applications.<br />
<br />
The StarlingX project opens all of these enhancements to the open source community. Please join us as we build the infrastructure stack for Edge Computing.<br />
<br />
|}<br />
----<br />
{| style="border-collapse: separate; border-spacing: 25px;"<br />
|style="vertical-align:top; width:50%;" |<br />
<br />
== Documentation ==<br />
<br />
* [[StarlingX/Developer Guide|Developer Guide]]<br />
* [[StarlingX/Installation Guide|Installation Guide]]<br />
<br />
== Code ==<br />
<br />
* [https://git.openstack.org/cgit/openstack/stx Gerrit repositories]<br />
** Instructions for how to download and build the code are in progress<br />
* [https://review.openstack.org/ Gerrit Web UI]<br />
* [https://review.openstack.org/#/admin/projects/?filter=stx StarlingX Gerrit Projects]<br />
** Hint: for a quick list of open StarlingX reviews, enter the search query "status:open AND project:^openstack/stx-@" in the Gerrit search box<br />
* [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fstx%2Dclients+OR+project%3Aopenstack%2Fstx%2Dconfig+OR+project%3Aopenstack%2Fstx%2Dfault+OR+project%3Aopenstack%2Fstx%2Dgplv2+OR+project%3Aopenstack%2Fstx%2Dgplv3+OR+project%3Aopenstack%2Fstx%2Dgui+OR+project%3Aopenstack%2Fstx%2Dha+OR+project%3Aopenstack%2Fstx%2Dinteg+OR+project%3Aopenstack%2Fstx%2Dmanifest+OR+project%3Aopenstack%2Fstx%2Dmetal+OR+project%3Aopenstack%2Fstx%2Dnfv+OR+project%3Aopenstack%2Fstx%2Droot+OR+project%3Aopenstack%2Fstx%2Dtis%2Drepo+OR+project%3Aopenstack%2Fstx%2Dtools+OR+project%3Aopenstack%2Fstx%2Dupdate+OR+project%3Aopenstack%2Fstx%2Dupstream+OR+project%3Aopenstack%2Fstx%2Dutils%29+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1%2Czuul+NOT+reviewedby%3Aself&title=StarlingX+Review+Inbox&Needs+final+%2B2=label%3ACode%2DReview%3E%3D2+limit%3A50+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cself&Passed+Zuul%2C+No+Negative+Feedback+%28Small+Fixes%29=NOT+label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cstarlingx%2Dcore+delta%3A%3C%3D10&Passed+Zuul%2C+No+Negative+Feedback=NOT+label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cstarlingx%2Dcore+delta%3A%3E10&Needs+Feedback+%28Changes+older+than+5+days+that+have+not+been+reviewed+by+anyone%29=NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D1+age%3A5d&You+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=NOT+label%3ACode%2DReview%3C%3D%2D1%2Cself+NOT+label%3ACode%2DReview%3E%3D1%2Cself+reviewer%3Aself&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+2days%29=NOT+is%3Areviewed+age%3A2d StarlingX Gerrit Review Dashboard]<br />
** Also in [http://paste.openstack.org/show/723397/ StarlingX Gerrit Review Dashboard] (copy the URL from there into a browser bookmark)<br />
<br />
== Bug Tracking ==<br />
<br />
* [https://storyboard.openstack.org/#!/worklist/354 StarlingX Bug List]<br />
** This list is sorted manually by drag and drop. <br />
** Please create bugs in Storyboard for any issues you find, filed against one of the stx-* projects. If you can't find the right project, use stx-integ<br />
** After you create the bug, please add it to the Bug Worklist (link above)<br />
<br />
|style="vertical-align:top; width:50%;" |<br />
== Culture ==<br />
<br />
We are proud to be an OpenStack Foundation project!<br />
* We support and adhere to the [https://www.openstack.org/legal/community-code-of-conduct/ OpenStack community Code of Conduct]<br />
* We support and fully embrace the [https://governance.openstack.org/tc/reference/opens.html Four Opens]<br />
<br />
== Status and Planning ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-status Status Meeting Notes]<br />
* [https://storyboard.openstack.org/#!/project_group/86 StarlingX Storyboards]<br />
* [https://etherpad.openstack.org/p/stx-planning StarlingX Planning] (contains sub-projects and Worklists)<br />
<br />
== Upstream Status ==<br />
<br />
The StarlingX code base contains a number of out-of-tree patches against other open source components. One of our highest priorities is to contribute those changes to their upstream communities.<br />
<br />
TODO: Add a link to our Dashboard showing the status of upstream submissions<br />
<br />
== References ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-notes The list of repos] and other things<br />
<br />
== OpenStack Documentation ==<br />
<br />
These are references to general OpenStack material:<br />
<br />
* [https://docs.openstack.org/infra/manual/developers.html Developer's Guide]<br />
* [https://docs.openstack.org/infra/manual/creators.html Project Creator's Guide]<br />
* [https://wiki.openstack.org/wiki/How_To_Contribute The Contributors Guide] (the older wiki page)<br />
* [https://governance.openstack.org/tc/reference/project-testing-interface.html Project Testing Interface]<br />
|}</div>David.b.kinderhttps://wiki.openstack.org/w/index.php?title=StarlingX/Installation_Guide&diff=162177StarlingX/Installation Guide2018-06-20T20:01:35Z<p>David.b.kinder: /* Initializing Compute Host */</p>
<hr />
<div>== Intro ==<br />
<br />
This section describes how to install StarlingX in a virtualized environment using Libvirt/QEMU.<br />
<br />
==Software Configurations==<br />
<br />
* All In One<br />
* Standard Controller Storage<br />
* Duplex<br />
* Standard Dedicated Storage<br />
<br />
==Standard Controller Storage==<br />
<br />
==Requirements==<br />
<br />
Different use cases require different configurations. For general StarlingX deployment, the recommended minimum requirements include:<br />
<br />
===Hardware Requirements===<br />
<br />
A workstation computer with:<br />
<br />
* Processor: x86_64 (the only supported architecture), with hardware virtualization extensions<br />
* Memory: At least 32GB RAM<br />
* Hard Disk: 500GB HDD<br />
* Network: Two network adapters with active Internet connection<br />
<br />
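The hardware virtualization requirement can be checked from Linux before going further. The snippet below is a generic sanity check, not part of StarlingX tooling: it counts /proc/cpuinfo flag entries advertising Intel VT-x ("vmx") or AMD-V ("svm").

```shell
# Sketch: count CPU flag entries indicating hardware virtualization
# support. A result of 0 means KVM acceleration is unavailable
# (or the extensions are disabled in the BIOS/UEFI setup).
flags=$(grep -Ec '(vmx|svm)' /proc/cpuinfo 2>/dev/null || true)
[ -n "$flags" ] || flags=0
echo "virtualization-capable CPU entries: $flags"
```

A nonzero count confirms the processor requirement above is met.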
===Software Requirements===<br />
<br />
A workstation computer with:<br />
<br />
* Operating System: Freshly installed Ubuntu 16.04 LTS 64-bit<br />
* Proxy settings configured (if applicable)<br />
* Git<br />
* KVM/VirtManager<br />
* Libvirt Library<br />
* QEMU Full System Emulation Binaries<br />
* <stx-deployment> project<br />
* StarlingX ISO Image<br />
<br />
==Deployment Environment Setup==<br />
<br />
This section describes how to set up a StarlingX system in a workstation computer. After completing these steps, you will be able to deploy and run your StarlingX system on the following Linux distribution:<br />
<br />
* Ubuntu 16.04 LTS 64-bit<br />
<br />
===Updating Your Operating System===<br />
<br />
Before proceeding with the build, ensure your OS is up to date. You’ll first need to update the local database list of available packages:<br />
<br />
<pre><nowiki><br />
$ sudo apt-get update<br />
</nowiki></pre><br />
<br />
===Installing Requirements and Dependencies===<br />
<br />
Install the required packages in an Ubuntu host system with:<br />
<br />
<pre><nowiki><br />
$ sudo apt-get install git virt-manager libvirt-bin qemu-system<br />
</nowiki></pre><br />
<br />
===Installing Deployment Tool===<br />
<br />
Clone the <stx-deployment> project. Usually you’ll want to clone it under your user’s home directory.<br />
<br />
<pre><nowiki><br />
$ cd $HOME<br />
$ git clone <stx-deployment> <br />
</nowiki></pre><br />
<br />
===Getting the StarlingX ISO Image===<br />
<br />
1. Get the StarlingX ISO Image from:<br />
<br />
<pre><nowiki><br />
Tbd<br />
</nowiki></pre><br />
<br />
2. Copy the StarlingX ISO Image to the ''<stx-deployment>'' libvirt project directory, naming it bootimage.iso:<br />
<br />
<pre><nowiki><br />
$ cp <starlingx iso image> $HOME/<stx-deployment>/libvirt/bootimage.iso<br />
</nowiki></pre><br />
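<br />
Optionally, verify the integrity of the copied ISO before booting it. This is a hedged sketch: the verify_iso helper name is illustrative (not a StarlingX tool), and the expected checksum must come from wherever the image is published:<br />

```shell
# Hedged sketch: compare an ISO against an expected SHA-256 checksum.
# 'verify_iso' is an illustrative helper, and the expected value must be
# the checksum published alongside the image, if any.
verify_iso() {
    iso=$1
    expected=$2
    actual=$(sha256sum "$iso" | awk '{ print $1 }')
    if [ "$actual" = "$expected" ]; then
        echo "checksum OK"
    else
        echo "checksum MISMATCH"
    fi
}

# Example (hypothetical checksum value):
# verify_iso "$HOME/<stx-deployment>/libvirt/bootimage.iso" "<published sha256>"
```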
<br />
==Controller-0 Host Installation==<br />
<br />
Installing controller-0 involves initializing a host with software and then applying a configuration from the command line. The configured host becomes Controller-0. <br><br />
Procedure:<br />
<br />
# Using an ISO image of StarlingX, initialize the controller host via Libvirt/QEMU.<br />
# Configure the controller using the config_controller script.<br />
<br />
===Initializing Controller-0===<br />
This section describes how to initialize StarlingX on host Controller-0. Except where noted, all commands must be executed from a console on the workstation.<br />
<br />
Navigate to the ''<stx-deployment>'' libvirt project directory:<br />
<pre><nowiki><br />
$ cd <stx-deployment>/libvirt<br />
</nowiki></pre><br />
<br />
Run the install packages script:<br />
<pre><nowiki><br />
$ bash install_packages.sh<br />
</nowiki></pre><br />
<br />
Run the libvirt qemu setup script:<br />
<pre><nowiki><br />
$ bash setup_tic.sh<br />
</nowiki></pre><br />
<br />
From the KVM/VirtManager window, power on the host to be configured as Controller-0 and show the virtual machine console and details:<br />
* When the installer is loaded and the installer welcome screen appears in the Controller-0 host, select the type of installation "Standard Controller Configuration".<br />
* Select the "Graphical Console" as the console to use during installation.<br />
* Select "Standard Security Boot Profile" as the Security Profile.<br />
* Monitor the initialization until it is complete. When initialization is complete, the Controller-0 host reboots, briefly displays a GNU GRUB screen, and then boots automatically into the StarlingX image.<br />
<br />
Log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):<br />
<pre><nowiki><br />
Changing password for wrsroot.<br />
(current) UNIX Password:<br />
</nowiki></pre><br />
<br />
Enter a new password for the wrsroot account:<br />
<pre><nowiki><br />
New password:<br />
</nowiki></pre><br />
<br />
Enter the new password again to confirm it:<br />
<pre><nowiki><br />
Retype new password:<br />
</nowiki></pre><br />
<br />
Controller-0 is initialized with StarlingX, and is ready for configuration.<br />
<br />
===Configuring Controller-0===<br />
<br />
This section describes how to perform the Controller-0 configuration interactively. Except where noted, all the commands must be executed from the console of the active controller (here assumed to be controller-0).<br />
<br />
When run interactively, the config_controller script presents a series of prompts for initial configuration of StarlingX. The script is used to configure the first controller in the StarlingX cluster as controller-0. The prompts are grouped by configuration area. To start the script interactively, use the following command with no parameters and accept all the default values:<br />
<br />
<pre><nowiki><br />
controller-0:~$ sudo config_controller<br />
</nowiki></pre><br />
<br />
The output when config_controller script is run interactively is:<br />
<br />
<pre><nowiki><br />
WARNING: Command should only be run from the console. Continuing with this<br />
terminal may cause loss of connectivity and configuration failure<br />
...<br />
Apply the above configuration? [y/n]: y<br />
<br />
Applying configuration (this will take several minutes):<br />
<br />
01/08: Creating bootstrap configuration ... DONE<br />
02/08: Applying bootstrap manifest ... DONE<br />
03/08: Persisting local configuration ... DONE<br />
04/08: Populating initial system inventory ... DONE<br />
05/08: Creating system configuration ... DONE<br />
06/08: Applying controller manifest ... DONE<br />
07/08: Finalize controller configuration ... DONE<br />
08/08: Waiting for service activation ... DONE<br />
<br />
Configuration was applied<br />
<br />
Please complete any out of service commissioning steps with system commands and unlock controller to proceed.<br />
</nowiki></pre><br />
<br />
==Controller-0 and System Provision==<br />
<br />
===Configuring Provider Networks at Installation===<br />
<br />
You must set up provider networks at installation so that you can attach data interfaces and unlock the compute nodes.<br />
<br />
On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<pre><nowiki><br />
controller-0:~$ source /etc/nova/openrc<br />
</nowiki></pre><br />
<br />
Set up one provider network of the vlan type, named providernet-a:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan<br />
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a<br />
</nowiki></pre><br />
<br />
===Unlocking Controller-0===<br />
<br />
You must unlock controller-0 so that you can use it to install the remaining hosts. On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<pre><nowiki><br />
controller-0:~$ source /etc/nova/openrc<br />
</nowiki></pre><br />
<br />
Use the system host-unlock command:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-0<br />
</nowiki></pre><br />
<br />
The host is rebooted. During the reboot, the command line is unavailable, and any ssh connections are dropped. To monitor the progress of the reboot, use the controller-0 console.<br />
<br />
===Verifying the Controller-0 Configuration===<br />
<br />
On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<pre><nowiki><br />
controller-0:~$ source /etc/nova/openrc<br />
</nowiki></pre><br />
<br />
Verify that the StarlingX controller services are running:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ nova service-list<br />
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...<br />
| Id | Binary | Host | Zone | Status | State | ...<br />
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...<br />
| d7cdfaf0-9394-4053-b752-d609c837b729 | nova-conductor | controller-0 | internal | enabled | up | ...<br />
| 692c2659-7188-42ec-9ad1-c88ac3020dc6 | nova-scheduler | controller-0 | internal | enabled | up | ...<br />
| 5c7c9aad-696f-4dd1-a497-464bdd525e0c | nova-consoleauth | controller-0 | internal | enabled | up | ...<br />
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...<br />
</nowiki></pre><br />
<br />
Verify that controller-0 is unlocked, enabled, and available:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| id | hostname | personality | administrative | operational | availability |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| 1 | controller-0 | controller | unlocked | enabled | available |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
</nowiki></pre><br />
<br />
==Compute Host Installation==<br />
<br />
After initializing and configuring an active controller, you can add and configure a backup controller and additional compute or storage hosts. Use the system host-add command to add one or more host entries to the system inventory, assigning a personality, MAC address, IP address, and so on for each host. Then power on the hosts; they are recognized and configured according to their system inventory entries.<br />
<br />
===Initializing Compute Host===<br />
<br />
On the workstation, print the information for the virbr2 virtual interface associated with each compute-N host:<br />
<br />
<pre><nowiki><br />
$ sudo virsh domiflist compute-0 | grep virbr2<br />
vnet5 bridge virbr2 e1000 52:54:00:b6:1f:c7<br />
$ sudo virsh domiflist compute-1 | grep virbr2<br />
vnet9 bridge virbr2 e1000 52:54:00:da:58:b4<br />
</nowiki></pre><br />
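<br />
The MAC address needed in the next step is the fifth column of the virsh domiflist output. A small sketch extracts it so it can be passed to system host-add; the virbr2_mac helper name is illustrative, not a libvirt or StarlingX command:<br />

```shell
# Hedged sketch: pull the MAC address (5th column) of the virbr2 row out of
# 'virsh domiflist' output read from stdin. 'virbr2_mac' is an illustrative
# helper name only.
virbr2_mac() {
    awk '/virbr2/ { print $5 }'
}

# Demo against the sample output captured above:
printf 'vnet5      bridge     virbr2     e1000       52:54:00:b6:1f:c7\n' | virbr2_mac
```

On the workstation this could be combined as, for example, MAC=$(sudo virsh domiflist compute-0 | virbr2_mac).<br />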
<br />
On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<pre><nowiki><br />
controller-0:~$ source /etc/nova/openrc<br />
</nowiki></pre><br />
<br />
Use the system host-add command to add the compute-N hosts, specifying the compute personality and the MAC address of each host's virbr2 virtual interface (as reported by virsh domiflist above):<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-add -n compute-0 -p compute -m 52:54:00:b6:1f:c7<br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-add -n compute-1 -p compute -m 52:54:00:da:58:b4<br />
</nowiki></pre><br />
<br />
On the workstation, start the compute-N hosts: <br />
<br />
<pre><nowiki><br />
$ sudo virsh start compute-0<br />
$ sudo virsh start compute-1<br />
</nowiki></pre><br />
<br />
Once the message "Domain compute-N started" is displayed, open the virtual machine console and details for the compute-N host from the KVM/VirtManager window. The node is assigned the personality specified in the system host-add parameters. A display device menu appears on the console, with text customized for the personality (Controller, Storage, or Compute Node). You can start the installation manually by pressing Enter; otherwise, it starts automatically after a few seconds.<br />
<br />
On Controller-0, you can monitor the installation progress by running the system host-show command for the host periodically. Progress is shown in the install_state field.<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-show compute-0 | grep install<br />
| install_output | text |<br />
| install_state | booting |<br />
| install_state_info | None |<br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-show compute-1 | grep install<br />
| install_output | text |<br />
| install_state | booting |<br />
| install_state_info | None |<br />
[wrsroot@controller-0 ~(keystone_admin)]$ <br />
</nowiki></pre><br />
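<br />
Rather than re-running system host-show by hand, the install_state field can be parsed and polled. This is a hedged sketch: install_state is an illustrative helper name, and the terminal state string to wait for is an assumption to confirm on your system:<br />

```shell
# Hedged sketch: extract the install_state value from
# 'system host-show <host>' output read from stdin.
# 'install_state' is an illustrative helper name only.
install_state() {
    awk -F'|' '$2 ~ /^ *install_state *$/ { gsub(/ /, "", $3); print $3 }'
}

# Demo against the sample output above:
printf '| install_output     | text    |\n| install_state      | booting |\n| install_state_info | None    |\n' | install_state
```

A polling loop could then look like: until system host-show compute-0 | install_state | grep -q completed; do sleep 30; done (the exact terminal state string is an assumption).<br />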
<br />
Wait while each compute-N host is configured and rebooted. Up to 20 minutes may be required, depending on hardware. When the reboot is complete, the host is reported as Locked, Disabled, and Online.<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| id | hostname | personality | administrative | operational | availability |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| 1 | controller-0 | controller | unlocked | enabled | available |<br />
| 2 | compute-0 | compute | locked | disabled | online |<br />
| 3 | compute-1 | compute | locked | disabled | online |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
[wrsroot@controller-0 ~(keystone_admin)]$ <br />
</nowiki></pre><br />
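<br />
The same kind of parsing works for the system host-list table above, which is useful for scripting the wait for a host to reach a given state. A hedged sketch (host_availability is an illustrative helper name, and the column layout is assumed to match the table shown):<br />

```shell
# Hedged sketch: print the availability column for a named host from
# 'system host-list' table output read from stdin.
# 'host_availability' is an illustrative helper name only.
host_availability() {
    awk -F'|' -v h="$1" '{ gsub(/ /, "", $3) } $3 == h { gsub(/ /, "", $7); print $7 }'
}
```

For example, system host-list | host_availability compute-0 should print "online" at this stage of the installation.<br />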
<br />
==Compute Host Provision==<br />
<br />
You must configure the network interfaces and the storage disks on a host before you can unlock it. <br />
<br />
On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<pre><nowiki><br />
controller-0:~$ source /etc/nova/openrc<br />
</nowiki></pre><br />
<br />
===Provisioning Network Interfaces on a Compute Host===<br />
<br />
Provision the data interfaces:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -nt data compute-0 ens6<br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -nt data compute-1 ens6<br />
</nowiki></pre><br />
<br />
===Provisioning Storage on a Compute Host===<br />
<br />
Provision storage on the compute hosts. The following commands reserve a vSwitch core and set up the nova-local local volume group (backed by /dev/sdb) on each compute node:<br />
<br />
<pre><nowiki><br />
system host-list --nowrap &> /dev/null && NOWRAP="--nowrap"<br />
ALL_COMPUTE=`system host-list $NOWRAP | grep compute- | cut -d '|' -f 3`<br />
# Run the following provisioning commands for each compute node<br />
for compute in $ALL_COMPUTE; do<br />
  system host-cpu-modify ${compute} -f vswitch -p0 1<br />
  system host-lvg-add ${compute} nova-local<br />
  system host-pv-add ${compute} nova-local $(system host-disk-list ${compute} $NOWRAP | grep /dev/sdb | awk '{print $2}')<br />
  system host-lvg-modify -b image -s 10240 ${compute} nova-local<br />
done<br />
</nowiki></pre><br />
<br />
===Unlocking a Compute Host===<br />
<br />
Use the system host-unlock command to unlock the node:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-0<br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-1<br />
</nowiki></pre><br />
<br />
Wait while each compute-N host is rebooted. Up to 10 minutes may be required, depending on hardware. After the reboot, the host's Availability State is reported as In-Test.<br />
<br />
==System Health Check==<br />
<br />
After a few minutes, all nodes should be reported as Unlocked, Enabled, and Available:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| id | hostname | personality | administrative | operational | availability |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| 1 | controller-0 | controller | unlocked | enabled | available |<br />
| 2 | compute-0 | compute | unlocked | enabled | available |<br />
| 3 | compute-1 | compute | unlocked | enabled | available |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
</nowiki></pre></div>David.b.kinderhttps://wiki.openstack.org/w/index.php?title=StarlingX/Installation_Guide&diff=162176StarlingX/Installation Guide2018-06-20T20:01:21Z<p>David.b.kinder: /* Initializing Compute Host */</p>
<hr />
<div>== Intro ==<br />
<br />
This section contains information about the StarlingX installation in a virtualized environment using Libvirt/QEMU.<br />
<br />
==Software Configurations==<br />
<br />
* All In One<br />
* Standard Controller Storage<br />
* Duplex<br />
* Standard Dedicated Storage<br />
<br />
==Standard Controller Storage==<br />
<br />
==Requirements==<br />
<br />
Different use cases require different configurations. For general StarlingX deployment, the recommended minimum requirements include:<br />
<br />
===Hardware Requirements===<br />
<br />
A workstation computer with:<br />
<br />
* Processor: x86_64 only supported architecture with hardware virtualization extensions<br />
* Memory: At least 32GB RAM<br />
* Hard Disk: 500GB HDD<br />
* Network: Two network adapters with active Internet connection<br />
<br />
===Software Requirements===<br />
<br />
A workstation computer with:<br />
<br />
* Operating System: Freshly installed Ubuntu 16.04 LTS 64-bit<br />
* Proxy settings configured (if applies)<br />
* Git<br />
* KVM/VirtManager<br />
* Libvirt Library<br />
* QEMU Full System Emulation Binaries<br />
* <stx-deployment> project<br />
* StarlingX ISO Image<br />
<br />
==Deployment Environment Setup==<br />
<br />
This section describes how to set up a StarlingX system in a workstation computer. After completing these steps, you will be able to deploy and run your StarlingX system on the following Linux distribution:<br />
<br />
* Ubuntu 16.04 LTS 64-bit<br />
<br />
===Updating Your Operating System===<br />
<br />
Before proceeding with the build, ensure your OS is up to date. You’ll first need to update the local database list of available packages:<br />
<br />
<pre><nowiki><br />
$ sudo apt-get update<br />
</nowiki></pre><br />
<br />
===Installing Requirements and Dependencies===<br />
<br />
Install the required packages in an Ubuntu host system with:<br />
<br />
<pre><nowiki><br />
$ sudo apt-get install git virt-manager libvirt-bin qemu-system<br />
</nowiki></pre><br />
<br />
===Installing Deployment Tool===<br />
<br />
Clone the <stx-deployment> project. Usually you’ll want to clone it under your user’s home directory.<br />
<br />
<pre><nowiki><br />
$ cd $HOME<br />
$ git clone <stx-deployment> <br />
</nowiki></pre><br />
<br />
===Getting the StarlingX ISO Image===<br />
<br />
1. Get the StarlingX ISO Image from:<br />
<br />
<pre><nowiki><br />
Tbd<br />
</nowiki></pre><br />
<br />
2. Copy the StarlingX ISO Image to the ''<stx-deployment>'' libvirt project directory naming it as bootimage.iso:<br />
<br />
<pre><nowiki><br />
$ cp <starlingx iso image> $HOME/<stx-deployment>/libvirt/bootimage.iso<br />
</nowiki></pre><br />
<br />
==Controller-0 Host Installation==<br />
<br />
Installing controller-0 involves initializing a host with software and then applying a configuration from the command line. The configured host becomes Controller-0. <br><br />
Procedure:<br />
<br />
# Using an ISO image of StarlingX, initialize the controller host via Libvirt/QEMU.<br />
# Configure the controller using the config_controller script.<br />
<br />
===Initializing Controller-0===<br />
This section describes how to initialize StarlingX in host Controller-0. Except where noted, all the commands must be executed from a console of the Workstation.<br />
<br />
Navigate to the ''<stx-deployment>'' libvirt project directory:<br />
<pre><nowiki><br />
$ cd <stx-deployment>/libvirt<br />
</nowiki></pre><br />
<br />
Run the install packages script:<br />
<pre><nowiki><br />
$ bash install_packages.sh<br />
</nowiki></pre><br />
<br />
Run the libvirt qemu setup script:<br />
<pre><nowiki><br />
$ bash setup_tic.sh<br />
</nowiki></pre><br />
<br />
From the KVM/VirtManager window, power on the host to be configured as Controller-0 and show the virtual machine console and details:<br />
* When the installer is loaded and the installer welcome screen appears in the Controller-0 host, select the type of installation "Standard Controller Configuration".<br />
* Select the "Graphical Console" as the console to use during installation.<br />
* Select "Standard Security Boot Profile" as the Security Profile.<br />
* Monitor the initialization until it is complete. When initialization is complete, a reboot is initiated on the Controller-0 host, briefly displays a GNU GRUB screen, and then boots automatically into the StarlingX image.<br />
<br />
Log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):<br />
<pre><nowiki><br />
Changing password for wrsroot.<br />
(current) UNIX Password:<br />
</nowiki></pre><br />
<br />
Enter a new password for the wrsroot account:<br />
<pre><nowiki><br />
New password:<br />
</nowiki></pre><br />
<br />
Enter the new password again to confirm it:<br />
<pre><nowiki><br />
Retype new password:<br />
</nowiki></pre><br />
<br />
Controller-0 is initialized with StarlingX, and is ready for configuration.<br />
<br />
===Configuring Controller-0===<br />
<br />
This section describes how to perform the Controller-0 configuration interactively. Except where noted, all the commands must be executed from the console of the active controller (here assumed to be controller-0.<br />
<br />
When run interactively, the config_controller script presents a series of prompts for initial configuration of StarlingX. The script is used to configure the first controller in the StarlingX cluster as controller-0. The prompts are grouped by configuration area. To start the script interactively, use the following command with no parameters and accept all the default values:<br />
<br />
<pre><nowiki><br />
controller-0:~$ sudo config_controller<br />
</nowiki></pre><br />
<br />
The output when config_controller script is run interactively is:<br />
<br />
<pre><nowiki><br />
WARNING: Command should only be run from the console. Continuing with this<br />
terminal may cause loss of connectivity and configuration failure<br />
...<br />
Apply the above configuration? [y/n]: y<br />
<br />
Applying configuration (this will take several minutes):<br />
<br />
01/08: Creating bootstrap configuration ... DONE<br />
02/08: Applying bootstrap manifest ... DONE<br />
03/08: Persisting local configuration ... DONE<br />
04/08: Populating initial system inventory ... DONE<br />
05:08: Creating system configuration ... DONE<br />
06:08: Applying controller manifest ... DONE<br />
07:08: Finalize controller configuration ... DONE<br />
08:08: Waiting for service activation ... DONE<br />
<br />
Configuration was applied<br />
<br />
Please complete any out of service comissioning steps with system commands and unlock controller to proceed.<br />
</nowiki></pre><br />
<br />
==Controller-0 and System Provision==<br />
<br />
===Configuring Provider Networks at Installation===<br />
<br />
You must set up provider networks at installation so that you can attach data interfaces and unlock the compute nodes.<br />
<br />
On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<pre><nowiki><br />
controller-0:~$ source /etc/nova/openrc<br />
</nowiki></pre><br />
<br />
Set up one provider network of the vlan type, named providernet-a:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan<br />
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a<br />
</nowiki></pre><br />
<br />
===Unlocking Controller-0===<br />
<br />
You must unlock controller-0 so that you can use it to install the remaining hosts. On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<pre><nowiki><br />
controller-0:~$ source /etc/nova/openrc<br />
</nowiki></pre><br />
<br />
Use the system host-unlock command:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-0<br />
</nowiki></pre><br />
<br />
The host is rebooted. During the reboot, the command line is unavailable, and any ssh connections are dropped. To monitor the progress of the reboot, use the controller-0 console.<br />
<br />
===Verifying the Controller-0 Configuration===<br />
<br />
On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<pre><nowiki><br />
controller-0:~$ source /etc/nova/openrc<br />
</nowiki></pre><br />
<br />
Verify that the Titanium Cloud controller services are running:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ nova service-list<br />
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...<br />
| Id | Binary | Host | Zone | Status | State | ...<br />
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...<br />
| d7cdfaf0-9394-4053-b752-d609c837b729 | nova-conductor | controller-0 | internal | enabled | up | ...<br />
| 692c2659-7188-42ec-9ad1-c88ac3020dc6 | nova-scheduler | controller-0 | internal | enabled | up | ...<br />
| 5c7c9aad-696f-4dd1-a497-464bdd525e0c | nova-consoleauth | controller-0 | internal | enabled | up | ...<br />
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...<br />
</nowiki></pre><br />
<br />
Verify that controller-0 is unlocked, enabled, and available:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| id | hostname | personality | administrative | operational | availability |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| 1 | controller-0 | controller | unlocked | enabled | available |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
</nowiki></pre><br />
<br />
==Compute Host Installation==<br />
<br />
After initializing and configuring an active controller, you can add and configure a backup controller and additional compute or storage hosts. Using the system host-add command, you add one or more host entries to the system inventory, assigning a personality, MAC address, IP address, and so on for each host, and then you power on the hosts, causing them to be recognized and configured according to the system inventory entry.<br />
<br />
===Initializing Compute Host===<br />
<br />
On Workstation, print information of virbr2 virtual interface associated to compute-N host:<br />
<br />
<pre><nowiki><br />
$ sudo virsh domiflist compute-0 | grep virbr2<br />
vnet5 bridge virbr2 e1000 52:54:00:b6:1f:c7<br />
$ sudo virsh domiflist compute-1 | grep virbr2<br />
vnet9 bridge virbr2 e1000 52:54:00:da:58:b4<br />
</nowiki></pre><br />
<br />
On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<pre><nowiki><br />
controller-0:~$ source /etc/nova/openrc<br />
</nowiki></pre><br />
<br />
Use the system host-add command to add compute-N host and specify their compute personality using their associated virbr2 virtual interfaces MAC addresses:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-add -n compute-0 -p compute -m 52:54:00:15:7a:86<br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-add -n compute-1 -p compute -m 52:54:00:aa:a2:46<br />
</nowiki></pre><br />
<br />
On Workstation, start Compute-N host: <br />
<br />
<pre><nowiki><br />
$ sudo virsh start compute-0<br />
$ sudo virsh start compute-1<br />
</nowiki></pre><br />
<br />
Once the message "Domain compute-N started" is displayed, from the KVM/VirtManager window, power on the host to be configured as compute-N and show the virtual machine console and details. The node is assigned the personality specified in the system host-add parameters. A display device menu appears on the console, with text customized for the personality (Controller, Storage, or Compute Node). You can start the installation manually by pressing Enter. Otherwise, it is started automatically after a few seconds.<br />
<br />
On Controller-0, you can monitor the installation progress by running the system host-show command for the host periodically. Progress is shown in the install_state field.<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-show compute-0 | grep install<br />
| install_output | text |<br />
| install_state | booting |<br />
| install_state_info | None |<br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-show compute-1 | grep install<br />
| install_output | text |<br />
| install_state | booting |<br />
| install_state_info | None |<br />
[wrsroot@controller-0 ~(keystone_admin)]$ <br />
</nowiki></pre><br />
<br />
Wait while the compute-N is configured and rebooted. Up to 20 minutes may be required for a reboot, depending on hardware. When the reboot is complete, the compute-N is reported as Locked, Disabled, and Online.<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| id | hostname | personality | administrative | operational | availability |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| 1 | controller-0 | controller | unlocked | enabled | available |<br />
| 2 | compute-0 | compute | locked | disabled | online |<br />
| 3 | compute-1 | compute | locked | disabled | online |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
[wrsroot@controller-0 ~(keystone_admin)]$ <br />
</nowiki></pre><br />
<br />
==Compute Host Provision==<br />
<br />
You must configure the network interfaces and the storage disks on a host before you can unlock it. <br />
<br />
On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<pre><nowiki><br />
controller-0:~$ source /etc/nova/openrc<br />
</nowiki></pre><br />
<br />
===Provisioning Network Interfaces on a Compute Host===<br />
<br />
Provision the data interfaces<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -nt data compute-0 ens6<br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -nt data compute-1 ens6<br />
</nowiki></pre><br />
<br />
===Provisioning Storage on a Compute Host===<br />
<br />
Ensure that provider networks are available for the data interfaces. Provision the data interfaces:<br />
<br />
<pre><nowiki><br />
system host-list --nowrap &> /dev/null && NOWRAP="--nowrap"<br />
ALL_COMPUTE=`system host-list $NOWRAP | grep compute- | cut -d '|' -f 3`<br />
# for each compute node, we should run the followings<br />
for compute in $ALL_COMPUTE; do<br />
system host-cpu-modify ${compute} -f vswitch -p0 1<br />
system host-lvg-add ${compute} nova-local<br />
system host-pv-add ${compute} nova-local $(system host-disk-list ${compute} $NOWRAP | grep /dev/sdb | awk '{print $2}')<br />
system host-lvg-modify -b image -s 10240 ${compute} nova-local<br />
done<br />
</nowiki></pre><br />
<br />
===Unlocking a Compute Host===<br />
<br />
Use the system host-unlock command to unlock the node:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-0<br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-1<br />
</nowiki></pre><br />
<br />
Wait while the compute-N is rebooted. Up to 10 minutes may be required for a reboot, depending on hardware. The host is rebooted, and its Availability State is reported as In-Test.<br />
<br />
==System Health Check==<br />
<br />
After a few minutes, all nodes shall be reported as Unlocked, Enabled, and Available:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| id | hostname | personality | administrative | operational | availability |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| 1 | controller-0 | controller | unlocked | enabled | available |<br />
| 2 | compute-0 | compute | unlocked | enabled | available |<br />
| 3 | compute-1 | compute | unlocked | enabled | available |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
</nowiki></pre></div>David.b.kinderhttps://wiki.openstack.org/w/index.php?title=StarlingX/Installation_Guide&diff=162175StarlingX/Installation Guide2018-06-20T19:56:11Z<p>David.b.kinder: /* Verifying the Controller-0 Configuration */</p>
<hr />
<div>== Intro ==<br />
<br />
This section contains information about the StarlingX installation in a virtualized environment using Libvirt/QEMU.<br />
<br />
==Software Configurations==<br />
<br />
* All In One<br />
* Standard Controller Storage<br />
* Duplex<br />
* Standard Dedicated Storage<br />
<br />
==Standard Controller Storage==<br />
<br />
==Requirements==<br />
<br />
Different use cases require different configurations. For general StarlingX deployment, the recommended minimum requirements include:<br />
<br />
===Hardware Requirements===<br />
<br />
A workstation computer with:<br />
<br />
* Processor: x86_64 (the only supported architecture), with hardware virtualization extensions<br />
* Memory: At least 32GB RAM<br />
* Hard Disk: 500GB HDD<br />
* Network: Two network adapters with active Internet connection<br />
<br />
===Software Requirements===<br />
<br />
A workstation computer with:<br />
<br />
* Operating System: Freshly installed Ubuntu 16.04 LTS 64-bit<br />
* Proxy settings configured (if applicable)<br />
* Git<br />
* KVM/VirtManager<br />
* Libvirt Library<br />
* QEMU Full System Emulation Binaries<br />
* <stx-deployment> project<br />
* StarlingX ISO Image<br />
<br />
==Deployment Environment Setup==<br />
<br />
This section describes how to set up a StarlingX system in a workstation computer. After completing these steps, you will be able to deploy and run your StarlingX system on the following Linux distribution:<br />
<br />
* Ubuntu 16.04 LTS 64-bit<br />
<br />
===Updating Your Operating System===<br />
<br />
Before proceeding with the deployment, ensure your OS is up to date. You’ll first need to update the local database of available packages:<br />
<br />
<pre><nowiki><br />
$ sudo apt-get update<br />
</nowiki></pre><br />
<br />
===Installing Requirements and Dependencies===<br />
<br />
Install the required packages in an Ubuntu host system with:<br />
<br />
<pre><nowiki><br />
$ sudo apt-get install git virt-manager libvirt-bin qemu-system<br />
</nowiki></pre><br />
<br />
===Installing Deployment Tool===<br />
<br />
Clone the <stx-deployment> project. Usually you’ll want to clone it under your user’s home directory.<br />
<br />
<pre><nowiki><br />
$ cd $HOME<br />
$ git clone <stx-deployment> <br />
</nowiki></pre><br />
<br />
===Getting the StarlingX ISO Image===<br />
<br />
1. Get the StarlingX ISO Image from:<br />
<br />
<pre><nowiki><br />
Tbd<br />
</nowiki></pre><br />
<br />
2. Copy the StarlingX ISO Image to the ''<stx-deployment>'' libvirt project directory, naming it bootimage.iso:<br />
<br />
<pre><nowiki><br />
$ cp <starlingx iso image> $HOME/<stx-deployment>/libvirt/bootimage.iso<br />
</nowiki></pre><br />
<br />
==Controller-0 Host Installation==<br />
<br />
Installing controller-0 involves initializing a host with software and then applying a configuration from the command line. The configured host becomes Controller-0. <br><br />
Procedure:<br />
<br />
# Using an ISO image of StarlingX, initialize the controller host via Libvirt/QEMU.<br />
# Configure the controller using the config_controller script.<br />
<br />
===Initializing Controller-0===<br />
This section describes how to initialize StarlingX in host Controller-0. Except where noted, all the commands must be executed from a console of the Workstation.<br />
<br />
Navigate to the ''<stx-deployment>'' libvirt project directory:<br />
<pre><nowiki><br />
$ cd <stx-deployment>/libvirt<br />
</nowiki></pre><br />
<br />
Run the install packages script:<br />
<pre><nowiki><br />
$ bash install_packages.sh<br />
</nowiki></pre><br />
<br />
Run the libvirt qemu setup script:<br />
<pre><nowiki><br />
$ bash setup_tic.sh<br />
</nowiki></pre><br />
<br />
From the KVM/VirtManager window, power on the host to be configured as Controller-0 and show the virtual machine console and details:<br />
* When the installer is loaded and the installer welcome screen appears in the Controller-0 host, select the type of installation "Standard Controller Configuration".<br />
* Select the "Graphical Console" as the console to use during installation.<br />
* Select "Standard Security Boot Profile" as the Security Profile.<br />
* Monitor the initialization until it is complete. When initialization is complete, the Controller-0 host reboots, briefly displays a GNU GRUB screen, and then boots automatically into the StarlingX image.<br />
<br />
Log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):<br />
<pre><nowiki><br />
Changing password for wrsroot.<br />
(current) UNIX Password:<br />
</nowiki></pre><br />
<br />
Enter a new password for the wrsroot account:<br />
<pre><nowiki><br />
New password:<br />
</nowiki></pre><br />
<br />
Enter the new password again to confirm it:<br />
<pre><nowiki><br />
Retype new password:<br />
</nowiki></pre><br />
<br />
Controller-0 is initialized with StarlingX, and is ready for configuration.<br />
<br />
===Configuring Controller-0===<br />
<br />
This section describes how to perform the Controller-0 configuration interactively. Except where noted, all the commands must be executed from the console of the active controller (here assumed to be controller-0).<br />
<br />
When run interactively, the config_controller script presents a series of prompts for initial configuration of StarlingX. The script is used to configure the first controller in the StarlingX cluster as controller-0. The prompts are grouped by configuration area. To start the script interactively, use the following command with no parameters and accept all the default values:<br />
<br />
<pre><nowiki><br />
controller-0:~$ sudo config_controller<br />
</nowiki></pre><br />
<br />
The output when config_controller script is run interactively is:<br />
<br />
<pre><nowiki><br />
WARNING: Command should only be run from the console. Continuing with this<br />
terminal may cause loss of connectivity and configuration failure<br />
...<br />
Apply the above configuration? [y/n]: y<br />
<br />
Applying configuration (this will take several minutes):<br />
<br />
01/08: Creating bootstrap configuration ... DONE<br />
02/08: Applying bootstrap manifest ... DONE<br />
03/08: Persisting local configuration ... DONE<br />
04/08: Populating initial system inventory ... DONE<br />
05/08: Creating system configuration ... DONE<br />
06/08: Applying controller manifest ... DONE<br />
07/08: Finalize controller configuration ... DONE<br />
08/08: Waiting for service activation ... DONE<br />
<br />
Configuration was applied<br />
<br />
Please complete any out-of-service commissioning steps with system commands and unlock controller to proceed.<br />
</nowiki></pre><br />
<br />
==Controller-0 and System Provision==<br />
<br />
===Configuring Provider Networks at Installation===<br />
<br />
You must set up provider networks at installation so that you can attach data interfaces and unlock the compute nodes.<br />
<br />
On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<pre><nowiki><br />
controller-0:~$ source /etc/nova/openrc<br />
</nowiki></pre><br />
<br />
Set up one provider network of the vlan type, named providernet-a:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan<br />
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a<br />
</nowiki></pre><br />
<br />
===Unlocking Controller-0===<br />
<br />
You must unlock controller-0 so that you can use it to install the remaining hosts. On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<pre><nowiki><br />
controller-0:~$ source /etc/nova/openrc<br />
</nowiki></pre><br />
<br />
Use the system host-unlock command:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-0<br />
</nowiki></pre><br />
<br />
The host is rebooted. During the reboot, the command line is unavailable, and any ssh connections are dropped. To monitor the progress of the reboot, use the controller-0 console.<br />
<br />
===Verifying the Controller-0 Configuration===<br />
<br />
On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<pre><nowiki><br />
controller-0:~$ source /etc/nova/openrc<br />
</nowiki></pre><br />
<br />
Verify that the Titanium Cloud controller services are running:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ nova service-list<br />
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...<br />
| Id | Binary | Host | Zone | Status | State | ...<br />
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...<br />
| d7cdfaf0-9394-4053-b752-d609c837b729 | nova-conductor | controller-0 | internal | enabled | up | ...<br />
| 692c2659-7188-42ec-9ad1-c88ac3020dc6 | nova-scheduler | controller-0 | internal | enabled | up | ...<br />
| 5c7c9aad-696f-4dd1-a497-464bdd525e0c | nova-consoleauth | controller-0 | internal | enabled | up | ...<br />
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...<br />
</nowiki></pre><br />
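If you want to script this verification, one approach is to count service rows whose State column is not "up". The sketch below works on an inline sample shaped like the listing above (hypothetical IDs), not live CLI output:<br />
<br />
```shell
# Inline sample rows in the shape of `nova service-list` output
# (hypothetical IDs, mirroring the listing above).
sample='| d7cdfaf0 | nova-conductor   | controller-0 | internal | enabled | up |
| 692c2659 | nova-scheduler   | controller-0 | internal | enabled | up |
| 5c7c9aad | nova-consoleauth | controller-0 | internal | enabled | up |'

# Count data rows whose State column (7th "|"-separated field) is not "up".
down=$(echo "$sample" | awk -F'|' 'NF > 1 && $7 !~ /up/ {n++} END {print n+0}')
echo "services down: $down"
```
<br />
A nonzero count would indicate a controller service that has not come up yet.<br />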
<br />
Verify that controller-0 is unlocked, enabled, and available:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| id | hostname | personality | administrative | operational | availability |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| 1 | controller-0 | controller | unlocked | enabled | available |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
</nowiki></pre><br />
<br />
==Compute Host Installation==<br />
<br />
After initializing and configuring an active controller, you can add and configure a backup controller and additional compute or storage hosts. Using the system host-add command, you add one or more host entries to the system inventory, assigning a personality, MAC address, IP address, and so on for each host. You then power on the hosts, causing them to be recognized and configured according to their system inventory entries.<br />
<br />
===Initializing Compute Host===<br />
<br />
On the workstation, print the information for the virbr2 virtual interface associated with each compute-N host:<br />
<br />
<pre><nowiki><br />
$ sudo virsh domiflist compute-0 | grep virbr2<br />
vnet5 bridge virbr2 e1000 52:54:00:b6:1f:c7<br />
$ sudo virsh domiflist compute-1 | grep virbr2<br />
vnet9 bridge virbr2 e1000 52:54:00:da:58:b4<br />
</nowiki></pre><br />
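The MAC address needed in the next step is the fifth column of the domiflist output. A minimal extraction sketch, using an inline sample line (hypothetical values) rather than live virsh output:<br />
<br />
```shell
# Hypothetical sample line in the shape printed by `virsh domiflist`:
# interface  type    source  model  MAC
line='vnet5      bridge     virbr2     e1000      52:54:00:b6:1f:c7'

# The MAC address is the fifth whitespace-separated field.
mac=$(echo "$line" | awk '{print $5}')
echo "$mac"
```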
<br />
On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<pre><nowiki><br />
controller-0:~$ source /etc/nova/openrc<br />
</nowiki></pre><br />
<br />
Use the system host-add command to add each compute-N host, specifying the compute personality and the MAC address of the host's associated virbr2 virtual interface:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-add -n compute-0 -p compute -m 52:54:00:b6:1f:c7<br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-add -n compute-1 -p compute -m 52:54:00:da:58:b4<br />
</nowiki></pre><br />
<br />
On the workstation, start each compute-N host:<br />
<br />
<pre><nowiki><br />
$ sudo virsh start compute-0<br />
$ sudo virsh start compute-1<br />
</nowiki></pre><br />
<br />
Once the message "Domain compute-N started" is displayed, open the compute-N virtual machine console and details from the KVM/VirtManager window. The node is assigned the personality specified in the system host-add parameters. A display device menu appears on the console, with text customized for the personality (Controller, Storage, or Compute Node). You can start the installation manually by pressing Enter; otherwise, it starts automatically after a few seconds.<br />
<br />
On Controller-0, you can monitor the installation progress by running the system host-show command for the host periodically. Progress is shown in the install_state field.<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-show compute-0 | grep install<br />
| install_output | text |<br />
| install_state | booting |<br />
| install_state_info | None |<br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-show compute-1 | grep install<br />
| install_output | text |<br />
| install_state | booting |<br />
| install_state_info | None |<br />
[wrsroot@controller-0 ~(keystone_admin)]$ <br />
</nowiki></pre><br />
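The manual polling above can also be scripted as a loop that repeats until install_state reaches a terminal value. The sketch below substitutes a stub for the real system host-show query, so only the loop structure is meaningful:<br />
<br />
```shell
# Stub standing in for `system host-show compute-0 | grep install_state`;
# a real script would query the system CLI (and sleep between polls).
attempt=0
get_install_state() {
  attempt=$((attempt + 1))
  if [ "$attempt" -lt 3 ]; then
    INSTALL_STATE="booting"
  else
    INSTALL_STATE="completed"
  fi
}

# Poll until the stubbed install state reaches "completed".
INSTALL_STATE=""
while [ "$INSTALL_STATE" != "completed" ]; do
  get_install_state
done
echo "polls: $attempt, state: $INSTALL_STATE"
```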
<br />
Wait while compute-N is configured and rebooted. Up to 20 minutes may be required for a reboot, depending on hardware. When the reboot is complete, compute-N is reported as Locked, Disabled, and Online.<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| id | hostname | personality | administrative | operational | availability |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| 1 | controller-0 | controller | unlocked | enabled | available |<br />
| 2 | compute-0 | compute | locked | disabled | online |<br />
| 3 | compute-1 | compute | locked | disabled | online |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
[wrsroot@controller-0 ~(keystone_admin)]$ <br />
</nowiki></pre><br />
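Table output like this is also straightforward to check from a script. A sketch that extracts the availability column for a named host, using an inline copy of the table above as sample input:<br />
<br />
```shell
# Inline sample rows in the shape of `system host-list` output.
table='| 1 | controller-0 | controller | unlocked | enabled  | available |
| 2 | compute-0    | compute    | locked   | disabled | online    |
| 3 | compute-1    | compute    | locked   | disabled | online    |'

# Availability is the 7th "|"-separated field; strip padding spaces.
availability_of() {
  echo "$table" | awk -F'|' -v host="$1" '$3 ~ host {gsub(/ /, "", $7); print $7}'
}

availability_of compute-0   # prints "online" for this sample
```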
<br />
==Compute Host Provision==<br />
<br />
You must configure the network interfaces and the storage disks on a host before you can unlock it. <br />
<br />
On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<pre><nowiki><br />
controller-0:~$ source /etc/nova/openrc<br />
</nowiki></pre><br />
<br />
===Provisioning Network Interfaces on a Compute Host===<br />
<br />
Provision the data interfaces:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -nt data compute-0 ens6<br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -nt data compute-1 ens6<br />
</nowiki></pre><br />
<br />
===Provisioning Storage on a Compute Host===<br />
<br />
Ensure that provider networks are available for the data interfaces. Provision the vSwitch CPU and the nova-local local volume group on each compute host:<br />
<br />
<pre><nowiki><br />
system host-list --nowrap &> /dev/null && NOWRAP="--nowrap"<br />
ALL_COMPUTE=$(system host-list $NOWRAP | grep compute- | cut -d '|' -f 3)<br />
# Run the following provisioning commands for each compute node<br />
for compute in $ALL_COMPUTE; do<br />
    system host-cpu-modify ${compute} -f vswitch -p0 1<br />
    system host-lvg-add ${compute} nova-local<br />
    system host-pv-add ${compute} nova-local $(system host-disk-list ${compute} $NOWRAP | grep /dev/sdb | awk '{print $2}')<br />
    system host-lvg-modify -b image -s 10240 ${compute} nova-local<br />
done<br />
</nowiki></pre><br />
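The trickiest line in the provisioning loop is the nested host-disk-list lookup, which picks out the UUID column for /dev/sdb. The same extraction can be verified against an inline sample table (hypothetical UUIDs, not real CLI output):<br />
<br />
```shell
# Inline sample in the shape of `system host-disk-list` output
# (hypothetical UUIDs).
disks='| uuid                                 | device_node |
| 6b733272-8336-4d37-bb12-e9c0bd1abbb1 | /dev/sda    |
| 534352d8-fec2-4ca5-a0bc-b14d3a0e8a75 | /dev/sdb    |'

# Same pipeline as the provisioning loop: select the /dev/sdb row,
# then take the second whitespace-separated field (the UUID).
uuid=$(echo "$disks" | grep /dev/sdb | awk '{print $2}')
echo "$uuid"
```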
<br />
===Unlocking a Compute Host===<br />
<br />
Use the system host-unlock command to unlock the node:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-0<br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-1<br />
</nowiki></pre><br />
<br />
Wait while compute-N reboots. Up to 10 minutes may be required for a reboot, depending on hardware. While the host reboots, its Availability State is reported as In-Test.<br />
<br />
==System Health Check==<br />
<br />
After a few minutes, all nodes are reported as Unlocked, Enabled, and Available:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| id | hostname | personality | administrative | operational | availability |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| 1 | controller-0 | controller | unlocked | enabled | available |<br />
| 2 | compute-0 | compute | unlocked | enabled | available |<br />
| 3 | compute-1 | compute | unlocked | enabled | available |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
</nowiki></pre></div>David.b.kinderhttps://wiki.openstack.org/w/index.php?title=StarlingX/Developer_Guide&diff=162173StarlingX/Developer Guide2018-06-20T19:53:19Z<p>David.b.kinder: </p>
<hr />
<div>This section contains the steps for building a StarlingX ISO.<br />
<br />
== Requirements ==<br />
<br />
The recommended minimum requirements include:<br />
<br />
=== Hardware Requirements ===<br />
<br />
A workstation computer with:<br />
<br />
* Processor: x86_64 (the only supported architecture)<br />
* Memory: At least 32GB RAM<br />
* Hard Disk: 100GB HDD<br />
* Network: Network adapter with active Internet connection<br />
<br />
=== Software Requirements ===<br />
<br />
A workstation computer with:<br />
<br />
* Operating System: Ubuntu 16.04 LTS 64-bit<br />
* Docker<br />
* Android Repo Tool<br />
* Proxy Settings Configured (If Required)<br />
<br />
== Development Environment Setup ==<br />
<br />
This section describes how to set up a StarlingX development system on a workstation computer. After completing these steps, you will be able to build a StarlingX ISO image on the following Linux distribution:<br />
<br />
* Ubuntu 16.04 LTS 64-bit<br />
<br />
=== Update Your Operating System ===<br />
<br />
Before proceeding with the build, ensure your OS is up to date. You’ll first need to update the local database of available packages:<br />
<br />
<source lang="sh">$ sudo apt-get update<br />
</source><br />
=== Installation Requirements and Dependencies ===<br />
<br />
<ol start="1"><li>Install the required packages in an Ubuntu host system with:<br />
<br />
<source lang="sh">$ sudo apt-get install git<br />
</source></li></ol><br />
<ol start="2"><br />
<li><p>Install the required Docker CE packages in an Ubuntu host system. See [https://docs.docker.com/install/ Get Docker] for more information.</p></li><br />
<li><p>Install the required Android Repo Tool in an Ubuntu host system. See [https://source.android.com/setup/build/downloading#installing-repo Installing Repo] for more information.</p></li></ol><br />
<br />
=== Install stx-tools project ===<br />
<br />
<ol start="1"><li>Clone the &lt;stx-tools&gt; project<br />
<br />
<source lang="sh">$ git clone git://git.openstack.org/openstack/stx-tools<br />
</source></li></ol><br />
<br />
=== Create a Workspace Directory ===<br />
<br />
<ol start="1"><li>Create a ''starlingx'' workspace directory on your workstation computer. Usually, you’ll want to create it somewhere under your user’s home directory.<br />
<br />
<source lang="sh">$ mkdir -p $HOME/starlingx/<br />
</source></li></ol><br />
<br />
== Build the CentOS Mirror Repository ==<br />
<br />
This section describes how to build the CentOS Mirror Repository.<br />
<br />
=== Setup Repository Docker Container ===<br />
<br />
<ol start="1"><li>Navigate to the ''&lt;stx-tools&gt;/centos-mirror-tools'' project directory:<br />
<br />
<source lang="sh">$ cd stx-tools/centos-mirror-tools/<br />
</source></li></ol><br />
<ol start="2"><br />
<li>If necessary, set the http/https proxy in your Dockerfile before building the Docker image.<br />
<br />
<source lang="sh">ENV http_proxy "http://your.actual_http_proxy.com:your_port"<br />
ENV https_proxy "https://your.actual_https_proxy.com:your_port"<br />
ENV ftp_proxy "http://your.actual_ftp_proxy.com:your_port"<br />
RUN echo "proxy=http://your-proxy.com:port" >> /etc/yum.conf<br />
</source></li></ol><br />
<ol start="3"><br />
<li>Build your ''&lt;name&gt;:&lt;tag&gt;'' base container image with '''e.g.''' ''aarcemor:centos-mirror-repository''<br />
<br />
<source lang="sh">$ docker build -t aarcemor:centos-mirror-repository -f Dockerfile .<br />
</source></li></ol><br />
<ol start="4"><br />
<li>Launch a ''&lt;name&gt;'' docker container using the previously created Docker base container image ''&lt;name&gt;:&lt;tag&gt;'', '''e.g.''' ''aarcemor-centos-mirror-repository''. Because /localdisk is defined as the workdir of the container, the same folder name should be used to define the volume. The container will start running and populate the logs and output folders in this directory. The container must be run from the same directory where the other scripts are stored.<br />
<br />
<source lang="sh">$ docker run -itd --name aarcemor-centos-mirror-repository -v $(pwd):/localdisk aarcemor:centos-mirror-repository bash<br />
</source></li></ol><br />
<ol start="5"><br />
<li>Execute the ''&lt;name&gt;'' docker container '''e.g.''' ''aarcemor-centos-mirror-repository''<br />
<br />
<source lang="sh">$ docker exec -it aarcemor-centos-mirror-repository bash<br />
</source></li></ol><br />
<br />
=== Import GPG Keys ===<br />
<br />
<ol start="1"><li>Inside the docker container, import the keys into the local GPG keyring and query public key information:<br />
<br />
<source lang="none"># rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*<br />
# rpm -qi gpg-pubkey-\*<br />
</source></li></ol><br />
<br />
=== Download Packages ===<br />
<br />
<ol start="1"><li>Enter the following command to download the required packages to populate the CentOS Mirror Repository:<br />
<br />
<source lang="none"># bash download_mirror.sh<br />
</source></li></ol><br />
<ol start="2"><br />
<li>Monitor the download of packages until it is complete. When download is complete, the following message is displayed:<br />
<br />
<source lang="none">totally 17 files are downloaded!<br />
step #3: done successfully<br />
IMPORTANT: The following 3 files are just bootstrap versions. Based on them, the workable images<br />
for StarlingX could be generated by running "update-pxe-network-installer" command after "build-iso"<br />
- out/stx-r1/CentOS/pike/Binary/LiveOS/squashfs.img<br />
- out/stx-r1/CentOS/pike/Binary/images/pxeboot/initrd.img<br />
- out/stx-r1/CentOS/pike/Binary/images/pxeboot/vmlinuz<br />
</source></li></ol><br />
<br />
=== Verify Packages ===<br />
<br />
<ol start="1"><li>Verify there are no missing or failed packages:<br />
<br />
<source lang="none"># cat output/3rd_rpms_missing_L1.txt output/3rd_srpms_missing_L1.txt output/centos_rpms_missing_L1.txt output/centos_srpms_missing_L1.txt<br />
# cat output/3rd_rpms_fail_move_L1.txt output/3rd_srpms_fail_move_L1.txt output/centos_rpms_fail_move_L1.txt output/centos_srpms_fail_move_L1.txt<br />
</source></li></ol><br />
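The quickest pass/fail signal from those lists is a count of non-empty lines; zero means nothing is missing. A sketch of that check, creating throwaway sample files rather than reading the real output/ directory:<br />
<br />
```shell
# Create throwaway sample list files standing in for the real
# output/*_missing_L1.txt files.
tmpdir=$(mktemp -d)
printf 'pkg-a.rpm\npkg-b.rpm\n' > "$tmpdir/centos_rpms_missing_L1.txt"
: > "$tmpdir/3rd_rpms_missing_L1.txt"   # empty: nothing missing here

# Count non-empty lines across all missing-package lists;
# zero means the mirror is complete.
missing=$(cat "$tmpdir"/*_missing_L1.txt | grep -c .)
echo "missing packages: $missing"
rm -rf "$tmpdir"
```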
<ol start="2"><br />
<li><p>In case any packages are missing or failed due to network instability (or timeouts), download them manually to ensure you get all RPMs listed in &quot;rpms_from_3rd_parties.lst&quot; and &quot;rpms_from_centos_repo.lst&quot;.</p></li><br />
<li><p>After all packages have been successfully downloaded, remove all i686 RPM packages and change the ''output'' directory ownership:</p><br />
<br />
<source lang="none"># find ./output -name "*.i686.rpm" | xargs rm -f<br />
# chown 751:751 -R ./output<br />
</source></li></ol><br />
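The find/xargs cleanup can be rehearsed safely in a scratch directory first (hypothetical file names):<br />
<br />
```shell
# Build a scratch tree with a mix of architectures (hypothetical names).
scratch=$(mktemp -d)
touch "$scratch/foo.i686.rpm" "$scratch/foo.x86_64.rpm" "$scratch/bar.noarch.rpm"

# Same pattern as above: delete only the i686 packages.
find "$scratch" -name "*.i686.rpm" | xargs rm -f

# Only non-i686 packages remain.
remaining=$(ls "$scratch" | sort)
echo "$remaining"
rm -rf "$scratch"
```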
<br />
=== Create CentOS Mirror Repository ===<br />
<br />
<ol start="1"><li>From a console of the workstation, create a ''mirror/CentOS'' directory under your ''starlingx'' workspace directory:<br />
<br />
<source lang="sh">$ mkdir -p $HOME/starlingx/mirror/CentOS/<br />
</source></li></ol><br />
<ol start="2"><br />
<li>Copy the CentOS Mirror Repository built under ''&lt;stx-tools&gt;/centos-mirror-tools'' to the ''$HOME/starlingx/mirror/CentOS'' workspace directory.<br />
<br />
<source lang="sh">$ cp -r stx-tools/centos-mirror-tools/output/stx-r1/ $HOME/starlingx/mirror/CentOS/<br />
</source></li></ol><br />
<br />
= Work in Progress... =</div>David.b.kinderhttps://wiki.openstack.org/w/index.php?title=StarlingX/Developer_Guide&diff=162171StarlingX/Developer Guide2018-06-20T19:48:23Z<p>David.b.kinder: </p>
<hr />
<div>This section contains the steps for building a StarlingX ISO.<br />
<br />
== Requirements ==<br />
<br />
The recommended minimum requirements include:<br />
<br />
=== Hardware Requirements ===<br />
<br />
A workstation computer with:<br />
<br />
* Processor: x86_64 only supported architecture<br />
* Memory: At least 32GB RAM<br />
* Hard Disk: 100GB HDD<br />
* Network: Network adapter with active Internet connection<br />
<br />
=== Software Requirements ===<br />
<br />
A workstation computer with:<br />
<br />
* Operating System: Ubuntu 16.04 LTS 64-bit<br />
* Docker<br />
* Android Repo Tool<br />
* Proxy Settings Configured (If Required)<br />
<br />
== Development Environment Setup ==<br />
<br />
This section describes how to set up a StarlingX development system on a workstation computer. After completing these steps, you will be able to build a StarlingX ISO image on the following Linux distribution:<br />
<br />
* Ubuntu 16.04 LTS 64-bit<br />
<br />
=== Update Your Operating System ===<br />
<br />
Before proceeding with the build, ensure your OS is up to date. You’ll first need to update the local database list of available packages:<br />
<br />
<source lang="sh">$ sudo apt-get update<br />
</source><br />
=== Installation Requirements and Dependencies ===<br />
<br />
<ol start="1"><li>Install the required packages in an Ubuntu host system with:<br />
<br />
<source lang="sh">$ sudo apt-get install git<br />
</source></li></ol><br />
<ol start="2"><br />
<li><p>Install the required Docker CE packages in an Ubuntu host system. See [https://docs.docker.com/install/ Get Docker] for more information.</p></li><br />
<li><p>Install the required Android Repo Tool in an Ubuntu host system. See [https://source.android.com/setup/build/downloading#installing-repo Installing Repo] for more information.</p></li></ol><br />
<br />
=== Install stx-tools project ===<br />
<br />
<ol start="1"><li>Clone the &lt;stx-tools&gt; project<br />
<br />
<source lang="sh">$ git clone git://git.openstack.org/openstack/stx-tools<br />
</source></li></ol><br />
<br />
=== Create a Workspace Directory ===<br />
<br />
<ol start="1"><li>Create a ''starlingx'' workspace directory on your workstation computer. Usually, you’ll want to create it somewhere under your user’s home directory.<br />
<br />
<source lang="sh">$ mkdir -p $HOME/starlingx/<br />
</source></li></ol><br />
<br />
== Build the CentOS Mirror Repository ==<br />
<br />
This section describes how to build the CentOS Mirror Repository.<br />
<br />
=== Setup Repository Docker Container ===<br />
<br />
<ol start="1"><li>Navigate to the ''&lt;stx-tools&gt;/centos-mirror-tool'' project directory:<br />
<br />
<source lang="sh">$ cd stx-tools/centos-mirror-tools/<br />
</source></li></ol><br />
<ol start="2"><br />
<li>If necessary you might have to set http/https proxy in your Dockerfile before building the docker image.<br />
<br />
<source lang="sh">ENV http_proxy "http://your.actual_http_proxy.com:your_port" && \<br />
https_proxy "https://your.actual_https_proxy.com:your_port" && \<br />
ftp_proxy "http://your.actual_ftp_proxy.com:your_port"<br />
RUN echo "proxy=http://your-proxy.com:port" >> /etc/yum.conf<br />
</source></li></ol><br />
<ol start="3"><br />
<li>Build your ''&lt;name&gt;:&lt;tag&gt;'' base container image with '''e.g.''' ''aarcemor:centos-mirror-repository''<br />
<br />
<source lang="sh">$ docker build -t aarcemor:centos-mirror-repository -f Dockerfile .<br />
</source></li></ol><br />
<ol start="4"><br />
<li>Launch a ''&lt;name&gt;'' docker container using previously created Docker base container image ''&lt;name&gt;:&lt;tag&gt;'' '''e.g.''' ''aarcemor-centos-mirror-repository''. As /localdisk is defined as the workdir of the container, the same folder name should be used to define the volume. The container will start to run and populate a logs and output folders in this directory. The container shall be run from the same directory where the other scripts are stored.<br />
<br />
<source lang="sh">$ docker run -itd --name aarcemor-centos-mirror-repository -v $(pwd):/localdisk aarcemor:centos-mirror-repository bash<br />
</source></li></ol><br />
<ol start="5"><br />
<li>Execute the ''&lt;name&gt;'' docker container '''e.g.''' ''aarcemor-centos-mirror-repository''<br />
<br />
<source lang="sh">$ docker exec -it aarcemor-centos-mirror-repository bash<br />
</source></li></ol><br />
<br />
=== Import GPG Keys ===<br />
<br />
<ol start="1"><li>Inside the docker container, import the keys into the local GPG keyring and query public key information:<br />
<br />
<source lang="none"># rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*<br />
# rpm -qi gpg-pubkey-\*<br />
</source></li></ol><br />
<br />
=== Download Packages ===<br />
<br />
<ol start="1"><li>Enter the following command to download the required packages to populate the CentOS Mirror Repository:<br />
<br />
<source lang="sh"># bash download_mirror.sh<br />
</source></li></ol><br />
<ol start="2"><br />
<li>Monitor the download of packages until it is complete. When download is complete, the following message is displayed:<br />
<br />
<source lang="none">totally 17 files are downloaded!<br />
step #3: done successfully<br />
IMPORTANT: The following 3 files are just bootstrap versions. Based on them, the workable images<br />
for StarlingX could be generated by running "update-pxe-network-installer" command after "build-iso"<br />
- out/stx-r1/CentOS/pike/Binary/LiveOS/squashfs.img<br />
- out/stx-r1/CentOS/pike/Binary/images/pxeboot/initrd.img<br />
- out/stx-r1/CentOS/pike/Binary/images/pxeboot/vmlinuz<br />
</source></li></ol><br />
<br />
=== Verify Packages ===<br />
<br />
<ol start="1"><li>Verify there are no missing or failed packages:<br />
<br />
<source lang="none"># cat output/3rd_rpms_missing_L1.txt output/3rd_srpms_missing_L1.txt output/centos_rpms_missing_L1.txt output/centos_srpms_missing_L1.txt<br />
# cat output/3rd_rpms_fail_move_L1.txt output/3rd_srpms_fail_move_L1.txt output/centos_rpms_fail_move_L1.txt output/centos_srpms_fail_move_L1.txt<br />
</source></li></ol><br />
<ol start="2"><br />
<li><p>In case there are missing or failed ones due to network instability (or timeout), you should download them manually, to assure you get all RPMs listed in &quot;rpms_from_3rd_parties.lst&quot; and &quot;rpms_from_centos_repo.lst&quot;.</p></li><br />
<li><p>After all packages were succesfully downloaded, remove all i686 RPMs packages and change ''output'' directory ownership:</p><br />
<br />
<source lang="none"># find ./output -name "*.i686.rpm" | xargs rm -f<br />
# chown 751:751 -R ./output<br />
</source></li></ol><br />
<br />
=== Create CentOS Mirror Repository ===<br />
<br />
<ol start="1"><li>From a console of the workstation, create a ''mirror/CentOS'' directory under your ''starlingx'' workspace directory:<br />
<br />
<source lang="sh">$ mkdir -p $HOME/starlingx/mirror/CentOS/<br />
</source></li></ol><br />
<ol start="2"><br />
<li>Copy the CentOS Mirror Repository built under ''&lt;stx-tools&gt;/centos-mirror-tools'' to the ''$HOME/starlingx/mirror/CentOS'' workspace directory:<br />
<br />
<source lang="sh">$ cp -r stx-tools/centos-mirror-tools/output/stx-r1/ $HOME/starlingx/mirror/CentOS/<br />
</source></li></ol><br />
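A quick sanity check after the copy is to compare file counts between the source and the workspace copy. This is a sketch: ''same_file_count'' is a hypothetical helper, not part of stx-tools, and a stronger check would be a full <code>diff -r</code>.

```shell
#!/bin/sh
# Sketch: compare file counts between a source tree and its copy.
# same_file_count is a hypothetical helper, not part of stx-tools.
same_file_count() {
    a=$(find "$1" -type f | wc -l)
    b=$(find "$2" -type f | wc -l)
    if [ "$a" -eq "$b" ]; then echo "OK ($a files)"; else echo "MISMATCH ($a vs $b)"; fi
}

# Example with scratch directories standing in for the mirror and its copy:
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/a.rpm" "$src/b.rpm"
cp "$src"/*.rpm "$dst"
same_file_count "$src" "$dst"    # → OK (2 files)
rm -r "$src" "$dst"
```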
<br />
= Work in Progress... =</div>David.b.kinderhttps://wiki.openstack.org/w/index.php?title=StarlingX&diff=162167StarlingX2018-06-20T19:38:44Z<p>David.b.kinder: /* Documentation */</p>
<hr />
<div>__NOTOC__<br />
[[File:Starlingx-logo-300w.png|center]]<br />
<br />
{| style="border-collapse: separate; border-spacing: 25px;"<br />
| style="vertical-align:top; width:75%;" | <br />
== Welcome to the StarlingX project!!! ==<br />
<br />
<br />
StarlingX is a fully featured and high performance Edge Cloud software stack that is based on the [https://www.windriver.com/products/titanium-cloud/ Wind River® Titanium Cloud] R5 product. Intel and Wind River have jointly open sourced this software and we invite you to download, build, install and run it. <br />
<br />
Wind River® Titanium Cloud was originally built on open source components, which were then extended and hardened to meet critical infrastructure requirements: high availability, fault management, and performance management. This software provides numerous features and capabilities to enable 24/7 operation of mission critical applications.<br />
<br />
The StarlingX project opens all of these enhancements to the open source community. We invite the community to contribute to the project and join us as we build the infrastructure stack for Edge Computing.<br />
<br />
|}<br />
----<br />
{| style="border-collapse: separate; border-spacing: 25px;"<br />
|style="vertical-align:top; width:50%;" |<br />
<br />
== Documentation ==<br />
<br />
* [[StarlingX/Developer Guide|Developer Guide]]<br />
* [[StarlingX/Installation Guide|Installation Guide]]<br />
<br />
== Code ==<br />
<br />
* [https://git.openstack.org/cgit/openstack/stx Gerrit repositories]<br />
** Instructions for how to download and build the code are in progress<br />
* [https://review.openstack.org/ Gerrit Web UI]<br />
* [https://review.openstack.org/#/admin/projects/?filter=stx StarlingX Gerrit Projects]<br />
** Hint: for fast review of open STX changes, enter the expression "status:open AND project:^openstack/stx-@" in the search box<br />
* [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fstx%2Dclients+OR+project%3Aopenstack%2Fstx%2Dconfig+OR+project%3Aopenstack%2Fstx%2Dfault+OR+project%3Aopenstack%2Fstx%2Dgplv2+OR+project%3Aopenstack%2Fstx%2Dgplv3+OR+project%3Aopenstack%2Fstx%2Dgui+OR+project%3Aopenstack%2Fstx%2Dha+OR+project%3Aopenstack%2Fstx%2Dinteg+OR+project%3Aopenstack%2Fstx%2Dmanifest+OR+project%3Aopenstack%2Fstx%2Dmetal+OR+project%3Aopenstack%2Fstx%2Dnfv+OR+project%3Aopenstack%2Fstx%2Droot+OR+project%3Aopenstack%2Fstx%2Dtis%2Drepo+OR+project%3Aopenstack%2Fstx%2Dtools+OR+project%3Aopenstack%2Fstx%2Dupdate+OR+project%3Aopenstack%2Fstx%2Dupstream+OR+project%3Aopenstack%2Fstx%2Dutils%29+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1%2Czuul+NOT+reviewedby%3Aself&title=StarlingX+Review+Inbox&Needs+final+%2B2=label%3ACode%2DReview%3E%3D2+limit%3A50+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cself&Passed+Zuul%2C+No+Negative+Feedback+%28Small+Fixes%29=NOT+label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cstarlingx%2Dcore+delta%3A%3C%3D10&Passed+Zuul%2C+No+Negative+Feedback=NOT+label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cstarlingx%2Dcore+delta%3A%3E10&Needs+Feedback+%28Changes+older+than+5+days+that+have+not+been+reviewed+by+anyone%29=NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D1+age%3A5d&You+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=NOT+label%3ACode%2DReview%3C%3D%2D1%2Cself+NOT+label%3ACode%2DReview%3E%3D1%2Cself+reviewer%3Aself&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+2days%29=NOT+is%3Areviewed+age%3A2d StarlingX Gerrit Review Dashboard]<br />
** Also in [http://paste.openstack.org/show/723397/ StarlingX Gerrit Review Dashboard] (copy the URL from there into a browser bookmark)<br />
<br />
== Bug Tracking ==<br />
<br />
* [https://storyboard.openstack.org/#!/worklist/354 StarlingX Bug List]<br />
** This list is sorted manually by drag and drop. <br />
** Please create bugs for any issues found in Storyboard, against one of the stx-* projects. If you can't find the right project, use stx-integ<br />
** After you create the bug, please add it to the Bug Worklist (link above)<br />
<br />
|style="vertical-align:top; width:50%;" |<br />
== Status and Planning ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-status Status Meeting Notes]<br />
* [https://storyboard.openstack.org/#!/project_group/86 StarlingX Storyboards]<br />
* [https://etherpad.openstack.org/p/stx-planning StarlingX Planning] (contains sub-projects and Worklists)<br />
<br />
== References ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-notes The list of repos] and other things<br />
<br />
== OpenStack Documentation ==<br />
<br />
These are references to general OpenStack material:<br />
<br />
* [https://docs.openstack.org/infra/manual/developers.html Developer's Guide]<br />
* [https://docs.openstack.org/infra/manual/creators.html Project Creator's Guide]<br />
* [https://wiki.openstack.org/wiki/How_To_Contribute The Contributors Guide] (the older wiki page)<br />
* [https://governance.openstack.org/tc/reference/project-testing-interface.html Project Testing Interface]<br />
|}</div>David.b.kinderhttps://wiki.openstack.org/w/index.php?title=StarlingX/Installation_Guide&diff=162165StarlingX/Installation Guide2018-06-20T19:35:13Z<p>David.b.kinder: David.b.kinder moved page StarlingX Installation Guide to StarlingX/Installation Guide</p>
<hr />
<div>== Intro ==<br />
<br />
This section contains information about the StarlingX installation in a virtualized environment using Libvirt/QEMU.<br />
<br />
==Software Configurations==<br />
<br />
* All In One<br />
* Standard Controller Storage<br />
* Duplex<br />
* Standard Dedicated Storage<br />
<br />
==Standard Controller Storage==<br />
<br />
==Requirements==<br />
<br />
Different use cases require different configurations. For general StarlingX deployment, the recommended minimum requirements include:<br />
<br />
===Hardware Requirements===<br />
<br />
A workstation computer with:<br />
<br />
* Processor: x86_64 (the only supported architecture), with hardware virtualization extensions<br />
* Memory: At least 32GB RAM<br />
* Hard Disk: 500GB HDD<br />
* Network: Two network adapters with active Internet connection<br />
<br />
===Software Requirements===<br />
<br />
A workstation computer with:<br />
<br />
* Operating System: Freshly installed Ubuntu 16.04 LTS 64-bit<br />
* Proxy settings configured (if applicable)<br />
* Git<br />
* KVM/VirtManager<br />
* Libvirt Library<br />
* QEMU Full System Emulation Binaries<br />
* <stx-deployment> project<br />
* StarlingX ISO Image<br />
<br />
==Deployment Environment Setup==<br />
<br />
This section describes how to set up a StarlingX system on a workstation computer. After completing these steps, you will be able to deploy and run your StarlingX system on the following Linux distribution:<br />
<br />
* Ubuntu 16.04 LTS 64-bit<br />
<br />
===Updating Your Operating System===<br />
<br />
Before proceeding with the build, ensure your OS is up to date. You’ll first need to update the local database list of available packages:<br />
<br />
<pre><nowiki><br />
$ sudo apt-get update<br />
</nowiki></pre><br />
<br />
===Installing Requirements and Dependencies===<br />
<br />
Install the required packages in an Ubuntu host system with:<br />
<br />
<pre><nowiki><br />
$ sudo apt-get install git virt-manager libvirt-bin qemu-system<br />
</nowiki></pre><br />
<br />
===Installing Deployment Tool===<br />
<br />
Clone the <stx-deployment> project. Usually you’ll want to clone it under your user’s home directory.<br />
<br />
<pre><nowiki><br />
$ cd $HOME<br />
$ git clone <stx-deployment> <br />
</nowiki></pre><br />
<br />
===Getting the StarlingX ISO Image===<br />
<br />
1. Get the StarlingX ISO Image from:<br />
<br />
<pre><nowiki><br />
Tbd<br />
</nowiki></pre><br />
<br />
2. Copy the StarlingX ISO Image to the ''<stx-deployment>'' libvirt project directory, naming it bootimage.iso:<br />
<br />
<pre><nowiki><br />
$ cp <starlingx iso image> $HOME/<stx-deployment>/libvirt/bootimage.iso<br />
</nowiki></pre><br />
<br />
==Controller-0 Host Installation==<br />
<br />
Installing controller-0 involves initializing a host with software and then applying a configuration from the command line. The configured host becomes Controller-0. <br><br />
Procedure:<br />
<br />
# Using an ISO image of StarlingX, initialize the controller host via Libvirt/QEMU.<br />
# Configure the controller using the config_controller script.<br />
<br />
===Initializing Controller-0===<br />
This section describes how to initialize StarlingX on host Controller-0. Except where noted, all commands must be executed from a console on the workstation.<br />
<br />
Navigate to the ''<stx-deployment>'' libvirt project directory:<br />
<pre><nowiki><br />
$ cd <stx-deployment>/libvirt<br />
</nowiki></pre><br />
<br />
Run the install packages script:<br />
<pre><nowiki><br />
$ bash install_packages.sh<br />
</nowiki></pre><br />
<br />
Run the libvirt qemu setup script:<br />
<pre><nowiki><br />
$ bash setup_tic.sh<br />
</nowiki></pre><br />
<br />
From the KVM/VirtManager window, power on the host to be configured as Controller-0 and open the virtual machine console and details view:<br />
* When the installer is loaded and the installer welcome screen appears in the Controller-0 host, select the type of installation "Standard Controller Configuration".<br />
* Select the "Graphical Console" as the console to use during installation.<br />
* Select "Standard Security Boot Profile" as the Security Profile.<br />
* Monitor the initialization until it is complete. When initialization is complete, the Controller-0 host reboots, briefly displays a GNU GRUB screen, and then boots automatically into the StarlingX image.<br />
<br />
Log into Controller-0 as user wrsroot, with password wrsroot. The first time you log in as wrsroot, you are required to change your password. Enter the current password (wrsroot):<br />
<pre><nowiki><br />
Changing password for wrsroot.<br />
(current) UNIX Password:<br />
</nowiki></pre><br />
<br />
Enter a new password for the wrsroot account:<br />
<pre><nowiki><br />
New password:<br />
</nowiki></pre><br />
<br />
Enter the new password again to confirm it:<br />
<pre><nowiki><br />
Retype new password:<br />
</nowiki></pre><br />
<br />
Controller-0 is initialized with StarlingX, and is ready for configuration.<br />
<br />
===Configuring Controller-0===<br />
<br />
This section describes how to perform the Controller-0 configuration interactively. Except where noted, all commands must be executed from the console of the active controller (here assumed to be controller-0).<br />
<br />
When run interactively, the config_controller script presents a series of prompts for initial configuration of StarlingX. The script is used to configure the first controller in the StarlingX cluster as controller-0. The prompts are grouped by configuration area. To start the script interactively, use the following command with no parameters and accept all the default values:<br />
<br />
<pre><nowiki><br />
controller-0:~$ sudo config_controller<br />
</nowiki></pre><br />
<br />
The output when the config_controller script is run interactively is:<br />
<br />
<pre><nowiki><br />
WARNING: Command should only be run from the console. Continuing with this<br />
terminal may cause loss of connectivity and configuration failure<br />
...<br />
Apply the above configuration? [y/n]: y<br />
<br />
Applying configuration (this will take several minutes):<br />
<br />
01/08: Creating bootstrap configuration ... DONE<br />
02/08: Applying bootstrap manifest ... DONE<br />
03/08: Persisting local configuration ... DONE<br />
04/08: Populating initial system inventory ... DONE<br />
05/08: Creating system configuration ... DONE<br />
06/08: Applying controller manifest ... DONE<br />
07/08: Finalize controller configuration ... DONE<br />
08/08: Waiting for service activation ... DONE<br />
<br />
Configuration was applied<br />
<br />
Please complete any out of service commissioning steps with system commands and unlock controller to proceed.<br />
</nowiki></pre><br />
<br />
==Controller-0 and System Provision==<br />
<br />
===Configuring Provider Networks at Installation===<br />
<br />
You must set up provider networks at installation so that you can attach data interfaces and unlock the compute nodes.<br />
<br />
On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<pre><nowiki><br />
controller-0:~$ source /etc/nova/openrc<br />
</nowiki></pre><br />
<br />
Set up one provider network of the vlan type, named providernet-a:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan<br />
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a<br />
</nowiki></pre><br />
<br />
===Unlocking Controller-0===<br />
<br />
You must unlock controller-0 so that you can use it to install the remaining hosts. On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<pre><nowiki><br />
controller-0:~$ source /etc/nova/openrc<br />
</nowiki></pre><br />
<br />
Use the system host-unlock command:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-0<br />
</nowiki></pre><br />
<br />
The host is rebooted. During the reboot, the command line is unavailable, and any ssh connections are dropped. To monitor the progress of the reboot, use the controller-0 console.<br />
<br />
===Verifying the Controller-0 Configuration===<br />
<br />
On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<pre><nowiki><br />
controller-0:~$ source /etc/nova/openrc<br />
</nowiki></pre><br />
<br />
Verify that the Titanium Cloud controller services are running:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ nova service-list<br />
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...<br />
| Id | Binary | Host | Zone | Status | State | ...<br />
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...<br />
| d7cdfaf0-9394-4053-b752-d609c837b729 | nova-conductor | controller-0 | internal | enabled | up | ...<br />
| 692c2659-7188-42ec-9ad1-c88ac3020dc6 | nova-scheduler | controller-0 | internal | enabled | up | ...<br />
| 5c7c9aad-696f-4dd1-a497-464bdd525e0c | nova-consoleauth | controller-0 | internal | enabled | up | ...<br />
+--------------------------------------+------------------+--------------+----------+---------+-------+ ...<br />
</nowiki></pre><br />
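The table above can also be checked mechanically. The following is a sketch under assumptions: ''services_down'' is a hypothetical helper (not a nova command), and the awk field positions assume the column layout shown above.

```shell
#!/bin/sh
# Sketch: count enabled services whose State is not "up" in
# `nova service-list` table output. services_down is a hypothetical
# helper; field positions assume the column layout shown above.
services_down() {
    awk -F'|' 'NF >= 8 && $6 ~ /enabled/ && $7 !~ /up/ { n++ } END { print n + 0 }'
}

# Example with sample output piped in; on the controller you would run:
#   nova service-list | services_down
printf '%s\n' \
  '| Id | Binary | Host | Zone | Status | State | ...' \
  '| d7cdfaf0 | nova-conductor | controller-0 | internal | enabled | up | ...' \
  '| 692c2659 | nova-scheduler | controller-0 | internal | enabled | down | ...' \
  | services_down    # → 1
```

A result of 0 means every enabled service reports State "up".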
<br />
Verify that controller-0 is unlocked, enabled, and available:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| id | hostname | personality | administrative | operational | availability |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| 1 | controller-0 | controller | unlocked | enabled | available |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
</nowiki></pre><br />
<br />
==Compute Host Installation==<br />
<br />
After initializing and configuring an active controller, you can add and configure a backup controller and additional compute or storage hosts. Using the system host-add command, add one or more host entries to the system inventory, assigning a personality, MAC address, IP address, and so on for each host. Then power on the hosts, causing them to be recognized and configured according to their system inventory entries.<br />
<br />
===Initializing Compute Host===<br />
<br />
On the workstation, display information about the virbr2 virtual interface associated with each compute-N host:<br />
<br />
<pre><nowiki><br />
$ sudo virsh domiflist compute-0 | grep virbr2<br />
vnet5 bridge virbr2 e1000 52:54:00:b6:1f:c7<br />
$ sudo virsh domiflist compute-1 | grep virbr2<br />
vnet9 bridge virbr2 e1000 52:54:00:da:58:b4<br />
</nowiki></pre><br />
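The MAC extraction can be scripted instead of read off by eye. This is a sketch: ''virbr2_mac'' is a hypothetical helper, and the column positions assume <code>virsh domiflist</code> output shaped like the sample above (Interface, Type, Source, Model, MAC).

```shell
#!/bin/sh
# Sketch: pull the MAC address of the interface bridged to virbr2 out of
# `virsh domiflist` output. virbr2_mac is a hypothetical helper; columns
# assumed: Interface Type Source Model MAC.
virbr2_mac() {
    awk '$3 == "virbr2" { print $5 }'
}

# Example with sample output; on the workstation you would run:
#   sudo virsh domiflist compute-0 | virbr2_mac
printf 'vnet5 bridge virbr2 e1000 52:54:00:b6:1f:c7\n' | virbr2_mac
# → 52:54:00:b6:1f:c7
```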
<br />
On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<pre><nowiki><br />
controller-0:~$ source /etc/nova/openrc<br />
</nowiki></pre><br />
<br />
Use the system host-add command to add the compute-N hosts, specifying the compute personality and the MAC address of each host's associated virbr2 virtual interface:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-add -n compute-0 -p compute -m 52:54:00:b6:1f:c7<br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-add -n compute-1 -p compute -m 52:54:00:da:58:b4<br />
</nowiki></pre><br />
<br />
On the workstation, start each compute-N host: <br />
<br />
<pre><nowiki><br />
$ sudo virsh start compute-0<br />
$ sudo virsh start compute-1<br />
</nowiki></pre><br />
<br />
Once the message "Domain compute-N started" is displayed, open the virtual machine console and details view from the KVM/VirtManager window. The node is assigned the personality specified in the system host-add parameters. A display device menu appears on the console, with text customized for the personality (Controller, Storage, or Compute Node). You can start the installation manually by pressing Enter; otherwise, it starts automatically after a few seconds.<br />
<br />
On Controller-0, you can monitor the installation progress by running the system host-show command for the host periodically. Progress is shown in the install_state field.<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-show compute-0 | grep install<br />
| install_output | text |<br />
| install_state | booting |<br />
| install_state_info | None |<br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-show compute-1 | grep install<br />
| install_output | text |<br />
| install_state | booting |<br />
| install_state_info | None |<br />
[wrsroot@controller-0 ~(keystone_admin)]$ <br />
</nowiki></pre><br />
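The polling above can be wrapped in a small parser. This is a sketch: ''get_install_state'' is a hypothetical helper, not a system subcommand, and it assumes the two-column table layout shown above.

```shell
#!/bin/sh
# Sketch: extract the install_state field from `system host-show` output.
# install_state is matched exactly so install_state_info is skipped;
# get_install_state is a hypothetical helper, not a system subcommand.
get_install_state() {
    awk -F'|' '$2 ~ /^ *install_state *$/ { gsub(/ /, "", $3); print $3 }'
}

# Example with sample output; on controller-0 you would run:
#   system host-show compute-0 | get_install_state
printf '%s\n' \
  '| install_output | text |' \
  '| install_state | booting |' \
  '| install_state_info | None |' \
  | get_install_state    # → booting
```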
<br />
Wait while each compute-N host is configured and rebooted. Up to 20 minutes may be required for a reboot, depending on hardware. When the reboot is complete, the host is reported as Locked, Disabled, and Online.<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| id | hostname | personality | administrative | operational | availability |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| 1 | controller-0 | controller | unlocked | enabled | available |<br />
| 2 | compute-0 | compute | locked | disabled | online |<br />
| 3 | compute-1 | compute | locked | disabled | online |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
[wrsroot@controller-0 ~(keystone_admin)]$ <br />
</nowiki></pre><br />
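If you script this wait, a parser over the host-list table can count hosts that are not yet fully up. A sketch under assumptions: ''hosts_not_ready'' is a hypothetical helper, and the field positions assume the table layout shown above.

```shell
#!/bin/sh
# Sketch: count hosts in `system host-list` output that are not yet
# unlocked/enabled/available. hosts_not_ready is a hypothetical helper;
# field positions assume the table layout shown above.
hosts_not_ready() {
    awk -F'|' 'NF >= 8 && $2 !~ /id/ && !($5 ~ /unlocked/ && $6 ~ /enabled/ && $7 ~ /available/) { n++ } END { print n + 0 }'
}

# Example with sample output; on controller-0 you would run:
#   system host-list | hosts_not_ready
printf '%s\n' \
  '| id | hostname | personality | administrative | operational | availability |' \
  '| 1 | controller-0 | controller | unlocked | enabled | available |' \
  '| 2 | compute-0 | compute | locked | disabled | online |' \
  | hosts_not_ready    # → 1
```

A result of 0 corresponds to the fully healthy state shown in the System Health Check section.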
<br />
==Compute Host Provision==<br />
<br />
You must configure the network interfaces and the storage disks on a host before you can unlock it. <br />
<br />
On Controller-0, acquire Keystone administrative privileges:<br />
<br />
<pre><nowiki><br />
controller-0:~$ source /etc/nova/openrc<br />
</nowiki></pre><br />
<br />
===Provisioning Network Interfaces on a Compute Host===<br />
<br />
Provision the data interfaces:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -nt data compute-0 ens6<br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -nt data compute-1 ens6<br />
</nowiki></pre><br />
<br />
===Provisioning Storage on a Compute Host===<br />
<br />
Assign a vSwitch core and provision the nova-local volume group for instance storage on each compute host:<br />
<br />
<pre><nowiki><br />
system host-list --nowrap &> /dev/null && NOWRAP="--nowrap"<br />
ALL_COMPUTE=`system host-list $NOWRAP | grep compute- | cut -d '|' -f 3`<br />
# for each compute node, we should run the followings<br />
for compute in $ALL_COMPUTE; do<br />
system host-cpu-modify ${compute} -f vswitch -p0 1<br />
system host-lvg-add ${compute} nova-local<br />
system host-pv-add ${compute} nova-local $(system host-disk-list ${compute} $NOWRAP | grep /dev/sdb | awk '{print $2}')<br />
system host-lvg-modify -b image -s 10240 ${compute} nova-local<br />
done<br />
</nowiki></pre><br />
<br />
===Unlocking a Compute Host===<br />
<br />
Use the system host-unlock command to unlock the node:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-0<br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-1<br />
</nowiki></pre><br />
<br />
Wait while each compute-N host is rebooted. Up to 10 minutes may be required for a reboot, depending on hardware. After the reboot, the host's Availability State is reported as In-Test.<br />
<br />
==System Health Check==<br />
<br />
After a few minutes, all nodes should be reported as Unlocked, Enabled, and Available:<br />
<br />
<pre><nowiki><br />
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| id | hostname | personality | administrative | operational | availability |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
| 1 | controller-0 | controller | unlocked | enabled | available |<br />
| 2 | compute-0 | compute | unlocked | enabled | available |<br />
| 3 | compute-1 | compute | unlocked | enabled | available |<br />
+----+--------------+-------------+----------------+-------------+--------------+<br />
</nowiki></pre></div>David.b.kinderhttps://wiki.openstack.org/w/index.php?title=StarlingX_Installation_Guide&diff=162166StarlingX Installation Guide2018-06-20T19:35:13Z<p>David.b.kinder: David.b.kinder moved page StarlingX Installation Guide to StarlingX/Installation Guide</p>
<hr />
<div>#REDIRECT [[StarlingX/Installation Guide]]</div>David.b.kinderhttps://wiki.openstack.org/w/index.php?title=StarlingX&diff=162160StarlingX2018-06-20T19:29:45Z<p>David.b.kinder: /* Documentation */</p>
<hr />
<div>__NOTOC__<br />
[[File:Starlingx-logo-300w.png|center]]<br />
<br />
{| style="border-collapse: separate; border-spacing: 25px;"<br />
| style="vertical-align:top; width:75%;" | <br />
== Welcome to the StarlingX project!!! ==<br />
<br />
<br />
StarlingX is a fully featured and high performance Edge Cloud software stack that is based on the [https://www.windriver.com/products/titanium-cloud/ Wind River® Titanium Cloud] R5 product. Intel and Wind River have jointly open sourced this software and we invite you to download, build, install and run it. <br />
<br />
<br />
Wind River Titanium Cloud was originally built on open source components, which were then extended and hardened to address critical infrastructure requirements: the high availability, fault management, and performance management needed for continuous 24/7 operation.<br />
<br />
The StarlingX project opens all of these enhancements to the open source community. We invite the community to contribute to the project and join us as we build the infrastructure stack for Edge Computing.<br />
<br />
|}<br />
----<br />
{| style="border-collapse: separate; border-spacing: 25px;"<br />
|style="vertical-align:top; width:50%;" |<br />
== Documentation ==<br />
<br />
* [[StarlingX/Developer Guide|Developer Guide]]<br />
* [[StarlingX Installation Guide]]<br />
<br />
== Code ==<br />
<br />
* [https://git.openstack.org/cgit/openstack/stx Gerrit repositories]<br />
** Instructions for how to download and build the code are in progress<br />
* [https://review.openstack.org/ Gerrit Web UI]<br />
* [https://review.openstack.org/#/admin/projects/?filter=stx StarlingX Gerrit Projects]<br />
** Hint: for fast review of open STX changes, enter the expression "status:open AND project:^openstack/stx-@" in the search box<br />
* [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fstx%2Dclients+OR+project%3Aopenstack%2Fstx%2Dconfig+OR+project%3Aopenstack%2Fstx%2Dfault+OR+project%3Aopenstack%2Fstx%2Dgplv2+OR+project%3Aopenstack%2Fstx%2Dgplv3+OR+project%3Aopenstack%2Fstx%2Dgui+OR+project%3Aopenstack%2Fstx%2Dha+OR+project%3Aopenstack%2Fstx%2Dinteg+OR+project%3Aopenstack%2Fstx%2Dmanifest+OR+project%3Aopenstack%2Fstx%2Dmetal+OR+project%3Aopenstack%2Fstx%2Dnfv+OR+project%3Aopenstack%2Fstx%2Droot+OR+project%3Aopenstack%2Fstx%2Dtis%2Drepo+OR+project%3Aopenstack%2Fstx%2Dtools+OR+project%3Aopenstack%2Fstx%2Dupdate+OR+project%3Aopenstack%2Fstx%2Dupstream+OR+project%3Aopenstack%2Fstx%2Dutils%29+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1%2Czuul+NOT+reviewedby%3Aself&title=StarlingX+Review+Inbox&Needs+final+%2B2=label%3ACode%2DReview%3E%3D2+limit%3A50+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cself&Passed+Zuul%2C+No+Negative+Feedback+%28Small+Fixes%29=NOT+label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cstarlingx%2Dcore+delta%3A%3C%3D10&Passed+Zuul%2C+No+Negative+Feedback=NOT+label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cstarlingx%2Dcore+delta%3A%3E10&Needs+Feedback+%28Changes+older+than+5+days+that+have+not+been+reviewed+by+anyone%29=NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D1+age%3A5d&You+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=NOT+label%3ACode%2DReview%3C%3D%2D1%2Cself+NOT+label%3ACode%2DReview%3E%3D1%2Cself+reviewer%3Aself&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+2days%29=NOT+is%3Areviewed+age%3A2d StarlingX Gerrit Review Dashboard]<br />
** Also in [http://paste.openstack.org/show/723397/ StarlingX Gerrit Review Dashboard] (copy the URL from there into a browser bookmark)<br />
<br />
== Bug Tracking ==<br />
<br />
* [https://storyboard.openstack.org/#!/worklist/354 StarlingX Bug List]<br />
** This list is sorted manually by drag and drop. <br />
** Please create bugs for any issues found in Storyboard, against one of the stx-* projects. If you can't find the right project, use stx-integ<br />
** After you create the bug, please add it to the Bug Worklist (link above)<br />
<br />
|style="vertical-align:top; width:50%;" |<br />
== Status and Planning ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-status Status Meeting Notes]<br />
* [https://storyboard.openstack.org/#!/project_group/86 StarlingX Storyboards]<br />
* [https://etherpad.openstack.org/p/stx-planning StarlingX Planning] (contains sub-projects and Worklists)<br />
<br />
== References ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-notes The list of repos] and other things<br />
<br />
== OpenStack Documentation ==<br />
<br />
These are references to general OpenStack material:<br />
<br />
* [https://docs.openstack.org/infra/manual/developers.html Developer's Guide]<br />
* [https://docs.openstack.org/infra/manual/creators.html Project Creator's Guide]<br />
* [https://wiki.openstack.org/wiki/How_To_Contribute The Contributors Guide] (the older wiki page)<br />
* [https://governance.openstack.org/tc/reference/project-testing-interface.html Project Testing Interface]<br />
|}</div>David.b.kinderhttps://wiki.openstack.org/w/index.php?title=StarlingX/Developer_Guide&diff=162151StarlingX/Developer Guide2018-06-20T19:24:21Z<p>David.b.kinder: Created page with "This section contains the steps for building a StarlingX ISO. == Requirements == The recommended minimum requirements include: === Hardware Requirements === A workstation..."</p>
<hr />
<div>This section contains the steps for building a StarlingX ISO.<br />
<br />
== Requirements ==<br />
<br />
The recommended minimum requirements include:<br />
<br />
=== Hardware Requirements ===<br />
<br />
A workstation computer with:<br />
<br />
* Processor: x86_64 (the only supported architecture)<br />
* Memory: At least 32GB RAM<br />
* Hard Disk: 100GB HDD<br />
* Network: Network adapter with active Internet connection<br />
<br />
=== Software Requirements ===<br />
<br />
A workstation computer with:<br />
<br />
* Operating System: Ubuntu 16.04 LTS 64-bit<br />
* Docker<br />
* Android Repo Tool<br />
* Proxy Settings Configured (If Required)<br />
<br />
== Development Environment Setup ==<br />
<br />
This section describes how to set up a StarlingX development system on a workstation computer. After completing these steps, you will be able to build a StarlingX ISO image on the following Linux distribution:<br />
<br />
* Ubuntu 16.04 LTS 64-bit<br />
<br />
=== Update Your Operating System ===<br />
<br />
Before proceeding with the build, ensure your OS is up to date. You’ll first need to update the local database list of available packages:<br />
<br />
<source lang="sh">$ sudo apt-get update<br />
</source><br />
=== Installation Requirements and Dependencies ===<br />
<br />
<ol start="1"><li>Install the required packages in an Ubuntu host system with:<br />
<br />
<source lang="sh">$ sudo apt-get install git<br />
</source></li></ol><br />
<ol start="2"><br />
<li><p>Install the required Docker CE packages in an Ubuntu host system. See [https://docs.docker.com/install/ Get Docker] for more information.</p></li><br />
<li><p>Install the required Android Repo Tool in an Ubuntu host system. See [https://source.android.com/setup/build/downloading#installing-repo Installing Repo] for more information.</p></li></ol><br />
<br />
=== Install stx-tools project ===<br />
<br />
<ol start="1"><li>Clone the &lt;stx-tools&gt; project<br />
<br />
<source lang="sh">$ git clone git://git.openstack.org/openstack/stx-tools<br />
</source></li></ol><br />
<br />
=== Create a Workspace Directory ===<br />
<br />
<ol start="1"><li>Create a ''starlingx'' workspace directory on your workstation computer. Usually, you’ll want to create it somewhere under your user’s home directory.<br />
<br />
<source lang="sh">$ mkdir -p $HOME/starlingx/<br />
</source></li></ol><br />
<br />
== Build the CentOS Mirror Repository ==<br />
<br />
This section describes how to build the CentOS Mirror Repository.<br />
<br />
=== Setup Repository Docker Container ===<br />
<br />
<ol start="1"><li>Navigate to the ''&lt;stx-tools&gt;/centos-mirror-tool'' project directory:<br />
<br />
<source lang="sh">$ cd stx-tools/centos-mirror-tools/<br />
</source></li></ol><br />
<ol start="2"><br />
<li>If necessary, set the http/https proxy in your Dockerfile before building the Docker image:<br />
<br />
<source lang="sh">ENV http_proxy "http://your.actual_http_proxy.com:your_port" && \<br />
https_proxy "https://your.actual_https_proxy.com:your_port" && \<br />
ftp_proxy "http://your.actual_ftp_proxy.com:your_port"<br />
RUN echo "proxy=http://your-proxy.com:port" >> /etc/yum.conf<br />
</source></li></ol><br />
<ol start="3"><br />
<li>Build your ''&lt;name&gt;:&lt;tag&gt;'' base container image, '''e.g.''' ''aarcemor:centos-mirror-repository'':<br />
<br />
<source lang="sh">$ docker build -t aarcemor:centos-mirror-repository -f Dockerfile .<br />
</source></li></ol><br />
<ol start="4"><br />
<li>Launch a ''&lt;name&gt;'' Docker container from the base image ''&lt;name&gt;:&lt;tag&gt;'' created in the previous step, '''e.g.''' ''aarcemor-centos-mirror-repository''. Because ''/localdisk'' is defined as the working directory of the container, the same path must be used as the volume mount point. Run the container from the directory where the download scripts are stored; the container populates ''logs'' and ''output'' folders in this directory.<br />
<br />
<source lang="sh">$ docker run -itd --name aarcemor-centos-mirror-repository -v $(pwd):/localdisk aarcemor:centos-mirror-repository bash<br />
</source></li></ol><br />
<ol start="5"><br />
<li>Open a shell in the running ''&lt;name&gt;'' container, '''e.g.''' ''aarcemor-centos-mirror-repository'':<br />
<br />
<source lang="sh">$ docker exec -it aarcemor-centos-mirror-repository bash<br />
</source></li></ol><br />
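The build, run, and exec commands above differ only in the chosen &lt;name&gt; and &lt;tag&gt;. As a sketch, they can be generated from those two values; mirror_container_cmds is a hypothetical helper that only prints the commands rather than executing them.

```shell
#!/bin/sh
# Sketch: print the three docker commands for a given image name/tag.
# mirror_container_cmds is a hypothetical helper; it echoes the
# commands from the steps above instead of running them.
mirror_container_cmds() {
    name="$1"
    tag="$2"
    echo "docker build -t $name:$tag -f Dockerfile ."
    echo "docker run -itd --name $name-$tag -v \$(pwd):/localdisk $name:$tag bash"
    echo "docker exec -it $name-$tag bash"
}

mirror_container_cmds aarcemor centos-mirror-repository
```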
<br />
=== Import GPG Keys ===<br />
<br />
<ol start="1"><li>Inside the docker container, import the keys into the local GPG keyring and query public key information:<br />
<br />
<source lang="none">/ # rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*<br />
/ # rpm -qi gpg-pubkey-\*<br />
</source></li></ol><br />
<br />
=== Download Packages ===<br />
<br />
<ol start="1"><li>Enter the following command to download the required packages to populate the CentOS Mirror Repository:<br />
<br />
<source lang="sh">/ # bash download_mirror.sh<br />
</source></li></ol><br />
<ol start="2"><br />
<li>Monitor the download until it completes. When the download finishes, the following message is displayed:<br />
<br />
<source lang="none">totally 17 files are downloaded!<br />
step #3: done successfully<br />
IMPORTANT: The following 3 files are just bootstrap versions. Based on them, the workable images<br />
for StarlingX could be generated by running "update-pxe-network-installer" command after "build-iso"<br />
- out/stx-r1/CentOS/pike/Binary/LiveOS/squashfs.img<br />
- out/stx-r1/CentOS/pike/Binary/images/pxeboot/initrd.img<br />
- out/stx-r1/CentOS/pike/Binary/images/pxeboot/vmlinuz<br />
</source></li></ol><br />
<br />
=== Verify Packages ===<br />
<br />
<ol start="1"><li>Verify there are no missing or failed packages:<br />
<br />
<source lang="none">/ # cat output/3rd_rpms_missing_L1.txt output/3rd_srpms_missing_L1.txt output/centos_rpms_missing_L1.txt output/centos_srpms_missing_L1.txt<br />
/ # cat output/3rd_rpms_fail_move_L1.txt output/3rd_srpms_fail_move_L1.txt output/centos_rpms_fail_move_L1.txt output/centos_srpms_fail_move_L1.txt<br />
</source></li></ol><br />
<ol start="2"><br />
<li><p>If any packages are missing or failed due to network instability or timeouts, download them manually to ensure you have all of the RPMs listed in ''rpms_from_3rd_parties.lst'' and ''rpms_from_centos_repo.lst''.</p></li><br />
<li><p>After all packages have been successfully downloaded, remove all i686 RPM packages and change the ''output'' directory ownership:</p><br />
<br />
<source lang="none">/ # find ./output -name "*.i686.rpm" | xargs rm -f<br />
/ # chown 751:751 -R ./output<br />
</source></li></ol><br />
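The verification commands above can be combined into a single check that counts entries across all of the missing and fail-move reports. The report file names match those produced by download_mirror.sh as shown above; check_mirror_output itself is a hypothetical helper written for this sketch.

```shell
#!/bin/sh
# Sketch: count entries across the *_missing_L1.txt and
# *_fail_move_L1.txt reports produced by download_mirror.sh.
# check_mirror_output is a hypothetical helper, not part of stx-tools.
check_mirror_output() {
    outdir="$1"
    total=0
    for f in "$outdir"/*_missing_L1.txt "$outdir"/*_fail_move_L1.txt; do
        [ -f "$f" ] || continue             # glob may match nothing
        n=$(grep -c . "$f" || true)         # non-empty lines in report
        total=$((total + n))
    done
    echo "$total packages missing or failed"
    [ "$total" -eq 0 ]                      # non-zero exit: retry needed
}

# Usage (inside the container, from centos-mirror-tools/):
#   check_mirror_output ./output
```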
<br />
=== Create CentOS Mirror Repository ===<br />
<br />
<ol start="1"><li>From a console on your workstation, create a ''mirror/CentOS'' directory under your ''starlingx'' workspace directory:<br />
<br />
<source lang="sh">$ mkdir -p $HOME/starlingx/mirror/CentOS/<br />
</source></li></ol><br />
<ol start="2"><br />
<li>Copy the CentOS Mirror Repository built under ''&lt;stx-tools&gt;/centos-mirror-tools'' to the ''$HOME/starlingx/mirror/CentOS'' workspace directory:<br />
<br />
<source lang="sh">$ cp -r stx-tools/centos-mirror-tools/output/stx-r1/ $HOME/starlingx/mirror/CentOS/<br />
</source></li></ol><br />
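A quick consistency check after the copy is to compare file counts between the build output and the workspace mirror. same_file_count is a hypothetical helper invented for this sketch; the paths in the usage comment are the ones used in the step above.

```shell
#!/bin/sh
# Sketch: compare the number of regular files in two directory trees.
# same_file_count is a hypothetical helper, not part of stx-tools.
same_file_count() {
    a=$(find "$1" -type f | wc -l | tr -d ' ')
    b=$(find "$2" -type f | wc -l | tr -d ' ')
    echo "source: $a files, destination: $b files"
    [ "$a" -eq "$b" ]                       # non-zero exit if they differ
}

# Usage (paths from the step above):
#   same_file_count stx-tools/centos-mirror-tools/output/stx-r1/ \
#                   $HOME/starlingx/mirror/CentOS/stx-r1/
```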
<br />
= Work in Progress... =</div>
<hr />
<div>__NOTOC__<br />
[[File:Starlingx-logo-300w.png|center]]<br />
<br />
{| style="border-collapse: separate; border-spacing: 25px;"<br />
| style="vertical-align:top; width:50%;" | Intel and Wind River have jointly open sourced components from the [https://www.windriver.com/products/titanium-cloud/ Wind River® Titanium Cloud] portfolio, with code being upstreamed to a new open source project called [https://starlingx.io StarlingX], and hosted by the OpenStack Foundation.<br />
<br />
<br />
Wind River Titanium Cloud was built on open source components, which were then extended and hardened to address critical infrastructure requirements: the high availability, fault management, and performance management needed for continuous 24/7 operation.<br />
<br />
The StarlingX project opens many of these enhancements to the open source community, giving others a reference platform upon which to innovate. Through the StarlingX project, Intel and Wind River invite the community to contribute code, and look forward to working together to define the infrastructure stack for edge computing and accelerating integration across many existing open source projects including cloud native technologies.<br />
| style="vertical-align:top; width:50%;" | The code made available via the StarlingX project will:<br />
* Provide service management, REST APIs, and process monitoring<br />
* Deliver standalone fault management service, including extensions to OpenStack Horizon<br />
* Provide software repository management, patching, upgrade, backup, and restore services<br />
* Include bare metal management, a next-generation Virtual Infrastructure Manager (VIM) along with VIM helper components, the OpenStack Nova API proxy, and guest API infrastructure<br />
<br />
<br />
Additional code contributions not upstreamed to existing projects will deliver capabilities through sub-projects under StarlingX, to provide critical functionality: service management, fault management, software and lifecycle management, bare metal installation and management, and configuration management.<br />
|}<br />
----<br />
{| style="border-collapse: separate; border-spacing: 25px;"<br />
|style="vertical-align:top; width:50%;" |<br />
== Documentation ==<br />
<br />
* Under development<br />
<br />
== Code ==<br />
<br />
* [https://git.openstack.org/cgit/openstack/stx Gerrit repositories]<br />
** Instructions for how to download and build the code are in progress<br />
* [https://review.openstack.org/ Gerrit Web UI]<br />
* [https://review.openstack.org/#/admin/projects/?filter=stx StarlingX Gerrit Projects]<br />
** Hint: for a fast review of open StarlingX changes, use the regular expression "status:open AND project:^openstack/stx-@" in the search box<br />
* [https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fstx%2Dclients+OR+project%3Aopenstack%2Fstx%2Dconfig+OR+project%3Aopenstack%2Fstx%2Dfault+OR+project%3Aopenstack%2Fstx%2Dgplv2+OR+project%3Aopenstack%2Fstx%2Dgplv3+OR+project%3Aopenstack%2Fstx%2Dgui+OR+project%3Aopenstack%2Fstx%2Dha+OR+project%3Aopenstack%2Fstx%2Dinteg+OR+project%3Aopenstack%2Fstx%2Dmanifest+OR+project%3Aopenstack%2Fstx%2Dmetal+OR+project%3Aopenstack%2Fstx%2Dnfv+OR+project%3Aopenstack%2Fstx%2Droot+OR+project%3Aopenstack%2Fstx%2Dtis%2Drepo+OR+project%3Aopenstack%2Fstx%2Dtools+OR+project%3Aopenstack%2Fstx%2Dupdate+OR+project%3Aopenstack%2Fstx%2Dupstream+OR+project%3Aopenstack%2Fstx%2Dutils%29+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D%2D1+label%3AVerified%3E%3D1%2Czuul+NOT+reviewedby%3Aself&title=StarlingX+Review+Inbox&Needs+final+%2B2=label%3ACode%2DReview%3E%3D2+limit%3A50+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cself&Passed+Zuul%2C+No+Negative+Feedback+%28Small+Fixes%29=NOT+label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cstarlingx%2Dcore+delta%3A%3C%3D10&Passed+Zuul%2C+No+Negative+Feedback=NOT+label%3ACode%2DReview%3E%3D2+NOT+label%3ACode%2DReview%3C%3D%2D1%2Cstarlingx%2Dcore+delta%3A%3E10&Needs+Feedback+%28Changes+older+than+5+days+that+have+not+been+reviewed+by+anyone%29=NOT+label%3ACode%2DReview%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D1+age%3A5d&You+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=NOT+label%3ACode%2DReview%3C%3D%2D1%2Cself+NOT+label%3ACode%2DReview%3E%3D1%2Cself+reviewer%3Aself&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+2days%29=NOT+is%3Areviewed+age%3A2d StarlingX Gerrit Review Dashboard]<br />
** Also in [http://paste.openstack.org/show/723397/ StarlingX Gerrit Review Dashboard] (copy the URL from there into a browser bookmark)<br />
<br />
== Bug Tracking ==<br />
<br />
* [https://storyboard.openstack.org/#!/worklist/354 StarlingX Bug List]<br />
** This list is sorted manually by drag and drop. <br />
** Please create bugs in Storyboard for any issues found, against one of the stx-* projects. If you can't find the right project, use stx-integ.<br />
** After you create the bug, please add it to the Bug Worklist (link above)<br />
<br />
|style="vertical-align:top; width:50%;" |<br />
== Status and Planning ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-status Status Meeting Notes]<br />
* [https://storyboard.openstack.org/#!/project_group/86 StarlingX Storyboards]<br />
* [https://etherpad.openstack.org/p/stx-planning StarlingX Planning] (contains sub-projects and Worklists)<br />
<br />
== References ==<br />
<br />
* [https://etherpad.openstack.org/p/stx-notes The list of repos] and other things<br />
<br />
== OpenStack Documentation ==<br />
<br />
These are references to general OpenStack material:<br />
<br />
* [https://docs.openstack.org/infra/manual/developers.html Developer's Guide]<br />
* [https://docs.openstack.org/infra/manual/creators.html Project Creator's Guide]<br />
* [https://wiki.openstack.org/wiki/How_To_Contribute The Contributors Guide] (the older wiki page)<br />
* [https://governance.openstack.org/tc/reference/project-testing-interface.html Project Testing Interface]<br />
|}</div>