= [[OpenStack]]/SmartOS Compute =
 
This is a work-in-progress page about [[OpenStack]] and SmartOS.
 
= Getting a Development Environment Going for SmartStack =

'''TODO''': Put this under control of Chef/Puppet

* dev: need a vagrant SmartOS box - use veewee
* prod: DHCP, PXE boot, foreman, razor

The current assumption is that there is a controller node running with all necessary services (such as keystone). This could be a devstack environment; it is also possible to run these services in separate zones on SmartOS. I currently use a devstack (Grizzly) environment on the controller node. The IPs which interconnect the two VMs are 192.168.56.101 (SmartOS) and 192.168.56.102 (controller).

I will not go into detail on [http://www.devstack.org how to run devstack] on the controller node - it is straightforward.
== The basics ==

=== Getting SmartOS ===

Check out:

* [http://wiki.smartos.org/display/DOC/SmartOS+on+VirtualBox SmartOS wiki hints]
* [http://www.perkin.org.uk/posts/automated-virtualbox-smartos-installs.html Automatically creating a SmartOS VirtualBox VM]

If you use the script above, modify the SmartOS VirtualBox VM to have two NICs configured: one with Internet access (NAT) and one host-only.

=== Onwards... ===
Then get `pkgin` running in the global zone:

    $ cd /
    $ curl -k http://pkgsrc.joyent.com/packages/SmartOS/bootstrap/bootstrap-2013Q1-x86_64.tar.gz | gzcat | tar -xf -
    $ pkg_admin rebuild
    $ pkgin -y up

Install some packages which are needed or come in handy (mc, vim) during the next steps:

    $ pkgin install mc vim scmgit python2.7 py27-pip py27-expat py27-sqlite2 py27-mysqldb gcc47 gnu-binutils libxslt

Get the code and install the needed dependencies:

    $ mkdir -p /zones/workspace
    $ cd /zones/workspace
    $ git clone git://github.com/tmetsch/nova.git -b smartos/grizzly
    $ cd nova/tools
    $ export PATH=/opt/local/gcc47/bin/:$PATH
    $ export CFLAGS="-D_XPG6 -std=c99"
    $ pip install -r pip-requires
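The `pip install` step only succeeds with gcc47 first on PATH and the CFLAGS exported above. A small pre-flight sketch of those two checks (`preflight` is a hypothetical helper, not part of nova; the paths are the ones used in this walkthrough):

```python
# Pre-flight check for the build environment used by `pip install -r pip-requires`:
# gcc47 must be first on PATH and CFLAGS must carry the flags exported above.
def preflight(env):
    issues = []
    path = env.get("PATH", "")
    if not path.startswith("/opt/local/gcc47/bin"):
        issues.append("gcc47 not first on PATH")
    cflags = env.get("CFLAGS", "")
    for flag in ("-D_XPG6", "-std=c99"):
        if flag not in cflags:
            issues.append("CFLAGS missing " + flag)
    return issues

print(preflight({"PATH": "/opt/local/gcc47/bin/:/usr/bin",
                 "CFLAGS": "-D_XPG6 -std=c99"}))  # → []
```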
== Configuring the SmartOS Compute Node ==

Edit the sample `./etc/nova/nova.conf` file:

    [DEFAULT]
    rabbit_host=192.168.56.102
    rabbit_password=secret
    verbose=True
    fake_network=True
    auth_strategy=keystone
    # network_manager=nova.network.manager.FlatDHCPManager
    glance_api_servers=192.168.56.102:9292
    sql_connection=mysql://root:secret@192.168.56.102/nova?charset=utf8
    compute_driver=nova.virt.smartosapi.driver.SmartOSDriver
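Since the file is flat INI-style key=value text, it can be sanity-checked before starting the services. A minimal sketch (`check_conf` is a hypothetical helper; the embedded sample mirrors the config above):

```python
# Sanity-check a nova.conf-style INI file before launching the services.
# interpolation=None keeps any literal '%' characters in values intact.
from configparser import ConfigParser
from io import StringIO

SAMPLE = """\
[DEFAULT]
rabbit_host=192.168.56.102
rabbit_password=secret
verbose=True
fake_network=True
auth_strategy=keystone
glance_api_servers=192.168.56.102:9292
sql_connection=mysql://root:secret@192.168.56.102/nova?charset=utf8
compute_driver=nova.virt.smartosapi.driver.SmartOSDriver
"""

def check_conf(text):
    cp = ConfigParser(interpolation=None)
    cp.read_file(StringIO(text))
    d = cp["DEFAULT"]
    problems = []
    if d.get("auth_strategy") != "keystone":
        problems.append("auth_strategy should be keystone")
    if not d.get("compute_driver", "").endswith("SmartOSDriver"):
        problems.append("compute_driver is not the SmartOS driver")
    return problems

print(check_conf(SAMPLE))  # → []
```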
Create a symlink `/opt/local/bin/python` pointing to `/opt/local/bin/python2.7`:

    $ ln -s /opt/local/bin/python2.7 /opt/local/bin/python

Run `nova-compute`:

    $ export LD_LIBRARY_PATH=/opt/local/lib
    $ bin/nova-compute --config-file etc/nova/nova.conf

Run `nova-network`:

    $ export LD_LIBRARY_PATH=/opt/local/lib
    $ bin/nova-network --config-file etc/nova/nova.conf

At this point you will have the set of services necessary to begin provisioning VMs.
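Both services need `LD_LIBRARY_PATH` as exported above. When launching them from a wrapper instead of an interactive shell, the environment can be passed explicitly; a sketch (a stand-in `echo` command is used here instead of the real `bin/nova-compute` invocation):

```python
# Launch a command with LD_LIBRARY_PATH set, mirroring the shell exports above.
import os
import subprocess

def run_with_libpath(cmd, libpath="/opt/local/lib"):
    env = dict(os.environ, LD_LIBRARY_PATH=libpath)
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

# Stand-in command; in the walkthrough this would be
# ["bin/nova-compute", "--config-file", "etc/nova/nova.conf"].
result = run_with_libpath(["/bin/sh", "-c", "echo $LD_LIBRARY_PATH"])
print(result.stdout.strip())  # → /opt/local/lib
```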
== Starting a SmartOS image ==

Get an image (make sure it is f9e4be48-9466-11e1-bc41-9f993f5dff36 for now - smartos64 v1.6.3 - [https://github.com/dstroppa/openstack-smartos-nova-grizzly/blob/master/nova/virt/smartosapi/zone_image.py#L40 see this line]):

    $ imgadm import f9e4be48-9466-11e1-bc41-9f993f5dff36
    $ zfs snapshot zones/f9e4be48-9466-11e1-bc41-9f993f5dff36@now
    $ zfs send zones/f9e4be48-9466-11e1-bc41-9f993f5dff36@now > /zones/workspace/smartos.img

Now, on the controller node, register it with glance:

    $ cd /zones/workspace
    $ glance image-create --name 'smartos' --is-public 'true' --container-format 'bare' --disk-format 'raw' --property 'zone=true' < smartos.img

Boot an instance:

    $ nova boot --flavor=m1.nano --image=smartos testserver --availability-zone nova:08-00-27-ee-14-2a

'''Note''': You may need to change the policies for the user you are using to start the instance - they must allow forced_hosts.
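In this walkthrough the compute host is named after its NIC's MAC address, dash-separated, and the boot command pins the instance to it via the availability-zone hint. A tiny helper to build that hint (a sketch under that naming assumption; `az_hint` is hypothetical):

```python
# Build the "nova:<host>" availability-zone hint used in the boot command above.
# Assumption: compute hosts are named after their MAC address with dashes.
def az_hint(mac, zone="nova"):
    host = mac.lower().replace(":", "-")
    return zone + ":" + host

print(az_hint("08:00:27:EE:14:2A"))  # → nova:08-00-27-ee-14-2a
```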
Check the instance:

    $ zoneadm list -cv
    ID NAME                                 STATUS   PATH                                        BRAND    IP
    0  global                               running  /                                           liveimg  shared
    1  e99947b5-3578-4f4b-b102-d6ef5706b173 running  /zones/e99947b5-3578-4f4b-b102-d6ef5706b173 joyent   excl
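The `zoneadm list -cv` output above is plain whitespace-separated text, so it is easy to post-process. A rough parsing sketch (`parse_zoneadm` is a hypothetical helper; it assumes the six-column layout shown and would mis-split paths containing spaces, which zone paths normally do not):

```python
# Parse `zoneadm list -cv` output into records, skipping the header row.
def parse_zoneadm(output):
    zones = []
    for line in output.strip().splitlines()[1:]:
        parts = line.split()
        if len(parts) == 6:
            zid, name, status, path, brand, ip = parts
            zones.append({"id": zid, "name": name, "status": status,
                          "path": path, "brand": brand, "ip": ip})
    return zones

# The exact listing from this section:
sample = """\
ID NAME STATUS PATH BRAND IP
0 global running / liveimg shared
1 e99947b5-3578-4f4b-b102-d6ef5706b173 running /zones/e99947b5-3578-4f4b-b102-d6ef5706b173 joyent excl
"""
print([z["name"] for z in parse_zoneadm(sample) if z["status"] == "running"])
```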
== Coding against it with PyCharm's remote interpreter option ==

PyCharm offers a nice [http://www.jetbrains.com/pycharm/webhelp/configuring-remote-interpreters-via-ssh.html remote interpreter] feature. To set it up:

* Configure the remote interpreter: Settings > Project Interpreters > Python Interpreters > Add (Remote)
* Configure the deployment settings: Settings > Deployment > Add
** Do not forget to set proper path mappings
** Include .git in the deployment options tab
* Right click in the project tree > Upload to **your name here**
* Create a new Python run configuration with the remote interpreter
** Do not forget to set the '--config-file' script parameter
** Alter the Path environment and add: /smartdc/bin

Now you can run nova-compute and nova-network from your IDE on the SmartOS node.
 
= Some helpers =

== Getting the Code ==

The code is currently not in the mainline code base. The latest version is available at:

* https://github.com/hvolkmer/nova/tree/smartosfolsom
* https://github.com/dstroppa/openstack-smartos-nova-grizzly
* https://github.com/tmetsch/nova/tree/smartos/grizzly

Once the code is mature enough it should be easy to integrate it into the mainline.
== Setting up an environment ==

There is also a setup script that you can use to create an environment, similar to devstack but focused on SmartOS: https://github.com/hvolkmer/openstack-smartos

This is a work in progress. YMMV. Additions welcome.

Thijs Metsch also has a setup script.
=== Running the metadata service ===

1. Add these options to compute.conf: metadata_listen=169.254.169.254 and metadata_listen_port=80

2. Configure the metadata interface like this:

<pre><nowiki>
dladm create-vnic -l e1000g0 meta0
ifconfig meta0 plumb
ifconfig meta0 169.254.169.254/32
ifconfig meta0 up
</nowiki></pre>

3. Then start /openstack/nova/bin/nova-api-metadata --config-file=/openstack/cfg/compute.conf in the global (compute) zone.

4. When booting VMs I currently force the default route IP to be the IP of the compute zone, so that the metadata IP is reachable.
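The metadata address above is the conventional link-local metadata IP, which is why it is put on a dedicated meta0 VNIC instead of relying on ordinary routing. A quick check with Python's standard `ipaddress` module:

```python
# 169.254.169.254 sits in the 169.254.0.0/16 link-local range, so it is
# never routed and must be reachable on a local interface such as meta0.
import ipaddress

meta = ipaddress.ip_address("169.254.169.254")
print(meta.is_link_local)                              # → True
print(meta in ipaddress.ip_network("169.254.0.0/16"))  # → True
```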
== Install New Relic ==

    $ pkgin install nrsysmond
    $ vi /opt/local/etc/nrsysmond.cfg
    $ svcadm enable nrsysmond:default
== Install Puppet ==

    $ pkgin in ruby19-puppet

or:

    $ pkgin in ruby18-puppet

== Install Chef ==

    $ pkgin in ruby19
    $ gem install chef
= Next development steps =

As basic VM handling is now working, the following development steps should happen:

* add unit tests to pin down the current VM start/stop behaviour for SmartOS
* add a networking manager based on the VLANManager, but integrate with vmadm/crossbow instead of the Linux way
 
= References =

* http://www.cloudcomp.ch/2013/04/openstack-on-smartos/
* http://blog.hendrikvolkmer.de/2012/08/31/porting-openstack-to-smartos/
* http://www.nohuddleoffense.de/2012/02/12/smartstack-smartos-openstack-part-1/
* http://www.nohuddleoffense.de/2012/02/12/smartstack-smartos-openstack-part-2/
* https://blueprints.launchpad.net/nova/+spec/smartos-support
* https://docs.google.com/presentation/d/1N2x1itaMc9h7q-6520vuFmmjT0MXrIt1MdfAoGnENDI/edit
* http://andy.edmonds.be/post/smartos-openstack

Latest revision as of 14:00, 14 April 2014
