all your hosts are belong to us!!!
hostout.cloud lets you build whole new application clusters in a matter of minutes.
Building on the power of collective.hostout, buildout and fabric, just a few lines of configuration and a single command are all that's needed to deploy apache, squid, mysql, zope, plone, django…
Installing hostout.cloud
Here is the world's simplest buildout, which just creates a python script that outputs a single line. Check out buildout for the power of what it can really deploy.
[buildout]
parts = host1 helloworld

[helloworld]
recipe = zc.recipe.egg:scripts
eggs = zc.recipe.egg
initialization = import sys
    main=lambda: sys.stdout.write('all your hosts are belong to us!!!')
entry-points = helloworld=__main__:main
This is the development buildout which you can build as normal on your local machine.
Add the collective.hostout part to your development buildout. Using the extends option we add hostout.cloud to handle creating a host and hostout.ubuntu to bootstrap that host ready for deployment.
>>> write('buildout.cfg',
... """
... [buildout]
... parts = host1 helloworld
...
... [helloworld]
... recipe = zc.recipe.egg:scripts
... eggs = zc.recipe.egg
... initialization = import sys
...     main=lambda: sys.stdout.write('all your hosts are belong to us!!!')
... entry-points = helloworld=__main__:main
...
... [host1]
... recipe = collective.hostout
... extends = hostout.cloud
... hostsize = 256
... hostos = Ubuntu 9.10
... hosttype = rackspacecloud
... key = myaccount
... secret = myapikey
... parts = helloworld
... """
... )
>>> print system('bin/buildout -N')
Installing host1.
Generated script '/sample-buildout/bin/hostout'.
Generated script '/sample-buildout/bin/helloworld'.
Now with a single command everything is done for us (see collective.hostout for more information):
>>> print system('bin/hostout host1 deploy')
Now we have both a local testing environment for our app:
>>> print system('bin/helloworld')
all your hosts are belong to us!!!
and the same app is now deployed to the cloud in our production environment. We can use the collective.hostout run command to test this:
>>> print system('bin/hostout host1 run bin/helloworld')
all your hosts are belong to us!!!
Change your local code and just run deploy again:
>>> print system('bin/hostout host1 deploy')
Redeploying only uploads what's changed.
Reboot your server:
>>> print system('bin/hostout host1 reboot')
and destroy it when you’re done:
>>> print system('bin/hostout host1 destroy')
Supported Cloud providers
hostout.cloud uses libcloud. See the libcloud site for the supported providers and the options for each.
Currently rackspacecloud and EC2 are the only providers that have been tested.
Common options
- hostname
Unique name used to create the VM
- hostsize
The desired RAM size in MB. You will get the closest VM with at least this much RAM
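As a minimal sketch, the common options above sit alongside the recipe and extends lines in a hostout part; the hostname value here is illustrative, and the provider-specific options (hosttype, key, secret) described below still need to be added:

[host1]
recipe = collective.hostout
extends = hostout.cloud
hostname = myapp-node1
hostsize = 512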
Rackspace
- key
your username
- secret
your API password
- hostos
the title of the OS as shown in the distribution selection list
Amazon EC2
- key
your API access key
- secret
your API secret key
- key_filename
path to your .pem file
- hostos
the title of the image to use
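Putting the EC2 options together, a part might look like the following sketch; the hosttype string, image title and all credential values here are illustrative placeholders, not values confirmed by this package:

[host2]
recipe = collective.hostout
extends = hostout.cloud
hosttype = ec2
hostsize = 512
hostos = Ubuntu 9.10
key = MYACCESSKEYID
secret = MYSECRETKEY
key_filename = /home/me/mykey.pem
parts = helloworld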
1.0a5 (2010-03-21)
add support to choose your instance type size (example: m1.large on EC2)
improve image picking algorithm: pick the later image_id; on EC2 just use official Ubuntu images
commands for listing images and sizes.
made compatible with collective.hostout 1.0a5
libcloud is ignored when an IP address is specified
fix default sudo-user to “root”
added ability to test if node has been created: is_created() returning bool
fixed how hostout hooks into bootstrap process to create node before bootstrapping
fixed “destroy” so “createnode” can work straight afterwards.
1.0a4
fixed some bugs
1.0a3 (2010-06-03)
rerelease due to bad version in upload
1.0a2 (2010-06-03)
use fabfile entrypoint
ec2 now working
deploy with create works now
1.0a1 (unreleased)
rackspace cloud working
initial version