OpenStack_Deployment_using_Puppet
Table of Contents
- Introduction
- Puppet server and client installation
- OpenStack Deployment with Puppet
- Other Design Considerations
This mini-design discusses how to use Puppet to deploy OpenStack software. Puppet is automation software that helps system administrators manage software throughout its life cycle, from provisioning and configuration to patch management and compliance. Puppet is available as both open source and commercial software; we only support the open source Puppet in xCAT. Puppet Labs (https://puppetlabs.com/) provides many modules that automate the deployment of critical applications such as OpenStack. OpenStack (http://www.openstack.org/) is open source software that provides infrastructure for cloud computing.
This doc discusses how to set up the puppet server and clients within an xCAT cluster and then kick off the OpenStack deployment using Puppet. We make the following assumptions:
- All the nodes have an external network with internet connectivity and an internal network.
- The OpenStack controller will not be the xCAT management node, because the OpenStack DHCP service conflicts with the xCAT DHCP server.
We divide the work into the following two functions:
- Puppet server and client installation
- OpenStack deployment with puppet
Function #1 is itself a standalone feature that allows users to use Puppet to deploy other applications.
The puppet server and client can be installed on many operating systems. Due to time constraints, we'll limit support to Ubuntu and Red Hat for the xCAT 2.8.1 release.
The following four postscripts will be created; they will be installed under the /install/postscripts directory.
- install_puppet_server: It can be run on the mn as a script or on a node as a postscript to install and configure the puppet server. It first installs the puppet-server rpm and its dependencies, then calls the config_puppet_server script to modify the puppet server configuration files, and finally restarts the puppet server so that the new configuration takes effect.
- install_puppet_client: It is run as a postscript on a node. It first downloads and installs the puppet client rpm and its dependencies, then calls the config_puppet_client script to modify the puppet client configuration files. It does NOT start the puppet agent, because that may kick off the application deployment prematurely.
- config_puppet_server: It is called by install_puppet_server on Ubuntu and by the puppet kit on RH (discussed later). It sets the certname in the /etc/puppet/puppet.conf file; this name will be referenced as the puppet server name by the client in order to certify with the server. The site.puppetserver value will be used as the certname if it is defined, otherwise site.master will be used. It also sets up the /etc/puppet/autosign.conf file so that client certificates can be signed automatically (see the sketch after this list).
- config_puppet_client: It is called by install_puppet_client on Ubuntu and by the puppet kit on RH (discussed later). It sets the server name and the node's own name in the /etc/puppet/puppet.conf file.
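For illustration, here is a minimal sketch of the configuration these scripts produce, assuming a hypothetical puppet server named mgt.cluster.com and a cluster domain of cluster.com (the exact file contents are determined by the scripts themselves):
# /etc/puppet/puppet.conf on the server, written by config_puppet_server
[main]
    certname = mgt.cluster.com        # from site.puppetserver, or site.master if not set
# /etc/puppet/autosign.conf on the server, written by config_puppet_server
*.cluster.com                         # sign certificate requests from cluster nodes automatically
# /etc/puppet/puppet.conf on a client node, written by config_puppet_client
[agent]
    server   = mgt.cluster.com        # the puppet server name
    certname = node1.cluster.com      # the node's own name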
This is what the user will do when installing puppet:
First assign a node as the puppet server; it can be the mn or any other node.
chdef -t site clustersite puppetserver=<nodename>
If the puppet server is the mn, run
install_puppet_server
If the puppet server is not the mn, first add install_puppet_server to the postscripts table for the node:
chdef -t node -o <nodename> -p postbootscripts=install_puppet_server
Then run updatenode or redeploy the node:
updatenode <nodename> -P install_puppet_server
or
rsetboot <nodename> net
nodeset <nodename> osimage=<imgname>
rpower <nodename> reset
To install the puppet client on the nodes, first add install_puppet_client to the postscripts table for the nodes:
chdef -t node -o <noderange> -p postbootscripts=install_puppet_client
Then run updatenode or redeploy the nodes:
updatenode <noderange> -P install_puppet_client
or
rsetboot <noderange> net
nodeset <noderange> osimage=<imgname>
rpower <noderange> reset
A kit will be used to install the puppet server and client on Red Hat and other platforms. In this release we'll create a kit only for rhels6 on the x86_64 architecture. The puppet rpms and their dependency packages will be downloaded from https://yum.puppetlabs.com/el/6/dependencies/x86_64/ and https://yum.puppetlabs.com/el/6/products/x86_64/
The kit is named puppet. It will contain two kit components:
- puppet_server_kit
- puppet_client_kit
The buildkit.conf file looks like this:
kit:
   basename=puppet
   description=Kit for installing puppet server and client
   version=1.0
   ostype=Linux
   kitlicense=EPL

kitrepo:
   kitrepoid=rhels6_x86_64
   osbasename=rhels
   osmajorversion=6
   #osminorversion=
   osarch=x86_64
   #compat_osbasenames=

kitcomponent:
   basename=puppet_client_kit
   description=For installing puppet client
   version=1.0
   release=1
   serverroles=servicenode,compute
   kitrepoid=rhels6_x86_64
   kitpkgdeps=puppet
   postinstall=client.rpm_post
   postbootscripts=client.post

kitcomponent:
   basename=puppet_server_kit
   description=For installing puppet server
   version=1.0
   release=1
   serverroles=mgtnode
   kitrepoid=rhels6_x86_64
   kitpkgdeps=puppet-server
   postinstall=server.rpm_post
   postbootscripts=server.post

kitpackage:
   filename=puppet-3*
   kitrepoid=rhels6_x86_64
   isexternalpkg=no
   rpm_prebuiltdir=rhels6/x86_64

kitpackage:
   filename=puppet-server-*
   kitrepoid=rhels6_x86_64
   isexternalpkg=no
   rpm_prebuiltdir=rhels6/x86_64

kitpackage:
   filename=*
   kitrepoid=rhels6_x86_64
   isexternalpkg=no
   rpm_prebuiltdir=rhels6/x86_64
The server.rpm_post and client.rpm_post scripts are used as the %post scripts for the puppet_server_kit.rpm and puppet_client_kit.rpm meta packages respectively. They configure puppet after it is installed: server.rpm_post calls config_puppet_server and client.rpm_post calls config_puppet_client. However, this will not work for stateless/statelite images, where the puppet rpms are installed into the image by the genimage command, because both config* scripts need environment variables such as $NODE, $PUPPETSERVER, and $SITEMASTER in order to configure puppet. When genimage runs, these environment variables are not set, and $NODE cannot be the same for every node. To solve this problem, two other scripts, server.post and client.post, will be used as postbootscripts in the postscripts table for the nodes.
The client.rpm_post looks like this:
if [ -f "/proc/cmdline" ]; then   # prevent running it during install into a chroot image
    # configure the puppet agent configuration files
    /xcatpost/config_puppet_client "$@"
fi
exit 0
The client.post looks like this:
if [ "$NODESETSTATE" = "install" ]; then
#prevent getting called during full install bootup
#because the function will be called in the rpm %post section instead
exit 0
else
#configure the puppet agent configuration files
/xcatpost/config_puppet_client "$@"
fi
exit 0
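The corresponding server-side scripts are not listed in this design. As a rough sketch only, assuming server.rpm_post simply mirrors client.rpm_post by calling config_puppet_server instead (and server.post guards on $NODESETSTATE the same way client.post does), it would look something like this:
if [ -f "/proc/cmdline" ]; then   # prevent running it during install into a chroot image
    # configure the puppet server configuration files
    /xcatpost/config_puppet_server "$@"
fi
exit 0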
The following are the user instructions for installing the puppet server and client on RH, assuming the mn is the puppet server.
Add the mn to the xCAT DB
xcatconfig -m
Please make sure that there is an image name assigned to the mn and each node. To check, run
lsdef <nodename> provmethod
Add the puppet server name to the site table
chdef -t site clustersite puppetserver=<mnname>
Download the puppet kit
wget http://xcat.sourceforge.net/#download... (TBD)
Add the kit to xCAT
addkit puppet-1.0-Linux.tar.bz2
addkitcomp -i <mn_image_name> puppet_server_kit
addkitcomp -i <node_image_name> puppet_client_kit
To install the puppet server on the mn, run
updatenode <mnname> -P otherpkgs
To install the puppet client on the nodes, run updatenode or redeploy the nodes. Make sure yum is installed on all the nodes.
updatenode <noderange> -P otherpkgs
or
rsetboot <noderange> net
nodeset <noderange> osimage=<imgname>
rpower <noderange> reset
With the automation provided by the OpenStack Puppet modules, deploying OpenStack is quite easy if everything goes well. If something goes wrong, you have to debug the modules, which is not as easy. The goal is to set everything up front so that the installation goes smoothly.
1. Load the OpenStack modules
puppet module install puppetlabs-openstack
puppet module list
2. Create a site manifest site.pp for OpenStack
cat /etc/puppet/modules/openstack/examples/site.pp >> /etc/puppet/manifests/site.pp
Note: There are two errors in the /etc/puppet/manifests/site.pp file; they were found when deploying OpenStack Folsom on Ubuntu 12.04.2. You may need to make the following changes for your cluster: in the 'openstack::controller' class, comment out the export_resources entry and add an entry for secret_key, so that the last two entries of the class look like this:
#export_resources => false,
secret_key => 'dummy_secret_key',
Note: I have created a script for steps 1 and 2, but feel it is not worth checking in. This should not be a big burden for the user anyway.
3. Input cluster info in the site.pp file
Now you can modify the file /etc/puppet/manifests/site.pp and input the network info and a few passwords. We usually make all the passwords the same. The following entries must be filled in:
$public_interface = 'eth0'
$private_interface = 'eth1'
# credentials
$admin_email = 'root@localhost'
$admin_password = 'keystone_admin'
$keystone_db_password = 'keystone_db_pass'
$keystone_admin_token = 'keystone_admin_token'
$nova_db_password = 'nova_pass'
$nova_user_password = 'nova_pass'
$glance_db_password = 'glance_pass'
$glance_user_password = 'glance_pass'
$rabbit_password = 'openstack_rabbit_password'
$rabbit_user = 'openstack_rabbit_user'
$fixed_network_range = '10.0.0.0/24'
$floating_network_range = '192.168.101.64/28'
$controller_node_address = '192.168.101.11'
Then add the OpenStack controller and compute nodes to the site.pp file. You can replace "node /openstack_controller/" and "node /openstack_compute/" or "node /openstack_all/" with the node names of your cluster, for example:
node "node1" {
class { 'openstack::controller':
...
}
node "node2,node3" {
class { 'openstack::compute':
...
}
1. Set up the OpenStack repo on the nodes
chdef -t node -o <noderange> -p postbootscripts=setup_openstack_repo
updatenode <noderange> -P setup_openstack_repo
setup_openstack_repo has hard-coded OpenStack repositories, which you can modify to fit your needs (a sketch of the Ubuntu setup follows the list below). It uses the following repositories:
- Ubuntu: http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main
- RH: TBD
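As an illustration only (not the actual contents of setup_openstack_repo), the Ubuntu case amounts to something like the following; the file name under sources.list.d is just an example:
# add the Ubuntu Cloud Archive repository for Folsom
echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main" \
    > /etc/apt/sources.list.d/cloud-archive.list
apt-get -y install ubuntu-cloud-keyring    # trust the Cloud Archive signing key
apt-get update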
2. Deploy OpenStack
xdsh <controller_nodename> "puppet agent -t"
xdsh <compute_nodenames> "puppet agent -t"
Now OpenStack is installed and configured on your nodes. Please refer to the Puppet module's own doc, /etc/puppet/modules/openstack/README.md, for detailed instructions on how to use Puppet to deploy OpenStack.
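As a quick sanity check (not part of this design), you can verify from the mn that the deployment converged, for example:
xdsh <controller_nodename> "puppet agent -t --noop"      # re-run in no-op mode; no further changes should be pending
xdsh <controller_nodename> "nova-manage service list"    # with the Folsom CLI, healthy services show a ':-)' state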
- Required reviewers: Bruce Potter, Guang Cheng, Jie Hua
- Required approvers: Bruce Potter
- Database schema changes: N/A
- Affect on other components: N/A
- External interface changes, documentation, and usability issues: N/A
- Packaging, installation, dependencies: N/A
- Portability and platforms (HW/SW) supported: N/A
- Performance and scaling considerations: N/A
- Migration and coexistence: N/A
- Serviceability: N/A
- Security: N/A
- NLS and accessibility: N/A
- Invention protection: N/A