© 2014 Canonical Ltd. Ubuntu and Canonical are registered trademarks of Canonical Ltd.
Ubuntu Cloud Documentation
Deploying Production Grade OpenStack with MAAS, Juju and
Landscape
This documentation has been created to describe best practice in deploying a Production Grade installation of OpenStack
using current Canonical technologies, including bare metal provisioning using MAAS, service orchestration with Juju and
system management with Landscape.
This documentation is divided into four main topics:
1. Installing the MAAS Metal As A Service software
2. Installing Juju and configuring it to work with MAAS
3. Using Juju to deploy OpenStack
4. Deploying Landscape to manage your OpenStack cloud
Once you have an up and running OpenStack deployment, you should also read our Administration Guide which details
common tasks for maintenance and scaling of your service.
Legal notices
This documentation is copyright of Canonical Limited. You are welcome to display on your computer, download and print
this documentation or to use the hard copy provided to you for personal, education and non-commercial use only. You
must retain copyright, trademark and other notices unaltered on any copies or printouts you make. Any trademarks, logos
and service marks displayed in this document are property of their owners, whether Canonical or third parties. This
documentation is provided on an “as is” basis, without warranty of any kind, either express or implied. Your use of this
documentation is at your own risk. Canonical disclaims all warranties and liability that may result directly or indirectly
from the use of this documentation.
Installing the MAAS software
Scope of this documentation
This document provides instructions on how to install the Metal As A Service (MAAS) software. It has been prepared
alongside guides for installing Juju, OpenStack and Landscape as part of a production grade cloud environment. MAAS
itself may be used in different ways and you can find documentation for this on the main MAAS website [MAAS docs]. For
the purposes of this documentation, the following assumptions have been made:
You have sufficient, appropriate node hardware
You will be using Juju to assign workloads to MAAS
You will be configuring the cluster network to be controlled entirely by MAAS (i.e. DNS and DHCP)
If you have a compatible power-management system, any additional hardware required is also installed (e.g. an IPMI
network).
Introducing MAAS
Metal as a Service – MAAS – lets you treat physical servers like virtual machines in the cloud. Rather than having to
manage each server individually, MAAS turns your bare metal into an elastic cloud-like resource.
What does that mean in practice? Tell MAAS about the machines you want it to manage and it will boot them, check the
hardware’s okay, and have them waiting for when you need them. You can then pull nodes up, tear them down and
redeploy them at will; just as you can with virtual machines in the cloud.
When you’re ready to deploy a service, MAAS gives Juju the nodes it needs to power that service. It’s as simple as that: no
need to manually provision, check and, afterwards, clean-up. As your needs change, you can easily scale services up or
down. Need more power for your Hadoop cluster for a few hours? Simply tear down one of your Nova compute nodes and
redeploy it to Hadoop. When you’re done, it’s just as easy to give the node back to Nova.
MAAS is ideal where you want the flexibility of the cloud, and the hassle-free power of Juju charms, but you need to
deploy to bare metal.
Installing MAAS from the Cloud Archive
The Ubuntu Cloud Archive is a repository made especially to provide users with the most up to date, stable versions of
MAAS, Juju and other tools. It is highly recommended to keep your software up to date:
sudo apt-get update
There are several packages that comprise a MAAS install. These are:
maas-region-controller: the 'control' part of the software, including the web-based user interface, the API server and the main database.
maas-cluster-controller: the software required to manage a cluster of nodes, including managing DHCP and boot images.
maas-dns: a customised DNS service that MAAS can use locally to manage DNS for all the connected nodes.
maas-dhcp: as for DNS, a DHCP service to enable MAAS to correctly enlist nodes and assign IP addresses. The DHCP setup is critical for the correct PXE booting of nodes.
As a convenience, there is also a maas metapackage, which will install all of these components.
If you need to separate these services or want to deploy an additional cluster controller, you should install the
corresponding packages individually.
Installing the packages
Running the command:
sudo apt-get install maas
...will initiate installation of all the components of MAAS.
The maas-dhcp and maas-dns packages should be installed by default.
Once the installation is complete, the web-based interface for MAAS will start. In many cases, your MAAS controller will
have several NICs. By default, all the services will start using the first discovered network interface (usually eth0).
Before you login to the server for the first time, you should create a superuser account.
Create a superuser account
Once MAAS is installed, you'll need to create an administrator account:
sudo maas-region-admin createsuperuser
Running this command will prompt for a username, an email address and a password for the admin user. You may use any
username for your administrator account, but "root" is a common convention and easy to remember.
You can run this command again for any further administrator accounts you may wish to create, but you need at least one.
Import the boot images
MAAS will check for and download new Ubuntu images once a week. However, you'll need to download them manually
the first time. To do this you should connect to the MAAS web interface using a web browser. Use the URL:
http://172.18.100.1/MAAS/
You should substitute in the IP address of the server where you have installed the MAAS software. If there are several
possible networks, by default it will be on whichever one is assigned to the eth0 device.
You should see a login screen like this:
Enter the username and password you specified for the admin account. When you have successfully logged in you should
see the main MAAS page:
Either click on the link displayed in the warning at the top, or on the 'Clusters' tab in the menu to get to the cluster
configuration screen. The initial cluster is automatically added to MAAS when you install it, but it has no associated
images for booting nodes with yet. Click on the button to begin the download of suitable boot images.
Importing the boot images can take some time, depending on the available network connection. This page does not
dynamically refresh, so you can refresh it manually to determine when the boot images have been imported.
Login to the server
To check that everything is working properly, you should try to log in to the server now. Both error messages should
have gone (it can take a few minutes for the boot image files to register) and you can see that there are currently 0 nodes
attached to this controller.
Configure switches on the network
Some switches use Spanning-Tree Protocol (STP) to negotiate a loop-free path through a root bridge. While scanning, it
can make each port wait up to 50 seconds before data is allowed to be sent on the port. This delay in turn can cause
problems with some applications/protocols such as PXE, DHCP and DNS, of which MAAS makes extensive use.
To alleviate this problem, you should enable Portfast for Cisco switches or its equivalent on other vendor equipment,
which enables the ports to come up almost immediately.
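On Cisco IOS switches, for example, PortFast is enabled per access port. A minimal sketch follows; the interface name is illustrative, and you should check your vendor documentation before applying anything:

```
interface GigabitEthernet0/1
 spanning-tree portfast
```

PortFast should only be enabled on ports connected directly to nodes, never on inter-switch links, since it bypasses the loop-detection delay.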
Add an additional cluster
Whilst it is certainly possible to run MAAS with just one cluster controller for all the nodes, in the interests of easier
maintenance, upgrades and stability, it is desirable to have at least two operational clusters.
Each cluster needs a controller node. Install Ubuntu on this node and then follow a similar setup procedure to install the
cluster controller software:
sudo apt-get update
sudo apt-get install maas-cluster-controller
sudo apt-get install maas-dhcp maas-dns
Once the cluster software is installed, it is useful to run:
sudo dpkg-reconfigure maas-cluster-controller
This will enable you to make sure the cluster controller agent is pointed at the correct address for the MAAS master
controller.
Configure additional Cluster Controller(s)
Cluster acceptance
When you install your first cluster controller on the same system as the region controller, it will be automatically
accepted by default (but not yet configured; see below). Any other cluster controllers you set up will show up in the user
interface as “pending” until you manually accept them into MAAS.
To accept a cluster controller, click on the "Clusters" tab at the top of the MAAS web interface:
You should see that the text at the top of the page indicates a pending cluster. Click on that text to get to the Cluster
acceptance screen.
Here you can change the cluster’s name as it appears in the UI, its DNS zone, and its status. Accepting the cluster changes
its status from “pending” to “accepted.”
Now that the cluster controller is accepted, you can configure one or more of its network interfaces to be managed by
MAAS. This will enable the cluster controller to manage nodes attached to those networks. The next section explains how
to do this and what choices are to be made.
Cluster Configuration
MAAS automatically recognises the network interfaces on each cluster controller. Some of these will be connected to
networks where you want to manage nodes. We recommend letting your cluster controller act as a DHCP server for these
networks, by configuring those interfaces in the MAAS user interface.
As an example, we will configure the cluster controller to manage a network on interface eth0. Click on the edit icon for
eth0, which takes us to this page:
Here you can select to what extent you want the cluster controller to manage the network:
DHCP only - this will run a DHCP server on your cluster
DHCP and DNS - this will run a DHCP server on the cluster and configure the DNS server included with the region
controller so that it can be used to look up hosts on this network by name (recommended).
You cannot have DNS management without DHCP management because MAAS relies on its own DHCP server’s leases file
to work out the IP address of nodes in the cluster. If you set the interface to be managed, you now need to provide all of
the usual DHCP details in the input fields below. Once done, click “Save interface”. The cluster controller will now be able
to boot nodes on this network.
There is also an option to leave the network unmanaged. Use this for networks where you don’t want to manage any
nodes, or where you do want to manage nodes but want to use an existing DHCP service on your network.
A single cluster controller can manage more than one network, each from a different network interface on the cluster-
controller server. This may help you scale your cluster to larger numbers of nodes, or it may be a requirement of your
network architecture.
Enlisting nodes
Now that the MAAS controller is running, we need to make the nodes aware of MAAS and vice-versa. With MAAS
controlling DHCP and nodes capable of PXE booting, this is straightforward.
Automatic Discovery
With nodes set to boot from a PXE image, they will start, look for a DHCP server, receive the PXE boot details, boot the
image, contact the MAAS server and shut down.
During this process, the MAAS server will be passed information about the node, including the architecture, MAC address
and other details which will be stored in the database of nodes. You can accept and commission the nodes via the web
interface. When the nodes have been accepted, the selected series of Ubuntu will be installed.
You may also accept and commission all nodes from the command line. This requires that you first log in with the API key
(see Appendix I), then run the command:
maas-cli maas-profile nodes accept-all
Once commissioned, the node's status will be updated to "Ready". You can check the results of the commissioning scripts
by clicking on the node name and then clicking on the link below the heading "Commissioning output". The screen will
show a list of files and their result - you can further examine the output by clicking on the status of any of the files.
Manually adding nodes
If your nodes are not capable of booting from PXE images, they can be manually registered with MAAS. On the main web
interface screen, click on the "Add Node" button:
This will load a new page where you can manually enter details about the node, including its MAC address. This is used to
identify the node when it contacts the DHCP server.
Power management
MAAS supports several types of power management. To configure power management, you should click on an individual
node entry, then click on the "Edit" button. The power management type should be selected from the drop down list, and
the appropriate power management details added.
If you have a large number of nodes, it should be possible to script this process using the MAAS CLI. See Appendix I for
more details.
Without power management, MAAS will be unable to power on nodes when they are required.
Preparing MAAS for Juju and OpenStack using
Simplestreams
When Juju bootstraps a cloud, it needs two critical pieces of information:
1. The uuid of the image to use when starting new compute instances.
2. The URL from which to download the correct version of a tools tarball.
This necessary information is stored in a json metadata format called "simplestreams". For supported public cloud services
such as Amazon Web Services, HP Cloud, Azure, etc, no action is required by the end user. However, those setting up a
private cloud, or who want to change how things work (eg use a different Ubuntu image), can create their own metadata,
after understanding a bit about how it works.
The simplestreams format is used to describe related items in a structured fashion (see the Launchpad project
lp:simplestreams for more details on the implementation). Below we will discuss how Juju determines which metadata to use,
and how to create your own images and tools and have Juju use them instead of the defaults.
Basic Workflow
Whether images or tools, Juju uses a search path to try and find suitable metadata. The path components (in order of
lookup) are:
1. User supplied location (specified by tools-metadata-url or image-metadata-url config settings).
2. The environment's cloud storage.
3. Provider specific locations (eg keystone endpoint if on Openstack).
4. A web location with metadata for supported public clouds (https://streams.canonical.com).
Metadata may be inline signed, or unsigned. We indicate a metadata file is signed by using the '.sjson' extension. Each
location in the path is first searched for signed metadata, and if none is found, unsigned metadata is attempted before
moving onto the next path location.
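As a sketch, the signed-before-unsigned lookup described above can be expressed as follows. The helper name and the fetch callback are illustrative only, not part of Juju itself:

```python
# Sketch of the signed-before-unsigned lookup described above. The
# helper name and the fetch callback are illustrative, not part of Juju.
def find_metadata(locations, fetch):
    """Return the first index found, walking the search path in order.

    At each location, signed metadata (.sjson) is tried first; unsigned
    (.json) is attempted before moving on to the next location.
    """
    for base in locations:
        for name in ("index.sjson", "index.json"):
            data = fetch(f"{base}/streams/v1/{name}")
            if data is not None:
                return data
    return None

# Toy fetcher: only one unsigned index exists in this example world.
available = {"https://example/streams/v1/index.json": {"format": "index:1.0"}}
found = find_metadata(["file:///user-supplied", "https://example"], available.get)
print(found)  # {'format': 'index:1.0'}
```

The first location (the user-supplied URL) has nothing, so the lookup falls through to the second, where the unsigned index is found after the signed one is tried.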
Juju ships with public keys used to validate the integrity of image and tools metadata obtained from
https://streams.canonical.com. So out of the box, Juju will "Just Work" with any supported public cloud, using signed
metadata. Setting up metadata for a private (eg Openstack) cloud requires metadata to be generated using tools which
ship with Juju.
Image Metadata Contents
Image metadata uses a simplestreams content type of "image-ids". The product id is formed as follows:
com.ubuntu.cloud:server:<series_version>:<arch>
For example: com.ubuntu.cloud:server:14.04:amd64
Non-released images (eg beta, daily etc) have product ids like: com.ubuntu.cloud.daily:server:13.10:amd64
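The way these fields combine can be illustrated with a small helper; the function is ours, purely for illustration, and not part of any Juju or simplestreams tool:

```python
# Hypothetical helper showing how the product id fields combine; the
# function name is ours, not part of any Juju or simplestreams tool.
def image_product_id(series_version, arch, stream="released"):
    """Build a com.ubuntu.cloud image product id.

    Non-released streams (e.g. "daily") extend the namespace, matching
    the examples in the text.
    """
    ns = "com.ubuntu.cloud" if stream == "released" else f"com.ubuntu.cloud.{stream}"
    return f"{ns}:server:{series_version}:{arch}"

print(image_product_id("14.04", "amd64"))           # com.ubuntu.cloud:server:14.04:amd64
print(image_product_id("13.10", "amd64", "daily"))  # com.ubuntu.cloud.daily:server:13.10:amd64
```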
The metadata index and product files are required to be in the following directory tree (relative to the URL associated
with each path component):
|-streams
  |-v1
    |-index.(s)json
    |-product-foo.(s)json
    |-product-bar.(s)json
The index file must be called "index.(s)json" (sjson for signed). The various product files are named according to the Path
values contained in the index file.
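To make the index/product relationship concrete, here is a skeletal sketch in Python. The field names follow the simplestreams "index:1.0" and "products:1.0" formats, but the content id, path and image id shown are placeholders; see lp:simplestreams for the authoritative schema:

```python
import json

# Skeletal illustration of the index/product relationship described
# above. The content id, path and image id are placeholders -- see
# lp:simplestreams for the authoritative schema.
index = {
    "format": "index:1.0",
    "index": {
        "com.ubuntu.cloud:released:imagemetadata": {
            "format": "products:1.0",
            # The Path value names the product file on disk:
            "path": "streams/v1/product-foo.json",
            "products": ["com.ubuntu.cloud:server:14.04:amd64"],
        }
    },
}

product = {
    "format": "products:1.0",
    "content_id": "com.ubuntu.cloud:released:imagemetadata",
    "products": {
        "com.ubuntu.cloud:server:14.04:amd64": {
            "versions": {
                "20140118": {"items": {"region-one": {"id": "placeholder-image-id"}}}
            }
        }
    },
}

# A client reads the index, then follows "path" to the product file.
entry = index["index"]["com.ubuntu.cloud:released:imagemetadata"]
print(entry["path"])  # streams/v1/product-foo.json
print(json.dumps(sorted(product["products"])))
```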
Tools Metadata Contents
Tools metadata uses a simplestreams content type of "content-download". The product id is formed as follows:
"com.ubuntu.juju:<series_version>:<arch>"
For example:
"com.ubuntu.juju:12.04:amd64"
The metadata index and product files are required to be in the following directory tree (relative to the URL associated
with each path component). In addition, tools tarballs which Juju needs to download are also expected.
|-streams
|  |-v1
|     |-index.(s)json
|     |-product-foo.(s)json
|     |-product-bar.(s)json
|-releases
   |-tools-abc.tar.gz
   |-tools-def.tar.gz
   |-tools-xyz.tar.gz
The index file must be called "index.(s)json" (sjson for signed). The product file and tools tarball name(s) match whatever
is in the index/product files.
Configuration
For supported public clouds, no extra configuration is required; things work out-of-the-box. However, for testing
purposes, or for non-supported cloud deployments, Juju needs to know where to find the tools and which image to run.
Even for supported public clouds where all required metadata is available, the user can put their own metadata in the
search path to override what is provided by the cloud.
User specified URLs
These are initially specified in the environments.yaml file (and then subsequently copied to the jenv file when the
environment is bootstrapped). For images, use "image-metadata-url"; for tools, use "tools-metadata-url". The URLs can
point to a world readable container/bucket in the cloud, an address served by a http server, or even a shared directory
which is accessible by all node instances running in the cloud.
Assume an Apache http server with base URL https://juju-metadata, providing access to information at <base>/images
and <base>/tools. The Juju environment yaml file could have the following entries (one or both):
tools-metadata-url: https://juju-metadata/tools
image-metadata-url: https://juju-metadata/images
The required files in each location are as per the directory layout described earlier. For a shared directory, use a URL of the
form "file:///sharedpath".
Cloud storage
If no matching metadata is found at the user-specified URLs, the environment's cloud storage is searched. No user
configuration is required here; all Juju environments are set up with cloud storage, which is used to store state
information, charms etc. Cloud storage setup is provider dependent: for Amazon and Openstack clouds, the storage is
defined by the "control-bucket" value; for Azure, the "storage-account-name" value is relevant.
The (optional) directory structure inside the cloud storage is as follows:
|-tools
|  |-streams
|  |  |-v1
|  |-releases
|-images
   |-streams
      |-v1
Of course, if only custom image metadata is required, the tools directory will not be required, and vice versa.
Note that if juju bootstrap is run with the --upload-tools option, the tools and metadata are placed according to the
above structure, which is why the tools are then available for Juju to use.
Provider specific storage
Providers may allow additional locations to search for metadata and tools. For OpenStack, Keystone endpoints may be
created by the cloud administrator. These are defined as follows:
juju-tools: the <path_url> value as described above in Tools Metadata Contents
product-streams: the <path_url> value as described above in Image Metadata Contents
Other providers may similarly be able to specify locations, though the implementation will vary.
Central web location (https://streams.canonical.com)
This is the default location used to search for image and tools metadata and is used if no matches are found earlier in any
of the above locations. No user configuration is required.
Deploying private clouds
There are two main issues when deploying a private cloud:
1. Image ids will be specific to the cloud.
2. Often, outside internet access is blocked
Issue 1 means that image id metadata needs to be generated and made available.
Issue 2 means that tools need to be mirrored locally to make them accessible.
Juju tools exist to help with generating and validating image and tools metadata. For tools, it is often easiest to just
mirror https://streams.canonical.com/tools. However, image metadata cannot be simply mirrored, because the image ids
are taken from the cloud storage provider, so this needs to be generated and validated using the commands described
below.
The available Juju metadata tools can be seen by using the help command:
juju help metadata
The overall workflow is:
Generate image metadata
Copy image metadata to somewhere in the metadata search path
Optionally, mirror tools to somewhere in the metadata search path
Optionally, configure tools-metadata-url and/or image-metadata-url
Image metadata
Generate image metadata using
juju metadata generate-image -d <metadata_dir> -i <image_id>
As a minimum, the above command needs to know the image id to use and a directory in which to write the files.
Other required parameters like region, series, architecture etc. are taken from the current Juju environment (or an
environment specified with the -e option). These parameters can also be overridden on the command line.
The image metadata command can be run multiple times with different regions, series, architecture, and it will keep
adding to the metadata files. Once all required image ids have been added, the index and product json files can be
uploaded to a location in the Juju metadata search path. As per the Configuration section, this may be somewhere
specified by the image-metadata-url setting or the cloud's storage etc.
Examples:
1. image-metadata-url
upload the contents of <metadata_dir> to http://somelocation
set image-metadata-url to http://somelocation/images
2. Cloud storage
upload the contents of <metadata_dir> directly to the environment's cloud storage
To check the metadata, a validation command, juju metadata validate-images, is provided. If run without parameters, the validation command will take all required details from the current Juju environment (or as
specified by -e) and output the image id it would use to spin up an instance. Alternatively, series, region, architecture etc.
can be specified on the command line to override the values in the environment config.
Tools metadata
Generally, tools and related metadata are mirrored from https://streams.canonical.com/tools. However, it is possible to
manually generate metadata for a custom built tools tarball.
First, create a tarball of the relevant tools and place in a directory structured like this:
<tools_dir>/tools/releases/
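The expected layout can be sketched as follows; the tarball name is an illustrative placeholder following the usual juju-<version>-<series>-<arch>.tgz naming convention:

```python
import pathlib
import tempfile

# Sketch of the layout expected before generating tools metadata. The
# tarball name is an illustrative placeholder following the usual
# juju-<version>-<series>-<arch>.tgz naming.
tools_dir = pathlib.Path(tempfile.mkdtemp())
releases = tools_dir / "tools" / "releases"
releases.mkdir(parents=True)
(releases / "juju-1.18.0-trusty-amd64.tgz").touch()

print(sorted(p.name for p in releases.iterdir()))  # ['juju-1.18.0-trusty-amd64.tgz']
```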
Now generate relevant metadata for the tools by running the command:
juju metadata generate-tools -d <tools_dir>
Finally, the contents of <tools_dir> can be uploaded to a location in the Juju metadata search path. As per the Configuration section,
this may be somewhere specified by the tools-metadata-url setting or the cloud's storage path settings etc.
Examples:
1. tools-metadata-url
upload the contents of <tools_dir> to http://somelocation
set tools-metadata-url to http://somelocation/tools
2. Cloud storage
upload the contents of <tools_dir> directly to the environment's cloud storage
As with image metadata, the validation command is used to ensure tools are available for Juju to use:
juju metadata validate-tools
The same comments apply. Run the validation tool without parameters to use details from the Juju environment, or
override values as required on the command line. See juju help metadata validate-tools for more details.
Continue with your deployment by installing Juju >
Appendix I - Using the MAAS CLI
As well as the web interface, many tasks can be performed by accessing the MAAS API directly through the maas-cli
command. This section details how to login with this tool and perform some common operations.
Logging in
Before the API will accept any commands from maas-cli, you must first login. To do this, you need the API key which can
be found in the user interface.
Login to the web interface on your MAAS. Click on the username in the top right corner and select ‘Preferences’ from the
menu which appears.
The very first item is a list of MAAS keys. One will have already been generated when the system was
installed. It’s easiest to just select all the text, copy the key (it’s quite long!) and then paste it into the commandline. The
format of the login command is:
maas-cli login <profile-name> <hostname> <key>
The profile created is an easy way of associating your credentials with any subsequent call to the API. So an example login
might look like this:
maas-cli login maas http://10.98.0.13/MAAS/api/1.0
AWSCRMzqMNy:jjk...5e1FenoP82Qm5te2
which creates the profile ‘maas’ and registers it with the given key at the specified API endpoint. If you omit the
credentials, they will be prompted for in the console. It is also possible to use a hyphen, ‘-’, in place of the credentials. In
this case a single line will be read from stdin, stripped of any whitespace and used as the credentials, which can be useful
if you are developing scripts for specific tasks. If an empty string is passed instead of the credentials, the profile will be
logged in anonymously (and consequently some of the API calls will not be available).
maas-cli commands
The maas-cli command exposes the whole API, so you can do anything you actually can do with MAAS using this command.
This leaves us with a vast number of options, which are more fully expressed in the complete [MAAS Documentation].
list: lists the details [name url auth-key] of all the currently logged-in profiles.
login: logs in to the MAAS controller API at the given URL, using the key provided, and associates this connection with the
given profile name.
logout: logs out from the given profile, flushing the stored credentials.
refresh: refreshes the API descriptions of all the currently logged-in profiles. This may become necessary, for example, when
upgrading the maas packages, to ensure the command-line options match the API.
Useful examples
Display the current status of nodes in the commissioning phase:
maas-cli maas nodes check-commissioning
Accept and commission all discovered nodes:
maas-cli maas nodes accept-all
List all known nodes:
maas-cli maas nodes list
Filter the list using specific key/value pairs:
maas-cli maas nodes list architecture="i386/generic"
Set the power parameters for an ipmi enabled node:
maas-cli maas node update <system_id> \
power_type="ipmi" \
power_parameters_power_address=192.168.22.33 \
power_parameters_power_user=root \
power_parameters_power_pass=ubuntu
Appendix II - Using Tags
MAAS implements a system of tags based on the physical properties of the nodes. The idea behind this is that you can use
the tags to identify nodes with particular abilities which may be useful when it comes to deploying services.
A real world example of this might be to identify nodes which have fast GPUs installed, if you were planning on deploying
software which used CUDA or OpenCL which would make use of this hardware.
Tag definitions
Before we can create a tag we need to know how we will select which nodes it gets applied to. MAAS collects hardware
information from the nodes using the “lshw” utility to return detailed information in XML format. The definitions used in
creating a tag are then constructed using XPath expressions. If you are unfamiliar with XPath expressions, it is well worth
checking out the w3schools documentation. For the lshw XML, we will just check all the available nodes for some
properties. In our example case, we might want to find GPUs with a clock speed of over 1GHz. In this case, the relevant
XML node from the output will be labelled “display” and does have a property called clock, so it will look like this:
//node[@id="display"]/clock > 1000000000
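You can experiment with such expressions outside MAAS. The sketch below evaluates the same check against a simplified stand-in for lshw output; note that MAAS evaluates the full XPath itself, whereas here Python's ElementTree only locates the element and the numeric comparison is applied in Python:

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for lshw output; real lshw XML is far larger.
lshw_xml = """
<list>
  <node id="core">
    <node id="display">
      <clock units="Hz">1300000000</clock>
    </node>
  </node>
</list>
"""

root = ET.fromstring(lshw_xml)
# Locate the element the tag definition targets...
clocks = root.findall(".//node[@id='display']/clock")
# ...then apply the "> 1 GHz" comparison the expression encodes.
matches = any(int(c.text) > 1_000_000_000 for c in clocks)
print(matches)  # True for this sample node
```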
Now we have a definition, we can go ahead and create a tag.
Creating a tag
Once we have sorted out what definition we will be using, creating the tag is easy using the maas command. You will need
to be logged in to the API first:
maas maas tags new name='gpu' \
comment='GPU with clock speed >1GHz for running CUDA type operations.' \
definition='//node[@id="display"]/clock > 1000000000'
The comment is really for your benefit. It pays to keep the actual tag name short and to the point as you will be using it
frequently in commands, but it may subsequently be hard to work out what exactly was the difference between tags like
“gpu” and “fastgpu” unless you have a good comment. Something which explains the definition in plain language is always
a good idea!
To check which nodes this tag applies to we can use the tag command:
maas maas tag nodes gpu
The process of updating the tags does take some time, especially with a large number of nodes.
Using the tag
You can use the tag in the web interface to discover applicable nodes, but the real significance of it is when using Juju to
deploy services. Tags can be used with Juju constraints to make sure that a particular service only gets deployed on
hardware with the tag you have created.
Example: To use the ‘gpu’ tag we created to run a service called ‘cuda’ we would use:
juju deploy --constraints tags=gpu cuda
You could list several tags if required, and mix in other juju constraints if needed:
juju deploy --constraints "mem=1024 tags=gpu,intel" cuda
Manually assigning tags
MAAS supports the creation of arbitrary tags which don’t depend on XPath definitions (“nodes which make a lot of noise”
perhaps). If a tag is created without specifying the definition parameter then it will simply be ignored by the tag refresh
mechanism, but the MAAS administrator will be able to manually add and remove the tag from specific nodes.
In this example we are assuming you are using the ‘maas’ profile and you want to create a tag called ‘my_tag’:
maas maas tags new name='my_tag' comment='nodes which go ping'
maas maas tag update-nodes my_tag add="<system_id>"
The first line creates a new tag but omits the definition, so no nodes are automatically added to it. The second line applies
that tag to a specific node referenced by its system id property.
You can easily remove a tag from a particular node, or indeed add and remove them at the same time:
maas maas tag update-nodes my_tag add=<system_id_1> \
add=<system_id_2> add=<system_id_3> remove=<system_id_4>
As the rule is that tags without a definition are ignored when rebuilds are done, it is also possible to create a normal tag
with a definition, and then subsequently edit it to remove the definition. From this point the tag behaves as if you had
manually created it, but it still retains all the existing associations it has with nodes. This is particularly useful if you have
some hardware which is conceptually similar but doesn’t easily fit within a single tag definition:
maas maas tags new name='my_tag' comment='nodes I like' \
definition='contains(//node[@id="network"]/vendor, "Intel")'
maas maas tag update my_tag definition=''
maas maas tag update-nodes my_tag add=<system_id>
Appendix III - Physical Zones
To help you maximise fault-tolerance and performance of the services you deploy, MAAS administrators can define
physical zones (or just zones for short), and assign nodes to them. When a user requests a node, they can ask for one that is
in a specific zone, or one that is not in a specific zone.
It's up to you as an administrator to decide what a physical zone should represent: it could be a server rack, a room, a data
centre, machines attached to the same UPS, or a portion of your network. Zones are most useful when they represent
portions of your infrastructure. But you could also use them simply to keep track of where your systems are located.
Each node is in one and only one physical zone. Each MAAS instance ships with a default zone to which nodes are attached
by default. If you do not need this feature, you can simply pretend it does not exist.
Applications
Since you run your own MAAS, its physical zones give you more flexibility than those of a third-party hosted cloud service.
That means that you get to design your zones and define what they mean. Below are some examples of how physical
zones can help you get the most out of your MAAS.
Creating a Zone
Only administrators can create and manage zones. To create a physical zone in the web user interface, log in as an
administrator and browse to the "Zones" section in the top bar. This will take you to the zones listing page. At the
bottom of the page is a button for creating a new zone:
Assigning Nodes to a Zone
Once you have created one or more physical zones, you can set nodes' zones from the nodes listing page in the UI. Select
the nodes for which you wish to set a zone, and choose "Set physical zone" from the "Bulk action" dropdown list near the
top. A second dropdown list will appear, to let you select which zone you wish to set. Leave it blank to clear nodes'
physical zones. Clicking "Go" will apply the change to the selected nodes.
You can also set an individual node's zone on its "Edit node" page. Both ways are available in the API as well: edit an
individual node through a request to the node's URI, or set the zone on multiple nodes at once by calling the operation on
the endpoint.
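As a sketch of the API route, assuming the MAAS 1.x CLI with a logged-in profile named 'maas' (zone and node identifiers here are illustrative):

```shell
# Create a physical zone
maas maas zones create name=rack-a description="Rack A, server room 1"
# Assign a single node to it by editing the node itself
maas maas node update <system_id> zone=rack-a
```

Run maas maas zones --help on your installation to confirm the exact operation names for your MAAS release.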
Installing Juju for Ubuntu Cloud
Introduction
Juju is a powerful tool for managing scale-out architectures in the cloud. It bootstraps an instance in your cloud from
which it can deploy, relate, manage and scale services in all directions. Running from the command line or an intuitive GUI,
it delivers on its promise to orchestrate services rather than simply deploy them.
In modern scale out architectures, servers are just units that enable application services to scale. Services are managed
independently of the underlying hardware so you don’t need to worry about launching new instances and setting up
config files to connect applications, Juju just takes care of it all.
Other solutions focus on configuration management to enforce consistency across scale out architectures but Juju creates
services as building blocks that are connected together simply by drawing a line between the two. It is this service based
approach that allows DevOps and architects to quickly be able to visualise, design, deploy and scale their application
infrastructures far more easily than if they were stuck in the weeds with configuration management tools. For more
background on Juju, please see the main Juju website.
Scope of this documentation
The Juju client can run on a variety of architectures, and can be easily configured to work with different cloud providers -
or no cloud provider at all, creating all instances on a single machine. The purpose of this document is to enable users to
install and configure Juju to work as part of an Ubuntu Cloud OpenStack deployment.
Assumptions
To provide clarity in the instructions for the specific purposes outlined above, this document makes the following
assumptions.
You are installing Juju in conjunction with a MAAS deployment.
The Juju client will be running on, or can communicate with the MAAS controller node.
The Juju client will be installed on the Ubuntu 14.04 LTS release.
Installing Juju
To install Juju, use the standard package installer:
sudo apt-get update
sudo apt-get install juju-core
There are some additional tools which will be useful for testing your configuration and working with charms which should
also be installed:
sudo apt-get install juju-quickstart juju-deployer charm-tools
Configuring Juju to work with MAAS
Now the Juju software is installed, it needs to be configured to work with MAAS. This is done as follows.
1. Generate an SSH key
MAAS and Juju both use SSH to authenticate access to running nodes. If you do not have an SSH key, you should generate
one with the following command:
ssh-keygen -t rsa -b 2048
This key will be associated with the user account you are running in at the time, so should be the same account you intend
to run Juju from.
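If you prefer to register the public key with MAAS from the command line rather than the web UI, a sketch (assuming a logged-in MAAS CLI profile named 'maas'):

```shell
# Register the newly generated public key with your MAAS user account
maas maas sshkeys new key="$(cat ~/.ssh/id_rsa.pub)"
# Verify it was stored
maas maas sshkeys read
```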
2. Obtain the MAAS API/OAuth key
You’ll need an API key from MAAS so that the Juju client can access it. Each user account in MAAS can have as many API
keys as desired but you should use a different user account for each Juju environment you want to use within MAAS.
In a web browser, navigate to the MAAS home page (this will be at http://{address}:80/MAAS/ where {address} is the IP
address or hostname of the MAAS controller node). Go to your MAAS preferences page (click your username at the top-
right of the page).
MAAS will have already generated a key to use (though you may add others for more environments if you wish) displayed
as indicated in the image below.
Make sure you copy the entire key. It is quite long and doesn't fully fit in the text box of the web interface. Paste this key
somewhere for reference - you will need it for the next step.
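If you have shell access to the MAAS controller, the key can also be printed on the command line; the exact command name varies between MAAS releases, so treat this as a sketch:

```shell
# Print the API key for a given MAAS user (MAAS 1.x packaging)
sudo maas-region-admin apikey --username=root
```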
3. Edit the Juju environments.yaml file
This is done by generating and editing a file, environments.yaml, which will live in your ~/.juju/ directory. You can
generate the environments file manually, but Juju also includes a boilerplate configuration option that will flesh out
most of the file for you and minimise the amount of work (and potential errors).
To generate an initial config file, you simply need to run:
juju generate-config
This command will cause a template file to be written to your ~/.juju directory if an environments.yaml file does not
already exist. It will also create the ~/.juju directory if that does not exist.
If you have previously installed Juju on this machine and the environments.yaml file already exists, you should simply edit
it for your new MAAS configuration.
This file will contain sample profiles for different types of cloud services, but you will only need to edit the section
labelled maas, which should look something like this:
maas:
    type: maas
    # Change this to where your MAAS server lives. It must specify the base path.
    maas-server: 'http://192.168.1.1/MAAS/'
    maas-oauth: '<add your OAuth credentials from MAAS here>'
    # default-series: precise
    authorized-keys-path: ~/.ssh/authorized_keys # or any file you want.
    # Or:
    # authorized-keys: ssh-rsa keymaterialhere
The only required information here is the type, maas-server, maas-oauth and one or the other of the ways of specifying your
SSH keys. The default-series defaults to 'trusty', and MAAS will add your SSH keys automatically. So your config should be
edited to include the correct address for the MAAS master node and the key which you collected in the previous step:
maas:
    type: maas
    maas-server: 'http://172.16.100.1/MAAS/'
    maas-oauth: '--MAAS API key string--'
    authorized-keys-path: ~/.ssh/authorized_keys
You may optionally edit the line near the top of the file which reads:
default: amazon
to read:
default: maas
...and skip step 4 below.
NOTE: If you wish to configure your Juju client to additionally work with other cloud environments, please see the
documentation on the main Juju website.
4. Select the MAAS environment
To select the MAAS environment from your configuration file, run the following Juju command:
juju switch maas
Any further Juju commands will now apply to that specific environment. The switch setting is persistent, so it will remain
applicable up until you issue another switch command. You can check which environment is selected by running the
command without additional arguments:
juju switch
Environment testing
Once Juju has been installed and configured to use MAAS, it is useful to test that it is operating properly. The easiest way
to do this is to run the Juju quickstart command, which will create a Juju bootstrap node and then deploy the juju-gui
charm on it.
juju quickstart
The juju-gui web interface will then be automatically loaded in the default web browser. This process may take several
minutes to complete.
Your browser may come up with a security warning such as this:
This is because the web-server doesn't have security credentials (it was just created after all). You should accept any
options which allow you to proceed and view the site. Juju will automatically log you in to the juju-gui interface, and you
should see a screen like this:
Once you are satisfied that Juju is working correctly, you can destroy this running environment:
juju destroy-environment maas
You will need to confirm this action by typing 'y' at the prompt.
This will remove the bootstrap and juju-gui instance and return the used node back to the pool in MAAS.
Installing OpenStack
Introduction
OpenStack is a versatile, open source cloud environment equally suited to serving up public, private or hybrid clouds.
Canonical is a Platinum Member of the OpenStack foundation and has been involved with the OpenStack project since its
inception; the software covered in this document has been developed with the intention of providing a streamlined way
to deploy and manage OpenStack installations.
Scope of this documentation
The OpenStack platform is powerful and its uses diverse. This section of documentation is primarily concerned with
deploying a 'standard' running OpenStack system using, but not limited to, Canonical components such as MAAS, Juju and
Ubuntu. Where appropriate other methods and software will be mentioned.
Assumptions
1. Use of MAAS: This document provides instructions on how to deploy OpenStack using MAAS for hardware provisioning. If
you are not deploying directly on hardware, this method will still work, with a few alterations, assuming you have a
properly configured Juju environment. The main difference will be that you will have to provide different configuration
options depending on the network configuration.
2. Use of Juju: This document assumes an up to date, stable release version of Juju.
3. Local network configuration: This document assumes that you have an adequate local network configuration, including
separate interfaces for access to the OpenStack cloud. Ideal networks are laid out in the MAAS documentation for
OpenStack.
Planning an installation
Before deploying any services, it is very useful to take stock of the resources available and how they are to be used.
OpenStack comprises a number of interrelated services (Nova, Swift, etc.) which each have differing demands in terms
of hosts. For example, the Swift service, which provides object storage, has different requirements from the Nova service,
which provides compute resources.
The minimum requirements for each service and recommendations are laid out in the official OpenStack Operations Guide
which is available (free) in HTML or various downloadable formats. The hardware requirements document specifies the
minimum requirements.
The recommended composition of nodes for deploying OpenStack with MAAS and Juju is that all nodes in the system
should be capable of running ANY of the services. This is best practice for the robustness of the system: if any
physical node fails, another can be repurposed to take its place. This obviously extends to any hardware
requirements such as extra network interfaces.
If for reasons of economy or otherwise you choose to use different configurations of hardware, you should note that your
ability to overcome hardware failure will be reduced. It will also be necessary to target deployments to specific nodes -
see the section in the MAAS documentation on tags.
Create the OpenStack configuration file
We will be using Juju charms to deploy the component parts of OpenStack. Each charm encapsulates everything required
to set up a particular service. However, the individual services have many configuration options, some of which we will
want to change.
To make this task easier and more reproducible, we will create a separate configuration file with the relevant options
for all the services. This is written in a standard YAML format (see www.yaml.org if this is unfamiliar to you).
The openstack-config.yaml file we will be using is reproduced below:
keystone:
  admin-password: openstack
  debug: 'true'
  log-level: DEBUG
nova-cloud-controller:
  network-manager: 'Neutron'
  quantum-security-groups: 'yes'
  neutron-external-network: Public_Network
nova-compute:
  enable-live-migration: 'True'
  migration-auth-type: "none"
  virt-type: kvm
  #virt-type: lxc
  enable-resize: 'True'
quantum-gateway:
  ext-port: 'eth1'
  plugin: ovs
glance:
  ceph-osd-replication-count: 3
cinder:
  block-device: None
  ceph-osd-replication-count: 3
  overwrite: "true"
  glance-api-version: 2
ceph:
  fsid: a51ce9ea-35cd-4639-9b5e-668625d3c1d8
  monitor-secret: AQCk5+dR6NRDMRAAKUd3B8SdAD7jLJ5nbzxXXA==
  osd-devices: /dev/sdb
  osd-reformat: 'True'
For all services, we can configure the openstack-origin to point to an install source. In this case, we will rely on the
default, which will point to the relevant sources for the Ubuntu 14.04 LTS Trusty release. Further configuration for each
service is explained below:
keystone
admin-password:
You should set a memorable but secure password here to be able to access OpenStack when it is deployed.
debug:
It is useful to set this to 'true' initially, to monitor the setup. This will produce more verbose messaging.
log-level:
Similarly, setting the log-level to DEBUG means that more verbose logs can be generated. These options can be
changed once the system is set up and running normally.
nova-cloud-controller
network-manager:
'Neutron' - Other options are now deprecated.
quantum-security-groups:
'yes'
neutron-external-network:
'Public_Network' - This is an interface we will use for allowing access to the cloud, and will be defined later.
nova-compute
enable-live-migration:
We have set this to 'True' to enable the OpenStack live migration feature. Note that in order for
this to work, passwordless SSH connections between compute nodes are enabled for the root user.
migration-auth-type:
'none'
virt-type:
This option is set to 'kvm', although it is also possible to use LXC containers.
enable-resize:
'True' This feature works by enabling passwordless SSH connections between nodes for the nova user, which may
have security implications.
quantum-gateway
ext-port:
This is where we specify the hardware for the public network. Use 'eth1' or the relevant network interface. It is also
possible to specify this port by the hardware MAC address, or a list of addresses to facilitate later scale-out.
plugin:
ovs
glance
ceph-osd-replication-count:
'3' - at least 3 nodes are required
cinder
block-device:
None
ceph-osd-replication-count:
3 - at least 3 are required
overwrite:
'true'
glance-api-version:
2
ceph
fsid:
The fsid is simply a unique identifier. You can generate a suitable value by running uuidgen which should return a
value which looks like: a51ce9ea-35cd-4639-9b5e-668625d3c1d8
monitor-secret:
The monitor secret is a secret string used to authenticate access. There is advice on how to generate a suitable
secure secret at the ceph website. A typical value would be AQCk5+dR6NRDMRAAKUd3B8SdAD7jLJ5nbzxXXA==
osd-devices:
This should point (in order of preference) to a device, partition or filename. In this case we will assume secondary
device-level storage located at /dev/sdb.
osd-reformat:
We will set this to 'True', allowing ceph to reformat the drive on provisioning.
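Both secrets above can be generated locally. uuidgen yields a suitable fsid, and the snippet below (a sketch following the key format the ceph charm documentation suggests; the exact generation method is not mandated) yields a monitor secret of the right shape:

```shell
# A fresh fsid for the cluster
uuidgen
# A base64-encoded monitor secret: key-type header, timestamp, and 16 random bytes
python3 -c "
import os, struct, time, base64
key = os.urandom(16)
header = struct.pack('<hiih', 1, int(time.time()), 0, len(key))
print(base64.b64encode(header + key).decode())
"
```

The resulting string has the same 40-character base64 form as the example value shown above.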
Other configurations
Other settings and configuration options are possible for deployment of the OpenStack services. These are detailed in
the documentation of the individual charms used by Juju, and can be inspected by visiting the online Juju Charm Store and
searching for the charm using the search box in the top left-hand side of the page. The configuration settings are then
detailed under "Configuration" on the main page, as shown:
Deploying OpenStack with Juju
Now that the configuration is defined, we can use Juju to deploy and relate the services.
Initialising Juju
Juju requires a minimal amount of setup. Here we assume it has already been configured to work with your MAAS cluster
(see the Juju Install Guide for more information on this).
Firstly, we need to fetch images and tools that Juju will use:
juju sync-tools --debug
Then we can create the bootstrap instance:
juju bootstrap --upload-tools --debug
We use the upload-tools switch to use the local versions of the tools which we just fetched. The debug switch will give
verbose output which can be useful. This process may take a few minutes, as Juju is creating an instance and installing the
tools. When it has finished, you can check the status of the system with the command:
juju status
This should return something like:
environment: maas
machines:
  "0":
    agent-state: started
    agent-version: 1.18.1.1
    dns-name: localhost
    instance-id: localhost
    series: trusty
Deploy the OpenStack Charms
Now that the Juju bootstrap node is up and running we can deploy the services required to make our OpenStack
installation. To configure these services properly as they are deployed, we will make use of the configuration file we
defined earlier, by passing it along with the --config switch with each deploy command. Substitute in the name and path
of your config file if different.
It is useful but not essential to deploy the services in the order below. It is also highly recommended to open an
additional terminal window and run the command juju debug-log. This will output the logs of all the services as they run,
and can be useful for troubleshooting.
It is also recommended to run a juju status command periodically, to check that each service has been installed and is
running properly. Juju will automatically try to fetch the best possible version of the charm from the online Charm Store. If
you are installing from within a restricted or closed network, it is possible to pre-fetch the required charms. See the
documentation for offline charms.
juju deploy --to=0 juju-gui
juju deploy rabbitmq-server
juju deploy mysql
juju deploy --config openstack-config.yaml openstack-dashboard
juju deploy --config openstack-config.yaml keystone
juju deploy --config openstack-config.yaml ceph -n 3
juju deploy --config openstack-config.yaml nova-compute -n 3
juju deploy --config openstack-config.yaml quantum-gateway
juju deploy --config openstack-config.yaml cinder
juju deploy --config openstack-config.yaml nova-cloud-controller
juju deploy --config openstack-config.yaml glance
juju deploy --config openstack-config.yaml ceph-radosgw
Add relations between the OpenStack services
Although the services are now deployed, they are not yet connected together. Each service currently exists in isolation.
We use the juju add-relation command to make them aware of each other and set up any relevant connections and
protocols. This extra configuration is taken care of by the individual charms themselves.
We should start adding relations between charms by setting up the Keystone authorization service and its database, as
this will be needed by many of the other connections:
juju add-relation keystone mysql
We wait until the relation is set. After it finishes, check it with juju status:
juju status mysql
juju status keystone
It can take a few moments for this service to settle. Although it is certainly possible to continue adding relations (Juju
manages a queue for pending actions) it can be counterproductive in terms of the overall time taken, as many of the
relations refer to the same services. The following relations also need to be made:
juju add-relation nova-cloud-controller mysql
juju add-relation nova-cloud-controller rabbitmq-server
juju add-relation nova-cloud-controller glance
juju add-relation nova-cloud-controller keystone
juju add-relation nova-compute mysql
juju add-relation nova-compute rabbitmq-server
juju add-relation nova-compute glance
juju add-relation nova-compute nova-cloud-controller
juju add-relation glance mysql
juju add-relation glance keystone
juju add-relation cinder keystone
juju add-relation cinder mysql
juju add-relation cinder rabbitmq-server
juju add-relation cinder nova-cloud-controller
juju add-relation openstack-dashboard keystone
juju add-relation swift-proxy swift-storage
juju add-relation swift-proxy keystone
Finally, the output of juju status should show all the relations as complete. The OpenStack cloud is now running, but it
needs to be populated with some additional components before it is ready for use.
Preparing OpenStack for use
Configuring access to Openstack
The configuration and authentication data for OpenStack can be fetched by reading the configuration file generated by
the Keystone service. You can also copy this information by logging in to the Horizon (OpenStack Dashboard) service and
examining the configuration there. However, we actually need only a few bits of information. The following bash script
can be run to extract the relevant information:
#!/bin/bash
set -e
KEYSTONE_IP=`juju status keystone/0 | grep public-address | awk '{ print $2 }' | xargs host | grep -v alias | awk '{ print $4 }'`
KEYSTONE_ADMIN_TOKEN=`juju ssh keystone/0 "sudo cat /etc/keystone/keystone.conf | grep admin_token" | sed -e '/^M/d' -e 's/.$//' | awk '{ print $3 }'`
echo "Keystone IP: [${KEYSTONE_IP}]"
echo "Keystone Admin Token: [${KEYSTONE_ADMIN_TOKEN}]"
cat << EOF > ./nova.rc
export SERVICE_ENDPOINT=http://${KEYSTONE_IP}:35357/v2.0/
export SERVICE_TOKEN=${KEYSTONE_ADMIN_TOKEN}
export OS_AUTH_URL=http://${KEYSTONE_IP}:35357/v2.0/
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_TENANT_NAME=admin
EOF
juju scp ./nova.rc nova-cloud-controller/0:~
This script extracts the required information and then copies the file to the instance running the nova-cloud-controller.
Before running any nova or glance commands, we load the file we just created:
$ source ./nova.rc
$ nova endpoints
At this point the output of nova endpoints should show the information of all the available OpenStack endpoints.
Install the Ubuntu Cloud Image
In order for OpenStack to create instances in its cloud, it needs to have access to relevant images:
mkdir ~/iso
cd ~/iso
wget http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
Import the Ubuntu Cloud Image into Glance
NOTE: the glance client is provided by the package glance-client, which may need to be installed on the machine you plan
to run the command from:
apt-get install glance-client
glance add name="Trusty x86_64" is_public=true container_format=ovf disk_format=qcow2 < trusty-server-cloudimg-amd64-disk1.img
Create OpenStack private network
Previously we configured components of OpenStack to use a private network. We must now define that network. The
nova-manage command may be run from the nova-cloud-controller node or any of the nova-compute nodes. To access the
node we run the following command:
juju ssh nova-cloud-controller/0
sudo nova-manage network create --label=private --fixed_range_v4=1.1.21.32/27 --num_networks=1 --network_size=32 --multi_host=T --bridge_interface=eth0 --bridge=br100
To make sure that we have created the network we can now run the following command:
sudo nova-manage network list
Create OpenStack public network
sudo nova-manage floating create --ip_range=1.1.21.64/26
sudo nova-manage floating list
Allow ping and SSH access by adding rules to the default security group. Note: the following commands are run from a
machine where the package python-novaclient is installed, within a session where the nova.rc file created above has been
loaded.
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
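To confirm the rules took effect, the group can be listed (a sketch; output columns vary between python-novaclient versions):

```shell
# List the rules now present on the default security group
nova secgroup-list-rules default
```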
Create and register the ssh keys in OpenStack
Generate a default keypair
ssh-keygen -t rsa -f ~/.ssh/admin-key
Copy the public key into Nova
We will name it admin-key. Note: in the Precise version of python-novaclient the command takes --pub_key instead
of --pub-key.
nova keypair-add --pub-key ~/.ssh/admin-key.pub admin-key
And make sure it’s been successfully created:
nova keypair-list
Create a test instance
We created an image with glance before. Now we need the image ID to start our first instance. The ID can be found with
this command:
nova image-list
Note: we can also use the command glance image-list
Boot the instance:
nova boot --flavor=m1.small --image=<image_id_from_glance_index> --key-name admin-key test-server1
Add a floating IP to the new instance
First we allocate a floating IP from the ones we created above:
nova floating-ip-create
Then we associate the floating IP obtained above to the new instance:
nova add-floating-ip 9363f677-2a80-447b-a606-a5bd4970b8e6 1.1.21.65
Create and attach a Cinder volume to the instance
Note: all these steps can also be done through the Horizon web UI.
We make sure that cinder works by creating a 1 GB volume and attaching it to the VM:
cinder create --display_name test-cinder1 1
Get the ID of the volume with cinder list:
cinder list
Attach it to the VM as vdb:
nova volume-attach test-server1 bbb5c5c2-a5fd-4fe1-89c2-d16fe91578d4 /dev/vdb
Now we should be able to SSH into the VM test-server1 from a server holding the private key we created above, and see
that vdb appears in /proc/partitions.
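Putting the last steps together, a quick check might look like this (the floating IP is the one associated above; 'ubuntu' is the default user in Ubuntu cloud images):

```shell
# Log in to the instance with the keypair registered earlier
ssh -i ~/.ssh/admin-key ubuntu@1.1.21.65
# ...then, on the instance, confirm the attached volume is visible
grep vdb /proc/partitions
```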
APPENDIX I - Economic Deployment
Although the above installation has indicated the ideal case for installing OpenStack, there are often reasons for making
more economic use of hardware at the expense of robustness and failover capabilities. MAAS and Juju have several
capabilities to make this easier.
Deploying to specific nodes
It may be the case that your hardware is not identical. Some nodes may have more compute power, some may have the
additional NICs to run services which require them and so on.
In this case it is possible to target services to be deployed on a certain type of node. This requires that the nodes be
tagged in MAAS beforehand. The services can then be deployed specifically to that type of node using a Juju feature
called 'constraints'. For example, if you had a number of nodes which were tagged 'compute', you could use constraints to
tell Juju to only deploy a service to a node with that tag:
juju deploy --constraints "tags=compute" nova-compute -n 2
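The 'compute' tag in this example would have been created in MAAS beforehand, along the lines of the following sketch (tag name, comment and node IDs are illustrative):

```shell
# Create the tag and attach it to the nodes with the extra capacity
maas maas tag new name=compute comment='nodes with extra CPU and RAM'
maas maas tag update-nodes compute add=<system_id_1> add=<system_id_2>
```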
Deploying multiple services to a single node
Juju can deploy multiple services to the same physical node by use of the '--to' option. This can be useful for combining
many services with light requirements onto the same physical node. This only requires that you know the machine number
of the node you wish to deploy to (as returned by the juju status command). For example, suppose we deploy the
nova-cloud-controller:
juju deploy nova-cloud-controller
Now we run a juju status command and find amongst the output:
nova-cloud-controller:
  charm: cs:trusty/nova-cloud-controller-36
  exposed: false
  relations:
    cluster:
    - nova-cloud-controller
  units:
    nova-cloud-controller/0:
      agent-state: pending
      machine: "3"
...which indicates that machine "3" has been used for the deployment. We could then deploy other services to the same
node like this:
juju deploy --to 3 openstack-dashboard
juju deploy --to 3 glance
It is also possible to containerise these deployments with LXC, to provide a more robust separation of services:
juju deploy --to lxc:3 cinder
Example - an OpenStack control node
An example OpenStack control node could be a machine running the following services:
Nova Cloud Controller
Cinder API
Glance API
Keystone
OpenStack Dashboard
Ceph RADOS Gateway
The recommended specs for this node would be:
Node Attribute         Specification
Number of CPUs         4
Memory                 16 GB
Number of NIC ports    2 (PXE Management and VM Network)
Disk                   20 GB
APPENDIX II - Deploying Charms Offline
Many private clouds have no direct access to the internet for security reasons. In these cases the standard installation
of OpenStack using Juju doesn't work, because the Juju agents on the freshly installed nodes are not able to retrieve the
charms needed to complete the installation directly from the charm store.
The solution is to use a client which is able to connect to the internet, retrieve the needed charms, store them
in a local repository and then deploy using this repository. The client used for retrieval can be a different system
from the one used for deployment, as long as both have access to a shared filesystem.
Retrieving charms using the Charm Tools
Installation
Charm Tools was included in the list of packages recommended in the Juju installation documentation. If
you didn't install it then, you can do so now:
sudo apt-get update && sudo apt-get install charm-tools
Usage
The Charm Tools come packaged as both a stand-alone tool and a Juju plugin, so you can simply call them with charm or, as
usual for Juju commands, with juju charm.
There are several tools available within the Charm Tools themselves. At any time you can run juju charm to view the available
subcommands, and all subcommands have independent help pages, accessible using either the -h or --help flags.
If you want to retrieve and branch one of the charm store charms, use the get command, specifying the CHARM_NAME you
want to copy and, optionally, a CHARMS_DIRECTORY. Otherwise the current directory will be used.
juju charm get [-h|--help] CHARM_NAME [CHARMS_DIRECTORY]
Example
The command
juju charm get mysql
will download the MySQL charm to a mysql directory within your current path. By running
juju charm get wordpress ~/charms/precise/
you will download the WordPress charm to ~/charms/precise/wordpress. It is also possible to fetch all official charm store
charms. The command for this task is
juju charm getall [-h|--help] [CHARMS_DIRECTORY]
The retrieved charms will be placed in the CHARMS_DIRECTORY, or your current directory if no CHARMS_DIRECTORY is provided.
This command can take quite a while to complete - there are a lot of charms!
Deploying from a local repository
There are many cases when you may wish to deploy charms from a local filesystem source rather than the charm store:
When testing charms you have written.
When you have modified store charms for some reason.
When you don't have direct internet access.
... and probably many more cases which you can imagine yourself.
Juju can be pointed at a local directory to source charms from using the --repository=<path/to/files> switch like this:
juju deploy --repository=/usr/share/charms/ local:trusty/vsftpd
The --repository switch can be omitted when the shell environment defines JUJU_REPOSITORY, like so:
export JUJU_REPOSITORY=/usr/share/charms/
juju deploy local:trusty/vsftpd
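The local repository must follow the series/charm directory layout. A minimal sketch of preparing one (paths and charm name are illustrative):

```shell
# Build a local repository and deploy from it
export JUJU_REPOSITORY=/usr/share/charms
mkdir -p "$JUJU_REPOSITORY/trusty"
# Fetch the charm into the trusty series directory while still online
juju charm get vsftpd "$JUJU_REPOSITORY/trusty"
# Later, possibly offline, deploy from the repository
juju deploy local:trusty/vsftpd
```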
You can also make use of standard filesystem shortcuts, if the environment specifies the default-series. The following
examples will deploy the trusty charms in the local repository when default-series is set to trusty:
juju deploy --repository=. local:haproxy
juju deploy --repository ~/charms/ local:wordpress
The default-series can be specified in environments.yaml as follows:
default-series: precise
The default-series can also be added to any bootstrapped environment with the set-env command:
juju set-env "default-series=trusty"
Note: Specifying a local repository makes Juju look there first, but if the relevant charm is not found in that repository, it
will fall back to fetching it from the charm store. If you wish to check where a charm was installed from, it is listed in the
juju status output.
APPENDIX III - Ceph
Typically, OpenStack uses the local storage of nodes for configuration data as well as for the object storage provided by
Swift and the block storage provided by Cinder and Glance. It can, however, also use Ceph as a storage backend. Ceph
stripes block device images across a cluster, which provides better performance than a typical standalone server. It
satisfies scalability and redundancy needs, and Cinder's RBD driver is used to create, export and connect volumes to
instances.
Deployment
During the installation of OpenStack we've already seen the deployment of Ceph via
juju deploy --config openstack-config.yaml -n 3 ceph
juju deploy --config openstack-config.yaml -n 10 ceph-osd
juju deploy --config openstack-config.yaml ceph-radosgw
This will install three Ceph nodes configured with the information contained in the file openstack-config.yaml. This file
contains the configuration block-device: None for Cinder, so that this component does not use the local disk. Additionally,
10 Ceph OSD nodes providing the object storage are deployed and related to the Ceph nodes by
juju add-relation ceph-osd ceph
Once the ceph charm has bootstrapped the cluster, it will notify the ceph-osd charm which will scan for the configured
storage devices and add them to the pool of available storage. Now the relation to Cinder and Glance can be established
with
juju add-relation cinder ceph
juju add-relation glance ceph
so that both are using the storage provided by Ceph.
See also
[https://manage.jujucharms.com/charms/precise/ceph]
[https://manage.jujucharms.com/charms/precise/ceph-osd]
Managing OpenStack with Landscape
About Landscape
Landscape is a system management tool designed to let you easily manage multiple Ubuntu systems - up to 40,000 with a
single Landscape instance. From a single dashboard you can apply package updates and perform other administrative
tasks on many machines. You can categorize machines by group and manage each group separately. You can make
changes to targeted machines even when they are offline; the changes will be applied next time they start. Landscape lets
you create scripts to automate routine work such as starting and stopping services and performing backups. It lets you
use both common Ubuntu repositories and any custom repositories you may create for your own computers. Landscape is
particularly adept at security updates; it can highlight newly available packages that involve security fixes so they can be
applied quickly. You can use Landscape as a hosted service as part of Ubuntu Advantage, or run it on premises via
Landscape Dedicated Server.
Ubuntu Advantage
Ubuntu Advantage comprises systems management tools, technical support, access to online resources and support
engineers, training, and legal assurance to keep organizations on top of their Ubuntu server, desktop, and cloud
deployments. Advantage provides subscriptions at various support levels to help organizations maintain the level of
support they need.
Scope of this documentation
Landscape and Ubuntu Advantage can be used for any Ubuntu installation. The purpose of this document is to enable
users to install and configure Landscape specifically to work as part of an Ubuntu Cloud OpenStack deployment. Full
documentation for Landscape is available online.
Assumptions
To provide clarity in the instructions for the specific purposes outlined above, this document makes the following
assumptions.
You are installing Landscape on a MAAS/Juju/OpenStack environment
The above install will be based on the Ubuntu 14.04 LTS release.
Landscape Components
There are two components to Landscape: the client software, which runs on the managed machines, and the server, which
the clients talk to.
If you have opted to use the hosted Landscape service, then the server will be [https://landscape.canonical.com] and your
systems will be managed there. If you choose to use Landscape Dedicated Server in your own network, the steps to
deploy and configure this are documented below.
Installing the Landscape Dedicated Server Charm (optional)
The Landscape server itself requires connections to other running services. As such, it is more convenient to deploy it
from a Juju 'bundle'. The bundle file contains configuration information enabling Juju to automatically relate to other
required services. The bundle file for the Landscape server is contained within the charm. We will use the commands from
charm-tools and juju-deployer, which were installed at the same time as Juju.
First we need to fetch the charm:
charm get cs:trusty/landscape-server
This will fetch the charm from the Charm Store and place it in a local directory called landscape-server. The bundle itself is
found at landscape-server/config/landscape-deployments.yaml, so change to that directory:
cd landscape-server/config/
We should now prepare the server for deployment by creating two additional configuration files - a license file and a
source file for apt sources. This information will have been provided to you when you purchased licenses:
license file - Copy the text of your license into this file. You must have a valid license or the deployment will fail.
repo-file - add the URL portion of the 'sources' line for your apt repository here, e.g.:
https://username:[email protected]/
The bundle actually contains several potential setups. The recommended default setup is simply called 'landscape'. We
can deploy this configuration using the juju-deployer command as follows:
juju-deployer -Wdv -c landscape-deployments.yaml landscape
Depending on the scale of your deployment, you may wish to use the 'landscape-max' bundle target. The other targets
and their uses are explained fully in the README.md file included with the charm.
Note
The charm bundle deploys other charms from the charm store. Your Juju client will need Internet access to download
them.
Deploying the Landscape client
The Landscape client is a component of Landscape designed to manage your running services. It operates as a subordinate
charm in Juju. This means that it only runs as a component attached to other services. The advantage of this is that when
you scale out managed services, any new service units created will automatically be managed too.
juju deploy landscape-client
Configuration for hosted service
The recommended way to configure the Landscape client is to generate a file containing the desired configuration values
in a standard YAML format. For the hosted service, this would look like:
landscape-client:
  account-name: <your account name>
where <your account name> is the account name used to access https://landscape.canonical.com. This configuration
should be saved in a suitably named file, e.g. landscape-config.yaml
Configuration for standalone LDS server
The Landscape client requires some extra configuration data when used with a Landscape Dedicated Server. For example:
landscape-client:
  origin: distro
  url: https://10.0.10.140/message-system
  ping-url: http://10.0.10.140/ping
  account-name: standalone
  registration-key: secret
  ssl-public-key: base64:LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0Ck1SUlwVENZJQ0FURS0tLS0Ck1JSUlwVENDQVRxZ0lKQVBFV011eUhJ ...
At the minimum, the following needs to be configured:
account-name: A string containing the account name to connect to. In the case of LDS, this should always be set to
'standalone'.
url: The message server URL to connect to. Normally https://fqdn/message-system.
ping-url: The ping server URL to perform lightweight exchange initiation with. Normally http://fqdn/ping.
ssl-public-key: If you configured the server with a self-signed SSL certificate (which is the default), you will need to set
this value. This can be a path to a file or a base64 encoded entry of the certificate itself, prefixed with "base64:". This
option is only used if the message server URL given above starts with https://.
You can set (or update) the configuration options of a deployed service by using the juju set command. For example:
juju set landscape-client "registration-key=<your key>"
Once the client has been configured, the service can be deployed and attached to the OpenStack services to be
monitored.
juju deploy landscape-client --config=landscape-config.yaml
juju add-relation landscape-client openstack-dashboard
juju add-relation landscape-client keystone
juju add-relation landscape-client ceph
juju add-relation landscape-client nova-compute
juju add-relation landscape-client quantum-gateway
juju add-relation landscape-client cinder
juju add-relation landscape-client nova-cloud-controller
juju add-relation landscape-client glance
juju add-relation landscape-client ceph-radosgw
As previously mentioned, the Landscape client will scale out with these services, so no further relations will need to be
made. However, you will need to make sure you have sufficient licenses to cover the services.
Further documentation about the use and management of Landscape can be found at the main Landscape website.
Maintenance and Administration of your
Ubuntu Cloud
In order to keep your Ubuntu Cloud up to date and operating at its best, there are some common administration tasks you
may wish to perform. These are detailed here.
Juju
Logging
Logging is set up to occur on each instance. This includes the normal service logs for whichever services are deployed on
that particular instance. For example, if you deploy the apache2 service, the logs will appear on that instance in
/var/log/apache2/ as one might expect. To examine instance level logs, you can simply use ssh to connect to a given
machine:
juju ssh <machine number>
The same directory also contains the juju logs for that service, located in /var/log/juju. However, it is often more useful
to view the systemwide logs for Juju. Juju uses rsyslogd to aggregate all logs to get a better systemwide view of Juju's
activity.
Connecting to rsyslogd
The target of the aggregated log is the file /var/log/juju/all-machines.log. You can directly access it using the
command:
juju debug-log [-n <number>] [-n +<number>] [-e <environment>]
Where the -n switch is given and followed by a number, the log will be tailed from that many lines in the past (i.e., that
many existing lines of the log will be included in the output, along with any subsequent output).
Where the -n switch is given and followed by a '+' and a number, the log will be tailed starting from that specific line in the
log file.
This somewhat unusual syntax has been chosen so that the command behaves like the standard Unix tail command. In
fact, it is analogous to running tail -f [options] /var/log/juju/all-machines.log on the bootstrap node. Examples:
To read the ten most recent log entries and follow any subsequent entries to the log:
juju debug-log
To read the thirty most recent log entries and follow any subsequent entries to the log:
juju debug-log -n 30
To read all the log entries and follow any subsequent entries to the log:
juju debug-log -n +1
And of course it is possible to combine the command with other shell tools to make the output more useful, e.g. to filter
the whole log for lines matching 'INFO':
juju debug-log -n +1 | grep 'INFO'
Note
As the command uses the follow behaviour of tail by default, you do not need to specify the -f switch. You will also need
to end the session with Control-C.
Upgrading Juju
The Juju software is continually being improved in terms of functionality, speed, and ease of use. While there's
generally no need to upgrade a perfectly stable production environment, sometimes it will be advantageous to do so.
When running on a MAAS environment, the first step to upgrading the Juju client is to make sure the latest tools are
available. This is achieved by running the command:
juju sync-tools
This copies the Juju tools tarball from the official tools store (located at https://streams.canonical.com/juju) into your
environment.
To upgrade the running version of Juju client you can run:
sudo apt-get update && sudo apt-get install juju-core
The final step is to upgrade the running agents and the bootstrap environment. This is achieved by running:
juju upgrade-juju
This command sets the version number for all Juju agents to run: the most recent supported version compatible
with the command-line tools version. For that reason, first ensure you have upgraded the Juju client.
When run without arguments, upgrade-juju will try to upgrade to a newer version. The version chosen depends on the
current value of the environment's agent-version setting, in this order:
The highest patch.build version of the next stable major.minor version.
The highest patch.build version of the current major.minor version.
Both depend on availability of the according tools, as mentioned above.
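As a sketch of that selection rule (version numbers here are made up, and juju computes this internally from the synced tools), assume the environment's agent-version is 1.18.1, so the next stable minor series is 1.20:

```shell
# Hypothetical tools versions found in the environment's tools store.
available="1.18.1 1.18.4 1.19.0 1.20.1 1.20.3"

# Prefer the highest patch.build of the next stable major.minor
# (1.20 in this made-up example)...
pick=$(echo "$available" | tr ' ' '\n' | grep '^1\.20\.' | sort -V | tail -n1)
# ...falling back to the highest patch.build of the current one (1.18).
[ -n "$pick" ] || pick=$(echo "$available" | tr ' ' '\n' | grep '^1\.18\.' | sort -V | tail -n1)
echo "$pick"    # 1.20.3
```

Note that 1.19.0 is skipped: it belongs to neither the current nor the next stable minor series.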
It is also possible to specify a version if desired:
juju upgrade-juju --version 1.18.2
The current running version of agents can be determined from running:
juju status
Backup and Recovery of Juju
Backup
Juju's working principle is based on storing the state of the cloud in databases containing information about the
environment, machines, services, and units. Changes to an environment are made to the state first, then detected by the
relevant agents, which assume the responsibility of performing any required actions.
This principle allows Juju to easily back up this information, plus some needed configuration data and other useful
information. The command to do so is juju backup, which saves the currently selected environment, so ensure
you switch to the environment you want to back up first:
juju switch my-env
juju backup
The command creates two generations of backups on the bootstrap node, also known as machine-0. Besides the state and
configuration data about this machine and the others in its environment, the aggregated log for all machines and
the log of this machine itself are saved. The aggregated log is the same one accessed by calling:
juju debug-log
This enables you to retrieve helpful information in case of a problem. After the backup is created on the bootstrap node,
it is transferred to the current directory on your working machine as juju-backup-YYYYMMDD-HHMM.tgz, where
YYYYMMDD-HHMM is the date and time of the backup. In case you want to open the backup manually to access the logging
data, you can find it in the contained archive root.tar. Beware that authentication details may be exposed in these logs, so
appropriate security measures should be taken.
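The archive layout can be sketched as follows. We fabricate a minimal stand-in backup here (the file name and log line are illustrative, and a real backup contains much more), but the extraction steps mirror those for a real juju-backup-YYYYMMDD-HHMM.tgz:

```shell
# Fabricate a stand-in backup: a tgz wrapping root.tar, which in turn
# holds the aggregated log.
mkdir -p stage/var/log/juju
echo "machine-0: INFO demo entry" > stage/var/log/juju/all-machines.log
tar -C stage -cf root.tar var
tar -czf juju-backup-20140601-1200.tgz root.tar
rm -rf stage root.tar

# Opening the backup manually to reach the logging data:
tar -xzf juju-backup-20140601-1200.tgz root.tar
tar -xf root.tar var/log/juju/all-machines.log
cat var/log/juju/all-machines.log
```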
Restore
To restore an environment, the corresponding command is:
juju restore <BACKUPFILE>
This allows you to choose the environment to restore.
OpenStack
Upgrading
Upgrading an OpenStack cluster in one big step requires additional hardware on which to set up the updated cloud
alongside the production one. It also leads to a longer outage while your cloud is in read-only mode, the state is
transferred to the new cloud, and the environments are switched. The preferred way to upgrade an OpenStack cloud is
therefore a rolling upgrade of each system component, piece by piece.
Here you can choose between in-place and side-by-side upgrades. The former requires shutting down the affected
component while you perform the upgrade, and you may have trouble in case of a rollback. To avoid this, use the
side-by-side upgrade approach.
Before starting the upgrade you should:
Perform some "cleaning" of the environment to ensure a consistent state. For example, instances not fully
purged from the system after deletion may cause indeterminate behavior.
Read the release notes and documentation.
Find incompatibilities between your versions.
The following upgrade tasks follow the same procedure for each component:
1. Configure the new worker.
2. Turn off the current worker. During this time, hide the downtime using a message queue or a load balancer.
3. As described earlier, take a backup of the old worker for a rollback.
4. Copy the state of the current to the new worker.
5. Start up the new worker.
Now repeat these steps for each worker in an appropriate order. In case of a problem, it should be easy to roll back as long
as the former worker stays untouched. This is, besides the shorter downtime, the most important advantage of the side-
by-side upgrade.
The following order for service upgrades seems the most successful:
1. Upgrade the OpenStack Identity Service (Keystone).
2. Upgrade the OpenStack Image Service (Glance).
3. Upgrade OpenStack Compute (Nova), including networking components.
4. Upgrade OpenStack Block Storage (Cinder).
5. Upgrade the OpenStack dashboard.
These steps look simple, but they still form a complex procedure depending on your cloud configuration. We recommend
having a testing environment with a near-identical architecture to your production system. This doesn't mean you should
use the same sizes and hardware; that would be best, but quite expensive. However, there are ways to reduce the
cost.
Use your own cloud. The simplest place to start testing the next version of OpenStack is by setting up a new
environment inside your own cloud. This may seem odd, especially the double virtualisation used in running compute
nodes, but it's the fastest way to test your configuration.
Use a public cloud. Your own cloud is unlikely to have sufficient capacity to scale a test to the level of
the entire cloud, so consider using a public cloud to test the scalability limits of your cloud controller configuration. Most
public clouds bill by the hour, which means it can be inexpensive to perform even a test with many nodes.
Make another storage endpoint on the same system. If you use an external storage plug-in or shared file system with
your cloud, in many cases it's possible to test that it works by creating a second share or endpoint. This enables you
to test the system before entrusting your storage to the new version.
Watch the network. Even with small-scale testing, it should be possible to determine if something is going horribly
wrong in inter-component communication if you look at the network packets and see too many.
OpenStack - Backup and Recovery
OpenStack's flexibility makes backup and restore a very individual process, depending on the components used. This
section describes how the critical parts OpenStack needs to run, such as configuration files and databases, are saved. As
with Juju before, it doesn't describe how to back up the objects inside the Object Storage or the data inside the Block
Storage.
Backup Cloud Controller Database
Like Juju, the OpenStack cloud controller uses a database server that stores the central databases for Nova, Glance,
Keystone, Cinder, and Swift. You can backup the five databases into one common dump:
$ mysqldump --opt --all-databases > openstack.sql
Alternatively you can backup the database for each component individually:
$ mysqldump --opt nova > nova.sql
$ mysqldump --opt glance > glance.sql
$ mysqldump --opt keystone > keystone.sql
$ mysqldump --opt cinder > cinder.sql
$ mysqldump --opt swift > swift.sql
Backup File Systems
In addition to the databases, OpenStack uses different directories for its configuration, runtime files, and logging. Like
the databases, they are grouped individually per component. This also allows for the backup to be done per component.
Nova
You'll find the configuration directory /etc/nova on the cloud controller and each compute node. It should be regularly
backed up.
Another directory to backup is /var/lib/nova. Here, you must be careful with the instances subdirectory on the compute
nodes. It contains the KVM images of the running instances. If you want to maintain backup copies of those instances, you
can do a backup here too. In this case, make sure not to save a live KVM instance because it may not boot properly after
restoring the backup.
The third directory for the compute component is /var/log/nova. In case of a central logging server, this directory does not
need to be backed up, and we suggest you run your environment with this kind of logging.
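Putting the above together, a per-component archive might be produced as sketched below. Stand-in directories mimic the node's layout (on a real node you would archive /etc/nova and /var/lib/nova directly), and the live KVM images under instances/ are excluded as advised:

```shell
# Stand-in tree mimicking a compute node's Nova directories.
root=$(mktemp -d)
mkdir -p "$root/etc/nova" "$root/var/lib/nova/instances"
echo "[DEFAULT]" > "$root/etc/nova/nova.conf"
touch "$root/var/lib/nova/instances/disk"

# Archive configuration and state, skipping the live instance images.
tar -C "$root" --exclude='var/lib/nova/instances' \
    -czf nova-backup.tgz etc/nova var/lib/nova
tar -tzf nova-backup.tgz
```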
Glance
As with Nova, you'll find the directories /etc/glance and /var/log/glance; the handling should be the same here as well.
Glance also uses the directory /var/lib/glance, which should likewise be backed up.
Keystone
Keystone uses the directories /etc/keystone, /var/lib/keystone, and /var/log/keystone. They follow the same rules as
Nova and Glance. Even if the lib directory doesn't contain any data in use, it can be backed up just in case.
Cinder
As before, you'll find the directories /etc/cinder, /var/log/cinder, and /var/lib/cinder. The handling here should also be
the same. Unlike Nova and Glance, there's no special handling of /var/lib/cinder needed.
Swift
Besides the Swift configuration, the directory /etc/swift contains the ring files and ring builder files. If those get lost, your
data becomes inaccessible. You can imagine how important it is to back up this directory. Best practice is to copy the
builder files to the storage nodes along with the ring files, so that multiple copies are spread throughout the
cluster.
Restore
The restore based on the backups is a step-by-step process restoring the component's databases and all related
directories. It's essential that the component being restored is not running, so always start the restore after you've
stopped all components.
Take Nova for example. First execute:
$ stop nova-api
$ stop nova-cert
$ stop nova-consoleauth
$ stop nova-novncproxy
$ stop nova-objectstore
$ stop nova-scheduler
on the cloud controller to safely stop the processes of the component. The next step is to restore the database. By using
the --opt option during backup, we ensured that all tables are initially dropped and there's no conflict with existing data
in the databases.
$ mysql nova < nova.sql
Before restoring the directories, you should move the configuration directory /etc/nova into a secure location in case you
need to roll it back.
After the database and the files are restored, you can start MySQL and Nova again.
$ start mysql
$ start nova-api
$ start nova-cert
$ start nova-consoleauth
$ start nova-novncproxy
$ start nova-objectstore
$ start nova-scheduler
The process for the other components looks similar.
Scaling OpenStack
While traditional applications require larger hardware to scale ("vertical scaling"), cloud-based applications typically
request more discrete hardware ("horizontal scaling"). If your cloud is successful, eventually you must add resources to
meet the increasing demand.
To suit the cloud paradigm, OpenStack is designed to be horizontally scalable. Rather than switching to larger servers, you
procure more servers and simply install identically configured services. Ideally, you scale out and load balance among
groups of functionally identical services.
But to scale the services running on OpenStack, sometimes even OpenStack itself has to be scaled. This means nodes for
computing or storage have to be added.
Nova
Usage Statistics
To list the hosts and the nova-related services that run on them call:
nova host-list
+---------------+-------------+----------+
| host_name | service | zone |
+---------------+-------------+----------+
| mystack-alpha | conductor | internal |
| mystack-alpha | compute | nova |
| mystack-alpha | cert | internal |
| mystack-alpha | network | internal |
| mystack-alpha | scheduler | internal |
| mystack-alpha | consoleauth | internal |
+---------------+-------------+----------+
To get a summary of resource usage of all of the instances running on the host call:
nova host-describe mystack-alpha
+---------------+----------------------------------+-----+-----------+---------+
| HOST | PROJECT | cpu | memory_mb | disk_gb |
+---------------+----------------------------------+-----+-----------+---------+
| mystack-alpha | (total) | 2 | 4003 | 157 |
| mystack-alpha | (used_now) | 3 | 5120 | 40 |
| mystack-alpha | (used_max) | 3 | 4608 | 40 |
| mystack-alpha | b70d90d65e464582b6b2161cf3603ced | 1 | 512 | 0 |
| mystack-alpha | 66265572db174a7aa66eba661f58eb9e | 2 | 4096 | 40 |
+---------------+----------------------------------+-----+-----------+---------+
This information will help explain how this host is used.
The cpu column shows the sum of the virtual CPUs for instances running on the host.
The memory_mb column shows the sum of the memory (in MB) allocated to the instances that run on the hosts.
The disk_gb column shows the sum of the root and ephemeral disk sizes (in GB) of the instances that run on the hosts.
The used_now row shows the sum of the resources allocated to the instances that run on the host plus the resources
allocated to the virtual machine of the host itself.
The used_max row shows the sum of the resources allocated to the instances that run on the host.
Note: These values are computed using only information about flavors of the instances that run on the hosts. This
command does not query the CPU usage, memory usage, or hard disk usage of the physical host.
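As a quick arithmetic cross-check of the example output above, the used_max row is simply the flavor-based sum of the two per-project rows:

```shell
# Per-project rows from the host-describe example:
# (cpu, memory_mb, disk_gb) = (1, 512, 0) and (2, 4096, 40).
awk 'BEGIN {
    cpu  = 1 + 2           # virtual CPUs
    mem  = 512 + 4096      # memory_mb
    disk = 0 + 40          # disk_gb
    print cpu, mem, disk   # matches the used_max row: 3 4608 40
}'
```

The used_now row differs only by the resources reserved for the host's own virtual machine.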
Now you can retrieve the CPU, memory, I/O, and network statistics for an instance. First list all instances.
nova list
+--------------------------------------+----------------------+--------+------------+-------------+------------------+
| ID                                   | Name                 | Status | Task State | Power State | Networks         |
+--------------------------------------+----------------------+--------+------------+-------------+------------------+
| 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 | myCirrosServer       | ACTIVE | None       | Running     | private=10.0.0.3 |
| 8a99547e-7385-4ad1-ae50-4ecfaaad5f42 | myInstanceFromVolume | ACTIVE | None       | Running     | private=10.0.0.4 |
+--------------------------------------+----------------------+--------+------------+-------------+------------------+
Then, get some diagnostic statistics:
nova diagnostics myCirrosServer
+------------------+----------------+
| Property | Value |
+------------------+----------------+
| vnet1_rx | 1210744 |
| cpu0_time | 19624610000000 |
| vda_read | 0 |
| vda_write | 0 |
| vda_write_req | 0 |
| vnet1_tx | 863734 |
| vnet1_tx_errors | 0 |
| vnet1_rx_drop | 0 |
| vnet1_tx_packets | 3855 |
| vnet1_tx_drop | 0 |
| vnet1_rx_errors | 0 |
| memory | 2097152 |
| vnet1_rx_packets | 5485 |
| vda_read_req | 0 |
| vda_errors | -1 |
+------------------+----------------+
Finally you can get summary statistics for each tenant:
nova usage-list
Usage from 2013-06-25 to 2013-07-24:
+----------------------------------+-----------+--------------+-----------+---------------+
| Tenant ID | Instances | RAM MB-Hours | CPU Hours | Disk GB-Hours |
+----------------------------------+-----------+--------------+-----------+---------------+
| b70d90d65e464582b6b2161cf3603ced | 1 | 344064.44 | 672.00 | 0.00 |
| 66265572db174a7aa66eba661f58eb9e | 3 | 671626.76 | 327.94 | 6558.86 |
+----------------------------------+-----------+--------------+-----------+---------------+
Change server size
In case a server's flavor doesn't match your needs anymore, you can resize the server. First, list your available flavors.
$ nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+-----------+------+-------+-------------+
| 1 | m1.tiny | 512 | 0 | 0 | | 1 | 1.0 |
| 2 | m1.small | 2048 | 10 | 20 | | 1 | 1.0 |
| 3 | m1.medium | 4096 | 10 | 40 | | 2 | 1.0 |
| 4 | m1.large | 8192 | 10 | 80 | | 4 | 1.0 |
| 5 | m1.xlarge | 16384 | 10 | 160 | | 8 | 1.0 |
+----+-----------+-----------+------+-----------+------+-------+-------------+
Now you can show the information for the server you want to resize with:
$ nova show 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5
+------------------------+----------------------------------------------------------+
| Property | Value |
+------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-EXT-SRV-ATTR:host | mystack-alpha |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2012-05-09T15:47:48Z |
| flavor | m1.small |
| hostId | de0c201e62be88c61aeb52f51d91e147acf6cf2012bb57892e528487 |
| id | 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 |
| image | maverick-image |
| key_name | |
| metadata | {} |
| name | myCirrosServer |
| private network | 172.16.101.6 |
| progress | 0 |
| public network | 10.4.113.6 |
| status | ACTIVE |
| tenant_id | e830c2fbb7aa4586adf16d61c9b7e482 |
| updated | 2012-05-09T15:47:59Z |
| user_id | de3f4e99637743c7b6d27faca4b800a9 |
+------------------------+----------------------------------------------------------+
In this example the flavor is m1.small. To resize the server to m1.medium, you need the server's ID and the ID of the new flavor:
$ nova resize 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 3
Now the list command shows the changed status:
$ nova list
+--------------------------------------+----------------------+--------+------------+-------------+------------------+
| ID                                   | Name                 | Status | Task State | Power State | Networks         |
+--------------------------------------+----------------------+--------+------------+-------------+------------------+
| 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 | myCirrosServer       | RESIZE | None       | Running     | private=10.0.0.3 |
| 8a99547e-7385-4ad1-ae50-4ecfaaad5f42 | myInstanceFromVolume | ACTIVE | None       | Running     | private=10.0.0.4 |
+--------------------------------------+----------------------+--------+------------+-------------+------------------+
Once this operation finishes, the status changes to VERIFY_RESIZE. You have to confirm that the operation has been
successful:
$ nova resize-confirm 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5
In case the resizing hasn't been successful you can revert it by executing:
$ nova resize-revert 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5
In both cases, the server status should go back to ACTIVE.
Adding compute nodes
Adding compute nodes is straightforward. They are easily picked up by the existing installation. Simply call:
juju add-unit nova-compute
Migrate instances to other nodes
OpenStack allows you to migrate instances between nodes, e.g. in case of an overall too high load on the current node. You
already know the commands list and show to retrieve information about your instance. To list the possible nodes,
execute:
nova-manage service list
mystack-alpha nova-scheduler enabled :-) None
mystack-alpha nova-network enabled :-) None
mystack-alpha nova-compute enabled :-) None
mystack-bravo nova-compute enabled :-) None
mystack-charlie nova-compute enabled :-) None
Here we will choose mystack-charlie as the migration target, since nova-compute is running on that system. First, let's
check whether the node has enough resources:
nova-manage service describe_resource mystack-charlie
HOST PROJECT cpu mem(mb) hdd
mystack-charlie(total) 16 32232 878
mystack-charlie(used_now) 13 21284 442
mystack-charlie(used_max) 13 21284 442
mystack-charlie p1 5 10240 150
mystack-charlie p2 5 10240 150
.....
These values show the physical resources and their usage.
The cpu column shows the number of CPUs.
The mem(mb) column shows the total amount of memory (in MB).
The hdd column shows the total amount of space for NOVA-INST-DIR/instances (in GB).
The 1st line shows the total amount of resources the physical server has.
The 2nd line shows the currently used resources.
The 3rd line shows the maximum used resources.
The 4th line and below show the resources used per project.
If those values show that our instance fits on that system we can migrate it with:
nova live-migration 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 mystack-charlie
Migration of 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 initiated.
Ceph and OpenStack
Ceph stripes block device images as objects across a cluster, allowing for better performance than a standalone server.
OpenStack is able to use Ceph Block Devices through libvirt, which configures the QEMU interface to librbd.
To use Ceph Block Devices with OpenStack, you must install QEMU, libvirt, and OpenStack first. We recommend using
a separate physical node for your OpenStack installation. OpenStack recommends a minimum of 8GB of RAM and a quad-
core processor.
Three parts of OpenStack integrate with Ceph’s block devices:
Images: OpenStack Glance manages images for VMs. Images are immutable. OpenStack treats images as binary blobs
and downloads them accordingly.
Volumes: Volumes are block devices. OpenStack uses volumes to boot VMs or to attach volumes to running VMs.
OpenStack manages volumes using Cinder services.
Guest Disks: Guest disks are guest operating system disks. By default, when you boot a virtual machine, its disk appears
as a file on the filesystem of the hypervisor (usually under /var/lib/nova/instances/<uuid>/). Prior to OpenStack Havana, the
only way to boot a VM in Ceph was to use the boot-from-volume functionality of Cinder. Now it's possible to directly
boot every virtual machine inside Ceph without using Cinder. This comes in handy as it allows us to easily perform
maintenance operations with the live-migration process. It also means that, if your hypervisor dies, it's really
convenient to trigger nova evacuate and almost seamlessly run the virtual machine somewhere else.
You can use OpenStack Glance to store images in a Ceph Block Device and you can use Cinder to boot a VM using a copy-
on-write clone of an image.
Create a pool
By default, Ceph block devices use the rbd pool. You may use any available pool. We recommend creating a pool for Cinder
and a pool for Glance. Ensure your Ceph cluster is running first, then create the pools.
ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create backups 128
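The trailing 128 in each command is the pool's placement group (PG) count. A common rule of thumb from the Ceph documentation of this era is roughly (100 × number of OSDs) / replica count, rounded up to the next power of two; the OSD and replica numbers below are illustrative, not taken from this guide:

```shell
# Rule-of-thumb PG count: (100 * OSDs) / replicas, rounded up
# to the next power of two. With 3 OSDs and 3 replicas this
# yields 128, matching the pool-creation commands above.
osds=3
replicas=3
raw=$(( (100 * osds) / replicas ))   # 100
pg=1
while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done
echo "$pg"   # -> 128
```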
Configure OpenStack Ceph Clients
The nodes running glance-api, cinder-volume, nova-compute and cinder-backup act as Ceph clients. Each requires the
ceph.conf file:
ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
On the glance-api node, you'll need the Python bindings for librbd. On Ubuntu/Debian:
sudo apt-get install python-ceph
On RPM-based distributions:
sudo yum install python-ceph
On the nova-compute, cinder-backup and cinder-volume nodes, install both the Python bindings and the client command
line tools. On Ubuntu/Debian:
sudo apt-get install ceph-common
On RPM-based distributions:
sudo yum install ceph
If you have cephx authentication enabled, create a new user for Nova/Cinder and Glance. Execute the following:
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
Add the keyrings for client.cinder, client.glance, and client.cinder-backup to the appropriate nodes and change their
ownership.
ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh {your-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup | ssh {your-cinder-backup-server} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh {your-cinder-backup-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
Nodes running nova-compute need the keyring file for the nova-compute process. They also need to store the secret key of
the client.cinder user in libvirt. The libvirt process needs it to access the cluster while attaching a block device from
Cinder.
Create a temporary copy of the secret key on the nodes running nova-compute
ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key
Then, on the compute nodes, add the secret key to libvirt and remove the temporary copy of the key.
uuidgen
457eb676-33da-42ec-9a8c-9293d545c337
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
<uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
Save the uuid of the secret for configuring nova-compute later.
Important: You don't necessarily need the UUID on all the compute nodes. However, from a platform consistency
perspective, it's better to keep the same UUID.
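Before moving on, you can confirm that libvirt stored the secret correctly (this assumes the example UUID shown above):

```shell
# List defined libvirt secrets and read back the cinder key;
# the value should match the output of `ceph auth get-key client.cinder`.
sudo virsh secret-list
sudo virsh secret-get-value 457eb676-33da-42ec-9a8c-9293d545c337
```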
Configure OpenStack to use Ceph
Glance
Glance can use multiple back ends to store images. To use Ceph block devices by default, edit /etc/glance/glance-
api.conf and add:
default_store=rbd
rbd_store_user=glance
rbd_store_pool=images
If you want to enable copy-on-write cloning of images into volumes, also add:
show_image_direct_url=True
Note that this exposes the back-end location via Glance’s API, so the endpoint with this option enabled should not be
publicly accessible.
Cinder
OpenStack requires a driver to interact with Ceph block devices. You must also specify the pool name for the block
device. On your OpenStack node, edit /etc/cinder/cinder.conf by adding:
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot=false
rbd_max_clone_depth=5
glance_api_version=2
If you’re using cephx authentication, also configure the user and UUID of the secret you added to libvirt as documented
earlier.
rbd_user=cinder
rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337
Cinder Backup
OpenStack Cinder Backup requires a specific daemon (cinder-backup), so don't forget to install it. On your Cinder Backup node, edit
/etc/cinder/cinder.conf and add:
backup_driver=cinder.backup.drivers.ceph
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder-backup
backup_ceph_chunk_size=134217728
backup_ceph_pool=backups
backup_ceph_stripe_unit=0
backup_ceph_stripe_count=0
restore_discard_excess_bytes=true
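For reference, backup_ceph_chunk_size is expressed in bytes; the value above corresponds to 128 MB chunks:

```shell
# The backup chunk size is given in bytes: 134217728 B = 128 MB.
echo $(( 134217728 / 1024 / 1024 ))   # -> 128
```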
Nova
In order to boot all virtual machines directly into Ceph, Nova must be configured. On every compute node, edit
/etc/nova/nova.conf and add:
libvirt_images_type=rbd
libvirt_images_rbd_pool=volumes
libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337
It's also good practice to disable any file injection. Usually, while booting an instance, Nova attempts to open the rootfs of
the virtual machine and inject things like passwords and SSH keys directly into the filesystem. At this point, it is
better to rely on the metadata service and cloud-init. On every compute node, edit /etc/nova/nova.conf and add:
libvirt_inject_password=false
libvirt_inject_key=false
libvirt_inject_partition=-2
Restart OpenStack
To activate the Ceph block device driver and load the block device pool name into the configuration, you must restart
OpenStack.
sudo glance-control api restart
sudo service nova-compute restart
sudo service cinder-volume restart
sudo service cinder-backup restart
Once OpenStack is up and running, you should be able to create a volume and boot from it.
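As a quick smoke test, you can create a volume and confirm that a matching RBD image appears in the Ceph pool (the volume name here is just an example):

```shell
# Create a 1 GB test volume via Cinder...
cinder create --display-name ceph-test 1
# ...then check that a corresponding RBD image appeared in the volumes pool.
rbd --pool volumes ls
```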
Scaling Ceph
In addition to Ceph providing a safer, higher-performing platform as an OpenStack storage backend, the user also
benefits from an easier way to scale storage as the need grows.
The addition of Ceph nodes is done using the Juju add-unit command. By default, it adds only one unit, but it's possible
to pass the desired number of units as an argument. To add one more Ceph OSD Daemon node you simply call:
juju add-unit ceph-osd
Larger numbers of units can be added using the -n argument, e.g. 5 units with:
juju add-unit -n 5 ceph-osd
Attention: Adding more nodes to Ceph triggers a rebalancing of data across the nodes of the cluster. This can degrade
performance while the process is running, so it should be done in smaller steps.
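While the new OSDs join and data rebalances, progress can be watched from any Ceph node:

```shell
# Stream cluster events; placement groups pass through remapped/backfilling
# states until the cluster reports HEALTH_OK again.
ceph -w
```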