author     Stephen Finucane <stephenfin@redhat.com>    2021-02-23 10:25:24 +0000
committer  Stephen Finucane <stephenfin@redhat.com>    2021-03-23 11:16:42 +0000
commit     76549775fe73d4a54643803055688b7aaed5471c (patch)
tree       d47ed4338beba713e41e2ffe5dfff814d0a9e5a9
parent     04b869370369ca1398d24d18c15f2e1b516620a8 (diff)
docs: Change formatting of hypervisor config guides
Use the formatting established in the style guide. There's a lot of
out-of-date information in here, but that's a battle for another day.
Change-Id: Ieec2c8f450c05a2451179e3bdba77514f2cc956e
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
 doc/source/admin/configuration/api.rst                  |   3
 doc/source/admin/configuration/hypervisor-basics.rst    |  14
 doc/source/admin/configuration/hypervisor-hyper-v.rst   | 113
 doc/source/admin/configuration/hypervisor-ironic.rst    |  35
 doc/source/admin/configuration/hypervisor-kvm.rst       | 102
 doc/source/admin/configuration/hypervisor-lxc.rst       |  10
 doc/source/admin/configuration/hypervisor-powervm.rst   |  22
 doc/source/admin/configuration/hypervisor-qemu.rst      |  13
 doc/source/admin/configuration/hypervisor-virtuozzo.rst |  11
 doc/source/admin/configuration/hypervisor-vmware.rst    | 191
 doc/source/admin/configuration/hypervisor-zvm.rst       |  15
 doc/source/admin/configuration/hypervisors.rst          |   6
 doc/source/admin/configuration/index.rst                |   6
 13 files changed, 200 insertions(+), 341 deletions(-)
diff --git a/doc/source/admin/configuration/api.rst b/doc/source/admin/configuration/api.rst
index 979169ee8f..a8c2e6a0f4 100644
--- a/doc/source/admin/configuration/api.rst
+++ b/doc/source/admin/configuration/api.rst
@@ -6,8 +6,9 @@
 The Compute API, is the component of OpenStack Compute that receives and
 responds to user requests, whether they be direct API calls, or via the CLI
 tools or dashboard.
 
+
 Configure Compute API password handling
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------------------
 
 The OpenStack Compute API enables users to specify an administrative
 password when they create, rebuild, rescue or evacuate a server instance.
diff --git a/doc/source/admin/configuration/hypervisor-basics.rst b/doc/source/admin/configuration/hypervisor-basics.rst
deleted file mode 100644
index 9ac1e785e1..0000000000
--- a/doc/source/admin/configuration/hypervisor-basics.rst
+++ /dev/null
@@ -1,14 +0,0 @@
-===============================
-Hypervisor Configuration Basics
-===============================
-
-The node where the ``nova-compute`` service is installed and operates on the
-same node that runs all of the virtual machines. This is referred to as the
-compute node in this guide.
-
-By default, the selected hypervisor is KVM. To change to another hypervisor,
-change the ``virt_type`` option in the ``[libvirt]`` section of ``nova.conf``
-and restart the ``nova-compute`` service.
-
-Specific options for particular hypervisors can be found in
-the following sections.
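The deleted ``hypervisor-basics`` page above describes switching hypervisors by changing ``virt_type`` in the ``[libvirt]`` section and restarting ``nova-compute``. As a minimal sketch (the exact surrounding file content is not shown in the diff), the relevant ``nova.conf`` fragment would look like:

```ini
# Sketch only: selects the libvirt-managed hypervisor.
# Restart the nova-compute service after changing this.
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
# kvm is the default; other libvirt choices seen in this commit
# include qemu, lxc, and parallels.
virt_type = kvm
```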
diff --git a/doc/source/admin/configuration/hypervisor-hyper-v.rst b/doc/source/admin/configuration/hypervisor-hyper-v.rst
index b85a32a6e0..79d72cad05 100644
--- a/doc/source/admin/configuration/hypervisor-hyper-v.rst
+++ b/doc/source/admin/configuration/hypervisor-hyper-v.rst
@@ -19,8 +19,9 @@ compute nodes:
 - Windows Server 2012 R2 Server and Core (with the Hyper-V role enabled)
 - Hyper-V Server
 
+
 Hyper-V configuration
-~~~~~~~~~~~~~~~~~~~~~
+---------------------
 
 The only OpenStack services required on a Hyper-V node are ``nova-compute`` and
 ``neutron-hyperv-agent``. Regarding the resources needed for this host you have
@@ -34,7 +35,7 @@ configuration information should work for the Windows 2012 and 2012 R2
 platforms.
 
 Local storage considerations
-----------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The Hyper-V compute node needs to have ample storage for storing the virtual
 machine images running on the compute nodes. You may use a single volume for
@@ -43,7 +44,7 @@ all, or partition it into an OS volume and VM volume.
 
 .. _configure-ntp-windows:
 
 Configure NTP
--------------
+~~~~~~~~~~~~~
 
 Network time services must be configured to ensure proper operation of the
 OpenStack nodes. To set network time on your Windows host you must run the
@@ -61,7 +62,7 @@ server.
 
 Note that in case of an Active Directory environment, you may do this only for
 the AD Domain Controller.
 
 Configure Hyper-V virtual switching
------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Information regarding the Hyper-V virtual Switch can be found in the
 `Hyper-V Virtual Switch Overview`__.
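The NTP step above says to "run the" Windows time commands, but the hunk cuts off before showing them. As a hedged sketch only (the exact commands and the NTP server are assumptions, not taken from the diff), the usual ``w32tm`` sequence looks like:

```console
PS C:\> net stop w32time
PS C:\> w32tm /config /manualpeerlist:pool.ntp.org /syncfromflags:MANUAL
PS C:\> net start w32time
PS C:\> w32tm /resync
```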
@@ -83,7 +84,7 @@ following PowerShell may be used:
 
 __ https://technet.microsoft.com/en-us/library/hh831823.aspx
 
 Enable iSCSI initiator service
-------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 To prepare the Hyper-V node to be able to attach to volumes provided by cinder
 you must first make sure the Windows iSCSI initiator service is running and
@@ -95,7 +96,7 @@ started automatically.
 
    PS C:\> Start-Service MSiSCSI
 
 Configure shared nothing live migration
----------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Detailed information on the configuration of live migration can be found in
 `this guide`__
@@ -158,7 +159,7 @@ Additional Requirements:
 
 __ https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/manage/Use-live-migration-without-Failover-Clustering-to-move-a-virtual-machine
 
 How to setup live migration on Hyper-V
---------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 To enable 'shared nothing live' migration, run the 3 instructions below on each
 Hyper-V host:
@@ -175,15 +176,16 @@ Hyper-V host:
    provide live migration.
 
 Additional Reading
-------------------
+~~~~~~~~~~~~~~~~~~
 
 This article clarifies the various live migration options in Hyper-V:
 
 `Hyper-V Live Migration of Yesterday
 <https://ariessysadmin.blogspot.ro/2012/04/hyper-v-live-migration-of-windows.html>`_
 
+
 Install nova-compute using OpenStack Hyper-V installer
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+------------------------------------------------------
 
 In case you want to avoid all the manual setup, you can use Cloudbase
 Solutions' installer. You can find it here:
@@ -201,28 +203,26 @@ its features can be found here:
 
 `Cloudbase <https://www.cloudbase.it>`_
 
+
 .. _windows-requirements:
 
 Requirements
-~~~~~~~~~~~~
+------------
 
 Python
-------
-
-Python 2.7 32bit must be installed as most of the libraries are not working
-properly on the 64bit version.
+~~~~~~
 
 **Setting up Python prerequisites**
 
-#. Download and install Python 2.7 using the MSI installer from here:
+#. Download and install Python 3.8 using the MSI installer from the `Python
+   website`__.
 
-   `python-2.7.3.msi download
-   <https://www.python.org/ftp/python/2.7.3/python-2.7.3.msi>`_
+   .. __: https://www.python.org/downloads/windows/
 
    .. code-block:: none
 
-      PS C:\> $src = "https://www.python.org/ftp/python/2.7.3/python-2.7.3.msi"
-      PS C:\> $dest = "$env:temp\python-2.7.3.msi"
+      PS C:\> $src = "https://www.python.org/ftp/python/3.8.8/python-3.8.8.exe"
+      PS C:\> $dest = "$env:temp\python-3.8.8.exe"
       PS C:\> Invoke-WebRequest -Uri $src -OutFile $dest
       PS C:\> Unblock-File $dest
       PS C:\> Start-Process $dest
@@ -233,34 +233,18 @@ properly on the 64bit version.
 
    .. code-block:: none
 
      PS C:\> $oldPath = [System.Environment]::GetEnvironmentVariable("Path")
-     PS C:\> $newPath = $oldPath + ";C:\python27\;C:\python27\Scripts\"
+     PS C:\> $newPath = $oldPath + ";C:\python38\;C:\python38\Scripts\"
      PS C:\> [System.Environment]::SetEnvironmentVariable("Path", $newPath, [System.EnvironmentVariableTarget]::User
 
 Python dependencies
--------------------
-
-The following packages need to be downloaded and manually installed:
-
-``setuptools``
-   https://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11.win32-py2.7.exe
-
-``pip``
-   https://pip.pypa.io/en/latest/installing/
-
-``PyMySQL``
-   http://codegood.com/download/10/
-
-``PyWin32``
-   https://sourceforge.net/projects/pywin32/files/pywin32/Build%20217/pywin32-217.win32-py2.7.exe
-
-``Greenlet``
-   http://www.lfd.uci.edu/~gohlke/pythonlibs/#greenlet
-
-``PyCryto``
-   http://www.voidspace.org.uk/downloads/pycrypto26/pycrypto-2.6.win32-py2.7.exe
+~~~~~~~~~~~~~~~~~~~
 
 The following packages must be installed with pip:
 
+* ``pywin32``
+* ``pymysql``
+* ``greenlet``
+* ``pycryto``
 * ``ecdsa``
 * ``amqp``
 * ``wmi``
@@ -271,8 +255,9 @@ The following packages must be installed with pip:
 
    PS C:\> pip install amqp
    PS C:\> pip install wmi
 
+
 Other dependencies
-------------------
+~~~~~~~~~~~~~~~~~~
 
 ``qemu-img`` is required for some of the image related operations. You can get
 it from here: http://qemu.weilnetz.de/. You must make sure that the
@@ -281,7 +266,7 @@ it from here: http://qemu.weilnetz.de/. You must make sure that the
 
 Some Python packages need to be compiled, so you may use MinGW or Visual
 Studio. You can get MinGW from here: http://sourceforge.net/projects/mingw/.
 You must configure which compiler is to be used for this purpose by using the
-``distutils.cfg`` file in ``$Python27\Lib\distutils``, which can contain:
+``distutils.cfg`` file in ``$Python38\Lib\distutils``, which can contain:
 
 .. code-block:: ini
 
@@ -291,29 +276,22 @@ You must configure which compiler is to be used for this purpose by using the
 
 As a last step for setting up MinGW, make sure that the MinGW binaries'
 directories are set up in PATH.
 
+
 Install nova-compute
-~~~~~~~~~~~~~~~~~~~~
+--------------------
 
 Download the nova code
-----------------------
+~~~~~~~~~~~~~~~~~~~~~~
 
 #. Use Git to download the necessary source code. The installer to run Git on
    Windows can be downloaded here:
 
-   https://github.com/msysgit/msysgit/releases/download/Git-1.9.2-preview20140411/Git-1.9.2-preview20140411.exe
+   https://gitforwindows.org/
 
 #. Download the installer. Once the download is complete, run the installer
    and follow the prompts in the installation wizard. The default should be
    acceptable for the purposes of this guide.
 
-   .. code-block:: none
-
-      PS C:\> $src = "https://github.com/msysgit/msysgit/releases/download/Git-1.9.2-preview20140411/Git-1.9.2-preview20140411.exe"
-      PS C:\> $dest = "$env:temp\Git-1.9.2-preview20140411.exe"
-      PS C:\> Invoke-WebRequest -Uri $src -OutFile $dest
-      PS C:\> Unblock-File $dest
-      PS C:\> Start-Process $dest
-
 #. Run the following to clone the nova code.
 
    .. code-block:: none
@@ -321,7 +299,7 @@ Download the nova code
 
      PS C:\> git.exe clone https://opendev.org/openstack/nova
 
 Install nova-compute service
-----------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 To install ``nova-compute``, run:
@@ -331,7 +309,7 @@ To install ``nova-compute``, run:
 
    PS C:\> python setup.py install
 
 Configure nova-compute
-----------------------
+~~~~~~~~~~~~~~~~~~~~~~
 
 The ``nova.conf`` file must be placed in ``C:\etc\nova`` for running OpenStack
 on Hyper-V. Below is a sample ``nova.conf`` for Windows:
@@ -393,7 +371,7 @@ on Hyper-V. Below is a sample ``nova.conf`` for Windows:
 
    html5_proxy_base_url = https://IP_ADDRESS:4430
 
 Prepare images for use with Hyper-V
------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Hyper-V currently supports only the VHD and VHDX file format for virtual
 machine instances. Detailed instructions for installing virtual machines on
@@ -407,8 +385,12 @@ image to `glance` using the `openstack-client`:
 
    .. code-block:: none
 
-      PS C:\> openstack image create --name "VM_IMAGE_NAME" --property hypervisor_type=hyperv --public \
-         --container-format bare --disk-format vhd
+      PS C:\> openstack image create \
+          --name "VM_IMAGE_NAME" \
+          --property hypervisor_type=hyperv \
+          --public \
+          --container-format bare \
+          --disk-format vhd
 
 .. note::
@@ -422,12 +404,12 @@ image to `glance` using the `openstack-client`:
 
       PS C:\> New-VHD DISK_NAME.vhd -SizeBytes VHD_SIZE
 
 Inject interfaces and routes
-----------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The ``interfaces.template`` file describes the network interfaces and routes
 available on your system and how to activate them. You can specify the location
-of the file with the ``injected_network_template`` configuration option in
-``/etc/nova/nova.conf``.
+of the file with the :oslo.config:option:`injected_network_template`
+configuration option in ``nova.conf``.
 
 .. code-block:: ini
@@ -436,7 +418,7 @@ of the file with the ``injected_network_template`` configuration option in
 
 A default template exists in ``nova/virt/interfaces.template``.
 
 Run Compute with Hyper-V
-------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~
 
 To start the ``nova-compute`` service, run this command from a console in the
 Windows server:
@@ -445,8 +427,9 @@ Windows server:
 
    PS C:\> C:\Python27\python.exe c:\Python27\Scripts\nova-compute --config-file c:\etc\nova\nova.conf
 
-Troubleshoot Hyper-V configuration
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Troubleshooting
+---------------
 
 * I ran the :command:`nova-manage service list` command from my controller;
   however, I'm not seeing smiley faces for Hyper-V compute nodes, what do I do?
diff --git a/doc/source/admin/configuration/hypervisor-ironic.rst b/doc/source/admin/configuration/hypervisor-ironic.rst
index 49f326db3c..bba01deffa 100644
--- a/doc/source/admin/configuration/hypervisor-ironic.rst
+++ b/doc/source/admin/configuration/hypervisor-ironic.rst
@@ -1,43 +1,43 @@
+======
 Ironic
 ======
 
 Introduction
 ------------
+
 The ironic hypervisor driver wraps the Bare Metal (ironic) API, enabling Nova
 to provision baremetal resources using the same user-facing API as for server
 management.
 
-This is the only driver in nova where one compute service can map to many hosts
-, meaning a ``nova-compute`` service can manage multiple ``ComputeNodes``. An
-ironic driver managed compute service uses the ironic ``node uuid`` for the
-compute node ``hypervisor_hostname`` (nodename) and ``uuid`` fields.
-The relationship of ``instance:compute node:ironic node`` is ``1:1:1``.
+This is the only driver in nova where one compute service can map to many
+hosts, meaning a ``nova-compute`` service can manage multiple ``ComputeNodes``.
+An ironic driver managed compute service uses the ironic ``node uuid`` for the
+compute node ``hypervisor_hostname`` (nodename) and ``uuid`` fields. The
+relationship of ``instance:compute node:ironic node`` is 1:1:1.
 
 Scheduling of bare metal nodes is based on custom resource classes, specified
 via the ``resource_class`` property on a node and a corresponding resource
-property on a flavor (see the `flavor documentation`_).
+property on a flavor (see the :ironic-doc:`flavor documentation
+</install/configure-nova-flavors.html>`).
 
 The RAM and CPU settings on a flavor are ignored, and the disk is only used to
 determine the root partition size when a partition image is used (see the
-`image documentation`_).
+:ironic-doc:`image documentation
+</latest/install/configure-glance-images.html>`).
 
-.. _flavor documentation: https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html
-.. _image documentation: https://docs.openstack.org/ironic/latest/install/configure-glance-images.html
-
 Configuration
 -------------
 
-- `Configure the Compute service to use the Bare Metal service
-  <https://docs.openstack.org/ironic/latest/install/configure-compute.html>`_.
+- :ironic-doc:`Configure the Compute service to use the Bare Metal service
+  </latest/install/configure-compute.html>`.
 
-- `Create flavors for use with the Bare Metal service
-  <https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html>`__.
+- :ironic-doc:`Create flavors for use with the Bare Metal service
+  </latest/install/configure-nova-flavors.html>`.
 
-- `Conductors Groups
-  <https://docs.openstack.org/ironic/latest/admin/conductor-groups.html>`_.
+- :ironic-doc:`Conductors Groups </admin/conductor-groups.html>`.
 
-Scaling and Performance Issues
+Scaling and performance issues
 ------------------------------
 
 - The ``update_available_resource`` periodic task reports all the resources
@@ -47,7 +47,6 @@ Scaling and Performance Issues
   :oslo.config:option:`ironic.partition_key`.
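The ironic section above explains that scheduling pairs a node's ``resource_class`` with a resource property on a flavor. A hedged CLI sketch of that pairing (node UUID, flavor name, and class name are hypothetical; the custom class name is upper-cased and prefixed ``CUSTOM_`` on the flavor side):

```console
$ openstack baremetal node set $NODE_UUID --resource-class baremetal.gold
$ openstack flavor set bm.gold \
    --property resources:CUSTOM_BAREMETAL_GOLD=1 \
    --property resources:VCPU=0 \
    --property resources:MEMORY_MB=0 \
    --property resources:DISK_GB=0
```

Zeroing ``VCPU``, ``MEMORY_MB``, and ``DISK_GB`` matches the note above that the flavor's RAM/CPU settings are ignored for bare metal scheduling.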
 
-
 Known limitations / Missing features
 ------------------------------------
diff --git a/doc/source/admin/configuration/hypervisor-kvm.rst b/doc/source/admin/configuration/hypervisor-kvm.rst
index 569e4e17be..5a453f4c44 100644
--- a/doc/source/admin/configuration/hypervisor-kvm.rst
+++ b/doc/source/admin/configuration/hypervisor-kvm.rst
@@ -2,9 +2,6 @@
 KVM
 ===
 
-.. todo:: Some of this is installation guide material and should probably be
-   moved.
-
 KVM is configured as the default hypervisor for Compute.
 
 .. note::
@@ -15,16 +12,6 @@ KVM is configured as the default hypervisor for Compute.
    on qemu-kvm, which installs ``/lib/udev/rules.d/45-qemu-kvm.rules``, which
    sets the correct permissions on the ``/dev/kvm`` device node.
 
-To enable KVM explicitly, add the following configuration options to the
-``/etc/nova/nova.conf`` file:
-
-.. code-block:: ini
-
-   compute_driver = libvirt.LibvirtDriver
-
-   [libvirt]
-   virt_type = kvm
-
 The KVM hypervisor supports the following virtual machine image formats:
 
 * Raw
@@ -35,38 +22,47 @@ The KVM hypervisor supports the following virtual machine image formats:
 
 This section describes how to enable KVM on your system. For more information,
 see the following distribution-specific documentation:
 
-* `Fedora: Virtualization Getting Started Guide <http://docs.fedoraproject.org/
-  en-US/Fedora/22/html/Virtualization_Getting_Started_Guide/index.html>`_
-  from the Fedora 22 documentation.
-* `Ubuntu: KVM/Installation <https://help.ubuntu.com/community/KVM/
-  Installation>`_ from the Community Ubuntu documentation.
-* `Debian: Virtualization with KVM <http://static.debian-handbook.info/browse/
-  stable/sect.virtualization.html#idp11279352>`_ from the Debian handbook.
-* `Red Hat Enterprise Linux: Installing virtualization packages on an existing
-  Red Hat Enterprise Linux system <http://docs.redhat.com/docs/en-US/
-  Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_
-  Installation_Guide/sect-Virtualization_Host_Configuration_and_Guest_Installa
-  tion_Guide-Host_Installation-Installing_KVM_packages_on_an_existing_Red_Hat_
-  Enterprise_Linux_system.html>`_ from the ``Red Hat Enterprise Linux
-  Virtualization Host Configuration and Guest Installation Guide``.
-* `openSUSE: Installing KVM <http://doc.opensuse.org/documentation/html/
-  openSUSE/opensuse-kvm/cha.kvm.requires.html#sec.kvm.requires.install>`_
-  from the openSUSE Virtualization with KVM manual.
-* `SLES: Installing KVM <https://www.suse.com/documentation/sles-12/book_virt/
-  data/sec_vt_installation_kvm.html>`_ from the SUSE Linux Enterprise Server
-  ``Virtualization Guide``.
+* `Fedora: Virtualization Getting Started Guide`__
+* `Ubuntu: KVM/Installation`__
+* `Debian: KVM Guide`__
+* `Red Hat Enterprise Linux (RHEL): Getting started with virtualization`__
+* `openSUSE: Setting Up a KVM VM Host Server`__
+* `SLES: Virtualization with KVM`__.
+
+.. __: https://docs.fedoraproject.org/en-US/quick-docs/getting-started-with-virtualization/
+.. __: https://help.ubuntu.com/community/KVM/Installation
+.. __: https://wiki.debian.org/KVM
+.. __: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_virtualization/getting-started-with-virtualization-in-rhel-8_configuring-and-managing-virtualization
+.. __: https://doc.opensuse.org/documentation/leap/virtualization/html/book-virt/cha-qemu-host.html
+.. __: https://documentation.suse.com/sles/11-SP4/html/SLES-all/book-kvm.html
+
+
+Configuration
+-------------
+
+To enable KVM explicitly, add the following configuration options to the
+``/etc/nova/nova.conf`` file:
+
+.. code-block:: ini
+
+   [DEFAULT]
+   compute_driver = libvirt.LibvirtDriver
+
+   [libvirt]
+   virt_type = kvm
+
 
 .. _enable-kvm:
 
 Enable KVM
-~~~~~~~~~~
+----------
 
 The following sections outline how to enable KVM based hardware virtualization
 on different architectures and platforms. To perform these steps, you must be
 logged in as the ``root`` user.
 
-For x86 based systems
----------------------
+For x86-based systems
+~~~~~~~~~~~~~~~~~~~~~
 
 #. To determine whether the ``svm`` or ``vmx`` CPU extensions are present, run
    this command:
@@ -175,8 +171,8 @@ Add these lines to ``/etc/modules`` file so that these modules load on reboot:
 
    kvm
    kvm-amd
 
-For POWER based systems
------------------------
+For POWER-based systems
+~~~~~~~~~~~~~~~~~~~~~~~
 
 KVM as a hypervisor is supported on POWER system's PowerNV platform.
@@ -224,8 +220,14 @@ KVM as a hypervisor is supported on POWER system's PowerNV platform.
 
    Because a KVM installation can change user group membership, you might need
    to log in again for changes to take effect.
 
+For AArch64-based systems
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. todo:: Populate this section.
+
+
 Configure Compute backing storage
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------------
 
 Backing Storage is the storage used to provide the expanded operating system
 image, and any ephemeral storage. Inside the virtual machine, this is normally
@@ -259,8 +261,9 @@ Local `LVM volumes
 used. Set the :oslo.config:option:`libvirt.images_volume_group` configuration
 option to the name of the LVM group you have created.
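The x86 section above checks for the ``svm`` or ``vmx`` CPU extensions. A small self-contained sketch of that check as a script (assumes the standard Linux ``/proc/cpuinfo`` layout):

```shell
# Count hardware-virtualization flags; 0 means KVM acceleration is
# unavailable, or virtualization is disabled in the system firmware.
count=$(grep -E -c '(vmx|svm)' /proc/cpuinfo 2>/dev/null)
count=${count:-0}

if [ "$count" -gt 0 ]; then
    echo "KVM hardware acceleration available ($count logical CPUs)"
else
    echo "no vmx/svm flags found; check BIOS/UEFI virtualization settings"
fi
```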
 
+
 Direct download of images from Ceph
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------------------
 
 When the Glance image service is set up with the Ceph backend and Nova is using
 a local ephemeral store (``[libvirt]/images_type!=rbd``), it is possible to
@@ -291,7 +294,7 @@ On the Nova compute node in nova.conf:
 
 Nested guest support
-~~~~~~~~~~~~~~~~~~~~
+--------------------
 
 You may choose to enable support for nested guests --- that is, allow
 your Nova instances to themselves run hardware-accelerated virtual
@@ -299,7 +302,7 @@ machines with KVM. Doing so requires a module parameter on
 your KVM kernel module, and corresponding ``nova.conf`` settings.
 
 Host configuration
-------------------
+~~~~~~~~~~~~~~~~~~
 
 To enable nested KVM guests, your compute node must load the
 ``kvm_intel`` or ``kvm_amd`` module with ``nested=1``. You can enable
@@ -315,7 +318,7 @@ content:
 
 A reboot may be required for the change to become effective.
 
 Nova configuration
-------------------
+~~~~~~~~~~~~~~~~~~
 
 To support nested guests, you must set your
 :oslo.config:option:`libvirt.cpu_mode` configuration to one of the following
@@ -366,7 +369,7 @@ Custom (``custom``)
 
 More information on CPU models can be found in :doc:`/admin/cpu-models`.
 
 Limitations
------------
+~~~~~~~~~~~
 
 When enabling nested guests, you should be aware of (and inform your
 users about) certain limitations that are currently inherent to nested
@@ -380,8 +383,9 @@ See `the KVM documentation
 <https://www.linux-kvm.org/page/Nested_Guests#Limitations>`_ for more
 information on these limitations.
 
+
 Guest agent support
-~~~~~~~~~~~~~~~~~~~
+-------------------
 
 Use guest agents to enable optional access between compute nodes and
 guests through a socket, using the QMP protocol.
 
 To enable this feature, you must set ``hw_qemu_guest_agent=yes`` as a metadata
 parameter on the image you wish to use to create the guest-agent-capable
 instances from. You can explicitly disable the feature by setting
 ``hw_qemu_guest_agent=no`` in the image metadata.
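The nested-guest host configuration above loads ``kvm_intel`` or ``kvm_amd`` with ``nested=1``. A minimal sketch to verify the current setting on a host (the sysfs paths are standard; which module is present depends on the CPU vendor):

```shell
# Report the nested-virtualization parameter for whichever KVM module
# is loaded; prints nothing for a module that is not present.
for m in kvm_intel kvm_amd; do
    f="/sys/module/$m/parameters/nested"
    if [ -r "$f" ]; then
        echo "$m nested=$(cat "$f")"
    fi
done
status="check complete"
echo "$status"
```

On recent kernels the parameter reads ``Y``/``N`` rather than ``1``/``0``; either form indicates whether nested guests can be started.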
 
+
 KVM performance tweaks
-~~~~~~~~~~~~~~~~~~~~~~
+----------------------
 
 The `VHostNet <http://www.linux-kvm.org/page/VhostNet>`_ kernel module improves
 network performance. To load the kernel module, run the following command as
@@ -402,8 +407,9 @@ root:
 
    # modprobe vhost_net
 
-Troubleshoot KVM
-~~~~~~~~~~~~~~~~
+
+Troubleshooting
+---------------
 
 Trying to launch a new virtual machine instance fails with the ``ERROR`` state,
 and the following error appears in the ``/var/log/nova/nova-compute.log`` file:
diff --git a/doc/source/admin/configuration/hypervisor-lxc.rst b/doc/source/admin/configuration/hypervisor-lxc.rst
index eb8d51f83e..bc0988ccf6 100644
--- a/doc/source/admin/configuration/hypervisor-lxc.rst
+++ b/doc/source/admin/configuration/hypervisor-lxc.rst
@@ -24,11 +24,17 @@ LXC than other hypervisors.
    the hypervisor. See the `hypervisor support matrix
    <http://wiki.openstack.org/HypervisorSupportMatrix>`_ for details.
 
-To enable LXC, ensure the following options are set in ``/etc/nova/nova.conf``
-on all hosts running the ``nova-compute`` service.
+
+Configuration
+-------------
+
+To enable LXC, configure :oslo.config:option:`DEFAULT.compute_driver` =
+``libvirt.LibvirtDriver`` and :oslo.config:option:`libvirt.virt_type` =
+``lxc``. For example:
 
 .. code-block:: ini
 
+   [DEFAULT]
    compute_driver = libvirt.LibvirtDriver
 
    [libvirt]
diff --git a/doc/source/admin/configuration/hypervisor-powervm.rst b/doc/source/admin/configuration/hypervisor-powervm.rst
index a0898ddeae..a2947ff608 100644
--- a/doc/source/admin/configuration/hypervisor-powervm.rst
+++ b/doc/source/admin/configuration/hypervisor-powervm.rst
@@ -1,8 +1,10 @@
+=======
 PowerVM
 =======
 
 Introduction
 ------------
+
 OpenStack Compute supports the PowerVM hypervisor through `NovaLink`_. In the
 NovaLink architecture, a thin NovaLink virtual machine running on the Power
 system manages virtualization for that system. The ``nova-compute`` service
@@ -12,22 +14,27 @@ Management Console) is needed.
 
 .. _NovaLink: https://www.ibm.com/support/knowledgecenter/en/POWER8/p8eig/p8eig_kickoff.htm
 
+
 Configuration
 -------------
+
 In order to function properly, the ``nova-compute`` service must be executed by
 a member of the ``pvm_admin`` group. Use the ``usermod`` command to add the
-user. For example, to add the ``stacker`` user to the ``pvm_admin`` group, execute::
+user. For example, to add the ``stacker`` user to the ``pvm_admin`` group, execute:
+
+.. code-block:: console
 
-   sudo usermod -a -G pvm_admin stacker
+   # usermod -a -G pvm_admin stacker
 
 The user must re-login for the change to take effect.
 
-To enable the PowerVM compute driver, set the following configuration option
-in the ``/etc/nova/nova.conf`` file:
+To enable the PowerVM compute driver, configure
+:oslo.config:option:`DEFAULT.compute_driver` = ``powervm.PowerVMDriver``. For
+example:
 
 .. code-block:: ini
 
-   [Default]
+   [DEFAULT]
    compute_driver = powervm.PowerVMDriver
 
 The PowerVM driver supports two types of storage for ephemeral disks:
@@ -59,9 +66,10 @@ processor, whereas 0.05 means 1/20th of a physical processor. E.g.:
 
 Volume Support
 --------------
+
 Volume support is provided for the PowerVM virt driver via Cinder. Currently,
-the only supported volume protocol is `vSCSI`_ Fibre Channel. Attach, detach,
+the only supported volume protocol is `vSCSI`__ Fibre Channel. Attach, detach,
 and extend are the operations supported by the PowerVM vSCSI FC volume adapter.
 :term:`Boot From Volume` is not yet supported.
 
-.. _vSCSI: https://www.ibm.com/support/knowledgecenter/en/POWER8/p8hat/p8hat_virtualscsi.htm
+.. __: https://www.ibm.com/support/knowledgecenter/en/POWER8/p8hat/p8hat_virtualscsi.htm
diff --git a/doc/source/admin/configuration/hypervisor-qemu.rst b/doc/source/admin/configuration/hypervisor-qemu.rst
index 6849b89c28..6cc72b04ae 100644
--- a/doc/source/admin/configuration/hypervisor-qemu.rst
+++ b/doc/source/admin/configuration/hypervisor-qemu.rst
@@ -19,17 +19,24 @@
 The typical uses cases for QEMU are development or testing purposes, where the
 hypervisor does not support native virtualization for guests.
 
-To enable QEMU, add these settings to ``nova.conf``:
+
+Configuration
+-------------
+
+To enable QEMU, configure :oslo.config:option:`DEFAULT.compute_driver` =
+``libvirt.LibvirtDriver`` and :oslo.config:option:`libvirt.virt_type` =
+``qemu``. For example:
 
 .. code-block:: ini
 
+   [DEFAULT]
    compute_driver = libvirt.LibvirtDriver
 
    [libvirt]
    virt_type = qemu
 
-For some operations you may also have to install the
-:command:`guestmount` utility:
+For some operations you may also have to install the :command:`guestmount`
+utility:
 
 On Ubuntu:
diff --git a/doc/source/admin/configuration/hypervisor-virtuozzo.rst b/doc/source/admin/configuration/hypervisor-virtuozzo.rst
index 13c63daba6..354818949e 100644
--- a/doc/source/admin/configuration/hypervisor-virtuozzo.rst
+++ b/doc/source/admin/configuration/hypervisor-virtuozzo.rst
@@ -12,11 +12,17 @@ image.
 
 Some OpenStack Compute features may be missing when running with Virtuozzo
 as the hypervisor. See :doc:`/user/support-matrix` for details.
 
-To enable Virtuozzo Containers, set the following options in
-``/etc/nova/nova.conf`` on all hosts running the ``nova-compute`` service.
+
+Configuration
+-------------
+
+To enable LXC, configure :oslo.config:option:`DEFAULT.compute_driver` =
+``libvirt.LibvirtDriver`` and :oslo.config:option:`libvirt.virt_type` =
+``parallels``. For example:
 
 .. code-block:: ini
 
+   [DEFAULT]
    compute_driver = libvirt.LibvirtDriver
    force_raw_images = False
@@ -31,6 +37,7 @@ To enable Virtuozzo Virtual Machines, set the following options in
 
 .. code-block:: ini
 
+   [DEFAULT]
    compute_driver = libvirt.LibvirtDriver
 
    [libvirt]
diff --git a/doc/source/admin/configuration/hypervisor-vmware.rst b/doc/source/admin/configuration/hypervisor-vmware.rst
index 5b3a66e11e..9de1d0c2ae 100644
--- a/doc/source/admin/configuration/hypervisor-vmware.rst
+++ b/doc/source/admin/configuration/hypervisor-vmware.rst
@@ -3,7 +3,7 @@ VMware vSphere
 ==============
 
 Introduction
-~~~~~~~~~~~~
+------------
 
 OpenStack Compute supports the VMware vSphere product family and enables access
 to advanced features such as vMotion, High Availability, and Dynamic Resource
@@ -23,8 +23,9 @@ vSphere features.
 
 The following sections describe how to configure the VMware vCenter driver.
 
+
 High-level architecture
-~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------
 
 The following diagram shows a high-level view of the VMware driver
 architecture:
@@ -59,8 +60,9 @@ configure OpenStack resources such as VMs through the OpenStack dashboard.
 
 The figure does not show how networking fits into the architecture. For
 details, see :ref:`vmware-networking`.
 
+
 Configuration overview
-~~~~~~~~~~~~~~~~~~~~~~
+----------------------
 
 To get started with the VMware vCenter driver, complete the following
 high-level steps:
@@ -77,7 +79,7 @@ high-level steps:
 
 .. _vmware-prereqs:
 
 Prerequisites and limitations
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------------
 
 Use the following list to prepare a vSphere environment that runs with the
 VMware vCenter driver:
@@ -142,8 +144,9 @@ assigned to a separate availability zone. This is required as the OpenStack
 Block Storage VMDK driver does not currently work across multiple vCenter
 installations.
 
+
 VMware vCenter service account
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+------------------------------
 
 OpenStack integration requires a vCenter service account with the following
 minimum permissions. Apply the permissions to the ``Datacenter`` root object,
@@ -414,10 +417,11 @@ and select the :guilabel:`Propagate to Child Objects` option.
 
      - Import
 
-
+
 .. _vmware-vcdriver:
 
 VMware vCenter driver
-~~~~~~~~~~~~~~~~~~~~~
+---------------------
 
 Use the VMware vCenter driver (VMwareVCDriver) to connect OpenStack Compute
 with vCenter. This recommended configuration enables access through vCenter to
@@ -425,7 +429,7 @@ advanced vSphere features like vMotion, High Availability, and Dynamic Resource
 Scheduling (DRS).
 
 VMwareVCDriver configuration options
-------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Add the following VMware-specific configuration options to the ``nova.conf``
 file:
@@ -478,10 +482,11 @@ against host failures.
 
 Many ``nova.conf`` options are relevant to libvirt but do not apply to this
 driver.
 
+
 .. _vmware-images:
 
 Images with VMware vSphere
-~~~~~~~~~~~~~~~~~~~~~~~~~~
+--------------------------
 
 The vCenter driver supports images in the VMDK format. Disks in this format can
 be obtained from VMware Fusion or from an ESX environment. It is also possible
@@ -492,7 +497,7 @@ sections provide additional details on the supported disks and
 the commands used for conversion and upload.
 
 Supported image types
----------------------
+~~~~~~~~~~~~~~~~~~~~~
 
 Upload images to the OpenStack Image service in VMDK format. The following
 VMDK disk types are supported:
@@ -745,7 +750,7 @@ of the supported guest OS:
 
    - Windows XP Professional
 
 Convert and load images
------------------------
+~~~~~~~~~~~~~~~~~~~~~~~
 
 Using the ``qemu-img`` utility, disk images in several formats (such as, qcow2)
 can be converted to the VMDK format.
@@ -806,7 +811,7 @@ is lsiLogic, which is SCSI, so you can omit the ``vmware_adaptertype`` property
 if you are certain that the image adapter type is lsiLogic.
 
 Tag VMware images
------------------
+~~~~~~~~~~~~~~~~~
 
 In a mixed hypervisor environment, OpenStack Compute uses the
 ``hypervisor_type`` tag to match images to the correct hypervisor type. For
@@ -826,7 +831,7 @@ Note that ``qemu`` is used for both QEMU and KVM hypervisor types.
       ubuntu-thick-scsi < ubuntuLTS-flat.vmdk
 
 Optimize images
---------------
+~~~~~~~~~~~~~~~
 
 Monolithic Sparse disks are considerably faster to download but have the
 overhead of an additional conversion step. When imported into ESX, sparse disks
@@ -885,7 +890,7 @@ In the previous cases, the converted vmdk is actually a pair of files:
 
 The file to be uploaded to the Image service is ``converted-flat.vmdk``.
 
 Image handling
--------------
+~~~~~~~~~~~~~~
 
 The ESX hypervisor requires a copy of the VMDK file in order to boot up a
 virtual machine. As a result, the vCenter OpenStack Compute driver must
@@ -899,7 +904,7 @@ Image service.
 
 Even with a cached VMDK, there is still a copy operation from the cache
 location to the hypervisor file directory in the shared data store. To avoid
 this copy, boot the image in linked_clone mode. To learn how to enable this
-mode, see :ref:`vmware-config`.
+mode, see :oslo.config:option:`vmware.use_linked_clone`.
 
 .. note::
@@ -929,10 +934,11 @@ section in the ``nova.conf`` file:
 
 * :oslo.config:option:`image_cache.remove_unused_base_images`
 * :oslo.config:option:`image_cache.remove_unused_original_minimum_age_seconds`
 
+
 .. _vmware-networking:
 
 Networking with VMware vSphere
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+------------------------------
 
 The VMware driver supports networking with the Networking Service (neutron).
 Depending on your installation, complete these configuration steps before you
@@ -943,8 +949,9 @@ provision VMs:
 
    ``br-int``).
    All VM NICs are attached to this port group for management by the OpenStack
    Networking plug-in.
 
+
 Volumes with VMware vSphere
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------
 
 The VMware driver supports attaching volumes from the Block Storage service.
 The VMware VMDK driver for OpenStack Block Storage is recommended and should be
@@ -954,159 +961,9 @@ this has not yet been imported and published).
 
 Also an iSCSI volume driver provides limited support and can be used only for
 attachments.
 
-.. _vmware-config:
-
-Configuration reference
-~~~~~~~~~~~~~~~~~~~~~~~
-
-To customize the VMware driver, use the configuration option settings below.
-
-.. TODO(sdague): for the import we just copied this in from the auto generated
-   file. We probably need a strategy for doing equivalent autogeneration, but
-   we don't as of yet.
-
-   Warning: Do not edit this file. It is automatically generated from the
-   software project's code and your changes will be overwritten.
-
-   The tool to generate this file lives in openstack-doc-tools repository.
-
-   Please make any changes needed in the code, then run the
-   autogenerate-config-doc tool from the openstack-doc-tools repository, or
-   ask for help on the documentation mailing list, IRC channel or meeting.
-
-.. _nova-vmware:
-
-.. list-table:: Description of VMware configuration options
-   :header-rows: 1
-   :class: config-ref-table
-
-   * - Configuration option = Default value
-     - Description
-   * - **[vmware]**
-     -
-   * - ``api_retry_count`` = ``10``
-     - (Integer) Number of times VMware vCenter server API must be retried on connection failures, e.g. socket error, etc.
-   * - ``ca_file`` = ``None``
-     - (String) Specifies the CA bundle file to be used in verifying the vCenter server certificate.
-   * - ``cache_prefix`` = ``None``
-     - (String) This option adds a prefix to the folder where cached images are stored
-
-       This is not the full path - just a folder prefix.
-       This should only be used when a datastore cache is shared between compute nodes.
-
-       .. note::
-
-          This should only be used when the compute nodes are running on same host or they have a shared file system.
-
-       Possible values:
-
-       * Any string representing the cache prefix to the folder
-   * - ``cluster_name`` = ``None``
-     - (String) Name of a VMware Cluster ComputeResource.
-   * - ``console_delay_seconds`` = ``None``
-     - (Integer) Set this value if affected by an increased network latency causing repeated characters when typing in a remote console.
-   * - ``datastore_regex`` = ``None``
-     - (String) Regular expression pattern to match the name of datastore.
-
-       The datastore_regex setting specifies the datastores to use with Compute. For example, datastore_regex="nas.*" selects all the data stores that have a name starting with "nas".
-
-       .. note::
-
-          If no regex is given, it just picks the datastore with the most freespace.
-
-       Possible values:
-
-       * Any matching regular expression to a datastore must be given
-   * - ``host_ip`` = ``None``
-     - (String) Hostname or IP address for connection to VMware vCenter host.
-   * - ``host_password`` = ``None``
-     - (String) Password for connection to VMware vCenter host.
-   * - ``host_port`` = ``443``
-     - (Port number) Port for connection to VMware vCenter host.
-   * - ``host_username`` = ``None``
-     - (String) Username for connection to VMware vCenter host.
-   * - ``insecure`` = ``False``
-     - (Boolean) If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification.
-
-       Related options:
-
-       * ca_file: This option is ignored if "ca_file" is set.
-   * - ``integration_bridge`` = ``None``
-     - (String) This option should be configured only when using the NSX-MH Neutron plugin. This is the name of the integration bridge on the ESXi server or host. This should not be set for any other Neutron plugin. Hence the default value is not set.
-
-       Possible values:
-
-       * Any valid string representing the name of the integration bridge
-   * - ``maximum_objects`` = ``100``
-     - (Integer) This option specifies the limit on the maximum number of objects to return in a single result.
-
-       A positive value will cause the operation to suspend the retrieval when the count of objects reaches the specified limit. The server may still limit the count to something less than the configured value. Any remaining objects may be retrieved with additional requests.
-   * - ``pbm_default_policy`` = ``None``
-     - (String) This option specifies the default policy to be used.
-
-       If pbm_enabled is set and there is no defined storage policy for the specific request, then this policy will be used.
-
-       Possible values:
-
-       * Any valid storage policy such as VSAN default storage policy
-
-       Related options:
-
-       * pbm_enabled
-   * - ``pbm_enabled`` = ``False``
-     - (Boolean) This option enables or disables storage policy based placement of instances.
-
-       Related options:
-
-       * pbm_default_policy
-   * - ``pbm_wsdl_location`` = ``None``
-     - (String) This option specifies the PBM service WSDL file location URL.
-
-       Setting this will disable storage policy based placement of instances.
-
-       Possible values:
-
-       * Any valid file path e.g file:///opt/SDK/spbm/wsdl/pbmService.wsdl
-   * - ``serial_port_proxy_uri`` = ``None``
-     - (String) Identifies a proxy service that provides network access to the serial_port_service_uri.
-
-       Possible values:
-
-       * Any valid URI
-
-       Related options: This option is ignored if serial_port_service_uri is not specified.
-
-       * serial_port_service_uri
-   * - ``serial_port_service_uri`` = ``None``
-     - (String) Identifies the remote system where the serial port traffic will be sent.
-
-       This option adds a virtual serial port which sends console output to a configurable service URI. At the service URI address there will be virtual serial port concentrator that will collect console logs.
-       If this is not set, no serial ports will be added to the created VMs.
-
-       Possible values:
-
-       * Any valid URI
-   * - ``task_poll_interval`` = ``0.5``
-     - (Floating point) Time interval in seconds to poll remote tasks invoked on VMware VC server.
-   * - ``use_linked_clone`` = ``True``
-     - (Boolean) This option enables/disables the use of linked clone.
-
-       The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual machine. The compute driver must download the VMDK via HTTP from the OpenStack Image service to a datastore that is visible to the hypervisor and cache it. Subsequent virtual machines that need the VMDK use the cached version and don't have to copy the file again from the OpenStack Image service.
-
-       If set to false, even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor file directory in the shared datastore. If set to true, the above copy operation is avoided as it creates copy of the virtual machine that shares virtual disks with its parent VM.
-   * - ``wsdl_location`` = ``None``
-     - (String) This option specifies VIM Service WSDL Location
-
-       If vSphere API versions 5.1 and later is being used, this section can be ignored. If version is less than 5.1, WSDL files must be hosted locally and their location must be specified in the above section.
-
-       Optional over-ride to default location for bug work-arounds.
-
-       Possible values:
-
-       * http://<server>/vimService.wsdl
-
-       * file:///opt/stack/vmware/SDK/wsdl/vim25/vimService.wsdl
 
 Troubleshooting
-~~~~~~~~~~~~~~~
+---------------
 
 Operators can troubleshoot VMware specific failures by correlating OpenStack
 logs to vCenter logs.
 Every RPC call which is made by an OpenStack driver has
diff --git a/doc/source/admin/configuration/hypervisor-zvm.rst b/doc/source/admin/configuration/hypervisor-zvm.rst
index cc7c0d5834..1915206b99 100644
--- a/doc/source/admin/configuration/hypervisor-zvm.rst
+++ b/doc/source/admin/configuration/hypervisor-zvm.rst
@@ -1,8 +1,9 @@
+===
 zVM
 ===
 
 z/VM System Requirements
-~~~~~~~~~~~~~~~~~~~~~~~~
+------------------------
 
 * The appropriate APARs installed, the current list of which can be found: z/VM
   OpenStack Cloud Information (http://www.vm.ibm.com/sysman/osmntlvl.html).
@@ -12,15 +13,16 @@ z/VM System Requirements
 IBM z Systems hardware requirements are based on both the applications and the
 load on the system.
 
+
 Active Engine Guide
-~~~~~~~~~~~~~~~~~~~
+-------------------
 
 Active engine is used as an initial configuration and management tool during
 deployed machine startup. Currently the z/VM driver uses ``zvmguestconfigure``
 and ``cloud-init`` as a two stage active engine.
 
 Installation and Configuration of zvmguestconfigure
---------------------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Cloudlib4zvm supports initiating changes to a Linux on z Systems virtual
 machine while Linux is shut down or the virtual machine is logged off.
@@ -37,7 +39,7 @@ cloudlib4zvm service to the reader of the virtual machine as a class X file.
 The cloud-init AE relies on tailoring performed by ``zvmguestconfigure``.
 
 Installation and Configuration of cloud-init
--------------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 OpenStack uses cloud-init as its activation engine. Some Linux distributions
 include cloud-init either already installed or available to be installed.
@@ -62,14 +64,15 @@ During cloud-init installation, some dependency packages may be required.
 
 You can use zypper and python setuptools to easily resolve these dependencies.
 See https://pypi.python.org/pypi/setuptools for more information.
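The cloud-init section above mentions resolving dependencies with zypper and python setuptools on distributions that do not package cloud-init. One hedged way that installation might look is sketched below; the package name, source-tree step, and systemd unit names are assumptions that vary by distribution and cloud-init version.

```console
# zypper install python-setuptools
# cd cloud-init-<version> && python setup.py install
# systemctl enable cloud-init-local.service cloud-init.service \
    cloud-config.service cloud-final.service
```

On sysvinit-based images, the equivalent would be ``chkconfig``-style enablement of the same stages.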
+
 Image guide
-~~~~~~~~~~~
+-----------
 
 This guideline will describe the requirements and steps to create and
 configure images for use with z/VM.
 
 Image Requirements
------------------
+~~~~~~~~~~~~~~~~~~
 
 * The following Linux distributions are supported for deploy:
diff --git a/doc/source/admin/configuration/hypervisors.rst b/doc/source/admin/configuration/hypervisors.rst
index 39fde98f31..ed913b083f 100644
--- a/doc/source/admin/configuration/hypervisors.rst
+++ b/doc/source/admin/configuration/hypervisors.rst
@@ -5,7 +5,6 @@ Hypervisors
 .. toctree::
    :maxdepth: 1
 
-   hypervisor-basics
    hypervisor-kvm
    hypervisor-qemu
    hypervisor-lxc
@@ -43,8 +42,7 @@ The following hypervisors are supported:
   on the Windows virtualization platform.
 
 * `Virtuozzo`_ 7.0.0 and newer - OS Containers and Kernel-based Virtual
-  Machines supported via libvirt virt_type=parallels. The supported formats
-  include ploop and qcow2 images.
+  Machines supported. The supported formats include ploop and qcow2 images.
 
 * `PowerVM`_ - Server virtualization with IBM PowerVM for AIX, IBM i, and Linux
   workloads on the Power Systems platform.
@@ -55,7 +53,6 @@ The following hypervisors are supported:
 * `Ironic`_ - OpenStack project which provisions bare metal (as opposed to
   virtual) machines.
 
-
 Nova supports hypervisors via virt drivers. Nova has the following in tree
 virt drivers:
@@ -80,7 +77,6 @@ virt drivers:
   This driver does not spawn any virtual machines and therefore should only be
   used during testing.
 
-
 .. _KVM: https://www.linux-kvm.org/page/Main_Page
 .. _LXC: https://linuxcontainers.org
 .. _QEMU: https://wiki.qemu.org/Manual
diff --git a/doc/source/admin/configuration/index.rst b/doc/source/admin/configuration/index.rst
index 5441d67081..d91c72f9de 100644
--- a/doc/source/admin/configuration/index.rst
+++ b/doc/source/admin/configuration/index.rst
@@ -1,6 +1,6 @@
-===============
- Configuration
-===============
+=============
+Configuration
+=============
 
 To configure your Compute installation, you must define configuration options
 in these files: