Install and configure a compute node for openSUSE and SUSE Linux Enterprise
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the Compute service on a
compute node. The service supports several hypervisors to deploy instances or
virtual machines (VMs). For simplicity, this configuration uses the Quick
EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute
nodes that support hardware acceleration for virtual machines.  On legacy
hardware, this configuration uses the generic QEMU hypervisor.  You can follow
these instructions with minor modifications to horizontally scale your
environment with additional compute nodes.

.. note::

   This section assumes that you are following the instructions in this guide
   step-by-step to configure the first compute node. If you want to configure
   additional compute nodes, prepare them in a similar fashion to the first
   compute node in the :ref:`example architectures
   <overview-example-architectures>` section. Each additional compute node
   requires a unique IP address.

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

#. Install the packages:

   .. code-block:: console

      # zypper install openstack-nova-compute genisoimage qemu-kvm libvirt

#. Edit the ``/etc/nova/nova.conf`` file and complete the following actions:

   * In the ``[DEFAULT]`` section, enable only the compute and metadata APIs:

     .. path /etc/nova/nova.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        enabled_apis = osapi_compute,metadata

   * In the ``[DEFAULT]`` section, set the ``compute_driver``:

     .. path /etc/nova/nova.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        compute_driver = libvirt.LibvirtDriver

   * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access:

     .. path /etc/nova/nova.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller

     Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
     account in ``RabbitMQ``.

   * In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity
     service access:

     .. path /etc/nova/nova.conf
     .. code-block:: ini

        [api]
        # ...
        auth_strategy = keystone

        [keystone_authtoken]
        # ...
        www_authenticate_uri = http://controller:5000/
        auth_url = http://controller:5000/
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = Default
        user_domain_name = Default
        project_name = service
        username = nova
        password = NOVA_PASS

     Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in
     the Identity service.

     .. note::

        Comment out or remove any other options in the ``[keystone_authtoken]``
        section.

   * In the ``[DEFAULT]`` section, configure the ``my_ip`` option:

     .. path /etc/nova/nova.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

     Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address of the
     management network interface on your compute node, typically ``10.0.0.31``
     for the first node in the :ref:`example architecture
     <overview-example-architectures>`.

   * Configure the ``[neutron]`` section of ``/etc/nova/nova.conf``. Refer to
     the :neutron-doc:`Networking service install guide
     <install/compute-install-obs.html>` for more details.
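     The full set of options is covered in that guide; as a representative
     sketch, the section typically looks like the following, where
     ``NEUTRON_PASS`` is a placeholder for the password chosen for the
     ``neutron`` user in the Identity service:

     .. path /etc/nova/nova.conf
     .. code-block:: ini

        [neutron]
        # ...
        auth_url = http://controller:5000
        auth_type = password
        project_domain_name = Default
        user_domain_name = Default
        region_name = RegionOne
        project_name = service
        username = neutron
        password = NEUTRON_PASS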

   * In the ``[vnc]`` section, enable and configure remote console access:

     .. path /etc/nova/nova.conf
     .. code-block:: ini

        [vnc]
        # ...
        enabled = true
        server_listen = 0.0.0.0
        server_proxyclient_address = $my_ip
        novncproxy_base_url = http://controller:6080/vnc_auto.html

     The server component listens on all IP addresses, and the proxy
     component listens only on the management interface IP address of
     the compute node. The base URL indicates the location where you
     can use a web browser to access remote consoles of instances
     on this compute node.

     .. note::

        If the web browser to access remote consoles resides on
        a host that cannot resolve the ``controller`` hostname,
        you must replace ``controller`` with the management
        interface IP address of the controller node.

   * In the ``[glance]`` section, configure the location of the Image service
     API:

     .. path /etc/nova/nova.conf
     .. code-block:: ini

        [glance]
        # ...
        api_servers = http://controller:9292

   * In the ``[oslo_concurrency]`` section, configure the lock path:

     .. path /etc/nova/nova.conf
     .. code-block:: ini

        [oslo_concurrency]
        # ...
        lock_path = /var/run/nova

   * In the ``[placement]`` section, configure the Placement API:

     .. path /etc/nova/nova.conf
     .. code-block:: ini

        [placement]
        # ...
        region_name = RegionOne
        project_domain_name = Default
        project_name = service
        auth_type = password
        user_domain_name = Default
        auth_url = http://controller:5000/v3
        username = placement
        password = PLACEMENT_PASS

     Replace ``PLACEMENT_PASS`` with the password you chose for the
     ``placement`` user in the Identity service. Comment out any other options
     in the ``[placement]`` section.

#. Ensure the kernel module ``nbd`` is loaded:

   .. code-block:: console

      # modprobe nbd

#. Ensure the module loads on every boot by adding ``nbd`` to the
   ``/etc/modules-load.d/nbd.conf`` file.
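
   One way to accomplish this, assuming a systemd-based system (which applies
   to openSUSE and SLE, where ``systemd`` reads ``/etc/modules-load.d/`` at
   boot), is:

   .. code-block:: console

      # echo nbd > /etc/modules-load.d/nbd.conf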

Finalize installation
---------------------

#. Determine whether your compute node supports hardware acceleration for
   virtual machines:

   .. code-block:: console

      $ egrep -c '(vmx|svm)' /proc/cpuinfo

   If this command returns a value of ``1`` or greater, your compute node
   supports hardware acceleration, which typically requires no additional
   configuration.

   If this command returns ``0``, your compute node does not support hardware
   acceleration, and you must configure ``libvirt`` to use QEMU instead of
   KVM.

   * Edit the ``[libvirt]`` section in the ``/etc/nova/nova.conf`` file as
     follows:

     .. path /etc/nova/nova.conf
     .. code-block:: ini

        [libvirt]
        # ...
        virt_type = qemu

#. Start the Compute service, including its dependencies, and configure them
   to start automatically when the system boots:

   .. code-block:: console

      # systemctl enable libvirtd.service openstack-nova-compute.service
      # systemctl start libvirtd.service openstack-nova-compute.service
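
   As an optional verification step, you can confirm that both services came
   up cleanly:

   .. code-block:: console

      # systemctl status libvirtd.service openstack-nova-compute.service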

.. note::

   If the ``nova-compute`` service fails to start, check
   ``/var/log/nova/nova-compute.log``. The error message ``AMQP server on
   controller:5672 is unreachable`` likely indicates that the firewall on the
   controller node is preventing access to port 5672. Configure the firewall
   to open port 5672 on the controller node and restart the ``nova-compute``
   service on the compute node.
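
   For example, on a controller node that uses ``firewalld``, the port could
   be opened as follows (a sketch; adapt this to whichever firewall tooling
   your controller node actually runs):

   .. code-block:: console

      # firewall-cmd --permanent --add-port=5672/tcp
      # firewall-cmd --reload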

Add the compute node to the cell database
-----------------------------------------

.. important::

   Run the following commands on the **controller** node.

#. Source the admin credentials to enable admin-only CLI commands, then confirm
   there are compute hosts in the database:

   .. code-block:: console

      $ . admin-openrc

      $ openstack compute service list --service nova-compute
      +----+-------+--------------+------+-------+---------+----------------------------+
      | ID | Host  | Binary       | Zone | State | Status  | Updated At                 |
      +----+-------+--------------+------+-------+---------+----------------------------+
      | 1  | node1 | nova-compute | nova | up    | enabled | 2017-04-14T15:30:44.000000 |
      +----+-------+--------------+------+-------+---------+----------------------------+

#. Discover compute hosts:

   .. code-block:: console

      # su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

      Found 2 cell mappings.
      Skipping cell0 since it does not contain hosts.
      Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
      Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
      Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
      Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3

   .. note::

      When you add new compute nodes, you must run ``nova-manage cell_v2
      discover_hosts`` on the controller node to register those new compute
      nodes. Alternatively, you can set an appropriate discovery interval
      (in seconds) in ``/etc/nova/nova.conf``:

      .. code-block:: ini

         [scheduler]
         discover_hosts_in_cells_interval = 300