=============================
 Deploying with ``mkcephfs``
=============================

Enable Login to Cluster Hosts as ``root``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To deploy with ``mkcephfs``, you will need to be able to log in as ``root``
on each host without a password. For each host, perform the following::

	sudo passwd root

Enter a password for the root user. 
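
Note that on some distributions the SSH daemon disallows ``root`` logins by
default. If that is the case on your hosts, you may also need to enable
``PermitRootLogin`` in ``/etc/ssh/sshd_config`` and restart the SSH daemon.
A minimal sketch (the service name varies by distribution)::

	# assumes GNU sed; adjust for your distribution
	sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
	sudo service ssh restart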

On the admin host, generate an ``ssh`` key without specifying a passphrase
and use the default locations. :: 

	ssh-keygen
	Generating public/private key pair.
	Enter file in which to save the key (/ceph-admin/.ssh/id_rsa): 
	Enter passphrase (empty for no passphrase): 
	Enter same passphrase again: 
	Your identification has been saved in /ceph-admin/.ssh/id_rsa.
	Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.

You may use RSA or DSA keys. Once you generate your keys, copy the public key
to each OSD host. For example::

	ssh-copy-id root@myserver01
	ssh-copy-id root@myserver02	
	
Modify your ``~/.ssh/config`` file to log in as ``root``, as follows::

	Host myserver01
		Hostname myserver01.fully-qualified-domain.com
		User root
	Host myserver02
		Hostname myserver02.fully-qualified-domain.com
		User root
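
With the keys copied and the ``config`` entries in place, you should be able to
reach each host as ``root`` without a password prompt. A quick check, using the
example hostnames above::

	# each command should print "root" without prompting for a password
	ssh myserver01 whoami
	ssh myserver02 whoami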

Copy Configuration File to All Hosts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ceph's ``mkcephfs`` deployment script does not copy the configuration file you
created from the Administration host to the OSD Cluster hosts. If you are using
``mkcephfs`` to deploy Ceph, copy the configuration file you created (*e.g.,*
``mycluster.conf``) from the Administration host to ``/etc/ceph/ceph.conf`` on
each OSD Cluster host. The example below assumes the configuration file already
resides at ``/etc/ceph/ceph.conf`` on the Administration host.

::

	cd /etc/ceph
	ssh myserver01 sudo tee /etc/ceph/ceph.conf <ceph.conf
	ssh myserver02 sudo tee /etc/ceph/ceph.conf <ceph.conf
	ssh myserver03 sudo tee /etc/ceph/ceph.conf <ceph.conf
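
To confirm that each host received the same file, you can compare each remote
copy against the local one. A sketch, using the example hostnames::

	# diff prints nothing when the remote and local files match
	ssh myserver01 cat /etc/ceph/ceph.conf | diff - ceph.conf
	ssh myserver02 cat /etc/ceph/ceph.conf | diff - ceph.conf
	ssh myserver03 cat /etc/ceph/ceph.conf | diff - ceph.conf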


Create the Default Directories
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``mkcephfs`` deployment script does not create the default server directories.
Create a server directory for each instance of a Ceph daemon. The ``host``
variables in the ``ceph.conf`` file determine which host runs each instance of
a Ceph daemon. Using the example ``ceph.conf`` file (a minimal sketch of such a
file follows the directory listings below), you would perform the following:

On ``myserver01``::

	sudo mkdir /srv/osd.0
	sudo mkdir /srv/mon.a

On ``myserver02``::

	sudo mkdir /srv/osd.1
	sudo mkdir /srv/mon.b

On ``myserver03``::

	sudo mkdir /srv/osd.2
	sudo mkdir /srv/mon.c
	sudo mkdir /srv/mds.a
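
For reference, a minimal ``ceph.conf`` consistent with the hosts and directories
above might look like the following. This is only a sketch: the hostnames,
``/srv`` paths, and monitor addresses are assumptions drawn from this example,
not required values. ::

	; sketch only -- hostnames, paths, and addresses are example values
	[global]
		auth supported = cephx

	[mon]
		mon data = /srv/mon.$id

	[osd]
		osd data = /srv/osd.$id

	[mon.a]
		host = myserver01
		mon addr = 10.0.0.101:6789

	[mon.b]
		host = myserver02
		mon addr = 10.0.0.102:6789

	[mon.c]
		host = myserver03
		mon addr = 10.0.0.103:6789

	[osd.0]
		host = myserver01

	[osd.1]
		host = myserver02

	[osd.2]
		host = myserver03

	[mds.a]
		host = myserver03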

Run ``mkcephfs``
~~~~~~~~~~~~~~~~
Once you have copied your Ceph configuration file to the OSD Cluster hosts
and created the default directories, you may deploy Ceph with the
``mkcephfs`` script.

.. note::  ``mkcephfs`` is a quick bootstrapping tool. It does not handle more 
           complex operations, such as upgrades.

For production environments, deploy Ceph using Chef cookbooks. To run 
``mkcephfs``, execute the following:: 

   cd /etc/ceph
   sudo mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring
	
The script adds an admin key to the ``ceph.keyring``, which is analogous to a 
root password. See `Authentication`_ when running with ``cephx`` enabled.
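
If you want to inspect the keys that ``mkcephfs`` generated, ``ceph-authtool``
can list the contents of the keyring (the path below matches the ``-k``
argument used above)::

	sudo ceph-authtool --list /etc/ceph/ceph.keyring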

When you start or stop your cluster, you will not have to use ``sudo`` or
provide passwords. For example:: 

	service ceph -a start

See `Start | Stop the Cluster`_ for details.
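
Once the daemons are running, you can check the state of the cluster with the
``ceph`` tool; you may need ``sudo`` so that it can read the admin keyring. For
example::

	ceph health

The command reports ``HEALTH_OK`` once all placement groups are active and clean.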


.. _Authentication: ../authentication
.. _Start | Stop the Cluster: ../../init/