..
      Copyright 2011-2012 OpenStack Foundation
      All Rights Reserved.

      Licensed under the Apache License, Version 2.0 (the "License"); you may
      not use this file except in compliance with the License. You may obtain
      a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
      WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
      License for the specific language governing permissions and limitations
      under the License.

====================
Configuring Keystone
====================

.. toctree::
   :maxdepth: 1

   man/keystone-manage
   man/keystone-all

Once Keystone is installed, it is configured via a primary configuration file
(``etc/keystone.conf``), a PasteDeploy configuration file
(``etc/keystone-paste.ini``), and possibly a separate logging configuration
file. Data is then initialized into Keystone using the command line client.

By default, Keystone starts a service on `IANA-assigned port 35357
<http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt>`_.
This may overlap with your system's ephemeral port range, so another process
may already be using this port without being explicitly configured to do so. To
prevent this scenario from occurring, it's recommended that you explicitly
exclude port 35357 from the available ephemeral port range. On a Linux system,
this would be accomplished by:

.. code-block:: bash

    $ sysctl -w 'net.ipv4.ip_local_reserved_ports=35357'

To make the above change persistent, `net.ipv4.ip_local_reserved_ports = 35357`
should be added to ``/etc/sysctl.conf`` or to ``/etc/sysctl.d/keystone.conf``.
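
For example, on a Linux host this could be done as follows (the file path is
the illustrative one mentioned above):

.. code-block:: bash

    $ echo 'net.ipv4.ip_local_reserved_ports = 35357' | sudo tee /etc/sysctl.d/keystone.conf
    $ sudo sysctl -p /etc/sysctl.d/keystone.conf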

Starting and Stopping Keystone
==============================

Start Keystone services using the command:

.. code-block:: bash

    $ keystone-all

Invoking this command starts up two ``wsgi.Server`` instances, ``admin`` (the
administration API) and ``main`` (the primary/public API interface). Both
services are configured to run in a single process.

Stop the process using ``Control-C``.

.. NOTE::

    If you have not already configured Keystone, it may not start as expected.


Configuration Files
===================

The Keystone configuration files use an ``ini`` file format based on Paste_, a
common system used to configure Python WSGI based applications.
The PasteDeploy configuration entries (WSGI pipeline definitions)
can be provided in a separate ``keystone-paste.ini`` file, while general and
driver-specific configuration parameters are in the primary configuration file
``keystone.conf``. The primary configuration file is organized into the
following sections:

* ``[DEFAULT]`` - General configuration
* ``[assignment]`` - Assignment system driver configuration
* ``[auth]`` - Authentication plugin configuration
* ``[cache]`` - Caching layer configuration
* ``[catalog]`` - Service catalog driver configuration
* ``[credential]`` - Credential system driver configuration
* ``[ec2]`` - Amazon EC2 authentication driver configuration
* ``[endpoint_filter]`` - Endpoint filtering extension configuration
* ``[endpoint_policy]`` - Endpoint policy extension configuration
* ``[federation]`` - Federation driver configuration
* ``[identity]`` - Identity system driver configuration
* ``[identity_mapping]`` - Identity mapping system driver configuration
* ``[kvs]`` - KVS storage backend configuration
* ``[ldap]`` - LDAP configuration options
* ``[memcache]`` - Memcache configuration options
* ``[oauth1]`` - OAuth 1.0a system driver configuration
* ``[os_inherit]`` - Inherited role assignment extension
* ``[paste_deploy]`` - Pointer to the PasteDeploy configuration file
* ``[policy]`` - Policy system driver configuration for RBAC
* ``[revoke]`` - Revocation system driver configuration
* ``[saml]`` - SAML configuration options
* ``[signing]`` - Cryptographic signatures for PKI based tokens
* ``[ssl]`` - SSL configuration
* ``[token]`` - Token driver & token provider configuration
* ``[trust]`` - Trust extension configuration

The Keystone primary configuration file is expected to be named ``keystone.conf``.
When starting Keystone, you can specify a different configuration file to
use with ``--config-file``. If you do **not** specify a configuration file,
Keystone will look in the following directories for a configuration file, in
order:

* ``~/.keystone/``
* ``~/``
* ``/etc/keystone/``
* ``/etc/``
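
For example, to start the services with an explicit configuration file (the
path shown is illustrative):

.. code-block:: bash

    $ keystone-all --config-file /etc/keystone/keystone.conf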

The PasteDeploy configuration file is specified by the ``config_file`` parameter
in the ``[paste_deploy]`` section of the primary configuration file. If the
parameter is not an absolute path, then Keystone looks for it in the same
directories as above. If it is not specified, WSGI pipeline definitions are
loaded from the primary configuration file.
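
For example, a relative path such as the following causes Keystone to search
the directories listed above for ``keystone-paste.ini``:

.. code-block:: ini

    [paste_deploy]
    config_file = keystone-paste.ini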

Domain-specific Drivers
-----------------------

.. NOTE::

    This functionality is new in Juno.

Keystone supports the option (disabled by default) to specify identity driver
configurations on a domain by domain basis, allowing, for example, a specific
domain to have its own LDAP or SQL server. This is configured by specifying the
following options:

.. code-block:: ini

 [identity]
 domain_specific_drivers_enabled = True
 domain_config_dir = /etc/keystone/domains

Setting ``domain_specific_drivers_enabled`` to ``True`` will enable this
feature, causing Keystone to look in the ``domain_config_dir`` for config files
of the form::

 keystone.<domain_name>.conf

Options given in the domain specific configuration file will override those in
the primary configuration file for the specified domain only. Domains without a
specific configuration file will continue to use the options from the primary
configuration file.
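
As an illustration, a domain named ``domainA`` backed by its own LDAP server
might be configured with a file ``/etc/keystone/domains/keystone.domainA.conf``
along these lines (the driver class and server details are assumptions for this
sketch; see the LDAP section later in this document for the full set of options):

.. code-block:: ini

    [identity]
    driver = keystone.identity.backends.ldap.Identity

    [ldap]
    url = ldap://ldap.domaina.example.com
    suffix = dc=domaina,dc=example,dc=com
    user_tree_dn = ou=Users,dc=domaina,dc=example,dc=com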

.. NOTE::

    Keystone does not support moving the contents of a domain (i.e. its
    users and groups) from one backend to another, nor group membership across
    backend boundaries.

.. NOTE::

    Although Keystone supports multiple LDAP backends via domain specific
    configuration files, it currently only supports one SQL backend. This
    could be either the default driver or a single domain-specific backend,
    perhaps for storing service users in a predominantly LDAP installation.

Due to the need for user and group IDs to be unique across an OpenStack
installation and for Keystone to be able to deduce which domain and backend to
use from just a user or group ID, it dynamically builds a persistent identity
mapping table from a public ID to the actual domain, local ID (within that
backend) and entity type. The public ID is automatically generated by Keystone
when it first encounters the entity.  If the local ID of the entity is from
a backend that does not guarantee to generate UUIDs, a hash algorithm will
generate a public ID for that entity, which is what will be exposed by
Keystone.

The use of a hash will ensure that if the public ID needs to be regenerated
then the same public ID will be created.  This is useful if you are running
multiple keystones and want to ensure the same ID would be generated whichever
server you hit.

While Keystone will dynamically maintain the identity mapping, including
removing entries when entities are deleted via Keystone, it will not know
about deletions of entities in backends that are managed outside of Keystone
(e.g. a read-only LDAP), and hence will continue to carry stale identity
mappings in its table. While these stale entries are benign, Keystone provides
the ability for operators to purge the mapping table of them using the
``keystone-manage`` command, for example:

.. code-block:: bash

    $ keystone-manage mapping_purge --domain-name DOMAINA --local-id abc@de.com

A typical usage would be for an operator to obtain a list of those entries
in an external backend that had been deleted out-of-band to Keystone, and then
call keystone-manage to purge those entries by specifying the domain and
local-id.  The type of the entity (i.e. user or group) may also be specified
if this is needed to uniquely identify the mapping.

Since public IDs can be regenerated **with the correct generator
implementation**, if the details of those entries that have been deleted are
not available, it is safe to simply bulk purge identity mappings periodically,
for example:

.. code-block:: bash

    $ keystone-manage mapping_purge --domain-name DOMAINA

will purge all the mappings for DOMAINA. The entire mapping table can be
purged with the following command:

.. code-block:: bash

    $ keystone-manage mapping_purge --all

Public ID Generators
--------------------

Keystone supports a customizable public ID generator, which is specified in the
``[identity_mapping]`` section of the configuration file. Keystone provides a
sha256 generator as the default, which produces regeneratable public IDs. The
generator algorithm for public IDs is a balance between key size (i.e. the
length of the public ID), the probability of collision and, in some
circumstances, the security of the public ID.  The maximum length of public ID
supported by Keystone is 64 characters, and the default generator (sha256) uses
this full capability.  Since the public ID is what is exposed externally by
Keystone and potentially stored in external systems, some installations may
wish to make use of other generator algorithms that have a different trade-off
of attributes. A different generator can be installed by configuring the
following property:

* ``generator`` - identity mapping generator. Defaults to
  ``keystone.identity.generators.sha256.Generator``
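
For example, stating the default explicitly in ``keystone.conf``:

.. code-block:: ini

    [identity_mapping]
    generator = keystone.identity.generators.sha256.Generator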

.. WARNING::

    Changing the generator may cause all existing public IDs to become
    invalid, so typically the generator selection should be considered
    immutable for a given installation.

Authentication Plugins
----------------------

.. NOTE::

    This feature is only supported by Keystone for the Identity API v3 clients.

Keystone supports authentication plugins and they are specified
in the ``[auth]`` section of the configuration file. However, an
authentication plugin may also have its own section in the configuration
file. It is up to the plugin to register its own configuration options.

* ``methods`` - comma-delimited list of authentication plugin names
* ``<plugin name>`` - the class which handles the authentication method,
  specified in the same manner as one would specify a backend driver.

Keystone provides three authentication methods by default. ``password`` handles password
authentication and ``token`` handles token authentication.  ``external`` is used in conjunction
with authentication performed by a container web server that sets the ``REMOTE_USER``
environment variable. For more details, refer to :doc:`External Authentication
<external-auth>`.
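
A minimal ``[auth]`` section enabling the three default methods might therefore
look like the following sketch (the per-method class lines are optional
overrides and the class paths shown are assumptions; omit them to accept the
defaults):

.. code-block:: ini

    [auth]
    methods = external,password,token
    password = keystone.auth.plugins.password.Password
    token = keystone.auth.plugins.token.Token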

How to Implement an Authentication Plugin
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

All authentication plugins must extend the
``keystone.auth.core.AuthMethodHandler`` class and implement the
``authenticate()`` method. The ``authenticate()`` method expects the
following parameters.

* ``context`` - Keystone's request context
* ``auth_payload`` - the content of the authentication for a given method
* ``auth_context`` - user authentication context, a dictionary shared by all
  plugins. It contains ``method_names`` and ``extras`` by default.
  ``method_names`` is a list and ``extras`` is a dictionary.

If successful, the ``authenticate()`` method must provide a valid ``user_id``
in ``auth_context`` and return ``None``. ``method_names`` is used to convey
any additional authentication methods in case authentication is for re-scoping.
For example, if the authentication is for re-scoping, a plugin must append
the previous method names into ``method_names``. Also, a plugin may add any
additional information into ``extras``. Anything in ``extras`` will be
conveyed in the token's ``extras`` field.

If authentication requires multiple steps, the ``authenticate()`` method must
return the payload in the form of a dictionary for the next authentication
step.

If authentication is unsuccessful, the ``authenticate()`` method must raise a
``keystone.exception.Unauthorized`` exception.

Simply add the new plugin name to the ``methods`` list along with your plugin
class configuration in the ``[auth]`` sections of the configuration file
to deploy it.

If the plugin requires additional configuration, it may register its own section
in the configuration file.
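
For example, deploying a hypothetical custom plugin (all names below are
placeholders, not classes shipped with Keystone) would look like:

.. code-block:: ini

    [auth]
    methods = password,token,custom_method
    custom_method = your_package.auth.plugins.CustomMethod

    [custom_method]
    # options registered by the plugin itself
    some_option = some_value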

Plugins are invoked in the order in which they are specified in the ``methods``
attribute of the ``authentication`` request body. If multiple plugins are
invoked, all plugins must succeed in order for the entire
authentication to be successful. Furthermore, all the plugins invoked must
agree on the ``user_id`` in the ``auth_context``.

The ``REMOTE_USER`` environment variable is only set from a containing web server.
However, to ensure that a user must go through other authentication mechanisms,
even if this variable is set, remove ``external`` from the list of plugins
specified in ``methods``. This effectively disables external authentication.
For more details, refer to :doc:`External Authentication <external-auth>`.


Token Persistence Driver
------------------------

Keystone supports customizable token persistence drivers. These can be specified
in the ``[token]`` section of the configuration file. Keystone provides three
non-test persistence backends. These can be set with the ``[token]\driver``
configuration option.

The drivers Keystone provides are:

* ``keystone.token.persistence.backends.sql.Token`` - The SQL-based (default)
  token persistence engine. This backend stores all token data in the same SQL
  store that is used for Identity/Assignment/etc.

* ``keystone.token.persistence.backends.memcache.Token`` - The memcached based
  token persistence backend. This backend relies on ``dogpile.cache`` and stores
  the token data in a set of memcached servers. The servers urls are specified
  in the ``[memcache]\servers`` configuration option in the Keystone config.

* ``keystone.token.persistence.backends.memcache_pool.Token`` - The pooled memcached
  token persistence engine. This backend supports the concept of pooled memcache
  client object (allowing for the re-use of the client objects). This backend has
  a number of extra tunable options in the ``[memcache]`` section of the config.
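
For example, to select the pooled memcached backend (the server address is
illustrative):

.. code-block:: ini

    [token]
    driver = keystone.token.persistence.backends.memcache_pool.Token

    [memcache]
    servers = localhost:11211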


.. WARNING::
    It is recommended you use the ``keystone.token.persistence.backends.memcache_pool.Token``
    backend instead of ``keystone.token.persistence.backends.memcache.Token`` as the token
    persistence driver if you are deploying Keystone under eventlet instead of
    Apache + mod_wsgi. This recommendation is due to known issues with the use of
    ``thread.local`` under eventlet that can allow the leaking of memcache client objects
    and consumption of extra sockets.


Token Provider
--------------

Keystone supports a customizable token provider, which is specified in the
``[token]`` section of the configuration file. Keystone provides both UUID and
PKI token providers. However, users may register their own token provider by
configuring the following property.

* ``provider`` - token provider driver. Defaults to
  ``keystone.token.providers.uuid.Provider``

Note that ``token_format`` in the ``[signing]`` section is deprecated but still
being supported for backward compatibility. Therefore, if ``provider`` is set
to ``keystone.token.providers.pki.Provider``, ``token_format`` must be ``PKI``.
Conversely, if ``provider`` is ``keystone.token.providers.uuid.Provider``,
``token_format`` must be ``UUID``.

For a customized provider, ``token_format`` must not be set to ``PKI`` or ``UUID``.
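
For example, to switch to the PKI provider while keeping the deprecated
``token_format`` consistent with it:

.. code-block:: ini

    [token]
    provider = keystone.token.providers.pki.Provider

    [signing]
    token_format = PKI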

PKI or UUID?
^^^^^^^^^^^^

UUID-based tokens are randomly generated opaque strings that are issued and
validated by the identity service. They must be persisted by the identity
service in order to be later validated, and revoking them is simply a matter of
deleting them from the token persistence backend.

PKI-based tokens are Cryptographic Message Syntax (CMS) strings that can be
verified offline using keystone's public signing key. The only reason for them
to be persisted by the identity service is to later build token revocation
lists (explicit lists of tokens that have been revoked), otherwise they are
theoretically ephemeral. PKI tokens should therefore have much better scaling
characteristics (decentralized validation). They are base-64 encoded (and are
therefore not URL-friendly without encoding) and may be too long to fit in
either headers or URLs if they contain extensive service catalogs or other
additional attributes.

.. WARNING::
    Both UUID and PKI-based tokens are bearer tokens, meaning that they must
    be protected from unnecessary disclosure to prevent unauthorized access.

The current architectural approaches for both UUID and PKI-based tokens have
pain points exposed by environments under heavy load (search bugs and
blueprints for the latest details and potential solutions).

Caching Layer
-------------

Keystone supports a caching layer that is above the configurable subsystems (e.g. ``token``,
``identity``, etc).  Keystone uses the `dogpile.cache`_ library which allows for flexible
cache backends. The majority of the caching configuration options are set in the ``[cache]``
section.  However, each section that has the capability to be cached usually has a ``caching``
boolean value that will toggle caching for that specific section.  The current default
behavior is that subsystem caching is enabled, but the global toggle is set to disabled.

``[cache]`` configuration section:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* ``enabled`` - enables/disables caching across all of keystone
* ``debug_cache_backend`` - enables more in-depth logging from the cache backend (get, set, delete, etc)
* ``backend`` - the caching backend module to use e.g. ``dogpile.cache.memcached``

    .. NOTE::
        A given ``backend`` must be registered with ``dogpile.cache`` before it
        can be used.  The default backend is the ``Keystone`` no-op backend
        (``keystone.common.cache.noop``). If caching is desired a different backend will
        need to be specified.  Current functional backends are:

    * ``dogpile.cache.memcached`` - Memcached backend using the standard `python-memcached`_ library
    * ``dogpile.cache.pylibmc`` - Memcached backend using the `pylibmc`_ library
    * ``dogpile.cache.bmemcached`` - Memcached using `python-binary-memcached`_ library.
    * ``dogpile.cache.redis`` - `Redis`_ backend
    * ``dogpile.cache.dbm`` - local DBM file backend
    * ``dogpile.cache.memory`` - in-memory cache
    * ``keystone.cache.mongo`` - MongoDB as caching backend
    * ``keystone.cache.memcache_pool`` - An eventlet safe implementation of ``dogpile.cache.memcached``.
                                         This implementation also provides client connection re-use.

        .. WARNING::
            ``dogpile.cache.memory`` is not suitable for use outside of unit testing
            as it does not clean up its internal cache on cache expiration, does
            not provide isolation to the cached data (values in the store can be
            inadvertently changed without extra layers of data protection added),
            and does not share cache between processes.  This means that caching
            and cache invalidation will not be consistent or reliable
            when using ``Keystone`` and the ``dogpile.cache.memory`` backend under
            any real workload.

        .. WARNING::
            Do not use ``dogpile.cache.memcached`` backend if you are deploying
            Keystone under eventlet. There are known issues with the use of ``thread.local``
            under eventlet that can allow the leaking of memcache client objects and
            consumption of extra sockets.

* ``expiration_time`` - int, the default length of time to cache a specific value. A value of ``0``
    indicates to not cache anything.  It is recommended that the ``enabled`` option be used to disable
    cache instead of setting this to ``0``.
* ``backend_argument`` - an argument passed to the backend when instantiated
    ``backend_argument`` should be specified once per argument to be passed to the
    back end and in the format of ``<argument name>:<argument value>``.
    e.g.: ``backend_argument = host:localhost``
* ``proxies`` - comma delimited list of `ProxyBackends`_ e.g. ``my.example.Proxy, my.example.Proxy2``
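
Putting these options together, a sketch of a caching configuration might look
like the following (the backend choice, its argument, and the timeouts are
illustrative; the per-subsystem ``caching`` and ``cache_time`` options are
described below):

.. code-block:: ini

    [cache]
    enabled = True
    backend = dogpile.cache.memcached
    backend_argument = url:localhost:11211
    expiration_time = 600

    [token]
    caching = True
    cache_time = 300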

Current Keystone systems that have caching capabilities:
    * ``token``
        The token system has a separate ``cache_time`` configuration option, that
        can be set to a value above or below the global ``expiration_time`` default,
        allowing for different caching behavior from the other systems in ``Keystone``.
        This option is set in the ``[token]`` section of the configuration file.

        The Token Revocation List cache time is handled by the configuration option
        ``revocation_cache_time`` in the ``[token]`` section.  The revocation
        list is refreshed whenever a token is revoked. It typically sees significantly
        more requests than specific token retrievals or token validation calls.
    * ``assignment``
        The assignment system has a separate ``cache_time`` configuration option,
        that can be set to a value above or below the global ``expiration_time``
        default, allowing for different caching behavior from the other systems in
        ``Keystone``.  This option is set in the ``[assignment]`` section of the
        configuration file.

        Currently ``assignment`` has caching for ``project``, ``domain``, and ``role``
        specific requests (primarily around the CRUD actions).  Caching is currently not
        implemented on grants.  The list (``list_projects``, ``list_domains``, etc)
        methods are not subject to caching.

        .. WARNING::
            Be aware that if a read-only ``assignment`` backend is in use, the cache
            will not immediately reflect changes on the back end.  Any given change
            may take up to the ``cache_time`` (if set in the ``[assignment]``
            section of the configuration) or the global ``expiration_time`` (set in
            the ``[cache]`` section of the configuration) before it is reflected.
            If this type of delay (when using a read-only ``assignment`` backend) is
            an issue, it is recommended that caching be disabled on ``assignment``.
            To disable caching specifically on ``assignment``, in the ``[assignment]``
            section of the configuration set ``caching`` to ``False``.

For more information about the different backends (and configuration options):
    * `dogpile.cache.backends.memory`_
    * `dogpile.cache.backends.memcached`_
    * `dogpile.cache.backends.redis`_
    * `dogpile.cache.backends.file`_
    * :py:mod:`keystone.common.cache.backends.mongo`

.. _`dogpile.cache`: http://dogpilecache.readthedocs.org/en/latest/
.. _`python-memcached`: http://www.tummy.com/software/python-memcached/
.. _`pylibmc`: http://sendapatch.se/projects/pylibmc/index.html
.. _`python-binary-memcached`: https://github.com/jaysonsantos/python-binary-memcached
.. _`Redis`: http://redis.io/
.. _`dogpile.cache.backends.memory`: http://dogpilecache.readthedocs.org/en/latest/api.html#memory-backend
.. _`dogpile.cache.backends.memcached`: http://dogpilecache.readthedocs.org/en/latest/api.html#memcached-backends
.. _`dogpile.cache.backends.redis`: http://dogpilecache.readthedocs.org/en/latest/api.html#redis-backends
.. _`dogpile.cache.backends.file`: http://dogpilecache.readthedocs.org/en/latest/api.html#file-backends
.. _`ProxyBackends`: http://dogpilecache.readthedocs.org/en/latest/api.html#proxy-backends
.. _`PyMongo API`: http://api.mongodb.org/python/current/api/pymongo/index.html


Certificates for PKI
--------------------

PKI stands for Public Key Infrastructure. Tokens are documents
cryptographically signed using the X509 standard. In order to work correctly,
token generation requires a public/private key pair. The public key must be
signed in an X509 certificate, and the certificate used to sign it must be
available as a Certificate Authority (CA) certificate. These files can either
be generated using the ``keystone-manage`` utility or obtained externally.

``keystone-manage pki_setup`` is a development tool. We recommend that you do
not use ``keystone-manage pki_setup`` in a production environment. In
production, an external CA should be used instead. This is because the CA
secret key should generally be kept apart from the token signing secret keys
so that a compromise of a node does not lead to an attacker being able to
generate valid signed Keystone tokens. This is a low probability attack
vector, as compromise of a Keystone service machine's filesystem security
almost certainly means the attacker will be able to gain direct access to the
token backend.

The files need to be in the locations specified by the top level Keystone
configuration file as specified in the above section.  Additionally, the
private key should only be readable by the system user that will run Keystone.
The values that specify where to read the certificates are under the
``[signing]`` section of the configuration file.  The configuration values are:

* ``token_format`` - Determines the algorithm used to generate tokens.  Can be
  either ``UUID`` or ``PKI``. Defaults to ``PKI``. This option must be used in
  conjunction with ``provider`` configuration in the ``[token]`` section.
* ``certfile`` - Location of certificate used to verify tokens.  Default is
  ``/etc/keystone/ssl/certs/signing_cert.pem``
* ``keyfile`` - Location of private key used to sign tokens.  Default is
  ``/etc/keystone/ssl/private/signing_key.pem``
* ``ca_certs`` - Location of certificate for the authority that issued the
  above certificate. Default is ``/etc/keystone/ssl/certs/ca.pem``
* ``ca_key`` - Default is ``/etc/keystone/ssl/private/cakey.pem``
* ``key_size`` - Default is ``2048``
* ``valid_days`` - Default is ``3650``
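
For example, an explicit ``[signing]`` section that simply restates the
defaults listed above:

.. code-block:: ini

    [signing]
    certfile = /etc/keystone/ssl/certs/signing_cert.pem
    keyfile = /etc/keystone/ssl/private/signing_key.pem
    ca_certs = /etc/keystone/ssl/certs/ca.pem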

Signing Certificate Issued by External CA
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You may use a signing certificate issued by an external CA instead of one
generated by ``keystone-manage``. However, a certificate issued by an external
CA must satisfy the following conditions:

* all certificate and key files must be in Privacy Enhanced Mail (PEM) format
* private key files must not be protected by a password

When using a signing certificate issued by an external CA, you do not need to
specify ``key_size``, ``valid_days``, or ``ca_key``, as they will be ignored.

The basic workflow for using a signing certificate issued by an external CA involves:

1. `Request Signing Certificate from External CA`_
2. Convert certificate and private key to PEM if needed
3. `Install External Signing Certificate`_


Request Signing Certificate from External CA
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

One way to request a signing certificate from an external CA is to first
generate a PKCS #10 certificate signing request (CSR) using the OpenSSL CLI.

First create a certificate request configuration file (e.g. ``cert_req.conf``):

.. code-block:: ini

    [ req ]
    default_bits            = 2048
    default_keyfile         = keystonekey.pem
    default_md              = default

    prompt                  = no
    distinguished_name      = distinguished_name

    [ distinguished_name ]
    countryName             = US
    stateOrProvinceName     = CA
    localityName            = Sunnyvale
    organizationName        = OpenStack
    organizationalUnitName  = Keystone
    commonName              = Keystone Signing
    emailAddress            = keystone@openstack.org

Then generate a CSR with the OpenSSL CLI. **Do not encrypt the generated
private key; you must use the -nodes option.**

For example:

.. code-block:: bash

    $ openssl req -newkey rsa:2048 -keyout signing_key.pem -keyform PEM -out signing_cert_req.pem -outform PEM -config cert_req.conf -nodes


If everything is successful, you should end up with ``signing_cert_req.pem``
and ``signing_key.pem``. Send ``signing_cert_req.pem`` to your CA to request a
token signing certificate, and make sure to ask for the certificate in PEM
format. Also make sure your trusted CA certificate chain is in PEM format.


Install External Signing Certificate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Assuming you have the following already:

* ``signing_cert.pem`` - (Keystone token) signing certificate in PEM format
* ``signing_key.pem`` - corresponding (non-encrypted) private key in PEM format
* ``cacert.pem`` - trusted CA certificate chain in PEM format

Copy the above to your certificate directory. For example:

.. code-block:: bash

    $ mkdir -p /etc/keystone/ssl/certs
    $ cp signing_cert.pem /etc/keystone/ssl/certs/
    $ cp signing_key.pem /etc/keystone/ssl/certs/
    $ cp cacert.pem /etc/keystone/ssl/certs/
    $ chmod -R 700 /etc/keystone/ssl/certs

**Make sure the certificate directory is root-protected.**

If your certificate directory path is different from the default
``/etc/keystone/ssl/certs``, make sure it is reflected in the
``[signing]`` section of the configuration file.


Service Catalog
---------------

Keystone provides two configuration options for your service catalog.

SQL-based Service Catalog (``sql.Catalog``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A dynamic database-backed driver fully supporting persistent configuration.

``keystone.conf`` example:

.. code-block:: ini

    [catalog]
    driver = keystone.catalog.backends.sql.Catalog

.. NOTE::

    A `template_file` does not need to be defined for the sql.Catalog driver.

To build your service catalog using this driver, see the built-in help:

.. code-block:: bash

    $ openstack --help
    $ openstack help service create
    $ openstack help endpoint create

You can also refer to `an example in Keystone (tools/sample_data.sh)
<https://github.com/openstack/keystone/blob/master/tools/sample_data.sh>`_.

File-based Service Catalog (``templated.Catalog``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The templated catalog is an in-memory backend initialized from a read-only
``template_file``. Choose this option only if you know that your
service catalog will not change very much over time.

.. NOTE::

    Attempting to change your service catalog against this driver will result in
    ``HTTP 501 Not Implemented`` errors. This is the expected behavior. If you
    want to use these commands, you must instead use the SQL-based Service
    Catalog driver.

``keystone.conf`` example:

.. code-block:: ini

    [catalog]
    driver = keystone.catalog.backends.templated.Catalog
    template_file = /opt/stack/keystone/etc/default_catalog.templates

The value of ``template_file`` is expected to be an absolute path to your
service catalog configuration. An example ``template_file`` is included in
Keystone, however you should create your own to reflect your deployment.

Another such example is `available in devstack
(files/default_catalog.templates)
<https://github.com/openstack-dev/devstack/blob/master/files/default_catalog.templates>`_.

Logging
-------

Logging is configured externally to the rest of Keystone. Configure the path
to your logging configuration file using the ``[DEFAULT] log_config`` option of
``keystone.conf``. If you wish to route all your logging through syslog, set
the ``[DEFAULT] use_syslog`` option.
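
For example (the path is illustrative):

.. code-block:: ini

    [DEFAULT]
    log_config = /etc/keystone/logging.conf
    # alternatively, route everything through syslog:
    # use_syslog = True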

A sample ``log_config`` file is included with the project at
``etc/logging.conf.sample``. Like other OpenStack projects, Keystone uses the
`Python logging module`, which includes extensive configuration options for
choosing the output levels and formats.

.. _Paste: http://pythonpaste.org/
.. _`Python logging module`: http://docs.python.org/library/logging.html

SSL
---

Keystone may be configured to support SSL and 2-way SSL out-of-the-box.
The X509 certificates used by Keystone can be generated by keystone-manage or
obtained externally and configured for use with Keystone as described in this
section.
The following describes each of these certificates and their purpose:

Types of certificates
^^^^^^^^^^^^^^^^^^^^^

* ``cacert.pem``: Certificate Authority chain to validate against.
* ``ssl_cert.pem``: Public certificate for Keystone server.
* ``middleware.pem``: Public and private certificate for Keystone middleware/client.
* ``cakey.pem``: Private key for the CA.
* ``ssl_key.pem``: Private key for the Keystone server.

Note that you may choose whatever names you want for these certificates, or combine
the public/private keys in the same file if you wish.  These certificates are just
provided as an example.

Configuration
^^^^^^^^^^^^^

To enable SSL, modify the ``etc/keystone.conf`` file under the ``[ssl]``
section. An example SSL configuration using the included sample certificates:

.. code-block:: ini

    [ssl]
    enable = True
    certfile = <path to keystone.pem>
    keyfile = <path to keystonekey.pem>
    ca_certs = <path to ca.pem>
    ca_key = <path to cakey.pem>
    cert_required = False

* ``enable``:  True enables SSL.  Defaults to False.
* ``certfile``:  Path to Keystone public certificate file.
* ``keyfile``:  Path to Keystone private certificate file.  If the private key is included in the certfile, the keyfile may be omitted.
* ``ca_certs``:  Path to CA trust chain.
* ``cert_required``:  Requires client certificate.  Defaults to False.

When generating SSL certificates, the following values are read:

* ``key_size``: Key size to create. Defaults to 1024.
* ``valid_days``: How long the certificate is valid for. Defaults to 3650 (10 years).
* ``ca_key``: The private key for the CA. Defaults to ``/etc/keystone/ssl/certs/cakey.pem``.
* ``cert_subject``: The subject to set in the certificate. Defaults to ``/C=US/ST=Unset/L=Unset/O=Unset/CN=localhost``.
  When setting the subject, it is important to set CN to be the address of the server so that client validation will
  succeed. This generally means having the subject be at least ``/CN=<keystone ip>``.

Generating SSL certificates
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Certificates for secure HTTP communication can be generated by:

.. code-block:: bash

    $ keystone-manage ssl_setup

This will create a private key, a public key, and a certificate that will be
used to encrypt communications with Keystone. In the event that a Certificate
Authority is not given, a testing one will be created.

It is likely that in a production environment these certificates will be
created and provided externally. Note that ``ssl_setup`` is a development tool
and is only recommended for development environments. We do not recommend
using ``ssl_setup`` for production environments.


User CRUD
---------

Keystone provides a user CRUD filter that can be added to the ``public_api``
pipeline. This user CRUD filter allows users to use an HTTP PATCH to change
their own password. To enable this extension you should define a
``user_crud_extension`` filter, insert it after the ``*_body`` middleware
and before the ``public_service`` app in the ``public_api`` WSGI pipeline in
``keystone-paste.ini``, e.g.:

.. code-block:: ini

    [filter:user_crud_extension]
    paste.filter_factory = keystone.contrib.user_crud:CrudExtension.factory

    [pipeline:public_api]
    pipeline = url_normalize token_auth admin_token_auth xml_body json_body debug ec2_extension user_crud_extension public_service

Each user can then change their own password with an HTTP PATCH:

.. code-block:: bash

    $ curl -X PATCH http://localhost:5000/v2.0/OS-KSCRUD/users/<userid> -H "Content-type: application/json"  \
    -H "X_Auth_Token: <authtokenid>" -d '{"user": {"password": "ABCD", "original_password": "DCBA"}}'

In addition to changing their password, all of the user's current tokens will
be deleted (if the backend being used is SQL).


Inherited Role Assignment Extension
-----------------------------------

Keystone provides an optional extension that adds the capability to assign
roles to a domain that, rather than affect the domain itself, are instead
inherited to all projects owned by that domain.  This extension is disabled by
default, but can be enabled by including the following in ``keystone.conf``:

.. code-block:: ini

    [os_inherit]
    enabled = True


Token Binding
-------------

Token binding refers to the practice of embedding information from external
authentication providers (like a company's Kerberos server) inside the token
such that a client may enforce that the token only be used in conjunction with
that specified authentication. This is an additional security mechanism as it
means that if a token is stolen it will not be usable without also providing the
external authentication.

To activate token binding you must specify the types of authentication that
token binding should be used for in ``keystone.conf`` e.g.:

.. code-block:: ini

    [token]
    bind = kerberos

Currently only ``kerberos`` is supported.

To enforce checking of token binding the ``enforce_token_bind`` parameter
should be set to one of the following modes:

* ``disabled`` disable token bind checking
* ``permissive`` enable bind checking, if a token is bound to a mechanism that
  is unknown to the server then ignore it. This is the default.
* ``strict`` enable bind checking, if a token is bound to a mechanism that is
  unknown to the server then this token should be rejected.
* ``required`` enable bind checking and require that at least 1 bind mechanism
  is used for tokens.
* named enable bind checking and require that the specified authentication
  mechanism is used. e.g.:

  .. code-block:: ini

    [token]
    enforce_token_bind = kerberos

  *Do not* set ``enforce_token_bind = named`` as there is not an authentication
  mechanism called ``named``.

Limiting the number of entities returned in a collection
--------------------------------------------------------

Keystone provides a method of setting a limit to the number of entities
returned in a collection, which is useful to prevent overly long response times
for list queries that have not specified a sufficiently narrow filter. This
limit can be set globally by setting ``list_limit`` in the ``[DEFAULT]`` section
of ``keystone.conf``, with no limit set by default. Individual driver sections
may override this global value with a specific limit, for example:

.. code-block:: ini

    [assignment]
    list_limit = 100

If a response to ``list_{entity}`` call has been truncated, then the response
status code will still be 200 (OK), but the ``truncated`` attribute in the
collection will be set to ``true``.

Sample Configuration Files
--------------------------

The ``etc/`` folder distributed with Keystone contains example configuration
files for each Server application.

* ``etc/keystone.conf.sample``
* ``etc/keystone-paste.ini``
* ``etc/logging.conf.sample``
* ``etc/default_catalog.templates``

.. _`API protection with RBAC`:

Keystone API protection with Role Based Access Control (RBAC)
=============================================================

Like most OpenStack projects, Keystone supports the protection of its APIs
by defining policy rules based on an RBAC approach.  These are stored in a
JSON policy file, the name and location of which is set in the main Keystone
configuration file.

Each Keystone v3 API has a line in the policy file which dictates what level
of protection is applied to it, where each line is of the form::

  <api name>: <rule statement> or <match statement>

where:

``<rule statement>`` can contain ``<rule statement>`` or ``<match statement>``

``<match statement>`` is a set of identifiers that must match between the token
provided by the caller of the API and the parameters or target entities of
the API call in question. For example:

.. code-block:: javascript

    "identity:create_user": [["role:admin", "domain_id:%(user.domain_id)s"]]

This indicates that to create a user you must have the admin role in your token
and, in addition, the ``domain_id`` in your token (which implies this must be a
domain-scoped token) must match the ``domain_id`` in the user object you are
trying to create. In other words, you must have the admin role on the domain in
which you are creating the user, and the token you are using must be scoped to
that domain.

Each component of a match statement is of the form::

  <attribute from token>:<constant> or <attribute related to API call>

The following attributes are available:

* Attributes from token: user_id, the domain_id or project_id depending on
  the scope, and the list of roles you have within that scope

* Attributes related to API call: Any parameters that are passed into the
  API call are available, along with any filters specified in the query
  string. Attributes of objects passed can be referenced using an
  object.attribute syntax (e.g. user.domain_id). The target objects of an
  API are also available using a target.object.attribute syntax.  For instance:

  .. code-block:: javascript

    "identity:delete_user": [["role:admin", "domain_id:%(target.user.domain_id)s"]]

  would ensure that the user object that is being deleted is in the same
  domain as the token provided.

Every target object has an `id` and a `name` available as
`target.<object>.id` and `target.<object>.name`. Other attributes are
retrieved from the database and vary between object types. Moreover,
some database fields are filtered out (e.g. user passwords).

List of object attributes:

* role:
    * target.role.id
    * target.role.name

* user:
    * target.user.default_project_id
    * target.user.description
    * target.user.domain_id
    * target.user.enabled
    * target.user.id
    * target.user.name

* group:
    * target.group.description
    * target.group.domain_id
    * target.group.id
    * target.group.name

* domain:
    * target.domain.enabled
    * target.domain.id
    * target.domain.name

* project:
    * target.project.description
    * target.project.domain_id
    * target.project.enabled
    * target.project.id
    * target.project.name

The default policy.json file supplied provides a somewhat basic example of
API protection, and does not assume any particular use of domains. For
multi-domain configuration installations where, for example, a cloud
provider wishes to allow administration of the contents of a domain to
be delegated, it is recommended that the supplied policy.v3cloudsample.json
is used as a basis for creating a suitable production policy file. This
example policy file also shows the use of an admin_domain to allow a cloud
provider to enable cloud administrators to have wider access across the APIs.

A clean installation would need to perhaps start with the standard policy
file, to allow creation of the admin_domain with the first users within
it. The domain_id of the admin domain would then be obtained and could be
pasted into a modified version of policy.v3cloudsample.json which could then
be enabled as the main policy file.

.. _`prepare your deployment`:

Preparing your deployment
=========================

Step 1: Configure keystone.conf
-------------------------------

Ensure that your ``keystone.conf`` is configured to use a SQL driver:

.. code-block:: ini

    [identity]
    driver = keystone.identity.backends.sql.Identity

You may also want to configure your ``[sql]`` settings to better reflect your
environment:

.. code-block:: ini

    [sql]
    connection = sqlite:///keystone.db
    idle_timeout = 200

.. NOTE::

    It is important that the database that you specify be different from the
    one containing your existing install.

Step 2: Sync your new, empty database
-------------------------------------

You should now be ready to initialize your new database without error, using:

.. code-block:: bash

    $ keystone-manage db_sync

To test this, you should now be able to start ``keystone-all`` and use the
OpenStack Client to list your projects (which should successfully return an
empty list from your new database):

.. code-block:: bash

    $ openstack --os-token ADMIN --os-url http://127.0.0.1:35357/v2.0/ project list

.. NOTE::

    We're providing the default OS_TOKEN and OS_URL values from ``keystone.conf``
    to connect to the Keystone service. If you changed those values, or deployed
    Keystone to a different endpoint, you will need to change the provided
    command accordingly.

Initializing Keystone
=====================

``keystone-manage`` is designed to execute commands that cannot be administered
through the normal REST API. At the moment, the following calls are supported:

* ``db_sync``: Sync the database.
* ``db_version``: Print the current migration version of the database.
* ``mapping_purge``: Purge the identity mapping table.
* ``pki_setup``: Initialize the certificates used to sign tokens.
* ``saml_idp_metadata``: Generate identity provider metadata.
* ``ssl_setup``: Generate certificates for SSL.
* ``token_flush``: Purge expired tokens.

Invoking ``keystone-manage`` by itself will give you additional usage
information.

The private key used for token signing can only be read by its owner.  This
prevents unauthorized users from spuriously signing tokens.
``keystone-manage pki_setup`` should be run as the same system user that will
be running the Keystone service to ensure proper ownership of the private key
file and the associated certificates.

Adding Users, Projects, and Roles via Command Line Interfaces
=============================================================

Keystone APIs are protected by the rules in the policy file. The default policy
rules require admin credentials to administer ``users``, ``projects``, and
``roles``. See section `Keystone API protection with Role Based Access Control (RBAC)`_
for more details on policy files.

The Keystone command line interface packaged in `python-keystoneclient`_ only
supports the Identity v2.0 API. The OpenStack common command line interface
packaged in `python-openstackclient`_  supports both v2.0 and v3 APIs.

With both command line interfaces there are two ways to configure the client to
use admin credentials, using either an existing token or password credentials.

.. NOTE::

    As of the Juno release, it is recommended to use ``python-openstackclient``,
    as it supports both v2.0 and v3 APIs. For the purpose of backwards compatibility,
    the CLI packaged in ``python-keystoneclient`` is not being removed.

.. _`python-openstackclient`: http://docs.openstack.org/developer/python-openstackclient/
.. _`python-keystoneclient`: http://docs.openstack.org/developer/python-keystoneclient/

Authenticating with a Token
---------------------------

.. NOTE::

    If your Keystone deployment is brand new, you will need to use this
    authentication method, along with your ``[DEFAULT] admin_token``.

To authenticate with Keystone using a token and ``python-openstackclient``, set
the following flags.

* ``--os-url OS_URL``: Keystone endpoint the user communicates with
* ``--os-token OS_TOKEN``: User's service token

To administer a Keystone endpoint, your token should either belong to a user
with the ``admin`` role or, if you haven't created one yet, be equal to the
value defined by ``[DEFAULT] admin_token`` in your ``keystone.conf``.

You can also set these variables in your environment so that they do not need
to be passed as arguments each time:

.. code-block:: bash

    $ export OS_URL=http://localhost:35357/v2.0
    $ export OS_TOKEN=ADMIN

Instead of ``python-openstackclient``, if using ``python-keystoneclient``,
set the following:

* ``--os-endpoint OS_SERVICE_ENDPOINT``: equivalent to ``--os-url OS_URL``
* ``--os-service-token OS_SERVICE_TOKEN``: equivalent to ``--os-token OS_TOKEN``


Authenticating with a Password
------------------------------

To authenticate with Keystone using a password and ``python-openstackclient``,
set the following flags. Note that the user referenced below should be granted
the ``admin`` role.

* ``--os-username OS_USERNAME``: Name of your user
* ``--os-password OS_PASSWORD``: Password for your user
* ``--os-project-name OS_PROJECT_NAME``: Name of your project
* ``--os-auth-url OS_AUTH_URL``: URL of the Keystone authentication server

You can also set these variables in your environment so that they do not need
to be passed as arguments each time:

.. code-block:: bash

    $ export OS_USERNAME=my_username
    $ export OS_PASSWORD=my_password
    $ export OS_PROJECT_NAME=my_project
    $ export OS_AUTH_URL=http://localhost:35357/v2.0

If using ``python-keystoneclient``, set the following instead:

* ``--os-tenant-name OS_TENANT_NAME``: equivalent to ``--os-project-name OS_PROJECT_NAME``


Example usage
-------------

``python-openstackclient`` is set up to expect commands in the general form of:

.. code-block:: bash

  $ openstack [<global-options>] <object-1> <action> [<object-2>] [<command-arguments>]

For example, the commands ``user list`` and ``project create`` can be invoked
as follows:

.. code-block:: bash

    # Using token authentication, with environment variables
    $ export OS_URL=http://127.0.0.1:35357/v2.0/
    $ export OS_TOKEN=secrete_token
    $ openstack user list
    $ openstack project create demo

    # Using token authentication, with flags
    $ openstack --os-token=secrete --os-url=http://127.0.0.1:35357/v2.0/ user list
    $ openstack --os-token=secrete --os-url=http://127.0.0.1:35357/v2.0/ project create demo

    # Using password authentication, with environment variables
    $ export OS_USERNAME=admin
    $ export OS_PASSWORD=secrete
    $ export OS_PROJECT_NAME=admin
    $ export OS_AUTH_URL=http://localhost:35357/v2.0
    $ openstack user list
    $ openstack project create demo

    # Using password authentication, with flags
    $ openstack --os-username=admin --os-password=secrete --os-project-name=admin --os-auth-url=http://localhost:35357/v2.0 user list
    $ openstack --os-username=admin --os-password=secrete --os-project-name=admin --os-auth-url=http://localhost:35357/v2.0 project create demo

For additional examples using ``python-keystoneclient``, refer to
`python-keystoneclient examples`_; likewise, for additional examples using
``python-openstackclient``, refer to `python-openstackclient examples`_.

.. _`python-keystoneclient examples`: cli_examples.html#using-python-keystoneclient
.. _`python-openstackclient examples`: cli_examples.html#using-python-openstackclient


Removing Expired Tokens
=======================

In the SQL backend, expired tokens are not automatically removed. These tokens
can be removed with:

.. code-block:: bash

    $ keystone-manage token_flush
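
Because expired tokens accumulate continuously in the SQL backend, deployments
commonly run this command on a schedule. A minimal sketch using cron (the
hourly schedule below is only illustrative) is:

.. code-block:: bash

    # Append an hourly token flush to the current user's crontab (illustrative)
    $ (crontab -l 2>/dev/null; echo "0 * * * * keystone-manage token_flush") | crontab -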

The memcache backend automatically discards expired tokens, so flushing is
unnecessary; if attempted, it will fail with a ``NotImplemented`` error.


Configuring the LDAP Identity Provider
======================================

As an alternative to the SQL database backing store, Keystone can use a
directory server to provide the Identity service. An example schema for
OpenStack would look like this::

  dn: dc=openstack,dc=org
  dc: openstack
  objectClass: dcObject
  objectClass: organizationalUnit
  ou: openstack

  dn: ou=Projects,dc=openstack,dc=org
  objectClass: top
  objectClass: organizationalUnit
  ou: projects

  dn: ou=Users,dc=openstack,dc=org
  objectClass: top
  objectClass: organizationalUnit
  ou: users

  dn: ou=Roles,dc=openstack,dc=org
  objectClass: top
  objectClass: organizationalUnit
  ou: roles

The corresponding entries in the Keystone configuration file are:

.. code-block:: ini

  [ldap]
  url = ldap://localhost
  user = cn=Manager,dc=openstack,dc=org
  password = badpassword
  suffix = dc=openstack,dc=org
  use_dumb_member = False
  allow_subtree_delete = False

  user_tree_dn = ou=Users,dc=openstack,dc=org
  user_objectclass = inetOrgPerson

  project_tree_dn = ou=Projects,dc=openstack,dc=org
  project_objectclass = groupOfNames

  role_tree_dn = ou=Roles,dc=openstack,dc=org
  role_objectclass = organizationalRole

The default object classes and attributes are intentionally simple. They
reflect the common standard objects described in the LDAP RFCs. However,
in a live deployment, the attributes can be overridden to support a
preexisting, more complex schema. For example, in the user object, the
objectClass ``posixAccount`` from RFC 2307 is very common. If this is the
underlying objectClass, then the *uid* field should probably be *uidNumber*,
and the *username* field should be either *uid* or *cn*. To change these two
fields, the corresponding entries in the Keystone configuration file are:

.. code-block:: ini

  [ldap]
  user_id_attribute = uidNumber
  user_name_attribute = cn


There is a set of allowed actions per object type that you can modify
depending on your specific deployment. For example, if users are managed by
another tool and you have only read access to them, the configuration is:

.. code-block:: ini

  [ldap]
  user_allow_create = False
  user_allow_update = False
  user_allow_delete = False

  project_allow_create = True
  project_allow_update = True
  project_allow_delete = True

  role_allow_create = True
  role_allow_update = True
  role_allow_delete = True

There are also configuration options for filtering users, tenants, and roles
if the backend is providing too much output. In that case, the configuration
will look like:

.. code-block:: ini

  [ldap]
  user_filter = (memberof=CN=openstack-users,OU=workgroups,DC=openstack,DC=org)
  project_filter =
  role_filter =

If the directory server does not provide a boolean-typed enabled attribute
for users, several configuration parameters can be used to extract the value
from an integer attribute, as in Active Directory:

.. code-block:: ini

  [ldap]
  user_enabled_attribute = userAccountControl
  user_enabled_mask      = 2
  user_enabled_default   = 512

In this case the attribute is an integer, and the enabled/disabled flag is
stored in bit 1. If the configured *user_enabled_mask* is different from 0,
Keystone reads the value of the field *user_enabled_attribute* and performs a
bitwise AND with *user_enabled_mask*; if the result matches the mask, the
account is disabled.

It also saves the unmasked value to the user identity in the attribute
*enabled_nomask*. This is needed in order to set it back when enabling or
disabling a user, because the attribute contains more information than just
the status, such as password-expiration flags. Finally, *user_enabled_default*
is needed in order to create a default value for the integer attribute
(512 = NORMAL ACCOUNT in AD).
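
As a concrete illustration of the mask logic with Active Directory's
``userAccountControl`` values (512 = normal enabled account, bit 2 = account
disabled), the check reduces to a bitwise AND:

.. code-block:: bash

    # 512 & 2 = 0, which differs from the mask, so the account is enabled
    $ echo $(( 512 & 2 ))

    # 514 = 512 | 2 has the disable bit set; 514 & 2 = 2 equals the mask,
    # so the account is disabled
    $ echo $(( 514 & 2 ))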

With Active Directory, the classes and attributes may not match those
expected by the LDAP module, so you can configure them as follows:

.. code-block:: ini

  [ldap]
  user_objectclass          = person
  user_id_attribute         = cn
  user_name_attribute       = cn
  user_mail_attribute       = mail
  user_enabled_attribute    = userAccountControl
  user_enabled_mask         = 2
  user_enabled_default      = 512
  user_attribute_ignore     = tenant_id,tenants
  project_objectclass       = groupOfNames
  project_id_attribute      = cn
  project_member_attribute  = member
  project_name_attribute    = ou
  project_desc_attribute    = description
  project_enabled_attribute = extensionName
  project_attribute_ignore  =
  role_objectclass          = organizationalRole
  role_id_attribute         = cn
  role_name_attribute       = ou
  role_member_attribute     = roleOccupant
  role_attribute_ignore     =

Debugging LDAP
--------------

For additional information on LDAP connections, performance (such as slow
response times), or field mappings, set ``debug_level`` in the ``[ldap]``
section to enable debugging:

.. code-block:: ini

  [ldap]
  debug_level = 4095

This setting in turn sets ``OPT_DEBUG_LEVEL`` in the underlying Python LDAP
library. The value is a bit mask (integer), and the possible flags are
documented in the OpenLDAP manpages. Commonly used values include 255 and
4095, with 4095 being more verbose.

.. WARNING::
  Enabling ``debug_level`` will negatively impact performance.

Enabled Emulation
-----------------

Some directory servers do not provide any enabled attribute. For these
servers, the ``user_enabled_emulation`` and ``project_enabled_emulation``
attributes have been created. They are enabled by setting their respective
flags to True. Then the attributes ``user_enabled_emulation_dn`` and
``project_enabled_emulation_dn`` may be set to specify how the enabled users
and projects (tenants) are selected. These attributes work by using a
``groupOfNames`` entry and adding whichever users or projects (tenants) you
want enabled to the respective group. For example, this will mark any user
who is a member of ``enabled_users`` as enabled:

.. code-block:: ini

  [ldap]
  user_enabled_emulation = True
  user_enabled_emulation_dn = cn=enabled_users,cn=groups,dc=openstack,dc=org

The default values for the user and project (tenant) enabled emulation DNs
are ``cn=enabled_users,$user_tree_dn`` and ``cn=enabled_tenants,$project_tree_dn``
respectively.
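
As an illustrative sketch (the DNs below are assumptions based on the example
schema earlier in this document), the corresponding group entry could look
like this::

  # illustrative entry; the member DN is a placeholder
  dn: cn=enabled_users,ou=Users,dc=openstack,dc=org
  cn: enabled_users
  objectClass: groupOfNames
  member: cn=demo_user,ou=Users,dc=openstack,dc=org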

Secure Connection
-----------------

If you are using a directory server to provide the Identity service,
it is strongly recommended that you utilize a secure connection from
Keystone to the directory server. In addition to supporting LDAPS, Keystone
also provides Transport Layer Security (TLS) support. There are some
basic configuration options for enabling TLS, identifying a single
file or directory that contains certificates for all the Certificate
Authorities that the Keystone LDAP client will recognize, and declaring
what checks the client should perform on server certificates.  This
functionality can easily be configured as follows:

.. code-block:: ini

  [ldap]
  use_tls = True
  tls_cacertfile = /etc/keystone/ssl/certs/cacert.pem
  tls_cacertdir = /etc/keystone/ssl/certs/
  tls_req_cert = demand

A few points are worth mentioning regarding the above options. If both
``tls_cacertfile`` and ``tls_cacertdir`` are set, then ``tls_cacertfile``
will be used and ``tls_cacertdir`` will be ignored. Furthermore, valid
options for ``tls_req_cert`` are ``demand``, ``never``, and ``allow``. These
correspond to the standard options permitted by the ``TLS_REQCERT`` TLS
option.
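
Independently of Keystone, you can verify that the directory server accepts
StartTLS with the CA certificate configured above; for example, with the
OpenLDAP client tools (the bind DN below is an assumption matching the
earlier examples):

.. code-block:: bash

    # -ZZ requires a successful StartTLS negotiation before proceeding
    $ LDAPTLS_CACERT=/etc/keystone/ssl/certs/cacert.pem \
      ldapsearch -ZZ -H ldap://localhost -x \
      -D "cn=Manager,dc=openstack,dc=org" -W \
      -b "dc=openstack,dc=org" -s base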

Read Only LDAP
--------------

Many environments typically have user and group information in directories that
are accessible by LDAP. This information is for read-only use in a wide array
of applications. Prior to the Havana release, we could not deploy Keystone with
read-only directories as backends because Keystone also needed to store
information such as projects, roles, domains and role assignments into the
directories in conjunction with reading user and group information.

Keystone now provides an option whereby these read-only directories can be
easily integrated: identity entities (which comprise users, groups, and group
memberships) can be served out of directories, while assignments (which
comprise projects, roles, role assignments, and domains) are served from a
different Keystone backend (i.e. SQL). To enable this option, you must have
the following ``keystone.conf`` options set:

.. code-block:: ini

  [identity]
  driver = keystone.identity.backends.ldap.Identity

  [assignment]
  driver = keystone.assignment.backends.sql.Assignment

With the above configuration, Keystone will only look up identity-related
information such as users, groups, and group membership from the directory,
while assignment-related information will be provided by the SQL backend.
Also note that if there is an LDAP Identity backend and no assignment backend
is specified, the assignment backend will default to LDAP. Although this may
seem counterintuitive, it is provided for backwards compatibility.
Nonetheless, the explicit option will always override the implicit option, so
specifying the options as shown above will always be correct. Finally, it is
also worth noting that whether or not the LDAP-accessible directory is to be
considered read-only is still configured as described in the previous
section, by setting values such as the following in the ``[ldap]``
configuration section:

.. code-block:: ini

  [ldap]
  user_allow_create = False
  user_allow_update = False
  user_allow_delete = False

Connection Pooling
------------------

Various LDAP backends in Keystone use a common LDAP module to interact with
LDAP data. By default, a new connection is established for each LDAP
operation. This can become very expensive when TLS support is enabled, which
is a likely configuration in an enterprise setup. Re-using connectors from a
connection pool drastically reduces the overhead of initiating a new
connection for every LDAP operation.

Keystone now provides connection pool support via configuration. This keeps
LDAP connectors alive and re-uses them for subsequent LDAP operations. The
connection lifespan is configurable, along with other pooling-specific
attributes. The change is made in the LDAP handler layer, which is primarily
responsible for LDAP connections and shared common operations.

In the LDAP identity driver, Keystone authenticates an end user by performing
an LDAP bind with the user's DN and the provided password. These auth binds
can fill up the pool quickly, so a separate pool is provided for end-user
auth bind calls. If a deployment does not want to use a pool for those binds,
it can disable pooling selectively by setting ``use_auth_pool`` to false; to
use a pool for those auth binds, ``use_auth_pool`` needs to be true. For the
auth pool, a different pool size (``auth_pool_size``) and connection lifetime
(``auth_pool_connection_lifetime``) can be specified. With the auth pool
enabled, its connection lifetime should be kept short so that the pool
frequently re-binds the connection with the provided credentials and works
reliably when an end user changes their password. When ``use_pool`` is false
(disabled), the auth pool configuration is also not used.

Connection pool configuration is added in ``[ldap]`` configuration section:

.. code-block:: ini

  [ldap]
  # Enable LDAP connection pooling. (boolean value)
  use_pool=false

  # Connection pool size. (integer value)
  pool_size=10

  # Maximum count of reconnect trials. (integer value)
  pool_retry_max=3

  # Time span in seconds to wait between two reconnect trials.
  # (floating point value)
  pool_retry_delay=0.1

  # Connector timeout in seconds. Value -1 indicates indefinite wait for
  # response. (integer value)
  pool_connection_timeout=-1

  # Connection lifetime in seconds.  (integer value)
  pool_connection_lifetime=600

  # Enable LDAP connection pooling for end user authentication. If use_pool
  # is disabled, then this setting is meaningless and is not used at all.
  # (boolean value)
  use_auth_pool=false

  # End user auth connection pool size. (integer value)
  auth_pool_size=100

  # End user auth connection lifetime in seconds. (integer value)
  auth_pool_connection_lifetime=60