src/backend/storage/lmgr/README-SSI

Serializable Snapshot Isolation (SSI) and Predicate Locking
===========================================================

This is currently sitting in the lmgr directory because about 90% of
the code is an implementation of predicate locking, which is required
for SSI, rather than being directly related to SSI itself.  When
another use for predicate locking justifies the effort to tease these
two things apart, this README file should probably be split.


Credits
-------

This feature was developed by Kevin Grittner and Dan R. K. Ports,
with review and suggestions from Joe Conway, Heikki Linnakangas, and
Jeff Davis.  It is based on work published in these papers:

	Michael J. Cahill, Uwe Röhm, and Alan D. Fekete. 2008.
	Serializable isolation for snapshot databases.
	In SIGMOD '08: Proceedings of the 2008 ACM SIGMOD
	international conference on Management of data,
	pages 729-738, New York, NY, USA. ACM.
	http://doi.acm.org/10.1145/1376616.1376690

	Michael James Cahill. 2009.
	Serializable Isolation for Snapshot Databases.
	Sydney Digital Theses.
	University of Sydney, School of Information Technologies.
	http://hdl.handle.net/2123/5353


Overview
--------

With true serializable transactions, if you can show that your
transaction will do the right thing if there are no concurrent
transactions, it will do the right thing in any mix of serializable
transactions or be rolled back with a serialization failure.  This
feature has been implemented in PostgreSQL using SSI.


Serializable and Snapshot Transaction Isolation Levels
------------------------------------------------------

Serializable transaction isolation is attractive for shops with
active development by many programmers against a complex schema
because it guarantees data integrity with very little staff time --
if a transaction can be shown to always do the right thing when it is
run alone (before or after any other transaction), it will always do
the right thing in any mix of concurrent serializable transactions.
Where conflicts with other transactions would result in an
inconsistent state within the database, or an inconsistent view of
the data, a serializable transaction will block or roll back to
prevent the anomaly. The SQL standard provides a specific SQLSTATE
for errors generated when a transaction rolls back for this reason,
so that transactions can be retried automatically.

Before version 9.1, PostgreSQL did not support a full serializable
isolation level. A request for serializable transaction isolation
actually provided snapshot isolation. Snapshot isolation has
well-known anomalies which can allow data corruption or inconsistent
views of the data during concurrent transactions, although these
anomalies only occur when certain patterns of read-write dependencies
exist within a set of concurrent transactions. Where these patterns
exist, the anomalies can be prevented by introducing conflicts
through explicitly programmed locks or otherwise unnecessary writes
to the database.
Snapshot isolation is popular because performance is better than
serializable isolation and the integrity guarantees which it does
provide allow anomalies to be avoided or managed with reasonable
effort in many environments.
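
As a concrete illustration, consider the classic "write skew"
anomaly (a minimal sketch; the table, data, and business rule are
hypothetical, and REPEATABLE READ is used because it provides
snapshot isolation in PostgreSQL):

    CREATE TABLE doctors (name text PRIMARY KEY, on_call boolean);
    INSERT INTO doctors VALUES ('alice', true), ('bob', true);

    -- Rule: at least one doctor must remain on call.  Each session
    -- checks the rule before going off call, but neither sees the
    -- other's concurrent update.

    Session 1                           Session 2
    ---------                           ---------
    BEGIN ISOLATION LEVEL
      REPEATABLE READ;
                                        BEGIN ISOLATION LEVEL
                                          REPEATABLE READ;
    SELECT count(*) FROM doctors
      WHERE on_call;           -- 2
                                        SELECT count(*) FROM doctors
                                          WHERE on_call;           -- 2
    UPDATE doctors SET on_call = false
      WHERE name = 'alice';
                                        UPDATE doctors
                                          SET on_call = false
                                          WHERE name = 'bob';
    COMMIT;
                                        COMMIT;

Both commits succeed and no doctor is left on call, even though
either transaction run alone would have preserved the rule.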


Serializable Isolation Implementation Strategies
------------------------------------------------

Techniques for implementing full serializable isolation have been
published and in use in many database products for decades. The
primary technique which has been used is Strict Two-Phase Locking
(S2PL), which operates by blocking writes against data which has been
read by concurrent transactions and blocking any access (read or
write) against data which has been written by concurrent
transactions. A cycle in a graph of blocking indicates a deadlock,
requiring a rollback. Blocking and deadlocks under S2PL in high
contention workloads can be debilitating, crippling throughput and
response time.

A new technique for implementing full serializable isolation in an
MVCC database appears in the literature beginning in 2008. This
technique, known as Serializable Snapshot Isolation (SSI), has many of
the advantages of snapshot isolation. In particular, reads don't
block anything and writes don't block reads. Essentially, it runs
snapshot isolation but monitors the read-write conflicts between
transactions to identify dangerous structures in the transaction
graph which indicate that a set of concurrent transactions might
produce an anomaly, and rolls back transactions to ensure that no
anomalies occur. It will produce some false positives (where a
transaction is rolled back even though there would not have been an
anomaly), but will never let an anomaly occur. In the two known
prototype implementations, performance for many workloads (even with
the need to restart transactions which are rolled back) is very close
to snapshot isolation and generally far better than an S2PL
implementation.
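
If the write-skew transcript sketched earlier is re-run with both
sessions at the SERIALIZABLE level under SSI, the expected outcome is
(a sketch; exactly which statement fails can vary):

    Session 1: COMMIT;   -- succeeds
    Session 2: COMMIT;   -- ERROR:  could not serialize access due to
                         -- read/write dependencies among transactions
                         -- (SQLSTATE 40001)

An immediate retry of the cancelled transaction sees the other
transaction's committed update, so the anomaly cannot arise.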


Apparent Serial Order of Execution
----------------------------------

One way to understand when snapshot anomalies can occur, and to
visualize the difference between the serializable implementations
described above, is to consider that among transactions executing at
the serializable transaction isolation level, the results are
required to be consistent with some serial (one-at-a-time) execution
of the transactions[1]. How is that order determined in each?

S2PL locks rows used by the transaction in a way which blocks
conflicting access, so that at the moment of a successful commit it
is certain that no conflicting access has occurred. Some transactions
may have blocked, essentially being partially serialized with the
committing transaction, to allow this. Some transactions may have
been rolled back, due to cycles in the blocking. But with S2PL,
transactions can always be viewed as having occurred serially, in the
order of successful commit.

With snapshot isolation, reads never block writes, nor vice versa, so
there is much less actual serialization. The order in which
transactions appear to have executed is determined by something more
subtle than in S2PL: read/write dependencies. If a transaction
attempts to read data which is not visible to it because the
transaction which wrote it (or will later write it) is concurrent
(one of them was running when the other acquired its snapshot), then
the reading transaction appears to have executed first, regardless of
the actual sequence of transaction starts or commits (since it sees a
database state prior to that in which the other transaction leaves
it). If one transaction has both rw-dependencies in (meaning that a
concurrent transaction attempts to read data it writes) and out
(meaning it attempts to read data a concurrent transaction writes),
and a couple other conditions are met, there can appear to be a cycle
in execution order of the transactions. This is when the anomalies
occur.
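
In the notation of the SSI literature, the transaction with both the
rw-dependency in and the rw-dependency out is called the "pivot", and
the "dangerous structure" being described looks like this, where each
arrow points from a transaction which appears to have executed
earlier (the reader) to a concurrent transaction which appears to
have executed later (the writer of the data read):

    Tin ------> Tpivot ------> Tout
          rw             rw

An anomaly is only possible if Tout is the first of the three to
commit, a fact which is exploited below to limit rollbacks.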

SSI works by watching for the conditions mentioned above, and rolling
back a transaction when needed to prevent any anomaly. The apparent
order of execution will always be consistent with any actual
serialization (i.e., a transaction run by itself can always be
considered to have run after any transaction committed before it
started and before any transaction which starts after it commits); but
among concurrent transactions it will appear that the transaction on
the read side of a rw-dependency executed before the transaction on
the write side.


PostgreSQL Implementation
-------------------------

The implementation of serializable transactions for PostgreSQL is
accomplished through Serializable Snapshot Isolation (SSI), based on
the work of Cahill, et al.  Fundamentally, this allows snapshot
isolation to run as it has, while monitoring for conditions which
could create a serialization anomaly.

    * Since this technique is based on Snapshot Isolation (SI), those
areas in PostgreSQL which don't use SI can't be brought under SSI.
This includes system tables, temporary tables, sequences, hint bit
rewrites, etc.  SSI can not eliminate existing anomalies in these
areas.

    * Any transaction which is run at a transaction isolation level
other than SERIALIZABLE will not be affected by SSI.  If you want to
enforce business rules through SSI, all transactions should be run at
the SERIALIZABLE transaction isolation level, and that should
probably be set as the default (see the sketch at the end of this
list).

    * If all transactions are run at the SERIALIZABLE transaction
isolation level, business rules can be enforced in triggers or
application code without ever having a need to acquire an explicit
lock or to use SELECT FOR SHARE or SELECT FOR UPDATE.

    * Those who want to continue to use snapshot isolation without
the additional protections of SSI (and the associated costs of
enforcing those protections), can use the REPEATABLE READ transaction
isolation level.  This level will retain its legacy behavior, which
is identical to the old SERIALIZABLE implementation and fully
consistent with the standard's requirements for the REPEATABLE READ
transaction isolation level.

    * Performance under this SSI implementation will be significantly
improved if transactions which don't modify permanent tables are
declared to be READ ONLY before they begin reading data.

    * Performance under SSI will tend to degrade more rapidly with a
large number of active database transactions than under less strict
isolation levels.  Limiting the number of active transactions through
use of a connection pool or similar techniques may be necessary to
maintain good performance.

    * Any transaction which must be rolled back to prevent
serialization anomalies will fail with SQLSTATE 40001, which has a
standard meaning of "serialization failure".

    * This SSI implementation makes an effort to choose the
transaction to be cancelled such that an immediate retry of the
transaction will not fail due to conflicts with exactly the same
transactions.  Pursuant to this goal, no transaction is cancelled
until one of the other transactions in the set of conflicts which
could generate an anomaly has successfully committed.  This is
conceptually similar to how write conflicts are handled.  To fully
implement this guarantee there needs to be a way to roll back the
active transaction for another process with a serialization failure
SQLSTATE, even if it is "idle in transaction".
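
A minimal sketch tying several of the points above together (the
database name, table name, and error transcript are illustrative
only):

    -- Make SERIALIZABLE the default isolation level for a database:
    ALTER DATABASE mydb SET default_transaction_isolation TO 'serializable';

    -- Declare a transaction READ ONLY before it reads any data:
    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY;
    SELECT count(*) FROM accounts;
    COMMIT;

    -- A transaction cancelled to prevent an anomaly fails with an
    -- error like this, and can simply be retried by the application:
    -- ERROR:  could not serialize access due to read/write
    --         dependencies among transactions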


Predicate Locking
-----------------

Both S2PL and SSI require some form of predicate locking to handle
situations where reads conflict with later inserts or with later
updates which move data into the selected range.  PostgreSQL didn't
already have predicate locking, so it needed to be added to support
full serializable transactions under either strategy. Practical
implementations of predicate locking generally involve acquiring
locks against data as it is accessed, using multiple granularities
(tuple, page, table, etc.) with escalation as needed to keep the lock
count to a number which can be tracked within RAM structures; this
approach was used in PostgreSQL.  Coarse granularities can cause some
false positive indications of conflict. The number of false positives
can be influenced by plan choice.
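
The locks acquired are visible in the pg_locks system view, which
makes the effect of plan choice easy to observe; a minimal sketch
(the accounts table is hypothetical):

    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT * FROM accounts;                -- seq scan locks the relation
    SELECT locktype, relation::regclass, page, tuple
      FROM pg_locks WHERE mode = 'SIReadLock';
    COMMIT;

    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT * FROM accounts WHERE id = 42;  -- index probe locks finer
    SELECT locktype, relation::regclass, page, tuple
      FROM pg_locks WHERE mode = 'SIReadLock';
    COMMIT;

The relation-level lock acquired by the sequential scan will conflict
with any concurrent write to the table, while the finer-grained locks
from the indexed lookup will not; this is the plan-dependent false
positive effect described above.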


Implementation overview
-----------------------

New RAM structures, inspired by those used to track traditional locks
in PostgreSQL, but tailored to the needs of SIREAD predicate locking,
are used.  These refer to physical objects actually accessed in the
course of executing the query, to model the predicates through
inference.  Anyone interested in this subject should review the
Hellerstein, Stonebraker and Hamilton paper[2], along with the
locking papers referenced from that and the Cahill papers.

Because the SIREAD locks don't block, traditional locking techniques
had to be modified.  Intent locking (locking higher level objects
before locking lower level objects) doesn't work with non-blocking
"locks" (which are, in some respects, more like flags than locks).

A configurable amount of shared memory is reserved at postmaster
start-up to track predicate locks. This size cannot be changed
without a restart; see the sketch at the end of this section.

    * To prevent resource exhaustion, multiple fine-grained locks may
be promoted to a single coarser-grained lock as needed.

    * An attempt to acquire an SIREAD lock on a tuple when the same
transaction already holds an SIREAD lock on the page or the relation
will be ignored. Likewise, an attempt to lock a page when the
relation is locked will be ignored, and the acquisition of a coarser
lock will result in the automatic release of all finer-grained locks
it covers.
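
The amount reserved is controlled by the max_pred_locks_per_transaction
configuration parameter (a sketch of current behavior; the default
shown may change):

    SHOW max_pred_locks_per_transaction;   -- 64 by default
    -- in postgresql.conf (a postmaster restart is required):
    --   max_pred_locks_per_transaction = 128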


Heap locking
------------

Predicate locks will be acquired for the heap based on the following:

    * For a table scan, the entire relation will be locked.

    * Each tuple read which is visible to the reading transaction
will be locked, whether or not it meets selection criteria; except
that there is no need to acquire an SIREAD lock on a tuple when the
transaction already holds a write lock on any tuple representing the
row, since a rw-dependency would also create a ww-dependency which
has more aggressive enforcement and will thus prevent any anomaly.


Index AM implementations
------------------------

Since predicate locks only exist to detect writes which conflict with
earlier reads, and heap tuple locks are acquired to cover all heap
tuples actually read, including those read through indexes, the index
tuples which were actually scanned are not of interest in themselves;
we only care about their "new neighbors" -- later inserts into the
index which would have been included in the scan had they existed at
the time.  Conceptually, we want to lock the gaps between and
surrounding index entries within the scanned range.

Correctness requires that any insert into an index generates a
rw-conflict with a concurrent serializable transaction if, after that
insert, re-execution of any index scan of the other transaction would
access the heap for a row not accessed during the previous execution.
Note that a non-HOT update which expires an old index entry covered
by the scan and adds a new entry for the modified row's new tuple
need not generate a conflict, although an update which "moves" a row
into the scan must generate a conflict.  While correctness allows
false positives, they should be minimized for performance reasons.

Several optimizations are possible:

    * An index scan which is just finding the right position for an
index insertion or deletion need not acquire a predicate lock.

    * An index scan which is comparing for equality on the entire key
for a unique index need not acquire a predicate lock as long as a key
is found corresponding to a visible tuple which has not been modified
by another transaction -- there are no "between or around" gaps to
cover.

    * As long as built-in foreign key enforcement continues to use
its current "special tricks" to deal with MVCC issues, predicate
locks should not be needed for scans done by enforcement code.

    * If a search determines that no rows can be found regardless of
index contents because the search conditions are contradictory (e.g.,
x = 1 AND x = 2), then no predicate lock is needed.

Other index AM implementation considerations:

    * If a btree search discovers that no root page has yet been
created, a predicate lock on the index relation is required;
otherwise btree searches must get to the leaf level to determine
which tuples match, so predicate locks go there.

    * GiST searches can determine that there are no matches at any
level of the index, so there must be a predicate lock at each index
level during a GiST search. An index insert at the leaf level can
then be trusted to ripple up to all levels and locations where
conflicting predicate locks may exist.

    * The effects of page splits, overflows, consolidations, and
removals must be carefully reviewed to ensure that predicate locks
aren't "lost" during those operations, or kept with pages which could
get re-used for different parts of the index.
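
The practical effect for a btree is page-level predicate locks at the
leaf level; a minimal sketch, observable through pg_locks as above
(the table and its index are hypothetical):

    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT * FROM accounts
      WHERE id BETWEEN 100 AND 200;   -- btree range scan
    SELECT locktype, relation::regclass, page
      FROM pg_locks WHERE mode = 'SIReadLock';
    -- expect page locks on the index, covering the scanned gaps, in
    -- addition to locks for the heap rows actually read
    COMMIT;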


Innovations
-----------

The PostgreSQL implementation of Serializable Snapshot Isolation
differs from what is described in the cited papers for several
reasons:

   1. PostgreSQL didn't have any existing predicate locking. It had
to be added from scratch.

   2. The existing in-memory lock structures were not suitable for
tracking SIREAD locks.
          * The database products used for the prototype
implementations for the papers used update-in-place with a rollback
log for their MVCC implementations, while PostgreSQL leaves the old
version of a row in place and adds a new tuple to represent the row
at a new location.
          * In PostgreSQL, tuple level locks are not held in RAM for
any length of time; lock information is written to the tuples
involved in the transactions.
          * In PostgreSQL, existing lock structures have pointers to
memory which is related to a connection. SIREAD locks need to persist
past the end of the originating transaction and even the connection
which ran it.
          * PostgreSQL needs to be able to tolerate a large number of
transactions executing while one long-running transaction stays open
-- the in-RAM techniques discussed in the papers wouldn't support
that.

   3. Unlike the database products used for the prototypes described
in the papers, PostgreSQL didn't already have a true serializable
isolation level distinct from snapshot isolation.

   4. PostgreSQL supports subtransactions -- an issue not mentioned
in the papers.

   5. PostgreSQL doesn't assign a transaction number to a database
transaction until and unless necessary.

   6. PostgreSQL has pluggable data types with user-definable
operators, as well as pluggable index types, not all of which are
based around data types which support ordering.

   7. Some possible optimizations became apparent during development
and testing.

Differences from the implementation described in the papers are
listed below.

    * New structures needed to be created in shared memory to track
the proper information for serializable transactions and their SIREAD
locks.

    * Because PostgreSQL does not have the same concept of an "oldest
transaction ID" for all serializable transactions as assumed in the
Cahill thesis, we track the oldest snapshot xmin among serializable
transactions, and a count of how many active transactions use that
xmin. When the count hits zero we find the new oldest xmin and run a
clean-up based on that.

    * Because reads in a subtransaction may cause that subtransaction
to roll back, thereby affecting what is written by the top level
transaction, predicate locks must survive a subtransaction rollback.
As a consequence, all xid usage in SSI, including predicate locking,
is based on the top level xid.  When looking at an xid that comes
from a tuple's xmin or xmax, for example, we always call
SubTransGetTopmostTransaction() before doing much else with it.

    * PostgreSQL does not use "update in place" with a rollback log
for its MVCC implementation.  Where possible it uses "HOT" updates on
the same page (if there is room and no indexed value is changed).
For non-HOT updates the old tuple is expired in place and a new tuple
is inserted at a new location.  Because of this difference, a tuple
lock in PostgreSQL doesn't automatically lock any other versions of a
row.  We don't try to copy or expand a tuple lock to any other
versions of the row, based on the following proof that any additional
serialization failures we would get from that would be false
positives:

          o If transaction T1 reads a row (thus acquiring a predicate
lock on it) and a second transaction T2 updates that row, must a
third transaction T3 which updates the new version of the row have a
rw-conflict in from T1 to prevent anomalies?  In other words, does it
matter whether this edge T1 -> T3 is there?

          o If T1 has a conflict in, it certainly doesn't. Adding the
edge T1 -> T3 would create a dangerous structure, but we already had
one from the edge T1 -> T2, so we would have aborted something
anyway.

          o Now let's consider the case where T1 doesn't have a
conflict in. If that's the case, for this edge T1 -> T3 to make a
difference, T3 must have a rw-conflict out that induces a cycle in
the dependency graph, i.e. a conflict out to some transaction
preceding T1 in the serial order. (A conflict out to T1 would work
too, but that would mean T1 has a conflict in and we would have
rolled back.)

          o So now we're trying to figure out if there can be an
rw-conflict edge T3 -> T0, where T0 is some transaction that precedes
T1. For T0 to precede T1, there has to be some edge, or
sequence of edges, from T0 to T1. At least the last edge has to be a
wr-dependency or ww-dependency rather than a rw-conflict, because T1
doesn't have a rw-conflict in. And that gives us enough information
about the order of transactions to see that T3 can't have a
rw-dependency to T0:
 - T0 committed before T1 started (the wr/ww-dependency implies this)
 - T1 started before T2 committed (the T1->T2 rw-conflict implies this)
 - T2 committed before T3 started (otherwise, T3 would be aborted
                                   because of an update conflict)

          o That means T0 committed before T3 started, and therefore
there can't be a rw-conflict from T3 to T0.

          o In both cases, we didn't need the T1 -> T3 edge.

    * Predicate locking in PostgreSQL will start at the tuple level
when possible, with automatic conversion of multiple fine-grained
locks to coarser granularity as needed to avoid resource exhaustion.
The amount of memory used for these structures will be configurable,
to balance RAM usage against SIREAD lock granularity.

    * A process-local copy of the locks held by a process, together
with the coarser covering locks and their counts, is kept to support
granularity promotion decisions with low CPU and locking overhead.

    * Conflicts will be identified by looking for predicate locks
when tuples are written and looking at the MVCC information when
tuples are read. There is no matching between two RAM-based locks.

    * Because write locks are stored in the heap tuples rather than a
RAM-based lock table, the optimization described in the Cahill thesis
which eliminates an SIREAD lock where there is a write lock is
implemented by the following:
         1. When checking a heap write for conflicts against existing
predicate locks, a tuple lock on the tuple being written is removed.
         2. When acquiring a predicate lock on a heap tuple, we
return quickly without doing anything if it is a tuple written by the
reading transaction.

    * Rather than using conflictIn and conflictOut pointers which use
NULL to indicate no conflict and a self-reference to indicate
multiple conflicts or conflicts with committed transactions, we use a
list of rw-conflicts. With the more complete information, false
positives are reduced and we have sufficient data for more aggressive
clean-up and other optimizations.
          o We can avoid ever rolling back a transaction until and
unless there is a pivot where a transaction on the conflict *out*
side of the pivot committed before either of the other transactions.
          o We can avoid ever rolling back a transaction when the
transaction on the conflict *in* side of the pivot is explicitly or
implicitly READ ONLY unless the transaction on the conflict *out*
side of the pivot committed before the READ ONLY transaction acquired
its snapshot. (An implicit READ ONLY transaction is one which
committed without writing, even though it was not explicitly declared
to be READ ONLY.)
          o We can more aggressively clean up conflicts, predicate
locks, and SSI transaction information.

    * Allow a READ ONLY transaction to "opt out" of SSI if there are
no READ WRITE transactions which could cause the READ ONLY
transaction to ever become part of a "dangerous structure" of
overlapping transaction dependencies.

    * Allow the user to request that a READ ONLY transaction wait
until the conditions are right for it to start in the "opt out" state
described above (see the sketch at the end of this list). We add a
DEFERRABLE state to transactions, which is specified and maintained
in a way similar to READ ONLY. It is ignored for transactions which
are not SERIALIZABLE and READ ONLY.

    * When a transaction must be rolled back, we pick among the
active transactions such that an immediate retry will not fail again
on conflicts with the same transactions.

    * We use the PostgreSQL SLRU system to hold summarized
information about older committed transactions to put an upper bound
on RAM used. Beyond that limit, information spills to disk.
Performance can degrade in a pessimal situation, but it should be
tolerable, and transactions won't need to be cancelled or blocked
from starting.
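
A minimal sketch of the DEFERRABLE request described above (the
table name is hypothetical):

    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;
    -- blocks here, if necessary, until a "safe" snapshot is
    -- available, then runs in the "opt out" state with no SSI
    -- bookkeeping and no risk of serialization failure
    SELECT count(*) FROM accounts;
    COMMIT;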


R&D Issues
----------

This is intended to be the place to record specific issues which need
more detailed review or analysis.

    * WAL file replay. While serializable implementations using S2PL
can guarantee that the write-ahead log contains commits in a sequence
consistent with some serial execution of serializable transactions,
SSI cannot make that guarantee. While the WAL replay is no less
consistent than under snapshot isolation, it is possible that under
PITR recovery or hot standby a database could reach a readable state
where some transactions appear before other transactions which would
have had to precede them to maintain serializable consistency. In
essence, if we do nothing, WAL replay will be at snapshot isolation
even for serializable transactions. Is this OK? If not, how do we
address it?

    * External replication. Look at how this impacts external
replication solutions, like Postgres-R, Slony, pgpool, HS/SR, etc.
This is related to the "WAL file replay" issue.

    * Weak-memory-ordering machines. Make sure that shared memory
access which involves visibility across multiple transactions uses
locks as needed to avoid problems. On the other hand, ensure that we
really need volatile where we're using it.
http://archives.postgresql.org/pgsql-committers/2008-06/msg00228.php

    * UNIQUE btree search for equality on all columns. Since a search
of a UNIQUE index using equality tests on all columns will lock the
heap tuple if an entry is found, it appears that there is no need to
get a predicate lock on the index in that case. A predicate lock is
still needed for such a search if a matching index entry which points
to a visible tuple is not found.

    * Planner index probes. To avoid problems with data skew at the
ends of an index which have historically caused bad plans, the
planner now probes the end of an index to see what the maximum or
minimum value is when a query appears to be requesting a range of
data outside what the statistics show is present. These planner checks
don't require predicate locking, but there's currently no easy way to
avoid it. What can we do to avoid predicate locking for such planner
activity?

    * Minimize touching of shared memory. Should lists in shared
memory push entries which have just been returned to the front of the
available list, so they will be popped back off soon and some memory
might never be touched, or should we keep adding returned items to
the end of the available list?


Footnotes
---------

[1] http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt
Search for serial execution to find the relevant section.

[2] http://db.cs.berkeley.edu/papers/fntdb07-architecture.pdf
Joseph M. Hellerstein, Michael Stonebraker and James Hamilton. 2007.
Architecture of a Database System. Foundations and Trends(R) in
Databases Vol. 1, No. 2 (2007) 141-259.
  Of particular interest:
    * 6.1 A Note on ACID
    * 6.2 A Brief Review of Serializability
    * 6.3 Locking and Latching
    * 6.3.1 Transaction Isolation Levels
    * 6.5.3 Next-Key Locking: Physical Surrogates for Logical
Properties