Title: Concurrency Control and Recovery
1 Concurrency Control and Recovery
- Zachary G. Ives
- University of Pennsylvania
- CIS 650: Implementing Data Management Systems
- February 7, 2005
Some content on recovery courtesy of Hellerstein, Ramakrishnan, and Gehrke
2 Administrivia
- No normal office hour this week (out of town)
- Upcoming talks of interest:
  - George Candea, Stanford, recovery-oriented computing, 2/24
  - Mike Swift, U Wash., restartable OS device drivers, 3/1
  - Andrew Whitaker, U Wash., paravirtualization, 3/15
  - Sihem Amer-Yahia, AT&T Research, text queries over XML, 3/17
  - Krishna Gummadi, U Wash., analysis of P2P systems, 3/29
  - Muthian Sivathanu, U Wisc., speeding disk access, 3/31
3 Today's Trivia Question
4 Recall the Fundamental Concepts of Updatable DBMSs
- Transactions as atomic units of operation
  - Always commit or abort
  - Can be terminated and restarted by the DBMS: an essential property
  - Typically logged, restartable, recoverable
- ACID properties
  - Atomicity: transactions may abort (rollback) due to error or deadlock (Mohan)
  - Consistency: guarantee of consistency between transactions
  - Isolation: guarantees serializability of schedules (Gray; Kung & Robinson)
  - Durability: guarantees recovery if the DBMS stops running (Mohan)
5 Serializability and Concurrent Deposits

  Deposit 1            Deposit 2
  read(X.bal)
                       read(X.bal)
  X.bal = X.bal + 50
                       X.bal = X.bal + 10
  write(X.bal)
                       write(X.bal)
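A minimal sketch (not from the slides; the starting balance of 100 is an assumption) of why this interleaving is a lost update: both deposits read before either writes, so the second write clobbers the first.

```python
balance = 100  # X.bal

# Interleaved schedule: both deposits read before either writes back.
r1 = balance          # Deposit 1: read(X.bal)
r2 = balance          # Deposit 2: read(X.bal)
balance = r1 + 50     # Deposit 1: X.bal = X.bal + 50; write(X.bal)
balance = r2 + 10     # Deposit 2: X.bal = X.bal + 10; write(X.bal)

print(balance)  # 110 -- Deposit 1's update is lost; any serial order gives 160
```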
6 Violations of Serializability
- Dirty data: data written by an uncommitted transaction; a dirty read is a read of dirty data (WR conflict)
- Unrepeatable read: a transaction reads the same data item twice and gets different values (RW conflict)
- Phantom problem: a transaction retrieves a collection of tuples twice and sees different results
7 Two Approaches to Serializability (or Other Consistency Models)
- Locking: a pessimistic strategy
  - First paper (Gray et al.): hierarchical locking, plus ways of compromising serializability for performance
- Optimistic concurrency control
  - Second paper (Kung & Robinson): allow writes by each session in parallel, then try to substitute them in (or reapply to merge)
8 Locking
- A lock manager grants and releases locks on objects
- Two basic types of locks:
  - Shared locks (read locks) allow other shared locks to be granted on the same item
  - Exclusive locks (write locks) do not coexist with any other locks
- Generally granted under the two-phase locking (2PL) model (see the sketch below):
  - Growing phase: locks are granted
  - Shrinking phase: locks are released (no new locks granted)
- Well-formed, two-phase locking guarantees serializability
- Strict 2PL: the shrinking phase is at the end of the transaction
- (Note that deadlocks are possible!)
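A minimal sketch (mine, not from the Gray et al. paper) of a strict-2PL-style lock manager: S/X locks with a trivial compatibility test, and all locks released only at commit/abort, so the shrinking phase happens entirely at the end.

```python
from collections import defaultdict

COMPATIBLE = {("S", "S")}  # only shared locks coexist

class LockManager:
    def __init__(self):
        self.locks = defaultdict(dict)  # item -> {txn: mode}

    def acquire(self, txn, item, mode):
        """Grant an S or X lock, or return False (caller waits or aborts)."""
        holders = self.locks[item]
        for other, held in holders.items():
            if other != txn and (held, mode) not in COMPATIBLE:
                return False  # conflict: wait here is where deadlock can arise
        if holders.get(txn) != "X":
            holders[txn] = mode  # grant, or upgrade S -> X
        return True

    def release_all(self, txn):
        """Strict 2PL: release everything only at commit/abort."""
        for holders in self.locks.values():
            holders.pop(txn, None)
```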
9 Gray et al.: Granularity of Locks
- For performance and concurrency, we want different levels of lock granularity, i.e., a hierarchy of locks:
  - database
  - extent
  - table
  - page
  - row
  - attribute
- But a problem arises:
  - What if T1 S-locks a row and T2 wants to X-lock a table?
  - How do we easily check whether we should give a lock to T2?
10 Intention Locks
- Two basic types:
  - Intention to Share (IS): a descendant item will be locked with a shared lock
  - Intention Exclusive (IX): a descendant item will be locked with an exclusive lock
- Locks are granted top-down, released bottom-up
  - T1 grabs an IS lock on the table and page, an S lock on the row
  - T2 can't get an X-lock on the table until T1 is done
  - But T3 can get an IS or S lock on the table
11 Lock Compatibility Matrix
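The matrix itself appears only as an image in the original deck. As a sketch, the standard Gray et al. compatibility rules for the modes these slides use (IS, IX, S, X; the paper also defines SIX, which the slides omit), in checkable Python form:

```python
# True = the requested mode (column) is compatible with the held mode (row).
COMPAT = {
    "IS": {"IS": True,  "IX": True,  "S": True,  "X": False},
    "IX": {"IS": True,  "IX": True,  "S": False, "X": False},
    "S":  {"IS": True,  "IX": False, "S": True,  "X": False},
    "X":  {"IS": False, "IX": False, "S": False, "X": False},
}

def can_grant(requested, held_modes):
    """A request is grantable iff compatible with every held mode."""
    return all(COMPAT[held][requested] for held in held_modes)

# T1 holds IS on the table (it S-locked a row underneath):
print(can_grant("X", ["IS"]))   # False -- T2's table X-lock must wait
print(can_grant("S", ["IS"]))   # True  -- T3 can still S-lock the table
```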
12 Lock Implementation
- Maintain a hash table keyed on the items to lock
- Lock/unlock are atomic operations in critical sections
- First-come, first-served queue for each locked object
  - All adjacent, compatible requests form a compatible group
  - The group's mode is the most restrictive of its members'
- What if a transaction wants to convert (upgrade) its lock? Should we send it to the back of the queue?
  - No: that will almost assuredly deadlock!
  - Handle conversions immediately after the current group
13 Degrees of Consistency
- Full locking, guaranteeing serializability, is generally very expensive
- So Gray et al. propose several degrees of consistency as a compromise (these are roughly the SQL isolation levels):
  - Degree 0: T doesn't overwrite dirty data of other transactions
  - Degree 1: the above, plus T does not commit writes before EOT
  - Degree 2: the above, plus T doesn't read dirty data
  - Degree 3: the above, plus other transactions don't dirty any data T read
14 Degrees and Locking
- Degree 0: short write locks on updated items
- Degree 1: long write locks on updated items
- Degree 2: long write locks on updated items, short read locks on read items
- Degree 3: long write and read locks
- Does Degree 3 prevent phantoms? If not, how do we fix this?
15 What If We Don't Want to Lock?
- Conflicts may be very uncommon, so why incur the overhead of locking?
  - Typically hundreds of instructions for every lock/unlock
  - Examples: read-mostly DBs; large DBs with few collisions; append-mostly hierarchical data
- Kung & Robinson: break each transaction into three phases:
  - Read: read, and write to a private copy of each page (i.e., copy-on-write)
  - Validation: make sure there are no conflicts between transactions
  - Write: swap the private copies in for the public ones
16 Validation
- Goal: guarantee that only serializable schedules result when merging Ti's and Tj's writes
- Approach: find an equivalent serializable schedule
  - Assign each transaction a number, TN
  - Ensure an equivalent serial schedule as follows: if TN(Ti) < TN(Tj), one of the following must hold:
    1. Ti finishes writing before Tj starts reading (serial)
    2. WS(Ti) is disjoint from RS(Tj), and Ti finishes writing before Tj starts writing
    3. WS(Ti) is disjoint from RS(Tj), WS(Ti) is disjoint from WS(Tj), and Ti finishes its read phase before Tj completes its read phase
- (A checkable version of these conditions is sketched below)
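A minimal sketch of the three conditions, assuming transactions are plain Python dicts carrying read/write sets (RS, WS) and phase-boundary timestamps (read_start, read_end, write_start, write_end); none of these names come from the paper.

```python
def validates(ti, tj):
    """True if Tj may keep TN(Tj) > TN(Ti); False means Tj must abort."""
    # (1) Serial: Ti finished writing before Tj started reading.
    if ti["write_end"] < tj["read_start"]:
        return True
    # (2) WS(Ti) disjoint from RS(Tj), and Ti finished writing first.
    if not (ti["WS"] & tj["RS"]) and ti["write_end"] < tj["write_start"]:
        return True
    # (3) WS(Ti) disjoint from RS(Tj) and WS(Tj), and Ti finished its
    #     read phase before Tj completed its read phase.
    if (not (ti["WS"] & tj["RS"]) and not (ti["WS"] & tj["WS"])
            and ti["read_end"] < tj["read_end"]):
        return True
    return False  # conflict: Tj aborts and restarts
```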
17 Why Does This Work?
- Condition 1: obvious, since it's serial
- Condition 2:
  - No W-R conflicts, since the sets are disjoint
  - In all R-W conflicts, Ti precedes Tj, since Ti reads before it writes (and that's before Tj)
  - In all W-W conflicts, Ti precedes Tj
- Condition 3:
  - No W-R conflicts, since the sets are disjoint
  - No W-W conflicts, since the sets are disjoint
  - In all R-W conflicts, Ti precedes Tj, since Ti reads before it writes (and that's before Tj)
18 The Achilles' Heel
- How do we assign TNs?
  - Not optimistically: they get assigned at the end of the read phase
- Note that we need to maintain all of the read and write sets for transactions that are running concurrently; long-lived read phases cause difficulty here
  - Solution: bound the buffer; abort and restart transactions when out of space
  - Drawback: starvation; may need to solve it by locking the whole DB!
19 Serial Validation
- Simple case: writes won't be interleaved, so test:
  1. Ti finishes writing before Tj starts reading (serial)
  2. WS(Ti) is disjoint from RS(Tj), and Ti finishes writing before Tj starts writing
- Put in a critical section (sketched below):
  - Get TN
  - Test conditions 1 and 2 against everyone up to TN
  - Write
- A long critical section limits the parallelism of validation, so we can optimize:
  - Outside the critical section, get a TN and validate up to there
  - Before writing, in the critical section, get a new TN, validate up to that, then write
- Read-only transactions: no need for a TN; just validate up to the highest TN at the end of the read phase (no critical section)
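A minimal sketch (mine, not Kung & Robinson's pseudocode) of the unoptimized serial scheme: one global TN counter and a single critical section, here played by a Python lock. The dict layout of transactions, the start_tn field, and the now clock parameter are all assumptions.

```python
import threading

tn_lock = threading.Lock()   # stands in for the critical section
last_tn = 0
finished = {}                # TN -> validated transaction record

def serial_validate_and_write(txn, apply_writes, now):
    """txn carries RS, read_start, and start_tn (TN counter at read start)."""
    global last_tn
    with tn_lock:
        my_tn = last_tn + 1
        for tn in range(txn["start_tn"] + 1, my_tn):
            ti = finished[tn]
            # Condition (1): Ti finished writing before txn started reading.
            if ti["write_end"] < txn["read_start"]:
                continue
            # Condition (2): disjoint sets (writes are serial here, so Ti
            # has already finished writing before we write).
            if not (ti["WS"] & txn["RS"]):
                continue
            return False                  # conflict: abort and restart
        apply_writes(txn)                 # swap private copies in
        txn["write_end"] = now()
        finished[my_tn] = txn
        last_tn = my_tn
    return True
```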
20 Parallel Validation
- For allowing interleaved writes
- Save the set of active transactions (finished reading, not yet done writing)
  - Abort if they intersect our current read/write set
- Validate:
  - (CRIT) Get TN; copy the active set; add self to the active set
  - Check conditions (1) and (2) against everything from start to finish
  - Check condition (3) against the whole active set
- If OK, write
  - (CRIT) Increment the TN counter; remove self from the active set
- Drawback: we might conflict under condition (3) with someone who gets aborted
21 Who's the Top Dog? Optimistic vs. Non-Optimistic
- Drawbacks of the optimistic approach:
  - Generally requires some sort of global state, e.g., the TN counter
  - If there's a conflict, requires abort and full restart
- Study by Agrawal et al. comparing optimistic vs. locking:
  - Need load control with low resources
  - Locking is better with moderate resources
  - Optimistic is better with infinite or high resources
- Both of these provide isolation; transactions and policies ensure consistency. What about atomicity and durability?
22 Rollback and Recovery
- The Recovery Manager provides:
  - Atomicity
    - Transactions may abort (rollback) to the start or to a savepoint
  - Durability
    - What if the DBMS stops running? (Causes?)
- Desired behavior after the system restarts:
  - T1, T2, and T3 should be durable
  - T4 and T5 should be aborted (effects not seen)

[Timeline figure: transactions T1-T5; T1, T2, and T3 complete before the crash, while T4 and T5 are still active when it occurs]
23 Assumptions in Recovery Schemes
- We're using concurrency control via locks
  - Strict 2PL, at least at the page level, possibly the record level
- Updates are happening in place
  - No shadow pages; data is overwritten on (deleted from) the disk
- ARIES
  - Algorithm for Recovery and Isolation Exploiting Semantics
  - Attempts to provide a simple, systematic scheme to guarantee atomicity & durability with good performance
- Let's begin with some of the issues faced by any DBMS recovery scheme
24 Managing Pages in the Buffer Pool
- The buffer pool is finite, so:
  - Q: How do we guarantee durability of committed data?
  - A: We need a policy on what happens when a transaction completes, and on what transactions can do to get more pages
- Force write of buffer pages to disk at commit?
  - Provides durability
  - But poor response time
- Steal buffer-pool frames from uncommitted Xacts?
  - If not, poor throughput
  - If so, how can we ensure atomicity?

             No Steal   Steal
  Force      Trivial
  No Force              Desired
25 More on Steal and Force
- STEAL (why enforcing Atomicity is hard)
  - To steal frame F: the current page in F (say P) is written to disk while some Xact holds a lock on P
  - What if the Xact with the lock on P aborts?
  - Must remember the old value of P at steal time (to support UNDOing the write to page P)
- NO FORCE (why enforcing Durability is hard)
  - What if the system crashes before a modified page is written to disk?
  - Write as little as possible, in a convenient place, at commit time, to support REDOing modifications
26 Basic Idea: Logging
- Record REDO and UNDO information, for every update, in a log
  - Sequential writes to the log (put it on a separate disk)
  - Minimal info (diff) written to the log, so multiple updates fit in a single log page
- Log: an ordered list of REDO/UNDO actions
  - Each log record contains the data needed to redo/undo one update, plus additional control info (which we'll see soon)
27 Write-Ahead Logging (WAL)
- The Write-Ahead Logging protocol:
  - Force the log record for an update before the corresponding data page gets to disk
    - Guarantees Atomicity
  - Write all log records for a Xact before commit
    - Guarantees Durability (we can always rebuild from the log)
- Is there a systematic way of doing logging (and recovery!)?
  - The ARIES family of algorithms
28 WAL & the Log

[Figure: the log tail sits in RAM; earlier log records are flushed to disk; each data page carries a pageLSN; the system tracks flushedLSN]

- Each log record has a unique Log Sequence Number (LSN)
  - LSNs always increase
- Each data page contains a pageLSN
  - The LSN of the most recent log record for an update to that page
- The system keeps track of flushedLSN
  - The max LSN flushed so far
- WAL: before a page is written to disk, ensure pageLSN ≤ flushedLSN
29 Log Records
- Possible log record types:
  - Update
  - Commit
  - Abort
  - End (signifies the end of commit or abort)
  - Compensation Log Records (CLRs)
    - To log UNDO actions

[Figure: LogRecord fields, some of which appear in update records only]
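The slide's field diagram survives only as the caption above. As a sketch, one plausible layout as a Python dataclass, assuming the standard fields from the Ramakrishnan/Gehrke presentation of ARIES (prevLSN, XID, type, plus page/offset/length and before/after images for update records):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LogRecord:
    lsn: int
    prev_lsn: Optional[int]          # backward chain through this Xact's records
    xid: int
    type: str                        # "update", "commit", "abort", "end", "CLR"
    # Update records (and CLRs) only:
    page_id: Optional[int] = None
    offset: Optional[int] = None
    length: Optional[int] = None
    before_image: Optional[bytes] = None   # supports UNDO
    after_image: Optional[bytes] = None    # supports REDO
    undo_next_lsn: Optional[int] = None    # CLRs only (see slide 35)
```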
30 Other Log-Related State
- Transaction Table
  - One entry per active Xact
  - Contains XID, status (running/committed/aborted), and lastLSN
- Dirty Page Table
  - One entry per dirty page in the buffer pool
  - Contains recLSN: the LSN of the log record which first caused the page to be dirty
- (Both tables are sketched below)
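A minimal sketch of the two tables, as plain dataclasses keyed by XID and page ID; the field names are mine, mirroring the slide's terms:

```python
from dataclasses import dataclass

@dataclass
class XactEntry:
    xid: int
    status: str        # "running", "committed", or "aborted"
    last_lsn: int      # LSN of this Xact's most recent log record

@dataclass
class DirtyPageEntry:
    page_id: int
    rec_lsn: int       # LSN of the record that first dirtied the page

xact_table: dict[int, XactEntry] = {}              # one entry per active Xact
dirty_page_table: dict[int, DirtyPageEntry] = {}   # one entry per dirty page
```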
31 Normal Execution of an Xact
- A series of reads & writes, followed by commit or abort
  - We will assume that a write is atomic on disk
  - In practice, additional details are needed to deal with non-atomic writes
- Strict 2PL
- STEAL, NO-FORCE buffer management, with Write-Ahead Logging
32 Checkpointing
- Periodically, the DBMS creates a checkpoint
  - Minimizes recovery time in the event of a system crash
- Write to the log:
  - begin_checkpoint record: when the checkpoint began
  - end_checkpoint record: the current Xact table and dirty page table
- This is a fuzzy checkpoint (sketched below):
  - Other Xacts continue to run, so these tables are accurate only as of the time of the begin_checkpoint record
  - No attempt is made to force dirty pages to disk; the effectiveness of a checkpoint is limited by the oldest unwritten change to a dirty page. (So it's a good idea to periodically flush dirty pages to disk!)
- Store the LSN of the checkpoint record in a safe place (the master record)
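A minimal sketch of a fuzzy checkpoint, assuming a hypothetical log object whose append() returns an LSN, a master-record handle, and the two tables from the previous sketch:

```python
def take_checkpoint(log, master):
    """Fuzzy checkpoint: snapshot the tables without quiescing Xacts."""
    begin_lsn = log.append(("begin_checkpoint",))
    # Other Xacts keep running, so this snapshot is only guaranteed
    # accurate as of the begin_checkpoint record.
    log.append(("end_checkpoint",
                dict(xact_table), dict(dirty_page_table)))
    log.flush()
    master.write(begin_lsn)   # safe place: lets restart find the checkpoint
```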
33 The Big Picture: What's Stored Where

[Figure: the LOG holds log records; RAM holds the Xact table (lastLSN, status), the Dirty Page table (recLSN), and flushedLSN; the DB holds data pages (each with a pageLSN) and the master record]
34 Simple Transaction Abort
- For now, consider an explicit abort of a Xact
  - No crash involved
- We want to play back the log in reverse order, UNDOing updates
  - Get the lastLSN of the Xact from the Xact table
  - We can follow the chain of log records backward via the prevLSN field
  - Before starting UNDO, write an Abort log record
    - For recovering from a crash during UNDO!
35 Abort, cont.
- To perform UNDO, we must have a lock on the data!
  - No problem: no one else can be locking it
- Before restoring the old value of a page, write a CLR
  - You continue logging while you UNDO!!
  - A CLR has one extra field: undoNextLSN
    - Points to the next LSN to undo (i.e., the prevLSN of the record we're currently undoing)
  - CLRs are never undone (but they might be redone when repeating history: this guarantees Atomicity!)
- At the end of UNDO, write an end log record (see the sketch below)
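A minimal sketch of explicit abort, assuming the LogRecord and XactEntry types above; the helpers append_abort, append_clr, append_end, restore_page, and LSN-indexed log access are hypothetical, not a real API:

```python
def abort(xid, log, xact_table):
    """Roll back one Xact by walking its prevLSN chain, logging CLRs."""
    log.append_abort(xid)                 # survives a crash during UNDO
    lsn = xact_table[xid].last_lsn
    while lsn is not None:
        rec = log[lsn]                    # fetch record by LSN (assumed)
        if rec.type == "update":
            # Keep logging while undoing: write the CLR, then restore.
            log.append_clr(xid, rec.page_id, rec.before_image,
                           undo_next_lsn=rec.prev_lsn)
            restore_page(rec.page_id, rec.offset, rec.before_image)
        lsn = rec.prev_lsn                # follow the backward chain
    log.append_end(xid)                   # UNDO complete
```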
36 Transaction Commit
- Write a commit record to the log
- All log records up to the Xact's lastLSN are flushed
  - Guarantees that flushedLSN ≥ lastLSN
  - Note that log flushes are sequential, synchronous writes to disk
  - Many log records fit per log page
- Commit() returns
- Write an end record to the log
37 Crash Recovery: Big Picture
- Start from a checkpoint (found via the master record)
- Three phases:
  - Analysis: figure out which Xacts committed since the checkpoint, and which failed
  - REDO all actions (repeat history)
  - UNDO the effects of failed Xacts

[Figure: a timeline of the log marking the last checkpoint, the oldest log record of any Xact active at the crash, the smallest recLSN in the dirty page table after Analysis, and the CRASH point; arrows A, R, and U show how far Analysis, REDO, and UNDO each scan]
38 Recovery: The Analysis Phase
- Reconstruct state as of the checkpoint
  - Via the end_checkpoint record
- Scan the log forward from the checkpoint:
  - End record: remove the Xact from the Xact table
  - Other records: add the Xact to the Xact table, set lastLSN = LSN, change Xact status on commit
  - Update record: if P is not in the Dirty Page Table, add P to the D.P.T. and set its recLSN = LSN
- (A sketch of this scan follows below)
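A minimal sketch of the Analysis scan, assuming the types above and hypothetical log helpers (read_checkpoint_tables, scan_from); CLRs are treated as page updates here, which is an ARIES detail the slide's "update record" bullet subsumes:

```python
def analysis(log, checkpoint_lsn):
    """Forward scan from the checkpoint; rebuild Xact table and D.P.T."""
    # Start from the tables saved in the end_checkpoint record.
    xact_table, dpt = log.read_checkpoint_tables(checkpoint_lsn)
    for rec in log.scan_from(checkpoint_lsn):
        if rec.type == "end":
            xact_table.pop(rec.xid, None)          # Xact fully finished
            continue
        entry = xact_table.setdefault(
            rec.xid, XactEntry(rec.xid, "running", rec.lsn))
        entry.last_lsn = rec.lsn
        if rec.type == "commit":
            entry.status = "committed"
        if rec.type in ("update", "CLR") and rec.page_id not in dpt:
            dpt[rec.page_id] = DirtyPageEntry(rec.page_id, rec.lsn)
    return xact_table, dpt
```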
39 Recovery: The REDO Phase
- We repeat history to reconstruct state at the crash:
  - Reapply all updates (even of aborted Xacts!), and redo CLRs
- Scan forward from the log record containing the smallest recLSN in the D.P.T. For each CLR or update log record LSN, REDO the action unless:
  - the affected page is not in the Dirty Page Table, or
  - the affected page is in the D.P.T. but has recLSN > LSN, or
  - pageLSN (in the DB) ≥ LSN
- To REDO an action (sketched below):
  - Reapply the logged action
  - Set pageLSN to LSN. No additional logging!
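A minimal sketch of the REDO scan with its three skip tests, assuming a hypothetical buffer pool whose pages expose page_lsn and an apply method:

```python
def redo(log, dpt, pool):
    """Repeat history from the smallest recLSN in the Dirty Page Table."""
    start = min(entry.rec_lsn for entry in dpt.values())
    for rec in log.scan_from(start):
        if rec.type not in ("update", "CLR"):
            continue
        entry = dpt.get(rec.page_id)
        if entry is None or entry.rec_lsn > rec.lsn:
            continue                       # page was flushed before the crash
        page = pool.fetch(rec.page_id)
        if page.page_lsn >= rec.lsn:
            continue                       # update already made it to the page
        page.apply(rec.offset, rec.after_image)   # reapply the logged action
        page.page_lsn = rec.lsn            # no additional logging!
```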
40 Recovery: The UNDO Phase
- ToUndo = { l | l is the lastLSN of a "loser" Xact }
- Repeat:
  - Choose the largest LSN among ToUndo
  - If this LSN is a CLR and undoNextLSN == NULL:
    - Write an End record for this Xact
  - If this LSN is a CLR and undoNextLSN != NULL:
    - Add undoNextLSN to ToUndo
    - (Q: what happens to other CLRs?)
  - Else this LSN is an update: undo the update, write a CLR, and add prevLSN to ToUndo
- Until ToUndo is empty (see the sketch below)
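A minimal sketch of the UNDO loop, reusing the hypothetical helpers from the abort sketch (append_clr, append_end, restore_page, LSN-indexed log access):

```python
def undo(log, losers):
    """UNDO loop over ToUndo: always process the largest LSN first."""
    to_undo = {x.last_lsn for x in losers}      # lastLSN of each loser Xact
    while to_undo:
        lsn = max(to_undo)
        to_undo.remove(lsn)
        rec = log[lsn]
        if rec.type == "CLR":
            if rec.undo_next_lsn is None:
                log.append_end(rec.xid)         # this Xact is fully undone
            else:
                to_undo.add(rec.undo_next_lsn)  # skip work already undone
        else:                                   # an update record
            log.append_clr(rec.xid, rec.page_id, rec.before_image,
                           undo_next_lsn=rec.prev_lsn)
            restore_page(rec.page_id, rec.offset, rec.before_image)
            if rec.prev_lsn is not None:
                to_undo.add(rec.prev_lsn)
            else:
                log.append_end(rec.xid)         # reached the Xact's first record
    # Loop until ToUndo is empty.
```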
41 Example of Recovery

  LSN  LOG
  00   begin_checkpoint
  05   end_checkpoint
  10   update: T1 writes P5
  20   update: T2 writes P3
  30   T1 abort
  40   CLR: Undo T1 LSN 10
  45   T1 End
  50   update: T3 writes P1
  60   update: T2 writes P5
       CRASH, RESTART

[Figure annotations: prevLSN chains link each Xact's records; the Xact table (lastLSN, status), Dirty Page table (recLSN), flushedLSN, and the ToUndo set are tracked alongside]
42 Example: Crash During Restart

  LSN    LOG
  00,05  begin_checkpoint, end_checkpoint
  10     update: T1 writes P5
  20     update: T2 writes P3
  30     T1 abort
  40,45  CLR: Undo T1 LSN 10, T1 End
  50     update: T3 writes P1
  60     update: T2 writes P5
         CRASH, RESTART
  70     CLR: Undo T2 LSN 60
  80,85  CLR: Undo T3 LSN 50, T3 end
         CRASH, RESTART
  90     CLR: Undo T2 LSN 20, T2 end

[Figure annotations: undoNextLSN pointers link the CLRs; the Xact table (lastLSN, status), Dirty Page table (recLSN), flushedLSN, and the ToUndo set are tracked alongside]
43 Additional Crash Issues
- What happens if the system crashes during Analysis? During REDO?
- How do you limit the amount of work in REDO?
  - Flush dirty pages asynchronously in the background
  - Watch hot spots!
- How do you limit the amount of work in UNDO?
  - Avoid long-running Xacts
44 Summary of Logging/Recovery
- The Recovery Manager guarantees Atomicity & Durability
- Use WAL to allow STEAL/NO-FORCE without sacrificing correctness
- LSNs identify log records, which are linked into backward chains per transaction (via prevLSN)
- pageLSN allows comparison of data pages and log records
45 Summary, Continued
- Checkpointing: a quick way to limit the amount of log to scan on recovery
- Recovery works in 3 phases:
  - Analysis: forward from the checkpoint
  - Redo: forward from the oldest recLSN
  - Undo: backward from the end to the first LSN of the oldest Xact alive at the crash
- Upon Undo, write CLRs
- Redo repeats history: this simplifies the logic!