1
Isolation Concepts
Chapter 7 in Gray and Reuter
  • Adapted from slides by J. Gray & A. Reuter

2
Why Lock?
  • Give each transaction the illusion that there are
    no concurrent updates
  • Hide concurrency anomalies.
  • Do it automatically
  • Goal
  • Although there is concurrency, system execution
    is equivalent to some serial execution of the
    system
  • Not a deterministic outcome, just a consistent
    transformation

3
The Essentials
  • Notation
  • Every transaction T has a Read Set denoted R(T)
    and a Write Set denoted W(T)
  • Definition
  • T1 and T2 conflict iff W(T2) ∩ (R(T1) ∪ W(T1)) ≠ ∅
    or W(T1) ∩ (R(T2) ∪ W(T2)) ≠ ∅
  • If they conflict, delay one until the other
    finishes (a sketch of this test follows below)
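
A minimal sketch of the conflict test above (not from the slides);
transactions are represented only by their read and write sets:

    def conflicts(r1, w1, r2, w2):
        # True iff T1 and T2 conflict: one transaction's write set
        # touches the other's read or write set.
        return bool(w2 & (r1 | w1)) or bool(w1 & (r2 | w2))

    # Example: T1 reads and writes x, T2 only writes x -> conflict.
    print(conflicts({"x"}, {"x"}, set(), {"x"}))   # True
    print(conflicts({"x"}, {"x"}, {"y"}, {"y"}))   # False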

4
Laws Of Concurrency Control
  • First Law of Concurrency Control: Concurrent
    execution should not cause application programs
    to malfunction.
  • Second Law of Concurrency Control: Concurrent
    execution should not have lower throughput or
    much higher response times than serial execution.

5
Transactions and Serializability
  • Database modeled as a set of elements
    representing relations, pages, tuples or
    whatever.
  • Transactions are sets of operations which access
    these elements
  • Operations: ri[x], wi[x], ci, ai
  • A concurrent execution of several transactions is
    serializable if it is equivalent to a serial
    execution of the same transactions
  • Two histories are equivalent if all transactions
    read the same values in both histories and the
    final states of the database are identical

6
Why Serialization?
  • deposit(item x, int amt)
  •   t = read(x)
  •   write(x, t + amt)
  •   commit()
  • T1: r1[x] w1[x] c1
  • transfer(item x, item y, int amt)
  •   f = read(x)
  •   if (f < amt)
  •     abort()
  •   else
  •     write(x, f - amt)
  •     t = read(y)
  •     write(y, t + amt)
  •     commit()

7
Why Serialization?
  • 1. Lost updates - Deposit made by T1 is lost
    (a concrete interleaving is sketched below)
  • T1: r1[x] w1[x] c1
  • T2: r2[x] w2[x] r2[y] w2[y] c2
  • 2. Dirty reads - Amount deducted from x by T2
    disappears when T2 aborts
  • T1: r1[x] w1[x] c1
  • T2: r2[x] w2[x] ... a2
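
To make the lost-update anomaly concrete, here is a minimal sketch (not
from the slides) that interleaves the deposit and transfer of slide 6 on
a shared balance with no locking; the item names and amounts are made up:

    # Shared database: account x and account y.
    db = {"x": 100, "y": 0}

    f = db["x"]             # T2: r2[x]  (transfer reads 100)

    t = db["x"]             # T1: r1[x]  (deposit reads 100)
    db["x"] = t + 50        # T1: w1[x]  (x = 150), T1 commits

    db["x"] = f - 30        # T2: w2[x]  (writes 70; the 50 deposit is lost)
    db["y"] = db["y"] + 30  # T2: w2[y]
    print(db)               # {'x': 70, 'y': 30} -- x should have been 120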

8
Why Serialization?
  • Incorrect Summary - (Similar to unrepeatable read
    problem)
  • print_sum(item x, item y)
  •   f = read(x)
  •   g = read(y)
  •   printf("%d\n", f + g)
  •   commit()
  • T2: r2[x] w2[x] r2[y] w2[y] c2
  • T3: r3[x] r3[y] c3
  • Sum is short by amount being transferred.

9
Serializability Theory - Chapter 2 of Bernstein
  • Transactions
  • Definition - A transaction Ti is a partial order
    with ordering relation <i where
  • 1. Ti ⊆ {ri[x], wi[x] | x is a data item} ∪ {ai, ci} and
  • 2. ai ∈ Ti iff ci ∉ Ti and
  • 3. If t is ci or ai, then for any other operation
    p ∈ Ti, p <i t and
  • 4. If ri[x] ∈ Ti and wi[x] ∈ Ti, then either
    ri[x] <i wi[x] or wi[x] <i ri[x].

10
Transactions
  • A transaction is a partial order (Σi, <i)
  • Σi is the set of operations
  • <i is the happened-before relation for
    operations in Σi.

11
Histories
  • Histories are used to model concurrent executions
  • Definition - Let T = {T1, T2, ..., Tn} be a set of
    transactions. A complete history H over T is a
    partial order with ordering relation <H where
  • 1. H = T1 ∪ T2 ∪ ... ∪ Tn and
  • 2. <H ⊇ <1 ∪ <2 ∪ ... ∪ <n and
  • 3. For any two conflicting operations p, q ∈ H,
    p ≠ q, either p <H q or q <H p.

12
Histories
  • Example - A history H over T = {T1, T2}
  • Summary
  • H contains all of the operations in T
  • <H honors all of the orderings of the
    transactions T1, T2, ..., Tn.
  • All conflicting operations are ordered.

13
History Prefixes
  • Definition - A history H' = (Σ', <') is a prefix
    of a history H = (Σ, <H) if
  • 1. Σ' ⊆ Σ and
  • 2. ∀ p, q ∈ Σ', p <' q iff p <H q and
  • 3. ∀ p ∈ Σ', ∀ q ∈ Σ, q <H p implies q ∈ Σ'
  • Example - A prefix H' of the history H above.

14
Serializability
  • Definition- The committed projection of a history
    H, C(H), is the history obtained by removing all
    operations of transactions that are not committed
    from H.
  • Example- C(H)

15
Serializability
  • Example - Serial histories over T = {T1, T2}

16
Conflict Serializability
  • Definition - Two histories H and H' are conflict
    equivalent (≡) if
  • 1. They are defined over the same set of
    transactions and have the same set of operations,
    and
  • 2. For all conflicting operations pi and qj of
    non-aborted transactions (ai, aj ∉ H and ∉ H'),
    pi <H qj iff pi <H' qj
  • Definition - A history H is conflict serializable if
    C(H) is conflict equivalent to some serial
    history Hs.

17
Conflict Serializability
  • Example - History H is not conflict serializable
    because
  • w1[x] <H w2[x] rules out equivalence with T2 T1
  • w2[y] <H r1[y] rules out equivalence with T1 T2

18
Conflict Serializability
  • Example - History H is conflict serializable. It
    is equivalent to T1 T2.

19
Serializability Theorem
  • Definition - The serialization graph of a history
    H, SG(H), is a directed graph whose nodes are the
    committed transactions in H and whose edges are
    all Ti → Tj (i ≠ j) such that pi <H qj and
    pi conflicts with qj.
  • Examples -
  • 1. SG(H)

20
Serializability Theorem
  • 2. SG(H)
  • 3. SG(H1)
    H1: r1[x] w2[x] c2 w1[y] c1 r3[x] w3[y] c3

21
Serializability Theorem
  • 4. SG(H2)
  • H2: r1[x] w2[y] w2[x] r3[x] w1[y] c1 w3[y] c3 c2
  • Does H2 really violate consistency?
  • T1 reads the original version of x.
  • T2 does not read any values and writes the final
    value of x.
  • T3 reads x from T2 and writes the final value of y.
  • Exactly what would happen in the serial order
    T1 T2 T3.

22
Serializability Theorem
  • Serializability Theorem - A history H is conflict
    serializable iff SG(H) is acyclic. (A sketch of
    this test as code appears after the proof.)
  • Proof
  • 1. If SG(H) is acyclic then H is conflict
    serializable.
  • a) Assume SG(H) is acyclic (we will show H is
    CSR)
  • b) Let Hs be a serial history including all
    committed transactions in H.
  • c) Order transactions in Hs such that if Ti → Tj
    then Ti precedes Tj in Hs, i.e., the order of
    transactions in Hs is a topological sort of
    SG(H). (Possible because SG(H) is acyclic.)

23
Serializability Theorem (Proof Cont.)
  • d) C(H) ≡ Hs
  • Let pi and qj be conflicting operations from
    distinct transactions in C(H) such that pi <H qj
  • Ti and Tj are committed (in C(H)), so they must
    be included in SG(H)
  • There must be an edge Ti → Tj in SG(H) because pi
    and qj conflict and pi <H qj.
  • Our construction ensures that Ti precedes Tj in
    Hs, and thus pi <Hs qj

24
Serializability Theorem (Proof Cont.)
  • 2. If H is conflict serializable then SG(H) is
    acyclic.
  • a) Assume H is conflict serializable (we will
    show SG(H) is acyclic)
  • b) Let Hs be a serial history such that C(H) ≡ Hs
    (there must be one because H is conflict
    serializable.)
  • c) Ti → Tj in SG(H) implies Ti <Hs Tj
  • Because Ti → Tj is in SG(H) there must be
    conflicting operations pi and qj in C(H) where
    pi <H qj
  • Since C(H) ≡ Hs, we know that pi <Hs qj and
    therefore Ti <Hs Tj

25
Serializability Theorem (Proof Cont.)
  • d) SG(H) is acyclic
  • Assume for the sake of a contradiction that there
    is a cycle T1 → T2 → ... → Tn → T1 in SG(H)
  • From the argument in (c) this implies T1 <Hs T2 <Hs
    ... <Hs Tn <Hs T1
  • By transitivity we get T1 <Hs T1, which is clearly
    impossible
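
The theorem gives a direct test: build SG(H) from the conflicts in a
history and check it for a cycle. Below is a minimal sketch (not from
the text); encoding a history as (op, transaction, item) tuples is an
assumption made for illustration:

    def serialization_graph(history):
        # history: list of (op, txn, item); op is 'r', 'w', 'c', or 'a'.
        committed = {t for op, t, _ in history if op == "c"}
        edges = set()
        for i, (p, ti, x) in enumerate(history):
            for q, tj, y in history[i + 1:]:
                if (ti != tj and x is not None and x == y
                        and "w" in (p, q)          # conflicting ops on same item
                        and ti in committed and tj in committed):
                    edges.add((ti, tj))
        return committed, edges

    def is_conflict_serializable(history):
        nodes, edges = serialization_graph(history)
        # Kahn-style cycle check: repeatedly remove nodes with no
        # incoming edge; if none can be removed, a cycle remains.
        while nodes:
            sources = {n for n in nodes if not any(v == n for _, v in edges)}
            if not sources:
                return False
            nodes -= sources
            edges = {(u, v) for u, v in edges if u in nodes and v in nodes}
        return True

    # H1 from slide 20: r1[x] w2[x] c2 w1[y] c1 r3[x] w3[y] c3
    H1 = [("r", 1, "x"), ("w", 2, "x"), ("c", 2, None), ("w", 1, "y"),
          ("c", 1, None), ("r", 3, "x"), ("w", 3, "y"), ("c", 3, None)]
    print(is_conflict_serializable(H1))            # True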

26
Properties of Histories
  • Definition - Transaction Ti reads-x-from Tj in H
    if
  • 1. wj[x] <H ri[x] and
  • 2. aj does not precede ri[x] in H and
  • 3. ∀ wk[x] ∈ H, wj[x] <H wk[x] <H ri[x] implies
    ak <H ri[x].
  • Example -
  • H = w1[x] w2[y] r1[y] w2[x] a2 r1[x]
  • T1 reads-y-from T2
  • T1 reads-x-from T1

27
Properties of Histories
  • Definition - A history H is recoverable (RC) if Ti
    reads-from Tj (i ≠ j) and ci ∈ H implies cj <H ci
  • Example - H = r1[x] w2[y] w2[x] r3[x] w1[y] c1
    w3[y] c3 c2
  • T3 reads-x-from T2 but c2 does not precede c3 in
    H, thus H is not RC.
  • Definition - A history H avoids cascading aborts
    (ACA) if Ti reads-x-from Tj (i ≠ j) implies
    cj <H ri[x]
  • Definition - A history H is strict (ST) if wi[x]
    <H oj[x] (i ≠ j) implies either
  • ai <H oj[x] or
  • ci <H oj[x].

28
View Serializability
  • Definition of serializability based on the view
    transactions have of the database
  • Definition - A write wi[x] is the final write of x
    in H if
  • wi[x] ∈ H and
  • ai ∉ H and
  • ∀ wj[x] ∈ H (i ≠ j), either wj[x] <H wi[x] or aj ∈ H

29
View Serializability (Cont.)
  • Definition - Two histories H and H' are view
    equivalent if
  • 1. They are over the same transactions and have
    the same operations and
  • 2. For all Ti and Tj not aborted in H, Ti
    reads-x-from Tj in H iff Ti reads-x-from Tj in
    H' and
  • 3. wi[x] is the final write of x in H iff wi[x]
    is the final write of x in H'.
  • Definition - A history H is view serializable if
    for every prefix H' of H, C(H') is view
    equivalent to a serial history

30
View Serializability (Cont.)
  • Example -
  • H = r1[x] w2[y] w2[x] c2 r3[x] w1[y] c1 w3[y] c3
  • 1. H1 = r1[x] w2[y] w2[x] c2
  • C(H1) = w2[y] w2[x] c2
  • C(H1) is view equivalent to a serial history
    (itself)
  • 2. H2 = r1[x] w2[y] w2[x] c2 r3[x] w1[y] c1
  • C(H2) = r1[x] w2[y] w2[x] c2 w1[y] c1
  • C(H2) is not view equivalent to T1 T2 (w1[y], not
    w2[y], is the final write of y in H2)
  • C(H2) is not view equivalent to T2 T1
31
View Serializability (Cont.)
  • Example -
  • H = r1[x] w2[y] w2[x] r3[x] w1[y] c1 w3[y] c3 c2
  • 2. H2 = r1[x] w2[y] w2[x] r3[x] w1[y] c1 w3[y]
    c3
  • C(H2) = r1[x] r3[x] w1[y] c1 w3[y] c3
  • C(H2) is view equivalent to T1 T3
  • T3 writes the final version of y in both histories
  • 3. H3 = r1[x] w2[y] w2[x] r3[x] w1[y] c1 w3[y]
    c3 c2
  • C(H3) = r1[x] w2[y] w2[x] r3[x] w1[y] c1 w3[y]
    c3 c2
  • C(H3) is view equivalent to T1 T2 T3
  • T3 reads-x-from T2 in both histories.
  • T2 writes the final (only) version of x in both
    histories
  • T3 writes the final version of y in both histories

32
Two-Phase Locking (2PL)
  • Notation
  • rli[x] - A read lock for element x granted to
    transaction Ti
  • wli[x] - A write lock for x granted to Ti
  • rui[x] - Release of a read lock on x held by
    transaction Ti
  • wui[x] - Release of a write lock on x held by Ti

33
Basic 2PL
  • Protocol (a lock-manager sketch follows below)
  • 1. Before executing pi[x], a lock pli[x] must be
    acquired on Ti's behalf. If another transaction
    Tj is holding a lock qlj[x] that conflicts with
    pli[x], then the operation is delayed until the
    lock can be set.
  • 2. The scheduler cannot release the lock pli[x]
    until at least the completion of pi[x] has been
    acknowledged.
  • 3. A transaction cannot acquire any new locks
    after it has released a lock.
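
A minimal sketch (not the book's code) of a scheduler enforcing these
rules: acquire a lock before each operation, and refuse any new lock
once the transaction has released one; all names are illustrative:

    class TwoPhaseError(Exception):
        pass

    class TwoPLScheduler:
        def __init__(self):
            self.locks = {}         # item -> (mode, set of holding txns)
            self.shrinking = set()  # txns that have already released a lock

        def lock(self, txn, item, mode):
            if txn in self.shrinking:
                raise TwoPhaseError(txn + " already released a lock (rule 3)")
            held = self.locks.get(item)
            if held is None:
                self.locks[item] = (mode, {txn})
                return True
            held_mode, holders = held
            if ("X" in (mode, held_mode)) and holders != {txn}:
                return False        # conflict: delay the operation (rule 1)
            holders.add(txn)
            self.locks[item] = (max(held_mode, mode), holders)  # 'S' < 'X'
            return True

        def unlock(self, txn, item):
            self.shrinking.add(txn) # no new locks from now on (rule 3)
            mode, holders = self.locks[item]
            holders.discard(txn)
            if not holders:
                del self.locks[item]

    # T1 locks and releases x; a later lock request must be rejected.
    s = TwoPLScheduler()
    s.lock("T1", "x", "X"); s.unlock("T1", "x")
    try:
        s.lock("T1", "y", "S")
    except TwoPhaseError as e:
        print(e)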

34
Correctness of 2PL
  • Characteristics of 2PL Histories
  • 1. If oi[x] ∈ C(H) then
  • a) oli[x] ∈ C(H) and oli[x] <H oi[x] and
  • b) oui[x] ∈ C(H) implies oi[x] <H oui[x]
  • 2. If pi[x] and qj[x] (i ≠ j) are conflicting
    operations on x in C(H), then either
  • a) pui[x] <H qlj[x] or
  • b) quj[x] <H pli[x]
  • 3. If pi[x] and qi[y] are in C(H), then pli[x] <H
    qui[y]

35
Correctness of 2PL (Cont.)
  • Theorem - Every 2PL history is CSR
  • Proof
  • I. Ti → Tj ∈ SG(H) implies there is an element x
    for which pui[x] <H qlj[x]
  • a) The edge Ti → Tj ∈ SG(H) implies that pi[x] <H
    qj[x] for some conflicting pair.
  • b) By 2 above, we know that pui[x] <H qlj[x] or
    quj[x] <H pli[x]
  • c) Assume quj[x] <H pli[x]. By 1(a), 1(b) and
    transitivity, we get qj[x] <H pi[x]. But this
    contradicts I(a).

36
Correctness of 2PL (Cont.)
  • II. If T1 → T2 → ... → Tk is a path in SG(H) then
    pu1[x] <H qlk[y] for some x and y.
  • a) Basis (k = 2): T1 → T2 ∈ SG(H);
    pu1[x] <H ql2[x] holds by the argument in I.
  • b) Induction step: Assume the hypothesis holds
    for the path T1 → T2 → ... → Tk, i.e.,
    pu1[x] <H qlk[z]. Now consider the path
    T1 → T2 → ... → Tk → Tk+1.
  • qlk[z] <H puk[y] follows from 3 above.
  • Because Tk → Tk+1 ∈ SG(H), from I we know there
    is a y such that puk[y] <H qlk+1[y]
  • Thus, qlk[z] <H qlk+1[y]. Combining this with the
    induction hypothesis (pu1[x] <H qlk[z]), we get
    pu1[x] <H qlk+1[y]

37
Correctness of 2PL (Cont.)
  • III. Assume that SG(H) contains a cycle T1 → T2 →
    ... → Tn → T1. From II, this implies pu1[x] <H
    ql1[y]. But this contradicts the two-phase rule
    (3). Hence SG(H) is acyclic, and by the
    Serializability Theorem every 2PL history is CSR.

38
Other Variations
  • Conservative 2PL
  • 1. When a transaction starts, it predeclares the
    set of elements it will read and write
  • 2. A transaction must acquire all of its locks
    before it executes any operations. If all locks
    cannot be acquired, any acquired locks are
    released.
  • 3. Deadlock is avoided because no locks are held
    while requesting other locks.
  • Strict 2PL (Rigorous 2PL)
  • 1. Release all locks only after transaction
    commits.
  • 2. Rigorous histories ensure that serialization
    order is compatible with commit order.

39
Distributed 2PL
  • Each resource manager carries out 2PL at its own
    site.
  • Must ensure that all sites agree on serialization
    order.
  • Example-
  • Site SA:
  • T1A: w1A[x] c1A
  • T2A: r2A[x] c2A
  • Site SB:
  • T1B: w1B[y] c1B
  • T2B: r2B[y] c2B

40
Distributed 2PL
  • Each local history is strict and 2PL, and the
    commit order is the same at both sites (c1 < c2).
    Still, the history is not globally serializable:
    T2 reads-x-from T1 at site SA and T1 reads-y-from
    T2 at site SB.
  • Because the read lock in T2B is released before
    commit, the serialization order at site SB does
    not match the commit order.

41
Serializability Requirements
  • Lock everything the transaction accesses
  • Do not lock after unlock.
  • Backout may have to undo an unlock (i.e.,
    re-acquire the lock).
  • So do not release locks prior to commit

42
Degrees of Isolation
  • SQL allows the client to trade off isolation
    against performance by specifying a degree of
    isolation. (A compact summary sketch follows
    below.)
  • 0 - Does not overwrite another transaction's
    dirty data if the other transaction is degree 1 or
    better.
  • Transaction gets short exclusive locks for writes
    (well-formed writes, not two-phase; no read locks)
  • 1 - No lost updates
  • Transaction gets no read locks (well-formed and
    two-phase writes)
  • 2 - No lost updates or dirty reads
  • Transaction releases read locks right after the
    read (well-formed with respect to reads but not
    two-phase with respect to reads)
  • 3 - No lost updates and repeatable reads
    (implies serializability)
  • Well-formed and two-phase
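
As a compact summary of the rules above (a sketch, not code from the
text), the lock behavior per degree can be written down directly:

    # Lock durations per degree of isolation: "long" means held
    # two-phase (until commit), "short" means released right after the
    # operation, None means no lock is taken.
    DEGREES = {
        0: {"write_lock": "short", "read_lock": None},     # no dirty overwrites
        1: {"write_lock": "long",  "read_lock": None},     # no lost updates
        2: {"write_lock": "long",  "read_lock": "short"},  # no dirty reads
        3: {"write_lock": "long",  "read_lock": "long"},   # repeatable reads
    }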

43
Isolation Levels Theorem
  • What is the effect of some transactions employing
    an isolation level lower than 3?
  • If other transactions lock at degree 1 or better
    and I obey the degree 0, 1, 2 or 3 protocol, then
    any legal history gives me degree 0, 1, 2 or 3
    isolation respectively.
  • But the DB may be corrupted!
  • Must ensure that if I allow dirty reads, I can
    still produce consistent updates.

44
Comparison of Isolation Levels
(comparison table not reproduced in this transcript)
45
Comparison of Isolation Levels
46
The Phantom Problem
  • Phantom Records
  • If I try to read hair = "red" and eyes = "blue"
    and get "not found", what gets locked? No records
    have been accessed, so no records get locked
  • If I delete a record, what gets locked? (the
    record is gone)
  • Predicate Locks can solve this problem
  • Page Locks (done right) can also solve this
    problem
  • Lock the red-hair page and the blue-eye page;
  • prevents others' red-hair and blue-eye inserts and
    updates
  • High volume TP systems use esoteric locking
    mechanisms
  • Key Range Locks to protect b-trees
  • Hole Locks to protect space for uncommitted
    deletes

47
Predicate Locks
  • Read and write sets can be defined by predicates
    (e.g., WHERE clauses in SQL statements)
  • When a transaction accesses a set for the first
    time,
  • Automatically capture the predicate
  • Do set intersection with the predicates of others.
  • Delay this transaction if it conflicts with others.
  • Problems with predicate locks (a sketch of the
    intersection test follows below)
  • Set intersection (predicate satisfiability) is
    NP-complete (slow).
  • Hard to capture predicates
  • Pessimistic: Jim locks eyes = "blue", Andreas
    locks hair = "red"
  • The predicates say conflict, but the DB may have
    no blue-eyed, red-haired person.
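
A minimal sketch (the predicate encoding and names are made up for
illustration) of the pessimistic intersection test: two equality
predicates on different attributes are always jointly satisfiable, so
they are reported as a conflict even if no matching record exists:

    def predicates_conflict(p1, p2, one_writes):
        # Predicates are conjunctions of equalities: attribute -> value.
        # Conflict if at least one side writes and p1 AND p2 is satisfiable.
        if not one_writes:
            return False
        for attr in p1.keys() & p2.keys():
            if p1[attr] != p2[attr]:
                return False    # they demand different values: unsatisfiable
        return True

    jim     = {"eyes": "blue"}  # Jim's predicate lock
    andreas = {"hair": "red"}   # Andreas' predicate lock
    # Reported as a conflict even with no blue-eyed red-haired person.
    print(predicates_conflict(jim, andreas, one_writes=True))   # True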

48
Precision Locks (Lazy Predicate Locks)
  • Method
  • Check returned records against predicates on each
    read/write
  • Example
  • Andreas can't insert/read blue eyes
  • Jim can't insert/read red hair.
  • Evaluate predicates against records as they go
    by.

49
Granular Locks
  • Idea
  • Pick a fixed set of predicates
  • They form a lattice under and, or
  • This can be represented as a graph
  • Lock the nodes in this graph
  • Example
  • Can lock whole DB, whole file, or just one key
    value.
  • Size of lock is called granule.

50
Lock Granularity
  • Batch wants to lock whole DB
  • Interactive wants to lock records
  • How can we allow both granularities?
  • Intention mode locks on coarse granules

51
Lock Granularity: refined intent modes
  • Intent mode locks say that finer-granularity locks
    are being set below
  • If only reading at finer granularity, then the
    intent mode is compatible with S.
  • Introduce IS: intend to set fine-grain S locks
  • IX: intend to set fine-grain S or X locks
  • SIX = S + IX (a compatibility sketch follows below)
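
For reference, a sketch of the standard granular-lock compatibility
matrix written out in code (not reproduced from the slides):

    # True means the two modes can be held on the same granule by
    # different transactions at the same time.
    COMPAT = {
        ("IS", "IS"): True,  ("IS", "IX"): True,   ("IS", "S"): True,
        ("IS", "SIX"): True, ("IS", "X"): False,
        ("IX", "IX"): True,  ("IX", "S"): False,   ("IX", "SIX"): False,
        ("IX", "X"): False,
        ("S", "S"): True,    ("S", "SIX"): False,  ("S", "X"): False,
        ("SIX", "SIX"): False, ("SIX", "X"): False,
        ("X", "X"): False,
    }

    def compatible(m1, m2):
        return COMPAT.get((m1, m2), COMPAT.get((m2, m1), False))

    print(compatible("IX", "IS"))   # True: both only intend finer locks
    print(compatible("SIX", "IX"))  # False: SIX reads the whole granule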

52
Granularity Example
  • T1
  • has record locks in file 1 and file 2.
  • T2
  • all of file 3 locked shared mode
  • most of file 2 locked shared (fine granularity)
  • T3
  • waiting
  • Rules for a granularity tree
  • Lock from root to leaf
  • If setting X or S below, get IX or IS
    (respectively) above
  • Rules for a DAG (Directed Acyclic Graph)
  • Get ONE IS, IS, ..., S path for a read
  • Get ALL IX, IX, ..., X paths for a write

53
Update Mode Locks
  • Most common form of deadlock
  • T1: READ A (lock A shared)
  • T2: READ A (lock A shared)
  • T1: UPDATE A (requests A exclusive, waits for T2)
  • T2: UPDATE A (requests A exclusive, waits for T1 -
    deadlock)
  • Introduce an update mode lock

U is compatible with already-granted S locks, so
updaters do not hurt readers. S requests are not
compatible with a granted U, which makes later
readers wait.
54
Lock Conversion
  • If the requested lock is already held in some
    mode, the new mode is max(old, requested); see
    the sketch below.
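
A minimal sketch (not from the text) of the conversion rule for the
simple S < U < X ordering used in the deadlock example above; the full
conversion table for intent modes is a lattice supremum rather than a
linear max:

    STRENGTH = {"S": 1, "U": 2, "X": 3}

    def convert(held, requested):
        # Lock conversion: the granted mode is the stronger of the two.
        return held if STRENGTH[held] >= STRENGTH[requested] else requested

    print(convert("S", "U"))   # 'U' - reader upgrades to update mode
    print(convert("X", "S"))   # 'X' - already exclusive, stays exclusive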

55
Key Range Locking (for Phantoms)
  • Operations
  • Read unique(x)  /* return value associated with
    key x */
  • Read next(x)  /* return value associated with the
    first key value following x */
  • Insert(x, v)  /* associate value v with key
    value x */
  • Delete(x)  /* delete the value associated with key
    value x */
  • Insert between X and Y
  • must test that no one else cares that the range
    (X, Y) was empty but is now full, i.e., no other
    concurrent transaction did a Read Next(X).

56
Key Range Locking (static ranges)
  • Static ranges: [A,B), [B,C), ..., [Z,∞)
  • Insert(x, v) and Delete(x) must get an exclusive
    lock on the range x falls in.
  • Read(x) gets a shared lock on the range x falls in
  • Read Next(x) gets shared locks on all key ranges
    between that of x and the range of the next key
    value (the value returned).
  • Lock the first element in a range as a surrogate
    for the range (a sketch follows below).
  • Must get any necessary intention locks as well.
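
A minimal sketch (not from the text) of mapping a key to its static
range and to the surrogate lock name (the first element of the range);
single-letter range boundaries are an assumption for illustration:

    import bisect

    RANGE_STARTS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # 'A'..'Z'

    def range_surrogate(key):
        # Return the start key of the static range [X, next) containing key.
        # Assumes key >= "A"; real systems handle the full key space.
        i = bisect.bisect_right(RANGE_STARTS, key) - 1
        return RANGE_STARTS[i]

    print(range_surrogate("BANANA"))  # 'B' -> lock range [B, C)
    print(range_surrogate("ZEBRA"))   # 'Z' -> lock range [Z, infinity)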

57
Key Range Locking (dynamic ranges)
  • Dynamic Ranges
  • Use the actual key values in the relation to
    construct key ranges.
  • Insertions and deletions will change the set of
    ranges.
  • Example - Relation contains A, C, T, W, X.
  • Ranges are [A, C), [C, T), [T, W), [W, X), [X, ∞)
  • Insertion of U will split range [T, W) into
    [T, U) and [U, W).
  • Locking Protocol
  • Read unique(x)
  • If found, need to protect it from deletion (how?)
  • If not found, need to protect against its later
    insertion (how?)
  • Read next(x)
  • Assume that the transaction holds a lock on x
    already and the next key value is y.
  • Need to protect against an insertion between x
    and y (how?)
  • Need to protect y against deletion (wait until we
    consider delete)

58
Key Range Locking (dynamic ranges)
  • Insert(x)
  • Assume that F is being inserted into A, C, T, W,
    X.
  • Will transform [C, T) into [C, F) and [F, T).
  • Get an exclusive lock on [C, T) to ensure that no
    other transaction has it locked.
  • Get an exclusive lock on [F, T). Can now drop the
    lock on [C, T) but hold the lock on [F, T) until
    commit.
  • Delete(x)
  • Assume that F is being deleted from A, C, F, T,
    W, X
  • Will merge [C, F) and [F, T) into [C, T).
  • Get an exclusive lock on [F, T) and then one on
    [C, F). Why is this order necessary?
  • When can these locks be released?
  • Comments
  • After a lock wait, may need to revalidate the key
    range to make sure it has not changed in the
    meantime.
  • How do we know this protocol ensures isolation?

59
DAG Locking
  • In general predicate locks and key-range locks
    form a DAG not just a tree
  • a lock can have many parents.
  • Blue-eye key range
  • Blonde-hair key range.
  • Hierarchical locks work for this.
  • Read locks any path
  • Writes lock all paths.

60
Discussion
  • How can we avoid phantom problem in project if we
    do page locking?
  • Which pages must be locked (until end of
    transaction)?
  • Data dictionary pages?
  • Hash bucket pages?
  • Pages in relation? Which ones?
  • How can we avoid phantoms with tuple level
    locking?
  • Relations with an index?
  • Those without an index?

61
Phantom Solutions for Hash Index
  • How can we synchronize search in hash file?
  • 1. Lock header block. Get exclusive lock for
    insertion or deletion, shared locks for
    searching.
  • This solution suffers from low concurrency
  • 2. Acquire shared or exclusive locks on hash
    bucket.
  • This solution is much better.

62
Phantom Solutions for Hash Indexes
  • 3. An Intention Locking Approach
  • Search for key K
  • Acquire a shared semaphore on the appropriate hash
    bucket.
  • Acquire a shared lock on K' where K' is the
    largest key in the hash bucket less than or equal
    to K.
  • Release the semaphore.
  • Insertion
  • Get an exclusive semaphore on the appropriate hash
    bucket.
  • Acquire an IX lock on K' where K' is the largest
    key in the hash bucket less than or equal to K.
  • Acquire an exclusive lock on K.
  • Release the intention lock.
  • Release the semaphore.

63
An Intention Locking Approach
  • Deletion
  • Get exclusive semaphore on appropriate hash
    bucket.
  • Get an IX lock on K.
  • Release semaphore.
  • Discussion of approach 3.
  • Is concurrency better than approach 2?
  • Does it solve the phantom problem?
  • Can the protocol be improved (i.e., fewer or less
    restrictive locks)?

64
Locking API
  • lock(name, - name of resource
  • mode, - S, X, SIX, IX, IS, U
  • duration - instant, short, long
  • wait) - no, timeout, yes
  • unlock(name, - name of resource
  • clear) - decrement count to zero or
    not.
  • Locks must count
  • if lock twice and unlock once, lock kept

65
Requirements
  • Lock and unlock operations must be atomic
  • Lock manager must be fast for common case (i.e.,
    lock can be acquired immediately)
  • Must keep lock table small
  • Lock table must allow high concurrency

66
Approach
  • Lock table implemented as hash table
  • Hash on the lock name
  • One semaphore per hash chain
  • Locks not held by any transaction are removed
    from lock table.
  • Use efficient methods to allocate and deallocate
    lock table entries
  • Pool of preallocated lock table entries
  • These operations will be common because lock
    conflicts are rare.

67
Lock Table Implementation
68
Lock Table Implementation
69
LOCK Control Flow
  • Hash the name; search the lock table
  • Not Found (lock is free)
  • Construct lock header and lock request
  • Add to lock table and exit
  • Lock Already Granted To Requestor?
  • Yes (conversion case)
  • Requested Mode Compatible With Other Granted
    requests?
  • Yes
  • Grant, Increment Count, Exit
  • No
  • Increment Count, Set convert mode, Wait
  • No (new request case)
  • Allocate lock request and insert at end of queue.
  • Anyone Waiting?
  • Yes
  • Mark lock request waiting, Wait
  • No
  • Compatible With Grantees?
  • Yes - Then Grant

70
UNLOCK Control Flow
  • Hash the name; search the lock table (a simplified
    lock-table sketch follows after this slide)
  • Find the lock request in the queue
  • Decrement Count
  • If Count > 0 then
  • Exit - lock remains held
  • Remove lock request from queue
  • If Queue Empty then
  • Deallocate Lock Header and exit
  • For Each Waiting Conversion
  • If Compatible With Granted Group then
  • Mark lock request granted Wakeup
  • If No Conversions Waiting Then
  • For Each Waiter (in FIFO order)
  • If Compatible With Granted then
  • Mark Lock Request granted Wakeup
  • Else
  • Exit
  • Exit
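
Below is a compact sketch (not the book's code) of the lock-table idea
from the last few slides: a hash table keyed by resource name, one
request entry per transaction with a hold count, and an unlock that only
releases the lock when the count reaches zero. Waiting, conversion, and
the per-chain semaphores are omitted to keep it short:

    from collections import defaultdict

    class LockTable:
        def __init__(self):
            self.table = defaultdict(dict)   # lock name -> {txn: [mode, count]}

        def lock(self, txn, name, mode):
            holders = self.table[name]
            if txn in holders:               # already held: just count it
                holders[txn][1] += 1
                holders[txn][0] = max(holders[txn][0], mode)   # 'S' < 'X'
                return True
            if any(m == "X" or mode == "X" for m, _ in holders.values()):
                return False                 # conflict: caller must wait
            holders[txn] = [mode, 1]
            return True

        def unlock(self, txn, name, clear=False):
            holders = self.table[name]
            entry = holders[txn]
            entry[1] = 0 if clear else entry[1] - 1
            if entry[1] > 0:
                return                       # lock remains held
            del holders[txn]
            if not holders:
                del self.table[name]         # free the lock header

    lt = LockTable()
    lt.lock("T1", "page 7", "S"); lt.lock("T1", "page 7", "S")
    lt.unlock("T1", "page 7")                # count 2 -> 1, still held
    print("page 7" in lt.table)              # True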

71
Blocking Transactions
  • One approach
  • Transaction must be blocked if a conflicting lock
    is held by another transaction.
  • Assume that a lock request contains a condition
    variable. To block the transaction, wait on that
    condition variable (a sketch follows below).
  • Before blocking the transaction, register a wakeup
    call. An alarm thread wakes up occasionally and
    unblocks threads that have registered wakeups.
  • When a lock is released, pick one of the pending
    requests, grant the lock and unblock the waiting
    thread (i.e., signal the condition variable in the
    lock request).
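
A minimal sketch of the block-with-wakeup idea (assumptions: Python
threading primitives and a made-up LockRequest type): the waiter sleeps
on a condition variable with a timeout, and the releaser grants the lock
and signals it.

    import threading

    class LockRequest:
        def __init__(self):
            self.granted = False
            self.cond = threading.Condition()   # one condition per request

    def wait_for_grant(req, timeout_s=5.0):
        # Block the requesting transaction until the lock is granted or
        # the registered wakeup (timeout) fires.
        with req.cond:
            if not req.granted:
                req.cond.wait(timeout=timeout_s)
            return req.granted                  # False -> lock_timeout response

    def grant(req):
        # Called on lock release: grant the lock and unblock the waiter.
        with req.cond:
            req.granted = True
            req.cond.notify()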

72
Blocking Transactions (Cont.)
  • When a transaction becomes unblocked, it must
    check to see if the lock was granted. If not,
    remove lock request node and return lock_timeout
    response.
  • When a transaction receives a lock_timeout
    response, it must clean up and then return an
    error response to the client. Clean-up includes
  • Free allocated memory
  • Unfix buffers

73
Locking Performance
  • Goal- Predict the effect of various design
    decisions on throughput and response time of
    transactions
  • Assumptions
  • Data items are accessed with uniform probability
  • All locks are exclusive
  • Strict 2PL
  • Performance objective-
  • Provide maximum throughput, while keeping
    response time below t seconds for 90% of all
    transactions (TPC benchmark)

74
Data Contention and Thrashing
  • Resource Contention
  • Resources- memory, CPU time, or I/O channels
  • When resources become overcommitted, the system
    becomes less productive, spending more resources
    on unproductive work, e.g., page faults.
  • Data Contention
  • Transactions contend for locks causing
  • Blocking
  • Restarts (deadlock)

75
Data Contention and Throughput
  • Thrashing
  • Throughput: the rate at which transactions
    complete
  • MPL: the number of active transactions
    (multiprogramming level)
  • Initially, increasing the MPL increases
    throughput.
  • At the point of thrashing, further increases in
    MPL reduce throughput.

76
Data Contention Thrashing
  • DC-Thrashing
  • Blocking is main cause of DC-Thrashing
  • Restart rate is low at the onset of thrashing (1-2%)
  • After onset of DC-Thrashing, adding one
    transaction blocks more than one transaction
  • Very little blocking is necessary to cause
    DC-Thrashing. At onset
  • Average length of lock queues can be less than
    one
  • Most deadlock cycles will have only two
    transactions
  • If half the transactions are blocked, there is a
    good chance of DC-Thrashing.

77
A Simple Model for Blocking
  • Based on Section 7.11.5 of text.
  • Definitions
  • N = MPL of the system
  • k = number of locks acquired by each transaction
  • D = the number of data elements in the database
  • Blocking probability
  • Each transaction holds approximately k/2 locks.
  • At any given time, the number of locks held by
    other transactions is approximately (N - 1) * k / 2.

78
A Simple Model for Blocking
  • The probability a single request is blocked is
  • PW ≈ (N - 1) * k / (2 * D)
  • The probability a transaction is blocked at least
    once is PW(T) ≈ k * PW = k² * (N - 1) / (2 * D)
  • DC-Workload
  • Defined as roughly k² * N / D (the quantity that
    drives PW(T))
  • DC-Thrashing begins roughly when PW(T) ≈ 0.75

79
Deadlock
  • Probability of Deadlock
  • Cycle of length 2 - transactions T1 and T2
  • Probability that T1 waits for some transaction is
    PW(T). Call the transaction it waits for T2.
  • Probability that T2 waits for the transaction T1
    is PW(T)/(N - 1), i.e., 1/(N - 1) times the
    probability it waits for some transaction
  • Probability a transaction participates in a cycle
    of length 2 is therefore about PW(T)² / (N - 1)
  • Probability a transaction participates in a cycle
    of length 3 is proportional to PW(T)³

80
Deadlock (Cont.)
  • For a low probability of waiting, cycles of length
    3 or more can be ignored.
  • Probability any transaction deadlocks is
    approximately PW(T)² / (N - 1) (a small
    calculator sketch follows below).
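
Putting the model together, a small sketch (not from the text) that
evaluates the approximations above for a given MPL N, locks per
transaction k, and database size D:

    def contention(N, k, D):
        pw_request = (N - 1) * k / (2 * D)  # P(a single lock request blocks)
        pw_txn = k * pw_request             # P(a transaction waits at least once)
        p_deadlock = pw_txn ** 2 / (N - 1)  # P(two-transaction deadlock cycle)
        return pw_request, pw_txn, p_deadlock

    # Example: 100 concurrent transactions, 20 locks each, 1,000,000 items.
    print(contention(N=100, k=20, D=1_000_000))
    # (~0.00099, ~0.0198, ~0.0000040) -- blocking and deadlocks are rare here.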

81
Granularity
  • The effect of granularity
  • The number of locks acquired by a transaction, k,
    is a function of D.
  • As D increases so does k until the granularity of
    an access matches that of the locks.
  • If a transaction accesses a few tuples in a large
    database, moving from page level granularity to
    tuple level granularity (D increases) will not
    increase k significantly.
  • To reduce granularity to the bit level will
    increase D and also increase k in the same
    proportion.

82
Granularity (Cont.)
  • Example: Assume that k = αD in some region (locks
    grow in proportion to D).
  • PW(T) ≈ (N - 1) * α² * D / 2
  • Thus, as D increases (granularity is made finer),
    so does the probability of waiting. (Throughput
    decreases.)
  • Example: Assume that k is constant in some region.
  • PW(T) ≈ k² * (N - 1) / (2 * D)
  • Thus, as D increases (granularity is made finer),
    the probability of waiting decreases. (Throughput
    increases.)
  • At some point, we may see a downturn in
    throughput for further increases in D due to
    increased resource contention.