1
Generalized Privacy Amplification
  • Charles H. Bennett, Gilles Brassard, Claude
    Crépeau

2

[Diagram: Alice and Bob communicate over a channel that Eve can listen in on]
  • Alice and Bob want to be able to communicate
    securely over an insecure channel.
  • Eve is an eavesdropper who can perfectly receive
    all messages sent between Alice and Bob, but
    cannot alter them without being noticed.
  • In order for Alice and Bob to communicate
    securely, they need to be able to generate a
    shared secret key that Eve has almost no
    information about.

3
Overview
  • If Alice, Bob and Eve have correlated random
    variables X, Y and Z respectively, it is possible
    for Alice and Bob to generate a shared secret
    key.

[Diagram: Alice holds X, Bob holds Y, Eve holds Z]
4
Overview
  • Surprisingly, this is possible even if Z is more
    strongly correlated with X and Y than X is
    correlated with Y.
5
Overview
  • Obtaining a shared secret key has three phases:

6
Overview
  • Phase 1: Advantage Distillation
Alice and Bob communicate information about their
random variables.
7
Overview
The public messages exchanged in this phase are
summarized as C; since the channel is insecure,
Eve learns C as well.
8
Overview
Alice then uses X and C to construct a string W,
such that (Y, C) is more strongly correlated with
W than (Z, C) is.
9
Overview
  • Phase 2: Information Reconciliation
Alice and Bob exchange just enough redundant,
error-correcting information that Bob can
reconstruct W with very high probability, while
Eve is still left with incomplete information
about W.
10
Overview
  • Phase 3: Privacy Amplification
Alice and Bob distill a shorter key K from W, such
that Eve knows practically nothing about K.
11
Overview
  • This paper is concerned only with the last
    phase, privacy amplification.

12
Definition of Entropy
  • Entropy is the amount of randomness or
    uncertainty associated with an object. Objects
    with higher entropies are harder to predict or
    guess.
  • We are concerned with two measures of entropy:
  • Rényi entropy (of order two)
  • Shannon entropy

13
Definition of Entropy
  • Let X be a random variable with alphabet A and
    probability distribution PX. The Rényi entropy
    of X, written R(X), is defined as
  • R(X) = -log2( Σa∈A PX(a)^2 ) = -log2( E[PX(X)] )
  • The Shannon entropy of X, written H(X), is
    defined as
  • H(X) = -E[ log2( PX(X) ) ]
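The two definitions above are easy to check numerically. A minimal sketch in Python (the three-symbol distribution is a made-up example, not from the slides):

```python
import math

def renyi_entropy(p):
    """Rényi entropy of order two: -log2 of the collision probability."""
    return -math.log2(sum(q * q for q in p))

def shannon_entropy(p):
    """Shannon entropy: the expected value of -log2 PX(X)."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# A biased three-symbol distribution (hypothetical values).
p = [0.5, 0.25, 0.25]
print(renyi_entropy(p))    # -log2(0.25 + 0.0625 + 0.0625) ≈ 1.415
print(shannon_entropy(p))  # 0.5*1 + 0.25*2 + 0.25*2 = 1.5
```

Note that for this biased distribution R(X) < H(X), while for a uniform distribution the two coincide, matching the comparison on the next slide.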
14
Rényi entropy vs. Shannon entropy
  • For comparison:
  • R(X) = -log2( E[PX(X)] )
  • H(X) = -E[ log2( PX(X) ) ]
  • It follows from Jensen's inequality that
  • R(X) ≤ H(X)
  • R(X) and H(X) are equal if and only if X has a
    uniform distribution.

15
Meaning of Rényi Entropy
  • What does -log2( E[PX(X)] ) mean?
  • E[PX(X)] (or PC(X) for short) is called the
    collision probability of X. It's the probability
    that X will take on the same value in two
    independent trials.
  • Notice that 0 ≤ PC(X) ≤ 1, and that the more
    random X is, the lower PC(X) is. Because of this,
    -log2( PC(X) ) ranges from 0 to ∞, and is higher
    the more random X is.
  • If X is completely determined, PC(X) = 1, so
    R(X) = 0.
  • If X is a continuous random variable (in other
    words, there's no chance of a collision
    occurring), then PC(X) = 0, so R(X) = ∞.
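As a sanity check on the collision-probability interpretation (my illustration, with a hypothetical distribution), PC(X) can also be estimated by drawing X twice many times and counting matches:

```python
import random

# PC(X) = E[PX(X)] = sum of PX(a)^2; estimate it by sampling X twice
# per trial and counting how often the two draws collide.
probs = {'a': 0.5, 'b': 0.25, 'c': 0.25}   # hypothetical distribution
values, weights = list(probs), list(probs.values())

trials = 100_000
collisions = sum(
    random.choices(values, weights)[0] == random.choices(values, weights)[0]
    for _ in range(trials)
)
print(collisions / trials)                  # ≈ 0.375
print(sum(p * p for p in probs.values()))   # exactly 0.375
```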

16
Meaning of Rényi Entropy
  • Just what does it mean if R(X) = 5?
  • Notice that if Y is an n-bit random binary
    string with uniform distribution, then
    PC(Y) = 2^-n, so R(Y) = -log2( 2^-n ) = n.
  • In other words, if R(X) = 5, it means that X has
    the same chance of a collision as a uniformly
    random 5-bit binary string.
17
Meaning of Shannon Entropy
  • H(X) behaves very similarly to R(X) in many ways:
  • If X is determined, H(X) = 0.
  • If X has a continuous distribution, H(X) = ∞.
  • If X is an n-bit random binary string with
    uniform distribution, then H(X) = n.
  • In general, I think it's reasonable to think of
    R(X) or H(X) as a measure of how much information
    about X isn't known.

18
Definition of Universal Hash Functions
  • A class G of functions A → B is universal₂ (or
    just universal for short) if, for any distinct x1
    and x2 in A, the probability that g(x1) = g(x2)
    is at most 1/|B| when g is chosen at random from
    G with uniform distribution.
  • In other words, G is universal if, averaged over
    the choice of g, distinct inputs collide no more
    often than they would under a uniformly random
    function.
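One concrete universal class (my example; the slides do not commit to a particular construction) is the set of GF(2)-linear maps defined by random 0/1 matrices: for any fixed pair of distinct inputs, a randomly chosen map collides with probability exactly 2^-r = 1/|B|. A sketch:

```python
import random

def random_linear_hash(n_bits, r_bits):
    """Sample g uniformly from the class of GF(2)-linear maps from
    n-bit strings to r-bit strings (a standard universal class):
    output bit i is the parity of (rows[i] AND x)."""
    rows = [random.getrandbits(n_bits) for _ in range(r_bits)]
    def g(x):
        return sum((bin(rows[i] & x).count("1") & 1) << i
                   for i in range(r_bits))
    return g

# For a fixed pair x1 != x2, the collision rate over the random
# choice of g should be about 1/|B| = 2^-r.
n, r = 16, 4
x1, x2 = 0x5555, 0xAAAA
trials = 20_000
hits = 0
for _ in range(trials):
    g = random_linear_hash(n, r)
    hits += g(x1) == g(x2)
print(hits / trials)   # ≈ 2^-4 = 0.0625
```

A collision g(x1) = g(x2) happens exactly when the linear map sends x1 XOR x2 (a fixed nonzero string) to zero, which a uniformly random 0/1 matrix does with probability 2^-r.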

19
Review of Goal
  • Remember that our goal was to find some way to
    transform a W about which Eve knows partial
    information into a K about which she knows
    negligible information.

[Diagram: Alice and Bob both hold W; Eve has partial information about it]
20
Review of Goal
[Diagram: Alice and Bob now share a key K; Eve knows negligible information about K]
21
Universal Hash Functions and Secret Key
Distillation
  • Universal hash function classes can be used to
    distill K from W.
  • Let W be an n-bit binary string, and assume Eve
    knows t bits of information about that string,
    represented by V. Assume t < n.
  • Let g be a randomly chosen element of a universal
    class of hash functions that maps n-bit strings
    to r-bit strings, and let K be g(W).
  • This paper shows that I(K; GV) ≤ 2^(r-(n-t)) / ln 2
  • In other words, this is roughly saying that if
    the length of K is less than the number of bits
    of W that Eve doesn't know, then V tells Eve
    practically nothing about K.
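The distillation step on this slide can be sketched end to end. The hash family (random GF(2)-linear maps) and the parameter values n, t, r below are my assumptions for illustration; the slide only requires some universal class and r < n - t:

```python
import math
import random

def universal_hash(rows, x):
    """GF(2)-linear universal hash: output bit i is the parity of rows[i] & x."""
    return sum((bin(rows[i] & x).count("1") & 1) << i
               for i in range(len(rows)))

n = 32   # length of the reconciled string W (hypothetical)
t = 20   # bits of information Eve is assumed to hold about W (hypothetical)
r = 8    # key length, chosen so that r < n - t

W = random.getrandbits(n)                         # shared by Alice and Bob
rows = [random.getrandbits(n) for _ in range(r)]  # g, announced publicly
K_alice = universal_hash(rows, W)
K_bob = universal_hash(rows, W)   # Bob applies the same public g to his copy of W
assert K_alice == K_bob

# The paper's bound on Eve's information about K:
print(2 ** (r - (n - t)) / math.log(2))   # 2^-4 / ln 2 ≈ 0.09 bit
```

Even though g itself is public, the bound says Eve's expected information about the 8-bit key K is under a tenth of a bit.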
22
Proofs
  • To show the previous statement, the paper proves
    Theorem 3 and Corollaries 4 and 5.
  • Theorem 3: Given X with Rényi entropy R(X) and a
    universal hash function class G mapping n-bit
    strings to r-bit strings,
  • H(G(X) | G) ≥ R(G(X) | G)
    ≥ r - log2( 1 + 2^(r-R(X)) )
    ≥ r - 2^(r-R(X)) / ln 2
  • My interpretation is that X represents the
    possible values that W could have, based on the
    knowledge obtained from V. If we let K = G(X),
    this theorem is saying that we can make Eve's
    lack of knowledge about K arbitrarily close to r
    bits by increasing her uncertainty about X.
  • Corollary 4 is similar to Theorem 3, except its
    definition is changed to better match the above
    description.
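The two lower bounds in Theorem 3 can be compared numerically; the parameter values below are my own, chosen to show how both bounds approach r as Eve's Rényi entropy R(X) grows:

```python
import math

def exact_bound(r, R):
    """Theorem 3's tighter lower bound on H(G(X)|G): r - log2(1 + 2^(r-R))."""
    return r - math.log2(1 + 2 ** (r - R))

def weaker_bound(r, R):
    """The looser but simpler form: r - 2^(r-R) / ln 2."""
    return r - 2 ** (r - R) / math.log(2)

# As R(X) grows past the output length r, both bounds approach r,
# i.e. the r-bit key looks almost uniformly random to Eve.
r = 64
for R in (66, 80, 128):
    print(R, exact_bound(r, R), weaker_bound(r, R))
```

Since log2(1 + x) ≤ x / ln 2, the second bound is always at most the first, which is why the simpler exponential form is the one quoted on the previous slide.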