Title: Robust Declassification
1. Robust Declassification
- Steve Zdancewic
- Andrew Myers
- Cornell University
2. Information Flow Security
Information flow policies are a natural way to specify precise, system-wide, multi-level security requirements.
Enforcement mechanisms are often too restrictive to permit desired policies. Information flow controls therefore provide declassification mechanisms to accommodate intentional leaks.
But declassification makes end-to-end system behavior hard to understand.
3. Declassification
Declassification (downgrading) is the intentional
release of confidential information.
Policy governs use of declassification operation.
4. Password Example
[Diagram: a Password Checker receives a public query and a confidential password and produces a public result.]
This system includes declassification.
5. Attack: Copy Data into the Password
[Diagram: the attacker copies high-security data into the password slot; the Password Checker compares the public query against it and produces a leaked result.]
The attacker can launder data through the password checker, as sketched below.
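To make the laundering concrete, here is a minimal Python sketch (my own illustration, not from the talk); the names SECRET_PASSWORD, HIGH_SECURITY_DATA, and check are hypothetical, and the single declassified bit per query is the result of check.

```python
# Confidential values; HIGH_SECURITY_DATA is never meant to be released.
SECRET_PASSWORD = "hunter2"
HIGH_SECURITY_DATA = "attack at dawn"

def check(query, password):
    # The boolean result is declassified: it flows to the public observer.
    return query == password

# Intended use: each query releases only whether it matched the password.
print(check("guess", SECRET_PASSWORD))            # False

# Attack: copy other confidential data into the password slot, then use
# the declassified result to probe that data one query at a time.
laundered = HIGH_SECURITY_DATA
for guess in ["retreat", "attack at dawn", "hold position"]:
    if check(guess, laundered):
        print("leaked:", guess)   # confidential data inferred via declassification
```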
6. Robust Declassification
Goal: Formalize the intuition that an
attacker should not be able to abuse the
downgrade mechanisms provided by the system to
cause more information to be declassified than
intended.
7. How to Proceed?
- Characterize what information is declassified.
- Make a distinction between intentional and unintentional information flow.
- Explore some of the consequences of robust declassification.
8. A Simple System Model
A system S = (S, ↦) is a set of states S together with a transition relation ↦ ⊆ S × S.
9. Views of a System
A view of (S, ↦) is an equivalence relation on S.
Example: S = String × Integer, "integer component is visible":
(x, i) ≈_I (y, j)  iff  i = j
("attack at dawn", 3) ≈_I ("retreat", 3)
("attack at dawn", 3) ≉_I ("retreat", 4)
10. Example Views
Example: S = String × Integer
"string component is visible":  (x, i) ≈_S (y, j)  iff  x = y
"integer is even or odd":       (x, i) ≈_E (y, j)  iff  i mod 2 = j mod 2
"complete view":                (x, i) ≈_⊤ (y, j)  iff  (x, i) = (y, j)
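The following is a small Python sketch (my own illustration, not from the slides) of these example views as two-argument predicates over states in String × Integer; the names view_I, view_S, view_E, and view_top are hypothetical.

```python
# Each view is an equivalence relation, written here as a predicate that
# returns True exactly when the two states are related.

def view_I(a, b):    # "integer component is visible"
    return a[1] == b[1]

def view_S(a, b):    # "string component is visible"
    return a[0] == b[0]

def view_E(a, b):    # "integer is even or odd": only the parity is visible
    return a[1] % 2 == b[1] % 2

def view_top(a, b):  # "complete view": the whole state is visible
    return a == b

print(view_I(("attack at dawn", 3), ("retreat", 3)))  # True
print(view_I(("attack at dawn", 3), ("retreat", 4)))  # False
print(view_E(("attack at dawn", 3), ("retreat", 5)))  # True: both odd
```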
11. Passive Observers
A view induces an observation of a trace:
t1 = ("x", 1) ↦ ("y", 1) ↦ ("z", 2) ↦ ("z", 3)
t1 through view ≈_I:  1 ↦ 1 ↦ 2 ↦ 3
t2 = ("a", 1) ↦ ("b", 2) ↦ ("z", 2) ↦ ("c", 3)
t2 through view ≈_I:  1 ↦ 2 ↦ 2 ↦ 3
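A hedged Python sketch of this projection (my own illustration): each state is mapped to its visible label under the view, and adjacent repeats, i.e. stutter steps, are dropped, matching the stutter-equivalence step used later in the deck. The helper observe and the label see_int are hypothetical names.

```python
def observe(trace, label):
    """Visible label of each state, with adjacent repeats (stutter) dropped."""
    obs = []
    for state in trace:
        l = label(state)
        if not obs or obs[-1] != l:
            obs.append(l)
    return obs

t1 = [("x", 1), ("y", 1), ("z", 2), ("z", 3)]
t2 = [("a", 1), ("b", 2), ("z", 2), ("c", 3)]

see_int = lambda s: s[1]          # the view that reveals the integer component
print(observe(t1, see_int))       # [1, 2, 3]
print(observe(t2, see_int))       # [1, 2, 3] -- the two traces look the same
```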
12. Observational Equivalence
The induced observational equivalence is S[≈]:
s S[≈] s' if the traces from s look the same as the traces from s' through the view ≈.
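A rough Python sketch of this definition for a small finite, acyclic system (an assumption I make so the trace sets stay finite): two states are related by the induced equivalence when their traces yield the same set of stutter-collapsed observations. All identifiers are hypothetical.

```python
def maximal_traces(step, s):
    """All traces from s in a finite acyclic system (states with no successor stop)."""
    succs = step.get(s, [])
    return [[s]] if not succs else [[s] + t for n in succs for t in maximal_traces(step, n)]

def observe(trace, label):
    """Visible labels along the trace, with stutter steps collapsed."""
    obs = []
    for st in trace:
        if not obs or obs[-1] != label(st):
            obs.append(label(st))
    return tuple(obs)

def obs_equiv(step, s1, s2, label):
    """s1 S[view] s2: the traces from s1 and s2 look the same through the view."""
    obs = lambda s: {observe(t, label) for t in maximal_traces(step, s)}
    return obs(s1) == obs(s2)

# Hypothetical system over (string, int) states; the view sees the int only.
step = {("a", 1): [("b", 1)], ("b", 1): [("c", 2)], ("x", 1): [("x", 2)]}
print(obs_equiv(step, ("a", 1), ("x", 1), lambda s: s[1]))  # True: both observe (1, 2)
```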
13. Simple Example
[Diagram: a view ≈ of a small example system.]
14. Simple Example
[Diagram: the induced observational equivalence S[≈] of the same system.]
15. Why Did We Do This?
- ≈ is a view of S indicating what an observer sees directly.
- S[≈] is a view of S indicating what an observer learns by watching S evolve.
Need some way to compare them.
16. An Information Lattice
The views of a system S form a lattice.
Order: ≈_A ⊑ ≈_B when ≈_B reveals at least as much information as ≈_A (as relations, ≈_B ⊆ ≈_A).
17. Lattice Order
≈_⊤ = ≈_I ⊔ ≈_S
[Hasse diagram of the example views: ≈_⊤ above ≈_I and ≈_S, with ≈_E below ≈_I.]
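A minimal Python sketch of this lattice on the example views, assuming views over a finite state space are represented as partitions, with finer partitions higher in the information order; the join is the common refinement, illustrating ≈_⊤ = ≈_I ⊔ ≈_S. The function names are hypothetical.

```python
from itertools import product

# All states of the toy space (two strings x two integers).
states = list(product(["attack at dawn", "retreat"], [3, 4]))

def partition(label):
    """The view given by a labelling function: group states by label."""
    blocks = {}
    for s in states:
        blocks.setdefault(label(s), set()).add(s)
    return {frozenset(b) for b in blocks.values()}

def leq(p, q):
    """p ⊑ q: q reveals at least as much, i.e. every block of q lies inside a block of p."""
    return all(any(bq <= bp for bp in p) for bq in q)

def join(label1, label2):
    """Join of two views: the common refinement (observe both labels at once)."""
    return partition(lambda s: (label1(s), label2(s)))

view_I   = partition(lambda s: s[1])   # integer visible
view_S   = partition(lambda s: s[0])   # string visible
view_top = partition(lambda s: s)      # complete view

print(leq(view_I, view_top), leq(view_S, view_top))          # True True
print(join(lambda s: s[1], lambda s: s[0]) == view_top)      # True
```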
18. Information Learned via Observation
Lemma: ≈ ⊑ S[≈]
[Lattice diagram: the induced equivalence S[≈] sits at or above the view ≈.]
19. Natural Security Condition
Definition: A system S is ≈-secure whenever ≈ = S[≈].
Closely related to standard definitions of non-interference style properties.
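A Python sketch of checking this condition on tiny hand-made systems, assuming finite, acyclic transition relations so trace sets are finite. Since, by the lemma on slide 18, observation can only refine the view, it suffices to check that view-equivalent states have equal observation sets. The systems leaky and quiet are hypothetical examples of my own.

```python
def maximal_traces(step, s):
    succs = step.get(s, [])
    return [[s]] if not succs else [[s] + t for n in succs for t in maximal_traces(step, n)]

def observe(trace, label):
    obs = []
    for st in trace:
        if not obs or obs[-1] != label(st):
            obs.append(label(st))
    return tuple(obs)

def obs_set(step, s, label):
    return {observe(t, label) for t in maximal_traces(step, s)}

def secure(step, states, label):
    """view = S[view]: states with the same visible label must
    produce the same set of observations."""
    return all(obs_set(step, a, label) == obs_set(step, b, label)
               for a in states for b in states if label(a) == label(b))

# Hypothetical systems: the low label is the int; the string is secret.
leaky = {("s1", 0): [("s1", 1)], ("s2", 0): [("s2", 2)]}   # branches on the secret
quiet = {("s1", 0): [("s1", 1)], ("s2", 0): [("s2", 1)]}

low = lambda s: s[1]
print(secure(leaky, [("s1", 0), ("s2", 0), ("s1", 1), ("s2", 2)], low))  # False
print(secure(quiet, [("s1", 0), ("s2", 0), ("s1", 1), ("s2", 1)], low))  # True
```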
20. Declassification
Declassification is the intentional leakage of information. It implies that ≈ ≠ S[≈].
We want to characterize unintentional declassification.
21. A Simple Attack Model
An ≈_A-attack is a system A = (S, ↦_A) such that ≈_A = A[≈_A].
- ≈_A is the attacker's view.
- ↦_A is a set of additional transitions introduced by the attacker.
- ≈_A = A[≈_A] means "fair environment": the attack itself relies only on information visible to the attacker.
22. Attacked Systems
Given a system S = (S, ↦) and an attack A = (S, ↦_A), the attacked system is S ∪ A = (S, ↦ ∪ ↦_A).
[Diagram: a state s with both the original and the attacker-added transitions.]
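A small Python sketch of forming the attacked system as the union of transition relations; the states and transitions below are hypothetical and only echo the password example.

```python
def union(step, step_attack):
    """Transitions of the attacked system: the original ones plus the attacker's."""
    return {s: step.get(s, []) + step_attack.get(s, [])
            for s in set(step) | set(step_attack)}

# Hypothetical fragment: states are (password slot, public output).
step   = {("pwd", "?"): [("pwd", "reject")]}        # the system's own behaviour
attack = {("pwd", "?"): [("secret-file", "?")]}     # attacker copies other data into the slot

attacked = union(step, attack)
print(attacked[("pwd", "?")])   # [('pwd', 'reject'), ('secret-file', '?')]
```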
23. More Intuition
S[≈_A] describes the information intentionally declassified by the system: a specification for how S ought to behave.
(S ∪ A)[≈_A] describes the information obtained by an attack A.
24. Example Attack
[Diagram: a small system viewed through the attacker's view ≈_A, with attack transitions ↦_A added.]
25. Example Attack
Attack transitions affect the system.
26. Example Attack
The attacked system may reveal more: (S ∪ A)[≈_A] can lie strictly above S[≈_A].
27. Robust Declassification
A system S is robust with respect to attack A whenever (S ∪ A)[≈_A] ⊑ S[≈_A].
[Lattice diagram: (S ∪ A)[≈_A] lies at or below S[≈_A].]
Intuition: the attack reveals no additional information.
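A rough Python sketch of checking this robustness condition on a tiny finite, acyclic example (my own construction, not the paper's). The system intentionally declassifies the contents of a "password slot"; the attack copies a second secret into that slot before the check, so the test reports that robustness fails. All names are hypothetical.

```python
def maximal_traces(step, s):
    succs = step.get(s, [])
    return [[s]] if not succs else [[s] + t for n in succs for t in maximal_traces(step, n)]

def observe(trace, label):
    obs = []
    for st in trace:
        if not obs or obs[-1] != label(st):
            obs.append(label(st))
    return tuple(obs)

def obs_set(step, s, label):
    return {observe(t, label) for t in maximal_traces(step, s)}

def union(step, step_attack):
    return {s: step.get(s, []) + step_attack.get(s, [])
            for s in set(step) | set(step_attack)}

def robust(step, step_attack, states, label):
    """(S ∪ A)[view_A] ⊑ S[view_A]: states the system keeps
    indistinguishable must stay indistinguishable under attack."""
    attacked = union(step, step_attack)
    def equiv(stp, s1, s2):
        return obs_set(stp, s1, label) == obs_set(stp, s2, label)
    return all(equiv(attacked, s1, s2)
               for s1 in states for s2 in states if equiv(step, s1, s2))

secrets = ["p", "q"]
# States are (password slot, other secret, low output).
# Intended behaviour: declassify the password slot into the low output.
step = {(a, b, "start"): [(a, b, a)] for a in secrets for b in secrets}
# Attack: copy the other secret into the password slot before the check.
attack = {(a, b, "start"): [(b, b, "start")] for a in secrets for b in secrets if a != b}

starts = list(step)
low = lambda s: s[2]                        # attacker sees only the low output
print(robust(step, attack, starts, low))    # False: the attack launders b
```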
28. Secure Systems are Robust
Theorem: If S is ≈_A-secure then S is robust with respect to all ≈_A-attacks.
Intuition: S doesn't leak any information to an ≈_A-observer, so there are no declassifications to exploit.
29. Characterizing Attacks
Given a system S and a view ≈_A, for what ≈_A-attacks is S robust?
Need to rule out attack transitions like this (an integrity property).
[Diagram of such an attack transition, shown against S[≈].]
30. Integrity of Policy
The attack circumvents or alters the policy decision made by the system.
31. Providing Robustness
- Identify a class of relevant attacks.
- Example: an attacker may modify or copy files.
- Example: an untrusted host may send bogus requests.
- Try to verify that the system is robust against that class of attacks.
- If the proof fails, the system is insecure.
- It is possible to provide enforcement mechanisms against certain classes of attacks.
- Protecting the integrity of the downgrade policy.
32. Future Work
This is a simple model; extend the definition of robustness to better models.
How does robustness fit in with intransitive non-interference?
We are putting the idea of robustness into practice in a distributed version of Jif.
33. Conclusions
It's critical to prevent downgrading mechanisms in a system from being abused.
Robustness is an attempt to capture this idea in a formal way.
It suggests that integrity and confidentiality are linked in systems with declassification.
34. Correction to Paper
A bug-fixed version of the paper is available at http://www.cs.cornell.edu/zdance
35. A Non-Attack
Uses information the attacker can't see!
[Diagram: an added transition that depends on state the attacker's view ≈_A cannot distinguish.]
36. Observational Equivalence
Are s and s' observationally equivalent with respect to ≈, i.e., does s S[≈] s' hold?
37. Observational Equivalence
Take the sets of traces starting at s and s': Trc_s(S) and Trc_s'(S).
38. Observational Equivalence
Look at the traces modulo ≈.
39. Observational Equivalence
Look at the traces modulo ≈.
40. Observational Equivalence
This gives the observation sets Obs_s(S, ≈) and Obs_s'(S, ≈).
41. Observational Equivalence
Are they stutter equivalent? Yes! So s S[≈] s'.