AEGIS: A Single-Chip Secure Processor - PowerPoint PPT Presentation

Author: SDTZ; last modified by Edward Suh. Created 7/16/2004. Slides: 89.

Transcript and Presenter's Notes
1
AEGIS: A Single-Chip Secure Processor
  • G. Edward Suh
  • Massachusetts Institute of Technology

2
New Security Challenges
  • Computing devices are becoming distributed,
    unsupervised, and physically exposed
  • Computers on the Internet (with untrusted owners)
  • Embedded devices (cars, home appliances)
  • Mobile devices (cell phones, PDAs, laptops)
  • Attackers can physically tamper with devices
  • Invasive probing
  • Non-invasive measurement
  • Install malicious software
  • Software-only protections are not enough

3
Distributed Computation
  • How can we trust remote computation?

DistComp():
  x = Receive()
  result = Func(x)
  Send(result)
  • Need a secure platform
  • Authenticate itself (device)
  • Authenticate software
  • Guarantee the integrity and privacy of
    execution

4
Existing Approaches
5
Our Approach
  • Build a secure platform with a single-chip
    processor (AEGIS) as the only trusted hardware
    component

[Figure: the AEGIS processor identifies the Security Kernel (the trusted part of an OS) and protects program state in off-chip memory]
  • A single chip is easier and cheaper to protect
  • The processor authenticates itself, identifies
    the security kernel, and protects program state
    in off-chip memory

6
Contributions
  • Integration of Physical Random Functions (PUFs)
  • Cheap and secure way to authenticate the
    processor
  • Architecture to minimize the trusted code base
  • Efficient use of protection mechanisms
  • Reduce the code to be verified
  • Off-chip protection mechanisms
  • Should be secure, yet efficient
  • No existing trusted platform provides proper
    off-chip protection
  • Integrity verification and encryption algorithms
  • A fully-functional RTL implementation
  • Area Estimate
  • Performance Measurement

7
Physical Unclonable Functions
8
Authenticating the Processor
  • Each processor should be unique (has a secret
    key) so that a user can authenticate the processor

Sign/MAC a message with the key → a server can authenticate the processor
Encrypt for the processor with the key → only the processor can decrypt
  • Challenge: embedding secrets in the processor in
    a way that is resistant to physical attacks

9
Problem
Storing digital information in a device in a way
that is resistant to physical attacks is
difficult and expensive.
  • Adversaries can physically extract secret keys
    from EEPROM while the processor is off
  • Trusted party must embed and test secret keys in
    a secure location
  • EEPROM adds additional complexity to
    manufacturing

10
Our Solution: Physical Random Functions (PUFs)
  • Generate keys from a complex physical system

[Figure: a challenge (c bits) configures a complex physical system that is hard to fully characterize or predict; measuring it yields a response (n bits) that is used as a secret. Many secrets can be generated by changing the challenge.]
  • Security Advantage
  • Keys are generated on demand → no non-volatile
    secrets
  • No need to program the secret
  • Can generate multiple master keys
  • What can be hard to predict, but easy to measure?

11
Silicon PUF (Concept by Gassend et al.)
  • Because of random process variations, no two
    integrated circuits, even with the same layouts,
    are identical
  • Variation is inherent in fabrication process
  • Hard to remove or predict
  • Relative variation increases as the fabrication
    process advances
  • Experiments in which identical circuits with
    identical layouts were placed on different ICs
    show that path delays vary enough across ICs to
    use them for identification.

[Figure: a combinational circuit maps a c-bit challenge to an n-bit response]
12
A (Simple) Silicon PUF [VLSI'04]
[Figure: arbiter PUF — each bit of the c-bit challenge configures a switch stage that either passes or crosses two racing signal paths; a rising edge excites both paths, and a latch (arbiter) at the end outputs 1 if the top path is faster, else 0]
  • Each challenge creates two paths through the
    circuit that are excited simultaneously. The
    digital response of 0 or 1 is based on a
    comparison of the path delays by the arbiter
  • We can obtain n-bit responses from this circuit
    by either duplicating the circuit n times or
    using n different challenges
  • Uses only standard digital logic → no special
    fabrication
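The race described above can be sketched in software. This is an illustrative Python model, not the fabricated circuit: the per-stage delays are drawn from a Gaussian to stand in for process variation, and the seed plays the role of a particular chip.

```python
import random

def make_puf(num_stages=64, seed=None):
    """Model one chip's arbiter PUF: each stage gets four random path
    delays, fixed at 'fabrication' time by process variation."""
    rng = random.Random(seed)
    # delays[i] = (straight_top, straight_bottom, cross_top, cross_bottom)
    return [tuple(rng.gauss(1.0, 0.05) for _ in range(4))
            for _ in range(num_stages)]

def evaluate(puf, challenge):
    """Race a rising edge along two paths; each challenge bit selects
    whether a stage passes the two signals straight or crosses them."""
    top = bottom = 0.0
    for bit, (st, sb, ct, cb) in zip(challenge, puf):
        if bit == 0:                  # straight: top stays on top
            top, bottom = top + st, bottom + sb
        else:                         # crossed: the two paths swap
            top, bottom = bottom + ct, top + cb
    return 1 if top < bottom else 0   # arbiter: 1 if top path is faster

# Two chips with "identical layouts" but different process variation:
chip_a = make_puf(seed=1)
chip_b = make_puf(seed=2)
rng = random.Random(7)
challenge = [rng.randint(0, 1) for _ in range(64)]
print(evaluate(chip_a, challenge), evaluate(chip_b, challenge))
```

Re-evaluating the same chip with the same challenge always gives the same bit, while different chips disagree on a large fraction of challenges — the inter-chip variation measured on the next slides.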

13
PUF Experiments
  • Fabricated 200 identical chips with PUFs in
    TSMC 0.18 μm technology on 5 different wafer runs
  • Security
  • What is the probability that a challenge produces
    different responses on two different PUFs?
  • Reliability
  • What is the probability that a PUF output for a
    challenge changes with temperature?
  • With voltage variation?

14
Inter-Chip Variation
  • Apply random challenges and observe 100 response
    bits

Can identify individual ICs
15
Environmental Variations
  • What happens if we change voltage and temperature?

Measurement noise at 125°C (baseline at 20°C): 3.5
bits
Even with environmental variation, we can still
distinguish two different PUFs
16
Reliable PUFs
PUFs can be made more secure and reliable by
adding extra control logic
[Figure: control logic around the PUF — the c-bit challenge is hashed before driving the PUF, and ECC corrects the n-bit response; one input path is used for calibration, the other for re-generation]
  • A hash function (SHA-1, MD5) precludes PUF
    model-building attacks since, to obtain the PUF
    output, an adversary has to invert a one-way function
  • Error Correcting Code (ECC) can eliminate the
    measurement noise without compromising security

17
Security Discussion
  • What does it take to find PUF secrets?
  • Open the chip, completely characterize PUF delays
  • Invasive attacks are likely to change delays
  • Read an on-chip register holding a digital secret
    from the PUF while the processor is powered up
  • A more difficult proposition than reading
    non-volatile memory
  • Still, this only reveals ONE secret from PUF
  • Use two keys with only one key present on-chip at
    any time
  • PUF is a cheap way of generating more secure
    secrets

18
Processor Architecture
19
Authentication
  • The processor identifies the security kernel by
    computing the kernel's hash (on the l.enter.aegis
    instruction)
  • Similar to ideas in TCG TPM and Microsoft NGSCB
  • Security kernel identifies application programs
  • H(SKernel) is included in a signature by the
    processor
  • Security kernel includes H(App) in the signature

[Figure: the processor computes H(SKernel) for the security kernel, which in turn computes H(App) for the application (DistComp); with H(SKernel) and H(App) in a signature, a server can authenticate the processor, the security kernel, and the application]
20
Protecting Program States
  • On-chip registers and caches
  • Security kernel handles context switches
  • Virtual memory permission checks in MMU
  • Off-chip memory
  • Memory Encryption [MICRO-36]
  • Counter-mode encryption
  • Integrity Verification [HPCA'03, MICRO-36, IEEE
    S&P '05]
  • Hash trees and Log-hash schemes
  • Swapped pages in secondary storage
  • Security kernel encrypts and verifies the
    integrity
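The counter-mode encryption mentioned above can be sketched as follows. This is a stdlib-only illustration: HMAC-SHA256 stands in for the AES engine of the real design, and the (address, counter) message layout is invented for the example. The key property of counter mode is that the pad depends only on the address and counter, so it can be computed while the off-chip memory access is still in flight.

```python
import hmac, hashlib

def keystream(key, address, counter, length):
    """Counter-mode pad: a PRF of (address, counter, block index).
    HMAC-SHA256 is a stand-in for the AES engine used on-chip."""
    out, block = b"", 0
    while len(out) < length:
        msg = (address.to_bytes(8, "big") + counter.to_bytes(8, "big")
               + block.to_bytes(4, "big"))
        out += hmac.new(key, msg, hashlib.sha256).digest()
        block += 1
    return out[:length]

def crypt_block(key, address, counter, data):
    """Encryption and decryption are the same XOR with the pad."""
    pad = keystream(key, address, counter, len(data))
    return bytes(d ^ p for d, p in zip(data, pad))

key = b"on-chip secret (e.g. derived from the PUF)"
ct = crypt_block(key, 0x1000, 5, b"cache line data")   # write: encrypt
pt = crypt_block(key, 0x1000, 5, ct)                   # read: decrypt
print(pt)
```

The per-address counter is what prevents pad reuse: re-encrypting the same cache line after a write bumps the counter, so identical plaintexts never produce identical ciphertexts.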

21
A Simple Protection Model
  • How should we apply the authentication and
    protection mechanisms?
  • What to protect?
  • All instructions and data
  • Both integrity and privacy
  • What to trust?
  • The entire program code
  • Any part of the code can read/write protected data

[Figure: memory space layout — Program Code (Instructions) and Initialized Data (.rodata, .bss) are hashed to form the program identity; Uninitialized Data (stack, heap) are encrypted and integrity verified]
22
What Is Wrong?
  • Large Trusted Code Base
  • Difficult to verify to be bug-free
  • How can we trust shared libraries?
  • Applications/functions have varying security
    requirements
  • Do all code and data need privacy?
  • Do I/O functions need to be protected?
  • Unnecessary performance and power overheads
  • Architecture should provide flexibility so that
    software can choose the minimum required trust
    and protection

23
Distributed Computation Example
  • If Func(x) is a private computation
  • Needs both privacy and integrity
  • Calling a signing system call to security kernel
  • Only needs integrity
  • Receiving the input and sending the result (I/O)
  • No need for protection
  • No need to be trusted

DistComp():
  x = Receive()
  result = Func(x)
  sig = sys_pksign(x, result)
  Send(result, sig)
24
AEGIS Memory Protection
  • Architecture provides five different memory
    regions
  • Applications choose how to use
  • Static (read-only)
  • Integrity verified
  • Integrity verified encrypted
  • Dynamic (read-write)
  • Integrity verified
  • Integrity verified encrypted
  • Unprotected
  • Only authenticate code in the verified regions







[Figure: example memory space — Receive()/Send() code and data in Unprotected regions; Func() data in Dynamic Encrypted; DistComp() data in Dynamic Verified; DistComp() and Func() code in Static Verified]
25
Suspended Secure Processing (SSP)
  • Two security levels within a process
  • Untrusted code such as Receive() and Send()
    should have less privilege
  • Architecture ensures that SSP mode cannot tamper
    with secure processing
  • No permission for protected memory
  • Only resume secure processing at a specific point

[Figure: mode transitions — start-up enters the standard (STD) insecure mode; entering the secure modes (TE/PTR) computes the program hash; Suspend switches to the insecure SSP mode, and Resume returns to secure processing at a specific point]
26
Memory Integrity Verification
27
Microsoft Xbox
  • Xbox is a Trusted PC Platform
  • All executables are digitally signed and verified
    before execution
  • Trusted (encrypted) bootloader/OS prevents:
  • Using copied or imported games
  • Online game cheating
  • Using Xbox as a stock PC

28
Inside Xbox: How Was It Broken?
  • Southbridge ASIC
  • Secure boot code
  • 128-bit secret key
  • Flash ROM
  • Encrypted Bootloader
  • Broken by tapping the bus
  • Read the 128-bit key
  • Cost: about $50
  • Observation
  • An adversary can easily read anything on the
    off-chip bus

From Andrew "bunnie" Huang's webpage
29
Off-Chip Memory Protection
[Figure: the processor encrypts/decrypts and integrity-verifies all writes to and reads from external memory]
  • Privacy Protection [MICRO-36]
  • Encrypt private information in off-chip memory
  • Counter-mode encryption
  • Integrity Verification [HPCA'03, MICRO-36, IEEE
    S&P '05]
  • Check if a value from external memory is the most
    recent value stored at the address by the
    processor

30
Previous Work (1): Message Authentication
Codes (MAC)
  • Hash Functions
  • Functions such as MD5 or SHA-1
  • Given an output, hard to find what input produced
    it
  • Also, hard to find two inputs with the same output

[Figure: a hash function keyed with a secret key maps an input to a MAC]
  • Message Authentication Code (MAC) can be seen as
    a keyed hash function
  • Only an entity with the correct key can generate
    a valid MAC
  • An entity with the correct key can verify the MAC

31
Previous Work (1): MAC-based Integrity
Verification Algorithm
[Figure: on a write, the processor stores the value together with its MAC — e.g. 120, MAC(0x45, 120) — in untrusted RAM; the MAC key stays on-chip and every read is verified]
  • Store MAC(address, value) on writes, and check
    the MAC on reads (used by the XOM architecture by
    David Lie)
  • Does NOT work → replay attacks
  • Need to securely remember the off-chip memory
    state
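The replay weakness is easy to demonstrate. In this sketch (HMAC-SHA256 stands in for the MAC; the address 0x45 and value 120 come from the figure), an adversary who snapshots an old (value, MAC) pair can restore it later, and the per-address MAC check still passes:

```python
import hmac, hashlib

def mac(key, address, value):
    msg = address.to_bytes(8, "big") + value.to_bytes(8, "big")
    return hmac.new(key, msg, hashlib.sha256).digest()

key = b"on-chip MAC key"
ram = {}                                   # untrusted off-chip memory

# Processor writes 120, later overwrites it with 7 -- both MACs are valid.
ram[0x45] = (120, mac(key, 0x45, 120))
old = ram[0x45]                            # adversary snapshots (value, MAC)
ram[0x45] = (7, mac(key, 0x45, 7))

ram[0x45] = old                            # replay: restore the stale pair
value, tag = ram[0x45]
assert hmac.compare_digest(tag, mac(key, 0x45, value))  # check passes!
print("stale value accepted:", value)      # 120, not the latest 7
```

Both pairs were legitimately produced by the processor, so a per-location MAC cannot distinguish "latest" from "stale" — hence the need to remember memory state, which leads to hash trees.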

32
Previous Work (2): Hash Trees (Merkle Trees)
Processor
  • Construct a hash tree over untrusted memory
  • On-chip storage: 16-20 bytes (the root hash)
  • Read/write cost: logarithmic → ~10x slowdown

[Figure: a tree of hashes over data blocks V1-V4 in untrusted memory; on a miss, hashes are checked up to the on-chip root]
33
Cached Hash Trees [HPCA'03]
Cache hashes on-chip → the on-chip cache is trusted
→ stop checking earlier

[Figure: a miss on data V1-V4 is verified only up to the first hash found in the trusted on-chip cache, not all the way to the root]
34
Hardware Implementation Issue [ISCA'05]
  • What happens if a new block evicts a dirty one?
    → write-back
  • If we cache all hashes
  • Each access can cause another read and a
    write-back
  • → Need a large buffer
  • Cache hashes only if we have buffer space
    available

[Figure: integrity-verification datapath — a miss on a data block reads the block and its parent hash from untrusted memory into the cache through a buffer for integrity verification; evicting dirty data or hash blocks triggers write-backs and updates to the parent hashes]
35
Hiding Verification Latency
  • Integrity verification takes at least 1 hash
    computation
  • SHA-1 has 80 rounds → 80 cycles to compute
  • Should we wait for integrity verification to
    complete?
  • No need for precise integrity-violation
    exceptions: they should never happen, and if they
    do, we simply abort
  • Memory verification can be performed in the
    background, so no latency is added except for
    instructions that can compromise security

36
Implementation and Evaluation
37
Implementation
  • Fully-functional system on an FPGA board
    [ISCA'05]
  • AEGIS (Virtex2 FPGA), Memory (256MB SDRAM), I/O
    (RS-232)
  • Based on OpenRISC 1200 (a simple 4-stage
    pipelined RISC)
  • AEGIS instructions are implemented as special
    traps

[Figure: the FPGA processor connects to external memory and an RS-232 I/O port]
38
Area Estimate
  • Synopsys DC with a TSMC 0.18 μm library
  • New instructions and the PUF add 30K gates, 2KB
    memory (1.12x larger)
  • Off-chip protection adds 200K gates, 20KB memory
    (1.9x larger total)
  • The area can be further optimized

[Figure: area breakdown — I/O (UART, SDRAM ctrl, debug unit) 0.258 mm², PUF 0.027 mm²; the core, IV unit (5 SHA-1 cores), encryption unit (3 AES cores), 32KB I- and D-caches plus 16KB and 4KB arrays, 11KB code ROM, and 2KB scratch pad account for the rest, with individual modules between 0.086 and 2.512 mm²]
39
EEMBC/SPEC Performance
  • Performance overhead comes from off-chip
    protections
  • 5 EEMBC kernels and 1 SPEC benchmark
  • EEMBC kernels have negligible slowdown
  • Low cache miss-rate
  • Only ran 1 iteration
  • SPEC twolf also has reasonable slowdown

Benchmark       Slowdown (%), Integrity   Slowdown (%), Integrity + Privacy
routelookup     0.0                       0.3
ospf            0.2                       3.3
autocor         0.1                       1.6
conven          0.1                       1.3
fbital          0.0                       0.1
twolf (SPEC)    7.1                       15.5
40
High Performance Processors
  • Assume that the entire program and data are
    protected
  • If we add encryption, the overhead is 31% on
    average and 59% in the worst case

[Figure: normalized performance (IPC) for L2 caches with 64B blocks]
41
Related Projects
  • XOM (eXecution Only Memory)
  • Stated goal: protect integrity and privacy of
    code and data
  • Operating system is completely untrusted
  • Memory integrity checking does not prevent replay
    attacks
  • Privacy enforced for all code and data
  • TCG TPM / Microsoft NGSCB / ARM TrustZone
  • Protects from software attacks
  • Off-chip memory integrity and privacy are assumed
  • AEGIS provides higher security with smaller
    Trusted Computing Base (TCB)

42
Security Review
  • Have we built a secure computing system?

[Figure: off-chip Flash and SDRAM (kernel and data) are secured by integrity verification and encryption; the kernel is verified using hashes, and secrets are protected by PUFs]
  • Remaining attacks must be carried out while the
    processor is running
  • Probing on-chip memory
  • Side channels (address bus, power,
    electromagnetic fields)
  • → Can be further secured using techniques
    from smartcards

43
Applications
  • Critical operations on computers w/ untrusted
    owners
  • Commercial grid computing, utility computing
  • Mobile agents
  • Software IP protection, DRM
  • Collaboration of mutually mistrusting parties
  • Peer-to-peer applications
  • Trusted third party computation
  • Securing physically exposed devices (embedded and
    mobile devices)
  • Secure sensor networks
  • Cell phones, laptops, automobiles, etc.

44
Summary
  • Physical attacks are becoming more prevalent
  • Untrusted owners, physically exposed devices
  • Requires secure hardware platform to trust remote
    computation
  • The trusted computing base should be small to be
    secure and cheap
  • Hardware: single-chip secure processor
  • Physical random functions
  • Memory protection mechanisms
  • Software: suspended secure processing
  • There are many exciting applications that need to
    be investigated

45
Questions?
46
Extra Slides
47
Performance Slowdown
  • Performance overhead comes from off-chip
    protections
  • Synthetic benchmark
  • Reads 4MB array with a varying stride
  • Measures the slowdown for a varying cache
    miss-rate
  • Slowdown is reasonable for realistic miss-rates
  • Less than 20% for integrity
  • 5-10% additional for encryption

D-Cache miss-rate (%)   Slowdown (%), Integrity   Slowdown (%), Integrity + Privacy
6.25                    3.8                       8.3
12.5                    18.9                      25.6
25                      31.5                      40.5
50                      62.1                      80.3
100                     130.0                     162.0
48
Part IV: Ongoing and Future Work
49
Summary
  • Security challenges will dominate the design,
    implementation, and use of the expanding Internet
  • AEGIS can potentially be a building block that
    improves security in diverse platforms
  • There are many exciting problems that need to be
    solved that span computer security, architecture,
    cryptography, and operating systems

50
Secure Sensor Networks
  • Wide deployment of sensor networks for sensitive
    applications requires physical security
  • Platform
  • Power efficiency
  • Security kernel
  • Programming the system
  • Distributed
  • Multiple security levels
  • Upgrades and debugging
  • Heterogeneity
  • Secure nodes and insecure nodes

[Figure: physically secure nodes and gateways (G) connect sensors/actuators to the Internet]
51
Other Research Challenges
  • Bugs in applications
  • Hardware support to protect against malicious
    software attacks [ASPLOS'04]
  • Secure storage
  • The log-hash algorithm can be very efficient for
    checking a long sequence of memory accesses [MICRO-36]
  • Uses a new cryptographic primitive [Asiacrypt'03]
  • An adaptive algorithm can give the best of both
    hash trees and log hashes [IEEE Security and
    Privacy 2005]
  • Integrity verification of databases and
    distributed storage

52
My Other Research Interests
  • High Performance Memory Systems
  • Intelligent SRAM [DAC'03]
  • Caching and Job Scheduling
  • Analytical cache models [ICS'01]
  • Cache partitioning [PDCS'01, The Journal of
    Supercomputing 2004]
  • Memory-aware scheduling [JSSPP 2001]
  • Cache monitors [HPCA-8, 2002]

53
Controlled PUFs
54
Attacks on a PUF
  • How can we predict a response without a PUF?
  • Duplication: make more PUFs from the original
    layout and hope for a match → each bit still
    differs with 23% probability
  • Brute-force attack: exhaustively characterize the
    PUF by trying all challenges → exponential number
    of challenges
  • Direct measurement: open the PUF and attempt to
    directly measure its delays → difficult to
    precisely measure delays
  • Model-building attack: try to build a model of
    the PUF that has a high probability of outputting
    the same value → a possibility for a simple PUF
    circuit

55
Improving PUFs [ISCA'05]
PUFs can be made more secure and reliable by
adding extra control logic

[Figure: the c-bit challenge is hashed before driving the PUF, and ECC corrects the n-bit response; one input path is used for bootstrapping, the other for re-generation]
  • A hash function (SHA-1, MD5) precludes PUF
    model-building attacks since, to obtain the PUF
    output, an adversary has to invert a one-way function
  • Error Correcting Code (ECC) can eliminate the
    measurement noise without compromising security

56
ECC Security Discussion
Dimension of code k: number of code words = 2^k
Code word length n: length of the PUF response
Minimum distance d ≥ 2t + 1 (corrects t errors)
  • Suppose we choose an (n, k, d) code
  • Syndrome: n − k bits
  • → public; reveals information about responses
  • Security parameter is k
  • → at least k bits remain secret after revealing
    the syndrome
  • BCH (255, 107, 45) will correct 22 errors out of
    255 response bits and effectively produce a 107-bit
    secret key
  • Can get arbitrary-size keys
  • Theoretical study in the fuzzy extractor literature
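A toy example of the syndrome construction, using a (3, 1, 3) repetition code in place of the BCH(255, 107, 45) code (all function names are illustrative): the n − k = 2 public syndrome bits let a noisy re-measurement be corrected back to the original response, while the k = 1 secret bit stays hidden.

```python
# Toy (3,1,3) repetition code: n=3 response bits carry k=1 secret bit;
# the n-k=2 syndrome bits are public and allow correcting t=1 flipped
# bit when the PUF is re-measured.

def syndrome(bits):
    """Public helper data: parity of each response bit vs. the first.
    It reveals relations between bits, but not bits[0] itself."""
    return [bits[0] ^ bits[1], bits[0] ^ bits[2]]

def correct(noisy, syn):
    """Pick the codeword consistent with the stored syndrome that is
    closest to the noisy re-measurement; return the k=1 secret bit."""
    candidates = [[b, b ^ syn[0], b ^ syn[1]] for b in (0, 1)]
    best = min(candidates,
               key=lambda c: sum(x ^ y for x, y in zip(c, noisy)))
    return best[0]

response = [1, 0, 1]          # initial PUF measurement (enrollment)
syn = syndrome(response)      # stored publicly
noisy = [1, 0, 0]             # later re-measurement, one bit flipped
assert correct(noisy, syn) == response[0]
```

The real BCH code plays the same game at scale: 255 − 107 = 148 public syndrome bits, up to 22 correctable errors, and at least 107 bits of remaining secrecy.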

57
PUF Secret Sharing Protocols
  • Now we know how to extract a secret key from an
    IC using PUFs
  • A trusted party can securely embed a private key
    using the PUF secret
  • Blaise et al. developed a PUF protocol for secret
    sharing
  • Each user can share a unique challenge-response
    pair (CRP) with a PUF
  • Using a CRP, a user can share a secret with a
    specific program running on the processor with a
    PUF
  • PUFs can be used for any current symmetric-key or
    public/private-key application

58
Controlled PUFs (CPUFs)
  • Restrict access to the PUF using digital control
    logic to produce a Controlled PUF (CPUF)
  • Different control algorithms provide different
    capabilities
  • Many types of CPUFs possible, each tailored to a
    particular application
  • Will describe two types of CPUFs
  • A user can share a secret with a remote CPUF that
    has been bootstrapped
  • Error correction ensures PUF reliability
    equivalent to digital logic
  • CPUFs can be used for any current symmetric key
    or public/private key application

59
Controlled PUF Implementations
  • Will describe two useful CPUF implementations
  • First Controlled PUF Implementation
  • Initial bootstrap step to express a key returns
    an access code
  • Secret generation in the field by applying the
    public access code

60
A Controlled PUF

[Figure: PreChall → One-Way Function (OWF) → Challenge → PUF → Error Correction (EC) → Response; a select input RorS either outputs the Response directly (1) or XORs it with an AccessCode (0) to form a Volatile Key that drives an on-chip decryption unit]
Note: everything but the PUF is conventional
digital logic and can also be implemented in software
61
One-Way Function
  • A hash function such as MD5 or SHA-1
  • Given an output, hard to find what input produced
    it

[Figure: PreChall → One-Way Function (OWF) → Challenge]
  • Given PreChall, it is easy to compute Challenge
  • Given Challenge, it is hard to find a PreChall
    that produces that Challenge

62
Bootstrapping
[Figure: with RorS = 1, PreChall → SHA-1 → PUF → Response]
Prior to CPUF deployment (with no eavesdroppers):
apply PreChall to obtain Response from the CPUF.
User software: compute Chall = OWF(PreChall);
choose Key; compute AccessCode = Response XOR Key.
PreChall and Key are secret; Chall and AccessCode
are public. Chall and Key can be the same for
different CPUFs, but Response and AccessCode will
be different for each CPUF.
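Bootstrapping and the later in-the-field key re-generation can be sketched end-to-end. The PUF plus error correction is simulated by a fixed hash here, purely for illustration; every constant is invented for the example.

```python
import hashlib

def owf(data):
    return hashlib.sha1(data).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Stand-in for the error-corrected PUF response of one particular chip:
puf_response = owf(b"chip-unique delays + challenge")

# --- Bootstrapping (no eavesdroppers) ---
pre_chall = b"user secret pre-challenge"
chall = owf(pre_chall)                  # public: addresses the PUF
key = b"\x2a" * 20                      # user-chosen; may be shared across chips
access_code = xor(puf_response, key)    # public, but chip-specific

# --- Later, in the field ---
# The chip is given the public (chall, access_code) and regenerates Key
# internally; Key never needs to cross the untrusted bus.
regenerated = xor(puf_response, access_code)
assert regenerated == key
```

Note that PreChall and Key stay secret with the user, while Chall and AccessCode can be stored or transmitted in the clear: without the chip's PUF response, AccessCode is just a one-time pad of the key.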
63
Secret Key Generation
[Figure: with RorS = 0, Challenge → PUF → EC → Response, which is XORed with the AccessCode to form the Volatile Key that decrypts an encrypted message]
When the CPUF is in a remote location:
User: encrypt Message with Key to get EKey(Message);
send EKey(Message), Chall, AccessCode to the CPUF.
The CPUF chip takes Chall and AccessCode and
generates Key internally. The CPUF uses Key to
decrypt Message.
64
Controlled PUF Implementations
  • Will describe two useful CPUF implementations
  • Second Controlled PUF Implementation
  • Initial bootstrap step does not expose the PUF
    response (or key). This is accomplished using
    public-key encryption.
  • Secret generation is a matter of applying a
    challenge

65
Public Key Cryptography
  • Have a public key PK and a private key SK that
    form a pair
  • If a message is encrypted with PK, then only SK
    can decrypt it properly
  • If a message is signed with SK, then the
    signature can be verified using PK

66
An Asymmetric Controlled PUF: Bootstrapping
Allowing Eavesdroppers

[Figure: Challenge → Hash → PUF → n-bit Response; hashing the response yields a k-bit Key, which is encrypted under the SP's public key as E_SP-PK(Key); error-correction syndrome encoding outputs the (n − k)-bit Syndrome for the Response (public information)]
  • During bootstrapping, the device simply writes out
    Syndrome(Response) and E_SP-PK(Key)
  • Only the SP can decrypt E_SP-PK(Key) with its
    private key to obtain the Key for its public-key
    challenge for each chip

67
An Asymmetric Controlled PUF: Secret Sharing

[Figure: the public Syndrome and error-correction syndrome decoding reconstruct the Response from a fresh PUF measurement of the Challenge; hashing yields the Key]
  • Only the SP can decrypt E_SP-PK(Key) with its
    private key to obtain the Key for its public-key
    challenge for each chip
  • The Key is re-generated reliably using the
    Syndrome and was never exposed at any time

68
Discussion

[Figure: the same bootstrapping datapath — Challenge → Hash → PUF → Response → Key, with E_SP-PK(Key) and the public (n − k)-bit Syndrome for the Response]
  • A device cannot impersonate a CPUF device
  • To detect fake devices that pretend to have PUFs,
    a list of legitimate public E_SP-PK(Key)
    identifiers can be maintained (other options
    possible)

69
Applications for CPUFs
70
Applications
  • Anonymous Computation
  • Alice wants to run computations on Bob's
    computer and wants to make sure that she is
    getting correct results. A certificate is
    returned with her results to show that they were
    correctly executed.
  • Software Licensing
  • Alice wants to sell Bob a program which will only
    run on Bob's chip (identified by a PUF). The
    program is copy-protected so it will not run on
    any other chip.

How can we enable the above applications by
trusting only a single-chip processor that
contains a silicon PUF?
71
Sharing a Secret with a Silicon PUF
  • Suppose Alice wishes to share a secret with the
    silicon PUF
  • She has a challenge response pair that no one
    else knows, which can authenticate the PUF
  • She asks the PUF for the response to a challenge

[Figure: Alice queries the PUF for the response to her challenge]
72
Restricting Access to the PUF
  • To prevent the attack, the man in the middle must
    be prevented from finding out the response.
  • Alice's program must be able to establish a
    shared secret with the PUF; the attacker's
    program must not be able to get the secret.
  • Combine the response with a hash of the program.
  • The PUF can only be accessed via the GetSecret
    function

73
Getting a Challenge-Response Pair
  • Now Alice can use a challenge-response pair to
    generate a shared secret with the PUF-equipped
    device.
  • But Alice can't get a challenge-response pair in
    the first place, since the PUF never releases
    responses directly.
  • An extra function that can return responses is
    needed.

74
Getting a Challenge-Response Pair - 2
  • Let Alice use a Pre-Challenge.
  • Use program hash to prevent eavesdroppers from
    using the pre-challenge.
  • The PUF has a GetResponse function

[Figure: GetResponse — the PUF is extended with a hash: Challenge = Hash(Hash(Program), Pre-Challenge), and the PUF returns the Response]
75
Controlled PUF Implementation

[Figure: GetResponse computes Challenge = Hash(Hash(Program), Pre-Challenge) and returns the PUF's Response; GetSecret computes Secret = Hash(Hash(Program), PUF(Challenge))]
76
Challenge-Response Pair Management: Bootstrapping
  • When a CPUF has just been produced, the
    manufacturer wants to generate a
    challenge-response pair.
  • Manufacturer provides Pre-challenge and Program.
  • CPUF produces Response.
  • Manufacturer gets Challenge by computing
    Hash(Hash(Program), PreChallenge).
  • Manufacturer has (Challenge, Response) pair where
    Challenge, Program, and Hash(Program) are public,
    but Response is not known to anyone since
    Pre-challenge is thrown away

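The two primitives and the bootstrapping flow above can be sketched as follows. The PUF is simulated with a keyed hash; in hardware, Hash(Program) is measured by the processor itself rather than passed as an argument, and all constants are illustrative.

```python
import hashlib

def H(*parts):
    return hashlib.sha1(b"".join(parts)).digest()

def puf(challenge):
    """Stand-in for the silicon PUF: a fixed chip-unique mapping."""
    return H(b"chip-unique process variation", challenge)

def get_response(program_hash, pre_challenge):
    # Hardware computes Challenge = Hash(Hash(Program), Pre-Challenge)
    return puf(H(program_hash, pre_challenge))

def get_secret(program_hash, challenge):
    # Secret = Hash(Hash(Program), Response): bound to the running program
    return H(program_hash, puf(challenge))

# --- Manufacturer bootstrapping ---
prog = H(b"manufacturer bootstrap program")
pre = b"thrown away after bootstrapping"
response = get_response(prog, pre)
challenge = H(prog, pre)          # the public half of the CRP

# Later, the same program derives the same secret from the public challenge:
assert get_secret(prog, challenge) == H(prog, response)
# A different program gets an unrelated secret from the same challenge:
assert get_secret(H(b"other program"), challenge) != H(prog, response)
```

Because the pre-challenge is discarded and responses are never output directly, nobody besides the holder of the CRP can recompute the response behind a public challenge.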
77
Software Licensing
  • Program = (Ecode, Challenge)
  • Secret = GetSecret(Challenge)
  • Code = Decrypt(Ecode, Secret)
  • Run Code
  • Ecode has been encrypted with Secret by the
    Manufacturer
  • Secret is known to the manufacturer because he
    knows the Response to the Challenge and can compute
  • Secret = Hash(Hash(Program), Response)
  • An adversary cannot determine Secret because he
    does not know Response or Pre-Challenge
  • If an adversary tries a different program, a
    different secret will be generated because
    Hash(Program) is different
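A sketch of the licensing flow, with a toy XOR stream cipher standing in for real encryption and every constant invented for the example:

```python
import hashlib

def H(*parts):
    return hashlib.sha1(b"".join(parts)).digest()

def xor_stream(key, data):
    """Toy stream cipher (illustrative only): XOR with a hash-derived pad."""
    pad = b""
    while len(pad) < len(data):
        pad += H(key, len(pad).to_bytes(4, "big"))
    return bytes(d ^ p for d, p in zip(data, pad))

# Known only to the manufacturer, via its CRP for Bob's chip:
response = H(b"response of Bob's chip PUF")
program_hash = H(b"loader: Code = Decrypt(Ecode, GetSecret(Challenge))")

# Manufacturer computes Secret = Hash(Hash(Program), Response) and
# ships the encrypted program:
secret = H(program_hash, response)
ecode = xor_stream(secret, b"proprietary machine code")

# On Bob's chip, GetSecret recomputes the same Secret for the same
# program (modeled here by the same two hash inputs) and decrypts:
code = xor_stream(H(program_hash, response), ecode)
assert code == b"proprietary machine code"
```

On any other chip the PUF response differs, so the derived secret differs and Ecode decrypts to garbage; a modified loader likewise changes Hash(Program) and gets an unrelated secret.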
78
Certified Execution
  • Program = (Code, Challenge)
  • Result = Run Code
  • Secret = GetSecret(Challenge)
  • Cert = (Result, MAC_Secret(Result, Hash(Code)))
  • Output Cert
  • Alice can verify Cert because she knows Response,
    and hence Secret, and she knows Hash(Code)
  • An adversary cannot fake Cert because he does not
    know Secret
  • Note: Secret is not dependent on Code, but
    dependent on Program
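A sketch of certificate generation and Alice's check, with HMAC-SHA256 standing in for MAC_Secret and all constants invented for the example:

```python
import hmac, hashlib

def H(*parts):
    return hashlib.sha1(b"".join(parts)).digest()

response = H(b"chip PUF response")       # Alice knows this via her CRP
code_hash = H(b"Result = Run Code")
program_hash = H(b"wrapper", code_hash)  # Program embeds Code + Challenge

# --- On the CPUF device ---
# Secret = Hash(Hash(Program), Response), then MAC the result:
secret = H(program_hash, response)
result = b"42"
cert = (result,
        hmac.new(secret, result + code_hash, hashlib.sha256).digest())

# --- At Alice's end ---
# She recomputes Secret from her CRP and checks the certificate:
expected = hmac.new(H(program_hash, response), result + code_hash,
                    hashlib.sha256).digest()
assert hmac.compare_digest(cert[1], expected)
```

An adversary without the CRP cannot derive the secret, so he can neither forge a certificate for a fake result nor transplant one onto different code.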
79
CRP Management: Security Discussions
80
Challenge-Response Pair Management
  • How does Alice get Challenge-response pairs?
  • Protocols
  • Bootstrapping: the manufacturer or certifier has
    the CPUF in their possession and can obtain
    challenge-response pairs without fear of
    eavesdroppers
  • Introduction: a certifier gives Alice a
    challenge-response pair over a secure channel.
  • Renewal: given a challenge-response pair, Alice
    shares a secret with the remote CPUF and creates
    a secure channel. New challenge-response pairs
    can now be obtained. Alice can now license
    software to the CPUF.

81
Challenge-Response Pair Management: Introduction
  • Alice wants to get a challenge-response pair from
    a certifier who already has a challenge-response
    pair.
  • The certifier can do renewal before he does
    introduction if he doesn't want to give away his
    challenge-response pair.
  • The Certifier and Alice set up a secure
    connection.
  • The Certifier sends Alice a Challenge-Response
    Pair.

82
Challenge-Response Pair Management: Renewal
  • Alice, who has a challenge-response pair, wants
    to generate new challenge-response pairs.
  • Initial CRP is used to establish encrypted and
    authenticated link from the CPUF to Alice.
  • Alice provides a pre-challenge.
  • The CPUF generates a CRP.
  • The CRP is returned to Alice.

83
Challenge-Response Pair Management: Private Renewal
  • Alice got a challenge-response pair from a
    certifier.
  • Alice trusts the certifier not to actively
    tamper with her communication, but does not
    trust him not to eavesdrop.
  • Alice can use a public key system to generate a
    challenge-response pair that the certifier does
    not know.
  • The CRP and Alices public-key are used to
    establish an encrypted and authenticated link
    from the CPUF to Alice.
  • Same as normal renewal.

84
Putting it all Together
[Figure: CRP management chain — Bootstrapping at the manufacturer, then Introduction and Renewal (or Private Renewal) between certifiers and users, ending in an Application that uses a challenge-response pair]
85
CPUFs vs. Digital Keys
  • CPUFs express volatile keys when addressed by
    public challenges and access codes
  • Volatile keys are much harder to extract from a
    running chip
  • Challenges and access codes can be stored in
    exposed form on-chip, on a different chip, or
    sent across the network
  • Crypto-processor does not need EEPROM
  • CPUFs can express many independent master keys
  • Isolate service provider secrets;
    reduce/eliminate dependence of secondary keys on
    a single master key
  • Update (potentially compromised) master keys in
    the field by addressing the PUF with a different
    challenge and access code
  • Asymmetric CPUFs can be used in a manner that
    ensures that secret keys never have to be exposed
  • Error correction is required, and produces
    reliability equivalent to digital logic

86
Security Discussion
Dimension of code k: number of code words = 2^k
Code word length n: length of the PUF response
Minimum distance d ≥ 2t + 1
  • Suppose we choose an (n, k, d) code
  • Syndrome: n − k bits
  • Security parameter is k
  • BCH (255, 107, 45) will correct 22 errors out of
    255 response bits and effectively produce a 107-bit
    secret key
  • Can get arbitrary-size keys
  • The hash function precludes PUF model-building
    attacks since, to obtain the PUF output, an
    adversary has to invert a one-way function

87
Measurement Attacks and Software Attacks
  • Can an adversary create a software clone of a
    given PUF chip?

88
Measurement Attacks and Software Attacks
  • Can an adversary create a software clone of a
    given PUF chip?

Distance between Chip X and Y responses: 23%
At 125°C, measurement noise for Chip X: 9%
Measurement noise for Chip X: 0.5%
Model-building appears hard even for
simple circuits