1
Scalability, Fidelity, and Containment in the
Potemkin Virtual Honeyfarm
  • Michael Vrable, Justin Ma, Jay Chen, David Moore,
    Erik Vandekieft,
  • Alex C. Snoeren, Geoffrey M. Voelker, Stefan
    Savage

Symposium on Operating Systems Principles, 2005
Tracy Wagner, CDA 6938, February 8, 2007
2
Outline
  • Introduction
  • Potemkin Virtual Honeyfarm
  • Evaluation
  • Contributions/Strengths
  • Weaknesses
  • Further Research

3
Introduction
  • Problem: sharp tradeoff between scalability,
    fidelity, and containment when deploying a
    honeyfarm.
  • Scalability: how well a solution works as the
    size of the problem increases
  • Fidelity: adherence to fact or detail; accuracy
    or exactness
  • Containment: the act of containing; keeping
    something from spreading

4
Introduction
  • Inherent tension between
    • Scalability and Fidelity
      • Low Interaction → High Scalability
      • High Interaction → High Fidelity
    • Containment and Fidelity
      • Strict Containment Policies → Loss of Fidelity
      • Example: no outbound packets allowed, so a
        trojan cannot phone home

5
Introduction
  • Proposal: a honeyfarm system architecture that
    can
    • Monitor hundreds of thousands of IP addresses,
      providing scalability
    • Offer high fidelity, similar to
      high-interaction honeypots
    • Support customized containment policies

6
Potemkin Virtual Honeyfarm
  • Prototype Honeyfarm System
  • Virtual Machines
  • Physical Memory Sharing
  • Idleness
  • Major Components
  • Gateway
  • Virtual Machine Monitor (VMM)

7
Gateway
  • Direct Inbound Traffic
  • Contain Outbound Traffic
  • Implement Resource Management
  • Interface With Other Components

8
Gateway
  • Direct Inbound Traffic
    • Traffic arrival
      • Routing, tunneling
    • Load balance across backend honeyfarm servers
      • Random, or based on platform
    • Programmable filters
      • Eliminate short-lived VMs caused by
        same-service port scans across a large IP
        range (see the sketch below)
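
  The gateway's dispatch and filtering can be pictured with a small sketch. The
  Python fragment below is purely illustrative and uses invented names
  (BACKENDS, SCAN_THRESHOLD, dispatch_inbound); it is not the Potemkin gateway's
  actual code. It load-balances new flows across backend servers and filters
  sources that sweep one service across many addresses.

    import hashlib
    import random
    from collections import defaultdict

    BACKENDS = ["backend-1", "backend-2", "backend-3"]  # assumed server pool
    SCAN_THRESHOLD = 64              # assumed cutoff for a same-service sweep

    # (source IP, destination port) -> distinct target addresses probed so far
    scan_targets = defaultdict(set)

    def choose_backend(dst_ip, policy="random"):
        """Pick a backend server at random, or by hashing the target address
        (e.g., to keep a given platform type on a given server)."""
        if policy == "random":
            return random.choice(BACKENDS)
        digest = hashlib.sha1(dst_ip.encode()).digest()
        return BACKENDS[digest[0] % len(BACKENDS)]

    def dispatch_inbound(src_ip, dst_ip, dst_port):
        """Return the backend to tunnel this packet to, or None if the
        programmable filter decides no VM should be spawned for it."""
        scan_targets[(src_ip, dst_port)].add(dst_ip)
        if len(scan_targets[(src_ip, dst_port)]) > SCAN_THRESHOLD:
            return None                    # same-service scan: filter it out
        return choose_backend(dst_ip)      # otherwise route/tunnel to a backend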

9
Gateway
  • Contain Outbound Traffic
    • The gateway is the only physical connection
      between honeyfarm servers and the Internet
    • Customizable containment policies
      • e.g., whether DNS traffic may leave
    • Traffic that does not pass the containment
      filter may be reflected back into the
      honeyfarm (see the sketch below)
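
  A minimal sketch of a customizable containment policy (the address prefix,
  flags, and function name below are assumptions for illustration, not the
  gateway's real interface):

    HONEYFARM_PREFIX = "10.1."      # assumed internal honeyfarm address range

    def containment_policy(dst_ip, dst_port, allow_dns=True, reflect=True):
        """Classify an outbound packet sent by a honeypot VM."""
        if dst_ip.startswith(HONEYFARM_PREFIX):
            return "deliver-internally"  # honeypot-to-honeypot stays inside
        if allow_dns and dst_port == 53:
            return "forward"        # a lenient policy may let DNS queries out
        if reflect:
            return "reflect"        # bounce the packet back into the honeyfarm
        return "drop"               # strict policy: nothing reaches the Internet

    # Under the default policy a worm's outbound probe is reflected,
    # while its DNS lookup is allowed to leave.
    assert containment_policy("198.51.100.7", 445) == "reflect"
    assert containment_policy("198.51.100.7", 53) == "forward"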

10
Gateway
  • Implement Resource Management
    • Dedicate only a subset of servers to reflection
    • Limit the number of reflections with identical
      payloads (see the sketch below)
    • Determine when to reclaim VM resources
  • Interface With Other Components
    • Detection
    • Analysis
    • User interface
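
  One way to picture the reflection limit (a sketch with an arbitrary threshold
  and invented names; the real gateway's accounting is richer than this):

    import hashlib
    from collections import defaultdict

    MAX_REFLECTIONS_PER_PAYLOAD = 10     # assumed cap, not a value from the paper
    reflections_seen = defaultdict(int)  # payload fingerprint -> reflections so far

    def should_reflect(payload: bytes) -> bool:
        """Reflect an outbound packet only while its payload is still rare,
        so a single worm cannot monopolize the honeyfarm's VM budget."""
        fingerprint = hashlib.sha1(payload).hexdigest()
        reflections_seen[fingerprint] += 1
        return reflections_seen[fingerprint] <= MAX_REFLECTIONS_PER_PAYLOAD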

11
Virtual Machine Monitor
  • Flash Cloning
    • Instantiates virtual machines quickly
    • Copies and modifies a host reference image
  • Delta Virtualization
    • Optimizes the flash cloning operation
    • Utilizes copy-on-write (see the sketch below)
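
  The copy-on-write idea behind flash cloning and delta virtualization can be
  shown with a toy model (illustrative only; the actual mechanism operates on
  Xen memory pages inside the VMM):

    class DeltaVM:
        """Toy model of a flash-cloned VM: pages are shared with a reference
        image until the clone writes to them."""

        def __init__(self, reference_image):
            self.reference = reference_image  # shared, read-only reference pages
            self.delta = {}                   # private copies made on first write

        def read(self, page_no):
            # Reads hit the private copy if one exists, else the shared image.
            if page_no in self.delta:
                return self.delta[page_no]
            return self.reference[page_no]

        def write(self, page_no, data):
            # Copy-on-write: only pages the clone modifies consume new memory.
            self.delta[page_no] = data

    reference = {0: b"kernel", 1: b"libc", 2: b"heap"}  # assumed reference image
    clone = DeltaVM(reference)       # "flash clone": created without copying pages
    clone.write(2, b"attacker data") # only the touched page is duplicated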

12
Virtual Machine Monitor (diagram not reproduced in the transcript)
13
Architecture (diagram not reproduced in the transcript)
14
Evaluation - /16 network
  • 156 destination addresses multiplexed per active
    VM instance
  • Hundreds of honeypot VMs per physical server
  • Hundreds of distinct VMs can be supported running
    simple services
  • Live deployment created 2,100 VMs dynamically in
    a 10-minute period; it is possible to create
    honeyfarms with both scale and fidelity
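
  As a rough sanity check on these numbers (my arithmetic, not a figure from
  the slides): a /16 network spans 2^16 = 65,536 addresses, so multiplexing
  roughly 156 destination addresses onto each active VM implies on the order of
  65,536 / 156 ≈ 420 VMs live at any one time to cover the whole network, a
  load that a handful of physical servers hosting hundreds of VMs each can
  absorb.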

15
Contributions/Strengths
  • Flash Cloning and Delta Virtualization allow for
    a highly scalable system with high fidelity
  • Improvement in scale of up to six orders of
    magnitude; as implemented, can support 64K
    addresses
  • Internal Reflection can offer significant insight
    into spreading dynamics of new worms
  • Customizable containment policies allow testing
    of various scenarios

16
Weaknesses
  • Reflection must be carefully managed to avoid
    resource starvation
  • VM cannot respond until cloning is complete (if
    cloning takes too long, traffic may be lost)
  • Scalability depends upon the gateway
  • The router function renders the honeyfarm
    visible to anyone using traceroute
  • Attacker techniques exist for detecting a
    virtualized honeyfarm

17
Further Research
  • Defer creation of a new VM until a complete
    session is established
  • Optimization of all aspects of flash cloning
  • Optimization of gateway
  • Offer support for disk devices
  • Develop support for Windows hosts