Title: Defending Against DDoS Attacks Using Max-min Fair Server-Centric Router Throttles
1. Defending Against DDoS Attacks Using Max-min Fair Server-Centric Router Throttles
- David K.Y. Yau, CS Dept, Purdue University
- John C.S. Lui, CSE Dept, CUHK
2. Motivations
- The Internet is an open and democratic environment
  - increasingly used for mission-critical work and commercial applications
- Many security threats are present or emerging
  - easy to launch, even for naïve users
- We need effective and flexible defenses to detect/trace/counter attacks
- Goals
  - protect innocent users
  - prosecute criminals
- Ambitious goals!
3. Network Denial-of-Service Attacks
- Some attacks are quite subtle
  - call for securing protocols and intrusion detection (e.g., BGP attacks, TCP SYN attacks)
  - at the routing infrastructure: malicious dropping of packets, etc. (e.g., low-rate TCP attacks)
- Others work by brute force
  - flooding (e.g., UDP packets, valid Web requests)
  - cripples the victim
  - precludes any sophisticated defense at the victim site
- Philosophical question: what is an attacker?
  - viewed here as a resource management problem
4. Flooding Attack
(Figure: attack traffic from many sources flooding the server.)
5. Server-Centric Router Throttle
- Installed by the server when under stress, at a set of deployment routers
  - the throttle request can be sent by multicast
- Specifies the leaky-bucket rate at which a router may forward traffic to the server
  - aggressive traffic for the server is dropped before reaching it
  - the rate is determined by a feedback control algorithm
- Issues: (1) Which set of routers? (2) What is the proper dropping rate?
6. Router Throttle
(Figure: an aggressive flow toward S passes through a deployment router that rate-limits it.)
- Each victim has a leaky bucket for its rate limit
- Small memory and computation overhead! (a minimal limiter is sketched below)
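To illustrate how little state the throttle needs, here is a minimal token-bucket-style limiter in Python; the class name, fields, and byte-based accounting are illustrative assumptions, not the paper's kernel implementation.

```python
import time

class LeakyBucketThrottle:
    """Per-victim rate limiter: forward traffic toward the victim server at no
    more than `rate` bytes/second.  State is just a counter and a timestamp,
    hence the small memory and computation overhead."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps           # throttle rate set by the victim server
        self.burst = burst_bytes       # bucket depth
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_len):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True                # forward toward the server
        return False                   # drop: aggressive traffic absorbed here
```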
7. Key Design Problems
- Resource allocation: who is entitled to what?
  - need to keep the server operating within its load limits
- What notion of fairness, and how do we achieve it?
  - need global, rather than router-local, fairness
- How to respond to network and user dynamics (e.g., traffic fluctuations)?
  - a feedback control strategy is needed
8. What Is Being Fair?
- The baseline approach of dropping a fraction f, say ½, of each flow's traffic won't work well
  - a flow can cause more damage to other flows simply by being more aggressive!
- Rather, no flow should get a higher rate than another flow that still has unmet demand
  - this way, we penalize only the aggressive flows while protecting the well-behaving ones (see the water-filling sketch below)
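To make this notion concrete, the sketch below computes max-min fair shares by the standard water-filling procedure; the function name and example demands are illustrative, not the paper's router code.

```python
def max_min_shares(demands, capacity):
    """Water-filling: a flow whose demand fits under the current equal split
    gets its full demand; flows demanding more than the final fair share are
    capped at that share."""
    shares = {}
    remaining = dict(demands)
    cap = capacity
    while remaining:
        fair = cap / len(remaining)
        satisfied = {f: d for f, d in remaining.items() if d <= fair}
        if not satisfied:
            for f in remaining:
                shares[f] = fair        # everyone left is capped at the fair share
            break
        for f, d in satisfied.items():
            shares[f] = d               # small demands are fully met
            cap -= d
            del remaining[f]
    return shares

# A keeps its 2; B and C split the rest, 5 each
print(max_min_shares({"A": 2, "B": 8, "C": 10}, 12))
```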
9. Fairness Notion
- Since we proactively drop packets ahead of the congestion point, we need a global fairness notion
- Max-min fairness (a standard notion) among the level-k routing points R(k), i.e., routers about k hops away from the destination
  - these routing points serve as the deployment points
10. Level-k Deployment Points
- Deployment points are parameterized by an integer k
- R(k): the set of routers that are either k hops away from the server S, or fewer than k hops away from S but directly connected to a host
- Fairness is defined across the global routing points R(k) (a traversal sketch follows)
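One way to read this definition operationally: breadth-first search outward from S over routers, collecting routers at exactly k hops plus any closer router that directly attaches a host. A hedged sketch; the graph representation, function name, and toy topology are assumptions for illustration.

```python
from collections import deque

def level_k_deployment(adj, is_host, server, k):
    """R(k): routers exactly k hops from `server`, plus routers fewer than k
    hops away that directly attach a host.  `adj` maps node -> neighbours;
    `is_host` marks end hosts (routers are the non-host nodes)."""
    rk, seen = set(), {server}
    queue = deque([(server, 0)])
    while queue:
        node, dist = queue.popleft()
        if node != server and not is_host[node]:
            if dist == k or any(is_host[n] and n != server for n in adj[node]):
                rk.add(node)
        if dist == k or (is_host[node] and node != server):
            continue                  # stop at level k; never traverse through hosts
        for n in adj[node]:
            if n not in seen:
                seen.add(n)
                queue.append((n, dist + 1))
    return rk

# toy topology: S - r1 - r2 - h1, with host h2 hanging directly off r1
adj = {"S": ["r1"], "r1": ["S", "r2", "h2"], "r2": ["r1", "h1"],
       "h1": ["r2"], "h2": ["r1"]}
is_host = {"S": True, "r1": False, "r2": False, "h1": True, "h2": True}
print(level_k_deployment(adj, is_host, "S", 2))   # contains r1 and r2
```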
11. Level-3 Deployment
(Figure: example topology with the level-3 deployment routers around the server.)
12. Feedback Control Strategy
- Hysteresis control
  - high and low water marks (U_S and L_S) on server load, used to strengthen or relax the router throttles
- Additive-increase/multiplicative-decrease rate adjustment
  - the throttle is strengthened (rate decreased multiplicatively) when server load exceeds U_S, and relaxed (rate increased additively) when load falls below L_S
  - the throttle is removed when a relaxed rate does not result in a significant server load increase
(A minimal control-loop sketch follows.)
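Here is a minimal sketch of one iteration of this hysteresis/AIMD loop as the victim server might run it. The function, parameter names, and constants are illustrative assumptions; the paper's actual algorithm also multicasts the new rate to R(k) and handles throttle removal.

```python
def adjust_throttle(load, rate, low, high, step, beta):
    """One control iteration at the victim server (hedged sketch).
    load       : measured server load over the last interval
    rate       : current per-router throttle rate r_s sent to R(k)
    low, high  : L_S and U_S hysteresis water marks
    step, beta : AIMD constants (0 < beta < 1)."""
    if load > high:
        return rate * beta     # overloaded: strengthen throttle (multiplicative decrease)
    if load < low:
        return rate + step     # underloaded: relax throttle (additive increase)
    return rate                # load within [L_S, U_S]: hold the current rate

# e.g. new_rate = adjust_throttle(load=1800, rate=40, low=1650, high=1750, step=5, beta=0.5)
```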
13. Fairness Definition
- A resource control algorithm achieves level-k max-min fairness among the routers in R(k) if the allowed forwarding rate of traffic for S at each router is the router's max-min fair share of some rate r satisfying L_S ≤ r ≤ U_S
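Restated symbolically (notation introduced here for illustration: f_i is the allowed forwarding rate at router i and d_i its offered traffic toward S):

```latex
% Level-k max-min fairness: there is a rate r in [L_S, U_S] such that each
% router in R(k) forwards its max-min fair share of r, i.e. min(d_i, c) for a
% common water level c that exhausts r (or all demands, if they sum below r).
\exists\, r \in [L_S,\, U_S]:\quad
f_i = \min(d_i,\, c)\;\;\forall i \in R(k),
\qquad \sum_{i \in R(k)} f_i \;=\; \min\Bigl(r,\ \sum_{i \in R(k)} d_i\Bigr).
```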
14. Fair Throttle Algorithm
15. Example Max-min Rates (L = 18, H = 22)
(Figure: example topology showing the max-min fair rates at the deployment routers around the server.)
16. Interesting Questions
- Can we preferentially drop attacker traffic over good-user traffic?
- Can we keep the server operating within its design limits, so that the good-user traffic that gets through receives acceptable service?
- How stable is such a control algorithm, and how does it converge?
17. Algorithm Evaluation
- Control-theoretic analysis (fluid analysis)
  - algorithm stability and convergence under different system parameters
- Packet-network simulations (packet-level analysis)
  - tested under UDP and TCP traffic, and also with Web traces
- System implementation (the real thing, baby!)
  - deployment costs
18. Control-theoretic Model
(Figure: block diagram showing the throttle signal from the victim, the step size, and the adjusted traffic from each source i.)
- When the throttle signal is high, the server is underloaded; when the throttle signal is low, the server is overloaded
- Note the analogy with familiar AIMD-style feedback control
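For intuition, here is a toy discrete-time version of this feedback loop. The rates, water marks, and step constants echo the constant sources on the next slide but are chosen for illustration; this is not the paper's fluid analysis.

```python
def simulate(demands, low, high, step=50, beta=0.5, rounds=12):
    """Toy discrete-time feedback loop: each source i sends min(demand_i, r);
    the victim lowers the throttle rate r when the aggregate load exceeds
    `high` and raises it when the load falls below `low`."""
    r = max(demands)                     # start unthrottled
    for t in range(rounds):
        load = sum(min(d, r) for d in demands)
        if load > high:
            r *= beta                    # throttle signal "low": server overloaded
        elif load < low:
            r += step                    # throttle signal "high": server underloaded
        print(f"t={t:2d}  r={r:8.1f}  load={load:7.1f}")

# three well-behaved sources and two aggressive ones, as on the next slide
simulate([20, 30, 25, 4000, 2800], low=1650, high=1750)
```

Running this, the load first overshoots, is cut back multiplicatively, and then climbs additively until it settles inside [L_S, U_S].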
19. Feedback Control Model (U_S = 1750, L_S = 1650)
(Figure: constant sources of rate 20, 30, 25, 4000, and 2800.)
20. Output for good traffic (total from source 1)
21. Output for attack traffic (total from source 5)
22. Output for attack traffic (total from source 6)
23. Total traffic to the server (U_S = 1750, L_S = 1650)
24. Case 2: variable attack traffic (U_S = 1750, L_S = 1650)
(Figure: square-pulse attack traffic.)
25. Output of attack traffic 1
26. Output of attack traffic 2
27. Total traffic to the server (U_S = 1750, L_S = 1650)
28. Feedback Control Model (sources and server)
29. Feedback Control Model (server throttle signal)
30. Feedback Control Model (sources process the throttle)
31. Throttle Rate (L_S = 900, U_S = 1100)
32. Server Load (L_S = 900, U_S = 1100)
33. Throttle Rate (U_S = 1100)
34. Server Load (U_S = 1100)
35. Throttle Rate (L_S = 1050, U_S = 1100)
36. Server Load (L_S = 1050, U_S = 1100)
37. NS2 UDP Simulation Experiments
- Global network topology reconstructed from real traceroute data
  - AT&T Internet mapping project: 709,310 traceroute paths from a single source to 103,402 destinations
  - randomly selected 5,000 paths, with 135,821 nodes of which 3,879 are hosts
- Randomly select x of the hosts to be attackers (see the sketch below)
  - good users send at a rate in [0, r], attackers at a rate in [0, R]
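A hedged sketch of the attacker selection and rate assignment described above; the function and parameter names are assumptions, and the actual experiments are driven in NS2.

```python
import random

def assign_traffic(hosts, attacker_fraction, r_good, r_attack, seed=1):
    """Randomly label a fraction of hosts as attackers and draw each host's
    sending rate uniformly from [0, r_good] (good users) or [0, r_attack]
    (attackers), mirroring the setup on this slide."""
    rng = random.Random(seed)
    attackers = set(rng.sample(hosts, int(attacker_fraction * len(hosts))))
    return {h: rng.uniform(0, r_attack if h in attackers else r_good)
            for h in hosts}

# e.g. rates = assign_traffic(hosts, attacker_fraction=0.2, r_good=1.0, r_attack=10.0)
```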
38. 20 Evenly Distributed Aggressive (10:1) Attackers
39. 40 Evenly Distributed Aggressive (5:1) Attackers
40. Evenly Distributed Meek Attackers
41. Deployment Extent
42. NS2 TCP Simulation Experiment
- Clients access the web server via HTTP 1.0 over TCP Reno
- Simulated network is a subset of the AT&T traceroute topology
  - 85 hosts, 20 of them attackers
- Web clients issue requests probabilistically, using empirical document-size and inter-request-time distributions (a request-generator sketch follows)
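A small sketch of such a probabilistic request generator; it is illustrative only, since the real experiment drives NS2 TCP Reno connections from empirical trace distributions.

```python
import random

def web_client(doc_sizes, inter_request_times, n_requests, seed=0):
    """Generate (start_time, size) request events by sampling from empirical
    document-size and inter-request-time samples."""
    rng = random.Random(seed)
    t, events = 0.0, []
    for _ in range(n_requests):
        t += rng.choice(inter_request_times)   # think time before the next request
        events.append((t, rng.choice(doc_sizes)))
    return events
```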
43. Web Server Protection
44. Web Server Traffic Control
45. System Implementation
- On a Linux router
  - loadable kernel module
  - CPU resource reservation
- Deployment platform
  - Pentium 4 / 2 GHz PC
  - multiple 10/100 Mb/s Ethernet interfaces
46. System Implementation (cont.)
- OPERA: An Open-Source Extensible Router Architecture
  - http://www.cse.cuhk.edu.hk/cslui/ANSRlab/software/opera/
  - a Linux-based package implementing a software-programmable router architecture, aimed at facilitating networking experiments for the research community; new extensions and services can be loaded dynamically into the programmable router, e.g., QoS support and traceback of DDoS attacks
- Dynamic module loading
- Resource reservation
- General extension framework
- Secured communication
47. Network Architecture
(Figure: Web code server, ISP network, and clients.)
48. Future Work
- Offered-load-aware control algorithm for computing the throttle rate
  - impact on convergence and stability
- Policy-based notion of fairness
  - heterogeneous network regions, by size, susceptibility to attacks, tariff payment
- Selective deployment issues
- Impact on real user applications
- Defenses for other forms of DDoS, such as reflector attacks and BGP cascading failures
49. Conclusions
- Extensible routers can help improve network health
- Presented a server-centric router throttle mechanism against DDoS flooding attacks
  - better protects good-user traffic from aggressive attacker traffic
  - keeps the server operational under an ongoing attack
  - has an efficient implementation
50. Existing Networks
(Figure: clients reach the server through ISP routers that do simple forwarding.)
51. Level-3 Deployment
(Figure: example topology with the level-3 deployment routers around the server.)
52. Router's Forwarding Paths
(Figure: router datapath with a packet classifier, input queues, a function dispatcher, a resource allocation manager, and output network queues.)
53. Level-3 Deployment
54. Example Level-k Max-min Fair Rates (L = 18, H = 22)
55. Routing Infrastructure
- Router software is critical to network health
  - patches for security bugs
  - new defenses against new attacks
- Scalable distribution of router software to many routing points
  - minimal disruption to existing services
  - little human intervention
- Exploit software-programmable router technology