20-755: The Internet. Lecture 12: Scalable services
Learn more at: http://www.cs.cmu.edu
1
20-755: The Internet. Lecture 12: Scalable services
  • David O'Hallaron
  • School of Computer Science and
  • Department of Electrical and Computer Engineering
  • Carnegie Mellon University
  • Institute for eCommerce, Summer 1999

2
Today's lecture
  • Speeding up servers (30 min)
  • Break (10 min)
  • Caching (50 min)

3
Scalable servers
  • Question: How do we provide services that scale
    well with the number of requests?
  • Goals for high-volume sites
  • Minimize request latency (response time) for our
    clients.
  • want to avoid the dreaded hourglass
  • Minimize the amount of traffic over our
    high-speed Internet connection.
  • Many ISPs charge monthly rates based on actual
    bandwidth usage.
  • Recall MCI T1 and T3 pricing from Lecture 6
    (programming the Internet).

4
Scalability approaches
  • Speed up the servers
  • Use multiple processes to handle requests
  • concurrent servers
  • pre-forking servers (not covered here)
  • Use multiple computers to process requests.
  • clustering (not covered here)
  • e.g., Microsoft cluster, HotBot cluster
  • distributed servers (not covered here)
  • use DNS to send requests to geographically
    distributed mirror sites.
  • Move the content closer to the clients.
  • Caching
  • Crucial concept (and big business)

5
Iterative servers
  • An iterative server processes one connection at a
    time.

simple iterative server:

    while (1) {
        connfd = accept(listenfd, ...);   /* wait for a connection  */
        process_request(connfd);          /* serve it to completion */
        close(connfd);
    }
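The accept/process/close loop can also be written out as a runnable program. The sketch below is an illustrative Python echo server, not the lecture's C code: "processing" a request is reduced to echoing one message, and two clients are served strictly one after the other.

```python
import socket
import threading

def iterative_server(listen_sock, n_requests):
    # One connection at a time: accept, process fully, close, repeat.
    for _ in range(n_requests):
        connfd, _ = listen_sock.accept()
        data = connfd.recv(1024)           # "process" = echo one message
        connfd.sendall(b"echo:" + data)
        connfd.close()                     # only now can the next client be served

# Listen on an ephemeral localhost port.
listen_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listen_sock.bind(("127.0.0.1", 0))
listen_sock.listen(5)
port = listen_sock.getsockname()[1]

# Run the server in a background thread and play two clients against it.
server = threading.Thread(target=iterative_server, args=(listen_sock, 2))
server.start()

replies = []
for msg in (b"A", b"B"):
    c = socket.create_connection(("127.0.0.1", port))
    c.sendall(msg)
    replies.append(c.recv(1024))
    c.close()
server.join()
listen_sock.close()
```

While the server is inside `recv`/`sendall` for client A, client B's connection merely sits in the listen backlog, which is exactly the limitation the next slides illustrate.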
6
Iterative servers
  • Step 1: Server accepts connection request from
    client A.

client B
connection request
server
client A
listen socket
7
Iterative servers
  • Step 2: Server processes request from client A
    (using A's connection socket).
  • Client B initiates connection request and waits
    for server to accept it.

connection request
client B
server
listen socket
client A
client A's connection socket
process request
8
Iterative servers
  • Step 3: Server finishes processing request from
    client A.
  • Accepts connection request from client B.

client B
connection request
server
client A
listen socket
9
Iterative servers
  • Step 4: Server processes request from client B
    (using B's connection socket).

client B
process request
server
listen socket
client A
client B's connection socket
10
Iterative servers
  • Step 5: Server finishes processing client B's
    request.
  • Server waits for connection request from next
    client.

client B
server
client A
listen socket
11
Iterative servers
  • Pros
  • Simple
  • Minimizes latency of short requests.
  • Cons
  • Higher latencies and lower throughput
    (requests/sec) for large requests
  • large response bodies that must be served off
    disk
  • long running CGI scripts that access disk files
    or databases.
  • no other requests can be served while other work
    is being done.

12
Concurrent servers
  • A concurrent server accepts connections from a
    parent process and creates children to process
    the requests.

concurrent server:

    while (1) {
        connfd = accept(listenfd, ...);
        pid = fork();
        if (pid == 0) {              /* child process        */
            process_request(connfd); /* serve this request   */
            exit(0);
        }
        close(connfd);               /* parent's copy of fd  */
    }
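A runnable sketch of the fork-per-request pattern, under the assumption that "processing" is just echoing one message. This is illustrative Python, Unix-only because of `os.fork`, not the lecture's C code.

```python
import os
import socket
import threading

def concurrent_server(listen_sock, n_requests):
    # Parent accepts connections; a forked child handles each one.
    for _ in range(n_requests):
        connfd, _ = listen_sock.accept()
        pid = os.fork()
        if pid == 0:                       # child process
            listen_sock.close()            # child never accepts connections
            data = connfd.recv(1024)       # "process" = echo one message
            connfd.sendall(b"echo:" + data)
            connfd.close()
            os._exit(0)
        connfd.close()                     # parent closes its copy, keeps accepting
    for _ in range(n_requests):
        os.wait()                          # reap children (no zombies)

# Listen on an ephemeral localhost port.
listen_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listen_sock.bind(("127.0.0.1", 0))
listen_sock.listen(5)
port = listen_sock.getsockname()[1]

# Two clients connect concurrently; the server forks a child per request.
results = {}
def client(msg):
    c = socket.create_connection(("127.0.0.1", port))
    c.sendall(msg)
    results[msg] = c.recv(1024)
    c.close()

threads = [threading.Thread(target=client, args=(m,)) for m in (b"A", b"B")]
for t in threads:
    t.start()
concurrent_server(listen_sock, 2)
for t in threads:
    t.join()
listen_sock.close()
```

Note the bookkeeping the slide's cons list warns about: the parent must close its copy of each connection socket and reap its children, or descriptors and zombie processes accumulate.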
13
Concurrent servers
  • Step 1: Server accepts connection request from
    client A.

client B
connection request
server
client A
listen socket
14
Concurrent servers
  • Step 2: Server creates child process to handle
    request.
  • Client B initiates connection request and waits
    for server to accept it.

client B
connection request
server
listen socket
client A
client A's connection socket
child
process request
15
Concurrent servers
  • Step 3: Server accepts connection request from
    client B and creates child process to handle
    request.

child B
process request
client B
server
listen socket
client A
client A's connection socket
client B's connection socket
child A
process request
16
Concurrent servers
  • Step 4: Server's children finish processing
    requests from clients A and B.
  • Server waits for next connection request.

client B
server
client A
listen socket
17
Concurrent servers
  • Pros
  • Can decrease latency for large requests
    (decreases time waiting for connection request to
    be accepted)
  • Can increase overall server throughput
    (requests/sec).
  • Cons
  • More complex
  • Potential for fork bombs
  • must limit number of active children
  • Variant Pre-forking servers
  • Create a fixed number of children to handle
    requests ahead of time
  • Approach used by Apache.
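The pre-forking variant can be sketched like this: a fixed pool of workers is forked ahead of time, and each blocks in `accept` on the shared listening socket, with the kernel handing each incoming connection to one waiting worker. This is illustrative Python (Unix-only via `os.fork`), not Apache's actual implementation.

```python
import os
import socket

def prefork_server(listen_sock, n_workers, requests_per_worker):
    # Fork a fixed pool of workers ahead of time.
    pids = []
    for _ in range(n_workers):
        pid = os.fork()
        if pid == 0:                          # worker process
            for _ in range(requests_per_worker):
                connfd, _ = listen_sock.accept()
                data = connfd.recv(1024)      # "process" = echo one message
                connfd.sendall(b"echo:" + data)
                connfd.close()
            os._exit(0)
        pids.append(pid)
    return pids

# Listen on an ephemeral localhost port, then start two workers.
listen_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listen_sock.bind(("127.0.0.1", 0))
listen_sock.listen(5)
port = listen_sock.getsockname()[1]

pids = prefork_server(listen_sock, n_workers=2, requests_per_worker=1)

replies = []
for msg in (b"A", b"B"):
    c = socket.create_connection(("127.0.0.1", port))
    c.sendall(msg)
    replies.append(c.recv(1024))
    c.close()
for pid in pids:
    os.waitpid(pid, 0)
listen_sock.close()
```

Because the pool size is fixed, this design also bounds the number of active children, addressing the fork-bomb concern above.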

18
Break time!
19
Today's lecture
  • Speeding up servers (30 min)
  • Break (10 min)
  • Caching (50 min)

20
Caching
  • A cache is a storage area (either in memory or on
    disk) that holds copies of frequently accessed
    data.
  • Typically smaller than primary storage area, but
    cheaper and faster to access.
  • Fundamental computer systems technique
  • Memory systems (register files, L1, L2, and L3
    caches)
  • File and database systems (OS I/O buffers)
  • Internet systems (Web caches)

21
Accessing objects from a cache
  • Initially, the remote storage holds objects (data
    items) and associated keys that identify the
    objects.
  • Program wants to fetch A, B, then A again

program
22
Accessing objects from a cache
  • Program fetches object A by passing key(A) to the
    cache.

key(A)
program
23
Accessing objects from a cache
  • Object A is not in cache, so cache retrieves a
    copy of A from primary storage and returns it to
    program.
  • Cache keeps a copy of A and its key in its
    storage area

nearby cache storage
key(A)
A, key(A)
program
A
A
24
Accessing objects from a cache
  • Program accesses object B.
  • Cache keeps a copy of B and its key in its
    storage area.

nearby cache storage
key(B)
A, key(A)   B, key(B)
program
B
B
25
Accessing objects from a cache
  • Program accesses object A.
  • Cache returns object directly without accessing
    remote storage

nearby cache storage
key(A)
A, key(A)   B, key(B)
program
A
26
Impact of caching
  • Reduces latency of cached objects
  • e.g., we can access object A from nearby storage
    rather than faraway storage.
  • Reduces load on remote storage area
  • Remote storage area never sees requests satisfied
    by cache.
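The fetch-A, fetch-B, fetch-A-again walkthrough above can be condensed into a few lines. All names here (`Cache`, `fetch`, `key(A)`) are illustrative, not any particular cache implementation.

```python
class Cache:
    """A small storage area holding copies of frequently accessed objects."""
    def __init__(self, fetch_from_remote):
        self.store = {}                    # key -> cached copy
        self.fetch = fetch_from_remote     # fallback to remote storage
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.store:              # hit: serve the nearby copy
            self.hits += 1
            return self.store[key]
        self.misses += 1                   # miss: go to remote storage,
        obj = self.fetch(key)              # then keep a copy for next time
        self.store[key] = obj
        return obj

# Remote storage holds the objects; the cache starts empty.
remote = {"key(A)": "A", "key(B)": "B"}
remote_reads = []

def fetch(key):
    remote_reads.append(key)               # count trips to remote storage
    return remote[key]

cache = Cache(fetch)
seq = [cache.get("key(A)"), cache.get("key(B)"), cache.get("key(A)")]
```

The second access to A never reaches remote storage, which is both the latency win and the load reduction described above.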

27
Web caching
  • Objects are web pages, keys are URLs
  • Browser caches
  • One client, multiple servers
  • Proxy caches
  • Multiple clients, multiple servers
  • Examples Squid, Harvest, Apache, every major
    vendor.
  • Based on proxy servers
  • Reverse proxy caches
  • Multiple clients, one server
  • Example Inktomi TrafficServer
  • Based on proxy servers
  • Also called inverse caches or http accelerators

28
Browser caches
  • One client - many servers
  • Caches objects that come from requests of a
    single client to many servers
  • Browser caches are located on the disk and in the
    memory of a local machine.

client machine
server
disk browser cache
browser
server
server
29
Proxy servers
  • A proxy server (or proxy) acts as an intermediary
    between clients and origin servers
  • Acts as a server to the client...
  • Acts as a client to the origin server...

origin server
request
forwarded request
proxy
client
response
forwarded response
30
Applications of proxy servers
  • Allow users on secure nets behind firewalls to
    access Internet services
  • Original motivating application (Luotonen and
    Altis, 1994)

remote HTTP server
HTTP
remote FTP server
FTP
proxy server on firewall machine
clients inside the firewall
HTTP
NNTP
remote news server
Secure subnet inside firewall
SMTP
remote mail server
31
A proxied HTTP transaction
complete URL
partial URL
GET http://server.com/index.html HTTP/1.0
GET /index.html HTTP/1.0
origin server
client
proxy
HTTP/1.0 200 OK
HTTP/1.0 200 OK
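The rewrite from a complete URL to a partial URL can be sketched as follows; `proxy_request_line` is a hypothetical helper for illustration, not part of any real proxy.

```python
from urllib.parse import urlsplit

def proxy_request_line(absolute_request_line):
    # Split "GET http://server.com/index.html HTTP/1.0" into its parts,
    # then rewrite the complete URL into the path the origin server expects.
    method, url, version = absolute_request_line.split()
    parts = urlsplit(url)
    path = parts.path or "/"
    if parts.query:
        path += "?" + parts.query
    host = parts.netloc                       # where to forward the request
    return host, f"{method} {path} {version}"

host, line = proxy_request_line("GET http://server.com/index.html HTTP/1.0")
```

The client speaks to the proxy using the complete URL (so the proxy knows the destination); the proxy then forwards the partial form to the origin server.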
32
Motivation for proxy servers
  • Lightweight client machines.
  • Only need to support HTTP
  • Local machines without DNS can still use the
    Internet
  • only need to know the IP address of the proxy.
  • Centralized logging of all HTTP requests.
  • Centralized filtering and access control of
    client requests.
  • Centralized authentication site.
  • Facilitates caching.

33
Web proxy caches
  • Multiple clients - multiple servers
  • Typically installed on the border between an
    organization's internal network and the Internet.
  • Motivation
  • decrease request latency for the clients on the
    organization's network.
  • decrease traffic on the organization's connection
    to the Internet.
  • The organization can be on the scale of a
    university department, company, ISP, or country.
  • Important for overseas sites because most content
    is in the US and connections between most countries
    and the US are slow.

34
Web proxy caches
  • The requested object is stored locally (along
    with any cache relevant response headers) in the
    proxy cache for later use.
  • Request can come from the same client or a
    different client

forwarded request
request
proxy server
origin server
client
response
proxy cache
35
Web proxy caches
  • If an up-to-date object is in the cache, then the
    object can be served locally from the proxy cache.

request
proxy server
origin server
client
response
proxy cache
36
Web proxy caches
  • How does a proxy know that its local copy is
    up-to-date?
  • An object is considered fresh (i.e., able to be
    sent to client without checking first with the
    origin server) if
  • Its origin server served it with an expiration
    controlling header and the current time precedes
    this expiration time.
  • Expires and Cache-Control response headers
  • The proxy cache has seen the object recently and
    it was modified relatively long ago.
  • Last-Modified response header.
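A sketch of such a freshness test. The header names are standard HTTP, but the function's shape, the `received_at` parameter, and the 10%-of-age Last-Modified heuristic are illustrative assumptions, not mandated by the HTTP specification.

```python
import email.utils
import time

def is_fresh(headers, received_at, now=None):
    """Decide whether a cached response may be served without revalidation.

    headers      maps response header names to values
    received_at  Unix time when the cache stored this copy
    """
    now = time.time() if now is None else now
    # 1. Explicit expiration via Cache-Control: max-age=N seconds.
    for directive in headers.get("Cache-Control", "").split(","):
        d = directive.strip()
        if d.startswith("max-age="):
            return (now - received_at) < int(d.split("=", 1)[1])
    # 2. Explicit expiration via an Expires date.
    if "Expires" in headers:
        expires = email.utils.parsedate_to_datetime(headers["Expires"]).timestamp()
        return now < expires
    # 3. Heuristic: seen recently, and modified relatively long ago.
    #    Here: fresh while its age is under 10% of the time since modification.
    if "Last-Modified" in headers:
        last_mod = email.utils.parsedate_to_datetime(headers["Last-Modified"]).timestamp()
        return (now - received_at) < 0.1 * (received_at - last_mod)
    return False
```

An object served with `Cache-Control: max-age=3600` an hour ago fails the check; one last modified a year ago but fetched yesterday passes the heuristic.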

37
Web proxy caches
  • Objects that are not known to be fresh must be
    validated by querying the origin server for the
    time the object was last modified on the origin
    server.
  • Last-Modified response header in HEAD method
  • Compare with Last-Modified header of cached copy
  • An ETag is recomputed each time the object is changed.
  • After validation, if the object is stale it must
    be fetched from the origin server.
  • Otherwise, it is served directly from the proxy
    cache.
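The validate-then-serve-or-refetch logic can be sketched as follows. `revalidate` and `origin_check` are hypothetical names: `origin_check(validators)` stands in for a conditional request to the origin server, returning either ("not-modified", None) or ("modified", new_copy).

```python
def revalidate(cached, origin_check):
    # Ask the origin whether the cached copy's validators still match.
    status, fresh_copy = origin_check({"etag": cached.get("etag"),
                                       "last_modified": cached.get("last_modified")})
    if status == "not-modified":
        return cached["body"]        # serve directly from the proxy cache
    cached.update(fresh_copy)        # stale: replace with the origin's copy
    return cached["body"]

# Simulated origin server state and conditional-request handler.
origin = {"etag": "v2", "last_modified": "t1", "body": "new"}

def origin_check(validators):
    if validators["etag"] == origin["etag"]:
        return ("not-modified", None)
    return ("modified", dict(origin))

cached_stale = {"etag": "v1", "last_modified": "t0", "body": "old"}
cached_fresh = {"etag": "v2", "last_modified": "t1", "body": "old-but-valid"}

body1 = revalidate(cached_stale, origin_check)   # stale: refetched from origin
body2 = revalidate(cached_fresh, origin_check)   # valid: served from the cache
```

Either way the client gets a correct response; the difference is whether the object's body crossed the network again.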

38
Reverse proxy caches
  • Many clients - one server
  • Reverse proxy caches are proxy caches that are
    located near high-volume servers.
  • Also called reverse proxies or httpd
    accelerators
  • Goal is to reduce server load.

Remote server site
client
large expensive high-latency database
reverse proxy cache
client
server
client
client
39
Case study: The Akamai FreeFlow cache
Source: akamai.com
40
Case study: The Akamai FreeFlow cache
The Akamai server is chosen dynamically to maximize some performance
metric based on existing network conditions. (droh)
Web pages on this server were previously "Akamaized" offline by the
FreeFlow Launcher tool. (droh)
41
The Akamai network (Aug 1999)
  • Number of Servers: 900
  • Number of Networks: 25
  • Number of Countries: 15
  • Total Capacity: 12 Gigabits/second
  • Average Load (at peak utilization): 500 Megabits/second
  • Average Network Utilization: 5%
  • Average Hits Per Day: ¼ Billion
Source: akamai.com
42
Example Akamaized page
    Dave O'Hallaron's Home Page
    <img src="http://a516.g.akamaitech.net/7/516/1/3b3a087c3d0ea3/www.cs.cmu.edu/droh/droh.quake.gif" align="left">
    David O'Hallaron
    Associate Professor, ...
  • Questions
  • Authentication of requests to Akamai servers?
  • Accurately monitoring a dynamic net?