1
Power Aware Computing
  • The concept of power aware computing is to save
    energy without losing performance.
  • It is the primary focus of designers of portable
    and battery-powered computing systems.
  • Better management of power translates into longer
    battery life or into smaller batteries, which in
    turn implies smaller and lighter devices.
  • The three areas of power-aware computing we will
    focus on are power-aware routing, power-aware
    cache management, and power-aware
    microarchitecture.

2
  • POWER-AWARE ROUTING
  • James R. Cooper

3
Power Aware Routing
  • In a typical network, the route of a packet is
    determined by calculating which path is fastest
    or has the fewest hops.
  • This may mean that some nodes in the network get
    far more usage than others.
  • If nodes have a limited power supply, such as
    portable computers, extreme usage could quickly
    drain the battery.
  • A temporary mobile network, such as an ad hoc
    network, would benefit from Power Aware Routing.
    Examples: soldiers on the battlefield or rescue
    workers after a disaster.

4
Power Aware Routing
  • In a network whose topology funnels most traffic
    through a single node (node 6 in the slide's
    diagram), that node will receive the most usage,
    causing its batteries to become quickly depleted.
  • This will cause node 6 to crash, disrupting the
    network.
  • Other nodes will also quickly crash as they try
    to absorb the traffic of node 6.

We will examine three power-aware routing
techniques. These techniques attempt to extend
the life of the individual nodes, as well as the
network as a whole.
5
PAMAS
  • PAMAS (Power-Aware Multiple Access protocol with
    Signaling)
  • The basic idea of PAMAS is for a node to power
    off when it is not needed.
  • A node powers off when:
  • It is overhearing a transmission and does not
    have a packet to transmit.
  • At least one neighbor is transmitting and one
    neighbor is receiving (so as not to interfere
    with the neighbor's reception).
  • All neighbors are transmitting and the node is
    not a recipient.
  • A fundamental problem is knowing how long to
    remain powered off. (The power-off conditions are
    sketched as a predicate below.)
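A minimal sketch of the power-off decision, in
Python. The boolean flags are hypothetical; a real
PAMAS node infers this neighbor state from
overheard control traffic on the signaling channel.

  def should_power_off(has_packet, is_recipient,
                       any_neighbor_tx, any_neighbor_rx,
                       all_neighbors_tx):
      """Return True if the node can safely power its radio off."""
      # 1. Overhearing a transmission with no packet of our own.
      if any_neighbor_tx and not has_packet:
          return True
      # 2. A neighbor is transmitting and another is receiving;
      #    sending now would interfere with that reception.
      if has_packet and any_neighbor_tx and any_neighbor_rx:
          return True
      # 3. All neighbors are transmitting and we are not a recipient.
      if all_neighbors_tx and not is_recipient:
          return True
      return False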

6
PAMAS
  • Before a transmission, a node sends an RTS
    (request to send) message to the recipient, which
    replies with a CTS (clear to send) message. The
    RTS and CTS contain the length of the message,
    giving surrounding nodes an idea of who is
    involved and how long the transaction may take.
  • If a node awakens during a transmission, it can
    query the transmitter for the remaining length of
    the message (sketched below).
  • As much as 70% of the power can be saved using
    this method.
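The announced message length is what makes
sleeping safe: a node can compute exactly how long
the channel will remain busy. A sketch in Python,
with an assumed fixed bitrate (the slides do not
specify one):

  BITRATE_BPS = 2_000_000  # assumed channel bitrate

  def sleep_seconds(announced_length_bits, bits_already_sent=0):
      """How long to stay powered off, given the message length
      announced in an overheard RTS/CTS (or learned by probing
      the transmitter)."""
      remaining = max(0, announced_length_bits - bits_already_sent)
      return remaining / BITRATE_BPS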

7
Power consumption metrics
  • When you can adjust the transmission power of
    nodes, hop count may be replaced by consumption
    metrics.
  • A node sends out a control message at a set
    power. Other nodes can determine the distance of
    the sending node based on the strength of the
    signal.
  • Messages are typically sent through a series of
    short hops until they reach their destination,
    which minimizes the energy expended by any single
    node.
  • Because transmission energy grows superlinearly
    with distance, this also finds the most
    power-efficient path overall: many short hops can
    cost less total energy than one long hop, as the
    sketch below illustrates.
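A toy calculation of why many short hops can beat
one long hop: per-hop transmission energy is
commonly modeled as distance raised to a path-loss
exponent between 2 and 4. The model and constants
here are illustrative assumptions, not
measurements.

  ALPHA = 2  # assumed path-loss exponent

  def hop_energy(distance_m):
      return distance_m ** ALPHA

  one_long_hop = hop_energy(100)        # 10000 energy units
  four_short_hops = 4 * hop_energy(25)  # 2500 energy units
  print(one_long_hop, four_short_hops)  # short hops use 4x less energy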

8
LEAR
  • LEAR (Local Energy-Aware Routing) achieves
    balanced energy consumption among all
    participating mobile nodes.
  • A node will transmit to the node that is closest
    and is power-rich.
  • The sending node will transmit a ROUTE_REQ
    message. A node will only respond if its power
    level is above a preset threshold (usually
    starting at 90% of the battery's initial power).
  • The first node to reply represents the closest
    power-rich node. LEAR is non-blocking, so it
    will select the first response and ignore all
    others.

9
LEAR
  • The ROUTE_REQ contains a sequence number, which
    will be incremented each time the message has to
    be retransmitted.
  • Retransmission will only occur if there is no
    response, meaning that no node on the route has
    power remaining above its current threshold.
  • When an intermediate node receives a ROUTE_REQ
    with an increased sequence number, it will lower
    its threshold (between 10% and 40%).
  • It will then be able to accept the message and
    forward it on toward its destination.

10
LEAR
  • An important consideration: when a node receives
    a ROUTE_REQ that it does not accept, it must
    transmit a DROP_ROUTE_REQ. This message lets
    other nodes in the path know that they too should
    lower their thresholds when they receive a
    ROUTE_REQ with the increased sequence number.
  • The DROP_ROUTE_REQ helps avoid a cascading
    effect of retransmissions at each new node in the
    path.
  • LEAR is able to find the most power-efficient
    path while also extending the life of the network
    as a whole. Using LEAR may extend the life of the
    network by as much as 35%. (A sketch of the
    node-side logic follows.)
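A sketch of that node-side logic, in Python. The
fixed 10% decrement and the message handling are
illustrative assumptions; the slides specify only
a 90% starting threshold and a 10-40% reduction
per retransmission.

  class LearNode:
      def __init__(self, initial_energy):
          self.energy = initial_energy
          self.threshold = 0.9 * initial_energy  # start at 90%
          self.last_seq = -1

      def on_route_req(self, seq):
          # A higher sequence number means an earlier attempt found
          # no power-rich node, so lower our threshold (assumed 10%).
          if seq > self.last_seq:
              if self.last_seq >= 0:
                  self.threshold *= 0.9
              self.last_seq = seq
          if self.energy >= self.threshold:
              return "FORWARD"        # accept and forward the message
          return "DROP_ROUTE_REQ"     # tell others to lower thresholds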

11
POWER-AWARE CACHE MANAGEMENT
Siraj Shaik
12
Overview
  • Save power without degrading performance (cache
    hit ratio).
  • But there will always be a tradeoff between
    power and performance.
  • The best we can do is to have an adaptive scheme
    that can dynamically optimize performance or
    power based on available resources and
    performance requirements. Prefetching makes this
    possible.
  • We will discuss the results of this approach
    through a simulation study.

13
The Simulation Model
14
PAR, β, δ
  • Cache data consistency: cache invalidation
    methods (IR = invalidation report, UIR = updated
    invalidation report, etc.).
  • Clients intelligently prefetch the data that are
    most likely to be used in the future.
  • PAR = prefetches / accesses (the prefetch-access
    ratio).
  • Entries whose PAR exceeds a threshold β are left
    as non-prefetch, e.g., data with a high update
    rate.
  • The client marks the δ highest-access-rate cache
    entries as prefetch (sketched below).
  • Cache hit ratio (CHR) ∝ performance.
  • Power ∝ number of prefetches.
  • Delay ∝ 1 / CHR.
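A sketch of how a client might combine β and δ
when marking entries, in Python. The data layout
and the exact rule (mark the δ busiest entries
whose PAR does not exceed β) are assumptions drawn
from the bullets above.

  def mark_prefetch(entries, beta, delta):
      """entries: list of dicts with 'prefetches' and 'accesses'
      counters. Returns the entries to mark as prefetch."""
      candidates = []
      for e in entries:
          par = e["prefetches"] / max(1, e["accesses"])
          if par <= beta:              # skip high-PAR (wasteful) data
              candidates.append(e)
      # Keep the delta entries with the highest access rates.
      candidates.sort(key=lambda e: e["accesses"], reverse=True)
      return candidates[:delta]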

15
PAR, β, δ
  • Examples: PAR = 10/100 = 0.1 means most
    prefetched data are actually accessed (high CHR
    for the power spent); PAR = 100/10 = 10 means
    most prefetches are wasted (high power for
    little CHR benefit).
  • Speedometer analogy: power is the gas consumed
    and CHR the mileage obtained, tuned by β (e.g.,
    β = 2 = 100/50) and δ (ranging from δ = 0 to
    δ = 200).
16
Client-Server Algorithms
  • The server algorithm constructs the IR and UIR,
    receives requests from clients, and broadcasts
    data.
  • The client algorithm receives the IR and UIR,
    issues queries, and prefetches (a client-side
    sketch follows).
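A high-level sketch of the client side, in Python.
The message shapes are hypothetical; see Cao [1]
for the actual IR/UIR algorithms.

  def on_invalidation_report(cache, updated_ids):
      # Drop cache entries the server reports as updated.
      for item_id in updated_ids:
          cache.pop(item_id, None)

  def on_broadcast(cache, broadcast_items, prefetch_marked):
      # Pull broadcast data into the local cache, but only for
      # entries that have been marked as prefetch.
      for item_id, value in broadcast_items.items():
          if item_id in prefetch_marked:
              cache[item_id] = value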

17
The effects of δ (Tu vs. number of prefetches)
  • The number of prefetches increases as Tu (the
    data update interval) decreases.
  • When Tu decreases, data are updated more
    frequently and more clients have cache misses. As
    a result, the server broadcasts more data during
    each IR and the clients prefetch more data into
    their local caches.
  • Since δ represents the number of cache entries
    marked as prefetch, the number of prefetches
    increases as δ increases.

18
The effects of δ (Tu vs. CHR)
  • CHR drops as δ decreases.
  • When Tu is high, there are not many data updates,
    and most of the queried data can be served
    locally.
  • The no-prefetch approach has a very low CHR when
    Tu = 10s and a high CHR when Tu = 1000s.
  • The number of prefetches is closely related to
    the CHR.

19
Conclusions
As δ changes, the number of prefetches changes,
resulting in a tradeoff between CHR (delay) and
prefetches (power). In a proactive/adaptive scheme
using the PAR concept, we can dynamically optimize
performance or power based on available resources
and performance requirements. The conclusion of
this simulation study is that we can keep the
advantage of prefetching with low power
consumption.
20
References
[1] G. Cao, "Proactive Power-Aware Cache
Management for Mobile Computing Systems," IEEE
Transactions on Computers, vol. 51, no. 6, pp.
608-621, June 2002. Available:
http://www.cse.psu.edu/gcao/paper/TC02.pdf
[2] G. Cao, "Adaptive Power-Aware Cache
Management for Mobile Computing Systems," The
Eleventh International World Wide Web Conference
(WWW 2002), May 7-11, 2002. Available:
http://www2002.org/CDROM/poster/88.pdf
21
Power Aware Microarchitecture
  • Dynamic Voltage Scaling
  • Dynamic Power Management
  • Gus Tsesmelis

22
Dynamic Voltage Scaling
  • The dynamic adjustment of processor clock
    frequency and processor voltage according to
    current and past system metrics.

23
Dynamic Voltage Scaling
  • Energy is wasted maintaining a high clock
    frequency while the processor is not fully
    utilized.
  • The system slows the processor while it is idle
    to conserve energy.
  • The system raises the clock frequency and supply
    voltage only for those moments when high
    throughput is desired (see the calculation
    below).
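The leverage comes from the fact that dynamic CMOS
power grows roughly as P ≈ C · V^2 · f, and a
lower clock frequency usually permits a lower
supply voltage. A toy calculation with made-up
constants:

  C = 1e-9  # effective switched capacitance (assumed)

  def dynamic_power(volts, hertz):
      return C * volts**2 * hertz

  full = dynamic_power(1.8, 600e6)    # full speed
  scaled = dynamic_power(1.2, 300e6)  # half the clock, lower voltage
  print(scaled / full)                # ~0.22: under a quarter the power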

24
Voltage Scheduler
  • Dictates clock frequency and supply voltage in
    response to computational load demands.
  • Analyzes the current and past state of the system
    in order to predict the future workload of the
    processor.

25
Voltage Scheduler
  • An interval-based voltage scheduler periodically
    analyzes system utilization at a global level.
  • Example: if the processor was more than 50%
    active during the preceding time interval,
    increase the processor speed and voltage for the
    next interval (sketched below).
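A sketch of that rule, in Python. The set of
operating points is an assumption; real hardware
exposes a small fixed set of frequency/voltage
pairs.

  FREQ_MHZ = [150, 300, 450, 600]  # assumed operating points

  def next_step(current, utilization):
      """Pick the operating point for the next interval from the
      utilization measured over the preceding one."""
      if utilization > 0.5 and current < len(FREQ_MHZ) - 1:
          return current + 1   # busy: raise frequency (and voltage)
      if utilization <= 0.5 and current > 0:
          return current - 1   # mostly idle: lower them
      return current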

26
Voltage Scheduler
  • Interval-based scheduling is easy to implement
    but may incorrectly predict future workloads, so
    it is not the optimal design.
  • Current research into thread-based voltage
    scheduling seeks to overcome these issues.

27
Dynamic Power Management
  • Selectively places system components in a
    low-power sleep state while not in use.
  • Accomplished through the use of deterministic
    algorithms and prediction schemes.

28
Dynamic Power Management
  • The system may be in one of several power states
    at any given moment:
  • Active
  • Idle
  • Standby
  • Off

29
Dynamic Power Management
  • A Power Manager dictates what state the system
    components should be in according to the
    algorithm's policies.
  • Policies are obtained using one of two models:
    the Renewal Theory model and the Time-Indexed
    Semi-Markov Decision Process (TISMDP) model.

30
Renewal Theory Model
  • Renewal theory describes counting processes for
    which the request interarrival times are
    independent and identically distributed with
    arbitrary distributions.
  • The complete cycle of transition from doze state
    through other states and then back to doze state
    can be viewed as a renewal period.

31
TISMDP
  • TISMDP Time-Indexed Semi-Markov Decision
    Process.
  • The TISMDP model is needed to handle
    non-exponential user request interarrival times
    while still keeping the history information.

32
TISMDP
33
Power Manager
  • At run-time, the power manager observes:
  • Request arrivals and service completion times
    (frame arrivals and decoding times).
  • The number of jobs in the queue (the number of
    frames in a buffer).
  • The time elapsed since last entry into idle state.

34
Power Manager
  • When in the active state, the power manager
    checks if the rate of incoming or decoding frames
    has changed, and then adjusts the CPU frequency
    and voltage accordingly.
  • Once the decoding is completed, the system enters
    an idle state.

35
Power Manager
  • Once in an idle state, the power manager tracks
    the time spent idle. Depending on the policy
    obtained using either the renewal theory or the
    TISMDP model, it then decides when to transition
    into one of the sleep states (a fixed-timeout
    sketch follows).
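A fixed-timeout sketch of that decision, in
Python. Real policies derived from renewal theory
or the TISMDP model are probabilistic and
workload-dependent; the timeouts below are
illustrative assumptions.

  SLEEP_AFTER_S = {"standby": 0.5, "off": 30.0}  # assumed timeouts

  def choose_state(idle_seconds, queued_jobs):
      if queued_jobs > 0:
          return "active"      # work arrived: wake up
      if idle_seconds >= SLEEP_AFTER_S["off"]:
          return "off"
      if idle_seconds >= SLEEP_AFTER_S["standby"]:
          return "standby"
      return "idle"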

36
Power Aware Microarchitecture
  • Dynamic power management used in conjunction
    with voltage scaling yields a range of
    performance and power-consumption points that
    can be traded off at run time.
  • These techniques are commonly implemented where
    the power supply is limited but high performance
    (or perceived high performance) is expected.