Title: Array Allocation Taking into Account SDRAM Characteristics
1. Array Allocation Taking into Account SDRAM Characteristics
- Hong-Kai Chang
- Youn-Long Lin
- Department of Computer Science
- National Tsing Hua University
- HsinChu, Taiwan, R.O.C.
2. Outline
- Introduction
- Related Work
- Motivation
- Problem Definition
- Proposed Algorithms
- Experimental Results
- Conclusions and Future Work
3. Introduction
- SDRAM's multi-bank architecture enables new scheduling optimizations
- We assign arrays to different SDRAM banks to increase the data access rate
- Performance gap between memory and processor
- Systems without cache
- Application-specific systems
- Embedded DRAM
- Optimize DRAM performance by exploiting its special characteristics
4. Related Work
- Previous research eliminates the memory bottleneck by
  - Using local memory (cache)
  - Prefetching data as early as possible
- Panda, Dutt, and Nicolau utilize page-mode access of EDO DRAM to improve scheduling
- Research on mapping arrays to physical memories for lower power, lower cost, and better performance
5. Motivation
- DRAM operations
- Row decode
- Column decode
- Precharge
- SDRAM characteristics
- Multiple banks
- Burst transfer
- Synchronous
[Figure: access timing of a traditional DRAM vs. a 2-bank SDRAM]
6. Address Mapping Table
- Host address: a16-a0; memory address: BA, A7-A0
- Page size for the host: 128 words (a6-a0)
- Page size for the DRAM: 256 words (A7-A0)
- If we exchange the mapping of a0 and a7... (illustrated below)
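
A small illustration of the bit exchange (my reading of the slide, not code from the paper): the default mapping sends host bit a7 to the bank-address bit BA, so only 128 consecutive host words (a6-a0) share a DRAM page; exchanging a0 and a7 makes consecutive host words alternate between the two banks. The choice A7 = a8 and the omitted row bits are assumptions, since the slide shows only BA and A7-A0.

  /* Default mapping (assumed): BA = a7, A7 = a8, A6-A0 = a6-a0.           */
  unsigned bank_default(unsigned host)   { return (host >> 7) & 1; }
  unsigned column_default(unsigned host) { return ((host >> 1) & 0x80)
                                                | (host & 0x7F); }

  /* After exchanging a0 and a7: BA = a0, A0 = a7, other bits unchanged,
     so successive host addresses alternate between bank 0 and bank 1.     */
  unsigned bank_swapped(unsigned host)   { return host & 1; }
  unsigned column_swapped(unsigned host) { return ((host >> 1) & 0x80)
                                                | (host & 0x7E)
                                                | ((host >> 7) & 1); }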
7. Motivational Example
[Figure: example access schedule. Legend: BA = bank activate (row decode), R/W = read/write (column decode), BP = precharge]
8. Motivational Example
[Figure: example access schedule. Legend as on slide 7]
9. Assumptions
- Harvard architecture: separate program and data memories
- Paging policy of the DRAM controller (a small cost-model sketch follows this list)
  - Does not perform a precharge after a read/write
  - If the next access references a different page, perform a precharge, followed by a bank activate, before the read/write
  - As many pages can be open at once as there are banks
- Resource constraints
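
A minimal cost model of the assumed paging policy (an illustration based on the bullets above, not the authors' controller; the latency constants are placeholders):

  /* One open page is remembered per bank; -1 means no page is open.       */
  #define NUM_BANKS 2
  #define T_RW      1     /* read/write (column decode)                    */
  #define T_ACT     1     /* bank activate (row decode)                    */
  #define T_PRE     1     /* precharge                                     */

  static int open_page[NUM_BANKS] = { -1, -1 };

  /* Cycles for one access under the assumed policy: no precharge after a
     read/write; precharge + activate only when a different page of the
     same bank is referenced.                                              */
  int access_cycles(int bank, int page)
  {
      int cycles;
      if (open_page[bank] == page)
          cycles = T_RW;                      /* page hit                   */
      else if (open_page[bank] == -1)
          cycles = T_ACT + T_RW;              /* bank idle: activate first  */
      else
          cycles = T_PRE + T_ACT + T_RW;      /* page miss: precharge first */
      open_page[bank] = page;                 /* the page stays open        */
      return cycles;
  }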
10. Problem Definition
- Input: a data-flow graph, the resource constraints, and the memory configuration
- Perform our bank allocation algorithm
- Schedule the operations with a static list-scheduling algorithm that observes the SDRAM timing constraints
- Output: a schedule of operations, a bank allocation table, and the total cycle count
11. Bank Allocation Algorithm
- Calculate node distances
- Calculate array distances
- Give arrays with shorter distances higher priority
- Allocate such arrays to different banks if possible (see the sketch after this list)
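
A sketch of one way to realize these steps; the helper names, the greedy pairing order, and the tie-breaking are my own, and the array-distance table is assumed to have been computed as described on the Node Distance and Array Distance slides:

  #include <limits.h>

  #define NUM_ARRAYS 7              /* a, b, c, d, e, f, u in the SOR example */
  #define NUM_BANKS  2

  extern int array_dist[NUM_ARRAYS][NUM_ARRAYS];  /* smaller = accessed closer together */
  int bank_of[NUM_ARRAYS];                        /* resulting bank allocation table    */

  /* Least-used bank other than 'avoid' (a simple tie-breaking heuristic).  */
  static int other_bank(int avoid)
  {
      int count[NUM_BANKS] = {0}, best = (avoid + 1) % NUM_BANKS;
      for (int i = 0; i < NUM_ARRAYS; i++)
          if (bank_of[i] >= 0) count[bank_of[i]]++;
      for (int b = 0; b < NUM_BANKS; b++)
          if (b != avoid && count[b] < count[best]) best = b;
      return best;
  }

  void allocate_banks(void)
  {
      int done[NUM_ARRAYS][NUM_ARRAYS] = {{0}};
      for (int i = 0; i < NUM_ARRAYS; i++) bank_of[i] = -1;

      for (;;) {
          /* Pick the unhandled array pair with the shortest distance, i.e.
             give arrays accessed close together the highest priority.      */
          int bi = -1, bj = -1, bd = INT_MAX;
          for (int i = 0; i < NUM_ARRAYS; i++)
              for (int j = i + 1; j < NUM_ARRAYS; j++)
                  if (!done[i][j] && array_dist[i][j] < bd) {
                      bd = array_dist[i][j]; bi = i; bj = j;
                  }
          if (bi < 0) break;
          done[bi][bj] = 1;

          /* Allocate the two arrays of the pair to different banks if possible. */
          if (bank_of[bi] < 0 && bank_of[bj] < 0) {
              bank_of[bi] = 0;
              bank_of[bj] = NUM_BANKS > 1 ? 1 : 0;
          } else if (bank_of[bi] < 0) {
              bank_of[bi] = other_bank(bank_of[bj]);
          } else if (bank_of[bj] < 0) {
              bank_of[bj] = other_bank(bank_of[bi]);
          }
      }
  }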
12. Example: SOR

  main()
  {
      float a[N][N], b[N][N], c[N][N], d[N][N], e[N][N], f[N][N];
      float omega, resid, u[N][N];
      int j, l;
      for (j = 2; j < N; j++)
          for (l = 1; l < N; l += 2) {
              resid = a[j][l]*u[j+1][l] + b[j][l]*u[j-1][l]
                    + c[j][l]*u[j][l+1] + d[j][l]*u[j][l-1]
                    + e[j][l]*u[j][l] - f[j][l];
              u[j][l] = u[j][l] - omega*resid/e[j][l];
          }
  }
13. Node Distance
- The distances between the current node and the nearest nodes that access array a, b, c, ..., recorded as a vector per node
- Ex.: (1,-,-,-,-,-,-,1,-) means the distances to the nodes that access a[j] and u[j-1] are both 1
- '-' means the distance is still unknown
- When propagated downstream, the distance increases (see the sketch below)
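
One possible way to propagate the distance vectors downstream through the data-flow graph; the exact recurrence is my reading of the slide rather than the authors' code, and UNKNOWN plays the role of '-':

  #define MAX_NODES  64
  #define NUM_REFS   9            /* one entry per array reference, as in the example */
  #define UNKNOWN    (-1)

  int node_dist[MAX_NODES][NUM_REFS];

  void init_node_dist(int num_nodes)
  {
      for (int v = 0; v < num_nodes; v++)
          for (int k = 0; k < NUM_REFS; k++)
              node_dist[v][k] = UNKNOWN;      /* every distance starts unknown     */
  }

  /* Merge predecessor p's distance vector into node v; call this for every
     edge p -> v while visiting the DFG in topological order.               */
  void propagate(int p, int v, const int accesses[MAX_NODES][NUM_REFS])
  {
      for (int k = 0; k < NUM_REFS; k++) {
          int d;
          if (accesses[p][k])                   /* predecessor itself accesses ref k */
              d = 1;
          else if (node_dist[p][k] != UNKNOWN)  /* distance grows going downstream   */
              d = node_dist[p][k] + 1;
          else
              continue;                         /* still unknown: nothing to pass on */
          if (node_dist[v][k] == UNKNOWN || d < node_dist[v][k])
              node_dist[v][k] = d;              /* keep the nearest occurrence       */
      }
  }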
14. Array Distance
- The distance between the nodes that access two arrays
- Calculated from the node distances of the corresponding arrays
- Take the minimum value
- Ex.: AD(a[j], u[j-1]) = min(2, 4) = 2 (see the sketch below)
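
Continuing the sketch above (MAX_NODES, NUM_REFS, and UNKNOWN are reused from it), the array distance between two references x and y can be taken as the minimum node distance to y over all nodes that access x, matching AD(a[j], u[j-1]) = min(2, 4) = 2; the function name and parameters are illustrative:

  /* Minimum distance to reference y, measured from every node accessing x. */
  int array_distance(int x, int y, int num_nodes,
                     const int accesses[MAX_NODES][NUM_REFS],
                     const int node_dist[MAX_NODES][NUM_REFS])
  {
      int best = UNKNOWN;
      for (int v = 0; v < num_nodes; v++) {
          if (!accesses[v][x]) continue;         /* only nodes that access x      */
          int d = node_dist[v][y];               /* their distance to reference y */
          if (d != UNKNOWN && (best == UNKNOWN || d < best))
              best = d;                          /* take the minimum value        */
      }
      return best;
  }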
15. Example: SOR
- Bank allocation: Bank 0 = {c, d, e, f}, Bank 1 = {a, b, u}
16. Experimental Characteristics
- We divided our benchmarks into two groups
  - The first group accesses multiple 1-D arrays: we apply our algorithm to the arrays
  - The second group accesses a single 2-D array: we apply our algorithm to the array rows
- Memory configurations
  - Multi-bank configuration: 2 banks / 4 banks
  - Multi-chip configuration: 2 chips / 4 chips
  - Multi-chip vs. multi-bank: multiple chips relieve bus contention
  - With or without page-mode access
17-19. (No transcript available)
20. Experimental Results
- From the average results we can see that
  - Scheduling for SDRAM with our bank allocation algorithm does improve performance
  - Utilizing page-mode access relieves the traffic on the address bus, so using multiple chips brings no obvious further improvement
21. Conclusions
- We presented a bank allocation algorithm, incorporated into our scheduler, that takes advantage of SDRAM
- The scheduling results improve greatly over the coarse schedule and beat Panda's work in some cases
- Our work is based on a common paging policy
- Several different memory configurations are evaluated
- The scheduling results are verified and meet Intel's PC SDRAM specification
22. Future Work
- Extending our research to Rambus DRAM
- Grouping arrays to exploit burst transfers
- Integration with other scheduling/allocation techniques