Title: Implementation of Shared Memory
1. Implementation of Shared Memory
- Considerations
  - Network traffic due to create/read/write
  - Latency of create/read/write
  - Synchronization
  - Network traffic v. latency
- Copy on Read (no local caching)
  - Local read is fast, no traffic
  - Remote read is slow, generates traffic (two-way)
  - Local write is fast, no traffic
  - Remote write is fast, generates traffic
- Copy on Write (local caching)
  - Local read is fast, no traffic
  - Remote read is fast, no traffic
  - Local write is fast, generates traffic
  - Remote write is fast, generates traffic
- Central Server and Single Copy are basically the same, except that a central server does not let you optimize the location of the master copy for the process that needs it most.
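To make the read/write paths concrete, here is a minimal sketch of the two policies. master_of(), send_msg(), and wait_for_post() are stand-ins for the transport layer, not functions from the slides; the "network" is faked with local arrays so the example runs in one process.

    #include <stdio.h>

    enum msg_type { MSG_POST, MSG_UPDATE };

    static int master_copy[16];   /* stand-in for the remote master copies   */
    static int local_copy[16];    /* stand-in for this node's cached copies  */

    static int  master_of(int addr)     { (void)addr; return 1; }  /* pretend every master is remote */
    static void send_msg(int dst, enum msg_type t, int addr, int val) {
        if (t == MSG_POST) master_copy[addr] = val;                /* one-way message */
        (void)dst;
    }
    static int  wait_for_post(int addr) { return master_copy[addr]; } /* blocking round trip */

    /* Copy-on-read (no local caching) */
    int cor_read(int addr) {
        send_msg(master_of(addr), MSG_UPDATE, addr, 0);  /* request the current value */
        return wait_for_post(addr);                      /* slow: two-way traffic     */
    }
    void cor_write(int addr, int val) {
        send_msg(master_of(addr), MSG_POST, addr, val);  /* fast: one-way traffic     */
    }

    /* Copy-on-write (local caching) */
    int cow_read(int addr) {
        return local_copy[addr];                         /* always fast: no traffic   */
    }
    void cow_write(int addr, int val) {
        local_copy[addr] = val;                          /* update the local cache    */
        send_msg(master_of(addr), MSG_POST, addr, val);  /* master re-posts to all subscribers */
    }

    int main(void) {
        cor_write(3, 42);
        printf("copy-on-read  read(3) = %d\n", cor_read(3));  /* 42, after a round trip  */
        cow_write(3, 7);
        printf("copy-on-write read(3) = %d\n", cow_read(3));  /* 7, from the local cache */
        return 0;
    }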
2. Choice of Policy
- Two tasks that both read(X) frequently and write it infrequently (write(X,1), write(X,2)): copy-on-write (local caching) generates far less traffic.
- Two tasks that both read(X) infrequently and write it frequently: copy-on-read (no local caching) generates less traffic.
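A rough message count shows why. The numbers and the counting model below are illustrative assumptions, not from the slides: R reads and W writes per task, with S subscribers to the variable.

    #include <stdio.h>

    int main(void) {
        int R = 1000, W = 10, S = 3;        /* frequent reads, infrequent writes */
        int copy_on_read  = 2 * R + W;      /* each read is a round trip; each write one message */
        int copy_on_write = W * (1 + S);    /* reads are local; each write goes to the publisher,
                                               which re-posts it to S subscribers */
        printf("copy-on-read:  %d messages\n", copy_on_read);   /* 2010 */
        printf("copy-on-write: %d messages\n", copy_on_write);  /* 40   */
        return 0;
    }

Flipping R and W flips the conclusion, which is the whole policy choice.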
3. Choice of Place to Store
- Case 1: both tasks read(X) frequently and write it infrequently (write(X,1), write(X,2)). Copy-on-write (local caching) generates far less traffic; put the master copy on the most frequent writer.
- Case 2: both tasks read(X) infrequently and write it frequently. Copy-on-read (no local caching) generates less traffic; put the master copy on the most frequent writer.
- What to do when reading and writing are both frequent?
4. Implementation
- Message Types
  - Publish: broadcast, tells everyone the location of the master copy
  - Subscribe: to the publisher, tells the publisher the location of the subscriber
  - Post: to the publisher, on a subscriber write
  - Post: to all subscribers, in response to receiving a post (CW)
  - Update: to the publisher, sent by a subscriber to request a post (CR). The subscriber blocks the app, waiting for the post.
- Messages are not functions
  - It is not the job of the transport layer to guarantee delivery; that is assumed. It is the job of the data link layer.
- What about the order of delivery of these messages? Is it important? Whose responsibility is it?
- Is it good to have the subscribers known, or should we just broadcast all posts (CW)? What is the trade-off here?
- How would you implement a test-and-set operation (CW, CR)?
- Could there be a race to publish?
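A minimal sketch of these message types as C data structures might look like the following; the exact fields and sizes are assumptions for illustration, since the slides only name the message types.

    #include <stdint.h>

    enum msg_type {
        MSG_PUBLISH,    /* broadcast: "node N holds the master copy of addr"               */
        MSG_SUBSCRIBE,  /* to publisher: "send me posts for addr"                          */
        MSG_POST,       /* subscriber->publisher on a write, or publisher->subscribers (CW) */
        MSG_UPDATE      /* subscriber->publisher: "please post the current value" (CR)      */
    };

    struct shm_msg {
        uint8_t  type;      /* one of enum msg_type             */
        uint8_t  src_node;  /* sender's node id                 */
        uint16_t addr;      /* shared-variable address          */
        int32_t  value;     /* payload: valid for POST messages */
    };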
5. Synchronization
- Write atomicity
- Order (which is important)
- Consistency
- Actual time of write
6. Synchronization
What to compare it to: two tasks running in a shared memory space.

Case 1 (no caching!):

    Task 1:
        for (i = 1; i < N; i++)
            x = i;
        if (x > N) error();

    Task 2:
        for (i = 1; i < N-1; i++)
            x = i;
        if (x > N) error();

How are they different?

Case 2:

    Task 1:
        for (i = 1; i < N; i++) {
            lock();
            x = i;
            unlock();
        }
        if (x > N) error();

    Task 2:
        for (i = 1; i < N-1; i++) {
            lock();
            x = i;
            unlock();
        }
        if (x > N) error();

What would we want the system to guarantee? At a minimum: no guarantees about the value of x at the end of the loop, but the tasks will eventually agree on the value.
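For concreteness, Case 2 in the true shared-memory setting can be written out as a runnable program. This is only a sketch: it assumes POSIX threads and uses a mutex for lock()/unlock(), neither of which the slides specify.

    #include <pthread.h>
    #include <stdio.h>

    #define N 1000

    static int x;                          /* the shared variable */
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    /* Task 1: writes 1..N-1 into x, one locked write per iteration. */
    static void *task1(void *arg) {
        for (int i = 1; i < N; i++) {
            pthread_mutex_lock(&m);        /* lock()   */
            x = i;
            pthread_mutex_unlock(&m);      /* unlock() */
        }
        if (x > N) printf("task1: error\n");
        return arg;
    }

    /* Task 2: same loop but stops one iteration earlier. */
    static void *task2(void *arg) {
        for (int i = 1; i < N - 1; i++) {
            pthread_mutex_lock(&m);
            x = i;
            pthread_mutex_unlock(&m);
        }
        if (x > N) printf("task2: error\n");
        return arg;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, task1, NULL);
        pthread_create(&t2, NULL, task2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("final x = %d\n", x);       /* N-1 or N-2; never > N */
        return 0;
    }

The final value of x is whichever write happened last (N-1 or N-2 here); the point is that both tasks observe the same single memory location.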
7. Compare to Copy-on-Read (No Cache)
Multiprocessor, no shared physical memory.

Case 2:

    Task 1:
        for (i = 1; i < N; i++) {
            lock();
            x = i;
            unlock();
        }
        if (x > N) error();

    Task 2:
        for (i = 1; i < N-1; i++) {
            lock();
            x = i;
            unlock();
        }
        if (x > N) error();

Case 1 (assume copy-on-read; read blocks):

    Task 1:
        for (i = 1; i < N; i++)
            write(X, i);
        if (read(X) > N) error();

    Task 2:
        for (i = 1; i < N-1; i++)
            write(X, i);
        if (read(X) > N) error();

Now, will they eventually agree on the value of X at the end? Is this different in any way from the true shared memory case?
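One way to think about Case 1: with copy-on-read there is only one copy of X, so the master applies posts in whatever order they arrive, and every read returns that single value. The tiny single-process simulation below makes that visible; the interleaving of arrivals is an arbitrary example, and post()/update() are local stand-ins for the real messages.

    #include <stdio.h>

    #define N 8

    static int master_X;                      /* the only copy of X             */

    static void post(int task, int value) {   /* a write arriving at the master */
        master_X = value;
        printf("post from task %d: X = %d\n", task, value);
    }

    static int update(void) {                 /* a blocking read round trip     */
        return master_X;
    }

    int main(void) {
        /* One possible arrival order of the two tasks' write messages. */
        for (int i = 1; i < N; i++) {
            post(1, i);                        /* task 1: write(X, i), i < N    */
            if (i < N - 1) post(2, i);         /* task 2: write(X, i), i < N-1  */
        }
        /* Both tasks now read X: they see the same master value. */
        printf("task 1 reads %d, task 2 reads %d\n", update(), update());
        return 0;
    }

Which value they agree on depends on the arrival order at the master, not on program order, but they do agree, as in the shared-memory case.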
8. Network Shared Memory w/ Caching

Case 2:

    Task 1:
        for (i = 1; i < N; i++) {
            lock();
            x = i;
            unlock();
        }
        if (x > N) error();

    Task 2:
        for (i = 1; i < N-1; i++) {
            lock();
            x = i;
            unlock();
        }
        if (x > N) error();

Case 1 (assume copy-on-write; read returns the last value received on the network):

    Task 1:
        for (i = 1; i < N; i++)
            write(X, i);
        if (read(X) > N) error();

    Task 2:
        for (i = 1; i < N-1; i++)
            write(X, i);
        if (read(X) > N) error();

Corruption is not a problem: the actual local assignment takes place in the OS and is guaranteed atomic. Local behavior is determined by the order in which write messages are received at each task. Will they eventually agree on the value of X? Is it sufficient for the transport layer just to send all write messages in order?
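To see why delivery order matters here, consider the subscriber side: each task's cached copy of X simply takes the value of the last post it received, so two subscribers that receive the same posts in different orders can end with different values. A small self-contained illustration; the two delivery orders are invented for the example.

    #include <stdio.h>

    /* Each subscriber applies posts in the order it receives them;
       a read just returns the cached value. */
    static void apply_posts(const char *name, const int *posts, int n) {
        int cached_X = 0;                      /* this subscriber's local copy */
        for (int i = 0; i < n; i++)
            cached_X = posts[i];               /* copy-on-write: keep the last received value */
        printf("%s ends with X = %d\n", name, cached_X);
    }

    int main(void) {
        /* The same two posts (X=5 from task 1, X=6 from task 2), delivered
           to the two subscribers in different orders. */
        int order_a[] = { 5, 6 };
        int order_b[] = { 6, 5 };
        apply_posts("subscriber A", order_a, 2);   /* ends with X = 6 */
        apply_posts("subscriber B", order_b, 2);   /* ends with X = 5 */
        return 0;
    }

If every subscriber received the posts in the same order the caches would converge, which suggests that per-sender ordering alone may not be enough when there are multiple writers.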
9. Other Ideas
- It is not the same as the shared-memory cache coherency problem; we can send messages to each other.
- It is not necessary to lock every write
- Single writer
- Order v. atomicity
- Can use a semaphore to protect critical sections
- Where does the error handling go? Do we need ACK/NACK at the transport layer?
- Leases (the publisher has to renew periodically; see the sketch after this list)
- Is broadcast worse than sending only to a list of subscribers?
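A minimal sketch of the lease idea: the publisher periodically re-broadcasts its publish message, and a subscriber stops trusting its cache if the lease is not renewed in time. The interval, the now() helper, and the function names are assumptions for illustration only.

    #include <stdio.h>
    #include <time.h>

    #define LEASE_SECONDS 2                         /* assumed renewal interval          */

    static time_t lease_expiry;                     /* when the current lease runs out   */

    static time_t now(void) { return time(NULL); }

    void on_publish_received(void) {                /* called for each MSG_PUBLISH       */
        lease_expiry = now() + LEASE_SECONDS;       /* renew the lease                   */
    }

    int master_is_live(void) {                      /* checked before trusting the cache */
        return now() < lease_expiry;
    }

    int main(void) {
        on_publish_received();
        printf("live after publish: %d\n", master_is_live());  /* 1 */
        /* If no publish arrives for LEASE_SECONDS, the lease lapses and the
           subscriber should re-subscribe or raise an error. */
        return 0;
    }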
10. Homework
- Questions for Friday
- Extend the protocol stack to support signal(var) and wait(var) system calls (a local sketch of the stated semantics follows below)
  - wait
    - If the value at the address is > 0, decrement it and return true
    - If the value at the address is <= 0, block (if the wait times out, return false; be careful)
  - Signal
    - Increment the variable if the signaler is legitimate
- Propose a scheme to ensure that the data link transmits messages in order with respect to each receiver
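As a starting point only, here is a local sketch of the wait/signal value semantics stated above. It deliberately ignores the two hard parts of the exercise, distributing the variable over the protocol stack and making the test-then-decrement atomic, and block_until_positive() is a placeholder, not a real blocking primitive.

    #include <stdbool.h>

    static volatile int sem_value;            /* the shared semaphore variable */

    /* Placeholder: a real wait would block until a post raises the value,
       or until a timeout expires. */
    static bool block_until_positive(void) { return false; }

    bool sem_wait(void) {
        if (sem_value > 0) {                  /* value available: take it        */
            sem_value--;
            return true;
        }
        return block_until_positive();        /* value <= 0: block, may time out */
    }

    void sem_signal(bool signaler_is_legitimate) {
        if (signaler_is_legitimate)           /* only a legitimate signaler may increment */
            sem_value++;
    }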
11. Implementation
- Application: responsible for the application semantics. What does the value of the shared variable mean?
- Transport: implements the shared memory interface by using the datalink system to send guaranteed messages between transport layers running on different processors. Might also implement FIFOs, semaphores, etc. (An interface sketch follows after this list.)
  - Exports to application
    - publish(addr), subscribe(addr), post(addr, var), update(addr, var)
  - Exports to datalink
    - transport_recv(message)
- DataLink: guarantees error-free delivery of messages from one transport to another (in order?) using the available physical layer. Can implement a wide variety of retransmit schemes.
  - Exports to transport layer
    - datalink_send(message)
  - Exports to physical layer
    - datalink_recv(packet)
- Physical: converts a packet into a set of frames for transmission over the bus. Frames are reconstructed and passed back to the datalink layer at the other end. Knows how to drive the physical bus.
  - Exports to datalink layer
    - physical_send(packet)
  - Exports to the hardware (bus)
    - ISRs to deal with events on the bus such as start, stop, and byte transmission
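Collected as a C header, the exported interfaces might look like the sketch below. Only the function names publish/subscribe/post/update, transport_recv, datalink_send, datalink_recv, and physical_send come from the slide; the types, the update out-parameter, and the ISR names are assumptions.

    #ifndef SHM_STACK_H
    #define SHM_STACK_H

    #include <stdint.h>

    typedef uint16_t shm_addr_t;                        /* assumed address type           */

    struct message { uint8_t type; shm_addr_t addr; int32_t value; };
    struct packet  { uint8_t len;  uint8_t bytes[32]; };

    /* Transport: exported to the application */
    void publish(shm_addr_t addr);
    void subscribe(shm_addr_t addr);
    void post(shm_addr_t addr, int32_t var);
    void update(shm_addr_t addr, int32_t *var);         /* blocking read: waits for a post (CR) */

    /* Transport: exported to the datalink (called for each received message) */
    void transport_recv(struct message *msg);

    /* DataLink: exported to the transport layer */
    void datalink_send(struct message *msg);

    /* DataLink: exported to the physical layer (called for each received packet) */
    void datalink_recv(struct packet *pkt);

    /* Physical: exported to the datalink layer */
    void physical_send(struct packet *pkt);

    /* Physical: ISRs hooked to bus events (names invented for the sketch) */
    void bus_start_isr(void);
    void bus_stop_isr(void);
    void bus_byte_isr(uint8_t byte);

    #endif /* SHM_STACK_H */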
12. Example: The Fuel Cell Controller