Transcript and Presenter's Notes

Title: What we will cover


1
What we will cover
  • Processes
  • Process Concept
  • Process Scheduling
  • Operations on Processes
  • Interprocess Communication
  • Communication in Client-Server Systems (Reading
    Materials)
  • Threads
  • Overview
  • Multithreading Models
  • Threading Issues

2
What is a process?
  • An operating system executes a variety of
    programs
  • Batch systems – jobs
  • Time-shared systems – user programs or tasks
  • Single-user systems (Microsoft Windows or Macintosh OS)
  • The user runs many programs
  • Word processor, web browser, email
  • Informally, a process is just one such program in
    execution, progressing in sequential fashion
  • Similar to any high-level language program's code
    (C/C++/Java, etc.) written by users
  • However, formally, a process is something more
    than just the program code (text section)!

3
Process in Memory
  • In addition to the text section
  • A process includes
  • program counter
  • contents of the processor's registers
  • stack
  • Contains temporary data
  • Method parameters
  • Return addresses
  • Local variables
  • data section
  • While a program is a passive entity, a process is
    an active entity

4
Process State
  • As a process executes, it goes from creation to
    termination, passing through various states
  • new – the process is being created
  • running – instructions are being executed
  • waiting – the process is waiting for some event
    to occur
  • ready – the process is waiting to be assigned to
    a processor
  • terminated – the process has finished execution

5
Diagram of Process State
6
Process Control Block (PCB)
  • A process carries a great deal of information
  • A system has many processes
  • How does the OS manage all this process information?
  • Each process is represented by a Process Control
    Block
  • a table full of information for each process (see
    the struct sketch below)
  • Process state
  • Program counter
  • CPU registers
  • CPU scheduling information
  • Memory-management information
  • I/O status information
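As a rough illustration only, a PCB can be pictured as a C structure;
the field names and sizes below are assumptions for teaching purposes,
not the layout of any real kernel.

/* Illustrative PCB sketch -- field names and sizes are assumptions,
   not taken from any real operating system. */
struct pcb {
    int            pid;              /* process identifier           */
    enum { NEW, READY, RUNNING, WAITING, TERMINATED }
                   state;            /* process state                */
    unsigned long  program_counter;  /* saved program counter        */
    unsigned long  registers[16];    /* saved CPU registers          */
    int            priority;         /* CPU-scheduling information   */
    void          *page_table;       /* memory-management information*/
    int            open_files[16];   /* I/O status information       */
    struct pcb    *next;             /* link for the queues on the
                                        following slides             */
};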

7
Process Control Block (PCB)
8
CPU Switch From Process to Process
9
Process Scheduling
  • In a multiprogramming environment, there will be
    many processes
  • many of them ready to run
  • Many of them waiting for some other events to
    occur
  • How does the OS manage all of them?
  • Queuing
  • Job queue – set of all processes in the system
  • Ready queue – set of all processes residing in
    main memory, ready and waiting to execute
  • Device queues – set of processes waiting for an
    I/O device
  • Processes migrate among these various queues

10
A Representation of Process Scheduling
11
OS queue structure (implemented with a linked list)
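A minimal sketch of one such queue: the ready queue kept as a singly
linked list of PCBs. The trimmed-down struct pcb here is an assumption
(see the fuller sketch under the PCB slide).

/* Ready queue as a singly linked list of PCBs (illustrative only). */
#include <stddef.h>

struct pcb {
    int         pid;
    struct pcb *next;     /* link used by the queue */
};

static struct pcb *ready_head = NULL, *ready_tail = NULL;

/* Add a PCB at the tail of the ready queue. */
void ready_enqueue(struct pcb *p) {
    p->next = NULL;
    if (ready_tail != NULL) ready_tail->next = p;
    else                    ready_head = p;
    ready_tail = p;
}

/* Remove and return the PCB at the head, or NULL if the queue is empty. */
struct pcb *ready_dequeue(void) {
    struct pcb *p = ready_head;
    if (p != NULL) {
        ready_head = p->next;
        if (ready_head == NULL) ready_tail = NULL;
    }
    return p;
}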
12
Schedulers
  • A process migrates among various queues
  • Often there are more processes than can be
    executed immediately
  • They are stored on mass-storage devices (typically, disk)
  • They must be brought into main memory for execution
  • The OS selects processes in some fashion
  • The selection is carried out by a scheduler
  • Two schedulers are in effect
  • Long-term scheduler (or job scheduler) – selects
    which processes should be brought into memory
  • Short-term scheduler (or CPU scheduler) –
    selects which process should be executed next and
    allocates the CPU

13
Schedulers (Cont)
  • Short-term scheduler is invoked very frequently
    (milliseconds) ⇒ must be fast
  • Long-term scheduler is invoked very infrequently
    (seconds, minutes) ⇒ may be slow
  • The long-term scheduler controls the degree of
    multiprogramming
  • Long-term scheduler has another big
    responsibility
  • Processes can be described as either
  • I/O-bound process – spends more time doing I/O
    than computation; many short CPU bursts
  • CPU-bound process – spends more time doing
    computation; few, very long CPU bursts
  • It must maintain a balance between the two types of processes

14
Addition of Medium Term Scheduling
15
Context Switch
  • All of the process scheduling described earlier
    comes with a trade-off
  • When CPU switches to another process, the system
    must save the state of the old process and load
    the saved state for the new process via a context
    switch
  • Time dependent on hardware support
  • Context-switch time is pure overhead; the system
    does no useful work while switching (see the
    user-level illustration below)
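As a user-level illustration only (real context switches happen inside
the kernel), the sketch below uses the POSIX ucontext API, assuming it
is available (it is on glibc/Linux), to save one execution context and
load another.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, other_ctx;
static char other_stack[64 * 1024];

static void other(void) {
    printf("running in the other context\n");
    swapcontext(&other_ctx, &main_ctx);   /* save this state, reload main's */
}

int main(void) {
    getcontext(&other_ctx);               /* initialise the second context */
    other_ctx.uc_stack.ss_sp   = other_stack;
    other_ctx.uc_stack.ss_size = sizeof other_stack;
    other_ctx.uc_link          = &main_ctx;
    makecontext(&other_ctx, other, 0);

    printf("switching away from main\n");
    swapcontext(&main_ctx, &other_ctx);   /* save main's state, load other's */
    printf("back in main -- the switch itself did no useful work\n");
    return 0;
}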

16
Interprocess Communication
  • Concurrent processes within a system may be
    independent or cooperating
  • A cooperating process can affect or be affected by
    other processes, including by sharing data
  • Reasons for cooperating processes
  • Information sharing – several users may be
    interested in a shared file
  • Computation speedup – break a task into subtasks
    and work on them in parallel
  • Convenience
  • Need InterProcess Communication (IPC)
  • Two models of IPC
  • Shared memory
  • Message passing

17
Communications Models
Shared-memory
Message-passing
18
Shared memory: the Producer-Consumer Problem
  • Paradigm for cooperating processes
  • producer process produces information that is
    consumed by a consumer process
  • IPC implemented by a shared buffer
  • unbounded buffer – places no practical limit on the
    size of the buffer
  • bounded buffer – assumes that there is a fixed
    buffer size
  • More practical
  • Let's design it!

19
Bounded-Buffer Shared-Memory Solution: design
  • Three steps in the design problem
  • Design the buffer
  • Design the producer process
  • Design the consumer process
  • 1. Shared buffer design (implemented as a circular
    array with two logical pointers, in and out)

#define BUFFER_SIZE 10

typedef struct {
    . . .
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
20
Bounded-Buffer: Producer and Consumer process design
  • 2. Producer design

while (true) {
    /* produce an item in nextProduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ;   /* do nothing -- no free buffers */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}

  • 3. Consumer design

while (true) {
    while (in == out)
        ;   // do nothing -- nothing to consume
    // remove an item from the buffer
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
}

21
Shared Memory design
  • The previous design is correct, but it can use only
    BUFFER_SIZE-1 elements!
  • Exercise for you: design a solution in which all
    BUFFER_SIZE items can be in the buffer at the same time
  • Part of Assignment 1

22
Interprocess Communication: Message Passing
  • Processes communicate with each other without
    resorting to shared memory
  • The IPC facility provides two operations (see the
    pipe sketch after this list)
  • send(message) – message size can be fixed or variable
  • receive(message)
  • If P and Q wish to communicate, they need to
  • establish a communication link between them
  • exchange messages via send/receive
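A minimal sketch of message passing between two processes, assuming a
POSIX pipe as the communication link (the slides do not prescribe a
particular mechanism).

/* Parent and child communicate through a pipe: the child "sends",
   the parent "receives". */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int link[2];
    char buf[64];

    pipe(link);                        /* establish the communication link */
    if (fork() == 0) {                 /* child: the sender P              */
        close(link[0]);
        const char *msg = "hello from P";
        write(link[1], msg, strlen(msg) + 1);   /* send(message)    */
        return 0;
    }
    close(link[1]);                    /* parent: the receiver Q           */
    read(link[0], buf, sizeof buf);    /* receive(message)                 */
    printf("Q received: %s\n", buf);
    return 0;
}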

23
Direct Communication
  • Processes must name each other explicitly
  • send(P, message) – send a message to process P
  • receive(Q, message) – receive a message from
    process Q
  • Properties of communication link
  • A link is associated with exactly one pair of
    communicating processes
  • Between each pair there exists exactly one link
  • Symmetric (both sender and receiver must name the
    other to communicate)
  • Asymmetric (the receiver is not required to name the
    sender)

24
Indirect Communication
  • Messages are directed to and received from mailboxes
    (also referred to as ports)
  • Each mailbox has a unique id
  • Processes can communicate only if they share a
    mailbox (see the mailbox sketch after this list)
  • Properties of communication link
  • Link established only if processes share a common
    mailbox
  • A link may be associated with many processes
  • Each pair of processes may share several
    communication links
  • Link may be unidirectional or bi-directional
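A minimal mailbox sketch assuming POSIX message queues; the mailbox
name "/demo_mailbox" and the sizes are illustrative. A single process
both sends and receives here just to keep the sketch short; on Linux,
compile with -lrt.

/* A named message queue acts as the shared mailbox. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    /* Any process that opens "/demo_mailbox" shares this mailbox. */
    mqd_t mbox = mq_open("/demo_mailbox", O_CREAT | O_RDWR, 0600, &attr);

    mq_send(mbox, "hello mailbox", strlen("hello mailbox") + 1, 0);

    char buf[64];
    mq_receive(mbox, buf, sizeof buf, NULL);
    printf("received: %s\n", buf);

    mq_close(mbox);
    mq_unlink("/demo_mailbox");        /* remove the mailbox */
    return 0;
}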

25
Communications in Client-Server Systems
  • Socket connection

26
Sockets
  • A socket is defined as an endpoint for
    communication
  • Concatenation of IP address and port
  • The socket 161.25.19.8:1625 refers to port 1625
    on host 161.25.19.8
  • Communication takes place between a pair of sockets
    (see the client sketch below)
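A minimal TCP client sketch that creates a socket and connects it to
the slide's example endpoint 161.25.19.8:1625 (the address is only
illustrative, so the connect will normally fail).

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* local endpoint */

    struct sockaddr_in server = { 0 };
    server.sin_family = AF_INET;
    server.sin_port   = htons(1625);            /* port 1625      */
    inet_pton(AF_INET, "161.25.19.8", &server.sin_addr);

    if (connect(fd, (struct sockaddr *)&server, sizeof server) == 0)
        printf("connected: communication is between a pair of sockets\n");
    else
        perror("connect");

    close(fd);
    return 0;
}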

27
Socket Communication
28
Threads
  • The process model discussed so far assumed that a
    process was a sequentially executed program with a
    single thread of control
  • The increased scale of computing
  • puts pressure on programmers; challenges include
  • Dividing activities
  • Balance
  • Data splitting
  • Data dependency
  • Testing and debugging
  • Think of a busy web server!

29
Single and Multithreaded Processes
30
Benefits
  • Responsiveness
  • Resource Sharing
  • Economy
  • Scalability

31
Multithreaded Server Architecture
32
Concurrent Execution on a Single-core System
33
Parallel Execution on a Multicore System
34
User and Kernel Threads
  • User threads – thread management done by a
    user-level threads library (see the Pthreads sketch
    after this list)
  • Kernel threads – supported by the kernel
  • Windows XP
  • Solaris
  • Linux
  • Mac OS X
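A minimal Pthreads sketch (an assumed library choice): one extra
thread is created and joined. On Linux each such thread is backed by a
kernel thread; compile with -pthread.

#include <pthread.h>
#include <stdio.h>

/* Function run by the new thread. */
static void *worker(void *arg) {
    printf("hello from thread %d\n", *(int *)arg);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int id = 1;

    pthread_create(&tid, NULL, worker, &id);   /* start the thread */
    pthread_join(tid, NULL);                   /* wait for it      */
    return 0;
}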

35
Multithreading Models
  • Many-to-One
  • One-to-One
  • Many-to-Many

36
Many-to-One
  • Many user-level threads mapped to single kernel
    thread
  • Examples
  • Solaris Green Threads
  • GNU Portable Threads

37
One-to-One
  • Each user-level thread maps to a kernel thread
  • Examples
  • Windows NT/XP/2000
  • Linux

38
Many-to-Many Model
  • Allows many user level threads to be mapped to
    many kernel threads
  • Allows the operating system to create a
    sufficient number of kernel threads
  • Solaris prior to version 9

39
Many-to-Many Model
40
Threading Issues
  • Thread cancellation of a target thread
  • Dynamic, unbounded usage of threads

41
Thread Cancellation
  • Terminating a thread before it has finished
  • Two general approaches (see the sketch after this list)
  • Asynchronous cancellation – terminates the target
    thread immediately
  • Problems?
  • Deferred cancellation – allows the target thread to
    periodically check whether it should be cancelled
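A sketch of deferred cancellation, assuming Pthreads: the worker
honours a pending cancel request only at explicit cancellation points
such as pthread_testcancel(). Compile with -pthread.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg) {
    (void)arg;
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, NULL);  /* the default */
    for (;;) {
        /* ... do a chunk of work ... */
        pthread_testcancel();   /* periodically check for cancellation */
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    sleep(1);                   /* let the worker run briefly */
    pthread_cancel(tid);        /* request cancellation       */
    pthread_join(tid, NULL);
    printf("worker cancelled\n");
    return 0;
}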

42
Dynamic usage of threads
  • Create a thread as and when needed
  • Disadvantages
  • The time it takes to create each thread
  • Moreover, the thread is discarded once it has
    completed its work – no reuse
  • There is no bound on the total number of threads
    created in the system
  • which may result in severe resource scarcity

43
Solution: Thread Pools
  • Create a number of threads in a pool, where they
    await work (see the sketch below)
  • Advantages
  • Usually faster to service a request with an
    existing thread than to create a new thread
  • Allows the number of threads in the
    application(s) to be bound by the size of the
    pool
  • Almost all modern operating systems provide kernel
    support for threads: Windows XP, Mac OS X, Linux
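A compact thread-pool sketch assuming Pthreads: POOL_SIZE workers wait
on a condition variable and pull submitted tasks from a small circular
queue (all names and sizes are illustrative). Compile with -pthread.

#include <pthread.h>
#include <stdio.h>

#define POOL_SIZE 4
#define QUEUE_CAP 16

typedef void (*task_fn)(int arg);

static struct { task_fn fn; int arg; } queue[QUEUE_CAP];
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

/* Each pool thread loops forever, waiting for work and running it. */
static void *worker(void *unused) {
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &lock);   /* await work */
        task_fn fn = queue[head].fn;
        int arg    = queue[head].arg;
        head = (head + 1) % QUEUE_CAP;
        count--;
        pthread_mutex_unlock(&lock);
        fn(arg);                                    /* run the task */
    }
    return NULL;
}

/* Hand a request to an existing thread instead of creating a new one. */
static void submit(task_fn fn, int arg) {
    pthread_mutex_lock(&lock);
    if (count < QUEUE_CAP) {            /* drop if full (sketch only) */
        queue[tail].fn  = fn;
        queue[tail].arg = arg;
        tail = (tail + 1) % QUEUE_CAP;
        count++;
        pthread_cond_signal(&not_empty);
    }
    pthread_mutex_unlock(&lock);
}

static void print_task(int arg) { printf("request %d handled\n", arg); }

int main(void) {
    pthread_t pool[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_create(&pool[i], NULL, worker, NULL);
    for (int i = 0; i < 8; i++)
        submit(print_task, i);
    /* keep the pool alive so workers can drain the queue (stop with Ctrl-C) */
    pthread_exit(NULL);
}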