Title: Prepared by Kevin C. Dittman for
1Introduction
- The chapter will address the following questions
- What is an information systems architecture in
terms of DATA, PROCESSES, INTERFACES, and
NETWORKS, the building blocks of all information
systems? - What are both centralized and distributed
computing alternatives for information system
design, including various client/server and
Internet/intranet options? - What are the database and data distribution
alternatives for information system design? - What are the make versus buy alternatives and
variations for information system design? - What are the user and system interface
alternatives for information system design?
2Introduction
- The chapter will address the following questions
- What are the various networking topologies and
their importance in information system design? - What are the methods for general application
architecture and design? - What are the differences between logical and
physical data flow diagrams, and how are
physical data flow diagrams used to model
application architecture and guide process
design? - How do you draw physical data flow diagrams for a
system/application?
3General System Design
- During general systems design the basic technical
decisions are made. These decisions include - Will the system use centralized or distributed computing?
- Will the system's data stores be centralized or
distributed? If distributed, how so? What data
storage technology(s) will be used? - Will software be purchased, built in-house, or
both? For programs to be written, what
technology(s) will be used? - How will users interface with the system? How
will data be input? How will outputs be
generated? - How will the system interface to other, existing
systems?
4General System Design
- The decisions made during general systems design
constitute the application architecture of the
system. - An application architecture defines the
technologies to be used by (and to build) one,
more, or all information systems in terms of its
data, process, interface, and network components.
It serves as a framework for general design.
5(No Transcript)
6Information Technology Architecture
- Network Architectures for Client/Server Computing
- What is client/server computing?
- A client is a single-user computer that provides
(1) user interface services, appropriate database
and processing services and (2) connectivity
services to servers (and possibly other clients). - A server is a multiple-user computer that
provides (1) shared database, processing, and
interface services and (2) connectivity to
clients and other servers. - In client/server computing, an information
system's database, software, and interfaces are
distributed across a network of clients and
servers which communicate and cooperate to
achieve system objectives. Despite the
distribution of computing resources, each system
user perceives that a single computer (their own
client PC) is doing all the work.
7Information Technology Architecture
- Network Architectures for Client/Server Computing
- Client/server computing is an alternative to
traditional centralized computing. - In centralized computing, a multi-user computer
(usually a mainframe or minicomputer) hosts all
of the information system components including
(1) the data storage (files and databases), (2)
the business logic (software and programs), (3)
the user interfaces (input and output), and (4)
any system interfaces (networking to other
computers and systems). The user may interact
with this host computer via a terminal (or,
today, a PC emulating a terminal), but all of the
work is actually done on the host computer.
8(No Transcript)
9Information Technology Architecture
- Network Architectures for Client/Server Computing
- Centralized Computing
- Centralized process architectures were once
dominant because the cost of placing computers
closer to the end-user was prohibitive. - Many (if not most) legacy applications remain
centralized on large mainframe computers (such as
IBM's S/370 and 3090 families of computers) or
smaller minicomputers (such as IBM's AS/400).
10Information Technology Architecture
- Network Architectures for Client/Server Computing
- Distributed Presentation
- This alternative builds upon and enhances
centralized computing applications. - The old character user interfaces are stripped
from the centralized applications and regenerated
as graphical user interfaces that will run on the
PC. - The user interface (or presentation) is
distributed off the server and onto the client. - All other elements of the centralized application
remain on the server, but the system users get a
friendlier graphical user interface to the system.
11Information Technology Architecture
- Network Architectures for Client/Server Computing
- Distributed Presentation
- Distributed presentation computing advantages
- It can be implemented relatively quickly since
most aspects of the legacy application remain
unchanged. - Users get a friendly and familiar interface to
existing systems. - The useful lifetime of legacy applications can be
extended until such a time as resources warrant a
wholesale redevelopment of the application. - Distributed presentation computing disadvantages
- The application's functionality cannot be
significantly improved, and the solution does not
maximize the potential of the client's desktop
computer, since it deals only with the user interface.
12Information Technology Architecture
- Network Architectures for Client/Server Computing
- Distributed Data
- Sometimes called two-tiered client/server.
- This architecture places the information system's
stored data on a server, and the business logic
and user interfaces on the clients. - A local or wide area network usually connects the
clients to the server. - A local area network (or LAN) is a set of client
computers (usually PCs) connected to one or more
server computers (usually microprocessor-based,
but could also include mainframes or
minicomputers) through cable over relatively
short distances. - A wide area network (or WAN) is an interconnected
set of LANs, or the connection of PCs over a
longer distance.
13Information Technology Architecture
- Network Architectures for Client/Server Computing
- Distributed Data
- The database server is fundamental to this
architecture, and its technology is different
from that of a file server. - File servers store the database, but the client
computers must execute all database instructions.
This means that entire databases and tables may
have to be transported to and from the client
across the network. - Database servers also store the database, but the
database commands are also executed on those
servers. The clients merely send their database
commands to the server. The server only returns
the result of the database command processing,
not entire databases or tables. Thus, database
servers generate much less network traffic.
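To make the difference concrete, here is a hedged sketch in Java using JDBC (the server URL, credentials, table, and column names are hypothetical and not from the chapter): the SQL command is executed by the database server, and only the matching rows travel back to the client.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DatabaseServerClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string; the database server executes the SQL.
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://dbserver.example.com/orders", "app", "secret");
             PreparedStatement ps = con.prepareStatement(
                "SELECT order_id, order_total FROM orders WHERE customer_id = ?")) {
            ps.setInt(1, 1234);
            // Only the result rows travel back over the network, not the table.
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("order_id") + " " + rs.getDouble("order_total"));
                }
            }
        }
    }
}
```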
14Information Technology Architecture
- Network Architectures for Client/Server Computing
- Distributed Data
- The clients in the distributed database solution
typically run the business logic of the
information system application. - Distributed data computing advantages
- Separates data and business logic to (1) isolate
each from changes to the other, (2) make the data
more available to users, and (3) retain the data
integrity of centralized computing through
centrally managed servers. - Distributed data computing disadvantages
- The application logic must be maintained on all
of the clients.
15Information Technology Architecture
- Network Architectures for Client/Server Computing
- Distributed Data and Logic
- Referred to as three-tiered or n-tiered
client/server computing. - This approach distributes databases and business
logic to separate servers. - Uses the same database server(s) as in the
two-tiered approach. - Uses an application server.
- The application server provides a transaction
monitor to manage transactions. - Some or all of the business logic of the
application can be moved from the client to the
application server. - Only the user interface and some relatively
stable or personal business logic need be
executed on the clients.
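A minimal sketch of the middle tier, assuming an HTTP-style application server purely for illustration (the chapter's examples rely on transaction monitors instead; the endpoint and response here are made up): clients call the application server, which holds the business logic and would in turn send database commands on to a separate database server.

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;

// Sketch of the middle tier in a three-tiered design: the application server
// holds business logic; clients call it over the network, and it would in
// turn send SQL to a separate database server (omitted here).
public class AppServerSketch {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/orders", exchange -> {
            byte[] body = "order accepted".getBytes();  // result of the business logic
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
    }
}
```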
16Information Technology Architecture
- Network Architectures for Client/Server Computing
- Distributed Data and Logic
- Distributed data and logic computing
disadvantages - Very complex to design and develop.
- The most difficult aspect of three-tier
client/server application design is partitioning. - Partitioning is the act of determining how to
best distribute or duplicate application
components (data, process, and interfaces) across
the network.
17Information Technology Architecture
- Network Architectures for Client/Server Computing
- The Internet and Intranets
- The Internet is an (but not necessarily the)
information superhighway that permits computers
of all types and sizes, all over the world to
exchange data and information using standard
languages and protocols. - An intranet is a secure network, usually
corporate, that uses Internet technology to
integrate desktop, workgroup, and enterprise
computing into a single cohesive framework. - The intranet provides management and users with a
common interface to applications and information.
18Information Technology Architecture
- Network Architectures for Client/Server Computing
- The Internet and Intranets
- Java is a cross-platform programming language
designed specifically to exploit the Internet
standards. - Java applets (modular software components) are
stored on an Internet or intranet server and
downloaded to the client when the user accesses the
application. - Java applets can execute on any client computing
platform. - A network computer (or NC) is designed to only
run Internet-based applications (such as web
browsers and Java applets). - The NC (also called a thin client) is simpler
and much cheaper than a personal computer
(increasingly called a fat client).
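For illustration only, a minimal Java applet of that era might look like the sketch below (the class name and message are hypothetical); the compiled applet would sit on the Internet or intranet server and be downloaded and run by the browser on any client platform. (The applet API is obsolete today and is shown only to mirror the slide.)

```java
import java.applet.Applet;
import java.awt.Graphics;

// A minimal applet sketch: the compiled class file would sit on the
// Internet/intranet server and be downloaded and run by the client's browser.
public class HelloApplet extends Applet {
    @Override
    public void paint(Graphics g) {
        g.drawString("Downloaded from the server, running on the client", 20, 20);
    }
}
```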
19Information Technology Architecture
- Network Architectures for Client/Server Computing
- The Role of Network Technologies
- A well-designed network provides connectivity
and interoperability. - Connectivity defines how computers are connected
to talk to one another. - Interoperability is an ideal state in which
connected computers cooperate with one another in
a manner that is transparent to their users (the
clients). - Network topology describes how a network provides
connectivity between the computers on that
network.
20(No Transcript)
21Information Technology Architecture
- Network Architectures for Client/Server Computing
- The Role of Network Technologies
- The Bus network topology
- A direct point-to-point link between any two
computer systems. - The simplest network topology.
- The network can contain mainframes, minicomputers
(or mid-range computers), personal computers, and
dumb and intelligent terminals. - To completely connect all points between n
computers, you would need n(n-1)/2 direct
paths. - Only one computer can send data through the bus
at any given time.
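As a quick check of that formula, fully interconnecting n = 5 computers would require:

```latex
\frac{n(n-1)}{2} = \frac{5 \times 4}{2} = 10 \text{ direct paths}
```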
22Information Technology Architecture
- Network Architectures for Client/Server Computing
- The Role of Network Technologies
- The Ring network topology
- Connects multiple computers and some peripherals
into a ring-like structure. - Each computer can transmit messages,
instructions, and data (called packets) to only
one other computer (or node on the network). - Every transmission includes an address.
- When a computer receives a packet, it checks the
address, and if the packet's address is different
from the computer's address, it passes it on to
the next computer or node. - Ring networks generally transmit packets in one
direction; therefore, many computers can transmit
at the same time to increase network throughput.
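The forwarding rule described above can be sketched as a toy simulation (the node addresses and packet are made up; real ring networks add framing, tokens, and error handling):

```java
// Toy sketch of ring forwarding: a node keeps a packet addressed to it
// and passes anything else along to the next node in the ring.
public class RingDemo {
    record Packet(int destination, String payload) {}

    public static void main(String[] args) {
        int[] ring = {1, 2, 3, 4};          // node addresses, in ring order
        Packet p = new Packet(3, "hello");
        for (int node : ring) {             // one trip around the ring
            if (node == p.destination()) {
                System.out.println("Node " + node + " accepts: " + p.payload());
                break;
            }
            System.out.println("Node " + node + " forwards the packet");
        }
    }
}
```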
23Information Technology Architecture
- Network Architectures for Client/Server Computing
- The Role of Network Technologies
- The Star network topology
- Links multiple computer systems through a central
computer. - The central computer does not have to be a
mainframe or minicomputer. - Central computer could be an application server
that manages the transmission of data and
messages between the other clients and servers
(as in the n-tier model).
24Information Technology Architecture
- Network Architectures for Client/Server Computing
- The Role of Network Technologies
- The Hierarchical network topology
- Can be thought of as a multiple star network,
where the communications processors are arranged
in a hierarchy. - The top computer system (usually a mainframe)
controls the entire network. - All network topologies operate according to
established network protocols that permit
different types of computers to communicate and
interoperate.
25Information Technology Architecture
- Data Architectures for Distributed Relational
Databases - The underlying technology of client/server
computing has made it possible to distribute data
without loss of centralized control. - This control is being accomplished through
distributed relational databases. - A relational database stores data in a tabular
form. Each file is implemented as a table. Each
field is a column in the table. Each record in
the file is a row in the table. Related records
between two tables are implemented by
intentionally duplicating columns in the two
tables. - A distributed relational database distributes or
duplicates tables to multiple database servers
(and in rare cases, clients).
26Information Technology Architecture
- Data Architectures for Distributed Relational
Databases - The software required to implement distributed
relational databases is called a distributed
relational database management system. - A distributed relational database management
system (or distributed RDBMS) is a software
program that controls access to and maintenance
of the stored data. It also provides for backup,
recovery and security. It is sometimes called a
client/server database management system.
27Information Technology Architecture
- Data Architectures for Distributed Relational
Databases - What sets a distributed RDBMS apart from a PC
RDBMS is the database engine. - The database engine is that part of the DBMS that
executes database commands to create, read,
update, and delete records (rows) in the tables. - In a PC RDBMS, the database engine that processes
all database commands must execute on the client
PC, even if the data is actually stored on the
server. - In a distributed RDBMS, the database engine that
processes all database commands executes on the
database server.
28Information Technology Architecture
- Data Architectures for Distributed Relational
Databases - True data distribution partitions data to one or
more database servers. - Entire tables can be allocated to different
servers, or subsets of rows in a table can be
allocated to different servers. - An RDBMS controls access to and manages each
server. - Data replication duplicates data on one or more
database servers. - Entire tables can be duplicated on different
servers, or subsets of rows in a table can be
duplicated to different servers. - The RDBMS not only controls access to and
management of each server database, but also
ensures that updates on one server are applied to
any server where the data is duplicated.
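A hedged sketch of the partitioning idea (the JDBC URLs, regions, and table are hypothetical): rows are routed to the database server that owns their partition, whereas a replication scheme would instead apply the same update to every server that holds a copy of the data.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Map;

// Sketch of horizontal data partitioning: rows of a MEMBERS table are
// allocated to different database servers according to a region key.
public class PartitionRouter {
    private static final Map<String, String> SERVER_BY_REGION = Map.of(
            "EAST", "jdbc:postgresql://db-east.example.com/members",
            "WEST", "jdbc:postgresql://db-west.example.com/members");

    // Returns a connection to the server that owns this region's partition.
    public static Connection connectionFor(String region) throws SQLException {
        return DriverManager.getConnection(SERVER_BY_REGION.get(region), "app", "secret");
    }
}
```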
29Information Technology Architecture
- Interface Architectures - Inputs, Outputs,
Middleware - Batch Input/Output
- In batch processing, transactions are accumulated
into batches for periodic processing. - The batch inputs are processed against master
files or databases. - Transaction files or databases may also be
created or updated by the transactions. - Most outputs tend to be generated to paper or
microfiche on a scheduled basis.
30Information Technology Architecture
- Interface Architectures - Inputs, Outputs,
Middleware - On-line Processing
- The majority of systems have slowly evolved from
batch processing to on-line processing. - On-line systems provide for a conversational
dialogue between user and computer. - Business transactions and inquiries are often
best processed when they occur. - Errors are identified and corrected more quickly.
- Transactions tend to be processed earlier since
on-line systems eliminate the need for batch data
file preparation. - On-line methods permit greater human interaction
in decision making, even if the data arrives in
natural batches.
31Information Technology Architecture
- Interface Architectures - Inputs, Outputs,
Middleware - Remote Batch
- Remote batch combines the best aspects of batch
and on-line I/O. - Distributed on-line computers handle data input
and editing. - Edited transactions are collected into a batch
file for later transmission to host computers
that process the file as a batch. - Results are usually transmitted as a batch back
to the original computers.
32Information Technology Architecture
- Interface Architectures - Inputs, Outputs,
Middleware - Keyless Data Entry
- Keying errors have always been a major source of
errors in computer inputs (and inquiries). - In batch systems, keying errors can be eliminated
through optical character reading (OCR) and
optical mark reading (OMR) technology. - The real advances in keyless data entry are
coming for on-line systems in the form of
auto-identification systems. - Bar coding systems (similar to universal product
code systems that are commonplace in the grocery
and retail industries) are widely available for
many modern applications.
33Information Technology Architecture
- Interface Architectures - Inputs, Outputs,
Middleware - Pen Input
- Some businesses use this technology for remote
data collection. - For example, UPS.
- A promising technology is emerging in the form of
handheld PCs (HPCs). - Similar to personal organizers and personal digital
assistants, these HPCs offer greater
compatibility with desktop and laptop PCs. - Based on Microsoft's Windows CE operating system,
they can be programmed to become disconnected
clients in a client/server application.
34Information Technology Architecture
- Interface Architectures - Inputs, Outputs,
Middleware - Graphical User Interfaces
- GUI technology has become the user interface of
choice for client/server applications. - GUIs do not automatically make an application
better. - Poorly designed GUIs can negate the alleged
advantages of consistent user interfaces.
35Information Technology Architecture
- Interface Architectures - Inputs, Outputs,
Middleware - Graphical User Interfaces
- Most users interface with the Internet via a
client software tool called a browser. - The browser paradigm is based on hypertext and
hyperlinks. - Hypertext consists of keywords that are clearly
highlighted as links to a new page of
information. - Hyperlinks are links from graphics, buttons, and
areas that link to a different page of
information. - These links make it easy to navigate from
page-to-page and application-to-application.
36Information Technology Architecture
- Interface Architectures - Inputs, Outputs,
Middleware - Electronic Messaging and Work Group Technology
- Information systems are being designed to
directly incorporate electronic mail. - For example, Microsoft Outlook and Exchange
Server and IBM/Lotus Notes allow for the
construction of intelligent electronic forms that
can be integrated into an application.
37Information Technology Architecture
- Interface Architectures - Inputs, Outputs,
Middleware - Electronic Data Interchange
- Businesses that operate in many locations and
businesses that seek more efficient exchange of
transactions with their suppliers and/or
customers often utilize electronic data
interchange. - Electronic data interchange (EDI) is the
electronic flow of business transactions between
customers and suppliers. - With EDI, a business can eliminate its dependence
on paper documents and mail, plus dramatically
reduce response time. - Various EDI standards exist for the standardized
exchange of data between organizations within the
same industry.
38Information Technology Architecture
- Interface Architectures - Inputs, Outputs,
Middleware - Imaging and Document Interchange
- Similar to EDI except that the actual images of
forms and data are transmitted and received. - It is particularly useful in applications in
which the form images or graphics are required
(e.g., the insurance industry).
39Information Technology Architecture
- Interface Architectures - Inputs, Outputs,
Middleware - Middleware
- Information systems must also interface to other
information systems. - System integration is the process of making
heterogeneous information systems (and computer
systems) interoperate. - A key technology used to interface and integrate
systems is middleware. - Middleware is utility software that serves to
interface systems built with incompatible
technologies. Middleware serves as a consistent
bridge between two or more technologies. It may
be built into operating systems, but it is also
frequently sold as a separate product.
40Information Technology Architecture
- Interface Architectures - Inputs, Outputs,
Middleware - Selecting User and System Interface
Technologies - The preferred or approved user and system
interface technologies may be specified as part
of the Interface architecture. - An organization may leave interface technologies
as a decision to be made on a project-by-project
basis. - An organization may establish macro guidelines
for interfaces and leave the micro decisions to
individual projects.
41Information Technology Architecture
- Process Architecture - The Software Development
Environment and System Management - The PROCESS architecture of an application is
defined in terms of the software languages and
tools that will be used to develop the business
logic and application programs. - This is expressed as a menu of choices since
different software development environments
(SDEs) are suited to different applications. - A software development environment is a language
and tool kit for constructing information system
applications. They are usually built around one
or more programming languages such as COBOL,
Basic, C or C++, Pascal, Smalltalk, or Java.
42Information Technology Architecture
- Process Architecture - The Software Development
Environment and System Management - SDEs for Centralized Computing and Distributed
Presentation - The software development environment for
centralized computing consists of - An editor and compiler, usually COBOL, to write
programs. - A transaction monitor, usually CICS, to manage
on-line transactions and terminal screens. - A file management system, such as VSAM, or a
database management system, such as DB2.
43Information Technology Architecture
- Process Architecture - The Software Development
Environment and System Management - SDEs for Centralized Computing and Distributed
Presentation - The personal computer brought many new COBOL
development tools to the desktop PC. - A PC-based COBOL SDE provided the programmer with
more powerful editors, and testing and debugging
tools at the workstation level. - A programmer could do much of the development
work at the PC level, and then upload the code to
the central computer for system testing,
performance tuning, and production. - The SDE could be interfaced with a CASE tool and
code generator to take advantage of process
models developed during systems analysis.
44Information Technology Architecture
- Process Architecture - The Software Development
Environment and System Management - SDEs for Centralized Computing and Distributed
Presentation - SDEs provide tools to develop distributed
presentation client/server applications. - The Micro Focus Dialog Manager provided COBOL
Workbench users with tools to build Windows-based
user interfaces that could cooperate with the
CICS transaction monitors and the mainframe COBOL
programs.
45Information Technology Architecture
- Process Architecture - The Software Development
Environment and System Management - SDEs for Two-Tier Client/Server
- The SDE for two-tiered client/server applications
(also called distributed data) consists of a
client-based programming language with built-in
SQL connectivity to one or more server database
engines. - SDEs provide the following
- Rapid application development (RAD) for quickly
building the graphical user interface that will
be replicated and executed on all of the client
PCs. - Automatic generation of the template code for the
above GUI and associated system events (such as
mouse-clicks, keystrokes, etc.) that use the GUI.
The programmer only has to add the code for the
business logic.
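A hedged illustration of that template-code idea, using plain Java Swing rather than any particular RAD tool (the window, button, and handler are made up): the generated-style boilerplate wires up the GUI and its events, and the programmer adds only the business logic inside the handler.

```java
import javax.swing.JButton;
import javax.swing.JFrame;

// Generated-style GUI template: the window and event wiring are boilerplate;
// the developer only adds business logic inside the event handler.
public class OrderEntryForm {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Order Entry");
        JButton submit = new JButton("Submit Order");
        submit.addActionListener(e -> {
            // Business logic goes here (e.g., validate and save the order).
            System.out.println("Order submitted");
        });
        frame.add(submit);
        frame.setSize(300, 120);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}
```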
46Information Technology Architecture
- Process Architecture - The Software Development
Environment and System Management - SDEs for Two-Tier Client/Server
- SDEs provide the following (continued)
- A programming language that is compiled for
replication and execution on the client PCs. - Connectivity (in the above language) for various
relational database engines, and interoperability
with those engines. Interoperability is achieved
by including SQL database commands (to, for
example, create, read, update, delete, and sort
records) that will be sent to the database engine
for execution on the server. - A sophisticated code testing and debugging
environment for the client.
47Information Technology Architecture
- Process Architecture - The Software Development
Environment and System Management - SDEs for Two-Tier Client/Server
- SDEs provide the following (continued)
- A system testing environment that helps the
programmer develop, maintain, and run a reusable
test script of user data, actions, and events
against the compiled programs to ensure that code
changes do not introduce new or unforeseen
problems. - A report writing environment to simplify the
creation of new end-user reports off a remote
database. - A help authoring system for the client PCs.
48Information Technology Architecture
- Process Architecture - The Software Development
Environment and System Management - SDEs for MultiTier Client/Server
- Unlike two-tiered applications, n-tiered
applications must support more than 100 users
with mainframe-like transaction response time and
throughput with 100 gigabyte or larger
databases. - The SDEs in this class must provide all of
the capabilities typically associated with
two-tiered SDEs plus the following - Support for heterogeneous computing platforms,
both client and server, including Windows, OS/2,
UNIX, Macintosh, and legacy mainframes and
minicomputers. - Code generation and programming for both clients
and servers. Most tools in this genre support
pure object-oriented languages such as C++ and
Smalltalk.
49Information Technology Architecture
- Process Architecture - The Software Development
Environment and System Management - SDEs for MultiTier Client/Server
- The SDEs in this class must provide all of
the capabilities typically associated with
two-tiered SDEs plus the following (continued) - A strong emphasis on reusability using software
application frameworks, templates, components,
and objects. - Bundled mini-CASE tools for analysis and design
that interoperate with code generators and
editors. - Tools to help analysts and programmers partition
application components between the clients and
servers. - Tools to help developers deploy and manage the
finished application to clients and servers. This
generally includes security management tools.
50Information Technology Architecture
- Process Architecture - The Software Development
Environment and System Management - SDEs for MultiTier Client/Server
- The SDEs in this class must provide all of
the capabilities typically associated with
two-tiered SDEs plus the following (continued) - Ability to automatically scale the application
to larger and different platforms, client and
server. This issue of scalability was always
assumed in the mainframe computing era, but is
relatively new to the client/server computing
era. - Sophisticated software version control and
application management.
51Information Technology Architecture
- Process Architecture - The Software Development
Environment and System Management - SDEs for Internet and Intranet Client/Server
- Most of these rapid application development tools
are built around three core standard
technologies - HTML (Hypertext Markup Language) - the language
used to construct World Wide Web pages and links. - CGI (Common Gateway Interface) - an interface
that lets web servers run external programs to generate web
components and links - Java a general purpose programming language for
creating platform-independent programs and
applets that can execute across the world wide
web. - These SDEs can create both Internet, intranet,
and non-Internet/intranet applications.
52Information Technology Architecture
- Process Architecture - The Software Development
Environment and System Management - System Management
- Client/server computing applications usually
require one or more of the following common
process development and management tools - Transaction Processing (TP) Monitors software
that ensures that all of the data associated with
a single business transaction is processed as a
single transaction amongst all of the parallel
business transactions that may be in the system
at the same time. - Version Control and Configuration Managers
software that tracks on-going changes to software
that is usually developed by teams of
programmers. The software also allows management
to rollback to a prior version of an application
if the current version encounters unanticipated
problems.
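The all-or-nothing behavior that a TP monitor enforces can be sketched at the database-API level (this is plain JDBC, not a TP monitor; the connection details and SQL are hypothetical): either every update belonging to the business transaction commits, or none does.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Sketch of all-or-nothing processing for one business transaction.
public class TransferFunds {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://dbserver.example.com/bank", "app", "secret")) {
            con.setAutoCommit(false); // group the updates into one transaction
            try (PreparedStatement debit = con.prepareStatement(
                     "UPDATE accounts SET balance = balance - 100 WHERE id = 1");
                 PreparedStatement credit = con.prepareStatement(
                     "UPDATE accounts SET balance = balance + 100 WHERE id = 2")) {
                debit.executeUpdate();
                credit.executeUpdate();
                con.commit();          // both updates succeed together
            } catch (Exception e) {
                con.rollback();        // or neither is applied
                throw e;
            }
        }
    }
}
```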
53Application Architecture Strategies Design
Implications
- The Enterprise Application Architecture Strategy
- In this strategy, the organization develops an
enterprise-wide information technology
architecture to be followed in all subsequent
information system development projects. - This IT architecture defines the following
- The approved network, data, interface, and
processing technologies and development tools
(inclusive of hardware and software and clients
and servers). - A strategy for integrating legacy systems and
technologies into the application architecture. - An on-going process for continuously reviewing
the application architecture for currency and
appropriateness.
54Application Architecture Strategies Design
Implications
- The Enterprise Application Architecture Strategy
- This IT architecture defines the following
(continued) - An on-going process for researching emerging
technologies and making recommendations for their
inclusion in the application architecture. - A process for analyzing requests for variances
from the approved application architecture. (You
may recall that SoundStage received such a
variance to prototype object technology in the
member services system project.)
55Application Architecture Strategies Design
Implications
- The Tactical Application Architecture Strategy
- In the absence of an enterprise wide application
architecture, each project must define its own
architecture for the information system being
developed. - The developers usually have somewhat greater
latitude in requesting new technologies, but those
requests must be defended and approved as feasible. - IT feasibility usually includes the following
aspects - Technical feasibility - This can either be a
measure of a technology's maturity, a measure
of the technology's suitability to the
application being designed, or a measure of the
technology's ability to work with other
technologies.
56Application Architecture Strategies Design
Implications
- The Tactical Application Architecture Strategy
- IT feasibility usually includes the following
aspects (continued) - Operational feasibility - This is a measure of
how comfortable the business management and users
are with the technology, and how comfortable the
technology managers and support personnel are
with the technology. - Economic feasibility - This is a measure of both
whether or not the technology can be afforded,
and whether it is cost effective, meaning the
benefits outweigh the costs.
57Application Architecture Strategies Design
Implications
- Build versus Buy Implications
- One of the most fundamental software decisions in
any project is build versus buy: should we
design and build the software in-house, or should
we purchase the software as a package? - The packages are referred to as COTS (commercial
off-the-shelf). - Many organizations have made an enterprise IT
decision to always purchase those applications
that do not significantly add competitive
advantage to the business. - Typically, these include human resources (and
payroll), financial systems, and systems subject
to frequent regulatory change (such as college
financial aid).
58Application Architecture Strategies Design
Implications
- Build versus Buy Implications
- The purchase of application software does not
invalidate systems analysis and design, but it
does, however, change the methodology and project
in the following ways - Alternative application software packages must be
analyzed against the user requirements, and any
unfilled business requirements must be
identified. - Options and preferences within the chosen
software package must be analyzed and selected.
(Most packages allow various one-time and
on-going customization.) - Business processes and documents must be analyzed
and redesigned to interoperate with the selected
software.
59Application Architecture Strategies Design
Implications
- Build versus Buy Implications
- The purchase of application software does not
invalidate systems analysis and design, but it
does, however, change the methodology and project
in the following ways (continued) - Transition processes must be analyzed and
designed to import data from legacy systems into
the new software's files and databases. - The application software's interfaces to other
information systems must be analyzed and
designed. - Any unmet business requirements are subject to
analysis and design as extensions to the chosen
software package. - Purchased application software may also be
subject to any IT architectural standards adopted
by a business.
60Modeling Application Architecture Information
System Processes
- Physical Data Flow Diagrams
- DFDs can also be used to model the physical
(meaning technical) design of the system. - Physical data flow diagrams model the technical
and human design decisions to be implemented as
part of an information system. They communicate
technical and other design constraints to those
who will actually implement the system; in other
words, they serve as a technical blueprint for
the implementation. - Physical DFDs use the same shapes and connections
as logical DFDs: processes, external agents, data
stores, and data flows. Only the naming standards
(and a few new rules) are changed to extend the
language to document technology and design
decisions.
61(No Transcript)
62Modeling Application Architecture Information
System Processes
- Physical Data Flow Diagrams
- Physical Processes
- Recall that processes are the key shapes on any
DFD. - A physical process is either (1) a processor,
such as a client PC, network server, or robot, or
(2) specific work or actions to be performed on
incoming data flows to produce outgoing data
flows. In the latter case, the physical process
must clearly designate which person(s) or what
technology(s) will be assigned to do the work.
63Modeling Application Architecture Information
System Processes
- Physical Data Flow Diagrams
- Physical Processes
- There are two elements to physical data flow
diagrams - Logical processes are frequently assigned to
specific physical processors such as clients,
servers, or other devices in a computer network.
To this end, we might draw a physical DFD called
a network topology data flow diagram for the
information system. - Subsequently, logical processes are usually
implemented as one or more physical processes.
64Modeling Application Architecture Information
System Processes
- Physical Data Flow Diagrams
- Physical Processes
- Some logical processes must be split into
multiple physical processes for the following
reasons - To split the process into that portion performed
by a person and that portion performed by the
computer. - To split the process into that portion to be
implemented with one technology, and that portion
to be implemented with a different technology. - To show multiple, different implementations of
the same logical process (such as processing a
paper order versus processing a phone order). - To add processes that are necessary to implement
audit and control requirements, or handle
exceptions.
65Modeling Application Architecture Information
System Processes
- Physical Data Flow Diagrams
- Physical Processes
- Process names use the action verb object clause
convention; however, the name is preceded or
followed by an implementation method. The format
is - implementation method: action verb object
clause - action verb object clause (implementation
method) - If a logical process is to be implemented
partially by people and partially by software, it
must be split into separate physical processes
and appropriate data flows must be added between
the physical processes. - The name of a physical process to be performed by
people, not software, should indicate who will
perform that process.
66Modeling Application Architecture Information
System Processes
- Physical Data Flow Diagrams
- Physical Processes
- For computerized processes, the implementation
method is, in part, chosen from one of the
following methods - A purchased software package, possibly to be
selected. - A productivity or utility program.
- An existing application program from a program
library. - A program to be written.
67Modeling Application Architecture Information
System Processes
- Physical Data Flow Diagrams
- Physical Processes
- The number of processes on a physical DFD will
usually be greater than the number of processes
on its equivalent logical DFD. - Processes may be added to reflect data flow
collection, filtering, forwarding, preparation,
business controls, all in response to the
implementation target that has been selected. - Some logical processes may be split into multiple
physical processes to reflect portions of a
process to be done manually versus by a computer,
to be implemented with different technology, or
to be distributed to clients, servers, or
different host computers.
68Modeling Application Architecture Information
System Processes
- Physical Data Flow Diagrams
- Physical Data Flows
- Recall that all processes have at least one input
and one output data flow. - A physical data flow represents the planned
implementation of an input to or output from a
physical process. It can also indicate database
action such as create, delete, read, or update a
record. It can also represent the import of data
from, or the export of data to another
information system across a network. Finally, it
can represent the flow of data between two
modules or subroutines within the same program.
69Modeling Application Architecture Information
System Processes
- Physical Data Flow Diagrams
- Physical Data Flows
- Physical data flow names use one of the following
general formats - implementation medium: data flow name
- data flow name (implementation method)
- For data transmitted across a network, the
implementation media should indicate the file
transfer protocol to be used. - For data transmitted between processes that are
to be part of the same program, you could specify
parameters and variables to be passed between the
program modules.
70Modeling Application Architecture Information
System Processes
- Physical Data Flow Diagrams
- Physical Data Flows
- Physical DFDs must also indicate any data flows
to be implemented as business forms. - Business forms frequently use a multiple (carbon
or carbonless) copy implementation. - At some point in processing, the different copies
are split and travel to different manual
processes. - This is shown on a physical DFD as a diverging
data flow. - Each copy should be uniquely named.
71Modeling Application Architecture Information
System Processes
- Physical Data Flow Diagrams
- Physical Data Flows
- Most logical data flows are carried forward to
the physical DFDs. - Some may be consolidated into single physical
data flows that represent business forms. - Some may be split into multiple flows as a result
of having split logical processes into multiple
physical processes. - Some may be duplicated as multiple flows with
different technical implementations.
72Modeling Application Architecture Information
System Processes