16 Practices for Improving Software Project Success
LCDR Jane Lochner, DASN (C4I/EW/Space)
Software Technology Conference, 3-6 May 1999
(703) 602-6887, lochner.jane@hq.navy.mil
Productivity vs. Size
[Chart: function points per person-month vs. software size in function points]
Capers Jones, Becoming Best in Class, Software Productivity Research, 1995 briefing
Percent Cancelled vs. Size(1)
[Chart: probability of a software project being cancelled vs. software size in function points(2)]
The drivers of the bottom line change dramatically with software size.
(1) Capers Jones, Becoming Best in Class, Software Productivity Research, 1995 briefing
(2) 80 SLOC of Ada to code 1 function point; 128 SLOC of C (see the conversion sketch below)
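A minimal sketch of footnote (2)'s conversion, in Python; the ratios come from the footnote, while the function name and the 10,000-FP example are illustrative:

    # Backfire-style size conversion using the footnote's ratios:
    # ~80 SLOC of Ada, or ~128 SLOC of C, per function point.
    SLOC_PER_FUNCTION_POINT = {"Ada": 80, "C": 128}

    def estimated_sloc(function_points: float, language: str) -> float:
        """Convert a function-point count to an approximate SLOC figure."""
        return function_points * SLOC_PER_FUNCTION_POINT[language]

    for lang in ("Ada", "C"):
        print(f"10,000 FP in {lang}: ~{estimated_sloc(10_000, lang):,.0f} SLOC")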
Management is the Problem, But Complexity is the Villain
- When a software development is cancelled, management failure is usually the reason.
- "The task force is convinced that today's major problems with military software development are not technical problems, but management problems."
- When the tasks of a team effort are interrelated, the total effort increases in proportion to the square of the number of persons on the team.
- Complexity of effort, difficulty of coordination
Report of the Defense Science Board Task Force on Military Software, Sept 1987, Fred Brooks, Chairman
Software Program Managers Network (SPMN)
- Consulting
- Software Productivity Centers of Excellence
- Bring academia and industry together
- Delta One
- Pilot program to attract, train, and retain software workers for Navy and DoD programs
- Airlie Council
- Identify fundamental processes and proven solutions essential for large-scale software project success
- Board of Directors
- Chaired by BGen Nagy, USAF
Airlie Council
- Membership
- Successful managers of large-scale s/w projects
- Recognized methodologists
- Metrics authorities
- Prominent Consultants
- Executives from major s/w companies
- Product Approval
- Software Advisory Group (SAG)
Three Foundations for Project Success
3 Foundations, 16 Essential Practices™
Product Integrity and Stability
Project Integrity
Construction Integrity
- Adopt Continuous Risk Management
- Estimate Empirically Cost and Schedule
- Use Metrics to Manage
- Track Earned Value
- Track Defects against Quality Targets
- Treat People as the Most Important Resource
- Adopt Life Cycle Configuration Management
- Manage and Trace Requirements
- Use System-Based Software Design
- Ensure Data and Database Interoperability
- Define and Control Interfaces
- Design Twice, Code Once
- Assess Reuse Risks and Costs
- Inspect Requirements and Design
- Manage Testing as a Continuous Process
- Compile and Smoke Test Frequently
Project Integrity
- Management practices that
- Give early indicators of potential problems
- Coordinate the work and the communications of the development team
- Achieve a stable development team with needed skills
- Are essential to deliver the complete product on time, within budget, and with all documentation required to maintain the product after delivery
Project Integrity
- Adopt Continuous Risk Management
- Estimate Empirically Cost and Schedule
- Use Metrics to Manage
- Track Earned Value
- Track Defects Against Quality Targets
- Treat People as the Most Important Resource
Risk Management Practice Essentials
- Identify risks over the entire life cycle, including at least
- Cost, schedule, technical, staffing, external dependencies, supportability, sustainability, political
- For EACH risk, estimate (see the sketch below)
- Likelihood that it will become a problem
- Impact if it does
- Mitigation/contingency plans
- Measurement method
- Update and report risk status at least monthly
- ALARMS
- Trivial risks
- Risks from unproven technology not identified
- No trade studies for high-risk technical requirements
- Management and workers have different understandings of risks
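To make the "estimate likelihood and impact for EACH risk" bullet concrete, here is a minimal risk-register sketch in Python. Scoring exposure as likelihood times impact is a common convention, not something the slide prescribes, and the field names and sample risks are illustrative:

    from dataclasses import dataclass

    @dataclass
    class Risk:
        name: str
        likelihood: float  # probability the risk becomes a problem, 0.0-1.0
        impact: float      # cost impact if it does, in person-months

        @property
        def exposure(self) -> float:
            # Common convention: exposure = likelihood x impact.
            return self.likelihood * self.impact

    register = [
        Risk("Unproven middleware", likelihood=0.6, impact=12.0),
        Risk("Key staff turnover", likelihood=0.3, impact=8.0),
        Risk("External interface slips", likelihood=0.5, impact=5.0),
    ]

    # Monthly report: risks ordered by exposure, highest first.
    for risk in sorted(register, key=lambda r: r.exposure, reverse=True):
        print(f"{risk.name:28s} exposure = {risk.exposure:5.1f} person-months")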
Cost and Schedule Estimation Practice Essentials
- ALARMS
- High productivity estimates based on unproven technology
- Estimators not familiar with industry norms
- No cost associated with code reuse
- System requirements are incomplete
- Bad earned-value metrics
- No risk-materialization costs
- Use actual costs measured on past comparable projects
- Identify all reused code (COTS/GOTS); evaluate applicability and estimate the amount of code modification and new code required to integrate
- Compare the empirical top-down cost estimate with a bottom-up engineering estimate
- Never compress the schedule to less than 85% of nominal (see the sketch below)
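Two of these essentials reduce to simple arithmetic checks. The sketch below, with illustrative figures and function names, compares a top-down estimate against a bottom-up one and flags schedule compression below 85% of nominal:

    def estimates_agree(top_down_pm: float, bottom_up_pm: float,
                        tolerance: float = 0.15) -> bool:
        """True when the two estimates fall within `tolerance` of each other.
        The 15% tolerance is an assumption, not a figure from the briefing."""
        return abs(top_down_pm - bottom_up_pm) / max(top_down_pm, bottom_up_pm) <= tolerance

    def compression_ok(planned_months: float, nominal_months: float) -> bool:
        """Never compress the schedule to less than 85% of nominal."""
        return planned_months >= 0.85 * nominal_months

    print(estimates_agree(120.0, 150.0))  # False: a 20% gap needs reconciling
    print(compression_ok(20.0, 26.0))     # False: 20 months is ~77% of nominal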
Use Metrics to Manage Practice Essentials
- Collect metrics on
- Risk materialization
- Product quality
- Process effectiveness
- Process conformance
- Make decisions based on data not older than one week
- Make metrics data available to all team members
- Define thresholds that trigger predefined actions (see the sketch below)
- ALARMS
- Large price tag attached to requests for metrics data
- Not reported at least monthly
- Rebaselining
- Inadequate task activity network
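A sketch of what "thresholds that trigger predefined actions" can look like in practice; the metric names, limits, and actions are illustrative assumptions:

    from datetime import date, timedelta

    # Each metric carries a threshold and the predefined action taken
    # when the threshold is crossed.
    THRESHOLDS = {
        "open_defects": (50, "convene defect review board"),
        "staff_turnover_pct": (10, "escalate to program manager"),
    }

    def check_metric(name: str, value: float, as_of: date) -> None:
        # Per the slide, decisions should rest on data no older than a week.
        if date.today() - as_of > timedelta(weeks=1):
            print(f"{name}: data older than one week -- refresh before deciding")
            return
        limit, action = THRESHOLDS[name]
        if value > limit:
            print(f"{name} = {value} exceeds {limit}: {action}")

    check_metric("open_defects", 63, date.today())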
Track Earned Value Practice Essentials
- Establish unambiguous exit criteria for EACH task
- Take BCWP credit for tasks when the exit criteria have been verified as passed, and report ACWP for those tasks (see the sketch below)
- Establish cost and schedule budgets that are within uncertainty acceptable to the project
- Allocate labor and other resources to each task
- ALARMS
- Software tasks not separate from non-software tasks
- More than 20% of the total development effort is level-of-effort (LOE)
- Task durations greater than 2 weeks
- Rework doesn't appear as a separate task
- Data is old
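The BCWP/ACWP bullet is easier to see with the two standard earned-value indices, CPI and SPI (BCWP = budgeted cost of work performed, BCWS = budgeted cost of work scheduled, ACWP = actual cost of work performed). The figures below are illustrative:

    def cost_performance_index(bcwp: float, acwp: float) -> float:
        """CPI < 1.0: the work performed cost more than budgeted."""
        return bcwp / acwp

    def schedule_performance_index(bcwp: float, bcws: float) -> float:
        """SPI < 1.0: less work is done than was scheduled by now."""
        return bcwp / bcws

    bcwp, bcws, acwp = 400.0, 500.0, 450.0  # person-hours
    print(f"CPI = {cost_performance_index(bcwp, acwp):.2f}")      # 0.89: over cost
    print(f"SPI = {schedule_performance_index(bcwp, bcws):.2f}")  # 0.80: behind schedule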
Track Defects Against Quality Targets Practice Essentials
- Establish unambiguous quality goals at project inception
- Understandability, reliability, maintainability, modularity, defect density
- Classify defects by
- Type, severity, urgency, discovery phase (see the sketch below)
- Report defects by
- When created; when found; number of inspections where the defect was present but not found; number closed and currently open, by category
- ALARMS
- Defects not managed by CM
- Culture penalizes discovery of defects
- Not aware of effectiveness of defect-removal methods
- Earned-value credit is taken before defects are fixed or formally deferred
- Quality-target failures not recorded as one or more defects
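A minimal sketch of the classification and reporting above; the severity values and sample defects are illustrative, not a DoD taxonomy:

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Defect:
        ident: str
        severity: str       # e.g. "critical", "major", "minor"
        phase_created: str  # life-cycle phase where the defect was inserted
        phase_found: str    # life-cycle phase where it was discovered
        is_open: bool = True

    defects = [
        Defect("D-101", "critical", "design", "integration"),
        Defect("D-102", "minor", "code", "code inspection"),
        Defect("D-103", "major", "requirements", "system test", is_open=False),
    ]

    # Report open/closed counts by severity category.
    print(Counter((d.severity, "open" if d.is_open else "closed") for d in defects))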
Treat People as Most Important Resource Practice Essentials
- Provide staff the tools to be efficient and productive
- Software
- Equipment
- Facilities, work areas
- Recognize team members for performance
- Individual goals
- Program requirements
- Make professional growth opportunities available
- Technical
- Managerial
- ALARMS
- Excessive or unpaid overtime
- Excessive pressure
- Large, unidentified staff increases
- Key software staff not receiving competitive compensation
- Staff turnover greater than industry/locale norms
Construction Integrity
- Development practices that
- Provide a stable, controlled, predictable development or maintenance environment
- Increase the probability that what was to be built is actually in the product when delivered
Construction Integrity
- Adopt Life Cycle Configuration Management
- Manage and Trace Requirements
- Use System-Based Software Design
- Ensure Data and Database Interoperability
- Define and Control Interfaces
- Design Twice, Code Once
- Assess Reuse Risks and Costs
Configuration Management Practice Essentials
- Institute CM for
- COTS, GOTS, NDI, and other shared engineering artifacts
- Design documentation
- Code
- Test documentation
- Defects
- Incorporate CM activities as tasks within project plans and the activity network
- Conduct Functional and Physical Configuration Audits
- Maintain version and semantic consistency between CIs
- ALARMS
- Developmental baseline not under CM control
- CM activities don't have budgets, products, and unambiguous exit criteria
- CM does not monitor and control the delivery and release-to-operation process
- Change status not reported
- No ICWGs for external interfaces
- CCBs don't assess system impact
Manage and Trace Requirements Practice Essentials
- ALARMS
- Layer design began before requirements for performance, reliability, safety, external interfaces, and security had been allocated
- System requirements
- Not defined by real end users
- Did not include operational scenarios
- Did not specify inputs that will stress the system
- Traceability is not to the code level
- Trace system requirements down through all derived requirements and layers of design to the lowest level and to individual test cases (see the sketch below)
- Trace each CI back to one or more system requirements
- For the incremental-release model, develop a release build-plan that traces all system requirements into planned releases
- For the evolutionary model, trace new requirements into the release build-plan as soon as they are defined
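A sketch of the two-way traceability check these bullets call for: every system requirement reaches at least one test case, and every CI traces back to at least one requirement. The identifiers and the dictionary layout are illustrative:

    requirement_to_tests = {
        "SR-001": ["TC-101", "TC-102"],
        "SR-002": [],                # alarm: requirement with no test case
    }
    ci_to_requirements = {
        "CSCI-NAV": ["SR-001"],
        "CSCI-LOG": [],              # alarm: CI with no parent requirement
    }

    untested = [r for r, tests in requirement_to_tests.items() if not tests]
    orphans = [ci for ci, reqs in ci_to_requirements.items() if not reqs]
    print("Requirements with no test case:", untested)
    print("CIs with no parent requirement:", orphans)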
System-Based Software Design Practice Essentials
- Develop system and software architectures IAW structured methodologies
- Develop system and software architectures from the same partitioning perspective
- Information
- Objects
- Functions
- States
- Design the system and software architecture to give views of static, dynamic, and physical structures
- ALARMS
- Modifications and additions to reused legacy/COTS/GOTS software not minimized
- Security, reliability, performance, safety, and interoperability requirements not included
- Design not specified for all internal and external interfaces
- Requirements not verified through M&S before the start of software design
- Software engineers did not participate in architecture development
Data and Database Interoperability Practice Essentials
- Design information systems with very loose coupling between hardware, persistent data, and application software
- Define data element names, definitions, minimum accuracy, data types, units of measure, and ranges of values (see the sketch below)
- Identified using several processes
- Minimizes the amount of translation required to share data with external systems
- Define relationships between data items based on the queries to be made on the database
- ALARMS
- Data security requirements, business rules, and high-volume transactions on the database not specified before physical database design begins
- Compatibility analysis not performed because the DBMS is SQL-compliant
- No time/resources budgeted to translate COTS databases to DoD standards
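A sketch of a data-element definition carrying the attributes the second bullet lists: name, definition, data type, units of measure, minimum accuracy, and range of values. The element and its values are illustrative:

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class DataElement:
        name: str
        definition: str
        data_type: str
        units: str
        minimum_accuracy: float
        valid_range: Tuple[float, float]

        def in_range(self, value: float) -> bool:
            low, high = self.valid_range
            return low <= value <= high

    altitude = DataElement(
        name="PLATFORM-ALTITUDE",
        definition="Height of the platform above mean sea level",
        data_type="float",
        units="meters",
        minimum_accuracy=0.5,
        valid_range=(-500.0, 30_000.0),
    )
    print(altitude.in_range(12_000.0))  # True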
Data and Database Interoperability Practice Essentials
- References from the DoD Data Administration Home Page
- Implementing DoD Standard Data
- http://www-datadmn.itsi.disa.mil/brochure.html
- Data Administration Procedures (8320.1-M)
- http://www-datadmn.itsi.disa.mil/8320-1-m.html
- Data Standardization Procedures (8320.1-M-1)
- http://www-datadmn.itsi.disa.mil/8320_1m1.html
- DDDS Meta-Data Fields
- http://www-datadmn.itsi.disa.mil/metadata-info.htm
- DoD Data Model
- http://www-datadmn.itsi.disa.mil/ddm.html
Define and Control Interfaces Practice Essentials
- ALARMS
- Assumption that two interfacing applications that comply with the JTA and DII COE/TAFIM interface standards are interoperable
- E.g., the JTA and TAFIM include both Microsoft and UNIX interface standards, and Microsoft and UNIX design these standards to exclude each other
- Interface testing not done under heavy stress
- Reasons for using proprietary features not documented
- Ensure interfaces comply with applicable public, open API standards and data-interoperability standards
- Define user-interface requirements through user participation
- Avoid use of proprietary features of COTS product interfaces
- Place each interface under CM control before developing software components that use it
- Track external interface dependencies in the activity network
Design Twice, Code Once Practice Essentials
- ALARMS
- Graphics not used to describe different views of the design
- Operational scenarios not defined that show how the different views of the design interact
- Reuse, COTS, GOTS, and program-library components not mapped to the software/database components
- System and software requirements not traced to the software/database components
- Describe
- Execution process characteristics and features
- End-user functionality
- Physical software components and their interfaces
- Mapping of software components onto hardware components
- States and state transitions
- Use design methods that are consistent with those used for the system and are defined in the SDP
Assess Reuse Risks and Costs Practice Essentials
- ALARMS
- Development of "wrappers" needed to translate reuse software external interfaces
- Positive and negative impacts of COTS proprietary features not identified
- No analysis of the GOTS sustainment organization
- No plan/cost for COTS upgrades
- Cost of reuse code estimated at less than 30% of new code
- Reuse code has less than 25% functionality fit (see the sketch below)
- Conduct a trade study to select reuse or a new architecture
- Establish quantified selection criteria and acceptability thresholds
- Analyze full life-cycle costs of each candidate component
- Identify reuse code at program inception, before the start of architecture design
- Use architectural frameworks that dominate commercial markets
- CORBA/JavaBeans
- ActiveX/DCOM
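The two quantitative alarms above translate directly into a screening check. The thresholds (30% of new-code cost, 25% functionality fit) come from the slide; everything else is illustrative:

    from typing import List

    def reuse_alarms(est_reuse_cost: float, est_new_cost: float,
                     functionality_fit: float) -> List[str]:
        alarms = []
        if est_reuse_cost < 0.30 * est_new_cost:
            alarms.append("reuse cost estimated below 30% of new-code cost")
        if functionality_fit < 0.25:
            alarms.append("functionality fit below 25%")
        return alarms

    # An estimate that reuse costs 20% of new development is suspiciously low.
    print(reuse_alarms(est_reuse_cost=20.0, est_new_cost=100.0,
                       functionality_fit=0.40))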
Product Integrity
- Quality practices that
- Help assure that, when delivered, the product will meet customer quality requirements
- Provide an environment where defects are caught when inserted, and any that leak through are caught as early as possible
Product Integrity
- Inspect Requirements and Design
- Manage Testing as a Continuous Process
- Compile and Smoke Test Frequently
Inspect Requirements and Design Practice Essentials
- ALARMS
- Less than 80% of defects discovered by inspections
- Predominant inspection is an informal code walkthrough
- Less than 100% inspection of architecture design products
- Less than 50% inspection of test plans
- Inspect products that will be inputs to other tasks
- Establish a well-defined, structured inspection technique
- Train employees in how to conduct inspections
- Collect and report defect metrics for each formal inspection (see the sketch below)
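One way to record the per-inspection defect metrics the last bullet calls for; the record fields and sample numbers are assumptions:

    from dataclasses import dataclass

    @dataclass
    class InspectionRecord:
        artifact: str          # e.g. "SRS section 3.2", "CSCI-NAV design"
        pages_inspected: int
        defects_found: int
        effort_hours: float

        @property
        def defects_per_page(self) -> float:
            return self.defects_found / self.pages_inspected

    record = InspectionRecord("CSCI-NAV design", pages_inspected=40,
                              defects_found=18, effort_hours=6.0)
    print(f"{record.artifact}: {record.defects_per_page:.2f} defects/page")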
Manage Testing as a Continuous Process Practice Essentials
- Deliver inspected test products IAW the integration-test plan
- Ensure every CSCI requirement has at least one test case
- Include both white-box and black-box tests
- Functional, interface, error-recovery, out-of-bounds input, and stress tests (see the sketch below)
- Scenarios designed to model field operation
- ALARMS
- Builds for all tests not done by CM
- Pass/fail criteria not established for each test
- No test-stoppage criteria
- No automated test tools
- High-risk and safety- or security-critical code not tested early on
- Compressed test schedules
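A sketch of one black-box test with the explicit pass/fail criteria and out-of-bounds input these bullets require. The unit under test, track_target, and its contract are hypothetical:

    import unittest

    def track_target(bearing_deg: float) -> float:
        """Hypothetical unit under test: normalizes a bearing to [0, 360)."""
        if not -360.0 <= bearing_deg <= 720.0:
            raise ValueError("bearing out of supported range")
        return bearing_deg % 360.0

    class TrackTargetTests(unittest.TestCase):
        def test_nominal_input(self):
            # Pass criterion: exact normalized value.
            self.assertEqual(track_target(370.0), 10.0)

        def test_out_of_bounds_input(self):
            # Pass criterion: out-of-bounds input is rejected, not mangled.
            with self.assertRaises(ValueError):
                track_target(10_000.0)

    if __name__ == "__main__":
        unittest.main()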
Compile and Smoke Test Frequently Practice Essentials
- Use an orderly integration build process with a VDD (Version Description Document) that
- Identifies the version of software units in the build
- Identifies open and fixed defects against the build
- Use an independent test organization to conduct integration tests
- Include evolving regression testing
- Document defects; CM tracks defects
- ALARMS
- Integration build and test done less than weekly
- Builds not done by CM; small CM staff
- Excessive use of patches
- Lack of automated build tools (a scripted build sketch follows below)
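A sketch of a scripted build-and-smoke-test driver that leaves a minimal version record behind. The make and pytest commands and the VDD file name are illustrative assumptions, not tools named in the briefing:

    import json
    import subprocess
    from datetime import date
    from typing import List

    def run(cmd: List[str]) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)  # stop the build on the first failure

    def build_and_smoke_test(version: str) -> None:
        run(["make", "clean", "all"])                   # full compile
        run(["python", "-m", "pytest", "tests/smoke"])  # smoke suite only
        # Minimal stand-in for a Version Description Document entry.
        vdd = {"version": version, "built": date.today().isoformat()}
        with open(f"VDD-{version}.json", "w") as f:
            json.dump(vdd, f, indent=2)

    build_and_smoke_test("1.4.0-build37")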
There is Hope!!!
A small number of high-leverage, proven practices can be put in place quickly to achieve relatively rapid bottom-line improvements.
The 16-Point Plan™