The Real World of Software Testing
July 29, 2003
 
The Product Quality Measures

1. Customer satisfaction index
(Quality ultimately is measured in terms of customer satisfaction.)
Surveyed before product delivery and after product delivery
(and on-going on a periodic basis, using standard questionnaires)
Number of system enhancement requests per year
Number of maintenance fix requests per year
User friendliness: call volume to customer service hotline
User friendliness: training time per new user
Number of product recalls or fix releases (software vendors)
Number of production re-runs (in-house information systems groups)

2. Delivered defect quantities
Normalized per function point (or per LOC)
At product delivery (first 3 months or first year of operation)
Ongoing (per year of operation)
By level of severity
By category or cause, e.g.: requirements defect, design defect, code defect,
documentation/on-line help defect, defect introduced by fixes, etc.

3. Responsiveness (turnaround time) to users
Turnaround time for defect fixes, by level of severity
Time for minor vs. major enhancements; actual vs. planned elapsed time

4. Product volatility
Ratio of maintenance fixes (to repair the system & bring it into
compliance with specifications), vs. enhancement requests
(requests by users to enhance or change functionality)

5. Defect ratios
Defects found after product delivery per function point
Defects found after product delivery per LOC
Ratio of pre-delivery defects to annual post-delivery defects
Defects per function point of the system modifications
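
As a rough illustration of how these ratios can be tracked, here is a minimal sketch in Python; every count and size figure in it is invented for the example, not taken from any project.

```python
# Minimal sketch: defect ratios computed from invented project counts.
post_delivery_defects = 42    # defects reported after product delivery (assumed)
pre_delivery_defects = 380    # defects found internally before release (assumed)
function_points = 900         # delivered size in function points (assumed)
kloc = 65.0                   # delivered size in thousands of lines of code (assumed)

defects_per_fp = post_delivery_defects / function_points
defects_per_kloc = post_delivery_defects / kloc
pre_to_post_ratio = pre_delivery_defects / post_delivery_defects

print(f"Post-delivery defects per FP:   {defects_per_fp:.3f}")
print(f"Post-delivery defects per KLOC: {defects_per_kloc:.2f}")
print(f"Pre-delivery : post-delivery =  {pre_to_post_ratio:.1f} : 1")
```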

6. Defect removal efficiency
Number of post-release defects (found by clients in field operation),
categorized by level of severity
Ratio of defects found internally prior to release (via inspections and testing),
as a percentage of all defects
All defects include defects found internally plus externally (by
customers) in the first year after product delivery
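
To make the ratio concrete, here is a minimal sketch of defect removal efficiency using the definition above (defects found internally as a percentage of all defects found internally plus externally in the first year); the counts are invented.

```python
# Minimal sketch: defect removal efficiency (DRE), per the definition above.
internal_defects = 380   # found via inspections and testing before release (assumed)
external_defects = 42    # found by customers in the first year after delivery (assumed)

dre = internal_defects / (internal_defects + external_defects) * 100
print(f"Defect removal efficiency: {dre:.1f}%")   # prints 90.0% for these counts
```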

7. Complexity of delivered product
McCabe's cyclomatic complexity counts across the system
Halstead’s measure
Card's design complexity measures
Predicted defects and maintenance costs, based on complexity measures
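
McCabe's measure can be approximated for a single routine as the number of decision points plus one. The sketch below applies that approximation to Python source using the standard ast module; it deliberately ignores boolean operators, exception handlers and other constructs, so it illustrates the idea rather than faithfully implementing the metric.

```python
import ast

def approx_cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    decisions = sum(
        isinstance(node, (ast.If, ast.For, ast.While, ast.IfExp))
        for node in ast.walk(tree)
    )
    return decisions + 1

sample = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2:
            return "odd seen"
    return "done"
"""
print(approx_cyclomatic_complexity(sample))  # 4: three decisions plus one
```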

8. Test coverage
Breadth of functional coverage
Percentage of paths, branches or conditions that were actually tested
Percentage by criticality level: perceived level of risk of paths
The ratio of the number of detected faults to the number of predicted faults.

9. Cost of defects
Business losses per defect that occurs during operation
Business interruption costs; costs of work-arounds
Lost sales and lost goodwill
Litigation costs resulting from defects
Annual maintenance cost (per function point)
Annual operating cost (per function point)
Measurable damage to your boss's career

10. Costs of quality activities
Costs of reviews, inspections and preventive measures
Costs of test planning and preparation
Costs of test execution, defect tracking, version and change control
Costs of diagnostics, debugging and fixing
Costs of tools and tool support
Costs of test case library maintenance
Costs of testing & QA education associated with the product
Costs of monitoring and oversight by the QA organization
(if separate from the development and test organizations)

11. Re-work
Re-work effort (hours, as a percentage of the original coding hours)
Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)
Re-worked software components (as a percentage of the total delivered components)

12. Reliability
Availability (percentage of time a system is available, versus the time
the system is needed to be available)
Mean time between failure (MTBF)
Mean time to repair (MTTR)
Reliability ratio (MTBF / MTTR)
Number of product recalls or fix releases
Number of production re-runs as a ratio of production runs
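
These reliability figures follow directly from failure logs. A minimal sketch, assuming a simple list of operating and repair intervals in hours (availability is computed with the common MTBF/(MTBF+MTTR) approximation):

```python
# Minimal sketch: MTBF, MTTR, reliability ratio and availability
# from assumed operating/repair intervals (hours).
uptimes = [720.0, 410.0, 655.0]   # hours of operation between failures (assumed)
repairs = [4.0, 2.5, 6.0]         # hours to restore service after each failure (assumed)

mtbf = sum(uptimes) / len(uptimes)           # mean time between failures
mttr = sum(repairs) / len(repairs)           # mean time to repair
reliability_ratio = mtbf / mttr
availability = mtbf / (mtbf + mttr) * 100    # % of needed time the system was up

print(f"MTBF: {mtbf:.1f} h  MTTR: {mttr:.1f} h")
print(f"Reliability ratio (MTBF/MTTR): {reliability_ratio:.1f}")
print(f"Availability: {availability:.2f}%")
```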
July 28, 2003
 
TEST AUTOMATION FRAMEWORKS
An excellent ebook by Carl Nagle
 
Three Questions About Each Bug You Find

1. Is this mistake somewhere else also?

2. What next bug is hidden behind this one?

3. What should I do to prevent bugs like this?

For more information, click on the title.


 
Risk Management

Risk avoidance: Risk is avoided by obviating the possibility that the undesirable event will happen. You refuse to commit to delivering feature F by milestone M - don't sign the contract until the software is done. This avoids the risk. As soon as you enter into a contract to deliver specific scope by a specific date, the risk that it won't come about exists.

Risk reduction: this consists of minimizing the likelihood of the undesirable event. XP reduces the likelihood that you will lack some features at each milestone by reducing the amount of "extra" work to be done, such as paperwork or documentation, and improving overall quality so as to make development faster.

Risk mitigation: this consists of minimizing the impact of the undesirable event. XP has active mitigation for the "schedule risk", by insisting that the most valuable features be done first; this reduces the likelihood that important features will be left out of milestone M.

Risk acceptance: just grit your teeth and take your beating. So we're missing feature F by milestone M - we'll ship with what we have by that date. After reduction and mitigation, XP manages any residual risk this way.

Risk transfer: this consists of getting someone else to take the risk in your place. Insurance is a risk transfer tactic. You pay a definite, known-with-certainty amount of money; the insurer will reimburse you if the risk of not completing feature F by milestone M materializes. XP makes no provision for this. Has anyone ever insured a software project against schedule/budget overrun?

Contingency planning: substituting one risk for another, so that if the undesirable event occurs you have a "Plan B" which can compensate for the ill consequences. If we miss critical milestone M1 with feature set F1, we'll shelve the project and reassign all resources to our back-burner project which is currently being worked on by interns.

Key point from all the above: risk management starts with identifying specific risks. Also, I think you can perform conscious risk management using any process, method, technique or approach. It's important to recognize that any process, etc. simply changes the risk landscape; your project will always have one single biggest risk, then a second biggest risk, and so on.

Also: risks, like requirements, don't have the courtesy to stay put over the life of a project. They will change - old ones will bow out as risk tactics take effect, new ones will take their place.

Risk management is like feedback. If you're not going to pay attention to it, you're wasting your time. More than once I've tried to adopt a risk-oriented approach to projects, only to have management react something like, "Oh, you think that's a risk. Well, thank you for telling us. We're happy to have had that risk reduced. Now proceed as before."

One risk I often raise in projects is skills risk. Developers who have only ever written Visual Basic are expected to crank out Java code, that sort of thing. Not once have I seen a response of risk avoidance (substituting other, trained team members for the unskilled ones), reduction (training the worker in Java), or mitigation (making provision for closer review of the person's code). It's always been acceptance - "We know it's less than ideal to have this guy working on that project, but he's what we've got at the moment. Can't hire anyone on short order, no time for training, no time for more reviews."

If you only ever have one tactic for dealing with risk, your risk "management" is a no-brainer.

---- From the Laurent Bossavit weblog


 
Defect Management Process
An excellent resource for the Defect Management Process. The content on this site is the same as that covered in Knowledge Domain 9 of the CSTE.
The topics covered on this web site are:

  • Defect Prevention
  • Deliverable Baseline
  • Defect Discovery
  • Defect Resolution
  • Process Improvement
  • Management Reporting


    July 18, 2003
     
    Common definitions for testing - A Set of Testing Myths:
    “Testing is the process of demonstrating that defects are not present in the application that was developed.”

    “Testing is the activity or process which shows or demonstrates that a program or system performs all intended functions correctly.”

    “Testing is the activity of establishing the necessary “confidence” that a program or system does what it is supposed to do, based on the set of requirements that the user has specified.”


    These myths are still entrenched in much of how we collectively view testing and this mind-set sets us up for failure even before we start really testing! So what is the real definition of testing?

    “Testing is the process of executing a program/system with the intent of finding errors.”

    The primary axiom for the testing equation within software development is this:

    “A test that, when executed, reveals a problem in the software is a success.”
     
    Why Test?
  • Test for defects so they can be fixed, and

  • Test for confidence in the software
    July 17, 2003
     
    Q&A's > CSTE > Knowledge Domain 6 > Test Planning Process
    7) What is the objective of a test plan?

    The objective of a test plan is to describe all the testing that is to be accomplished, together with the resources and schedule necessary for completion. The test plan should provide background information on the software being tested, test objectives and risks, and the specific tests to be performed.

    8) What are the concerns testers face?

    Not enough training
    Us-versus-them mentality: This common problem arises when developers and testers are on opposite sides of the testing issue
    Lack of test tools
    Lack of management understanding/support of testing
    Lack of customer and user involvement
    Not enough time for testing
    Over-reliance on independent testers: also called “throw it over the wall”
    Rapid change
    Testers are in a lose-lose situation: on the one hand, if the testers report too many defects, they are blamed for delaying the project; conversely, if they do not find the critical defects, they are blamed for poor quality
    Having to say no: telling the project that the software is not ready for production

    9) What are different approaches to organize test team? Or what are different methods for test team composition? Or what are the different ways to form a test team?

    Test Team Approach: Internal IT
    Composition of test team members: Project Team
    Advantages: Minimize cost, training, and Knowledge of Project
    Disadvantages: Time allocation, lack of independence, and lack of objectivity

    Test Team Approach: External IT
    Composition of test team members: QA Professional Testers
    Advantages: Independent view, IT Professionals, and Multiple Project testing experience
    Disadvantages: Cost, over reliance, and competition

    Test Team Approach: Non-IT
    Composition of test team members: Users, Auditors, and Consultants
    Advantages: Independent view, independence in assessment, and ability to act
    Disadvantages: Cost, lack of IT knowledge, and lack of project knowledge

    Test Team Approach: Combination
    Composition of test team members: Any or all of the above
    Advantages: Multiple Skills, Education, and Clout
    Disadvantages: Cost, Scheduling reviews, and diverse backgrounds


    10) List five skills a competent tester should have.

    Test Process Knowledge
    Excellent written and oral communication skills
    Analytical ability
    Knowledge of test tools
    Understanding of defect tools

    July 15, 2003
     
    Q&A's > CSTE > Knowledge Domain 6 > Test Planning Process

    1) At what point in the testing life cycle should test planning begin?

    Test planning should begin at the same time requirements definition starts. The plan is detailed in parallel with the application requirements. During the analysis stage of the project, the test plan defines and communicates test requirements and the amount of testing needed, so that accurate test estimates can be made and incorporated into the project plan.

    2) What are the IEEE standards for a test plan?

    Several standards suggest what a test plan should contain, including those from the IEEE.
    The IEEE standards are:

    829-1983 IEEE Standard for Software Test Documentation
    1008-1987 IEEE Standard for Software Unit Testing
    1012-1986 IEEE Standard for Software Verification & Validation Plans
    1059-1993 IEEE Guide for Software Verification & Validation Plans

    I am not sure about the above answer; can anyone tell me more about the IEEE standards for test plans?

    3) What is test design?

    Test Design details what types of tests must be conducted, what stages of testing are required (e.g. Unit, Integration, System, Performance, Usability), and then outlines the sequence and timing of tests

    4) Is test design part of the test plan, or are the two different?

    Yes, test design is a part of the test plan.
    The test plan is defined as an overall document providing direction for all testing activity.
    Test design refines the test approach and identifies the features to be covered by the design and its associated tests (according to the IEEE).
    Test plans and designs can be developed for any level of testing, and are often combined in the same document.

    5) Why plan tests?

    The primary purpose of test planning is to define the testing activities required to achieve sufficient confidence in a solution to put it into production. In the absence of a test plan, testing stops when you run out of time.
    Documented tests are repeatable, controllable, and ensure adequate test coverage when executed (please see the CBOK for the definitions of repeatable, controllable, and coverage).


    6) What are the main contents in test plan?

    Test Scope
    Test Objectives
    Assumptions
    Risk Analysis
    Test Design
    Roles & Responsibilities
    Test Schedule & Resources
    Test Data Management
    Test Environment
    Communication Approach
    Test Tools

    These are all possible contents of a test plan. Pick the most important ones when answering the question.




    July 07, 2003
     
    Definitions
    Smoke Testing (ensuring that all navigation through an application works properly);

    Configuration Testing (making sure the application works correctly on different operating systems, processors, or web browsers, as well as machines equipped with varying amounts of memory).
    July 04, 2003
     
    Regression Testing Goals
    1. To ensure that the current system will work when updates/changes are applied to the system.

    2. To implement lifecycle testing for end to end testing.
     
    What is COTS?
    The term "COTS" is meant to refer to things that one can buy, ready-made, from some manufacturer's virtual store shelf (e.g., through a catalogue or from a price list). It carries with it a sense of getting, at a reasonable cost, something that already does the job. It replaces the nightmares of developing unique system components with the promises of fast, efficient acquisition of cheap (or at least cheaper) component implementations.

    The salient characteristics of a COTS product are

    it exists a priori
    it is available to the general public
    it can be bought (or leased or licensed)

    Source: Carnegie Mellon Software Engineering Institute & An Architecture for COTS Based Software Systems
     
    Metrics for evaluating application system testing
    Metric = Formula

    Test Coverage = Number of units (KLOC/FP) tested / total size of the system
    Number of tests per unit size = Number of test cases per KLOC/FP
    Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria
    Defects per size = Defects detected / system size
    Test cost (in %) = Cost of testing / total cost *100
    Cost to locate defect = Cost of testing / the number of defects located
    Achieving Budget = Actual cost of testing / Budgeted cost of testing
    Defects detected in testing = Defects detected in testing / total system defects
    Defects detected in production = Defects detected in production/system size
    Quality of Testing = No of defects found during Testing/(No of defects found during testing + No of acceptance defects found after delivery) *100
    Effectiveness of testing to business = Loss due to problems / total resources processed by the system.
    System complaints = Number of third party complaints / number of transactions processed
    Scale of Ten = Assessment of testing by giving rating in scale of 1 to 10
    Source Code Analysis = Number of source code statements changed / total number of tests.
    Effort Productivity =
  • Test Planning Productivity = No of Test cases designed / Actual Effort for Design and Documentation
  • Test Execution Productivity = No of Test cycles executed / Actual Effort for testing
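
    A minimal sketch of a few of the formulas above in Python; the input figures are invented and the variable names simply mirror the metric names in the list.

```python
# Minimal sketch: a few of the test-evaluation metrics listed above,
# computed from invented project figures.
defects_in_testing = 120
acceptance_defects_after_delivery = 15
cost_of_testing = 40_000.0
total_project_cost = 250_000.0
units_tested_kloc = 48.0
total_system_kloc = 60.0

test_coverage = units_tested_kloc / total_system_kloc
test_cost_pct = cost_of_testing / total_project_cost * 100
cost_to_locate_defect = cost_of_testing / defects_in_testing
quality_of_testing = (
    defects_in_testing
    / (defects_in_testing + acceptance_defects_after_delivery)
    * 100
)

print(f"Test coverage:         {test_coverage:.0%}")
print(f"Test cost:             {test_cost_pct:.1f}% of total cost")
print(f"Cost to locate defect: ${cost_to_locate_defect:,.0f}")
print(f"Quality of testing:    {quality_of_testing:.1f}%")
```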


    July 03, 2003
     
    How Many Bugs Do Regression Tests Find?
    What percentage of bugs are found by rerunning tests? That is, what's the value of this equation:

    100 × (number of bugs in a release found by re-executing tests) / (number of bugs found by running all tests, for the 1st or Nth time)?
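
    As a purely illustrative calculation (the counts below are invented, not taken from the article):

```python
# Invented example: of 80 bugs found in a release, 12 came from re-executed tests.
bugs_from_rerun_tests = 12
bugs_from_all_tests = 80
print(f"{100 * bugs_from_rerun_tests / bugs_from_all_tests:.0f}% found by regression re-runs")
```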

    Excellent article; click on the title for more.
     
    Testing Questions
    Testing Philosophy

  • What is software quality assurance?
  • What is the value of a testing group? How do you justify your work and budget?
  • What is the role of the test group vis-à-vis documentation, tech support, and so forth?
  • How much interaction with users should testers have, and why?
  • How should you learn about problems discovered in the field, and what should you learn from those problems?
  • What are the roles of glass-box and black-box testing tools?
  • What issues come up in test automation, and how do you manage them?
  • What development model should programmers and the test group use?
  • How do you get programmers to build testability support into their code?
  • What is the role of a bug tracking system?

    Technical Breadth

  • What are the key challenges of testing?
  • Have you ever completely tested any part of a product? How?
  • Have you done exploratory or specification-driven testing?
  • Should every business test its software the same way?
  • Discuss the economics of automation and the role of metrics in testing.
  • Describe components of a typical test plan, such as tools for interactive products and for database products, as well as cause-and-effect graphs and data-flow diagrams.
  • When have you had to focus on data integrity?
  • What are some of the typical bugs you encountered in your last assignment?

    Project Management

  • How do you prioritize testing tasks within a project?
  • How do you develop a test plan and schedule? Describe bottom-up and top-down approaches.
  • When should you begin test planning?
  • When should you begin testing?
  • Do you know of metrics that help you estimate the size of the testing effort?
  • How do you scope out the size of the testing effort?
  • How many hours a week should a tester work?
  • How should your staff be managed? How about your overtime?
  • How do you estimate staff requirements?
  • What do you do (with the project tasks) when the schedule fails?
  • How do you handle conflict with programmers?
  • How do you know when the product is tested well enough?
    Deciding on the Correct Ratio of Developers to Testers
    Many of us would like a precise answer to the question: "What's the correct staffing ratio for developers to testers in my product development organization?" Usually though, the only answer is "It depends". Your answer depends on your situation: the kind of project you're working on, your schedule constraints, the culture you work in, and the quality expectations for the product. This paper discusses the thought process involved in deciding on your correct staffing ratios.


     
    Why Software Fails
    This note summarizes conclusions from a three year study about why released software fails. Our method was to obtain mature-beta or retail versions of real software applications and stress test them until they fail. From an analysis of the causal faults, we have synthesized four reasons why software fails. This note presents these four classes of failures and discusses the challenges they present to developers and testers. The implications for software testers are emphasized


     
    Success with Test Automation
    This paper describes several principles for test automation. These principles were used to develop a system of automated tests for a new family of client/server applications. It encourages applying standard software development processes to test automation. It identifies criteria for selecting appropriate tests to be automated and advantages of a Testcase Interpreter. It describes how cascading failures prevent unattended testing. It identifies the most serious bug that can affect test automation systems and describes ways to avoid it. It circumscribes reasonable limits on test automation goals.
     
    Totally Data-Driven Automated Testing
    The purpose of this document is to provide the reader with a clear understanding of what is actually required to successfully implement cost-effective automated testing. Rather than engage in a theoretical dissertation on this subject, I have endeavored to be as straightforward and brutally honest as possible in discussing the issues, problems, necessities, and requirements involved in this enterprise.
     
    Testing Papers
    An excellent resource for testing and quality assurance papers. Some of the best papers are:

  • An Introduction to Software Testing
  • Software Testing and Software Development Lifecycles
  • Why Bother to Unit Test?
  • Organisational Approaches for Unit Testing
  • Designing Unit Test Cases
  • Host / Target Testing
  • Structural Coverage Metrics: Their Strengths and Weaknesses
  • Complete Application Testing
  • A Strategy for Testing C++
  • C++ - It's Testing Jim, But Not As We Know It!
  • Testing Embedded C++ with Cantata++

    Testing Java Applets and Applications
    Very good presentation on Testing Java Applets and Applications by Kevin A. Smith, Software Test Engineer, JavaSoft, Sun Microsystems, Inc.
     
    Black-Box Testing Techniques
    "Boundary value analysis" one of the most fruitful forms of black-box testing, requires that test cases be generated which are on, and immediately around, the boundaries of the input and output for a given piece of software.

    "Equivalence class partitioning" is a formalization of the way many people already test software. An equivalence class is a collection of items which can all be regarded as identical at a given level of abstraction, e.g., a set of data items which will all evoke the same general behavior from a given software module.

    "cause-effect graphing" - In situations where there are many different combinations of inputs possible suggests a black-box technique called "cause-effect graphing." This technique helps software engineers identify those specific combinations of inputs which will be the most error-prone.
     
    White-box Testing
    White-box testing is the testing of the underlying implementation of a piece of software (e.g., source code) without regard to the specification (external description) for that piece of software. The goal of white-box testing of source code is to identify such items as (unintentional) infinite loops, paths through the code which should be allowed, but which cannot be executed (e.g., [Frankel and Weyuker, 1987]), and dead (unreachable) code.

    Probably the most commonly used example of a white-box testing technique is "basis path testing." McCabe's approach requires that we determine the number of linearly independent paths through a piece of software (what he refers to as the cyclomatic complexity), and use that number, coupled with a graph of the control flow through the same piece of software, to come up with a set of test cases which will cause executable statements to be executed at least once.

    McCabe's approach is an attempt to systematically address an even older concept in white-box testing, i.e., coverage. Coverage is simply a measure of the number and type of statements executed, as well as how these statements are executed. Glenford Myers describes several types of coverage. "Statement coverage," the weakest acceptable form of coverage, requires that enough test cases be written so that we can be assured that all executable statements will be executed at least once. "Condition coverage" requires that all statements be executed at least once, and that all binary decisions take a true and a false outcome at least once.
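
    The difference between statement coverage and condition coverage is easiest to see on a tiny function. In the sketch below (the function and inputs are invented for illustration), a single test executes every statement, but a second test is needed before the decision has taken both a true and a false outcome.

```python
# Minimal sketch: statement coverage vs. condition/decision coverage
# on an invented function.
def discount(total: float, is_member: bool) -> float:
    price = total
    if total > 100 and is_member:   # the only decision in the function
        price = total - 10.0        # flat member discount (assumed rule)
    return price

# Statement coverage: this single case executes every statement,
# because the decision is true and the body of the if runs.
assert discount(150.0, True) == 140.0

# Condition coverage additionally needs the decision to be false at least once.
assert discount(150.0, False) == 150.0

print("both coverage goals exercised")
```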

     
    Testing & Debugging
    Testing is the process of examining something with the intention of finding errors. While testing may reveal a symptom of an error, it may not uncover the exact cause of the error.

    Debugging is the process of locating the exact cause of an error, and removing that cause.

     
    "Testing proves the presence, not the absence, of bugs."

    -- E.W. Dijkstra

    "Absence of evidence is not evidence of absence."

    -- Source Unknown

     
    Thesis - Testing of a Computer Program
    Good thesis on Testing of a Computer Program on the Example of a Medical Application with Diversification and other Methods. Contains info regarding Psychology and Software Tests, Kinds of Software Testing, and black/white box testing, etc.


