The Real World of Software Testing
October 06, 2003
 
Quality Gurus
The early Americans

W Edwards Deming introduced the concepts of variation to the Japanese, along with a systematic approach to problem solving which later became known as the Deming or PDCA cycle. Later, in the West, he concentrated on management issues and produced his famous 14 Points. Towards the end of his life he attempted a summary of his 60 years' experience in his System of Profound Knowledge.

Deming encouraged the Japanese to adopt a systematic approach to problem solving, which later became known as the Deming or PDCA (Plan, Do, Check, Action) cycle. He also pushed senior managers to become actively involved in their company's quality improvement programmes.

Deming produced his 14 Points for Management, in order to help people understand and implement the necessary transformation. Deming said that adoption of, and action on, the 14 points are a signal that management intend to stay in business. They apply to small or large organisations, and to service industries as well as to manufacturing. However the 14 points should not be seen as the whole of his philosophy, or as a recipe for improvement. They need careful discussion in the context of one's own organisation.

Before his death Deming appears to have attempted a summary of his 60 years' experience. This he called the System of Profound Knowledge. It describes four interrelated parts:

Appreciation for a system
This emphasises the need for managers to understand the relationships between functions and activities. Everyone should understand that the long-term aim is for everybody to gain - employees, shareholders, customers, suppliers, and the environment. Failure to accomplish the aim causes loss to everybody in the system.

Knowledge of statistical theory
This includes knowledge about variation, process capability, control charts, interactions and loss function. All these need to be understood to accomplish effective leadership, teamwork etc.

Theory of knowledge
All plans require prediction based on past experience. An example of success cannot be copied successfully unless the theory behind it is understood.

Knowledge of psychology
It is necessary to understand human interactions. Differences between people must be used by leaders for optimisation. People have intrinsic motivation to succeed in many areas. Extrinsic motivators in employment, such as pay rises and performance grading, may smother this intrinsic motivation, although they are sometimes viewed as an easy way out for managers.

Joseph M Juran focused on Quality Control as an integral part of management control in his lectures to the Japanese in the early 1950s. He believes that Quality does not happen by accident, it must be planned, and that Quality Planning is part of the trilogy of planning, control and improvement. He warns that there are no shortcuts to quality.

There are many aspects to Juran's message on quality. Intrinsic to it is the belief that quality does not happen by accident; it must be planned. His recent book Juran on Planning for Quality is perhaps the definitive guide to Juran's current thoughts and his structured approach to company-wide quality planning. His earlier Quality Control Handbook was much more technical in nature.

Juran sees quality planning as part of the quality trilogy of quality planning, quality control and quality improvement. The key elements in implementing company-wide strategic quality planning are in turn seen as identifying customers and their needs; establishing optimal quality goals; creating measurements of quality; planning processes capable of meeting quality goals under operating conditions; and producing continuing results in improved market share, premium prices, and a reduction of error rates in the office and factory.

Juran's Quality Planning Road Map consists of the following steps:

  • Identify who are the customers.
  • Determine the needs of those customers.
  • Translate those needs into our language.
  • Develop a product that can respond to those needs.
  • Optimise the product features so as to meet our needs as well as customer needs.
  • Develop a process which is able to produce the product.
  • Optimise the process.
  • Prove that the process can produce the product under operating conditions.
  • Transfer the process to Operations.

    Illustration of Quality Trilogy via a Control Chart

    Juran concentrates not just on the end customer, but identifies other external and internal customers. This affects his concept of quality, since one must also consider the 'fitness for use' of the interim product for the following internal customers. He illustrates this idea via the Quality Spiral.

    His formula for results is:

  • Establish specific goals to be reached.
  • Establish plans for reaching the goals.
  • Assign clear responsibility for meeting the goals.
  • Base the rewards on results achieved.

    Dr Juran warns that there are no shortcuts to quality and is sceptical of companies that rush into applying Quality Circles, since he doubts their effectiveness in the West. He believes that the majority of quality problems are the fault of poor management, rather than poor workmanship on the shop-floor. In general, he believes that management controllable defects account for over 80% of the total quality problems. Thus he claims that Philip Crosby's Zero Defects approach does not help, since it is mistakenly based on the idea that the bulk of quality problems arise because workers are careless and not properly motivated.

    Armand V Feigenbaum is the originator of Total Quality Control. He sees quality control as a business method rather than a technical one, and believes that quality has become the single most important force leading to organisational success and growth.

    Dr Armand V Feigenbaum is the originator of Total Quality Control. The first edition of his book Total Quality Control was completed whilst he was still a doctoral student at MIT.

    In his book Quality Control: Principles, Practices and Administration, Feigenbaum strove to move away from the then primary concern with technical methods of quality control, to quality control as a business method. Thus he emphasised the administrative viewpoint and considered human relations as a basic issue in quality control activities. Individual methods, such as statistics or preventive maintenance, are seen as only segments of a comprehensive quality control programme.

    Quality control itself is defined as:
    'An effective system for co-ordinating the quality maintenance and quality improvement efforts of the various groups in an organisation so as to enable production at the most economical levels which allow for full customer satisfaction.'

    He stresses that quality does not mean best but best for the customer use and selling price. The word control in quality control represents a management tool with 4 steps:

  • Setting quality standards
  • Appraising conformance to these standards
  • Acting when standards are exceeded
  • Planning for improvements in the standards.

    Quality control is seen as entering into all phases of the industrial production process, from customer specification and sale through design, engineering and assembly, and ending with shipment of product to a customer who is happy with it. Effective control over the factors affecting product quality is regarded as requiring controls at all important stages of the production process. These controls or jobs of quality control can be classified as:

  • New-design control
  • Incoming material control
  • Product control
  • Special process studies.

    Quality is seen as having become the single most important force leading to organisational success and company growth in national and international markets. Further, it is argued that:

    Quality is in its essence a way of managing the organisation and that, like finance and marketing, quality has now become an essential element of modern management.

    Thus a Total Quality System is defined as:

    The agreed company-wide and plantwide operating work structure, documented in effective, integrated technical and managerial procedures, for guiding the co-ordinated actions of the people, the machines and the information of the company and plant in the best and most practical ways to assure customer quality satisfaction and economical costs of quality.

    Operating quality costs are divided into:

  • Prevention costs including quality planning.
  • Appraisal costs including inspection.
  • Internal failure costs including scrap and rework.
  • External failure costs including warranty costs, complaints etc.

    Reductions in operating quality costs result from setting up a total quality system for two reasons:

  • The lack of existing, effective, customer-orientated standards may mean the current quality of products is not optimal for their use
  • Expenditure on prevention costs can lead to a severalfold reduction in internal and external failure costs.

    The new 40th Anniversary edition of Dr A V Feigenbaum's book, Total Quality Control, now further defines TQC for the 1990s in the form of ten crucial benchmarks for total quality success. These are that:

  • Quality is a company-wide process.
  • Quality is what the customer says it is.
  • Quality and cost are a sum, not a difference.
  • Quality requires both individual and team zealotry.
  • Quality is a way of managing.
  • Quality and innovation are mutually dependent.
  • Quality is an ethic.
  • Quality requires continuous improvement.
  • Quality is the most cost-effective, least capital-intensive route to productivity.
  • Quality is implemented with a total system connected with customers and suppliers.

    These are the ten benchmarks for total quality in the 1990s. They make quality a way of totally focusing the company on the customer - whether it be the end user or the man or woman at the next work station or next desk. Most importantly, they provide the company with foundation points for successful implementation of its international quality leadership.
  • September 17, 2003
     
    Manual or Automated?
    Summary: Automated test tools are powerful aids to improving the return on the testing investment when used wisely. Some tests inherently require an automated approach to be effective, but others must be manual. In addition, automated testing projects that fail are expensive and politically dangerous. How can we recognize whether to automate a test or run it manually, and how much money should we spend on a test?

    When Test Automation Makes Sense

    Let’s start with the tests that ideally are automated. These include:


    Other tests that are well-suited for automation exist, such as the static testing of complexity and code standards compliance that I mentioned in the previous article. In general, automated tests have higher upfront costs—tools, test development, environments, and so forth—and lower costs to repeat the test.
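    As a rough illustration of that cost tradeoff, the sketch below estimates the break-even point at which automating a test becomes cheaper than repeating it manually. The function name and all cost figures are assumptions made up for this example, not numbers from the article.

```python
# Illustrative sketch: when does automating a test pay for itself?
# All cost figures are invented example values.

def break_even_runs(automation_upfront, cost_per_auto_run, cost_per_manual_run):
    """Return the first run count at which automation is cheaper overall,
    or None if each manual run is no more expensive than an automated one."""
    saving_per_run = cost_per_manual_run - cost_per_auto_run
    if saving_per_run <= 0:
        return None
    return int(automation_upfront / saving_per_run) + 1

# Example: $2,000 to develop the automated test, $5 per automated run,
# $200 of tester time per manual run.
print(break_even_runs(2000, 5, 200))  # -> 11 runs
```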

    When to Focus on Manual Testing


    Wildcards

    In some cases, tests can be done manually, be automated, or both.

    Higher per-test costs and needs for human skills, judgment, and interaction push towards manual testing. A need to repeat tests many times or reduce the cycle time for test execution pushes towards automated testing.

    Reasons to Be Careful with Automation

    Automated testing is a huge investment, one of the biggest that organizations make in testing. Tool licenses can easily hit six or seven figures. Neophytes can’t use most of these tools—regardless of what any glossy test tool brochure says—so training, consulting, and expert contractors can cost more than the tools themselves. Then there’s maintenance of the test scripts, which generally is more difficult and time consuming than maintaining manual test cases.


    September 16, 2003
     
    Investing in Software Testing
    What Does Quality Cost?

    The title of Phil Crosby's book says it all: Quality Is Free. Why is quality free? Like Crosby and J.M. Juran, Jim Campenella illustrates a technique for analyzing the costs of quality in Principles of Quality Costs. Campenella breaks down those costs as follows:

    Cost of Quality = Cost of conformance + Cost of nonconformance

    Conformance Costs include Prevention Costs and Appraisal Costs.
    Prevention costs include money spent on quality assurance tasks like training, requirements and code reviews, and other activities that promote good software. Appraisal costs include money spent on planning test activities, developing test cases and data, and executing those test cases once.

    Nonconformance costs come in two flavors: Internal Failures and External Failures. The costs of internal failure include all expenses that arise when test cases fail the first time they are run, as they often do. A programmer incurs a cost of internal failure while debugging problems found during her own unit and component testing.

    Once we get into formal testing in an independent test team, the costs of internal failure increase. Think through the process: The tester researches and reports the failure, the programmer finds and fixes the fault, the release engineer produces a new release, the system administration team installs that release in the test environment, and the tester retests the new release to confirm the fix and to check for regression.

    The costs of external failure are those incurred when, rather than a tester finding a bug, the customer does. These costs will be even higher than those associated with either kind of internal failure, programmer-found or tester-found. In these cases, not only does the same process described for tester-found bugs occur, but you also incur the technical support overhead and the more expensive process of releasing a fix to the field rather than to the test lab. In addition, consider the intangible costs: angry customers, damage to the company image, lost business, and maybe even lawsuits.

    Two observations lay the foundation for the enlightened view of testing as an investment. First, like any cost equation in business, we will want to minimize the cost of quality. Second, while it is often cheaper to prevent problems than to repair them, if we must repair problems, internal failures cost less than external failures.
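    A minimal sketch of this cost-of-quality arithmetic, using the four cost categories described above; the dollar figures are invented purely for illustration.

```python
# Illustrative cost-of-quality roll-up (all figures are invented examples).
costs = {
    "prevention": 40_000,         # training, reviews, other QA tasks
    "appraisal": 60_000,          # test planning, test development, first test runs
    "internal_failure": 90_000,   # bug reports, fixes, new builds, retesting
    "external_failure": 150_000,  # support calls, field fixes, unhappy customers
}

cost_of_conformance = costs["prevention"] + costs["appraisal"]
cost_of_nonconformance = costs["internal_failure"] + costs["external_failure"]
cost_of_quality = cost_of_conformance + cost_of_nonconformance

print("Cost of conformance:   ", cost_of_conformance)
print("Cost of nonconformance:", cost_of_nonconformance)
print("Cost of quality:       ", cost_of_quality)
```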

    The Risks to System Quality

    Myriad risks - i.e., factors possibly leading to loss or injury - menace software development. When these risks become realities, some projects fail. Wise project managers plan for and manage risks. In any software development project, we can group risks into four categories.
    Financial risks: How might the project overrun the budget?
    Schedule risks: How might the project exceed the allotted time?
    Feature risks: How might we build the wrong product?
    Quality risks: How might the product lack customer-satisfying behaviors or possess customer-dissatisfying behaviors?

    Testing allows us to assess the system against the various risks to system quality, which allows the project team to manage and balance quality risks against the other three areas.

    Classes of Quality Risks
    It's important for test professionals to remember that many kinds of quality risks exist. The most obvious is functionality: Does the software provide all the intended capabilities? For example, a word processing program that does not support adding new text in an existing document is worthless.
    While functionality is important, remember my self-deprecating anecdote in the last article. In that example, my test team and I focused entirely on functionality to the exclusion of important items like installation. In general, it's easy to over-emphasize a single quality risk and misalign the testing effort with customer usage. Consider the following examples of other classes of quality risks.
  • Use cases: working features fail when used in realistic sequences.
  • Robustness: common errors are handled improperly.
  • Performance: the system functions properly, but too slowly.
  • Localization: problems with supported languages, time zones, currencies, etc.
  • Data quality: a database becomes corrupted or accepts improper data.
  • Usability: the software's interface is cumbersome or inexplicable.
  • Volume/capacity: at peak or sustained loads, the system fails.
  • Reliability: too often -- especially at peak loads -- the system crashes, hangs, kills sessions, and so forth.

    Tailoring Testing to Quality Risk Priority

    To provide maximum return on the testing investment, we have to adjust the amount of time, resources, and attention we pay to each risk based on its priority. The priority of a risk to system quality arises from the extent to which that risk can and might affect the customers’ and users’ experiences of quality. In other words, the more likely a problem or the more serious the impact of a problem, the more testing that problem area deserves.

    You can prioritize in a number of ways. One approach I like is to use a descending scale from one (most risky) to five (least risky) along three dimensions.

    Severity: How dangerous is a failure of the system in this area?
    Priority: How much does a failure of the system in this area compromise the value of the product to customers and users?
    Likelihood: What are the odds that a user will encounter a failure in this area, either due to usage profiles or the technical risk of the problem?

    Many such scales exist and can be used to quantify levels of quality risk.
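    One way to combine these three ratings into a single number for ranking, shown in the sketch below, is simply to multiply them; treating a lower product as a riskier area is an assumption made for illustration, not a rule the article prescribes.

```python
# Hypothetical risk-priority ranking: severity, priority and likelihood are each
# rated from 1 (most risky) to 5 (least risky); a lower product suggests the
# quality risk deserves more testing time, resources and attention.
quality_risks = {
    "database accepts corrupt customer data": (1, 1, 3),
    "reports render slowly at month end":     (3, 2, 2),
    "typo in a rarely used help screen":      (5, 5, 4),
}

ranked = sorted(quality_risks.items(),
                key=lambda item: item[1][0] * item[1][1] * item[1][2])

for name, (severity, priority, likelihood) in ranked:
    print(f"{severity * priority * likelihood:>4}  {name}")
```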

    Analyzing Quality Risks

    A slightly more formal approach is the one described in ISO 9126, a standard published by the International Organization for Standardization. This standard proposes that the quality of a software system can be measured along six major characteristics:

    Functionality: Does the system provide the required capabilities?
    Reliability: Does the system work as needed when needed?
    Usability: Is the system intuitive, comprehensible, and handy to the users?
    Efficiency: Is the system sparing in its use of resources?
    Maintainability: Can operators, programmers, and customers upgrade the system as needed?
    Portability: Can the system be moved to new platforms and environments as needed?

    Not every quality risk can be a high priority. When discussing risks to system quality, I don’t ask people, "Do you want us to make sure this area works?" In the absence of tradeoffs, everyone wants better quality. Setting the standard for quality higher requires more money spent on testing, pushes out the release date, and can distract from more important priorities—like focusing the team on the next release. To determine the real priority of a potential problem, ask people, "How much money, time, and attention would you be willing to give to problems in this area? Would you pay for an extra tester to look for bugs in this area, and would you delay shipping the product if that tester succeeded in finding bugs?" While achieving better quality generates a positive return on investment in the long run, as with the stock market, you get a better return on investment where the risk is higher. Happily, unlike the stock market, the risk of your test effort failing does not increase when you take on the most important risks to system quality, but rather your chances of test success increase.




  •  
    XP Testing Without XP: Taking Advantage of Agile Testing Practices
    Extreme Programming is a discipline of software development based on values of simplicity, communication, feedback, and courage. It works by bringing the whole team together in the presence of simple practices, with enough feedback to enable the team to see where they are and to tune the practices to their unique situation. (www.xprogramming.com)

    How XP Testing is Different
    XP testing is different in many ways from ‘traditional’ testing. The biggest difference is that on an XP project, the entire development team takes responsibility for quality. This means the whole team is responsible for all testing tasks, including acceptance test automation. When testers and programmers work together, the approaches to test automation can be pretty creative!

    As Ron Jeffries says, XP isn’t about ‘roles’, it’s about a tight integration of skills and behaviors. Testing is an integrated activity on an XP team. The development team needs continual feedback, with the customer expressing their needs in terms of tests, and programmers expressing design and code in terms of tests. On an XP team, the tester will play both the customer and programmer ‘roles’. She’ll focus on acceptance testing and work to transfer her testing and quality assurance skills to the rest of the team.

    XP Tester Activities

    Here are some activities testers perform on XP teams.

  • Negotiate quality with the customer (it’s not YOUR standard of quality, it’s what the customer desires and is willing to pay for!)
  • Clarify stories, flush out hidden assumptions
  • Enable accurate estimates for both programming and testing tasks
  • Make sure the acceptance tests verify the quality specified by the customer
  • Help the team automate tests
  • Help the team produce testable code
  • Form an integral part of the continuous feedback loop that keeps the team on track.

    The Nature of XP Testing

    The biggest difference between XP projects and most ‘traditional’ software development projects is the concept of test-driven development. With XP, every chunk of code is covered by unit tests, which must all pass all the time. The absence of unit-level and regression bugs means that testers actually get to focus on their job: making sure the code does what the customer wanted. The acceptance tests define the level of quality the customer has specified (and paid for!)

    Testers who are new to XP should keep in mind the XP values: communication, simplicity, feedback and courage. Courage may be the most important. As a tester, the idea of writing, automating and executing tests in speedy two or three week iterations, without the benefit of traditional requirements documents, can be daunting.

    Testers need courage to let the customers make mistakes and learn from them. They need courage to determine the minimum testing that will prove the successful completion of a story. They need courage to ask their teammates to pair for test automation. They need courage to remind the team that we are all responsible for quality and testing. To bolster this courage, testers on XP teams should remind themselves that an XP tester is never alone – your team is always there to help you!

    For more information on XP Testing, Please click here "XP Testing Without XP: Taking Advantage of Agile Testing Practices"

  • September 15, 2003
     
    FAQ - CSTE - Set 1
    Will automated testing tools make testing easier?
    Possibly. For small projects, the time needed to learn and implement them may not be worth it. For larger projects, or ongoing long-term projects, they can be valuable.

    What makes a good test engineer?
  • 'Test to break' attitude
  • An ability to take the point of view of the customer
  • A strong desire for quality
  • An attention to detail
  • Tact and diplomacy
  • An ability to communicate with both technical and non-technical people
  • Understanding of the software development process

    What is a 'test case'?
    A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
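    As a small illustration of the particulars listed above, here is one possible way to represent a test case as a record; the field names simply mirror the list and are not a prescribed template.

```python
# Minimal sketch of a test-case record mirroring the fields described above.
from dataclasses import dataclass
from typing import List

@dataclass
class TestCase:
    identifier: str
    name: str
    objective: str
    conditions_setup: str
    input_data: List[str]
    steps: List[str]
    expected_results: List[str]

login_test = TestCase(
    identifier="TC-042",
    name="Valid login",
    objective="Verify that a registered user can log in",
    conditions_setup="User account 'demo' exists and is active",
    input_data=["username=demo", "password=secret"],
    steps=["Open the login page", "Enter the credentials", "Click 'Log in'"],
    expected_results=["The user is taken to the home page"],
)
print(login_test.identifier, "-", login_test.name)
```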

    What should be done after a bug is found?
    The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes did not create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes.

    What is 'configuration management'?
    Configuration management covers the processes used to control, coordinate, and track: code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, changes made to them, and who makes the changes

    What if the software is so buggy it can't really be tested at all?
    The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules, and indicates deeper problems in the software development process (such as insufficient unit testing or insufficient integration testing, poor design, improper build or release procedures, etc.) managers should be notified, and provided with some documentation as evidence of the problem.

    How can it be known when to stop testing?
    This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are listed below (a small status-check sketch follows the list):
  • Deadlines (release deadlines, testing deadlines, etc.)
  • Test cases completed with certain percentage passed
  • Test budget depleted
  • Coverage of code/functionality/requirements reaches a specified point
  • Bug rate falls below a certain level
  • Beta or alpha testing period ends
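    A small sketch of how such factors might be reviewed together when deciding whether to stop; the factor names and current values are invented, and treating them as a simple status checklist is just one possible policy.

```python
# Hypothetical status check: which common "stop testing" factors currently apply?
# Factor names and values are invented for illustration.
stop_factors = {
    "Release or testing deadline reached": False,
    "Planned test cases executed with required percentage passing": True,
    "Test budget depleted": False,
    "Coverage of code/functionality/requirements at target": True,
    "Bug arrival rate below agreed level": False,
    "Beta or alpha testing period ended": False,
}

met = [name for name, applies in stop_factors.items() if applies]
print(f"{len(met)} of {len(stop_factors)} stop factors currently apply:")
for name in met:
    print(" -", name)
```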

    What if there isn't enough time for thorough testing?
    Use risk analysis to determine where testing should be focused.
    Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgement skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:
  • Which functionality is most important to the project's intended purpose?
  • Which functionality is most visible to the user?
  • Which functionality has the largest safety impact?
  • Which functionality has the largest financial impact on users?
  • Which aspects of the application are most important to the customer?
  • Which aspects of the application can be tested early in the development cycle?
  • Which parts of the code are most complex, and thus most subject to errors?
  • Which parts of the application were developed in rush or panic mode?
  • Which aspects of similar/related previous projects caused problems?
  • Which aspects of similar/related previous projects had large maintenance expenses?
  • Which parts of the requirements and design are unclear or poorly thought out?
  • What do the developers think are the highest-risk aspects of the application?
  • What kinds of problems would cause the worst publicity?
  • What kinds of problems would cause the most customer service complaints?
  • What kinds of tests could easily cover multiple functionalities?
  • Which tests will have the best high-risk-coverage to time-required ratio?

    What can be done if requirements are changing continuously?
    A common problem and a major headache.
  • Work with the project's stakeholders early on to understand how requirements might change so that alternate test plans and strategies can be worked out in advance, if possible.
  • It's helpful if the application's initial design allows for some adaptability so that later changes do not require redoing the application from scratch.
  • If the code is well-commented and well-documented this makes changes easier for the developers.
  • Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.
  • The project's initial schedule should allow for some extra time commensurate with the possibility of changes.
  • Try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the 'Phase 1' version.
  • Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.
  • Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted - after all, that's their job.
  • Balance the effort put into setting up automated testing with the expected effort required to re-do them to deal with changes.
  • Try to design some flexibility into automated test scripts.
  • Focus initial automated testing on application aspects that are most likely to remain unchanged.
  • Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
  • Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans)
  • Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding of the added risk that this entails).

    What if the application has functionality that wasn't in the requirements?
    It may take serious effort to determine if an application has significant unexpected or hidden functionality, and it would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, for example, it may not be a significant risk.

    How can Software QA processes be implemented without stifling productivity?
    By implementing QA processes slowly over time, using consensus to reach agreement on processes, and adjusting and experimenting as an organization grows and matures, productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection, panics and burn-out will decrease, and there will be improved focus and less wasted effort. At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings, and promote training as part of the QA process. However, no one - especially talented technical types - likes rules or bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning and development will be needed, but less time will be required for late-night bug-fixing and calming of irate customers.

    How does a client/server environment affect testing?
    Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial tools to assist with such testing

    How can World Wide Web sites be tested?
    Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between html pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, javascript, plug-in applications), and applications that run on the server side (such as cgi scripts, database interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:
  • What are the expected loads on the server (e.g., number of hits per unit time?), and what kind of performance is required under such loads (such as web server response time, database query response times). What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?
  • Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
  • What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?
  • Will down time for server and content maintenance/upgrades be allowed? how much?
  • What kinds of security (firewalls, encryptions, passwords, etc.) will be required and what is it expected to do? How can it be tested?
  • How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
  • What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
  • Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
  • Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
  • How will internal and external links be validated and updated? how often?
  • Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?
  • How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?
  • How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?

    How is testing affected by object-oriented designs?
    Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects. If the application was well-designed this can simplify test design.

    What is Extreme Programming and what's it got to do with testing?
    Extreme Programming (XP) is a software development approach for small teams on risk-prone projects with unstable requirements. It was created by Kent Beck, who described the approach in his book 'Extreme Programming Explained'. Testing ('extreme testing') is a core aspect of Extreme Programming. Programmers are expected to write unit and functional test code first - before the application is developed. Test code is under source control along with the rest of the code. Customers are expected to be an integral part of the project team and to help develop scenarios for acceptance/black box testing. Acceptance tests are preferably automated, and are modified and rerun for each of the frequent development iterations. QA and test personnel are also required to be an integral part of the project team. Detailed requirements documentation is not used, and frequent re-scheduling, re-estimating, and re-prioritizing are expected.



  • August 26, 2003
     
    Reviews, Inspections, and Walkthroughs
    In a review, a work product is examined for defects by individuals other than the person who produced it. A Work Product is any important deliverable created during the requirements, design, coding, or testing phase of software development.

    Research shows that reviews are one of the best ways to ensure quality requirements, giving you as high as a 10 to 1 return on investment. Reviews help you to discover defects and to ensure product compliance to specifications, standards or regulations

    Software Inspections are a disciplined engineering practice for detecting and correcting defects in software artifacts, and preventing their leakage into field operations.

    Software Inspections are a reasoning activity performed by practitioners playing the defined roles of Moderator, Recorder, Reviewer, Reader, and Producer.

    Moderator: Responsible for ensuring that the inspection procedures are performed throughout the entire inspection process. The responsibilities include
  • Verifying the work product's readiness for inspection
  • Verifying that the entry criteria are met
  • Assembling an effective inspection team
  • Keeping the inspection meeting on track
  • Verifying that the exit criteria are met

    Recorder: The Recorder will document all defects that arise from the inspection meeting. This documentation will include where the defect was found. Additionally, every defect is assigned a defect category and type.

    Reviewer: All of the Inspection Team individuals are also considered to play the Reviewer role, independent of other roles assigned. The Reviewer role is responsible for analyzing and detecting defects within the work product.

    Reader: The reader is responsible for leading the Inspection Team through the inspection meeting by reading aloud small logical units, paraphrasing where appropriate

    Producer: The person who originally constructed the work product. The individual that assumes the role of Producer will be ultimately responsible for updating the work product after the inspection.

    In a Walkthrough, the producer describes the product and asks for comments from the participants. These gatherings generally serve to inform participants about the product rather than correct it.



  •  
    The Software Inspection Process
    A great place for the Software Inspection Process. This site gives a detailed description of all the stages in the Software Inspection Process. It also lists the different reports produced after an inspection.
    August 22, 2003
     
    Defect Management Process
    Keeping in mind the philosophies and goals developed in QAI research report number 8, Mosaic Inc. developed a multi-step approach to defect management. The major steps involved in the process are:

    Defect Prevention
    Implementation of techniques, methodology, and standard processes to reduce the risk of defects
  • Identify Critical Risks
    Identify the critical risks facing the project or system. These are the types of defects that could jeopardize the successful construction, delivery and/or operation of the system.
  • Estimate Expected Impact
    For each critical risk, make an assessment of the financial impact if the risk becomes a problem (a rough sketch of this estimate follows the list below).
  • Minimize Expected Impact
    Once the most important risks are identified try to eliminate each risk. For risks that cannot be eliminated, reduce the probability that the risk will become a problem and the financial impact should that happen.
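    A rough sketch of the expected-impact estimate described in the list above; multiplying the probability that a risk becomes a problem by its financial impact is a standard risk-exposure calculation, and the risks and figures here are invented.

```python
# Illustrative risk-exposure estimate: expected impact = probability x cost.
# The critical risks, probabilities and costs are invented examples.
critical_risks = [
    # (description, probability of becoming a problem, financial impact)
    ("Billing interface loses transactions",             0.20, 500_000),
    ("Data conversion corrupts legacy customer records", 0.05, 900_000),
    ("Peak-load response times breach contract limits",  0.30, 120_000),
]

for description, probability, impact in sorted(
        critical_risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{probability * impact:>10,.0f}  {description}")
```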

    Deliverable Baseline
    A deliverable (e.g. work product) is baselined when it reaches a predefined milestone in its development.
    Errors caught before a deliverable is baselined would not be considered defects.
    Deliverable baselining involves the following activities:
  • Identify Key Deliverables
    Select those deliverables that will be baselined and the point within the development process where the deliverable will be baselined.
  • Define Standards for Each Deliverable:
    Set the requirements for each deliverable and the criteria that must be met before the deliverable can be baselined.

    Defect Discovery
  • Find Defect
    Discover defects before they become problems.
    Techniques to find defects can be divided into three categories:
    Static Techniques - Code reviews
    Dynamic Techniques - Executing test cases
    Operational Techniques - Defects found by users, customers, or control personnel
  • Report Defect
  • Acknowledge Defect

    Defect Resolution
  • Prioritize Risk
    Developers determine the importance of fixing a particular defect, using a three-level scale:
    Critical
    Major
    Minor
  • Schedule Fix and Fix Defect
    Developers schedule when to fix a defect. Then developers should fix defects in order of importance
  • Report Resolution
    Developers notify all relevant parties how and when the defect was repaired along with other pertinent information such as:
    The nature of the fix,
    When the fix will be released, and
    How the fix will be released.

    Process Improvement

    Management Reporting (Parallel activity for the above 5 steps)
    It is important that the defect information, which is a natural by-product of the defect management process, be analyzed and communicated to both project management and senior management. This could take the form of defect rates, defect trends, types of defects, failure costs, etc. From a tactical perspective, the Defect Arrival Rate (the rate at which new defects are being discovered) is a very useful metric that provides insight into a project's likelihood of making its target date objectives. Defect Removal Efficiency is also considered to be one of the most useful metrics; however, it cannot be calculated until the system is installed. Defect Removal Efficiency is the ratio of defects found prior to product operation divided by the total number of defects found in the application.
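    A small sketch of the two metrics mentioned above, defect arrival rate and Defect Removal Efficiency; all the defect counts are invented examples.

```python
# Illustrative defect metrics (all counts are invented examples).

def defect_removal_efficiency(found_before_operation, found_after_operation):
    """DRE = defects found prior to operation / total defects found."""
    total = found_before_operation + found_after_operation
    return found_before_operation / total if total else 0.0

# Defect arrival rate: new defects discovered per week of testing.
new_defects_per_week = [14, 22, 18, 9, 5, 2]
for week, count in enumerate(new_defects_per_week, start=1):
    print(f"week {week}: {count} new defects")

print(f"DRE: {defect_removal_efficiency(70, 10):.0%}")  # 70/(70+10) -> 88%
```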
  • August 20, 2003
     
    What is Quality?
    The definition of the term quality is an issue. Based on an interesting discussion of the meaning of Quality, a surprising number of people still think software quality is simply the absence of errors. Dictionary definitions are too vague to be of much help. The only relevant definition offered by the Oxford English Dictionary (Oxford, 1993), for instance, is 'peculiar excellence or superiority'. Noteworthy here is that quality cannot be discussed for something in isolation: comparison is intrinsic.

    Many software engineering references define software quality as correct implementation of the specification. Such a definition can be used during product development, but it is inadequate for facilitating comparisons between products. Standards organizations have tended to refer to meeting needs or expectations, e.g. the ISO defines quality as the totality of features and characteristics of a product or service that bears on its ability to satisfy stated or implied needs.

    The IEEE defines quality as (1) the degree to which a system, component, or process meets specified requirements, and (2) the degree to which a system, component, or process meets customer or user needs or expectations. An older IEEE definition is that software quality is the degree to which software possesses a desired combination of attributes.

    Quality has been variously defined as:

  • Excellence (Socrates, Plato, Aristotle)
  • Value (Feigenbaum 1951, Abbot 1955)
  • Conformance to specification (Levitt 1972, Gilmore 1974)
  • Fit for purpose (Juran 1974)
  • Meeting or exceeding customers’ expectations (Gronroos 1983, Parasuraman & Ziethaml & Berry 1985)
  • Loss avoidance (Taguchi 1989)

    In short, these six definitions show different aspects of quality. All can be applied to software development. We often find our products marketed for their excellence. We want to delight our customers with our products to build a long-term business relationship. Many countries' trade laws oblige us to sell the product only when fit for the purpose to which our customer tells us they will put it. When purchasing managers look at our software, they may judge comparable products on value, knowing that this may stop them buying the excellent product. In managing software development, efficient and effective development processes together help avoid losses through rework and reduce later support and maintenance budgets. In testing, we work to see that the product conforms to specification.

    Thanks to Carol Long

  • July 29, 2003
     
    The Product Quality Measures

    1. Customer satisfaction index
    (Quality ultimately is measured in terms of customer satisfaction.)
    Surveyed before product delivery and after product delivery
    (and on-going on a periodic basis, using standard questionnaires)
    Number of system enhancement requests per year
    Number of maintenance fix requests per year
    User friendliness: call volume to customer service hotline
    User friendliness: training time per new user
    Number of product recalls or fix releases (software vendors)
    Number of production re-runs (in-house information systems groups)

    2. Delivered defect quantities
    Normalized per function point (or per LOC)
    At product delivery (first 3 months or first year of operation)
    Ongoing (per year of operation)
    By level of severity
    By category or cause, e.g.: requirements defect, design defect, code defect,
    documentation/on-line help defect, defect introduced by fixes, etc.

    3. Responsiveness (turnaround time) to users
    Turnaround time for defect fixes, by level of severity
    Time for minor vs. major enhancements; actual vs. planned elapsed time

    4. Product volatility
    Ratio of maintenance fixes (to repair the system & bring it into
    compliance with specifications), vs. enhancement requests
    (requests by users to enhance or change functionality)

    5. Defect ratios
    Defects found after product delivery per function point
    Defects found after product delivery per LOC
    Pre-delivery defects: annual post-delivery defects
    Defects per function point of the system modifications

    6. Defect removal efficiency
    Number of post-release defects (found by clients in field operation),
    categorized by level of severity
    Ratio of defects found internally prior to release (via inspections and testing),
    as a percentage of all defects
    All defects include defects found internally plus externally (by
    customers) in the first year after product delivery

    7. Complexity of delivered product
    McCabe's cyclomatic complexity counts across the system
    Halstead’s measure
    Card's design complexity measures
    Predicted defects and maintenance costs, based on complexity measures

    8. Test coverage
    Breadth of functional coverage
    Percentage of paths, branches or conditions that were actually tested
    Percentage by criticality level: perceived level of risk of paths
    The ratio of the number of detected faults to the number of predicted faults.

    9. Cost of defects
    Business losses per defect that occurs during operation
    Business interruption costs; costs of work-arounds
    Lost sales and lost goodwill
    Litigation costs resulting from defects
    Annual maintenance cost (per function point)
    Annual operating cost (per function point)
    Measurable damage to your boss's career

    10. Costs of quality activities
    Costs of reviews, inspections and preventive measures
    Costs of test planning and preparation
    Costs of test execution, defect tracking, version and change control
    Costs of diagnostics, debugging and fixing
    Costs of tools and tool support
    Costs of test case library maintenance
    Costs of testing & QA education associated with the product
    Costs of monitoring and oversight by the QA organization
    (if separate from the development and test organizations)

    11. Re-work
    Re-work effort (hours, as a percentage of the original coding hours)
    Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)
    Re-worked software components (as a percentage of the total delivered components)

    12. Reliability (a small sketch of these measures follows this list)
    Availability (percentage of time a system is available, versus the time
    the system is needed to be available)
    Mean time between failure (MTBF)
    Mean time to repair (MTTR)
    Reliability ratio (MTBF / MTTR)
    Number of product recalls or fix releases
    Number of production re-runs as a ratio of production runs
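    A minimal sketch of the reliability arithmetic in item 12; deriving availability as MTBF / (MTBF + MTTR) is an assumption about how the 'percentage of time available' might be computed, and the hours are invented.

```python
# Illustrative reliability measures (hours are invented examples).
mtbf_hours = 400.0   # mean time between failures
mttr_hours = 2.0     # mean time to repair

availability = mtbf_hours / (mtbf_hours + mttr_hours)  # fraction of time up
reliability_ratio = mtbf_hours / mttr_hours            # MTBF / MTTR

print(f"Availability:      {availability:.2%}")       # ~99.50%
print(f"Reliability ratio: {reliability_ratio:.0f}")  # 200
```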
    July 28, 2003
     
    TEST AUTOMATION FRAMEWORKS
    An excellent ebook by "Carl Nagle"
     
    Three Questions About Each Bug You Find

    1. Is this mistake somewhere else also?

    2. What next bug is hidden behind this one?

    3. What should I do to prevent bugs like this?

    For more information, click on the title.


     
    Risk Management

    Risk avoidance: Risk is avoided by obviating the possibility that the undesirable event will happen. You refuse to commit to delivering feature F by milestone M - you don't sign the contract until the software is done. This avoids the risk. Once you enter into a contract to deliver specific scope by a specific date, the risk that it won't come about exists.

    Risk reduction: this consists of minimizing the likelihood of the undesirable event. XP reduces the likelihood that you will lack some features at each milestone by reducing the amount of "extra" work to be done, such as paperwork or documentation, and improving overall quality so as to make development faster.

    Risk mitigation: this consists of minimizing the impact of the undesirable event. XP has active mitigation for the "schedule risk", by insisting that the most valuable features be done first; this reduces the likelihood that important features will be left out of milestone M.

    Risk acceptance: just grit your teeth and take your beating. So we're missing feature F by milestone M - we'll ship with what we have by that date. After reduction and mitigation, XP manages any residual risk this way.

    Risk transfer: this consists of getting someone else to take the risk in your place. Insurance is a risk transfer tactic. You pay a definite, known-with-certainty amount of money; the insurer will reimburse you if the risk of not completing feature F by milestone M materializes. XP makes no provision for this. Has anyone ever insured a software project against schedule/budget overrun?

    Contingency planning: substituting one risk for another, so that if the undesirable event occurs you have a "Plan B" which can compensate for the ill consequences. If we miss critical milestone M1 with feature set F1, we'll shelve the project and reassign all resources to our back-burner project which is currently being worked on by interns.

    Key point from all the above: risk management starts with identifying specific risks. Also, I think you can perform conscious risk management using any process, method, technique or approach. It's important to recognize that any process, etc. simply changes the risk landscape; your project will always have one single biggest risk, then a second biggest risk, and so on.

    Also: risks, like requirements, don't have the courtesy to stay put over the life of a project. They will change - old ones will bow out as risk tactics take effect, new ones will take their place.

    Risk management is like feedback. If you're not going to pay attention to it, you're wasting your time. More than once I've tried to adopt a risk-oriented approach to projects, only to have management react something like, "Oh, you think that's a risk. Well, thank you for telling us. We're happy to have had that risk reduced. Now proceed as before."

    One risk I often raise in projects is skills risk. Developers who have only ever written Visual Basic are supposed to crank out Java code, that sort of thing. Not once have I seen a response of risk avoidance (substituting other, trained team members for the unskilled ones), reduction (training the worker in Java), or mitigation (making provision for closer review of the person's code). It's always been acceptance - "We know it's less than ideal to have this guy working on that project, but he's what we've got at the moment. Can't hire anyone on short order, no time for training, no time for more reviews."

    If you only ever have one tactic for dealing with risk, your risk "management" is a no-brainer.

    ---- From the Laurent Bossavit weblog


     
    Defect Management Process
    An excellent place for the Defect Management Process. The content on this site is the same as that covered in Knowledge Domain 9 of CSTE.
    The topics covered on this web site are:

  • Defect Prevention
  • Deliverable Baseline
  • Defect Discovery
  • Defect Resolution
  • Process Improvement
  • Management Reporting


  • July 18, 2003
     
    Common definitions for testing - A Set of Testing Myths:
    “Testing is the process of demonstrating that defects are not present in the application that was developed.”

    “Testing is the activity or process which shows or demonstrates that a program or system performs all intended functions correctly.”

    “Testing is the activity of establishing the necessary “confidence” that a program or system does what it is supposed to do, based on the set of requirements that the user has specified.”


    These myths are still entrenched in much of how we collectively view testing and this mind-set sets us up for failure even before we start really testing! So what is the real definition of testing?

    “Testing is the process of executing a program/system with the intent of finding errors.”

    The primary axiom for the testing equation within software development is this:

    “A test when executed that reveals a problem in the software is a success.”
     
    Why Test?
  • Test for defects so they can be fixed, and

  • Test for confidence in the software
  • July 17, 2003
     
    Q&A's > CSTE > Knowledge Domain 6 > Test Planning Process
    7) What is the objective of a test plan?

    The objective of a test plan is to describe all the testing that is to be accomplished, together with the resources and schedule necessary for its completion. The test plan should provide background information on the software being tested, the test objectives and risks, and the specific tests to be performed.

    8) What concerns do testers face?

    Not enough training
    Us-versus-them mentality: This common problem arises when developers and testers are on opposite sides of the testing issue
    Lack of test tools
    Lack of management understanding/support of testing
    Lack of customer and user involvement
    Not enough time for testing
    Over-reliance on independent testers: also called “throw it over the wall”
    Rapid Change
    Testers are in a lose-lose situation: On the one hand, if the testers report too many defects, they are blamed for delaying the project. Conversely, if the testers do not find the critical defects, they are blamed for poor quality
    Having to say no: saying that no, the software is not ready for production

    9) What are different approaches to organize test team? Or what are different methods for test team composition? Or what are the different ways to form a test team?

    Test Team Approach: Internal IT
    Composition of test team members: Project Team
    Advantages: Minimize cost, training, and Knowledge of Project
    Disadvantages: Time allocation, lack of independence, and lack of objectivity

    Test Team Approach: External IT
    Composition of test team members: QA Professional Testers
    Advantages: Independent view, IT Professionals, and Multiple Project testing experience
    Disadvantages: Cost, over reliance, and competition

    Test Team Approach: Non-IT
    Composition of test team members: Users, Auditors, and Consultants
    Advantages: Independent view, independence in assessment, and ability to act
    Disadvantages: Cost, lack of IT knowledge, and lack of project knowledge

    Test Team Approach: Combination
    Composition of test team members: Any or all of the above
    Advantages: Multiple Skills, Education, and Clout
    Disadvantages: Cost, Scheduling reviews, and diverse backgrounds


    10) List five skills a competent tester should have?

    Test Process Knowledge
    Excellent written and oral communication skills
    Analytical ability
    Knowledge of test tools
    Understanding of defect tools

    July 15, 2003
     
    Q&A's > CSTE > Knowledge Domain 6 > Test Planning Process

    1) At what point in the testing life cycle should test planning begin?

    Test planning should begin at the same time requirements definition starts. The plan will be detailed in parallel with the application requirements. During the analysis stage of the project, the test plan defines and communicates test requirements and the amount of testing needed, so that accurate test estimates can be made and incorporated into the project plan.

    2) What are the IEEE standards for test plans?

    Several standards suggest what a test plan should contain, including those from the IEEE. The relevant IEEE standards are:

    829-1983 IEEE Standard for Software Test Documentation
    1008-1987 IEEE Standard for Software Unit Testing
    1012-1986 IEEE Standard for Software Verification & Validation Plans
    1059-1993 IEEE Guide for Software Verification & Validation Plans

    I am not sure about the above answer; can anyone tell me more about IEEE standards for test plans?

    3) What is test design?

    Test Design details what types of tests must be conducted and what stages of testing are required (e.g. Unit, Integration, System, Performance, Usability), and then outlines the sequence and timing of tests.

    4) Is test design part of the test plan, or are the two different?

    Yes, test design is a part of the test plan.
    The test plan is defined as an overall document providing direction for all testing activity.
    Test design refines the test approach and identifies the features to be covered by the design and its associated tests (according to the IEEE).
    Test plans and designs can be developed for any level of testing, and they are often combined in the same document.

    5) Why plan tests?

    The primary purpose of test planning is to define the testing activities required to achieve sufficient confidence in a solution to put it into production. In the absence of a test plan, testing stops when you run out of time.
    Documented tests are repeatable, controllable, and ensure adequate test coverage when executed (please see the CBOK for the definitions of Repeatable, Controllable, & Coverage)


    6) What are the main contents in test plan?

    Test Scope
    Test Objectives
    Assumptions
    Risk Analysis
    Test Design
    Roles & Responsibilities
    Test Schedule & Resources
    Test Data Management
    Test Environment
    Communication Approach
    Test Tools

    These are all possible contents of a test plan; pick the most important ones when answering the question.




    July 07, 2003
     
    Definitions
    Smoke Testing (ensuring that all navigation through an application works properly);

    Configuration Testing (making sure the application works correctly on different operating systems, processors, or web browsers, as well as machines equipped with varying amounts of memory).
    July 04, 2003
     
    Regression Testing Goals
    1. To ensure that the current system will work when updates/changes are applied to the system.

    2. To implement lifecycle testing for end-to-end testing.
     
    What is COTS?
    COTS. The term "COTS" is meant to refer to things that one can buy, ready-made, from some manufacturer's virtual store shelf (e.g., through a catalogue or from a price list). It carries with it a sense of getting, at a reasonable cost, something that already does the job. It replaces the nightmares of developing unique system components with the promises of fast, efficient acquisition of cheap (or at least cheaper) component implementations.

    The salient characteristics of a COTS product are:

    it exists a priori
    it is available to the general public
    it can be bought (or leased or licensed)

    Source: Carnegie Mellon Software Engineering Institute & An Architecture for COTS Based Software Systems
     
    Metrics for evaluating application system testing
    Metric = Formula (a worked sketch of a few of these metrics follows the list)

    Test Coverage = Number of units (KLOC/FP) tested / total size of the system
    Number of tests per unit size = Number of test cases per KLOC/FP
    Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria
    Defects per size = Defects detected / system size
    Test cost (in %) = Cost of testing / total cost *100
    Cost to locate defect = Cost of testing / the number of defects located
    Achieving Budget = Actual cost of testing / Budgeted cost of testing
    Defects detected in testing = Defects detected in testing / total system defects
    Defects detected in production = Defects detected in production/system size
    Quality of Testing = No of defects found during Testing/(No of defects found during testing + No of acceptance defects found after delivery) *100
    Effectiveness of testing to business = Loss due to problems / total resources processed by the system.
    System complaints = Number of third party complaints / number of transactions processed
    Scale of Ten = Assessment of testing by giving rating in scale of 1 to 10
    Source Code Analysis = Number of source code statements changed / total number of tests.
    Effort Productivity =
  • Test Planning Productivity = No of Test cases designed / Actual Effort for Design and Documentation
  • Test Execution Productivity = No of Test cycles executed / Actual Effort for testing
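
    As a rough worked sketch (all of the counts and costs below are invented purely for illustration), a few of the formulas above reduce to simple arithmetic once the basic project figures have been collected:

        # Hypothetical project figures, for illustration only.
        kloc_tested = 42.0            # size of the units exercised by tests, in KLOC
        kloc_total = 50.0             # total system size, in KLOC
        defects_in_testing = 120      # defects detected during testing
        defects_after_delivery = 30   # acceptance defects found after delivery
        cost_of_testing = 40_000.0
        total_project_cost = 250_000.0

        test_coverage = kloc_tested / kloc_total                      # Test Coverage
        defects_per_size = defects_in_testing / kloc_total            # Defects per size
        test_cost_pct = cost_of_testing / total_project_cost * 100    # Test cost (in %)
        cost_to_locate_defect = cost_of_testing / defects_in_testing  # Cost to locate defect
        quality_of_testing = (defects_in_testing /
                              (defects_in_testing + defects_after_delivery)) * 100

        print(f"Test coverage:         {test_coverage:.0%}")
        print(f"Defects per KLOC:      {defects_per_size:.1f}")
        print(f"Test cost:             {test_cost_pct:.1f}% of total project cost")
        print(f"Cost to locate defect: {cost_to_locate_defect:.2f} per defect")
        print(f"Quality of testing:    {quality_of_testing:.1f}%")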


  • July 03, 2003
     
    How Many Bugs Do Regression Tests Find?
    What percentage of bugs are found by rerunning tests? That is, what's the value of this equation:

    100 × (number of bugs in a release found by re-executing tests) / (number of bugs found by running all tests, for the 1st or Nth time)?
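
    For example (hypothetical numbers): if re-executing existing tests caught 12 of the 60 bugs found in a release, the value would be 100 × 12 / 60 = 20%.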

    Excellent article; click on the title for more.
     
    Testing Questions
    Testing Philosophy

  • What is software quality assurance?
  • What is the value of a testing group? How do you justify your work and budget?
  • What is the role of the test group vis-à-vis documentation, tech support, and so forth?
  • How much interaction with users should testers have, and why?
  • How should you learn about problems discovered in the field, and what should you learn from those problems?
  • What are the roles of glass-box and black-box testing tools?
  • What issues come up in test automation, and how do you manage them?
  • What development model should programmers and the test group use?
  • How do you get programmers to build testability support into their code?
  • What is the role of a bug tracking system?

    Technical Breadth

  • What are the key challenges of testing?
  • Have you ever completely tested any part of a product? How?
  • Have you done exploratory or specification-driven testing?
  • Should every business test its software the same way?
  • Discuss the economics of automation and the role of metrics in testing.
  • Describe components of a typical test plan, such as tools for interactive products and for database products, as well as cause-and-effect graphs and data-flow diagrams.
  • When have you had to focus on data integrity?
  • What are some of the typical bugs you encountered in your last assignment?

    Project Management

  • How do you prioritize testing tasks within a project?
  • How do you develop a test plan and schedule? Describe bottom-up and top-down approaches.
  • When should you begin test planning?
  • When should you begin testing?
  • Do you know of metrics that help you estimate the size of the testing effort?
  • How do you scope out the size of the testing effort?
  • How many hours a week should a tester work?
  • How should your staff be managed? How about your overtime?
  • How do you estimate staff requirements?
  • What do you do (with the project tasks) when the schedule fails?
  • How do you handle conflict with programmers?
  • How do you know when the product is tested well enough?
  •  
    Deciding on the Correct Ratio of Developers to Testers
    Many of us would like a precise answer to the question: "What's the correct staffing ratio for developers to testers in my product development organization?" Usually though, the only answer is "It depends". Your answer depends on your situation: the kind of project you're working on, your schedule constraints, the culture you work in, and the quality expectations for the product. This paper discusses the thought process involved in deciding on your correct staffing ratios.


     
    Why Software Fails
    This note summarizes conclusions from a three year study about why released software fails. Our method was to obtain mature-beta or retail versions of real software applications and stress test them until they fail. From an analysis of the causal faults, we have synthesized four reasons why software fails. This note presents these four classes of failures and discusses the challenges they present to developers and testers. The implications for software testers are emphasized


     
    Success with Test Automation
    This paper describes several principles for test automation. These principles were used to develop a system of automated tests for a new family of client/server applications. It encourages applying standard software development processes to test automation. It identifies criteria for selecting appropriate tests to be automated and advantages of a Testcase Interpreter. It describes how cascading failures prevent unattended testing. It identifies the most serious bug that can affect test automation systems and describes ways to avoid it. It circumscribes reasonable limits on test automation goals.
     
    Totally Data-Driven Automated Testing
    The purpose of this document is to provide the reader with a clear understanding of what is actually required to successfully implement cost-effective automated testing. Rather than engage in a theoretical dissertation on this subject, I have endeavored to be as straightforward and brutally honest as possible in discussing the issues, problems, necessities, and requirements involved in this enterprise.
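
    As a minimal sketch of the data-driven idea (the triangle-classification routine and the inlined table below are hypothetical examples, not the framework described in the paper), the test inputs and expected results live in a data table, and a single small driver executes every row:

        # Hypothetical routine under test.
        def classify_triangle(a, b, c):
            if a <= 0 or b <= 0 or c <= 0 or a + b <= c or b + c <= a or a + c <= b:
                return "invalid"
            if a == b == c:
                return "equilateral"
            if a == b or b == c or a == c:
                return "isosceles"
            return "scalene"

        # In a real project this table would typically live in an external
        # spreadsheet or CSV file maintained by testers; it is inlined here as
        # rows of (a, b, c, expected result) to keep the sketch self-contained.
        TEST_DATA = [
            (3, 3, 3, "equilateral"),
            (3, 3, 5, "isosceles"),
            (3, 4, 5, "scalene"),
            (1, 2, 3, "invalid"),
        ]

        def run_data_driven_tests():
            failures = 0
            for a, b, c, expected in TEST_DATA:
                actual = classify_triangle(a, b, c)
                if actual != expected:
                    failures += 1
                    print(f"FAIL: ({a}, {b}, {c}) expected {expected}, got {actual}")
            print(f"{len(TEST_DATA)} case(s) run, {failures} failure(s)")

        if __name__ == "__main__":
            run_data_driven_tests()

    Adding a new test then means adding a row of data, not writing new code; the driver stays the same.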
     
    Testing Papers
    An excellent resource for testing and quality assurance papers. Some of the best papers are:

  • An Introduction to Software Testing
  • Software Testing and Software Development Lifecycles
  • Why Bother to Unit Test?
  • Organisational Approaches for Unit Testing
  • Designing Unit Test Cases
  • Host / Target Testing
  • Structural Coverage Metrics: Their Strengths and Weaknesses
  • Complete Application Testing
  • A Strategy for Testing C++
  • C++ - It's Testing Jim, But Not As We Know It!
  • Testing Embedded C++ with Cantata++

  •  
    Testing Java Applets and Applications
    A very good presentation on Testing Java Applets and Applications by Kevin A. Smith, Software Test Engineer, JavaSoft, Sun Microsystems, Inc.
     
    Black-Box Testing Techniques
    "Boundary value analysis" one of the most fruitful forms of black-box testing, requires that test cases be generated which are on, and immediately around, the boundaries of the input and output for a given piece of software.

    "Equivalence class partitioning" is a formalization of the way many people already test software. An equivalence class is a collection of items which can all be regarded as identical at a given level of abstraction, e.g., a set of data items which will all evoke the same general behavior from a given software module.

    "cause-effect graphing" - In situations where there are many different combinations of inputs possible suggests a black-box technique called "cause-effect graphing." This technique helps software engineers identify those specific combinations of inputs which will be the most error-prone.
     
    White-box Testing
    White-box testing is the testing of the underlying implementation of a piece of software (e.g., source code) without regard to the specification (external description) for that piece of software. The goal of white-box testing of source code is to identify such items as (unintentional) infinite loops, paths through the code which should be allowed but which cannot be executed (e.g., [Frankl and Weyuker, 1987]), and dead (unreachable) code.

    Probably the most commonly used example of a white-box testing technique is "basis path testing." McCabe's approach requires that we determine the number of linearly independent paths through a piece of software (what he refers to as the cyclomatic complexity), and use that number, coupled with a graph of the control flow through the same piece of software, to come up with a set of test cases which will cause every executable statement to be executed at least once.

    McCabe's approach is an attempt to systematically address an even older concept in white-box testing, i.e., coverage. Coverage is simply a measure of the number and type of statements executed, as well as how these statements are executed. Glenford Myers describes several types of coverage. "Statement coverage," the weakest acceptable form of coverage, requires that enough test cases be written so that we can be assured that all executable statements will be executed at least once. "Condition coverage" requires that all statements be executed at least once, and that all binary decisions have a true and a false outcome at least once.
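
    As a rough illustration (the function below is hypothetical), statement coverage and the stronger criterion above can demand different numbers of test cases: when a decision has no else branch, a single test that makes the decision true already executes every statement, but a second test is needed before the decision has also taken a false outcome.

        # Hypothetical function with a single decision and no else branch.
        def apply_surcharge(total, is_rush):
            if is_rush:
                total += 10.0   # executed only when the decision is true
            return total        # always executed

        # Statement coverage: this one case already executes every statement.
        statement_coverage_cases = [(100.0, True)]

        # The stronger criterion described above also requires the decision to
        # take a false outcome at least once, so a second case is needed.
        condition_coverage_cases = [(100.0, True), (100.0, False)]

        for total, is_rush in condition_coverage_cases:
            print(f"total={total}, rush={is_rush} -> charged {apply_surcharge(total, is_rush)}")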

     
    Testing & Debugging
    Testing is the process of examining something with the intention of finding errors. While testing may reveal a symptom of an error, it may not uncover the exact cause of the error.

    Debugging is the process of locating the exact cause of an error, and removing that cause.

     
    "Testing proves the presence, not the absence, of bugs."

    -- E.W. Dijkstra

    "Absence of evidence is not evidence of absence."

    -- Source Unknown

     
    Thesis - Testing of a Computer Program
    A good thesis on Testing of a Computer Program on the Example of a Medical Application with Diversification and other Methods. It contains information on the psychology of software testing, kinds of software testing, black-box and white-box testing, etc.


    June 30, 2003
     
    When can the software be released?
    A nice article on the question posed in the title above...
