
STLC models


Traditional CMMI or waterfall development model

A common practice of software testing is that testing is performed by an independent group of testers after the functionality is developed but before it is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing.
Another practice is to start software testing at the same moment the project starts and to continue it as an ongoing process until the project finishes.
Further information: Capability Maturity Model Integration and Waterfall model 

Agile or Extreme development model
In contrast, some emerging software disciplines, such as extreme programming and the agile software development movement, adhere to a "test-driven software development" model. In this process, unit tests are written first by the software engineers (often with pair programming in the extreme programming methodology). These tests are expected to fail initially; then, as code is written, it passes incrementally larger portions of the test suites. The test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed. Unit tests are maintained along with the rest of the software source code and are generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process). The ultimate goal of this test process is to achieve continuous integration, where software updates can be published to the public frequently.
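As a minimal sketch of this test-first rhythm, assuming JUnit 5 and a hypothetical ShoppingCart class that does not yet exist when the test is written:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Written before ShoppingCart is implemented, so these tests fail (or do not
// even compile) at first; the production code is then written to make them pass.
class ShoppingCartTest {

    @Test
    void totalOfEmptyCartIsZero() {
        ShoppingCart cart = new ShoppingCart();   // hypothetical class under test
        assertEquals(0.0, cart.total(), 0.001);
    }

    @Test
    void totalSumsItemPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("book", 12.50);
        cart.addItem("pen", 1.25);
        assertEquals(13.75, cart.total(), 0.001);
    }
}

As new failure conditions and corner cases are discovered, further tests are added to the same class and kept under version control together with the production code.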

A sample testing cycle
Although variations exist between organizations, there is a typical cycle for testing. The sample below is common among organizations employing the Waterfall development model.

·        Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers in determining what aspects of a design are testable and with what parameters those tests work.
·        Test planning: Test strategy, test plan, testbed creation. Since many activities will be carried out during testing, a plan is needed.
·        Test development: Test procedures, test scenarios, test cases, test datasets, and test scripts are created for use in testing the software.
·        Test execution: Testers execute the software based on the plans and test documents, then report any errors found to the development team.
·        Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
·        Test result analysis: Also called defect analysis, this is done by the development team, usually together with the client, in order to decide which defects should be assigned, fixed, rejected (i.e., the software is found to be working properly), or deferred to be dealt with later.
·        Defect retesting: Once a defect has been dealt with by the development team, it is retested by the testing team; this is also known as resolution testing.
·        Regression testing: It is common to have a small test program built from a subset of tests, run for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not broken anything and that the software product as a whole is still working correctly (a minimal sketch of such a subset follows this list).
·        Test closure: Once the testing meets the exit criteria, activities such as capturing the key outputs, lessons learned, results, logs, and documents related to the project are completed, and these artifacts are archived for use as a reference in future projects.
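As a minimal sketch of such a regression subset, assuming JUnit 5 and a hypothetical PriceCalculator class, a few core checks can be tagged so that a build tool (for example Maven Surefire with -Dgroups=regression) runs only those after each delivery:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class PriceCalculatorTests {

    // Part of the quick regression subset that is re-run on every delivery.
    @Tag("regression")
    @Test
    void totalIsStillComputedForASimpleOrder() {
        assertEquals(13.75, new PriceCalculator().total(12.50, 1.25), 0.001);  // hypothetical class
    }

    // Broader functional test; executed in full test runs but not in the quick subset.
    @Test
    void bulkDiscountIsAppliedToLargeOrders() {
        assertEquals(11.25, new PriceCalculator().totalWithBulkDiscount(12.50, 10), 0.001);
    }
}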
Automated testing


Main article: Test automation
Many programming groups are relying more and more on automated testing, especially groups that use test-driven development. There are many frameworks to write tests in, and continuous integration software will run tests automatically every time code is checked into a version control system.
While automation cannot reproduce everything that a human can do (and all the ways they think of doing it), it can be very useful for regression testing. However, it does require a well-developed test suite of testing scripts in order to be truly useful.
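One way this can be wired up, sketched here with the JUnit Platform Launcher API and an assumed test package name, is a small runner that the continuous integration job invokes after every check-in and that fails the build when any test fails:

import static org.junit.platform.engine.discovery.DiscoverySelectors.selectPackage;

import java.io.PrintWriter;

import org.junit.platform.launcher.Launcher;
import org.junit.platform.launcher.LauncherDiscoveryRequest;
import org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder;
import org.junit.platform.launcher.core.LauncherFactory;
import org.junit.platform.launcher.listeners.SummaryGeneratingListener;
import org.junit.platform.launcher.listeners.TestExecutionSummary;

public class CiTestRunner {

    public static void main(String[] args) {
        // Discover every test in the (assumed) test package.
        LauncherDiscoveryRequest request = LauncherDiscoveryRequestBuilder.request()
                .selectors(selectPackage("com.example.tests"))
                .build();

        Launcher launcher = LauncherFactory.create();
        SummaryGeneratingListener listener = new SummaryGeneratingListener();
        launcher.execute(request, listener);

        TestExecutionSummary summary = listener.getSummary();
        PrintWriter out = new PrintWriter(System.out, true);
        summary.printTo(out);
        out.flush();

        // A non-zero exit code tells the CI server that the check-in broke a test.
        if (summary.getTotalFailureCount() > 0) {
            System.exit(1);
        }
    }
}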

Testing tools
Program testing and fault detection can be aided significantly by testing tools and debuggers. Testing/debug tools include features such as:
·        Program monitors, permitting full or partial monitoring of program code, including:
·         Instruction set simulator, permitting complete instruction-level monitoring and trace facilities
·         Program animation, permitting step-by-step execution and conditional breakpoints at source level or in machine code
·         Code coverage reports
·        Formatted dump or symbolic debugging, tools allowing inspection of program variables on error or at chosen points
·        Automated functional GUI testing tools are used to repeat system-level tests through the GUI
·        Benchmarks, allowing run-time performance comparisons to be made
·        Performance analysis (or profiling tools) that can help to highlight hot spots and resource usage
·        A regression testing technique is to have a standard set of tests, which cover existing functionality that results in persistent tabular data, and to compare pre-change data to post-change data, where there should be no differences, using a tool like DiffKit (a rough sketch of this comparison follows below). Differences detected indicate unexpected functionality changes, or "regressions".
Some of these features may be incorporated into an Integrated Development Environment (IDE).
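As a rough, hand-rolled illustration of that pre-change versus post-change comparison (not the DiffKit API; the file paths are assumptions), each file holds one row of tabular output per line, and any row present in only one of the files is reported:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class TableRegressionCheck {

    public static void main(String[] args) throws IOException {
        List<String> before = Files.readAllLines(Path.of("baseline/report.csv"));  // pre-change data
        List<String> after = Files.readAllLines(Path.of("latest/report.csv"));     // post-change data

        Set<String> missing = new HashSet<>(before);
        missing.removeAll(after);          // rows that disappeared after the change
        Set<String> unexpected = new HashSet<>(after);
        unexpected.removeAll(before);      // rows that appeared after the change

        if (missing.isEmpty() && unexpected.isEmpty()) {
            System.out.println("No differences: no regression detected.");
        } else {
            missing.forEach(row -> System.out.println("Missing row: " + row));
            unexpected.forEach(row -> System.out.println("Unexpected row: " + row));
        }
    }
}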

Measurement in software testing
Main article: Software quality
Usually, quality is constrained to such topics as correctness, completeness, and security, but can also include more technical requirements as described under the ISO standard ISO/IEC 9126, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability.
There are a number of frequently used software metrics, or measures, which are used to assist in determining the state of the software or the adequacy of the testing.
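As an illustrative example of two commonly quoted measures, test pass rate and defect density, using made-up counts rather than data from a real project:

public class TestMetrics {

    public static void main(String[] args) {
        // Illustrative placeholder values only.
        int testsExecuted = 480;
        int testsPassed = 452;
        int defectsFound = 37;
        double thousandsOfLinesOfCode = 62.0;  // size of the system under test in KLOC

        double passRate = 100.0 * testsPassed / testsExecuted;
        double defectDensity = defectsFound / thousandsOfLinesOfCode;

        System.out.printf("Pass rate: %.1f%%%n", passRate);
        System.out.printf("Defect density: %.2f defects per KLOC%n", defectDensity);
    }
}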
Testing artifacts

      
The software testing process can produce several artifacts.

Test plan
A test specification is called a test plan. The developers are well aware of what test plans will be executed, and this information is made available to management and the developers. The idea is to make them more cautious when developing their code or making additional changes. Some companies have a higher-level document called a test strategy.
Traceability matrix
A traceability matrix is a table that correlates requirements or design documents to test documents. It is used to update tests when related source documents change and to select test cases for execution when planning regression tests, by considering requirement coverage.
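At its simplest, a traceability matrix can be kept as a requirement-to-test mapping; the requirement and test case identifiers below are made up for illustration:

import java.util.List;
import java.util.Map;

public class TraceabilityMatrix {

    // Each requirement is mapped to the test cases that cover it.
    static final Map<String, List<String>> MATRIX = Map.of(
            "REQ-001 Login with valid credentials", List.of("TC-101", "TC-102"),
            "REQ-002 Lock account after failed logins", List.of("TC-103"),
            "REQ-003 Password reset by email", List.of("TC-104", "TC-105"));

    public static void main(String[] args) {
        // When REQ-002 changes, the tests to revisit (or to pick for a
        // regression run) can be read straight from the matrix.
        MATRIX.getOrDefault("REQ-002 Lock account after failed logins", List.of())
              .forEach(System.out::println);
    }
}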

Test case
A test case normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and actual result. Clinically defined, a test case is an input and an expected result.[42] This can be as pragmatic as 'for condition x your derived result is y', whereas other test cases describe in more detail the input scenario and what results might be expected. It can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate those results. These past results would usually be stored in a separate table.
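A minimal sketch of these fields as a data structure; the field names simply follow the description above and are not a standard from any particular test management tool:

import java.util.List;

public record TestCase(
        String id,                 // unique identifier, e.g. "TC-101"
        String relatedRequirement, // reference into the design specification
        String precondition,
        List<String> steps,        // the series of actions to follow
        String input,
        String expectedResult,
        String actualResult) {     // filled in after execution
}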

Test script
A test script is a procedure, or programming code, that replicates user actions. Initially, the term was derived from the product of work created by automated regression test tools. A test case serves as the baseline for creating test scripts using a tool or a program.
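A minimal Selenium WebDriver sketch of such a script, replicating a user logging in; the URL, locators, and expected page title are assumptions for illustration:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginTestScript {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Replicate the user's actions from the underlying test case.
            driver.get("https://example.com/login");
            driver.findElement(By.id("username")).sendKeys("testuser");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("loginButton")).click();

            // Compare the actual result against the expected result.
            if (driver.getTitle().contains("Home")) {
                System.out.println("PASS");
            } else {
                System.out.println("FAIL: unexpected page title " + driver.getTitle());
            }
        } finally {
            driver.quit();
        }
    }
}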

Test suite
The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.
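With JUnit 5's suite support, for example, such a collection can be declared by listing the test classes it groups; the class names below are hypothetical:

import org.junit.platform.suite.api.SelectClasses;
import org.junit.platform.suite.api.Suite;

// Groups the individual test classes into one runnable suite.
@Suite
@SelectClasses({ LoginTests.class, CheckoutTests.class, SearchTests.class })
public class RegressionSuite {
    // Intentionally empty: the annotations define the suite's contents.
}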
             
Test fixture or test data
In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client along with the product or project.
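One common way to keep such data in separate files is a parameterized test; the sketch below assumes JUnit 5, a discounts.csv resource on the classpath, and a hypothetical DiscountCalculator class:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvFileSource;

class DiscountCalculationTest {

    // Each data row in the (assumed) discounts.csv file runs the same check
    // with a different set of values; the header line is skipped.
    @ParameterizedTest
    @CsvFileSource(resources = "/discounts.csv", numLinesToSkip = 1)
    void discountIsAppliedForEachDataRow(double orderTotal, double expectedDiscount) {
        assertEquals(expectedDiscount, new DiscountCalculator().discountFor(orderTotal), 0.001);
    }
}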
             
Test harness
The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.
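A minimal sketch of such a harness, bundling the configuration, the driver, and the data location behind one class; the file name and property keys are assumptions:

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Path;
import java.util.Properties;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class TestHarness {

    private final Properties config = new Properties();

    public TestHarness() throws IOException {
        // Assumed configuration file bundled with the test resources.
        try (InputStream in = getClass().getResourceAsStream("/test-config.properties")) {
            if (in != null) {
                config.load(in);
            }
        }
    }

    // The "tool" part of the harness: a browser driver for the test scripts.
    public WebDriver newDriver() {
        return new ChromeDriver();
    }

    // Where the sample input and output data files are kept (assumed property key).
    public Path testDataDir() {
        return Path.of(config.getProperty("testdata.dir", "src/test/resources/testdata"));
    }

    public String baseUrl() {
        return config.getProperty("base.url", "https://example.com");
    }
}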
