CoolInterview.com - World's Largest Collection of Interview Questions
    Question :
    How many types of testing are there?

    Posted by: rajaneesh on 6/4/2008

    Category Software Testing Types Interview Questions
    Rating (0.4) By 600 users
    Added on 6/4/2008
    Views 2238
    Answers:

    There are a number of types of testing; they are:

    Stand Alone Testing

    Unit Testing

    Static Testing

    Proof of Concept Testing ( POC Testing )

    System Testing

    Functional Testing / Functionality Testing

    User Interface Testing

    Error exit Testing

    Help Information Testing

    Integration Testing

    Dynamic Testing

    Black Box Testing

    White Box Testing




    Performance Testing

    Stress/Load Testing

    Volume Testing

    Limit Testing

    Disaster Recovery Testing





    User Acceptance Testing ( UAT )

    Free Fall Testing

    Equivalence class partitioning

    Boundary Value Analysis

    Compatibility Testing / Data Migration

    Security Testing





    Posted by: vishakha    


    The types of testing and their definitions are as follows.

    Types-
    Stand Alone Testing

    Unit Testing

    Static Testing

    Proof of Concept Testing ( POC Testing )


    System Testing

    Functional Testing / Functionality Testing

    User Interface Testing

    Error exit Testing

    Help Information Testing

    Integration Testing

    Dynamic Testing

    Black Box Testing

    White Box Testing




    Performance Testing

    Stress/Load Testing

    Volume Testing

    Limit Testing

    Disaster Recovery Testing





    User Acceptance Testing ( UAT )

    Free Fall Testing

    Equivalence class partitioning

    Boundary Value Analysis

    Compatibility Testing / Data Migration

    Security Testing

    Unit testing
    Unit testing is the process of testing a single item of software in isolation. An example would be a window/form which allows a user to choose between two ways of launching the application: Option A launches exe A, while Option B launches exe B. The form can be launched on its own (normally by the developer) and the behaviour of each option confirmed before the code is added to the main application.
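    As a minimal sketch of this idea in Python (the `launch_target` function and the executable names are hypothetical stand-ins for the form's launch logic):

    ```python
    import unittest

    def launch_target(option):
        """Return the executable the launcher form would start.

        Hypothetical stand-in for the single form described above.
        """
        targets = {"A": "appA.exe", "B": "appB.exe"}
        if option not in targets:
            raise ValueError("unknown option: %r" % option)
        return targets[option]

    class LaunchTargetTest(unittest.TestCase):
        """Unit tests exercising the form's launch logic in isolation."""

        def test_option_a_launches_exe_a(self):
            self.assertEqual(launch_target("A"), "appA.exe")

        def test_option_b_launches_exe_b(self):
            self.assertEqual(launch_target("B"), "appB.exe")

        def test_unknown_option_is_rejected(self):
            with self.assertRaises(ValueError):
                launch_target("C")
    ```

    Run with `python -m unittest <module>`; the point is that the launch behaviour is proven on its own before it is wired into the main application.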



    Static Testing
    Static tests are those that do not involve executing anything, be it code or an executable specification. Static testing consists of comparing specifications.

    The Requirement Expression states what the system is expected to achieve from the end user's point of view.

    The System Specification lists the functions and attributes of the actual system in detail.

    The System Design Specification states how the system is to be put together, and at a high level, what the overall software design of the system is.

    A Module Specification states what an item of code is expected to do.


    Proof of Concept testing ( POC Testing )
    POC testing in many cases is the first opportunity to use the software and confirm that the program is capable of providing the desired end solution. In many cases the design requirements may have changed from the initial order and this is the first opportunity to confirm the software is capable of adapting to meet the end requirement.




    System Testing
    System testing is the first point at which the entire system can be tested against the system specification. The specifications are defined within the business analysis documentation describing the program's purpose. System testing is, in effect, testing that the entire system works together and that all its functionality performs as expected. System testing only proves the system; it does not prove the software or the data/work flow. Below are some of the stages of system testing.

    Functional Testing / Functionality Testing
    Functional testing is the process of confirming the functionality of the application. Generally this form of testing can be scripted directly from the menu options of the application.


    User Interface Testing
    From a system testing point of view, user interface testing confirms that the forms/windows or GUIs which appear perform as specified and are sized and displayed as expected. Items such as menus and minimise and maximise options are checked.


    Error exit testing
    This form of testing confirms that the application and all its separate forms will close once open, and that forms have cancel options in case the user has opened them accidentally.


    Help Information Testing
    The process of launching all the Help links within an application and confirming that they launch the appropriate help item.



    Integration Testing
    Integration testing is often set up with its own testing team who perform only integration testing. The main purpose of this type of testing is to check whether the new software interferes with the functionality of any other software running on the company's machines. Many companies have "loadsets" for each department (i.e. the accounts department's PCs will have different software to the art department's PCs: one would be the Accounts loadset, the other the Art department's loadset). Personally I would look to automate a large proportion of integration testing, along with developing a DLL/OCX database which would highlight immediate concerns just by looking at the installation files of any new software.




    Dynamic Testing
    Dynamic testing confirms that a deliverable, typically some software, functions according to its specifications. Test scripts and recorded results should be agreed within an acceptance plan.

    Dynamic testing can be based on two different aspects: black box and white box testing.

    Black Box Testing
    Black box testing is the process of testing a function (such as a program which converts the format of an interface file) without having access to the code which converts the data. The testing stages would consist of specifying the file before the conversion takes place and then confirming the changes which occur after the program has run and converted the file. The name "black box" comes from not being able to see how the function works.

    White Box Testing
    White box testing, on the other hand, allows the tester to see the code which converts the data. Consequently the tester can write tests that include data designed to "trip up the code".
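    A small illustration of the contrast, using a hypothetical one-record file converter: the black-box test knows only the specified input and output formats, while the white-box test targets a branch the tester found by reading the code:

    ```python
    def convert_line(line):
        """Convert one record of a hypothetical interface file from
        'name;amount' to 'name,amount' with the amount zero-padded."""
        if not line.strip():   # the branch a white-box tester would target
            return ""
        name, amount = line.split(";")
        return "%s,%06.2f" % (name, float(amount))

    # Black-box: a known input and the output the specification demands,
    # with no knowledge of how the conversion is implemented.
    assert convert_line("smith;9.5") == "smith,009.50"

    # White-box: the tester has read the code, seen the blank-line branch,
    # and writes data intended to "trip up the code".
    assert convert_line("   ") == ""
    ```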




    Performance Testing
    Performance testing is the most effective way to gauge an application's or an environment's capacity and scalability. This type of testing should be automated, and should record the system's response times to a simulation of users logging onto the system. The expected ratio of users to response times will be identified before the tests are carried out. With good planning the performance tool can be used for ongoing analysis of the system and of user behaviour. Data can be assessed to identify the most popular times at which users log on, and consequently the key times when the system will be under the greatest loads.

    Stress/Load Testing
    Such testing involves running the system under heavy loading, by simulating users and functionality, up to the point where the maximum loads anticipated in the design specification documentation are reached.
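    A minimal load-test sketch under the assumption that one user transaction can be driven as a plain function call; `handle_request` below is a hypothetical stand-in for the real system:

    ```python
    import threading
    import time

    def handle_request():
        """Hypothetical stand-in for one user transaction."""
        time.sleep(0.01)  # simulated processing time

    def run_load(users):
        """Simulate `users` concurrent users and record each response time."""
        timings = []
        lock = threading.Lock()

        def one_user():
            start = time.perf_counter()
            handle_request()
            elapsed = time.perf_counter() - start
            with lock:  # timings list is shared between threads
                timings.append(elapsed)

        threads = [threading.Thread(target=one_user) for _ in range(users)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return timings

    # Ramp the load up towards the anticipated maximum from the design spec.
    for users in (1, 10, 50):
        timings = run_load(users)
        print("%3d users: worst response %.3fs" % (users, max(timings)))
    ```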

    Volume Testing
    Such tests submit the system to large volumes of data. Normally this is automated and consists of multiple processes being run simultaneously increasing the size of transactions files being processed. The volume which has been specified within the business requirements documentation can be confirmed. As well as multiple files of increasing size the system should be analysed and tested for single files too.

    For instance, attached to a financial application was an audit log which detailed every transaction entered by the 120 users across the UK (not a high volume of users). The audit file was never refreshed, so it just kept growing. What highlighted the potential issue was that the file was being used to create an interface file of the transactions which had occurred that week. The point at which this file broke the integrity of the system was during the backup procedure run overnight. Within six months I identified that this file would grow to a surprising 1.2 gigabytes, which the system could still handle; however, during the backup procedure the system would require 2.4 gigabytes of space, which the Unix partition didn't have available.
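    The arithmetic behind that example can be sketched as a quick check. The growth rate is back-derived from the 1.2 GB in six months quoted above; the partition's free space is not stated, so the 2.0 GB figure is a hypothetical assumption:

    ```python
    def months_until_backup_fails(growth_gb_per_month, free_gb):
        """Return the first month in which the overnight backup, which
        needs space for a working copy (2x the file size), no longer fits."""
        months = 1
        while 2 * (months * growth_gb_per_month) <= free_gb:
            months += 1
        return months

    growth = 1.2 / 6  # back-derived: 1.2 GB over six months
    # 2.0 GB free space is a hypothetical value for illustration.
    print(months_until_backup_fails(growth, 2.0))
    ```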

    Limit Testing
    At least one test should be developed for each of the documented system limits. Such tests are designed to investigate how the system will react to data which is maximal or minimal, in the sense of attaining some limit specified either within the system specification or in the user guide.

    During system testing the system should also be tested beyond the limits specified for it. The purpose here is to find any situation where insufficient safety margins have been built in.

    Disaster Recovery Testing
    This is clearly a vital area of testing for safety-critical and similar systems. The system's reactions to failures of all sorts may need to be tested. During this testing we can identify any corruption and the potential downtime during system failure.




    User Acceptance Testing ( UAT )
    User acceptance testing is probably the best-known testing term among non-testers. Consequently, if the testing structure and stages have not been performed correctly, users will tend to lump all aspects of testing into the user acceptance testing stage. This is often due to defects from previous testing stages being fixed and regression tested. Some companies do not have the facility for multiple testing environments (one for system testing and one for UAT) as well as a development environment, so there is a high possibility that regression system testing will also happen during the UAT stage.

    In truth the user acceptance testing stage should not include any of the previous testing stages (though time and budget constraints often intervene), and UAT is described by its own name. In short, user acceptance testing covers the processes and functionality performed by the users who will use the system on a day-to-day basis. The tests will follow the processes from end to end on a fully functional and complete system. Additionally, and more difficult to identify, this phase will also include all the strange and wonderful things users will attempt to do with the software, even though the software was never designed to do them.

    To identify some of the wonderful things users will attempt, the tester must analyse the current system and identify the differences between the old system and the new. Less obvious scenarios can be obtained through testing methods such as boundary value analysis and equivalence class partitioning.

    Maybe the best way to explain UAT is to break down each word within its name.

    User

    Users are the real business users who will have to operate the system on a day-to-day basis.

    Acceptance

    The users' acceptance that the system meets all the requirements needed for day-to-day use of the software as a business tool which benefits the business. If this is an upgrade from a previous system, the goal should be that the user can complete all the functionality of the old system plus any new functionality which has been identified as beneficial.

    Testing

    This area can be broken into two halves:

    (1) Testing the system to prove that it behaves and produces the results expected by the users. As you would expect these tested functions give the user confidence that the new software and system will do everything they expected it to do. The users will confirm what needs to be tested and will naturally sign off documentation which concludes that the tests performed cover everything they need for acceptance. They will be happy that business will continue with the new system.

    (2) Testing the system to prove that it behaves and produces the results expected by the users even when they do the most obscure things which the software was never designed for. Sometimes "only" users can perform these actions, as experienced software users would never do some of the things an inexperienced user would attempt.


    Free Fall Testing
    This form of testing is normally done just before release to the users and uses a system which has already been tested. The general goal in this testing is to round up some of the key users of the software and allow them to free fall their way through completing the normal daily tasks they will be performing when the system goes live. Here some of the obscure things a user will attempt to do will be highlighted and a final lockdown of certain functionality can be identified.


    Equivalence class partitioning
    A software testing technique which identifies a small set of input values that invoke as many different input conditions as possible.
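    As a sketch, take a hypothetical field that accepts amounts from £0.00 to £10.00: the inputs fall into three equivalence classes (below range, in range, above range), and one representative value per class stands in for the whole class. The `accepts_amount` validator is an assumption for illustration:

    ```python
    def accepts_amount(pounds):
        """Hypothetical validator for a field taking 0.00 to 10.00 pounds."""
        return 0.0 <= pounds <= 10.0

    # One representative value per equivalence class covers the whole class.
    partitions = {
        "below range": (-3.00, False),
        "in range": (5.00, True),
        "above range": (12.00, False),
    }
    for name, (value, expected) in partitions.items():
        assert accepts_amount(value) == expected, name
    ```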


    Boundary Value Analysis
    A test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values. The theory behind boundary value analysis is that if the system performs correctly for these special values, then it is likely to work correctly for all the values in between.

    An example: if a data field is set to accept amounts of money from 0 to 10 pounds, the boundary values would be £0.00, £0.01, £9.99 and £10.00.
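    A sketch of those boundary checks against a hypothetical validator for the 0-10 pound field (the validator is an assumption, not a real system):

    ```python
    def accepts_amount(pounds):
        """Hypothetical validator for a field taking 0.00 to 10.00 pounds."""
        return 0.0 <= pounds <= 10.0

    # Values on and just inside/outside each boundary of the 0-10 range.
    boundary_cases = [
        (-0.01, False),  # just outside the lower boundary
        (0.00, True),    # on the lower boundary
        (0.01, True),    # just inside the lower boundary
        (9.99, True),    # just inside the upper boundary
        (10.00, True),   # on the upper boundary
        (10.01, False),  # just outside the upper boundary
    ]
    for value, expected in boundary_cases:
        assert accepts_amount(value) == expected, value
    ```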





    Compatibility Testing / Data Migration
    Tests are made to probe whether the new system fails to subsume the facilities and modes of use of the old system where it was intended to do so. In these cases, and dependent on the system being tested, a parallel run should be considered so that data from the old system can be compared directly with the new system.


    Security Testing
    Tests are performed to attempt to compromise the system's security. This includes, as an example, accessing an Oracle database containing the data using multiple logins or unauthorised IDs. Additionally, hacking tools could be considered if the system can be accessed externally, such as over the internet.



    Posted by: vishakha    



    types of testing:
    # alpha testing
    # beta testing
    # gamma testing
    # adhoc testing
    # Agile testing
    # smoke testing
    # sanity testing
    # security testing
    # endurance testing
    # exhaustive testing
    # exploratory testing
    # mutation testing
    # I18N testing
    # L10N testing
    # concurrency testing
    # dependency testing
    # context driven testing
    # data driven testing
    # monkey testing
    # gorilla testing
    # vendor validation testing
    # benefit realization testing
    # volume testing
    # retesting
    # regression testing



    Posted by: santhosh    


    Types of testing:

    1.Static
    2.Dynamic

    Dynamic Testing:-
    1.white box testing
    2.black box testing

    white box testing:-
    1.Unit testing
    2.Integration Testing


    Black Box Testing:-
    1.Functional testing
    2.Non-Functional Testing

    Functional Testing:-
    1.Smoke testing
    2.Function
    3.ReTesting
    4.Regression Testing

    Non-Functional Testing:-
    1.Load Testing
    2.Stress testing
    3.Performance testing
    4.Alpha Testing
    5.Beta Testing
    6.Compatibility Testing


    More types of testing...
    1.Portability Testing
    2.Security Testing
    3.Concurrency Testing
    4.Disaster & Recovery Testing
    5.Adhoc Testing
    6.Exploratory Testing
    7.Mutation Testing
    8.GUI Testing
    9.Localization Testing
    10.Globalization Testing
    11.Internationalization Testing
    12.Usability Testing
    13.Progression Testing
    14.User Acceptance Testing(UAT)



    Posted by: Vikram.V    


    Copyright ©2003-2014 CoolInterview.com, All Rights Reserved.