Question:
How many types of testing are there?
Answer:
There are a number of types of testing:
Stand Alone Testing
Unit Testing
Static Testing
Proof of Concept Testing (POC Testing)
System Testing
Functional Testing / Functionality Testing
User Interface Testing
Error exit Testing
Help Information Testing
Integration Testing
Dynamic Testing
Black Box Testing
White Box Testing
Performance Testing
Stress/Load Testing
Volume Testing
Limit Testing
Disaster Recovery Testing
User Acceptance Testing (UAT)
Free Fall Testing
Equivalence class partitioning
Boundary Value Analysis
Compatibility Testing / Data Migration
Security Testing
Source: CoolInterview.com
Answered by: vishakha | Date: 6/26/2008
The types of testing and their definitions are as follows:
Stand Alone Testing
Unit Testing
Static Testing
Proof of Concept Testing (POC Testing)
System Testing
Functional Testing / Functionality Testing
User Interface Testing
Error exit Testing
Help Information Testing
Integration Testing
Dynamic Testing
Black Box Testing
White Box Testing
Performance Testing
Stress/Load Testing
Volume Testing
Limit Testing
Disaster Recovery Testing
User Acceptance Testing (UAT)
Free Fall Testing
Equivalence class partitioning
Boundary Value Analysis
Compatibility Testing / Data Migration
Security Testing
Unit testing Unit testing is the process of testing a single item of software. An example would be a window/form which allows a user to choose two ways of launching the application: Option A will launch exe A while Option B will launch exe B. The single form can be launched on its own (normally by the developer) and the function of launching each option can be confirmed before adding the code to the main application.
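As an illustration of the idea above, here is a minimal sketch in Python using the standard `unittest` module. The `choose_launcher` function is hypothetical, standing in for the form's launch logic described in the example.

```python
import unittest

def choose_launcher(option):
    """Return the executable name for the selected launch option.

    Hypothetical function standing in for the form's launch logic."""
    if option == "A":
        return "A.exe"
    if option == "B":
        return "B.exe"
    raise ValueError("unknown option: " + option)

class TestChooseLauncher(unittest.TestCase):
    # Each test exercises the unit in isolation, before it is
    # wired into the main application.
    def test_option_a_launches_exe_a(self):
        self.assertEqual(choose_launcher("A"), "A.exe")

    def test_option_b_launches_exe_b(self):
        self.assertEqual(choose_launcher("B"), "B.exe")

    def test_unknown_option_is_rejected(self):
        with self.assertRaises(ValueError):
            choose_launcher("C")

if __name__ == "__main__":
    unittest.main(exit=False)
```

The unit can be verified on its own long before integration, which is exactly the point of the stage.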
Static Testing Static tests are those that do not involve the execution of anything – be it code or executable specification. Static testing comprises comparing specifications.
The Requirement Expression states what the system is expected to achieve from the end user's point of view.
The System Specification lists the functions and attributes of the actual system in detail
The System Design Specification states how the system is to be put together, and at a high level, what the overall software design of the system is.
A Module Specification states what an item of code is expected to do.
Proof of Concept testing ( POC Testing ) POC testing in many cases is the first opportunity to use the software and confirm that the program is capable of providing the desired end solution. In many cases the design requirements may have changed from the initial order and this is the first opportunity to confirm the software is capable of adapting to meet the end requirement.
System Testing System testing is the first time at which the entire system can be tested against the system specification. The specifications are defined within the business analysis documentation describing the program's purpose. System testing is, in effect, testing that the entire system works together and that all the functionality of the system performs as expected. System testing only proves the system; it does not prove the software or the data/work flow. Below are some of the stages of system testing.
Functional Testing / Functionality Testing Functional testing is the process of confirming the functionality of the application. Generally this form of testing can be scripted directly from the menu options of the application.
User Interface Testing From a system testing point of view, user interface testing confirms that the forms/windows or GUIs which appear perform as specified and are sized and viewed as expected. Items such as menus and minimise and maximise options are checked.
Error exit testing This form of testing confirms that the application and all its separate forms will close once open, and that any forms have cancel options in case the user has selected them accidentally.
Help Information Testing The process of launching all the Help links within an application and confirming they launch the appropriate help item if required.
Integration Testing Integration testing is often set up with its own testing team who perform only integration testing. The main purpose of this type of testing is to check whether the new software interferes with any functionality of any other software running on the company's machines. Many companies have 'loadsets' for each department (i.e. the accounts department's PCs will have different software from the art department's PCs; one would be the Accounts loadset while the other would be the Art department's loadset). Personally I would look to automate a large proportion of integration testing, along with developing a DLL/OCX database which would highlight immediate concerns just by looking at the installation files of any new software.
Dynamic Testing Dynamic testing confirms that a deliverable – typically some software – functions according to its specifications. Test scripts and recorded results should be agreed within an acceptance plan.
Dynamic testing can be based on two different aspects: black box and white box testing.
Black Box Testing Black box testing is the process of testing a function (such as a program which converts the format of an interface file) without having access to the code which is converting the data. The testing stages would consist of specifying the file before the conversion takes place and then confirming the changes which occur after the program has been run and converted the file. The name 'black box' comes from not being able to see how the function works.
White Box Testing White box testing on the other hand allows the tester to see the code which is converting the data. Consequently the tester can write tests to include data which will ‘trip up the code’.
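The distinction can be sketched in Python. The `convert_record` function below is hypothetical, standing in for the interface-file converter from the example; the first check treats it as a black box (input against expected output only), while the second uses knowledge of the code to deliberately target the single-field path.

```python
def convert_record(line):
    """Convert a comma-separated record to pipe-separated,
    uppercasing the first field.

    Hypothetical converter standing in for the interface-file program."""
    fields = line.strip().split(",")
    fields[0] = fields[0].upper()
    return "|".join(fields)

# Black-box test: we only compare input against expected output,
# with no knowledge of the implementation.
assert convert_record("abc,123,xyz") == "ABC|123|xyz"

# White-box test: knowing the code splits on commas, we feed a
# record with no commas to exercise the single-field branch.
assert convert_record("abc") == "ABC"
```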
Performance Testing Performance testing is the most effective way to gauge an application's or an environment's capacity and scalability. This type of testing must be automated and record the system's response times to a simulation of users logging onto the system. The expected performance ratio of users to response times will be identified before the tests are carried out. With good planning the performance tool can be used for ongoing analysis of the system and the behaviour of its users. Data can be assessed to identify the most popular times users log on and, consequently, the key times when the system will be under the greatest load.
Stress/Load Testing Such testing involves running the system under heavy loading by simulating users and functionality up to the point where the maximum loads anticipated by the design specification documentation are reached.
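A very rough sketch of simulated concurrent users in Python using threads. `simulated_request` is a hypothetical stand-in for a real user transaction; a real load test would use a dedicated tool against the live system and measure response times per request, not just total elapsed time.

```python
import threading
import time

def simulated_request():
    """Stand-in for one user transaction; sleeps briefly
    instead of hitting a real server."""
    time.sleep(0.01)

def run_load(users):
    """Run `users` simulated users concurrently and return
    the total elapsed time in seconds."""
    threads = [threading.Thread(target=simulated_request)
               for _ in range(users)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

elapsed = run_load(50)
print("50 simulated users completed in %.3fs" % elapsed)
```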
Volume Testing Such tests submit the system to large volumes of data. Normally this is automated and consists of multiple processes being run simultaneously, increasing the size of the transaction files being processed. The volume which has been specified within the business requirements documentation can be confirmed. As well as multiple files of increasing size, the system should be analysed and tested for single files too.
For instance, attached to a financial application was an audit log which detailed every transaction entered by the 120 users across the UK (not a high volume of users). The audit file was never refreshed, so the file just kept growing. The area which highlighted the potential issue was that the file was being used to create an interface file of transactions which had occurred that week. The point at which this file broke the integrity of the system was during the backup procedure run overnight. Within six months I identified that this file would grow to a surprising 1.2 gigabytes, which the system could still handle; however, during the backup procedure the system would require 2.4 gigabytes of space, which the Unix partition didn't have available.
Limit Testing At least one test should be developed for each of the documented system limits. Such tests are designed to investigate how the system will react to data which is maximal or minimal in the sense of attaining some limit, either specified within the system specification or the user guide.
During the system testing the system should be tested beyond the limits specified for it. The purpose here is to find any situation where insufficient safety margins have been built in.
Disaster Recovery Testing This is clearly a vital area of testing for safety-critical and similar systems. The system's reactions to failures of all sorts might need to be tested. During this testing we can identify any corruptions and potential downtime during system failure.
User Acceptance Testing (UAT) User acceptance testing is probably the testing term best known to non-testers. Consequently, if the testing structure and stages have not been performed correctly, users will tend to lump all aspects of testing into the User Acceptance Testing stage. This is often due to the defects from previous testing stages being fixed and regression tested. Some companies do not have the facility for multiple testing environments (one for system testing and one for UAT) as well as a development environment, so there is a high possibility that the regression system testing will also happen during the UAT stage.
In truth the User Acceptance Testing stage should not include any of its previous testing stages (though time and budget constraints often intervene), and the explanation of UAT is contained within its name. In short, User Acceptance Testing covers the processes and functionality performed by the users who will be using the system on a day-to-day basis. The tests will follow the processes from end to end with a fully functional and complete system. Additionally, and more difficult to identify, this phase will also include all the strange and wonderful things users will attempt to do with the software, even though the software was never designed to do them.
To identify some of the wonderful things the users will attempt, the tester must analyse the current system and identify the differences between the old system and the new. Less obvious scenarios can be obtained through testing methods such as Boundary Value Analysis & Equivalence class partitioning.
Maybe the best way to explain UAT is to break down each word within its name.
User
Users are the real business users who will have to operate the system on a day-to-day basis.
Acceptance
The users' acceptance that the system completes all the requirements needed for day-to-day usage of the software as a business tool which gives benefit to the business. If this is an upgrade from a previous system, then the goal should be that the user can complete all the previous functionality of the old system plus any new functionality which has been identified as beneficial.
Testing
This area can be broken into two halves:
(1) Testing the system to prove that it behaves and produces the results expected by the users. As you would expect these tested functions give the user confidence that the new software and system will do everything they expected it to do. The users will confirm what needs to be tested and will naturally sign off documentation which concludes that the tests performed cover everything they need for acceptance. They will be happy that business will continue with the new system.
(2) Testing the system to prove that it behaves and produces the results expected by the users even when they do the most obscure things which the software was never designed for. Sometimes 'only' users can perform these actions, as experienced software users would never do some of the things an inexperienced user would attempt.
Free Fall Testing This form of testing is normally done just before release to the users and uses a system which has already been tested. The general goal in this testing is to round up some of the key users of the software and allow them to free fall their way through completing the normal daily tasks they will be performing when the system goes live. Here some of the obscure things a user will attempt to do will be highlighted and a final lockdown of certain functionality can be identified.
Equivalence class partitioning A software testing technique which identifies a small set of input values that invoke as many different input conditions as possible.
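A minimal sketch of the technique in Python, with a hypothetical `classify_age` validator: one representative value is tested from each equivalence class rather than every possible input.

```python
def classify_age(age):
    """Hypothetical validator: ages 18 to 65 inclusive are accepted."""
    return 18 <= age <= 65

# Three equivalence classes: below range, in range, above range.
# One representative value stands in for every value in its class,
# since all members of a class should behave the same way.
representatives = {"below": 10, "valid": 40, "above": 80}

assert classify_age(representatives["below"]) is False
assert classify_age(representatives["valid"]) is True
assert classify_age(representatives["above"]) is False
```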
Boundary Value Analysis A test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values. The theory for boundary value analysis is that if the system performs correctly for these special values then it is likely to work correctly for all the numbers in-between.
An example would be a data field set to accept amounts of money from 0 to 10 pounds; the boundary values would be £0.00, £0.01, £9.99 and £10.00.
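The 0-to-10-pound example can be sketched in Python, with a hypothetical `accept_amount` validator checked at each boundary and just outside it.

```python
from decimal import Decimal

def accept_amount(pounds):
    """Hypothetical field validator: accepts amounts
    from £0.00 to £10.00 inclusive."""
    return Decimal("0.00") <= pounds <= Decimal("10.00")

# Boundary values from the example above...
assert accept_amount(Decimal("0.00"))
assert accept_amount(Decimal("0.01"))
assert accept_amount(Decimal("9.99"))
assert accept_amount(Decimal("10.00"))

# ...plus the values just outside each boundary, which must be rejected.
assert not accept_amount(Decimal("-0.01"))
assert not accept_amount(Decimal("10.01"))
```

`Decimal` is used rather than floats so that amounts like 0.01 compare exactly.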
Compatibility Testing / Data Migration Tests are made to probe where the new system does not subsume the facilities and modes of use of the old system where it was intended to do so. In these cases, and dependent on the system being tested, a parallel run should be considered so that data from the old system can be directly compared to the new system.
Security Testing Tests are performed to attempt to compromise the system's security. This includes, as an example, accessing an Oracle database containing the data using multiple logins or unauthorised IDs. Additionally, hacking tools could be considered if the system can be accessed externally, such as over the internet.
Source: CoolInterview.com
Answered by: vishakha | Date: 6/26/2008
Types of testing: alpha testing, beta testing, gamma testing, ad-hoc testing, Agile testing, smoke testing, sanity testing, security testing, endurance testing, exhaustive testing, exploratory testing, mutation testing, I18N testing, L10N testing, concurrency testing, dependency testing, context-driven testing, data-driven testing, monkey testing, gorilla testing, vendor validation testing, benefit realization testing, volume testing, retesting, regression testing.
Source: CoolInterview.com
Answered by: santhosh | Date: 9/17/2008