1. Software testing activities should start
a. as soon as the code is written
b. during the design stage
c. when the requirements have been formally documented
d. as soon as possible in the development lifecycle
2. Faults found by users are due to:
a. poor quality software
b. poor software and poor testing
c. bad luck
d. insufficient time for testing
3. What is the main reason for testing software before releasing it?
a. to show that the system will work after release
b. to decide when the software is of sufficient quality to release
c. to find as many bugs as possible before release
d. to give information for a risk-based decision about release
e. to use up the time between the end of development and release
4. Which of the following statements is not true:
a. Performance testing can be done during unit testing as well as during the testing of the whole system.
b. The acceptance test does not necessarily include a regression test.
c. Verification activities should not involve testers (reviews, inspections, etc.).
d. Test environments should be as similar to production environments as possible.
5. When reporting faults found to developers, testers should be:
a. as polite, constructive and helpful as possible
b. firm about insisting that a bug is not a "feature" if it should be fixed
c. diplomatic, sensitive to the way they may react to criticism
d. subservient, after all the developers know what they are doing
e. a, b and c above
6. In which order should tests be run?
a. the most important tests first
b. the most difficult tests first (to allow maximum time for fixing)
c. the easiest tests first (to give initial confidence)
d. the order they are thought of
7. The later in the development life cycle a fault is discovered, the more expensive it is to fix. Why?
a. the documentation is poor, so it takes longer to find out what the software is doing.
b. wages are rising
c. the fault has been built in to more documentation, code, tests, etc.
d. none of the above
8. Which is not true? The black box tester
a. should be able to understand a functional specification or requirements document
b. should be able to understand the source code
c. is highly motivated to find faults
d. is creative to find the system's weaknesses
9. A test design technique is
a. a process for selecting test cases
b. a process for determining expected outputs
c. a way to measure the quality of software
d. a way to describe in a test plan what has to be done
e. all of the above
10. Testware (test cases, test data, etc.)
a. needs configuration management just like requirements, design and code
b. should be newly constructed for each new version of the software
c. is needed only until the software is released into production or use
d. does not need to be documented and commented, as it does not form part of the released software system
11. An incident logging system
a. only records defects
b. is of limited value
c. is a valuable source of project information during testing if it contains all incidents
d. should be used only by the test team
12. Increasing the quality of the software, by better development methods, will affect the time needed
for testing (the testing phases) by:
a. reducing test time
b. no change
c. increasing test time
13. Coverage measurement
a. has nothing to do with testing
b. is a partial measure of test thoroughness
c. branch coverage should be mandatory for all software
d. can only be applied at unit or module testing, not at system testing
14. When should you stop testing?
a. when time for testing has run out
b. when all planned tests have been run
c. when the test completion criteria have been met
d. when no faults have been found by the tests run
e. when the software is proven perfect
15. Which of the following is true?
a. component testing should be black box, system testing should be white box
b. if you find a lot of bugs in testing, you should not be very confident about the quality of the software
c. if you do a lot of testing, you should be confident about the quality of the software.
d. the fewer bugs you find, the better your testing was
e. the more tests you run, the more bugs you will find
f. none of the above
16. What is the most important criterion in deciding what testing technique to use?
a. how well you know a particular technique
b. the objective of the test
c. how appropriate the technique is for testing the application
d. whether there is a tool to support the technique.
17. If the pseudocode below were a programming language, how many tests are required to achieve
100% statement coverage?
1. If x=3 then
2.    Display_messageX;
3. If y=2 then
4.    Display_messageY;
5. else
6.    Display_messageZ;
18. Using the same code example as question 17, how many tests are required to achieve 100% branch coverage?
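The coverage reasoning in questions 17 and 18 can be checked by translating the pseudocode into runnable Java that records which display statements execute. This is a sketch assuming the usual form of the exercise (messageY and messageZ on opposite branches of the second If); the class name and boolean flags are illustrative only:

```java
// Java translation of the Q17 pseudocode, with flags recording which
// "display" statements have executed.
public class CoverageDemo {
    static boolean hitX, hitY, hitZ;

    static void program(int x, int y) {
        if (x == 3) {
            hitX = true;      // Display_messageX
        }
        if (y == 2) {
            hitY = true;      // Display_messageY
        } else {
            hitZ = true;      // Display_messageZ
        }
    }

    public static void main(String[] args) {
        program(3, 2);        // test 1: reaches messageX and messageY
        program(0, 0);        // test 2: reaches messageZ
        // After two tests, every display statement has executed at least once.
        System.out.println(hitX && hitY && hitZ); // prints "true"
    }
}
```

Because messageY and messageZ can never execute in the same run, one test is not enough; the same two tests also take each condition both true and false, so they give 100% branch coverage as well.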
19. Which of the following is NOT a type of non-functional test?
20. Which of the following tools would you use to detect a memory leak?
a. Static analysis
b. Coverage analysis
c. Dynamic analysis
d. Memory analysis.
21. Which of the following is NOT a standard related to testing?
a. IEEE 829
b. IEEE 610
c. BS 7925-1
d. BS 7925-2
22. Which of the following is the component test standard?
a. IEEE 829
b. IEEE 610
c. BS 7925-1
d. BS 7925-2
23. Which of the following statements is true?
a. Faults in program specifications are the most expensive to fix.
b. Faults in code are the most expensive to fix.
c. Faults in requirements are the most expensive to fix.
d. Faults in designs are the most expensive to fix.
24. Which of the following is NOT an integration strategy?
25. Which of the following is a black box test design technique?
a. statement testing
b. equivalence partitioning
d. usability testing
26. A program with high cyclomatic complexity is most likely to be:
c. Difficult to write
d. Difficult to test
27. Which of the following is a static test?
a. code inspection
b. coverage analysis
c. usability assessment
d. installation test
28. Which of the following is the odd one out?
a. white box
b. glass box
29. A program validates a numeric field as follows:
Values less than 10 are rejected, values between 10 and 21 are accepted, values greater than or
equal to 22 are rejected.
Which of the following input values cover all of the equivalence partitions?
a. 10, 11, 21
b. 3, 20, 21
c. 3, 10, 22
d. 10, 21, 22
30. Using the same specification as question 29, which of the following covers the MOST boundary values?
a. 9, 10, 11, 22
b. 9, 10, 21, 22
c. 10, 11, 21, 22
d. 10, 11, 20, 21
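The partitions and boundaries behind questions 29 and 30 can be checked with a few lines of code. A sketch in Java (the FieldValidator class is illustrative, not part of the exam):

```java
// Validator for the numeric field in Q29/Q30:
// values 10..21 inclusive are accepted, everything else is rejected.
public class FieldValidator {
    static boolean accept(int value) {
        return value >= 10 && value <= 21;
    }

    public static void main(String[] args) {
        // One value from each equivalence partition:
        System.out.println(accept(3));   // below 10 -> false (reject partition)
        System.out.println(accept(10));  // 10..21   -> true  (accept partition)
        System.out.println(accept(22));  // 22 & up  -> false (reject partition)

        // Boundary values sit on either side of each edge: 9/10 and 21/22.
        System.out.println(accept(9));   // false
        System.out.println(accept(21));  // true
    }
}
```

Each partition contributes one representative value, so 3, 10, 22 together hit all three partitions (option c in question 29), while 9, 10, 21, 22 are the four boundary values (option b in question 30).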
Saturday, May 31, 2008
Friday, May 30, 2008
How do you start the application under test?
A - use the Windows Start menu
B - simply begin recording
C - open a command line and start the application by typing its name
D - click the Start Application button on the recording toolbar
What can be tested when recording a verification point?
A - an object's data only
B - an object's data or properties only
C - whether or not the object is working
D - an object's data, properties, or existence
While recording a script, the recording monitor _____.
A - appears at the conclusion of recording
B - is only displayed on the toolbar
C - does not appear
D - displays a message for each action
What can you use to select an object as a verification point?
A - the object finder, the object picker, or the object browser
B - the main object browser, the test object browser, or the extra object browser
C - the object finder, the test object browser, or the delay method
D - the delay method, the scripting method, or the pointer method
How do you stop recording?
A - click the Stop Recording button on the recording toolbar
B - end the application under test
C - close RFT
D - close the recording monitor
A recording is started by:
A - Entering script_record on the command line
B - creating a script and then pressing the record button in the RFT application
C - starting the application under test
D - Starting RFT
What must you do to view a comparator of a failed verification point from an RFT text log?
A - open a web browser and browse to open the file: \
B - right-click on the test log and select Failed Verification Points from the right-click menu, then
select the verification point you want to view
C - open the test log, right-click on the verification point line you want to view and select View
Results from the right-click menu
D - log results in another format since you cannot open a comparator from a text log
Given an existing TestManager Rational Test project, what are the steps to log results to TestManager?
A - from TestManager, create a new Test Script Type for RFT, then from RFT, select the
Functional Test logging preferences to TestManager
B - from RFT, select the Functional Test logging preferences to TestManager, then select the
TestManager project when you run an RFT test
C - from RFT, associate the Rational Test Project with the RFT project, then select the Functional
Test logging preferences to TestManager
D - from the Rational Administrator, associate the RFT project to the Rational Test Project, then
from RFT, select the Functional Test logging preferences to TestManager
Out of the box, what are the different options for logging RFT tests?
A - HTML, text, custom, TestManager, and none
B - HTML, text, TPTP, TestManager, and none
C - TestManager, CQTM, TPTP, HTML, and none
D - HTML, PDF, text, TestManager, and none
Not including TestManager or custom logging, how can you organize RFT test results?
A - define and follow a naming convention for all test logs
B - define and follow a naming convention for all logs and log subfolders
C - create as many folders in the *_logs project as needed and drag logs into the appropriate folders
D - create additional log projects which are associated with the primary RFT project, (for example,
How do you perform image verification in a test?
A - select Perform Image Verification Point from the Verification Point and Action Wizard
B - select the Perform Properties Verification Point from the Verification Point and Action Wizard,
then select only the .src or other property for the image
C - download and install the RFT Enhancement Pack plug-in from IBM Rational Support
D - download and install the Image Comparator for Rational Functional Tester 2003.06 utility from
What should the tester open to view Test Objects, Main Data Area and Recognition Data?
A - the test script
B - the test comparator
C - the object map
D - the log viewer
Which three actions are possible with RFT? (Choose three.)
A - use a wizard to substitute literals with datapool variables
B - substitute literals in verification points with datapool variables
C - create a datapool while recording a data-driven script
D - create scripts in c#
Answer: A, B, C
You must _____ a script with a datapool before substituting literal values in the script with
references to datapool variables.
A - share
B - associate
C - run
D - disassociate
When is the best time to use data-driven testing?
A - when the test only needs to be run once
B - when the test steps change based on the test input data
C - when the test must be run multiple times with different data
D - when the test requires a lot of manual data entry
Functional Tester allows you to import an external datapool from which of the following? (Choose three.)
A - an external .csv file
B - another Functional Tester datapool
C - an existing TestManager datapool
D - an access (.mdb) file
Answer: A, B, C
What will the following CallScript do? CallScript (myScript, null, DP_ALL)
A - cause the script to run an infinite number of times
B - cause the script to iterate through the entire datapool
C - cause the script to run through 100 datapool values
D - cause myScript to always pass
What is one way to insert data-driven commands into the test script?
A - use the Insert Data Driven Commands button while recording
B - use the Insert Data Driven Commands button while executing
C - the commands are generated automatically while importing the datapool
D - use the super helper class after recording
What must you do before editing the datapool using an external application?
A - make it a public datapool
B - make it a private datapool
C - export it to a .csv file
D - import it from a .csv file
Which statement is true about an RFT test datapool?
A - It is exclusive for only one test script.
B - It is a collection of related data records.
C - It is automatically generated during script record.
D - It is a collection of related test scripts.
Thursday, May 22, 2008
With Functional Tester, test scripts are created for the purpose of retesting system functionality. Scripts are recorded once and are amended according to the test requirements. With every build of the system, the scripts are executed to validate previously working functionality. Scripts are generally executed in unattended mode, which gives the testing team more time to test other functionality of the application.
Features of Rational Functional Tester:
1) It uses object-mapping technology while recording scripts, which helps in finding an object even if the object's position has changed in the next build.
2) We can record and play back our scripts on Windows, but on Linux we can only play back.
3) It uses datapools, which let us run our scripts against multiple sets of data.
To start working with Functional Tester we need to follow these steps:
1) Configure the Java runtime environment.
2) Configure the application for testing.
3) Create a test project.
Configuring the Java runtime environment.
Create a test project
A test project is the location where Functional Tester keeps all of your scripts, along with expected and actual results. In Functional Tester we can use Java or .NET for recording the scripts. If you're using the Java language, proceed to Create a Java test project. If you're using Visual Basic .NET, proceed to Create a .NET test project.
1) First open the workspace where you want your tests to be recorded.
2) Select File > New > Functional Test Project to create a new project.
3) Name the project Testproject and click Finish to create your project.
4) Now you can record scripts related to the test project.
1) Start your recording by selecting Script > Add Script Using Recorder... You can also click the red record button on the toolbar.
2) Enter the script name as Test Script.
3) Click Finish to start recording. The scripting environment minimizes and the recording toolbar appears.
4) Click Start Application on the recording toolbar; it is the third icon from the left and looks like a window behind a green triangle.
5) Select the application that you want to test.
6) Perform the actions on the application; the recorder will record your mouse and keyboard actions and generate Java code for those steps.
7) Click the stop button on the recording toolbar when you are finished with your recording.
8) You can play back the recorded script to check whether it was recorded correctly.
Data-Driven Testing:
This is the main feature of an automation testing tool that enables us to test the same part of the application with different sets of data. From the recording toolbar, drag the Insert Data Driven Commands icon over the object that you want to use for data-driven testing, and that object will be encased in a red square.
Datapools: To make your scripts usable with multiple sets of data, you can make them data-driven and store that data in a datapool.
1) Right-click the project name that you created and select Add Datapool.
2) Name the datapool and click Finish.
3) Now you can import your CSV file into the datapool.
4) Check the box "first value as variable name" if you want the first row of the datapool to supply the variable names, which makes future reference easier.
5) You can associate this datapool with any script under the project.
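The datapool idea above — keep the test data outside the script and iterate over it — can be sketched in plain Java. This uses no RFT APIs; the simplified CSV parsing is a stand-in for what Functional Tester does when it imports a .csv file into a datapool:

```java
import java.util.ArrayList;
import java.util.List;

// A simplified stand-in for an RFT datapool: the first row names the
// variables ("first value as variable name"), each later row is one record.
public class DatapoolSketch {
    static List<String[]> parse(String csv) {
        List<String[]> rows = new ArrayList<>();
        for (String line : csv.split("\n")) {
            rows.add(line.split(","));
        }
        return rows;
    }

    public static void main(String[] args) {
        String csv = "username,password\nalice,secret1\nbob,secret2";
        List<String[]> rows = parse(csv);
        String[] header = rows.get(0); // variable names: username, password

        // Data-driven loop: the same "test steps" run once per record.
        for (int i = 1; i < rows.size(); i++) {
            String[] record = rows.get(i);
            System.out.println(header[0] + "=" + record[0]
                    + ", " + header[1] + "=" + record[1]);
        }
    }
}
```

RFT's own datapool iteration (for example, running a script with DP_ALL) follows the same shape: one pass of the test steps per record.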
With a verification point we can check the state of an object. In Functional Tester we have two types of verification points:
1) Data verification point
2) Property verification point
If we want to check some static or dynamic data in our application, we can use a data verification point. For example, if we want to check whether the values in a combo box are still right after the next build, we can:
1) Simply insert a verification point for the combo box.
2) Select the combo box with the object finder button; it will be encased in a red box.
3) Select Perform Data Verification Point.
4) Once all the values within the combo box have been captured, click Finish.
5) This creates a baseline for the combo box. During playback against the next build, the script will check whether the values in the combo box are the same as those in the baseline.
Similarly, we can use a verification point to check dynamic data. For example, suppose a transaction number is generated for every transaction in the application, and suppose it should be numeric every time; to test this we can use patterns.
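Pattern-based checking of dynamic data like the transaction number can be sketched with a regular expression in plain Java (the class name and the all-digits format are illustrative assumptions, not taken from the application above):

```java
import java.util.regex.Pattern;

// A dynamic value such as a transaction number changes on every run,
// so instead of comparing it to a fixed baseline we check its pattern.
public class PatternCheck {
    // Assumed format: one or more digits (purely illustrative).
    static final Pattern NUMERIC = Pattern.compile("\\d+");

    static boolean isNumeric(String value) {
        return NUMERIC.matcher(value).matches();
    }

    public static void main(String[] args) {
        System.out.println(isNumeric("483920")); // true: all digits
        System.out.println(isNumeric("TX-42"));  // false: contains non-digits
    }
}
```

The verification point then passes whenever the captured value matches the pattern, regardless of the exact number generated in that run.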
Logs: When our script has been completely played back, logs are generated for it, from which we can know whether the run passed or failed. We can save the logs in HTML, text, or TestManager format.
Thursday, May 15, 2008
Verification: A process of predefined activities carried out before the testing process starts: checking whether all relevant documents have been prepared and ensuring that they meet the standards created for the requirements.
Validation: The process of testing the application itself in order to deliver an error-free application.
Integration testing: Testing the integration between modules.
System testing: Testing the system as a whole to check whether it meets the specified requirements.
User Acceptance Testing: This is the testing done by actual users to determine whether the system satisfies the acceptance criteria.
Regression testing: Testing to check that, while fixing some bugs, no unintended bugs have been introduced into the system.
Performance Testing: Testing whether the system meets the specified performance requirements.
Usability Testing: Checking whether the system is easy for users to use. It also verifies the ease of learning the software, including the user documentation.
Configuration Testing: Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.
Recovery Testing: Used to verify the software's restart capabilities after a disaster.
Security Testing: Testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
Smoke Testing: typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
Alpha Testing: testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
Beta Testing: testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
Load Testing: Testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.
Soak Testing: Running a system at high levels of load for prolonged periods of time. A soak test would normally execute several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
Compatibility Testing: Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.
Exploratory Testing: Often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
Ad-hoc Testing: Similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it. A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality.
Context-driven Testing: Testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different from that for a low-cost computer game.
Accessibility Testing: Verifying that a product is accessible to people with disabilities (deaf, blind, mentally disabled, etc.).
Friday, May 2, 2008
Dynamic Testing: Dynamic testing means running and executing the software. It basically involves validation by checking the output produced for a specific set of inputs.
Unit Testing: Unit testing means testing an individual part of the software. A unit is considered the smallest part of the software that can be tested. This is done by the developers.
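A minimal sketch of a unit test in plain Java follows (the Calculator class is hypothetical, and real projects would normally use a framework such as JUnit):

```java
// The unit under test: a deliberately tiny, hypothetical class.
class Calculator {
    static int add(int a, int b) {
        return a + b;
    }
}

// A minimal hand-rolled unit test for it.
public class CalculatorTest {
    public static void main(String[] args) {
        // Check a normal case and a case involving a negative number.
        if (Calculator.add(2, 3) != 5) throw new AssertionError("2+3");
        if (Calculator.add(-1, 1) != 0) throw new AssertionError("-1+1");
        System.out.println("all unit tests passed");
    }
}
```

The point is the isolation: only one small unit is exercised, with no other modules involved.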
Integration Testing: Integration testing means testing the interaction between two or more modules. When the output of one module is an input to another module, integration testing is done to check whether they interact correctly.
System Testing: System testing means testing the system as a whole. This covers all the functional and non-functional requirements and basically falls within black box testing. It is considered the most important phase of testing.
Black Box Testing: In black box testing, the tester only knows what the software is supposed to do; he can't look into the box to see how the system works. He simply gives input and checks the output; he does not know how or why it happens. Black box testing is also referred to as functional or behavioral testing.
White Box Testing: In white box testing, the tester can look into the code and examine it for testing purposes. This is also known as clear box testing, as the tester can see into the box. This type of testing is normally done by the developers.