Monday, December 15, 2008

ISTQB Exam Paper


21 Given the following types of tool, which tools would typically be used by developers and which by an independent test team?
i. Static analysis
ii. Performance testing
iii. Test management
iv. Dynamic analysis
v. Test running
vi. Test data preparation
a) Developers would typically use i, iv and vi; test team ii, iii and v
b) Developers would typically use i and iv; test team ii, iii, v and vi
c) Developers would typically use i, ii, iii and iv; test team v and vi
d) Developers would typically use ii, iv and vi; test team i, ii and v
e) Developers would typically use i, iii, iv and v; test team ii and vi
22 The main focus of acceptance testing is:
a) Finding faults in the system
b) Ensuring that the system is acceptable to all users
c) Testing the system with other systems
d) Testing from a business perspective
e) Testing by an independent test team
23 Which of the following statements about the component testing standard is false?
a) Black box design techniques all have an associated measurement technique
b) White box design techniques all have an associated measurement technique
c) Cyclomatic complexity is not a test measurement technique
d) Black box measurement techniques all have an associated test design technique
e) White box measurement techniques all have an associated test design technique
24 Which of the following statements is NOT true?
a) Inspection is the most formal review process
b) Inspections should be led by a trained leader
c) Managers can perform inspections on management documents
d) Inspection is appropriate even when there are no written documents
e) Inspection compares documents with predecessor (source) documents
25 A typical commercial test execution tool would be able to perform all of the following EXCEPT:
a) Generating expected outputs
b) Replaying inputs according to a programmed script
c) Comparison of expected outcomes with actual outcomes
d) Recording test inputs
e) Reading test values from a data file
26 The difference between re-testing and regression testing is
a) Re-testing is running a test again; regression testing looks for unexpected side effects
b) Re-testing looks for unexpected side effects; regression testing is repeating those tests
c) Re-testing is done after faults are fixed; regression testing is done earlier
d) Re-testing uses different environments, regression testing uses the same environment
e) Re-testing is done by developers; regression testing is done by independent testers
27 Expected results are:
a) Only important in system testing
b) Only used in component testing
c) Never specified in advance
d) Most useful when specified in advance
e) Derived from the code
28 Test managers should not:
a) Report on deviations from the project plan
b) Sign the system off for release
c) Re-allocate resources to meet original plans
d) Raise incidents on faults that they have found
e) Provide information for risk analysis and quality improvement
29 Unreachable code would best be found using:
a) Code reviews
b) Code inspections
c) A coverage tool
d) A test management tool
e) A static analysis tool
30 A tool that supports traceability, recording of incidents or scheduling of tests is called:
a) A dynamic analysis tool
b) A test execution tool
c) A debugging tool
d) A test management tool
e) A configuration management tool
31 What information need not be included in a test incident report?
a) How to fix the fault
b) How to reproduce the fault
c) Test environment details
d) Severity, priority
e) The actual and expected outcomes
32 Which expression best matches the following characteristics of review processes:
1. Led by author
2. Undocumented
3. No management participation
4. Led by a trained moderator or leader
5. Uses entry and exit criteria
s) Inspection
t) Peer review
u) Informal review
v) Walkthrough
a) s = 4, t = 3, u = 2 and 5, v = 1
b) s = 4 and 5, t = 3, u = 2, v = 1
c) s = 1 and 5, t = 3, u = 2, v = 4
d) s = 5, t = 4, u = 3, v = 1 and 2
e) s = 4 and 5, t = 1, u = 2, v = 3
33 Which of the following is NOT part of system testing?
a) Business process-based testing
b) Performance, load and stress testing
c) Requirements-based testing
d) Usability testing
e) Top-down integration testing
34 What statement about expected outcomes is FALSE?
a) Expected outcomes are defined by the software’s behavior
b) Expected outcomes are derived from a specification, not from the code
c) Expected outcomes include outputs to a screen and changes to files and databases
d) Expected outcomes should be predicted before a test is run
e) Expected outcomes may include timing constraints such as response times
35 The standard that gives definitions of testing terms is:
a) ISO/IEC 12207
b) BS7925-1
c) BS7925-2
d) ANSI/IEEE 829
e) ANSI/IEEE 729
36 The cost of fixing a fault:
a) Is not important
b) Increases as we move the product towards live use
c) Decreases as we move the product towards live use
d) Is more expensive if found in requirements than in functional design
e) Can never be determined
37 Which of the following is NOT included in the Test Plan document of the Test Documentation Standard?
a) Test items (i.e. software versions)
b) What is not to be tested?
c) Test environments
d) Quality plans
e) Schedules and deadlines
38 Could reviews or inspections be considered part of testing:
a) No, because they apply to development documentation
b) No, because they are normally applied before testing
c) No, because they do not apply to the test documentation
d) Yes, because both help detect faults and improve quality
e) Yes, because testing includes all non-constructive activities
39 Which of the following is not part of performance testing:
a) Measuring response time
b) Measuring transaction rates
c) Recovery testing
d) Simulating many users
e) Generating many transactions
40 Error guessing is best used
a) As the first approach to deriving test cases
b) After more formal techniques have been applied
c) By inexperienced testers
d) After the system has gone live
e) Only by end users
Answers
21 B
22 D
23 A
24 D
25 A
26 A
27 D
28 C
29 A
30 E
31 E
32 B
33 E
34 A
35 B
36 B
37 D
38 D
39 C
40 B

Tuesday, October 7, 2008

Database Testing

Database systems play an important role in nearly every modern organization. The ultimate objective of database analysis, design, and implementation is to establish an electronic data store corresponding to a user’s conceptual world.
Functionality of the database is a critical aspect of an application's quality; problems with the database could lead to data loss or security violations, and may put a company at legal risk depending on the type of data being stored. Most applications are built around a database, and improving the quality of the data in an organization is often a daunting task. A database should be evaluated throughout the database development life cycle in order to produce a quality database application.
Data in a database may be input from and displayed on a number of different types of systems. Each of these types of systems has unique limitations, which may dictate how data should be formatted in your database. A database should be evaluated on factors such as data integrity, consistency, normalization, performance, security and, very importantly, the expectations of its end users.
The database design process is driven by the requirements and needs of the end user. Uncertainty about the requirements can be reduced only after significant analysis and discussion with users. Once the user requirements are clear, implementation consists of designing and constructing a solution in the solution domain that follows the problem domain. Because of the difficulties associated with changing requirements, the database developer must attempt to develop a database model that closely matches the perception of the user, and deliver a design that can be implemented, maintained and modified in a cost-effective way. Diagrammatic representations such as entity relationship diagrams, object models and data flow diagrams allow this information to be described visually in a meaningful way.
Database testing is one of the most challenging tasks for a software testing team.
A database tester who understands referential integrity and database security, and who has a good grasp of the various technologies and data formats used to transfer and retrieve data from the database, can test the database effectively and avoid problems. Testing should be included in the various phases of the database development life cycle, which typically runs from planning through design, testing and deployment. In the first phase of the database development process, requirements are gathered; checklists can be used as part of the evaluation of the database specification. After gathering requirements and understanding the need for the database, a preliminary list of the fields of data to be included in the database should be prepared. We should have complete information about what information is required by the client and what types of fields are needed to produce that information. Next, determine whether the logical data model is complete and correct, and confirm the design with the business community in a design review. Create a logical Entity Relationship Diagram (ERD) to graphically represent the data store, determine whether the data model is fully documented (entities, attributes, relationships), and check that attributes have the correct data type, length, NULL status and default values. A general discussion of the business rules to be enforced by database-level constraints should be carried out, e.g.:
• Not null constraints
• Check constraints
• Unique constraints
• Primary key constraints
• Foreign key constraints
Business rules to be enforced by triggers and procedures should be discussed, along with the business rules to be enforced by application code. After this, the normal forms should be tested with the help of test data. Testing the physical database design includes testing table definitions, constraints, triggers and procedures. Black-box techniques such as boundary value analysis can be used. We can test a table definition by testing the column definitions and the constraints that have been imposed. Database constraints can be checked as follows (a sketch follows the list):
1. Primary key: write a test case to insert a duplicate value into the primary key column.
2. Insert a record that violates a referential integrity constraint.
3. Delete a record so as to violate a referential integrity constraint.
4. Insert a NULL value into a NOT NULL column.
5. Insert values that violate check constraints by using values outside the input domain.
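A minimal sketch of these five checks, using Python's built-in sqlite3 module and hypothetical table names (any DBMS and client library could be used the same way):

import sqlite3

# In-memory database with two hypothetical tables that carry the constraint
# types listed above: primary key, foreign key, NOT NULL, UNIQUE and CHECK.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.executescript("""
CREATE TABLE category (
    cat_id INTEGER PRIMARY KEY,
    name   TEXT NOT NULL UNIQUE
);
CREATE TABLE product (
    prod_id INTEGER PRIMARY KEY,
    cat_id  INTEGER REFERENCES category(cat_id),
    price   REAL CHECK (price >= 0)
);
INSERT INTO category VALUES (1, 'Stationery');
INSERT INTO product  VALUES (100, 1, 9.99);
""")

def expect_rejection(sql):
    # Each check passes only if the database rejects the statement.
    try:
        conn.execute(sql)
        print("FAIL - accepted:", sql)
    except sqlite3.IntegrityError as exc:
        print("OK - rejected:", sql, "|", exc)

expect_rejection("INSERT INTO category VALUES (1, 'dup')")    # 1. duplicate primary key
expect_rejection("INSERT INTO product VALUES (101, 99, 5)")   # 2. insert violating referential integrity
expect_rejection("DELETE FROM category WHERE cat_id = 1")     # 3. delete violating referential integrity
expect_rejection("INSERT INTO category VALUES (2, NULL)")     # 4. NULL into a NOT NULL column
expect_rejection("INSERT INTO product VALUES (102, 1, -10)")  # 5. value outside the CHECK constraint's domain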
For relational databases, queries are written using SQL. We can test database queries by identifying different situations and data values. SQL conditions can be tested using decision condition coverage (see the sketch after this list):
a. SELECT statements use conditions in the WHERE clause and, for GROUP BY columns, in the HAVING clause.
i. Conditions written with the AND logical operator require the (T, T), (T, F) and (F, T) outcomes of the two operands to be tested.
ii. Conditions written with the OR logical operator require the (F, F), (T, F) and (F, T) outcomes to be tested.
Testing requires that every condition affecting the result takes all possible outcomes at least once.
b. Test SQL statements involving NULL values.
i. This requires testing conditions with each operand in the condition taking a NULL value.
ii. For a GROUP BY clause, NULL values have to be considered.
c. For subqueries, include test cases that return zero rows and test cases that return one or more rows.
d. UPDATE, INSERT and DELETE statements also need their conditions tested.
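To illustrate, a decision condition coverage check for a hypothetical AND condition could look like the sketch below; for an OR condition the (F, F), (T, F) and (F, T) rows would be used instead. Table and column names are invented.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (order_id INTEGER, qty INTEGER, status TEXT);
-- Rows chosen so that (qty > 10, status = 'OPEN') takes the
-- outcomes (T,T), (T,F) and (F,T) at least once.
INSERT INTO orders VALUES (1, 20, 'OPEN');    -- (T, T): should be selected
INSERT INTO orders VALUES (2, 20, 'CLOSED');  -- (T, F): should not be selected
INSERT INTO orders VALUES (3,  5, 'OPEN');    -- (F, T): should not be selected
""")

rows = conn.execute(
    "SELECT order_id FROM orders WHERE qty > 10 AND status = 'OPEN'"
).fetchall()

assert rows == [(1,)], rows   # only the (T, T) row satisfies the AND condition
print("AND condition covered; selected rows:", rows)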

Apart from testing the table definitions and SQL statements, a database tester should test the triggers, procedures and functions. These objects can be unit tested by finding the various paths of execution through the code, and their functionality can be tested by executing the code, providing the required inputs and checking the output generated.
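For instance, a trigger can be checked by executing the statement that should fire it and then inspecting its effect. A small sketch with a hypothetical audit trigger in SQLite:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE account (acc_no TEXT PRIMARY KEY, balance REAL);
CREATE TABLE audit   (acc_no TEXT, action TEXT);

-- Hypothetical trigger under test: log every insert into account.
CREATE TRIGGER trg_account_insert AFTER INSERT ON account
BEGIN
    INSERT INTO audit VALUES (NEW.acc_no, 'INSERT');
END;
""")

# Execute the code path that should fire the trigger ...
conn.execute("INSERT INTO account VALUES ('A100', 500.0)")

# ... and check the output it generated.
assert conn.execute("SELECT * FROM audit").fetchall() == [('A100', 'INSERT')]
print("trigger fired and wrote the expected audit record")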
Let's see how we can design test cases to test a table definition, i.e. the column definitions and constraints. Refer to the 'Item' table, which stores details of items in stock. Details of sales orders placed for various items are stored in another table, 'Sales'. The table definitions are as follows:
Table Definitions
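A hypothetical sketch of these definitions, inferred from the test cases that follow (the actual data types and check ranges may differ), is:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE item (
    item_code   TEXT PRIMARY KEY
                CHECK (item_code BETWEEN 'I001' AND 'I555'),  -- I000 and I556 are rejected
    description TEXT NOT NULL,
    price       REAL CHECK (price >= 0)
);
CREATE TABLE sales (
    order_id  INTEGER,   -- a CHECK constraint on order_id is implied by TCcheck_orderID,
                         -- but its exact rule is not given, so it is omitted here
    item_code TEXT REFERENCES item(item_code)
);
""")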
To test the constraints, we can design test cases as follows:
ITEM Table
Test Case ID: TCcheck_itemcodePK
Objective: To evaluate the primary key constraint on the item code column in the Item table.
Description: Insert two records with I001 as the item code.
Expected Result: The second record should not be saved in the database.
Test Case ID: TCcheck_itemcode1
Objective: To evaluate the check constraint on the item code column in the Item table.
Description: Insert a record with I001 as the item code.
Expected Result: The record should be saved in the database.
Test Case ID: TCcheck_itemcode2
Objective: To evaluate the check constraint on the item code column in the Item table.
Description: Insert a record with I555 as the item code.
Expected Result: The record should be saved in the database.
Test Case ID: TCcheck_itemcode3
Objective: To evaluate the check constraint on the item code column in the Item table.
Description: Insert a record with the invalid item code I000.
Expected Result: An error message should be displayed.
Test Case ID: TCcheck_itemcode4
Objective: To evaluate the check constraint on the item code column in the Item table.
Description: Insert a record with I556 as the item code.
Expected Result: An error message should be displayed.
Test Case ID: TCcheck_Description1
Objective: To evaluate the NOT NULL constraint on the description column in the Item table.
Description: Insert a record with no value for the description column.
Expected Result: An error message should be displayed.
Test Case ID: TCcheck_price
Objective: To evaluate the check constraint on the price column in the Item table.
Description: Insert a record with price = -10.
Expected Result: An error message should be displayed.
Test Case ID: TCcheck_delete
Objective: To evaluate the referential integrity constraint on the item code column in the Item table.
Description: Insert a record in the Sales table with item code I001, then delete the record from the Item table where item code = I001.
Expected Result: An error message should be displayed.
SALES Table
Test Case ID: TCcheck_itemcode
Objective: To evaluate the references constraint on the item code column in the Sales table.
Description: Insert a record with an item code that does not exist in the Item table.
Expected Result: An error message should be displayed.
Test Case ID: TCcheck_orderID
Objective: To evaluate the check constraint on the Order_ID column in the Sales table.
Description: Insert a record with Order_ID 1001.
Expected Result: An error message should be displayed.
Now consider a requirement to display details of all the items for which the quantity ordered is greater than 10. We can find the required details with the following SQL query:

SELECT item_code FROM sales WHERE qty > 10

To test this query we can prepare a test Sales table containing records with qty > 10 as well as records that do not match; when executed, the query should return all the matching records and only those. Similarly, we can check the query by preparing a table that has no matching records, as in the sketch below.
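A sketch of that check, again with SQLite and invented test data:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (item_code TEXT, qty INTEGER)")

# Test table with both matching (qty > 10) and non-matching rows.
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("I001", 15), ("I002", 10), ("I003", 25), ("I004", 3)])

rows = conn.execute("SELECT item_code FROM sales WHERE qty > 10").fetchall()
assert sorted(rows) == [("I001",), ("I003",)], rows   # exactly the matching records

# Repeat against a table with no matching records: the query should return nothing.
conn.execute("DELETE FROM sales WHERE qty > 10")
assert conn.execute("SELECT item_code FROM sales WHERE qty > 10").fetchall() == []
print("query returns exactly the rows with qty > 10")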


Thursday, September 25, 2008

ISTQB Exam Paper


ISTQB Exam Paper 1
1 When what is visible to end-users is a deviation from the specified or expected behavior, this is called:
a) an error
b) a fault
c) a failure
d) a defect
e) a mistake
2 Regression testing should be performed:
v) every week
w) after the software has changed
x) as often as possible
y) when the environment has changed
z) when the project manager says
a) v & w are true, x – z are false
b) w, x & y are true, v & z are false
c) w & y are true, v, x & z are false
d) w is true, v, x y and z are false
e) all of the above are true
3 IEEE 829 test plan documentation standard contains all of the
following except:
a) test items
b) test deliverables
c) test tasks
d) test environment
e) test specification
4 Testing should be stopped when:
a) all the planned tests have been run
b) time has run out
c) all faults have been fixed correctly
d) both a) and c)
e) it depends on the risks for the system being tested
5 Order numbers on a stock control system can range between 10000 and 99999 inclusive. Which of the following inputs might be a result of designing tests for only valid equivalence classes and valid boundaries:
a) 1000, 5000, 99999
b) 9999, 50000, 100000
c) 10000, 50000, 99999
d) 10000, 99999
e) 9999, 10000, 50000, 99999, 100000
6 Consider the following statements about early test design:
i. early test design can prevent fault multiplication
ii. faults found during early test design are more expensive to fix
iii. early test design can find faults
iv. early test design can cause changes to the requirements
v. early test design takes more effort
a) i, iii & iv are true. Ii & v are false
b) iii is true, I, ii, iv & v are false
c) iii & iv are true. i, ii & v are false
d) i, iii, iv & v are true, ii us false
e) i & iii are true, ii, iv & v are false
7 Non-functional system testing includes:
a) testing to see where the system does not function properly
b) testing quality attributes of the system including performance and usability
c) testing a system feature using only the software required for that action
d) testing a system feature using only the software required for that function
e) testing for functions that should not exist
8 Which of the following is NOT part of configuration management:
a) status accounting of configuration items
b) auditing conformance to ISO9001
c) identification of test versions
d) record of changes to documentation over time
e) controlled library access
9 Which of the following is the main purpose of the integration
strategy for integration testing in the small?
a) to ensure that all of the small modules are tested adequately
b) to ensure that the system interfaces to other systems and networks
c) to specify which modules to combine when and how many at once
d) to ensure that the integration testing can be performed by a small team
e) to specify how the software should be divided into modules
10 What is the purpose of test completion criteria in a test plan:
a) to know when a specific test has finished its execution
b) to ensure that the test case specification is complete
c) to set the criteria used in generating test inputs
d) to know when test planning is complete
e) to plan when to stop testing
11 Consider the following statements
i. an incident may be closed without being fixed
ii. incidents may not be raised against documentation
iii. the final stage of incident tracking is fixing
iv. the incident record does not include information on test environments
v. incidents should be raised when someone other than the author of the software performs the test
a) ii and v are true, i, iii and iv are false
b) i and v are true, ii, iii and iv are false
c) i, iv and v are true, ii and iii are false
d) i and ii are true, iii, iv and v are false
e) i is true, ii, iii, iv and v are false
12 Given the following code, which is true about the minimum number of test cases required for full statement and branch coverage:
Read P
Read Q
IF P+Q > 100 THEN
Print “Large”
ENDIF
If P > 50 THEN
Print “P Large”
ENDIF
a) 1 test for statement coverage, 3 for branch coverage
b) 1 test for statement coverage, 2 for branch coverage
c) 1 test for statement coverage, 1 for branch coverage
d) 2 tests for statement coverage, 3 for branch coverage
e) 2 tests for statement coverage, 2 for branch coverage
13 Given the following:
Switch PC on
Start “outlook”
IF outlook appears THEN
Send an email
Close outlook
a) 1 test for statement coverage, 1 for branch coverage
b) 1 test for statement coverage, 2 for branch coverage
c) 1 test for statement coverage, 3 for branch coverage
d) 2 tests for statement coverage, 2 for branch coverage
e) 2 tests for statement coverage, 3 for branch coverage
14 Given the following code, which is true:
IF A > B THEN
C = A – B
ELSE
C = A + B
ENDIF
Read D
IF C = D Then
Print “Error”
ENDIF
a) 1 test for statement coverage, 3 for branch coverage
b) 2 tests for statement coverage, 2 for branch coverage
c) 2 tests for statement coverage, 3 for branch coverage
d) 3 tests for statement coverage, 3 for branch coverage
e) 3 tests for statement coverage, 2 for branch coverage
15 Consider the following:
Pick up and read the newspaper
Look at what is on television
If there is a program that you are interested in watching then switch the television on and watch the program
Otherwise
Continue reading the newspaper
If there is a crossword in the newspaper then try and complete the crossword
a) SC = 1 and DC = 1
b) SC = 1 and DC = 2
c) SC = 1 and DC = 3
d) SC = 2 and DC = 2
e) SC = 2 and DC = 3
16 The place to start if you want a (new) test tool is:
a) Attend a tool exhibition
b) Invite a vendor to give a demo
c) Analyse your needs and requirements
d) Find out what your budget would be for the tool
e) Search the internet
17 When a new testing tool is purchased, it should be used first by:
a) A small team to establish the best way to use the tool
b) Everyone who may eventually have some use for the tool
c) The independent testing team
d) The managers to see what projects it should be used in
e) The vendor contractor to write the initial scripts
18 What can static analysis NOT find?
a) The use of a variable before it has been defined
b) Unreachable (“dead”) code
c) Whether the value stored in a variable is correct
d) The re-definition of a variable before it has been used
e) Array bound violations
19 Which of the following is NOT a black box technique:
a) Equivalence partitioning
b) State transition testing
c) LCSAJ
d) Syntax testing
e) Boundary value analysis
20 Beta testing is:
a) Performed by customers at their own site
b) Performed by customers at their software developer’s site
c) Performed by an independent test team
d) Useful to test bespoke software
e) Performed as early as possible in the lifecycle
ANSWERS
1 C
2 C
3 E
4 E
5 C
6 A
7 B
8 B
9 C
10 E
11 B
12 B
13 B
14 B
15 E
16 C
17 B
18 C
19 C
20 A

Tuesday, June 24, 2008

What is Defect Leakage ?

Defect leakage occurs at the customer or end-user side after the application has been delivered. If, after the release of the application to the client, the end user finds defects while using the application, this is called defect leakage. Defect leakage is also known as a bug leak.

Monday, June 16, 2008

Poll Result

Is manual testing important?
YES [7]
NO [1]
CAN'T SAY [0]
Total no of votes: 8

Client Server Testing

Projects are divided into two types of architecture:
* 2 tier applications
* 3 tier applications
CLIENT / SERVER TESTING
This type of testing is usually done for 2-tier applications; we test the front-end and back-end modules. The application launched on the front end has forms and reports with the help of which we can monitor and manipulate data.
E.g. the front-end application may be developed in Visual Basic, VC++, Core Java, C, C++, C#, .NET, etc., and the back end may use MS Access, SQL Server, Oracle, Sybase, MySQL, etc.

The tests performed on these types of applications would be
- UI testing
- Manual testing
- Functionality testing
- Compatibility testing & configuration testing

WEB TESTING
This is done for 3-tier applications (developed for the Internet or an intranet).
For web testing we use a browser (Mozilla, Opera, Netscape, IE, etc.) and a database server. The applications accessible in the browser would be developed in HTML, DHTML, XML, JavaScript, AJAX, etc. Applications for the web server would be developed in Java, ASP, JSP, VBScript, JavaScript, Perl, PHP, etc. (all the manipulation is done on the web server with the help of these programs). The database server would run Oracle, SQL Server, MySQL, etc. (all data is stored in the database on the database server).

The tests performed on these types of applications would be
- User interface testing
- Functionality testing
- Security testing
- Browser compatibility testing
- Load / stress testing
- Interoperability testing/intersystem testing
- Storage and data volume testing

Desktop application:
1. Application runs in single memory (Front end and Back end in one place)
2. Single user only

Client/Server application:
1. Application runs in two or more machines
2. Application is menu-driven
3. Connected mode (connection exists always until logout)
4. Limited number of users
5. Less number of network issues when compared to web app.

Web application:
1. Application runs in two or more machines
2. URL-driven
3. Disconnected mode (stateless)
4. Unlimited number of users
5. Many issues like hardware compatibility, browser compatibility, version compatibility, security issues, performance issues etc.
The key difference between the two kinds of application lies in how resources are accessed. In a client/server application, once a connection is made it stays in a connected state, whereas in web testing the HTTP protocol is stateless; this is where the logic of cookies comes in, which does not exist in client/server applications.

For a client/server application the users are well known, whereas for a web application any user can log in, access the content and use it in their own way. So there are always security and compatibility issues for web applications.

Tuesday, June 3, 2008

Types of Testing

Grey Box Testing: It is the combination of the black box and white box testing.
Red Box Testing: It is nothing but protocol testing / error message testing.
Yellow Box Testing: It is for warning message testing.

Tuesday, May 20, 2008

Poll Results

Does the testing team play an important role in a service-based company?
YES 8
NO 1
CAN'T SAY 0
Total no of votes = 9

The Ten Principles of Good Software Testing

Testing principle 1: Business risk can be reduced by finding defects.
Testing principle 2: Positive and negative testing contribute to risk reduction.
Testing principle 3: Static and execution testing contribute to risk reduction.
Testing principle 4: Automated test tools can contribute to risk reduction.
Testing principle 5: Make the highest risks the first testing priority.
Testing principle 6: Make the most frequent business activities (the 80/20 rule) the second testing priority.
Testing principle 7: Statistical analyses of defect arrival patterns and other defect characteristics are a very effective way to forecast testing completion.
Testing principle 8: Test the system the way customers will use it.
Testing principle 9: Assume the defects are the result of process and not personality.
Testing principle 10: Testing for defects is an investment as well as a cost.

Saturday, May 10, 2008

Soak Testing

Soak Tests (Also Known as Endurance Testing): Soak testing is running a system at high levels of load for prolonged periods of time. A soak test would normally execute several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
Also, it is possible that a system may ‘stop’ working after a certain number of transactions have been processed due to memory leaks or other defects. Soak tests provide an opportunity to identify such defects, whereas load tests and stress tests may not find such problems due to their relatively short duration. A soak test would run for as long as possible, given the limitations of the testing situation. For example, weekends are often an opportune time for a soak test.

Some typical problems identified during soak tests are listed below:
1. Serious memory leaks that would eventually result in a memory crisis.
2. Failure to close connections between the tiers of a multi-tiered system under some circumstances, which could stall some or all modules of the system.
3. Failure to close database cursors under some conditions, which would eventually result in the entire system stalling.
4. Gradual degradation of the response time of some functions as internal data structures become less efficient during a long test.
Apart from monitoring response time, it is also important to measure CPU usage and available memory. If a server process needs to be available for the application to operate, it is often worthwhile to record its memory usage at the start and end of a soak test. It is also important to monitor the internal memory usage of facilities such as Java Virtual Machines, if applicable.
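A small sketch of that kind of measurement, assuming the third-party psutil library is installed and that the process id of the server under test is known (both assumptions; in practice the figures often come from the load tool or the operating system's own monitoring):

import time
import psutil  # third-party: pip install psutil

SERVER_PID = 1234                 # hypothetical pid of the server process under test
proc = psutil.Process(SERVER_PID)

start_rss = proc.memory_info().rss        # resident memory at the start of the soak
samples = []

# Sample CPU and memory periodically while the soak load runs elsewhere,
# e.g. once an hour over a 12-hour overnight soak.
for _ in range(12):
    samples.append((time.time(), proc.cpu_percent(interval=1.0), proc.memory_info().rss))
    time.sleep(3600)

end_rss = proc.memory_info().rss
print("RSS start:", start_rss, "end:", end_rss, "growth:", end_rss - start_rss, "bytes")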
Long Session Soak Testing: When an application is used for long periods of time each day, the above approach should be modified, because the driver of the soak test is not logins and transactions per day, but transactions per active user for each user per day.
This type of situation occurs in internal systems, such as ERP and CRM systems, where users login and stay logged in for many hours, executing a number of business transactions during that time. A soak test for such a system should emulate multiple days of activity in a compacted time-frame rather than just pump multiple days worth of transactions through the system.
Long session soak tests should run with realistic user concurrency, but the focus should be on the number of transactions processed.
Test Duration: The duration of most soak tests is often determined by the available time in the test lab. There are many applications, however, that require extremely long soak tests. Any application that must run uninterrupted for extended periods of time may need a soak test covering all of the activity for a period agreed with the stakeholders, such as a month. Most systems have a regular maintenance window, and the time between such windows is usually a key driver for determining the scope of a soak test.

A classic example of a system that requires extensive soak testing is an air traffic control system. A soak test for such a system may have a multi-week or even multi-month duration.

Thursday, May 8, 2008

API Testing

An API (Application Programming Interface) is a collection of software functions and procedures that can be executed by other software applications. API testing is mostly used for systems that have a collection of APIs to be tested; the system could be system software, application software or a library. API testing differs from other testing types because a GUI is rarely involved. Even though no GUI is involved, we still need to set up the initial environment, invoke the API with the required set of parameters and then analyze the result.
Setting up the initial environment becomes complex precisely because there is no GUI: it is very easy to set up initial conditions through a GUI, but not for an API. This setup can be divided into test environment setup and application setup. Things like configuring the database or starting the server belong to test environment setup, whereas creating an object before calling a non-static member of a class falls under application-specific setup. The initial conditions for API testing also involve creating the conditions under which the API will be called: an API can be called directly, with the help of some event, or in response to an exception. The output of an API can be data or a status, or it can simply wait for some other call to complete in an asynchronous environment. Most API test cases will be based on the output, i.e. on whether the API does one of the following (a sketch of the first two categories follows the list):
* Returns a value based on an input condition: these are relatively simple to test, as the input can be defined and the result validated against the expected return value. For example, it is easy to write test cases for an int add(int a, int b) kind of API: you can pass different combinations of a and b and validate them against known results.
* Does not return anything: for cases like these you will need some mechanism to check the behavior of the API on the system. For example, to test a delete(List Element) function you would validate the size of the list and the absence of the element in the list.
* Triggers some other API/event/interrupt: if the API triggers an event or raises an interrupt, you need to listen for those events and interrupts. Your test suite should call the appropriate API, and the asserts should be on the interrupts and listeners.
* Updates a data structure: this category is similar to APIs that do not return anything. Updating a data structure will have some effect on the system, and that should be validated; if you have other means of accessing the data structure, use them to validate that it was updated.
* Modifies certain resources: if the API call modifies some resources, for example updating a database, changing the registry or killing a process, it should be validated by accessing those resources.
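A sketch of the first two categories written as plain unit tests; the add and delete functions here are stand-ins for whatever API is actually under test:

import unittest

# Hypothetical APIs under test.
def add(a, b):
    return a + b

def delete(lst, element):
    # Removes element from lst and returns nothing.
    lst.remove(element)

class ApiTests(unittest.TestCase):
    def test_return_value_based_on_input(self):
        # Category 1: validate the return value against known results.
        self.assertEqual(add(2, 3), 5)
        self.assertEqual(add(-1, 1), 0)

    def test_no_return_value_check_side_effects(self):
        # Category 2: nothing is returned, so validate the effect on the
        # system - the size of the list and the absence of the element.
        data = ["a", "b", "c"]
        delete(data, "b")
        self.assertEqual(len(data), 2)
        self.assertNotIn("b", data)

if __name__ == "__main__":
    unittest.main()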
The main challenges of API testing can be divided into the following categories:
* Parameter Selection
* Parameter combination
* Call sequencing

Monday, April 21, 2008

Test cases for elevator

Following are the test cases for an elevator:

1. Check the maximum weight it can carry at a time [boundary value analysis].
2. Check whether the elevator is capable of moving up and down.
3. It waits until the "CLOSE" button is pressed.
4. It stops at the desired floor for which the button is pressed.
5. Does it indicate when the weight threshold limit is crossed?
6. It moves up when called from above and down when called from below.
7. Does it automatically stop at each floor?
8. If anyone enters between the doors while they are closing, the door should open.
9. Is the elevator's operation smooth?
10. Check whether the display system gives correct information according to the floor.
11. Does the "OPEN DOOR" button work while the elevator is moving up or down?
12. Check the default time for which the door stays open.

Thursday, April 10, 2008

Poll Result

Is manual testing equally important as automated testing?
Yes : 88% [30]
No : 11% [4]
Can't say : 1%
Total No of votes : 34

Tuesday, March 4, 2008

Types of Testing

The different "Types of Testing" are listed below.

* Acceptance Testing
* Ad hoc Testing
o Buddy Testing
o Paired Testing
o Exploratory Testing
o Iterative / Spiral model Testing
o Agile / Extreme Testing
* Aesthetics Testing
* Alpha Testing
* Automated Testing
* Beta Testing
* Black Box Testing
* Boundary Testing
* Comparison Testing
* Compatibility Testing
* Conformance Testing
* Consistency Testing (Heuristic)
* Deployment Testing
* Documentation Testing
* Domain Testing
* Download Testing
* EC Analysis Testing
* End-to-End Testing
* Fault-Injection Testing
* Functional Testing
* Fuzz Testing
* Gray Box Testing
* Guerilla Testing
* Install & Configuration Testing
* Integration Testing
o System Integration
o Top-down Integration
o Bottom-up Integration
o Bi-directional Integration
* Interface Testing
* Internationalization Testing
* Interoperability Testing
* Lifecycle Testing
* Load Testing
* Localization Testing
* Logic Testing
* Manual Testing
* Menu Walk-through Testing
* Performance Testing
* Pilot Testing
* Positive & Negative Testing
* Protocol Testing
* Recovery Testing
* Regression Testing
* Reliability Testing
* Requirements Testing
* Risk-based Testing
* Sanity Testing
* Scalability Testing
* Scenario Testing
* Scripted Testing
* Security Testing
* SME Testing
* Smoke Testing
* Soak Testing
* Specification Testing
* Standards / Compliance Testing
o 508 accessibility guidelines
o SOX
o FDA / Patriot Act
o Other standards requiring compliance
* State Testing
* Stress Testing
* System Testing
* Testability Testing
* Unit Testing
* Upgrade & Migration Testing
* Usability Testing
* White box Testing
o Static Testing Techniques
+ Desk checking
+ Code walk-through
+ Code reviews and inspection
o Structural Testing Techniques
+ Unit Testing
+ Code Coverage Testing
+ Statement
+ Path
+ Function
+ Condition
+ Complexity Testing / Cyclomatic complexity
+ Mutation Testing

Wednesday, February 13, 2008

Database Testing Interview Questions

What is Database testing?
How to Test Database Procedures and Triggers?
What is meant by internal quality auditing?
What are the different stages involved in Database
How do you test whether a database is updated when?
How to check a trigger is fired or not, while
What SQL statements have you used in Database Testing?
Is an "A fast database retrieval rate" a testable
How can we write test cases from Requirements..?
What is data driven test?
How do you test Oracle database in Load Runner
What is the way of writing test cases for database testing?
What is the difference between functions & procedures?
How do you test a database manually?
What steps does a tester take in testing Stored
In Database testing What will you keep in
What is database testing and what we test in it ?
How to verify build instructions. Mention steps?
What do we normally check for in database testing?
How to do negative testing for database testing?
How to test a SQL Query in Win runner?
How do you test oracle e business suit manually?
How to use sql queries in Win runner/QTP?
How to test a DTS package created for data insert?
Is there any Freeware Tool to do Database Testing?
How to test data loading in Data base testing?
How to mount the Database?

Tuesday, January 22, 2008

List of link checking tools


Site Analysis - Hosted service from Web metrics, used to test and validate critical website components, such as internal and external links, domain names, DNS servers and SSL certificates. Runs as often as every hour, or as infrequently as once a week. Ideal for dynamic sites requiring frequent link checking.

HiSoftware Link Validation Utility - Link validation tool; available as part of the AccVerify Product Line.

ChangeAgent - Link checking and repair tool from Expandable Language. Identifies orphan files and broken links when browsing files; employs a simple, familiar interface for managing files; previews files when fixing broken links and before orphan removal; updates links to moved and renamed files; fixes broken links with an easy, 3-click process; provides multiple-level undo/redo for all operations; replaces links but does not reformat or restructure HTML code. For Windows.

Link Checker Pro - Link check tool from KyoSoft; can also produce a graphical site map of entire web site. Handles HTTP, HTTPS, and FTP protocols; several report formats available. For Windows platforms.

Web Link Validator - Link checker from REL Software checks links for accuracy and availability, finds broken links or paths and links with syntactic errors. Export to text, HTML, CSV, RTF, Excel. Freeware 'REL Link Checker Lite' version available for small sites. For Windows.

Site Audit - Low-cost on-the-web link-checking service from Blossom Software.

Xenu's Link Sleuth - Freeware link checker by Tilman Hausherr; supports SSL websites; partial testing of ftp and gopher sites; detects and reports redirected URL; Site Map; for Windows.

Linkalarm - Low cost on-the-web link checker from Link Alarm Inc.; free trial period available. Automatically-scheduled reporting by e-mail.

Alert Linkrunner - Link check tool from Viable Software Alternatives; evaluation version available. For Windows.

InfoLink - Link checker program from BiggByte Software; can be automatically scheduled; includes FTP link checking; multiple page list and site list capabilities; customizable reports; changed-link checking; results can be exported to database. For Windows. Discontinued, but old versions still available as freeware.

LinkScan - Electronic Software Publishing Co.'s link checker/site mapping tool; capabilities include automated retesting of problem links, randomized order checking; can check for bad links due to specified problems such as server-not-found, unauthorized-access, doc-not-found, relocations, timeouts. Includes capabilities for central management of large multiple intranet/internet sites. Results stored in database, allowing for customizable queries and reports. Validates hyperlinks for all major protocols; HTML syntax error checking. For all UNIX flavors, Windows, Mac.

CyberSpyder Link Test - Shareware link checker by Aman Software; capabilities include specified URL exclusions, ID/Password entries, test resumption at interruption point, page size analysis, 'what's new' reporting.