Wednesday, December 26, 2007

Testing Question Part- 3

31. Measures designed to minimize the probability of modification, destruction, or inability to retrieve software or data are
a. Preventive security
b. Corrective security
c. Protective security
d. None of the above
Ans: a

32. In the TCS scenario a Project Leader is a Project Manager
a. True
b. False
Ans:

33. Quality assurance is a function responsible for
a. Controlling quality
b. Managing quality
c. Inspections
d. Removal of defects
Ans: b

34. The word management in quality assurance describes many different functions, encompassing
a. Policy management
b. Human resources management, safety control
c. Component control and management of other resources and daily schedules.
d. All of the above
e. None of the above
Ans: e

35. Malcolm Baldrige National Quality Award is an annual award to recognize U.S. companies that excel in
a. Quality achievement
b. Quality management
c. Both of the above
Ans: b

36. With a defined process in SEI’s process model, an organization achieves the foundation for major and continuing process improvement.
a. True
b. False
Ans: a

37. Statistical process control helps to identify the __________ of process problems which are causing defects.
a. Root cause
b. Nature
c. Person/persons involved
d. All of the above
e. None of the above
Ans: b

38. Statistical methods are used to differentiate random variation from
a. Standards
b. Assignable variation
c. Control limits
d. Specification limits
Ans: c

39. Random causes of process problems can be ___________ eliminated.
a. Sometimes
b. Never
c. Rarely
d. Always
Ans: d
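The distinction behind questions 37-39 between random (common-cause) and assignable (special-cause) variation is easiest to see on a control chart. Below is a minimal sketch; the defect counts are made-up illustrative data, and the three-sigma limits are the conventional Shewhart choice:

```python
# Minimal Shewhart-style control chart check (illustrative sketch).
# Points outside mean +/- 3 sigma of a stable baseline suggest
# assignable (special-cause) variation; points inside are treated
# as random (common-cause) noise.

def control_limits(baseline):
    n = len(baseline)
    mean = sum(baseline) / n
    sigma = (sum((x - mean) ** 2 for x in baseline) / n) ** 0.5
    return mean - 3 * sigma, mean + 3 * sigma

def assignable(point, limits):
    lcl, ucl = limits
    return point < lcl or point > ucl

# Defects found per build during a stable (in-control) period:
baseline = [4, 5, 3, 4, 6, 5, 4, 5, 5, 4]
limits = control_limits(baseline)
print(assignable(30, limits))  # True  -> investigate for an assignable cause
print(assignable(5, limits))   # False -> ordinary random variation
```

Points beyond the limits computed from a stable baseline are candidates for root-cause analysis; points inside them are treated as inherent process noise that no single fix will remove.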

40. Complexity measurements are quantitative values accumulated by a pre-determined method for measuring complexity of a
a. Software engineering process
b. Software product
c. Data base
d. Project team
Ans: b

41. Function points provide an objective measure of the application system __________ that can be used to compare different kinds of application systems.
a. Size
b. Complexity
c. Performance
d. Operation ease
Ans: a

42. Which of the following is not relevant in quantifying the amount of information processing function?
a. External inquiry
b. Software platform
c. External output
d. Logical internal file
e. External input
f. External interface file
Ans:

43. Function point analysis requires information on hardware and software for the application system.
a. True
b. False
Ans: b
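A quick way to see why questions 41-43 come out the way they do is to compute an unadjusted function point count from the five function types listed in question 42. This sketch uses the standard IFPUG average-complexity weights; the component counts for the sample application are made up for illustration:

```python
# Unadjusted function point count from the five function types
# (illustrative sketch; weights are the standard IFPUG
# "average complexity" values -- a full count would rate each
# function's complexity individually).

AVERAGE_WEIGHTS = {
    "external_input": 4,
    "external_output": 5,
    "external_inquiry": 4,
    "internal_logical_file": 10,
    "external_interface_file": 7,
}

def unadjusted_fp(counts):
    return sum(AVERAGE_WEIGHTS[t] * n for t, n in counts.items())

# Hypothetical application inventory:
app = {
    "external_input": 10,
    "external_output": 6,
    "external_inquiry": 4,
    "internal_logical_file": 5,
    "external_interface_file": 2,
}
print(unadjusted_fp(app))  # 10*4 + 6*5 + 4*4 + 5*10 + 2*7 = 150
```

Nothing in the count depends on hardware or the software platform, which is consistent with question 43 being false and suggests why "software platform" does not belong among the function types in question 42.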

Thursday, December 6, 2007

Testing Question Part-2

16. The activity which includes confirming understanding, brainstorming and testing ideas is a
a. Code walkthrough
b. Inspection
c. Review
d. Structured walkthrough
Ans: c

17. The following can be considered to measure quality:
a. Customer satisfaction
b. Defects
c. Rework
d. All of the above
e. None of the above
Ans: d

18. The most common reason for the presence of a large number of bugs in a software product is:
a. Incompetence of the developer
b. Incompetence of the tester
c. Bad requirements
d. Wrong use of tools and techniques
Ans: d

19. The following is (are) not part of data center operations:
1. Capacity planning
2. I/O control
3. Scheduling
4. All of the above
5. None of the above
Ans:5

20. The process of securing future processing capability with proper data for future contingencies by duplicating systems, procedures, and data is
a. providing a Help Desk
b. Database Design
c. Artificial Intelligence
d. System Backup
e. All of the above
f. None of the above
Ans:

21. The objective of TQM is
a. To improve processes
b. To improve profitability
c. All of the above
d. None of the above
Ans: a

22. System Test Plan will not include
a. Approach
b. Pass/Fail criteria
c. Risks
d. Suspension and Resumption criteria
e. None of the above
Ans: c

23. The two types of checklists are _______________ and ________________________.

24. The following is NOT a category in MBNQA criteria:
a. Leadership
b. HR Focus
c. Quality Management
d. Information and Analysis
e. None of the above
Ans: c

25. The following are types of listening:
a. Descriptive listening
b. Compensation listening
c. Apprehensive listening
d. All of the above
e. None of the above
Ans: c

26. Complaints must be resolved within
a. An hour
b. Four minutes
c. A day
d. Four hours
e. None of the above
Ans: b

27. Function Point is not a measure of
a. Effort
b. Complexity
c. Usability
d. All of the above
e. Size
f. None of the above
Ans:

28. Quality Assurance personnel must not be involved in changing work products.
a. True
b. False
Ans:

29. The purpose of cost-of-quality computations is to show how much is being spent for the quality control and quality assurance program.
a. True
b. False
Ans: b

30. The method by which release from the requirements of a specific standard may be obtained for a specific situation is a
a. Tailoring
b. Customization
c. Force Field Analysis
d. Waiver
e. None of the above
Ans: d

Wednesday, December 5, 2007

Testing Question Part -1

1. The statement of an organization's commitment to quality is a
a. Policy
b. Vision
c. Mission
d. Principle
e. Goal
Ans: a

2. Which of the following is not a defect metric?
a. Location
b. Cause
c. Time to fix
d. Classification
e. Coverage
f. All of the above
Ans: f

3. Quality improvement programs may require the product itself to be changed.
a. True
b. False
Ans: b

4. The basis upon which adherence to policies is measured is
a. Standard
b. Requirement
c. Expected result
d. Value
e. All of the above
f. None of the above
Ans: a

5. Which of the following does not form a part of a workbench?
a. Standards
b. Quality attributes
c. Quality control
d. Procedures
e. Rework
Ans: b

6. The focus on the product is highest during
a. a walkthrough
b. a checkpoint review
c. an inspection
Ans: b

7. During an inspection, inspectors normally make suggestions on correcting the defects found.
a. True
b. False
Ans: b

8. There are _______ numbers of function types.
a. 2
b. 3
c. 4
d. 5
e. 6
Ans: c

9. The Quality manager will find it difficult to effectively implement the QAI Quality Improvement Process, unless his organization is willing to accept the Quality principles as
a. The organization’s policy
b. A challenge
c. The corporate vision
d. The organization’s goal
e. A management philosophy
f. All of the above
Ans: f

10. Baselines measure the _____________________ change.
a. Situation prior to
b. Expectation of benefits of
c. Effects of
d. Desirability of
e. None of the above
Ans: a

11. Modifying existing standards to better match the need of a project or environment is
a. Definition
b. Standard for a standard
c. Tailoring
d. Customization
e. None of the above
Ans: c

12. Malcolm Baldrige National Quality Award has the following eligibility categories/dimensions
a. Approach
b. Deployment
c. Results
d. All of the above
e. Manufacturing, Service and small businesses
f. None of the above
Ans: e

13. The term “benchmarking” means
a. Comparing with past data from your organization
b. Comparing with the results of a market survey
c. Comparing with the results of a customer survey
d. None of the above
Ans: d

14. An example of deployment of a quality approach is:
a. The degree to which the approach embodies effective evaluation cycles
b. The appropriate and effective application to all product and service characteristics
c. The effectiveness of the use of tools, techniques, and methods
d. The contribution of outcomes and effects to quality improvement
e. The significance of improvement to the company’s business
Ans: c

15. The concept of continuous improvement as applied to quality means:
a. Employees will continue to get better
b. Processes will be improved by a lot of small improvements
c. Processes will be improved through a few large improvements
d. Improved technology will be added to the process, such as acquiring CASE tools
e. The functionality of the products will be enhanced
Ans: b

Tuesday, November 27, 2007

What is a test plan?

A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:

* Title
* Identification of software including version/release numbers
* Revision history of document including authors, dates, approvals
* Table of Contents
* Purpose of document, intended audience
* Objective of testing effort
* Software product overview
* Relevant related document list, such as requirements, design documents, other test plans, etc.
* Relevant standards or legal requirements
* Traceability requirements
* Relevant naming conventions and identifier conventions
* Overall software project organization and personnel/contact-info/responsibilities
* Test organization and personnel/contact-info/responsibilities
* Assumptions and dependencies
* Project risk analysis
* Testing priorities and focus
* Scope and limitations of testing
* Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
* Outline of data input equivalence classes, boundary value analysis, error classes
* Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
* Test environment validity analysis - differences between the test and production systems and their impact on test validity.
* Test environment setup and configuration issues
* Software migration processes
* Software CM processes
* Test data setup requirements
* Database setup requirements
* Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
* Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
* Test automation - justification and overview
* Test tools to be used, including versions, patches, etc.
* Test script/test code maintenance processes and version control
* Problem tracking and resolution - tools and processes
* Project test metrics to be used
* Reporting requirements and testing deliverables
* Software entrance and exit criteria
* Initial sanity testing period and criteria
* Test suspension and restart criteria
* Personnel allocation
* Personnel pre-training needs
* Test site/location
* Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
* Relevant proprietary, classified, security, and licensing issues.
* Open issues
* Appendix - glossary, acronyms, etc
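Several of the bullets above (data input equivalence classes, boundary value analysis) are mechanical enough to sketch in code. The helper below is illustrative only; the 18-65 "age" range is a hypothetical example, not taken from any real specification:

```python
# Derive boundary-value test inputs from an equivalence-class range
# (illustrative sketch; the 18-65 valid range is a made-up example).

def boundary_values(lo, hi):
    """Classic boundary value analysis for an integer range [lo, hi]:
    each boundary, plus the value just outside it."""
    return [lo - 1, lo, hi, hi + 1]

def equivalence_classes(lo, hi):
    """One valid class and the two invalid classes around it."""
    return {
        "invalid_low": f"< {lo}",
        "valid": f"{lo}..{hi}",
        "invalid_high": f"> {hi}",
    }

print(boundary_values(18, 65))       # [17, 18, 65, 66]
print(equivalence_classes(18, 65))
```

Listing the classes and boundaries per input field in the test plan makes the "outline of data input equivalence classes" bullet concrete and reviewable.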

Wednesday, August 15, 2007

CMM

SEI = 'Software Engineering Institute' at Carnegie Mellon University; initiated by the U.S. Defense Department to help improve software development processes.

CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.

Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.

Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.

Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.

Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.

Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.


Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was Software Quality Assurance.

Friday, July 27, 2007

Practice Paper - 3

Question 21:
A test plan is prepared for management by the project leader, explaining all project control variances relative to the testing effort. It also summarizes test case logs and coverage statistics for key programs.
A. True
B. False

Question 22:
Memory leak checker tools are used to create runtime performance profiles at the module, library and function level.
A. True
B. False

Question 23:
Configuration management tools support activities related to managing the environment, including configuration management, change control, library control, documentation control.
A. True
B. False

Question 24:
The key process areas at CMM Level 4 - Managed - focus on establishing a quantitative understanding of both the software process and the software work products being built.
A. True
B. False

Question 25:
Non-functional system testing is a testing process that tests system requirements that do not relate to functionality.
A. True
B. False

Question 26:
A set of behavioral and performance requirements which, in aggregate, determine the functional properties of a software system.
A. Functional requirement
B. Functional specifications
C. Functional test cases

Question 27:
Business process-based testing is used in system testing and acceptance testing.
A. True
B. False

Practice Paper - 2

Question 11:
Ad Hoc testing is a formal and structured testing method.
A. True
B. False

Question 12:
The programs send bad data to devices, ignore error codes coming back, and try to use devices that are busy or aren't there. This is a:
A. Calculation error
B. Functional error
C. Hardware error
D. System error
E. User Interface error

Question 13:
Errors that are cosmetic in nature are usually assigned a ______ severity level.
A. Fatal (Severity)
B. Low (Severity)
C. Serious (Severity)

Question 14:
If a system is not functioning as documented and the data is not corrupted, what priority and measure are assigned?
A. Priority 1: Critical
B. Priority 2: High
C. Priority 3: Medium
D. Priority 4: Low

Question 15:
A testing process that is conducted to test new features after regression testing of previous features.
A. Operational testing
B. Progressive testing
C. Recovery testing
D. Regression testing

Question 16:
Which of the following are major test documents? (choose the best answer)
a) Test plan
b) Test case
c) Test design
d) Test procedure
e) Defect report
A. a and b
B. a, c, and d
C. a, c, d, and e
D. all of the above

Question 17:
The requirements document identifies all system components and requirements to be tested, as well as detailed approaches to be followed, so that the testing of components and requirements is effective.
A. True
B. False

Question 18:
The test case log is used to keep track of the status of each test case.
A. True
B. False

Question 19:
What test document contains all the information about a specific test case, including requirements and the modules to be tested?
A. Test plan
B. Test case specification
C. Test design specification
D. Test procedure
E. Defect report

Question 20:
A test case specification document is used to keep track of each test run.
A. True
B. False

Tuesday, July 24, 2007

Practice Paper -1

Question 1:
The best time to influence the quality of a system design is in the __________.
A. Planning Phase
B. Analysis Phase
C. Design Phase
D. Testing Phase

Question 2:
IEEE stands for:
A. Information Engineering Endeavoring to Excel
B. Institute of Electrical and Electronics Engineers
C. Institute of Education for E-commerce Entrepreneurs
D. Individual Excellence in Engineering Enterprises

Question 3:
Which type of document might be reviewed at a Review/Inspection session?
A. Employee performance review
B. Test Plan
C. Project Status Report
D. Defect Tracking Form

Question 4:
Quality control and quality assurance are different names for the same activity.
A. True
B. False

Question 5:
Cause and effect diagrams can be used to view attempts to solve quality issues that have not worked in the past.
A. True
B. False

Question 6:
Which of the following is not a job responsibility of a software tester?
A. Identifying test cases
B. Preparing test data
C. Executing tests
D. Writing the functional specifications

Question 7:
The test strategy that is informal and unstructured is:
A. Equivalence partitioning
B. Validation strategy
C. White box testing
D. Ad hoc testing

Question 8:
The test strategy that involves understanding the program logic is:
A. Equivalence partitioning
B. White box testing
C. Black box testing
D. Boundary strategy

Question 9:
Data defects can occur when accessing the program's data log files.
A. True
B. False

Question 10:
Once the requirement document is approved, the Tester can begin creating a Requirements Matrix to track the requirements throughout the SDLC.
A. True
B. False

Monday, July 23, 2007

ISTQB

ISTQB Syllabus

Download area

Archive area

Monday, July 16, 2007

BugTracker.NET

What is BugTracker.NET?

BugTracker.NET is a free, open-source, web-based bug or customer support issue tracker written using ASP.NET, C#, and Microsoft SQL Server (or its free cousins, SQL Server Express and MSDE). It is in daily use by thousands of development and support teams around the world.

BugTracker.NET is easy to install and learn. A fresh installation is deliberately simple, so you can start using it right away; as your needs grow more complex, its configuration can be adapted to handle them.

Feature Highlights

For detailed info about features, see the BugTracker.NET README.html file. Here are some feature highlights:

  • Suitable for tracking customer support tickets as well as software bugs.
  • Sending and receiving emails is integrated with the tracker, so that the email thread about a bug is tracked WITH the bug.
  • Allows incoming emails to be recorded as bugs. So, for example, an email from your customer could automatically be turned into a bug in the tracker.
  • Allows you to attach files and screenshots to bugs. There is even a custom screen capture utility that lets you take a screenshot, annotate it, and post it as a bug with just a few clicks.
  • Add your own custom fields.
  • Custom bug lists, filtered and sorted the way you want, with the columns that you want.
  • You can display bugs of a certain priority and/or status in a different color, so that the most important items grab your attention.
  • Define your own statuses and workflow, or use the simple one it installs with.
  • Configure different user roles to see different lists of bugs. For example, a developer might see a list of open bugs. A QA analyst might want to see a list of bugs ready for testing.
  • Search for bugs using flexible criteria. Save searches as SQL queries that you can run or modify later.
  • Subscribe to email notifications that tell you when a bug has been added or changed.
  • Create hyperlinks between related bugs, so that you can jump from one to the other directly.
  • Includes some starter bug statistic reports with pie charts and bar charts. Easy to create your own reports if you know SQL.
  • Handles Unicode (Chinese characters, etc...)
  • Set permissions controlling who can view, report, edit bugs by project.
  • For more documentation

    Wednesday, July 11, 2007

    Bugzilla 3.0 Released (New Version)

    Bugzilla is an open-source bug-tracking tool. Many companies use this open-source tool for managing the software development process.

    New Features In Bugzilla 3.0

    Custom Fields

    Bugzilla now includes very basic support for custom fields.
    Users in the admin group can add plain-text or drop-down custom fields. You can edit the values available for drop-down fields using the "Field Values" control panel. Don't add too many custom fields! They can make Bugzilla very difficult to use. Try your best to get along with the default fields, and only if you find you can't live without custom fields after a few weeks of using Bugzilla should you start adding them.

    mod_perl Support

    Bugzilla 3.0 supports mod_perl, which allows for greatly enhanced page-load performance. mod_perl trades memory usage for performance, allowing near-instantaneous page loads but using much more memory. If you want to enable mod_perl for your Bugzilla, we recommend a minimum of 1.5 GB of RAM, and for a site with heavy traffic, 4 GB to 8 GB. If performance isn't that critical on your installation, you don't have the memory, or you are running some web server other than Apache, Bugzilla still runs perfectly as a normal CGI application.

    Shared Saved Searches

    Users can now choose to "share" their saved searches with a certain group. That group will then be able to "subscribe" to those searches, and have them appear in their footer.
    If the sharer can "bless" the group he is sharing to (that is, if he can add users to that group), he is considered a manager of that group, and his queries show up automatically in that group's footer (although members can unsubscribe from any particular search if they want).
    In order to share queries, a user must also be a member of the group specified in the querysharegroup parameter.
    Users can control their shared and subscribed queries from the "Preferences" screen.

    Attachments and Flags on New Bugs

    You can now add an attachment while you are filing a new bug.
    You can also set flags on the bug and on attachments, while filing a new bug.

    For more features

    To download Bugzilla 3.0 Released (Updated version)


    Saturday, June 30, 2007

    IEEE Standard for software testing - 4

    1465-1998, Standard - Adoption of International Standard ISO/IEC 12119:1994(E);

    1471-2000, Recommended Practice for Architectural Description of Software-Intensive Systems;

    1490-1998, Guide - Adoption of PMI Standard - A Guide to the Project Management Body of Knowledge;

    1517-1999, IEEE Standard for Information Technology-Software Life Cycle Processes-Reuse;

    1540-2001, Standard for Software Life Cycle Processes- Risk Management

    2001-2002, Recommended Practice for Internet Practices - Web Page Engineering;

    14143.1-2000, Adoption of ISO/IEC 14143-1:1998, Information Technology - Software Measurement


    IEEE Standard for software testing - 3

    1220-1998, Standard for the Application and Management of the Systems Engineering;

    1228-1994, Standard for Software Safety Plans;

    1233-1998, Guide for Developing System Requirements Specifications;

    1320.1-1998, Standard for Functional Modeling Language-Syntax and Semantics for IDEF0;

    1320.2-1998, Standard for Conceptual Modeling Language Syntax and Semantics;

    1362-1998, Guide for Information Technology-System Definition-Concept of Operations;

    1420.1-1995, Standard for Information Technology-Software Reuse-Data Model for Reuse;

    1420.1a-1996, Supplement to Standard for Information Technology-Software Reuse-Data;

    1420.1b-1999, IEEE Trial-Use Supplement to Standard for Information;

    1462-1998, Standard - Adoption of International Standard ISO/IEC 14102:1995



    IEEE Standard for software testing - 2

    1028-1997, Standard for Software Reviews;

    1044-1993, Standard Classification for Software Anomalies;

    1045-1992, Standard for Software Productivity Metrics;

    1058-1998, Standard for Software Project Management Plans;

    1061-1998, Standard for a Software Quality Metrics Methodology;

    1062-1998, Recommended Practice for Software Acquisition;

    1063-2001, Standard for Software User Documentation;

    1074-1997, Standard for Developing Software Life Cycle Processes;

    1175.1-2002, Guide for CASE Tool Interconnections - Classification and Description;

    1219-1998, Standard for Software Maintenance;




    IEEE Standard for software testing - 1

    610.12-1990, Standard Glossary of Software Engineering Terminology

    730-2002, Standard for Software Quality Assurance Plans

    828-1998, Standard for Software Configuration Management Plans

    829-1998, Standard for Software Test Documentation

    830-1998, Recommended Practice for Software Requirements Specifications

    982.1-1988, Standard Dictionary of Measures to Produce Reliable Software

    1008-1987 (R1993), Standard for Software Unit Testing

    1012-1998, Standard for Software Verification and Validation

    1012a-1998, Supplement to Standard for Software Verification and Validation

    1016-1998, Recommended Practice for Software Design Descriptions





    Saturday, June 9, 2007

    Software Testing Practice Exam -4

    31 Which one of the following describes the major benefit of verification early in the life cycle?
    a)It allows the identification of changes in user requirements.
    b)It facilitates timely set up of the test environment.
    c)It reduces defect multiplication.
    d)It allows testers to become involved early in the project.

    32 Integration testing in the small:
    a)tests the individual components that have been developed.
    b)tests interactions between modules or subsystems.
    c)only uses components that form part of the live system.
    d)tests interfaces to other systems.

    33 Static analysis is best described as:
    a)the analysis of batch programs.
    b)the reviewing of test plans.
    c)the analysis of program code.
    d)the use of black box testing.

    34 Alpha testing is:
    a)post-release testing by end user representatives at the developer’s site.
    b)the first testing that is performed.
    c)pre-release testing by end user representatives at the developer’s site.
    d)pre-release testing by end user representatives at their sites.

    35 A failure is:
    a)found in the software; the result of an error.
    b)departure from specified behaviour.
    c)an incorrect step, process or data definition in a computer program.
    d)a human action that produces an incorrect result.

    36 In a system designed to work out the tax to be paid:
    An employee has £4000 of salary tax free. The next £1500 is taxed at 10%
    The next £28000 is taxed at 22%
    Any further amount is taxed at 40%
    Which of these groups of numbers would fall into the same equivalence class?
    a)£4800; £14000; £28000
    b)£5200; £5500; £28000
    c)£28001; £32000; £35000
    d)£5800; £28000; £32000
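Question 36's tax bands can be coded directly to see which option stays inside a single band (a sketch; the band boundaries at £4000, £5500, and £33500 follow from the rates given in the question text):

```python
# Equivalence classes for the tax question: the band an amount falls
# into is what defines its class.

def tax_band(salary):
    if salary <= 4000:
        return "tax free"
    if salary <= 5500:       # next 1500 taxed at 10%
        return "10%"
    if salary <= 33500:      # next 28000 taxed at 22%
        return "22%"
    return "40%"

for group in ([4800, 14000, 28000],   # option a
              [5200, 5500, 28000],    # option b
              [28001, 32000, 35000],  # option c
              [5800, 28000, 32000]):  # option d
    print(group, [tax_band(s) for s in group])
# Only option (d) falls entirely within one band (22%), so its
# members belong to the same equivalence class.
```

Any value inside a band exercises the same rate logic, which is exactly what "same equivalence class" means here.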

    37 The most important thing about early test design is that it:
    a)makes test preparation easier.
    b)means inspections are not required.
    c)can prevent fault multiplication.
    d)will find all faults.

    38 Which of the following statements about reviews is true?
    a)Reviews cannot be performed on user requirements specifications.
    b)Reviews are the least effective way of testing code.
    c)Reviews are unlikely to find faults in test plans.
    d)Reviews should be performed on specifications, code, and test plans.

    39 Test cases are designed during:
    a)test recording.
    b)test planning.
    c)test configuration.
    d)test specification.

    40 A configuration management system would NOT normally provide:
    a)linkage of customer requirements to version numbers.
    b)facilities to compare test results with expected results.
    c)the precise differences in versions of software component source code.
    d)restricted access to the source code library.


    Answers: 31-C, 32-B, 33-C, 34-C, 35-B, 36-D, 37-C, 38-D, 39-D, 40-B


    For software testing related questions:
    http://amaging-question.blogspot.com

    Software Testing Practice Exam - 3


    21 Which of the following should NOT normally be an objective for a test?
    a)To find faults in the software.
    b)To assess whether the software is ready for release.
    c)To demonstrate that the software doesn’t work.
    d)To prove that the software is correct.

    22 Which of the following is a form of functional testing?
    a)Boundary value analysis
    b)Usability testing
    c)Performance testing
    d)Security testing

    23 Which of the following would NOT normally form part of a test plan?
    a)Features to be tested
    b)Incident reports
    c)Risks
    d)Schedule

    24 Which of these activities provides the biggest potential cost saving from the use of CAST?
    a)Test management
    b)Test design
    c)Test execution
    d)Test planning

    25 Which of the following is NOT a white box technique?
    a)Statement testing
    b)Path testing
    c)Data flow testing
    d)State transition testing

    26 Data flow analysis studies:
    a)possible communications bottlenecks in a program.
    b)the rate of change of data values as a program executes.
    c)the use of data on paths through the code.
    d)the intrinsic complexity of the code.

    27 In a system designed to work out the tax to be paid:
    An employee has £4000 of salary tax free. The next £1500 is taxed at 10%
    The next £28000 is taxed at 22%
    Any further amount is taxed at 40%
    To the nearest whole pound, which of these is a valid Boundary Value Analysis test case?
    a)£1500
    b)£32001
    c)£33501
    d)£28000

    28 An important benefit of code inspections is that they:
    a)enable the code to be tested before the execution environment is ready.
    b)can be performed by the person who wrote the code.
    c)can be performed by inexperienced staff.
    d)are cheap to perform.

    29 Which of the following is the best source of Expected Outcomes for User Acceptance Test scripts?
    a)Actual results
    b)Program specification
    c)User requirements
    d)System specification

    30 What is the main difference between a walkthrough and an inspection?
    a)An inspection is led by the author, whilst a walkthrough is led by a trained moderator.
    b)An inspection has a trained leader, whilst a walkthrough has no leader.
    c)Authors are not present during inspections, whilst they are during walkthroughs.
    d)A walkthrough is led by the author, whilst an inspection is led by a trained moderator.


    Answers: 21-D, 22-A, 23-B, 24-C, 25-D, 26-C, 27-C, 28-A, 29-C, 30-D


    Software Testing Practice Exam - 2

    11 Which of the following is false?
    a)Incidents should always be fixed.
    b)An incident occurs when expected and actual results differ.
    c)Incidents can be analysed to assist in test process improvement.
    d)An incident can be raised against documentation.

    12 Enough testing has been performed when:
    a)time runs out.
    b)the required level of confidence has been achieved.
    c)no more faults are found.
    d)the users won’t find any serious faults.

    13 Which of the following is NOT true of incidents?
    a)Incident resolution is the responsibility of the author of the software under test.
    b)Incidents may be raised against user requirements.
    c)Incidents require investigation and/or correction.
    d)Incidents are raised when expected and actual results differ.

    14 Which of the following is not described in a unit test standard?
    a)syntax testing
    b)equivalence partitioning
    c)stress testing
    d)modified condition/decision coverage

    15 Which of the following is false?
    a)In a system two different failures may have different severities.
    b)A system is necessarily more reliable after debugging for the removal of a fault.
    c)A fault need not affect the reliability of a system.
    d)Undetected errors may lead to faults and eventually to incorrect behaviour.

    16 Which one of the following statements, about capture-replay tools, is NOT correct?
    a)They are used to support multi-user testing.
    b)They are used to capture and animate user requirements.
    c)They are the most frequently purchased types of CAST tool.
    d)They capture aspects of user behaviour.

    17 How would you estimate the amount of re-testing likely to be required?
    a)Metrics from previous similar projects
    b)Discussions with the development team
    c)Time allocated for regression testing
    d)a & b

    18 Which of the following is true of the V-model?
    a)It states that modules are tested against user requirements.
    b)It only models the testing phase.
    c)It specifies the test techniques to be used.
    d)It includes the verification of designs.

    19 The oracle assumption:
    a)is that there is some existing system against which test output may be checked.
    b)is that the tester can routinely identify the correct outcome of a test.
    c)is that the tester knows everything about the software under test.
    d)is that the tests are reviewed by experienced testers.

    20 Which of the following characterizes the cost of faults?
    a)They are cheapest to find in the early development phases and the most expensive to fix in the latest test phases.
    b)They are easiest to find during system testing but the most expensive to fix then.
    c)Faults are cheapest to find in the early development phases but the most expensive to fix then.
    d)Although faults are most expensive to find during early development phases, they are cheapest to fix then.


    Answers: 11A 12B 13A 14C 15B 16B 17D 18D 19B 20A



    Thursday, June 7, 2007

    Software Testing Practice Exam - 1

    Time allowed: 1 hour 40 QUESTIONS
    NOTE: Only one answer per question

    1 We split testing into distinct stages primarily because:
    a)Each test stage has a different purpose.
    b)It is easier to manage testing in stages.
    c)We can run different tests in different environments.
    d)The more stages we have, the better the testing.

    2 Which of the following is likely to benefit most from the use of test tools providing test capture and replay facilities?
    a)Regression testing
    b)Integration testing
    c)System testing
    d)User acceptance testing

    3 Which of the following statements is NOT correct?
    a)A minimal test set that achieves 100% LCSAJ coverage will also achieve 100% branch coverage.
    b)A minimal test set that achieves 100% path coverage will also achieve 100% statement coverage.
    c)A minimal test set that achieves 100% path coverage will generally detect more faults than one that achieves 100% statement coverage.
    d)A minimal test set that achieves 100% statement coverage will generally detect more faults than one that achieves 100% branch coverage.
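To see why a minimal 100% statement-coverage test set is generally weaker than a 100% branch-coverage one, consider this tiny hypothetical function (names invented for illustration):

```python
def clamp_negative(x):
    # Replaces a negative value with zero; non-negatives pass through.
    if x < 0:
        x = 0
    return x
```

A single test such as `clamp_negative(-1)` executes every statement, giving 100% statement coverage, yet it never takes the false branch of the `if`. Branch coverage additionally forces a test like `clamp_negative(5)`, so it exercises behaviour that statement coverage can miss.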

    4 Which of the following requirements is testable?
    a)The system shall be user friendly.
    b)The safety-critical parts of the system shall contain 0 faults.
    c)The response time shall be less than one second for the specified design load.
    d)The system shall be built to be portable.

    5 Analyse the following highly simplified procedure:
    Ask: “What type of ticket do you require, single or return?”
    IF the customer wants ‘return’
    Ask: “What rate, Standard or Cheap-day?”
    IF the customer replies ‘Cheap-day’
    Say: “That will be £11.20”
    ELSE
    Say: “That will be £19.50”
    ENDIF
    ELSE
    Say: “That will be £9.75”
    ENDIF
    Now decide the minimum number of tests needed to ensure that all the
    questions have been asked, all combinations have occurred and all
    replies have been given.
    a)3
    b)4
    c)5
    d)6
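The nested IF/ELSE above can be transcribed directly into Python (the function name `ticket_price` is invented for illustration), which makes the distinct paths easy to count:

```python
def ticket_price(ticket_type, rate=None):
    # Mirrors the question's nested IF/ELSE procedure.
    if ticket_type == "return":
        if rate == "Cheap-day":
            return 11.20
        else:
            return 19.50
    else:
        return 9.75
```

There are exactly three paths through the logic (single; return + Cheap-day; return + Standard), so three tests suffice to ask every question and exercise every reply.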

    6 Error guessing:
    a)supplements formal test design techniques.
    b)can only be used in component, integration and system testing.
    c)is only performed in user acceptance testing.
    d)is not repeatable and should not be used.

    7 Which of the following is NOT true of test coverage criteria?
    a)Test coverage criteria can be measured in terms of items exercised by a test suite.
    b)A measure of test coverage criteria is the percentage of user requirements covered.
    c)A measure of test coverage criteria is the percentage of faults found.
    d)Test coverage criteria are often used when specifying test completion criteria.

    8 In prioritising what to test, the most important objective is to:
    a)find as many faults as possible.
    b)test high risk areas.
    c)obtain good test coverage.
    d)test whatever is easiest to test.

    9 Given the following sets of test management terms (v-z), and activity descriptions (1-5), which one of the following best pairs the two sets?
    v – test control
    w – test monitoring
    x - test estimation
    y - incident management
    z - configuration control

    1 - calculation of required test resources
    2 - maintenance of record of test results
    3 - re-allocation of resources when tests overrun
    4 - report on deviation from test plan
    5 - tracking of anomalous test results

    a)v-3,w-2,x-1,y-5,z-4
    b)v-2,w-5,x-1,y-4,z-3
    c)v-3,w-4,x-1,y-5,z-2
    d)v-2,w-1,x-4,y-3,z-5

    10 Which one of the following statements about system testing is NOT true?
    a)System tests are often performed by independent teams.
    b)Functional testing is used more than structural testing.
    c)Faults found during system tests can be very expensive to fix.
    d)End-users should be involved in system tests.



    Answers: 1A 2A 3D 4C 5A 6A 7C 8B 9C 10D



    Monday, June 4, 2007

    Different software testing standards

    • IEEE 1008, a standard for unit testing
    • IEEE 1012, a standard for Software Verification and Validation
    • IEEE 1028, a standard for software inspections
    • IEEE 1044, a standard for the classification of software anomalies
    • IEEE 1044-1, a guide to the classification of software anomalies
    • IEEE 1233, a guide for developing system requirements specifications
    • IEEE 730, a standard for software quality assurance plans
    • IEEE 1061, a standard for software quality metrics and methodology
    • BS 7925-1, a vocabulary of terms used in software testing
    • BS 7925-2, a standard for software component testing