Regression Testing and Retesting

The difference between Regression Testing and Retesting is one of the most frequently asked questions in Software Testing interviews. So, a straightforward and crisp answer would be –
 
Re-testing is the verification of defects to confirm that the functionalities are working as expected. When a bug is fixed, the test cases which failed against it are executed again.
 
Regression Testing is performed to verify that changes to the code (bug fixes, enhancements, code cleanup etc.) have not impacted the unchanged functionalities of the software/application.
 
For example:

  • Consider an application ‘abc’ with modules ‘a1, b1 and c1’.
  • Some bug fixes are made to module b1.
  • Retesting – Re-executing the failed test cases against the fixes in module b1.
  • Regression Testing – Testing the areas of a1 and c1 affected by the changes in b1.
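The selection logic above can be sketched in a few lines. This is a hypothetical illustration – the test-case records and their fields are assumptions based on the ‘abc’ example, not a real framework API:

```python
# Hypothetical sketch: picking test cases for retesting vs. regression
# after fixes to module b1. The test-case records are invented.

def select_retest_cases(test_cases, fixed_module):
    """Retesting: re-run only the cases that failed against the fixed module."""
    return [tc for tc in test_cases
            if tc["module"] == fixed_module and tc["last_result"] == "fail"]

def select_regression_cases(test_cases, impacted_modules):
    """Regression: run cases covering the impacted areas,
    regardless of their earlier pass/fail status."""
    return [tc for tc in test_cases if tc["module"] in impacted_modules]

test_cases = [
    {"id": "TC1", "module": "a1", "last_result": "pass"},
    {"id": "TC2", "module": "b1", "last_result": "fail"},
    {"id": "TC3", "module": "b1", "last_result": "pass"},
    {"id": "TC4", "module": "c1", "last_result": "pass"},
]

retest = select_retest_cases(test_cases, "b1")
regression = select_regression_cases(test_cases, ["a1", "c1"])
print([tc["id"] for tc in retest])      # ['TC2']
print([tc["id"] for tc in regression])  # ['TC1', 'TC4']
```

Note how the regression selection ignores the earlier pass/fail status entirely – that is the key difference the table below spells out.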
 

[Image: a rotten apple in a barrel. Source: MS Word Clip Art]
 
The image above also represents the basic Regression Testing and Retesting concept: a rotten apple spoils the barrel –
  • Ensure the rotten one is removed ~ Re-testing
  • Always go through the rest ~ Regression Testing
Following is a detailed description of the differences between Regression Testing and Retesting.
 

| Retesting | Regression Testing |
| --- | --- |
| Retesting is performed to make sure that the test cases which failed in the last execution pass once the defects behind those failures are fixed. | Regression testing is performed to ensure that changes such as defect fixes or enhancements to a module or application have not affected the unchanged parts of the application. |
| Retesting is carried out based on defect fixes. | Regression testing is not carried out on specific defect fixes. It is planned as specific-area or full regression testing. |
| In Retesting, the test cases which failed earlier are included in the test suite. | In Regression testing, test cases which cover the impacted functionality are included in the test suite irrespective of their pass/fail status in earlier runs. |
| Test cases for Retesting cannot be prepared before testing starts; Retesting only re-executes the test cases that failed in the prior execution. | Regression test cases are derived from the functional specification, user manuals, user tutorials, and defect reports related to corrected problems. |
| Automation of retesting scenarios is not recommended. | Regression scenarios are the first candidates for test automation. |
| Retesting is performed before Regression testing. | Regression testing can be carried out in parallel with Retesting. |

 

Principles of Software Testing

Following are the seven principles of Software Testing as per the ISTQB syllabus. The principles are very basic and, if remembered, may prove very useful and sometimes help in resolving misconceptions. The one- or two-line descriptions say it all, so I have not edited the contents:
Source: ISTQB Foundation Level Syllabus


 

[Image: the seven principles of Software Testing arranged around the words “SOFTWARE TESTING” – Testing shows presence of errors, Exhaustive testing is impossible, Early testing, Defect clustering, Pesticide paradox, Testing is context dependent.]
Principle 1 – Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.

Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
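A quick back-of-the-envelope calculation shows why. The field sizes below are arbitrary assumptions, but the combinatorial explosion is the point:

```python
# Why exhaustive testing is infeasible: even tiny input spaces multiply.
# The field sizes here are arbitrary assumptions for illustration.

# One free-text field of 10 characters, each one of 26 lowercase letters:
single_field = 26 ** 10
print(single_field)  # 141167095653376 possible inputs for one field

# Three independent dropdowns with 20, 15 and 30 options:
dropdowns = 20 * 15 * 30
print(dropdowns)  # 9000 combinations for just three fields

# At one test per millisecond, the single text field alone
# would take roughly 4,476 years to cover exhaustively:
years = single_field / 1000 / 60 / 60 / 24 / 365
```

Risk analysis and prioritization exist precisely because numbers like these make "test everything" a non-starter.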

Principle 3 – Early testing
Testing activities should start as early as possible in the software or system development life cycle, and should be focused on defined objectives.

Principle 4 – Defect clustering
A small number of modules contain most of the defects discovered during pre-release testing, or are responsible for the most operational failures.

Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this “pesticide paradox”, the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.

Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.

Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users’ needs and expectations.

Software Testing – Myths and Facts

“Software Testing is relatively new” – I get irritated whenever I hear or read this, as if, before the term was coined, the software being developed was not tested – oh, there is that relatively new word again – checked for its expected behaviour.

Image source: http://qld.greens.org.au/sites/greens.org.au/files/u4412/fact%20or%20fiction.jpg
So, when we talk about myths about software testing, the above stands on top. There are a few more I have tried to jot down, in no particular order. Let’s see if you agree.

1.  Software Testing is relatively New
I do not need to write anything for this. We get to hear this ironic statement everywhere – big or small.

Fact: Testing has existed ever since the first piece of code was written.

2.  Testing is boring
Testing being a monotonous activity is a sentiment reflected very frequently on almost every platform. People have a misconception that testers keep doing the same old clicks and data entry without any creativity.

Fact: A statement somewhere on the internet says it all – “Testing is like sex. If it’s not fun, then you are not doing it right.”

3.  Testing is easy/No formal training is required
Testing being an easy job is another big misconception. There are arguments all over the world that, since users keep finding bugs, it’s no big deal being a tester. Sometimes proper, planned formal training is also neglected.

Fact: Many times testing proves to be more complex and tedious than development, as it is difficult to analyze behavior without the code (I am not saying that developers have an easy job, but access to the code gives an edge in analyzing behavior). Testers require a deep understanding of testing methods, business requirements and the development process.
Formal training and experience make a good tester with solid skills – or, we can say, a valued resource.

4.  Anyone can test
I cannot forget the statement by one of my managers – “Amita, don’t feel offended, but the fact is that testing can be done by anyone. I mean, you need special skills and technology knowledge to develop software, but testing can be performed by developers as well.”

Fact: If that were the case, no company would have paid a dollar to testers, and there would be no companies that specialize only in testing services. This is the trap management falls into.

5.   A tester’s job is to find “ALL” bugs
100% test coverage is one of the top goals of most testers. This is the most common myth, which clients, Project Managers and the management team believe in.

Fact: The very first principle of software testing states – “Testing shows presence of defects”. Testing shows that defects are present, but cannot prove that there are no defects. Even if no defects are found, that is not a proof of correctness.

6.   Automated Scripts replace manual testing
Whenever I give a POC for automation, people get the impression – “Wow! Just click and all done!” – with statements like “So, if you create the scripts once, we can just run them for the next releases.” The most common misunderstanding is that automated tests are equivalent to manual tests. The worst part is that there are testers and test managers who actually believe it.

Fact: No test automation tool can ever replicate human feelings or emotions.
For example, a tool can verify that the fonts, color and layout of a screen are as per the test script, but it can never analyze whether the screen is user friendly.
7.  Software Testing is “time consuming” and “expensive”
I personally get emails asking to revise our estimates for testing efforts very frequently. Sometimes I hate having to justify the minute details of my estimates. The perceived myth that it takes too much effort – “time consuming” – automatically generates the next one – “expensive”.

Fact: Software testing principle 3 – testing activities should start as early as possible in the software or system development life cycle and should be focused on defined objectives.
One cannot expect testers to be magicians and test the code overnight when they have no idea of the background.
Decide – pay during development, or pay later for production issues and a damaged reputation.
8.  Software Testing is same as Quality Control
I can bet that almost 90% of people confuse testing with quality control. Software Testing, Quality Assurance and Quality Control are terms used almost interchangeably.

Fact: Testing is just one component of software quality control, wherein a tester identifies bugs for the stakeholders. Quality Control includes many other activities, such as reviews – self review, peer review and structured walkthroughs.

9.  No changes – Regression testing is not needed
Regression Testing is skipped many times to save effort. “There has been no change in this code” is the most common excuse for it.

Fact: Even if the module was not touched, spending a few bucks on re-verifying the results may save the cost of an unidentified bug or scenario.

10. Software Testing is the career choice for failures
Freshers dread the software testing field. They think that it does not offer career growth and is a low-profile job.

Fact: Think again. Companies like Facebook and Microsoft pay handsomely for finding bugs.

These are the most common myths I have faced during my career. If you have any additions, or have faced similar experiences, please feel free to comment or share. This does not mean I do not welcome general like/dislike comments – I love them the most… 🙂

Quality Assurance and Quality Control – QA Vs QC


Before I jump into the title of this post, I would like to touch upon the base term – ‘Quality’.

If I look into the Oxford dictionary as a layman, I get the following definitions of the word ‘Quality’:
  • The standard of something as measured against other things of a similar kind; the degree of excellence of something.
  • A distinctive attribute or characteristic possessed by someone or something.

Everyone has their own interpretation of quality. Broadly, quality means “conformance to requirements”. ISO 9000 defines quality as the “degree to which a set of inherent characteristics fulfils requirements”.
Coming to more technical details,

A product developer will define quality as – a product which meets the customer requirements.
The customer will define quality as – the required functionality provided with a user-friendly interface.

With reference to quality, QA and QC are two terms that are used most frequently and almost interchangeably. Let’s define both the terms.

Quality Assurance (QA): It may be defined as the set of software quality processes that assures that the standards, processes, and procedures are appropriate for the project and are correctly implemented.
QA refers to the maintenance of a desired level of quality in a service or product, especially by means of attention to every stage of the process of delivery or production.
There are two principles included in Quality Assurance:
  1. “Fit for purpose”, the product should be suitable for the intended purpose
  2. “Right first time”, mistakes should be eliminated.

Quality Control (QC): It may be defined as the function of software quality that checks that the project follows its standards, processes, and procedures, and that the project produces the required internal and external (deliverable) products. It is performed by testing a sample of the output against the specification.
If I have to differentiate between the two, I would do it as –
Quality Assurance is the process of managing for quality; it makes sure you are doing the right things, the right way.
Quality Control verifies the quality of the output and makes sure the results of what you’ve done are what you expected.

Quality Assurance Vs Quality Control (QA Vs QC)

| Quality Assurance | Quality Control |
| --- | --- |
| It is a proactive quality process. | QC is a reactive process. |
| The goal of QA is to improve development and test processes so that defects do not arise while the product is being developed. | The goal of QC is to identify (and correct) defects after a product is developed and before it is released. |
| Verification is an example of QA. | Validation/Software Testing is an example of QC. |
| QA is a managerial tool. | QC is a corrective tool. |
| It does not involve executing the program or code. | It always involves executing the program or code. |
| Everyone involved in developing the software application is responsible for quality assurance. | The testing team is responsible for quality control. |
| Establishes a good quality management system, assesses its adequacy, and performs periodic conformance audits of the operations of the system. | Finds and eliminates sources of quality problems through tools and equipment so that the customer’s requirements are continually met. |
| Quality Assurance is the process of managing for quality. | Quality Control is used to verify the quality of the output. |
| Quality Assurance is process oriented. | Quality Control is product oriented. |
| It identifies weaknesses in processes to improve them. | It identifies defects to be fixed. |
| It is done before Quality Control. | It is done only after the Quality Assurance activity is completed. |
| Quality Assurance means planning for doing a process. | Quality Control means taking action on the process by executing it. |


Defect Severity and Defect Priority with examples


The difference between the Severity and Priority of a defect has been the most common question in Software Testing job interviews. This is one topic on which even very senior managers sometimes have conflicting views.
Defect Severity
Defect Severity signifies the degree of impact a defect has on the development or operation of the component or application being tested. It is the extent to which the defect can affect the software. The severity type is defined by the Software Tester based on the written test cases and functionality.
Defect Severity may range from Low to Critical:
  • Critical – the defect causes complete system failure; nothing can proceed further. It may also be called a show stopper.
  • Major – a highly severe defect that causes the system to collapse; however, a few parts of the system are still usable, and/or there are workarounds for using the system even in the collapsed state.
  • Medium – causes some undesirable behavior; however, the system/feature is still usable to a high degree.
  • Low – more of a cosmetic issue; no serious impedance to system functionality is noted.
Defect Priority
Defect priority signifies the level of urgency of fixing the bug. In other words, priority means how fast/how soon it has to be fixed. Though priority may be initially set by the Software Tester, it is usually finalized by the Project/Product Manager.
Defect Priority may range from Low to Urgent:
  • Urgent: Must be fixed before any other high, medium or low priority defect, and must be fixed in the next build.
  • High: Must be fixed in one of the upcoming builds, but should be included in the release.
  • Medium: Should take precedence over low priority defects; may be fixed after the release/in the next release.
  • Low: Fixing can be deferred until all other priority defects are fixed; it may not be fixed at all.
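The two scales above can be combined into a simple triage order – priority first, severity as a tie-breaker. A hypothetical sketch (the rank tables and defect records are invented for illustration):

```python
# Hypothetical triage sketch: ordering a defect backlog by priority first,
# then severity, matching the definitions above. Rank tables are assumptions.

PRIORITY_RANK = {"Urgent": 0, "High": 1, "Medium": 2, "Low": 3}
SEVERITY_RANK = {"Critical": 0, "Major": 1, "Medium": 2, "Low": 3}

def triage(defects):
    """Sort defects so Urgent items come first; severity breaks ties."""
    return sorted(defects, key=lambda d: (PRIORITY_RANK[d["priority"]],
                                          SEVERITY_RANK[d["severity"]]))

backlog = [
    {"id": "D1", "priority": "Low", "severity": "Critical"},    # crash beyond 50,000 sessions
    {"id": "D2", "priority": "High", "severity": "Low"},        # wrong logo on the home page
    {"id": "D3", "priority": "Urgent", "severity": "Critical"}, # login shows junk data
]

order = [d["id"] for d in triage(backlog)]
print(order)  # ['D3', 'D2', 'D1']
```

The comments on the backlog entries mirror the worked examples below: high severity does not automatically mean high priority, and vice versa.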

Differences between Defect Severity and Defect Priority

| Severity | Priority |
| --- | --- |
| Severity is associated with standards/functionality. | Priority is associated with scheduling. |
| Severity refers to the seriousness of the bug’s effect on the functionality of the product. The higher the effect on functionality, the higher the severity assigned to the bug. | Priority refers to how soon the bug should be fixed. |
| Generally, the Quality Assurance Engineer decides the severity level. | The priority to fix a bug is decided in consultation with the client/manager. |
Examples:
  1. Let us assume a scenario where the “Login” button is labeled “Logen”. The priority and severity for different situations may be expressed as:
    • For GUI testing: high priority and low severity
    • For UI testing: high priority and high severity
    • For functional testing: low priority and low severity
    • For cosmetic testing: low priority and high severity
  2. Low Severity, Low Priority
Suppose a web application is made up of 20 pages. On one of those 20 pages, one that is visited very infrequently, there is a sentence with a grammatical error. Even though it is a mistake on the website, users can understand the meaning without any difficulty. This bug may go unnoticed by many and won’t affect any functionality or the credibility of the company.
  3. Low Severity, High Priority
    • While developing a site for Pepsi, a Coke logo is embedded by mistake. This does not affect functionality in any way, but it has a high priority to be fixed.
    • Any typos or glaring spelling mistakes on the home page.
  4. High Severity, Low Priority
    • The application works perfectly for 50,000 sessions but begins to crash at a higher number of sessions. This problem needs to be fixed, but not immediately.
    • Any report generation not completing 100% – e.g. missing the title and column headings but listing the proper data. We could have this fixed in the next build, but missing report columns are a High Severity defect.
  5. High Severity, High Priority
    • Now assume a Windows-based application, say a word processor. As you open any file to view it, the application crashes. You can still create new files, but as soon as you open them, the word processor crashes. This completely eliminates the usability of the word processor, as you cannot come back and edit your work, and it affects one of the major functionalities of the application. Thus, it is a severe bug and should be fixed immediately.
    • Let’s say that as soon as the user clicks the login button on the Gmail site, junk data is displayed on a blank page. Users can access the gmail.com website but are not able to log in successfully, and no relevant error message is displayed. This is a severe bug and needs topmost priority.

Defect Life Cycle/Bug Life Cycle/Defect Management


The simple Wikipedia definition of a bug is: “A computer bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working correctly or produces an incorrect result. Bugs arise from mistakes and errors, made by people, in either a program’s source code or its design.”
The defect/bug life cycle is the process a defect goes through during its lifetime. It starts when a defect is found and ends when the defect is closed, after ensuring it is not reproduced. The life cycle varies from organization to organization and is governed by the software testing process the organization or project follows and/or the defect tracking tool being used.
In general, the life cycle of a defect is depicted by the following steps/statuses.


Description of Stages:
New State:
When the defect is found and submitted, New status is automatically assigned to it.

Rejected: A quality assurance lead or project manager reviews the defect and decides whether or not to repair it. If the defect is refused, the Rejected status is assigned to it.

Open: If the defect is accepted for repair, the quality assurance lead or project manager changes the status to Open and assigns it to the development team.

Deferred: The development team reviews the defect and decides whether to fix it in the current release or in some later release. If the defect will not be fixed in the current release, the Deferred status is assigned to it.

Fixed: The development team repairs the defect and changes its status to Fixed. They then assign the defect back to the quality assurance lead or project manager.

Retest: After the defect is fixed, the quality assurance lead or project manager changes the status of the defect to Retest and assigns it to a tester.

Reopen: The testers retest the defect to see whether it has been repaired or not. If the bug still exists, the status is changed to Reopen. The defect then passes through the Fixed and Retest statuses again until it has been fixed.

Closed: If the defect is found repaired after the retest, the quality assurance lead or project manager changes the defect’s status to Closed.
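The stages above can be summarized as a small state machine. This is just a sketch of the life cycle as described in this post – real defect-tracking tools have their own status names and transition rules:

```python
# Minimal sketch of the defect life cycle above as a state machine.
# The statuses and transitions mirror this post's stage descriptions.

TRANSITIONS = {
    "New":      {"Rejected", "Open"},
    "Open":     {"Deferred", "Fixed"},
    "Deferred": {"Open"},
    "Fixed":    {"Retest"},
    "Retest":   {"Reopen", "Closed"},
    "Reopen":   {"Fixed"},
    "Rejected": set(),   # terminal status
    "Closed":   set(),   # terminal status
}

def move(status, new_status):
    """Validate a status change against the life cycle; raise if illegal."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"Illegal transition {status} -> {new_status}")
    return new_status

# Walk one realistic path: fixed, reopened once, then fixed and closed.
s = "New"
for nxt in ("Open", "Fixed", "Retest", "Reopen", "Fixed", "Retest", "Closed"):
    s = move(s, nxt)
print(s)  # Closed
```

Encoding the transitions as data makes the earlier point concrete: the cycle can loop through Fixed/Retest/Reopen any number of times before reaching Closed.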

General Test Case Template

Hi All,

Attached is the sample test case template we use for our organization.

General Test Case Template.xls

Hope it helps all….:)

QC Export Wizard – Error on Step 4 of 8 – User is not permitted to export this kind of data – Quality Center

While exporting from Excel to QC, the export wizard throws the following error at step 4 – “User is not permitted to export this kind of data”.

  • The above error is thrown when the user does not have the rights required to upload data.
  • In this case, the user will not be able to create requirements/tests/defects manually either.
  • To resolve the issue, verify your permissions or role on the project you are trying to export data to. This may be checked by navigating to Tools -> Customizations.
  • If you do not have the proper rights, chances are that you will not be able to view the Customizations screen either.
  • In that case, either try logging in with an admin login or ask your teammates to check for you.

Life Cycle of Functional Test Automation

Based on my experience, here is the life cycle of Functional Test Automation as per my understanding.

  1. Check if the automation tool to be used supports the application to be tested.
  2. Identify the scenarios to be tested.
    • The identified test scenarios should be defined in the form of actions (QTP)/scripts (RFT).
  3. Object Identification/Recognition
    • A tool understands and identifies the objects in an application via a feature commonly called an object repository.
    • For example, QTP has an OR (object repository), which identifies objects by properties such as class name, text and parent name.
    • RFT has an Object Map to identify the objects.
  4. Object Map
    • Decide which Object Map is to be used: a public Object Map or a private Object Map.
    • From the RFT perspective:

| Public Object Map | Private Object Map |
| --- | --- |
| Its objects are accessible to all the scripts within a project. | Its objects are accessible only to a particular script. |

    • If a public map is to be used, a map file must be created.
  5. Create Script
    • A script consists of steps, data and verification.
    • Points to take care of while recording a script:
      i. Do not open any other application while recording.
      ii. Perform only relevant actions.
      iii. When finished, come back to the initial step. For example, if the script opens a browser, it must close it at the end of the script.
  6. Play Back
    • Play back the recorded script to check that it works fine and that only the required actions were recorded.
  7. Synchronization
    • Refers to waiting until a defined condition is satisfied.
    • In QTP -> WaitProperty().
    • In RFT -> waitForExistence().
  8. Verification
    • Refers to specifying the expected data or a property value.
    • Whether a script continues or fails on a failed verification point depends on the scenario.
  9. Load Function Libraries
    • Normally, there are two types of function libraries:
      i. General – includes functions which may be used across projects, such as functions for working with Excel files.
      ii. Application Specific – includes functions which cannot be used across projects and are specific to the application.
  10. Create Data – Data Driven
    • If the verification points run successfully, a data pool must be created.
  11. Output Value Identification
    • Necessary for reporting, or for scenarios where the output of one step or script is required as an input to another step/script.
  12. Exception Handling/Recovery Scenarios
  13. Run
  14. Batch Run
  15. Defect Tracking
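The synchronization step above (QTP’s WaitProperty, RFT’s waitForExistence) boils down to the same idea in any tool: poll a condition until it holds or a timeout expires. A generic, tool-agnostic sketch:

```python
# Generic synchronization sketch: poll a condition until it becomes true
# or a timeout expires, the same idea behind QTP/RFT wait calls.

import time

def wait_until(condition, timeout=10.0, poll_interval=0.2):
    """Poll `condition` until it returns True or `timeout` seconds pass."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    return False

# Usage: wait for a (simulated) element that "appears" after 0.5 seconds.
appears_at = time.monotonic() + 0.5
found = wait_until(lambda: time.monotonic() >= appears_at, timeout=2.0)
print(found)  # True
```

A fixed `sleep` would either waste time or fail intermittently; polling against a deadline is why explicit waits are preferred in every automation tool.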

Load Runner (LR) – Error: “-27751: Step download timeout (120 seconds) has expired when downloading resource(s)…”

If the step download timeout error occurs after a few iterations, it means your application is actually taking that much time to process the request.
To resolve the issue, try the following:
  • Go to Run-Time Settings,
  • Go to Internet Protocol -> Preferences and click the “Options” button,
  • Change the “Step Download Timeout (secs)” from the default value of 120 seconds to the desired value.
Note that in the Run-Time Setting dialog, the maximum limit for step download timeout is 32000 seconds. To specify an increased limit, use the web_set_timeout() function as follows:
  • web_set_timeout(STEP, “xx”); // for a regular web Vuser
  • web.set_timeout(STEP, “xx”); // for a regular Java/JavaScript Vuser
The Step Download Timeout encompasses all requests made from a single LoadRunner statement. For example, a step in the script may consist of a single request to the server, or of 10 requests. An HTML-mode web_url request to www.google.com, for instance, consists of an initial request to the Google server for the main HTML file; after parsing this HTML file, additional resources (e.g. .gif files) are downloaded. In HTML mode, these resources are downloaded automatically as part of the web_url step for www.google.com. Thus, the Step Download Timeout encompasses all the requests – the initial request and the requests arising from any related resources.
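The “one timeout covering all requests of a step” idea can be sketched outside LoadRunner, too. In this hypothetical example, `fetch` is a stand-in for the real download call, and a single deadline is shared by the main page and all of its resources:

```python
# Sketch of a step-level timeout: one shared deadline spans the main page
# plus every embedded resource, mirroring LoadRunner's Step Download Timeout.
# `fetch` is a hypothetical stand-in for the real download call.

import time

class StepTimeoutError(Exception):
    pass

def download_step(resources, fetch, step_timeout=120.0):
    """Download every resource of one step under a single shared deadline."""
    deadline = time.monotonic() + step_timeout
    results = []
    for url in resources:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            raise StepTimeoutError(f"Step timeout expired before {url}")
        # Each individual download may use at most the remaining budget.
        results.append(fetch(url, timeout=remaining))
    return results

# Usage with a stub fetcher (no network needed):
pages = ["main.html", "logo.gif", "style.css"]
got = download_step(pages, fetch=lambda url, timeout: url, step_timeout=5.0)
print(got)  # ['main.html', 'logo.gif', 'style.css']
```

The key design point is that the per-resource timeout shrinks as the step progresses, so a slow main page leaves less budget for its .gif and .css resources – exactly why the -27751 error can fire midway through a page.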