Software Testing Abbreviations and Acronyms

Below is a list of acronyms and abbreviations related to Software Testing, collected from various sources.

ABI
Application Binary Interface
ACC
Attribute Component Capability
AIAT
Artificial Intelligence Applications Testing
ALM
Application Lifecycle Management
AMC
Average Method Complexity
AMDD
Agile Model-Driven Development
ANSI
American National Standards Institute
API
Application Programming Interface
ASCII
American Standard Code for Information Interchange
ASTF
Automated Software Test Framework
ATP
Acceptance Test Procedure
ATLM
Automated testing lifecycle methodology
AUT
Application Under Test
BDD
Behavior Driven Development
BERT
Bit Error Rate Test (Diagnostic Tests)
BIST
Built-in self-test (Diagnostic Tests)
BITE
Browser Integrated Test Environment
BFV
Bug fix verification
BR
Business Requirement
BRD
Business Requirement Document
BRS
Business Requirement Specification
BVA
Boundary Value Analysis
CASE
Computer-Aided Software Engineering
CAST
Computer Aided Software Testing
CCB
Configuration Control Board
CDR
Critical Design Review
CE
Critical Error
CHAP
Challenge Handshake Authentication Protocol
CID
Configuration Identification
CM
Configuration Management
CMM
Capability Maturity Model
CMMI
Capability Maturity Model Integration
CMP
Configuration Management Plan
CMT
Configuration Management Tool
COF
Cost of Failure
COQ
Cost of Quality
COTS
Commercial Off-The-Shelf Software
CR
Change Request
CRUD
Create, Read, Update, Delete
CTP
Critical Testing Processes
DDP
Defect Detection Percentage
DFD
Data Flow Diagram
DOM
Document Object Model
DSDM
Dynamic Systems Development Method
DRE
Defect Removal Efficiency
EP
Equivalence Partitioning
ERD
Entity Relationship Diagram
ETA
Estimated Time of Arrival
ETL
Extract, Transform, Load
FAQ
Frequently Asked Questions
FDD
Functional Design Document
FDD
Feature Driven Development
FMEA
Failure Mode and Effect Analysis 
FPA
Function Point Analysis
FTP
File Transfer Protocol
FVT
Function verification test
GUI
Graphical User Interface
GTA
Google Test Analytics
HIPO
Hierarchy, Input, Processing, Output
HLD
High Level Design
HP QC
Hewlett Packard Quality Center
IDD
Interface design document
IDE
Integrated Development Environment
IDEAL
Initiating, Diagnosing, Establishing, Acting & Learning
IDL
Interface design language
IEEE
Institute of Electrical and Electronics Engineers
ISO
International Organization for Standardization
ISAKMP
Internet Security Association and Key Management Protocol
ISDN
Integrated Services Digital Network
ISLE
Integrated Software Lifecycle Environment
ISTQB
International Software Testing Qualifications Board (Certification)
JAD
Joint Application Development
JDBC
Java Database Connectivity
JTC1
Joint Technical Committee 1
KBSA
Knowledge-Based Software Assistant
KLOC
Thousand Lines of Code
KM
Knowledge Management
LCL
Lower Control Limit
LCSAJ
Linear Code Sequence And Jump (a software analysis method)
LOC
Lines of Code
LSRT
Long-Sequence Regression Testing
MDD
Model-Driven Development
MTBF
Mean Time Between Failures
MTTF
Mean Time To Failure
MTTR
Mean Time To Repair
MTTCF
Mean Time To Critical Failure
MVP
Minimum Viable Product
NCSA
National Cyber Security Alliance
NDA
Non-Disclosure Agreement
NFR
Non-Functional Requirements
NIST
National Institute of Standards and Technology
NBS
National Bureau of Standards
OCM
Operational Configuration Management
ODBC
Open Database Connectivity
OKRs
Objectives and Key Results
OLAP
On Line Analytical Processing
OLTP
On Line Transactional Processing
ORD
Object Relationship Diagram
OS
Operating System
OSI
Open Systems Interconnection
PA
Physical Audit
PCA
Performance and Coverage Analysis
PDR
Preliminary Design Review
PERT
Program Evaluation and Review Technique (Diagram)
PIR
Post Implementation Review
PCRTS
Problem and Change Request Tracking System
PIM
Platform-Independent Model
POC
Proof of Concept
POF
Probability of Failure
POI
Poor Obfuscation Implementation
POST
Power-On Self-Test (Diagnostic Tests)
PSI
Platform-Specific Implementation
PTR
Problem Trouble Report
QA
Quality Assurance
QC
Quality Control
QMS
Quality Management System
QoS
Quality of Service
QUES
Quality Evaluation System
RAD
Rapid Application Development
RAT
Real Application Testing
RCA
Root Cause Analysis
RFC
Request For Change/Comments
ROI
Return on Investment
RIB
Reflexive User Interface Builder
RFP
Request for Proposal
RMI
Remote Method Invocation
RPF
Record and Playback Framework
RST
Reverse Semantic Traceability
RTM
Requirements Traceability Matrix

SA
Structured Analysis
SADT
Structured Analysis and Design Technique
SCA
Static Code Analysis
SC
Security Checklist
SCA
Source Code Analyzer
SC
Standards Committee
SCR
Software Change Request
SDD
System and software Design Document
SDK
Software Development Kit
SDLC
Software Development Life Cycle
SDP
Software Development Plan
SEI
Software Engineering Institute
SEO
Search Engine Optimization
SIR
System Investigation Report
SIT
System Integration Testing
SLA
Service Level Agreement
SLC
Software Life Cycle
SOA
Service Oriented Architecture
SQL
Structured Query Language
SQS
Software Quality System
SQSP
Software Quality System Plan
SRD
Specification Requirement Document
SRR
Software Requirements Review
SRS
Software Requirement Specification
SMARTS
Software Maintenance and Regression Test System
SSH
Secure Shell
STAF
Software Testing Automation Framework
STEP
Systematic Test and Evaluation Process
START
Structured Testing and Requirements Tool
STR
Software Trouble Report
STR
System Trouble Report
SIM
Subscriber Identity Module
ST
State Table
Sc
Security
ST
System Testing
STLC
Software Testing Life Cycle
SUT
System Under Test
SUMI
Software Usability Measurement Inventory
TCAT
Test Coverage Analysis Tool
TCB
Trusted Computing Base
TDD
Test Driven Development
TDGEN
Test Data Generator
TLS
Transport Layer Security
TMM
Test Maturity Model
TOE
Target of Evaluation
TPA
Test Point Analysis
TPI
Test Process Improvement
TPT
Time Partition Testing
TQC
Total Quality Control
TTCN
Testing and Test Control Notation
TTM
Test Traceability Matrix
TRR
Test Readiness Review
TSR
Test Summary Report
UAT
User Acceptance Testing
UCL
Upper Control Limit
UDF
Unit Development Folder
UDDI
Universal Description, Discovery and Integration
UML TP
UML Testing Profile
USM
User-based Security Model
UTC
Usability-Test Candidate
UI
User Interface
UML
Unified Modelling Language
URI
Uniform Resource Identifier
URL
Uniform Resource Locator
V & V
Verification and Validation
VE
Virtual Environment
VM
Virtual machine
VU
Virtual Users
WIP
Work In Progress
WSJF
Weighted Shortest Job First
XML
Extensible Markup Language
XP
eXtreme Programming
XSS
Cross Site Scripting

JIRA Query to find the list of issues updated in a month

As a process, we need to prepare an effort variance report every month for all the applications tested by our team. We use JIRA to log the effort for each item.

Effort Variance (%) = ((Actual Effort - Planned Effort) / Planned Effort) * 100
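As a rough sketch, the calculation can be expressed in a few lines of Python. The function name and the sample hour values are hypothetical; the formula assumes variance is reported as a percentage of planned effort:

```python
# Minimal sketch: effort variance as a percentage of planned effort.
# The function name and sample hours are hypothetical.
def effort_variance_pct(planned_hours: float, actual_hours: float) -> float:
    """Positive result: effort over plan; negative result: under plan."""
    return (actual_hours - planned_hours) / planned_hours * 100

# Example: 40h planned, 50h actually logged in JIRA -> 25% over plan.
print(effort_variance_pct(40, 50))  # 25.0
```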

The following JQL query may be used in JIRA to help:

project = TEST AND assignee in ("abc", "pqr", "ijk") AND updated >= "2014/06/01" AND updated < "2014/07/01"

The above query lists all issues in project 'TEST' with assignee among 'abc', 'pqr' or 'ijk' which were updated during June 2014.

The query can also be modified as per your needs… :)

Smoke Testing and Sanity Testing

Apart from their similar names, Smoke Testing and Sanity Testing both aim at reducing testing effort, so it is no surprise that they are often confused with each other.

Smoke Testing
Smoke Testing is the very first, most basic testing activity; it verifies whether the application/deliverable is testable at all. I read somewhere that "Smoke Testing is like a general health check-up". It is generally applicable during Integration Testing, System Testing and Acceptance Testing.
Only positive scenarios are validated in Smoke Testing.

Scope – The set of test cases which verify the functionality at a high level. Some basic scenarios may include:
1. The tester is able to access the application/deliverable.
2. The tester is able to navigate through the application.
3. The user is able to interact with the user interface.

Example – The following may be the test cases for a login page:
1. The 'OK' button logs the user in when some data is entered in the user name and password fields.
2. The 'Cancel' button responds when some data is entered in the user name and password fields, and the user remains on the same page.
3. Typing anything in the user name and password fields and pressing 'OK' should change the page; the request should have gone to the database to confirm whether the login is valid or not.
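The login-page checks above can be sketched as automated smoke tests. This is a minimal, self-contained sketch: `LoginPage` and its behaviour are hypothetical stand-ins for a real page object (for example, one backed by Selenium).

```python
# Hypothetical page object standing in for a real UI driver.
class LoginPage:
    def __init__(self):
        self.current_page = "login"

    def submit(self, username, password):
        # Smoke level: any non-empty input should move the flow forward;
        # validity is confirmed later against the database.
        if username and password:
            self.current_page = "home_or_error"

    def cancel(self):
        self.current_page = "login"  # user stays on the same page

def test_ok_button_navigates_on_any_input():
    page = LoginPage()
    page.submit("anything", "anything")
    assert page.current_page != "login"  # request went onward for validation

def test_cancel_keeps_user_on_login_page():
    page = LoginPage()
    page.cancel()
    assert page.current_page == "login"

test_ok_button_navigates_on_any_input()
test_cancel_keeps_user_on_login_page()
print("smoke tests passed")
```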

Advantages of Smoke Testing:
1. Issues that arise due to integration of modules can be found.
2. Issues are found in the early phase of testing.
3. Gives the tester confidence that fixes in previous builds have not broken major features.


Sanity Testing
Sanity Testing is a part of Regression Testing and is performed when there is not enough time for thorough testing. I read somewhere that "Sanity Testing is like a specialized health check-up".
Sanity Testing covers both positive and negative scenarios.

Scope – The set of test cases from the regression test suite which quickly verifies the status of the product after changes to the code, or after a controlled code change in a feature to fix a critical issue. Care must be taken to:
1. Include only critical test cases in the BVT (Build Verification Test).
2. Include only stable test scenarios.
3. Ensure the included test cases provide sufficient coverage of the application.
Example – The following may be the test cases for a login page:
1. The 'OK' button logs the user in when valid data is entered in the user name and password fields.
2. The 'OK' button does not log the user in when invalid data is entered in the user name and password fields.
3. The 'Cancel' button responds when some data is entered in the user name and password fields, and the user remains on the same page.
Some test cases for a text editor may be:
1. Creating a text file.
2. Writing something into the text editor.
3. Copy, cut and paste functionality of the text editor.
4. Opening, saving and deleting a text file.

Advantages of Sanity Testing:
1. Makes sure that developers have not introduced conflicting or duplicate functions or global variable definitions.
2. Helps to identify missing dependent objects.
To conclude, consider purchasing a vehicle – a car or a bike.
We take it for a test ride and check the basic functionality – Smoke Testing
We bring it home, ride it more and check it in detail (mileage etc.) – Sanity Testing

Regression Testing and Retesting

The difference between Regression Testing and Retesting is another of the most frequently asked questions in Software Testing interviews. So, a very straightforward and crisp answer would be –
 
Re-testing is the verification of defect fixes to confirm that the functionalities now work as expected. When a bug is fixed, the test cases which failed because of it are executed again.
 
Regression Testing is performed to verify that changes to the code (bug fixes, enhancements, code cleanup etc.) have not impacted the unchanged or unrelated functionalities of the software/application.
 
For example:

  • Consider an application 'abc' with modules 'a1', 'b1' and 'c1'.
  • Some bug fixes are made to module b1.
  • Retesting – verifying the raised bugs and their fixes in module b1.
  • Regression Testing – testing the areas of a1 and c1 affected by the changes to b1.
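The split above can be sketched as a simple selection rule over a test inventory. All case names and the impact mapping below are hypothetical:

```python
# Hypothetical test inventory for application 'abc' (modules a1, b1, c1).
failed_last_run = {"b1_login_fails"}  # cases that failed against b1 bugs
impact_of_b1 = {"a1_report_totals", "c1_export"}  # areas of a1/c1 touched by b1 changes

all_cases = {"b1_login_fails", "b1_logout", "a1_report_totals", "c1_export", "c1_print"}

# Retesting: re-execute exactly the cases that failed before the fix.
retest_suite = all_cases & failed_last_run

# Regression: cases covering functionality impacted by the change,
# regardless of their earlier pass/fail status.
regression_suite = all_cases & impact_of_b1

print(sorted(retest_suite))      # ['b1_login_fails']
print(sorted(regression_suite))  # ['a1_report_totals', 'c1_export']
```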
 

[Image: Regression Testing – rotten apple analogy. Image Source: MS Word Clip Art]
 
The image above also represents the basic Regression Testing and Retesting concept:
A rotten apple spoils the barrel –
Ensure the rotten one is removed ~ Re-testing
Always go through the rest ~ Regression Testing
The following is a detailed description of the differences between Retesting and Regression Testing.
 

Retesting vs Regression Testing

1. Retesting is performed to make sure that the test cases which failed in the last execution pass once the defects behind those failures are fixed. Regression testing is performed to ensure that changes like defect fixes or enhancements to a module or application have not affected the unchanged parts of the application.
2. Retesting is carried out based on the defect fixes. Regression testing is not carried out on specific defect fixes; it is planned as a specific-area or full regression run.
3. In Retesting, the test cases which failed earlier are included in the test suite. In Regression testing, test cases which cover the impacted functionality of the application are included, irrespective of their passed/failed status in earlier runs.
4. Test cases for Retesting cannot be prepared before testing starts; only the test cases that failed in the prior execution are re-executed. Regression test cases are derived from the functional specification, user manuals, user tutorials, and defect reports related to corrected problems.
5. Automation of retesting scenarios is not recommended. Regression scenarios are the first candidates for test automation.
6. Retesting is performed before Regression testing. Regression testing can be carried out in parallel with Retesting.