“Making mistakes should not
be a mistake”
Testing software or an application is incomplete
without a pinch of humour. Do you know why?
Because it is the most exhausting job of all
(do not tell our team we said this 😜), and
we are trying to keep the spirits of our
Quality Analysis and Testing team high.
The development team pushes the MVP to the next level, handing it to the quality analysis and testing team to find bugs (improper functioning of the code), design discrepancies, and unknown errors raised by ripple effects.
The testing process is guided by set
procedures and the use of standard
documents such as test plans and test cases.
Along with natural human talent (an eye
for finding errors in others' work), our team
uses automated testing tools and
frameworks to test the applications.
There is more to testing than feeding the
input into the system and getting the output.
Quality is not in the eye of the beholder; it has to be in the product itself. Here are our top quality parameters for testing an MVP:
Intended input must produce the expected output.
Must be implemented as per the documentation.
The design and functionality of the application must support the scaling requirement of the business.
The code and use of APIs must be optimized to have a cleaner, efficient, and easy-to-understand environment for all future developments.
The MVP must also behave failure-free in a customer-oriented environment, rather than the team just saying, "It was working just fine on our development systems."
Special attention must be paid to integrity features to check for any security loopholes in the MVP that could lead to misuse of the system.
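The first parameter above, intended input producing the expected output, is what automated checks encode most directly. A minimal sketch of the idea (the `apply_discount` function is a hypothetical unit under test, not part of any real MVP):

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical unit under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100 percent")
    return round(price * (1 - percent / 100), 2)

# Intended input must produce the expected output.
assert apply_discount(200.0, 25) == 150.0

# Invalid input must fail loudly rather than produce a corrupt result.
try:
    apply_discount(200.0, 150)
except ValueError:
    pass
else:
    raise AssertionError("an out-of-range discount should be rejected")
```

Both branches matter: a function that returns the right answer for good input but accepts nonsense silently would fail the first and sixth parameters at once.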
Before beginning with the processes mentioned above, the team has one more thing to do – referring to the SRS document.
Remember the SRS, the bible of application
development. Quality analysts must revisit it
to match the requirements against what the
developer has submitted for testing. The
team must refer to the functional design
requirements in the SRS to understand the
system's expectations and design test scenarios accordingly.
The test scenarios are typically the one-liners that specify “what to test” for specific functionality. On a broader scale, the review of SRS leads to the following:
The SRS review is done in the presence of business analysts and developers to ensure that the quality analysis team is referring to the correct version of the SRS document.
The quality analysis team is left alone with these specifications to carry out further testing tasks, which begins with preparing the strategies for the testing.
Many development teams now use a methodology known as continuous testing. It is part of a DevOps approach – where development and operations collaborate over the entire product life cycle. The aim is to accelerate software delivery while balancing cost, quality and risk. With this testing technique, teams don’t need to wait for the software to be built before testing starts. They can run tests much earlier in the cycle to discover defects sooner, when they are easier to fix.
As an organization, Anuyat believes in defined work protocols and best practices to keep operations running smoothly. Preparing the test strategy is part of this culture.
The Project Manager prepares an Anuyat-wide test strategy to define the application testing techniques to be followed while analyzing the system's quality. It lists the testing objectives and the process for achieving them in a "Test Strategy Template".
Though the SRS document already mentions the main expectations of the application, it still needs to be restated in the strategy document.
The strategy must consider analyzing the User Interface (UI), business logic, databases, reports, data flow, overall performance, hardware & software integration, security aspects, integrity & usability of each function, and roles & rights.
Along with functional testing, the focus shifts largely to the application's performance, load handling, and online security.
For customer experience, cross-browser testing, multilanguage support, stress testing, and Beta testing are also a part of the web application testing process.
The primary concern for testing a mobile app is mobile screen compatibility. Hence, UI testing must be rigorous. It is combined with regression, functional, and security testing.
Multiple questions are answered while strategizing on this parameter: What is the test process? How is defect management handled? What happens if the team receives a change request? What are the activities while executing a test case? Which test management and automation tools will be used? And so on.
Test environments are nothing but a setup of technological tools to test the application. The strategy decides how many different testing environments are needed; at Anuyat, we maintain four testing environments.
For each environment, the strategy must define the access rights, setup configuration & system requirements, test data, and data backup & restoration techniques.
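Such a per-environment strategy is often captured in a machine-readable form. The sketch below is purely illustrative: the environment names, hosts, roles, and fields are assumptions for the example, not Anuyat's actual setup:

```python
# Illustrative only: environment names, hosts, and roles are assumptions.
ENVIRONMENTS = {
    "development": {
        "base_url": "https://dev.example.com",
        "access": ["developers", "qa"],
        "test_data": "synthetic",
        "backup": "none",
    },
    "qa": {
        "base_url": "https://qa.example.com",
        "access": ["qa"],
        "test_data": "anonymized snapshot",
        "backup": "nightly",
    },
    "staging": {
        "base_url": "https://staging.example.com",
        "access": ["qa", "release managers"],
        "test_data": "production-like",
        "backup": "nightly",
    },
    "production": {
        "base_url": "https://www.example.com",
        "access": ["release managers"],
        "test_data": "live",
        "backup": "continuous",
    },
}

def can_access(role: str, env: str) -> bool:
    """Check whether a role is allowed into a given environment."""
    return role in ENVIRONMENTS[env]["access"]

assert can_access("qa", "staging")
assert not can_access("developers", "production")
```

Encoding access rights and data policies this way lets the team enforce them in tooling instead of relying on everyone remembering the strategy document.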
Keeping track of the release version of the application is more important than you might think. In the absence of this information, there will be incidents of wrong application releases being available in different test environments, and one can imagine the chaos.
The release management must define where the team can find the latest application build, where it should be deployed, and where the team will find the build for the production environment.
This plan document also includes information about who is responsible for “go” and “no-go” approvals for the release on the production environment.
While the test strategy is done at the organizational level, test plans are more specific to the application’s requirements. But they are tightly coupled together as the latter is the extension of the former in terms of project specifications, using the pre-defined strategies.
A good test plan has the following characteristics:
It defines the features to be (and not to be) tested, and all the dependencies.
It specifies the reason for the testing: a validation of bug fixes, the addition of new features, or a revamp of the application.
It outlines the aspect being tested, whether security, functionality, usability, reliability, performance, or efficiency.
It lays out well-defined methods used to explore the distinct aspects of the application.
Details of who, when, where, and how are added in the test plan to provide more specific timelines to the team.
Up to this point, a solid foundation has been laid for the testing team. Now they are
fully equipped to proceed with writing test cases for each specific module and function
of the application that needs to be executed.
In simple words, “Time for the real action.”
Test cases are real actionable information that touches the application at the unit level of its functions.
Test cases have a deep and wide scope; they can include anything from explaining the use of a variable to checking that the success message displays after a checkout process.
Since designing test cases is an elaborate process, the best way to cover all aspects of the
application is to divide the test cases based on their purpose.
This is what the categorization will look like:
| Type of Test Case | Details | Step | Expected Result | Status |
| --- | --- | --- | --- | --- |
| Functionality | The phone number field must accept 10 digits | Input more than 10 digits | Must produce an error | Pass or fail |
| Security | The OTP must be sent to the registered number to allow user login | Check if the sent OTP leads to the user account | Correct OTP must open the user account's dashboard; otherwise an error message must be displayed | Pass or fail |
| Usability | Check if all the links on the screens are working | Click the links on the screens | All links must redirect to the correct pages | Pass or fail |
| User Interface | The progress bar must appear after the user pushes the "Submit" button | Enter correct details and click the "Submit" button | The progress bar should appear while the user waits for the information to get submitted | Pass or fail |
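The first table row translates almost mechanically into an automated check. Here, `validate_phone_number` is a hypothetical stand-in for the field's validation logic:

```python
def validate_phone_number(value: str) -> bool:
    """Hypothetical field validator: accept exactly 10 digits."""
    return value.isdigit() and len(value) == 10

# Details: the phone number field must accept 10 digits.
assert validate_phone_number("9876543210") is True

# Step: input more than 10 digits.
# Expected result: must produce an error (here, a rejected result).
assert validate_phone_number("98765432101") is False

# Non-digit characters must also be rejected.
assert validate_phone_number("98765-4321") is False
```

In a real suite each assertion would be a separate test case, so the "Status" column of the table maps one-to-one onto individual pass/fail results in the test runner's report.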
All the test cases are executed in the test environment using Jira as the test management and product management tool. It provides a place to log defects, raise environment issues, and offers a collaboration platform for all the team members.
With the help of team collaboration and automation tools, there is a back-and-forth exchange of bug reports and resolution statuses. The quality analyst must agree to close an issue or bug after it has been resubmitted for testing by the developers. The project is moved to the production environment after the first round of testing, but only once the developers, testers, and client approve the build.
A stable build of the application gradually becomes available for delivery after a few internal testing phases in the development environment. And the QA team has just one more thing to do: create a Test Closure Report (TCR).
This is a formal document prepared by the testing team to
provide a summary of all the tests conducted during
the different testing phases, a detailed analysis of all the
bugs reported and their statuses, and the test cases that were
executed, along with the number & density of bugs found.
Engineers at Work
We recommend you use our mvp.wiki platform to estimate the scope and the cost
associated with it. It is a self-estimation MVP platform with simple interfaces.