By the end of this module, learners will be able to:
Understand the purpose of software testing and its role in SDLC
Differentiate between QA, QC, and testing
Identify various types and levels of testing
Understand key principles and methodologies used in software testing
Apply real-world thinking to software quality challenges
The software development lifecycle (SDLC) is the cost-effective and time-efficient process that development teams use to design and build high-quality software. The goal of SDLC is to minimize project risks through forward planning so that software meets customer expectations during production and beyond. This methodology outlines a series of steps that divide the software development process into tasks you can assign, complete, and measure.
Software development can be challenging to manage due to changing requirements, technology upgrades, and cross-functional collaboration. The software development lifecycle (SDLC) methodology provides a systematic management framework with specific deliverables at every stage of the software development process. As a result, all stakeholders agree on software development goals and requirements upfront and also have a plan to achieve those goals.
Here are some benefits of SDLC:
Increased visibility of the development process for all stakeholders involved
Efficient estimation, planning, and scheduling
Improved risk management and cost estimation
Systematic software delivery and better customer satisfaction
The software development lifecycle (SDLC) outlines several tasks required to build a software application. The development process goes through several stages as developers add new features and fix bugs in the software.
The details of the SDLC process vary for different teams. However, we outline some common SDLC phases below.
The planning phase typically includes tasks like cost-benefit analysis, scheduling, resource estimation, and allocation. The development team collects requirements from several stakeholders such as customers, internal and external experts, and managers to create a software requirement specification document.
The document sets expectations and defines common goals that aid in project planning. The team estimates costs, creates a schedule, and has a detailed plan to achieve their goals.
In the design phase, software engineers analyze requirements and identify the best solutions to create the software. For example, they may consider integrating pre-existing modules, making technology choices, and identifying development tools. They will look at how to best integrate the new software into any existing IT infrastructure the organization may have.
In the implementation phase, the development team codes the product. They analyze the requirements to identify smaller coding tasks they can do daily to achieve the final result.
The development team combines automation and manual testing to check the software for bugs. Quality analysis includes testing the software for errors and checking if it meets customer requirements. Because many teams immediately test the code they write, the testing phase often runs parallel to the development phase.
When teams develop software, they code and test on a different copy of the software from the one that users have access to. The copy that customers use is called production, while the other copies are said to be in the build or testing environment.
Having separate build and production environments ensures that customers can continue to use the software even while it is being changed or upgraded. The deployment phase includes several tasks to move the latest build copy to the production environment, such as packaging, environment configuration, and installation.
In the maintenance phase, among other tasks, the team fixes bugs, resolves customer issues, and manages software changes. In addition, the team monitors overall system performance, security, and user experience to identify new ways to improve the existing software.
A software development lifecycle (SDLC) model conceptually presents SDLC in an organized fashion to help organizations implement it. Different models arrange the SDLC phases in varying chronological order to optimize the development cycle. We look at some popular SDLC models below.
The waterfall model arranges all the phases sequentially so that each new phase depends on the outcome of the previous phase. Conceptually, the design flows from one phase down to the next, like that of a waterfall.
The waterfall model provides discipline to project management and gives a tangible output at the end of each phase. However, there is little room for change once a phase is considered complete, as changes can affect the software's delivery time, cost, and quality. Therefore, the model is most suitable for small software development projects, where tasks are easy to arrange and manage and requirements can be pre-defined accurately.
The iterative process suggests that teams begin software development with a small subset of requirements. Then, they iteratively enhance versions over time until the complete software is ready for production. The team produces a new software version at the end of each iteration.
This approach makes it easy to identify and manage risks, as requirements can change between iterations. However, repeated cycles could lead to scope creep and underestimation of resources.
The spiral model combines the iterative model's small repeated cycles with the waterfall model's linear sequential flow to prioritize risk analysis. You can use the spiral model to ensure software's gradual release and improvement by building prototypes at each phase.
The spiral model is suitable for large and complex projects that require frequent changes. However, it can be expensive for smaller projects with a limited scope.
The agile model arranges the SDLC phases into several development cycles. The team iterates through the phases rapidly, delivering only small, incremental software changes in each cycle. They continuously evaluate requirements, plans, and results so that they can respond quickly to change. The agile model is both iterative and incremental, which lets teams adapt to changing requirements more readily than sequential models like waterfall.
Rapid development cycles help teams identify and address issues in complex projects early on, before they become significant problems. Teams can also engage customers and stakeholders to obtain feedback throughout the project lifecycle. However, overreliance on customer feedback could lead to excessive scope changes or derail the project midway.
In traditional software development, security testing was a separate process from the software development lifecycle (SDLC). The security team discovered security flaws only after they had built the software. This led to a high number of bugs that remained hidden as well as increased security risks.
Today, most teams recognize that security is an integral part of the software development lifecycle. You can address security in the SDLC by following DevSecOps practices and conducting security assessments throughout the entire SDLC process.
DevSecOps is the practice of integrating security testing at every stage of the software development process. It includes tools and processes that encourage collaboration between developers, security specialists, and operation teams to build software that can withstand modern threats. In addition, it ensures that security assurance activities such as code review, architecture analysis, and penetration testing are integral to development efforts.
Unit tests are very low level and close to the source of an application. They consist of testing individual methods and functions of the classes, components, or modules used by your software. Unit tests are generally quite cheap to automate and can be run very quickly by a continuous integration server.
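As a minimal sketch of what this looks like in practice, here is a unit test written in Jest syntax, assuming a trivial `add` function invented purely for illustration:

```typescript
// add.ts: the unit under test (a trivial function invented for illustration)
export function add(a: number, b: number): number {
  return a + b;
}

// add.test.ts: each test exercises one function in isolation (Jest syntax)
import { add } from './add';

test('add returns the sum of two numbers', () => {
  expect(add(2, 3)).toBe(5);
});

test('add handles negative numbers', () => {
  expect(add(-2, 3)).toBe(1);
});
```

Because tests like these need no database, browser, or network, a continuous integration server can run thousands of them in seconds.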
Integration tests verify that different modules or services used by your application work well together. For example, they can test the interaction with the database or make sure that microservices work together as expected. These types of tests are more expensive to run, as they require multiple parts of the application to be up and running.
Functional tests focus on the business requirements of an application. They only verify the output of an action and do not check the intermediate states of the system when performing that action.
There is sometimes confusion between integration tests and functional tests, as they both require multiple components to interact with each other. The difference is that an integration test may simply verify that you can query the database, while a functional test would expect to get a specific value from the database, as defined by the product requirements.
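To make the distinction concrete, here is a hedged sketch in Jest style. The `db` client is hypothetical (substitute your project's data access layer): the integration test only checks that the query works at all, while the functional test asserts the specific value the product requires.

```typescript
// Both tests need the application and a real database to be up and running.
// The `db` client is hypothetical; substitute your project's data access layer.
import { db } from './testDatabase';

// Integration test: proves the pieces talk to each other at all.
test('the application can query the users table', async () => {
  const rows = await db.query('SELECT name FROM users WHERE id = 1');
  expect(rows.length).toBeGreaterThan(0); // any result means the integration works
});

// Functional test: proves the business requirement about the result.
test('user 1 is the expected account', async () => {
  const rows = await db.query('SELECT name FROM users WHERE id = 1');
  expect(rows[0].name).toBe('user1'); // exact value defined by product requirements
});
```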
End-to-end testing replicates user behavior with the software in a complete application environment. It verifies that various user flows work as expected. These flows can be as simple as loading a web page or logging in, or they can be much more complex scenarios that verify email notifications, online payments, and so on.
End-to-end tests are very useful, but they're expensive to perform and can be hard to maintain when they're automated. It is recommended to have a few key end-to-end tests and to rely more on lower-level types of testing (unit and integration tests) to quickly identify breaking changes.
Acceptance tests are formal tests that verify if a system satisfies business requirements. They require the entire application to be running while testing and focus on replicating user behaviors. But they can also go further and measure the performance of the system and reject changes if certain goals are not met.
Performance tests evaluate how a system performs under a particular workload. These tests help to measure the reliability, speed, scalability, and responsiveness of an application. For instance, a performance test can observe response times when executing a high number of requests, or determine how a system behaves with a significant amount of data. It can determine if an application meets performance requirements, locate bottlenecks, measure stability during peak traffic, and more.
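As a toy illustration only (real performance testing uses dedicated load-testing tools, which this course does not prescribe), the sketch below fires a batch of concurrent requests at a placeholder URL and reports the average and worst-case response times. It assumes Node 18+ for the global `fetch` and `performance` APIs:

```typescript
// A naive performance probe: send REQUESTS concurrent requests, measure latency.
// Assumes Node 18+ (global fetch and performance). The URL is a placeholder.
const TARGET_URL = 'https://example.com/login';
const REQUESTS = 50;

async function timedRequest(): Promise<number> {
  const start = performance.now();
  await fetch(TARGET_URL);
  return performance.now() - start;
}

async function main(): Promise<void> {
  const latencies = await Promise.all(
    Array.from({ length: REQUESTS }, () => timedRequest())
  );
  const average = latencies.reduce((sum, ms) => sum + ms, 0) / latencies.length;
  const worst = Math.max(...latencies);
  console.log(`average: ${average.toFixed(1)} ms, worst: ${worst.toFixed(1)} ms`);
}

main().catch(console.error);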
Smoke tests are basic tests that check the core functionality of an application. They are meant to be quick to execute, and their goal is to give you assurance that the major features of your system are working as expected.
Smoke tests can be useful right after a new build is made, to decide whether or not you can run more expensive tests, or right after a deployment, to make sure that the application is running properly in the newly deployed environment.
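Here is a minimal sketch of a post-deployment smoke check, assuming Node 18+ and placeholder endpoint paths (`/`, `/login`, and `/api/health` are illustrative, not part of any example app in this course):

```typescript
// Smoke test: quick checks that the application's major entry points respond.
// Assumes Node 18+; the base URL and paths are placeholders for your environment.
const BASE_URL = 'https://example.com';
const CRITICAL_PATHS = ['/', '/login', '/api/health'];

async function smokeTest(): Promise<void> {
  for (const path of CRITICAL_PATHS) {
    const res = await fetch(BASE_URL + path);
    if (!res.ok) {
      throw new Error(`Smoke test failed: ${path} returned ${res.status}`);
    }
    console.log(`OK ${path} (${res.status})`);
  }
}

smokeTest().catch((err) => {
  console.error(err.message);
  process.exit(1); // fail fast so the pipeline skips the more expensive test suites
});
```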
By the end of this module, learners will be able to:
Understand the role and importance of manual testing
Create effective test scenarios and test cases
Use real-world techniques to design and organize test cases
Apply boundary value, equivalence partitioning, and exploratory testing principles
URL: https://example.com/login
| Test Case ID | Test Case Description | Steps to Execute | Test Data | Expected Result | Type |
|---|---|---|---|---|---|
| TC_01_01 | Verify login with valid credentials | 1. Open login page 2. Enter valid username/password 3. Click Login | username: user1, password: pass123 | Redirect to dashboard/home page | Positive |
| TC_01_02 | Verify error message with invalid credentials | 1. Enter wrong username/password 2. Click Login | username: user1, password: wrongpass | Display “Invalid credentials” message | Negative |
| TC_01_03 | Verify login with empty fields | 1. Leave username and password blank 2. Click Login | — | Prompt validation messages like “Username is required” | Negative |
| TC_01_04 | Check password is hidden when typed | 1. Type a password into the password field | password: 123456 | Password should be masked (••••••) | UI |
| TC_01_05 | Test 'Remember Me' checkbox functionality | 1. Enter credentials 2. Tick 'Remember Me' 3. Login and logout 4. Revisit login page | username: user2 | Credentials should be remembered | Functional |
A user story is a short, simple description of a feature told from the perspective of the person who desires the new capability—usually a user or customer of the system. It’s commonly used in Agile software development to define work items.
The standard format is:
As a <type of user>, I want <some goal>, so that <some reason/value>.
| Component | Description | Example |
|---|---|---|
| Role | Who wants the feature? | As a registered user |
| Goal | What do they want to do? | I want to reset my password |
| Benefit | Why is it important? | So that I can regain access to my account |
Understanding how to identify, report, and manage defects is one of the most critical skills in software testing. In this module, we’ll explore the concept of software defects, the complete defect lifecycle, and best practices for writing effective bug reports. You’ll also learn about tools like JIRA and key concepts like severity vs. priority.
A software defect is a deviation from the expected behavior of a software application. It occurs when the software doesn’t meet the specified requirements or produces incorrect results. Defects can be introduced during any phase of development—requirements gathering, design, coding, or testing.
By the end of this module, learners will be able to:
Define functional, integration, and regression testing
Understand the differences and relationships among them
Identify where and how each type is applied in the SDLC
Apply best practices in designing test cases for each type
Create a basic regression plan based on a software change
By the end of this module, learners will be able to:
Understand what test automation is and its role in software testing
Identify what should and shouldn’t be automated
Explore benefits and challenges of automation
Differentiate between manual and automated testing
Understand test pyramid structure and commonly used tools
Write and explain a simple automation script using Cypress
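To ground that last objective, here is a minimal sketch of a Cypress script automating test case TC_01_01 from the login table above. The URL comes from that table; the element selectors (`#username`, `#password`, the submit button) and the `/dashboard` path are assumptions about the page's markup, not values defined by this course:

```typescript
// login.cy.ts: a simple Cypress automation of TC_01_01 (valid login)
describe('Login page', () => {
  it('redirects to the dashboard with valid credentials', () => {
    cy.visit('https://example.com/login');    // 1. Open login page
    cy.get('#username').type('user1');        // 2. Enter valid username
    cy.get('#password').type('pass123');      //    and password
    cy.get('button[type="submit"]').click();  // 3. Click Login
    cy.url().should('include', '/dashboard'); // Expected: redirect to dashboard
  });
});
```

Note how the script mirrors the manual test case step for step: each `cy` command corresponds to one entry in the Steps to Execute column, and the final assertion corresponds to the Expected Result.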