
When should QA be testing during a sprint? (Agile)

Software Quality Assurance & Testing Asked by Mercfh on January 6, 2022

If you are getting builds on alternate days, then once the sprint has started you can begin preparing test cases for the assigned user stories.

Once a build is deployed you can start executing those test cases, and if the build changes you can re-run your test suites.

Whenever a new build comes out, you should execute the sanity subset of your test suite.

6 Answers

My idea is pretty simple: prepare a regression automation suite, set it up in the CI/CD pipeline, and add it as a post-build action.

That way it runs on every new deployment and covers the regression and sanity checks of the application.

Your focus during the sprint should be automating repetitive tasks and pushing that automation into the CI/CD pipeline daily.

If automation for some test cases is taking too long, it is better to run them manually as a first round and prioritize as needed.
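A minimal sketch of that post-build action in plain Python, independent of any particular CI tool (the function names and the sanity-before-regression ordering are my own illustration, not from the answer):

```python
# Hypothetical post-build hook: on every deployment, run the quick sanity
# suite first, and only run the slower regression suite when sanity is green.

def run_suite(tests):
    """Run each (name, test) pair; return the names of the ones that failed."""
    failures = []
    for name, test in tests:
        try:
            test()
        except AssertionError:
            failures.append(name)
    return failures

def post_build_action(sanity_tests, regression_tests):
    """Sanity always runs; regression runs only when sanity found nothing."""
    result = {"sanity_failures": run_suite(sanity_tests),
              "regression_failures": None}
    if not result["sanity_failures"]:
        result["regression_failures"] = run_suite(regression_tests)
    return result

# Illustrative suites: both checks pass, so regression runs after sanity.
sanity = [("login_loads", lambda: None)]
regression = [("report_totals", lambda: None)]
summary = post_build_action(sanity, regression)
# summary == {"sanity_failures": [], "regression_failures": []}
```

The point of the ordering is cheap feedback: a broken sanity check short-circuits the expensive regression run, so the team hears about a bad build within minutes of deployment.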

Answered by Hemant Varhekar on January 6, 2022

Keep it simple!

Test throughout the sprint! Yes, this means deployment throughout the sprint!

But how? Developers should work ahead. They will only be able to work ahead if the most ignored Agile rule is properly applied: under-estimate, and take on less work per developer than can actually be done in the sprint cycle.

Here is a full article I wrote out of my own struggle and how I triumphed! I solved the Agile testing bottleneck problem!

https://medium.com/@salibsamer/i-solved-scrum-sprint-end-testing-bottleneck-problem-bfd6222284a1

Answered by Samer on January 6, 2022

As shown in the other answers and comments this is a common issue that I've seen in several companies that I've worked in. Thinking it through, I suspect most companies struggle with the generic issue of allowing enough time for QA, testing, and automation once the feature is complete.

Generally, people may feel there is no clear guidance in Agile as to how to address this.

I would address this in two ways:

1) Testing happens before, during and after dev work. For example, if you practice BDD and write a failing test before the app code then you will be one step closer to your goal of keeping up.

2) A little discipline may be needed to allow more time for QA. For example, it's easy to say 'we will change to a process where dev works for a week and then QA has a week to test'. In reality, the work is usually not done in the first week; it overflows into the second week, leading to the same situation again. Try to address this with formal scheduled turnover and mileposts, for instance a calendar reminder: "It's Friday, 3 pm. Is your code ready for testing?" You will also need to consider what dev would do for a week if no changes are allowed; sitting idle for a week isn't going to work. This is a hard problem, helped by thoroughly exploring the issue and its factors, and by input from more senior folks who have the experience to see the bigger picture and what would work best for the situation at hand.
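To make point 1 concrete, here is a minimal test-first (red/green) sketch in Python; the `discount_price` function and its pricing rule are invented purely for illustration:

```python
# Step 1: write the test before the app code exists. Running it at this
# point fails (red), which is the point: the test defines "done" up front.

def test_discount_price():
    # Acceptance criterion: orders of 100 or more get 10% off.
    assert discount_price(200) == 180
    assert discount_price(50) == 50

# Step 2: write just enough application code to make the test pass (green).
def discount_price(amount):
    return amount * 0.9 if amount >= 100 else amount

test_discount_price()  # now passes; refactor freely with this safety net
```

With the failing test written during (or before) dev work, QA's verification is no longer a separate phase bolted on at the end of the sprint.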

In conclusion: you need to have detailed and difficult conversations with all the stakeholders in the development process, in an open and caring environment that encourages all points of view, in a non-threatening, fun workplace where mistakes are simply how people learn to do the right thing. In other words, A Good Culture.

Answered by Michael Durrant on January 6, 2022

Testing of a feature being created in the sprint can only begin once the developer has built it to some extent. Meanwhile, while the developer is busy with the feature, QA should start working on the test plan and test cases based on the feature specification document or the user stories. If the QA team is automating test cases with a BDD tool such as Cucumber, they should start writing the Cucumber scenarios for those test cases to save time. QA should stay in continuous touch with the developers so that they receive at least a piece of the feature as soon as it has been developed.

Once a developed module is received, QA has ample work: first do a sanity check of the module, quickly log any issues found in a bug-tracking tool, and communicate with the developer about them. In parallel, automate the test cases. This cycle needs to move quickly so that each module is tested and delivered without bugs on or before the sprint end date. In other words, QA's work starts as soon as the feature specs or user story arrive in the sprint, and actual testing can start as soon as the developer delivers some module of the feature.
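As a rough illustration of writing executable checks before the feature is complete, here is a Cucumber-style Given/When/Then scenario sketched in plain Python (with real Cucumber the scenario text would live in a `.feature` file and map to step definitions; the shopping-cart example is hypothetical):

```python
# Scenario: Given a cart with 2 items, When one is removed, Then 1 remains.
# Each step is a small function QA can write from the user story alone,
# before the developer has delivered the real cart module.

def given_cart_with_items(n):
    return {"items": n}

def when_item_removed(cart):
    cart["items"] -= 1
    return cart

def then_cart_has(cart, expected):
    assert cart["items"] == expected

cart = given_cart_with_items(2)
cart = when_item_removed(cart)
then_cart_has(cart, 1)
```

When the real module arrives, only the step bodies need to be rewired to call it; the scenarios themselves are already done.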

Answered by Sarabjit Singh on January 6, 2022

My team struggles with a similar issue: multiple input streams, running on different iteration/sprint cycles, feeding into a common product.

We tried testing in each team's dev integration area for a while and marking items done at that point, but we quickly discovered that was too early in the process. We could verify that new functionality was working, but we couldn't test at the integration level, which is where most of our defects actually occur. So we moved our definition of 'done' later, into the next cycle. I guess you could call that a hardening sprint, since it comes after the initial sprint where the dev work occurred, though we call it the 'QA Offset'. Our management team really wanted 'testing' to be 'done' in the same window as dev, but that just wasn't practical for the type of system we are testing. We have been adding different layers of automation to help us get to done earlier, but on a legacy product that can be challenging.

So to answer your original question: we generally monitor the build, and whenever there is a large enough quantity of items in it, we grab them and start testing. Since we build daily, it works out to about every other day that we restart testing, covering the new functionality plus a mini-smoke to verify that the older items continue to work.
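That "large enough quantity" trigger is easy to make explicit; the threshold value and item counts below are made-up examples, not from the answer:

```python
def should_restart_testing(new_items, threshold=5):
    """Decide whether to pull the latest build for a test pass.

    new_items: count of dev-complete items sitting in the build since the
    last test pass; threshold: team-chosen batch size (hypothetical value).
    """
    return new_items >= threshold

# With daily builds and a threshold tuned to roughly two days of dev output,
# this reproduces the "about every other day" cadence described above.
```

Making the threshold an explicit, agreed number also gives the team something concrete to tune in retrospectives.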

Answered by QA Prescott on January 6, 2022

Define a definition of done that includes testing, and define the minimal testing effort needed to get the work done.

  • Hold a time-boxed exploratory testing session for each story, just after coding is done or even during the coding sessions; pair with developers to test their work
  • Keep a good balance of UI, service, and unit tests; read about the test pyramid
  • Continuous integration matters so that the full product is built on each check-in; then you can test, because the product works. Working software is the primary measure of progress.
  • Start each PBI with a Three Amigos session, and think about how you can start testing work in parallel with coding
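The test-pyramid point in the list above boils down to a simple invariant over the suite's size at each layer; the counts here are purely illustrative:

```python
def is_pyramid_shaped(unit, service, ui):
    """True when the suite leans the right way: many fast unit tests,
    fewer service tests, and the fewest slow, brittle UI tests."""
    return unit > service > ui > 0

# Illustrative suite: 400 unit, 80 service, 15 UI tests.
assert is_pyramid_shaped(400, 80, 15)
assert not is_pyramid_shaped(50, 80, 200)  # inverted pyramid: too UI-heavy
```

An inverted pyramid is a common reason teams can't finish testing inside the sprint: the slow layers dominate the feedback loop.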

Focus on automating most if not all of the test cases, since you won't have time to do a full manual regression each iteration. Quality should be built into the product and the cycle.

Keep in mind that Agile does not prescribe an official testing method. Being Agile means doing whatever is needed to get the work done, iteration after iteration: if it works, keep doing it; if you fail, adapt. The XP practices are the closest thing to a best practice for Agile teams, and they include testing.

Suggested reading: the Agile Testing book.

Answered by Niels van Reijmersdal on January 6, 2022
