Hi! My name is Tanya Grigorkevich, and I am a QA Engineer at RingCentral Bulgaria. 

Everyone has been in a situation where something needs to be done both within a short time frame and to the highest possible standard. This is a common challenge in the software development industry, particularly for software testing engineers.

There are many reasons this might happen. The most common one is when development runs behind schedule, leaving less time for testing. Other reasons could include sudden changes in documentation, insufficient resources, poor planning, or inadequate preparation for testing. Sometimes, a critical bug is discovered that must be fixed and verified immediately, leaving little time for testing, even when the scope is extensive.

That’s why I decided to break down some best practices that will help you do a great job even when your time is limited. Experienced, battle-hardened testers may not find anything surprising in this article. For younger specialists, however, it should be genuinely helpful: it will help you figure out your actions step by step, without panic.

Most of my experience has been within modern development methodologies such as Agile. In these methodologies, quick and efficient testing is vital for rapid iteration delivery and is a frequent challenge for testers. Early in my career, as a junior specialist, I found this frustrating and often overworked myself. Over time, I gained a better understanding of products, processes, risks, and testing strategies. This knowledge helped me assess situations more effectively, approach testing processes wisely, and reduce overall stress.

It’s OK to avoid rushed testing

First of all, let’s acknowledge that there are situations where testing under tight deadlines should simply be avoided. It’s OK to admit this!

Action plan 

Now, I’d like to share how, based on my experience, testing can be managed under tight deadlines — what can be skipped in such situations and what actions should be prioritized.

I’ve put all this information together into a mini-plan to help you approach the process in a structured way. The plan consists of several stages, each outlining a list of actions. I’ll also propose different ways to execute this plan that you may want to consider.

Execution of the plan depends on:

  • The methodology;
  • Resources (time, number of people, budget, technologies, etc.);
  • The project and its objectives;
  • The experience and knowledge of the testers;
  • The purpose of testing (levels and types of testing).

It’s also important to understand that not all stages may be applicable, as some projects might not require every stage. However, if a stage is missing in your process, it might be worth considering and trying to implement it to enhance the quality of testing.

Stage one: early testing 

The first stage is preparation for testing and planning, or, as I call it, early testing.

This stage is often neglected. There’s no need to sit and wait for the developer to deliver the implemented functionality.

Testing should always start as early as possible! It can begin as soon as a new functionality idea is proposed or discussed, during the mockup phase, or when requirements are first drafted. Early testing significantly helps me speed up the process. Even if development hasn’t started yet, you can still study the product areas that might be involved and familiarize yourself with new technologies or tools that could be helpful during testing. A good tester will never sit idle waiting for the perfect moment – they will always find something to work on before development begins!

Most of the effort at this stage goes into communication, which can take a considerable amount of time.

At this stage, you can also start preparing test data or the testing environment.
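
To make this more concrete, here is a minimal sketch of what early test data preparation might look like, assuming a Python/pytest setup; the fixture name and the user fields are purely illustrative and not tied to any particular product.

```python
# conftest.py — a sketch of preparing test data before development is finished.
# Assumes pytest is installed; the user fields below are illustrative only.
import uuid

import pytest


@pytest.fixture
def new_user():
    """Generate a unique, throwaway user so tests never collide on shared data."""
    suffix = uuid.uuid4().hex[:8]
    return {
        "id": str(uuid.uuid4()),
        "email": f"qa+{suffix}@example.com",
        "role": "member",
    }
```

Preparing fixtures like this while development is still in progress means that, when the build finally arrives, your limited time goes into actual checks rather than into setup.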

Stage two: preparing a plan 

Effective testing begins with a well-thought-out plan. Unfortunately, some testers postpone this stage until the functional testing phase. It’s important to define the priority areas for testing based on business requirements and on the areas where errors are most likely.

The plan should include the types and levels of testing, as well as resource allocation (number of people, time for testing, and scope of work). The plan can be formal or informal and doesn’t necessarily need to be submitted for reporting. You can define it for yourself and jot it down briefly — I often use Google Sheets for this purpose.
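
Purely as an illustration, the same informal plan could be kept as a small structured list; the areas, priorities, and estimates below are made up and would come from your own requirements.

```python
# A sketch of an informal test plan kept as simple data (all values are illustrative).
test_plan = [
    {"area": "Login and session handling", "priority": "P1", "type": "functional",       "estimate_h": 4},
    {"area": "Billing calculations",       "priority": "P1", "type": "functional + API", "estimate_h": 6},
    {"area": "Notification settings",      "priority": "P2", "type": "functional",       "estimate_h": 2},
    {"area": "UI polish and tooltips",     "priority": "P3", "type": "exploratory",      "estimate_h": 1},
]

# Sorting by priority gives a ready-made execution order when time starts running out.
for item in sorted(test_plan, key=lambda t: t["priority"]):
    print(f'{item["priority"]}  {item["area"]}  (~{item["estimate_h"]} h)')
```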


However, I’ve also worked on projects where a formal test plan had to be created and presented to management. You can use applications like Xmind or Freeform for this task. Here are a couple of examples:

Stage three: testing 

This is the main stage, which includes the testing process itself.
This stage also calls for analytical, monitoring, and reporting skills, as well as the ability to delegate tasks.

Stage four: automation 

Automation is the key to speeding up the testing process. It can begin even before manual testing or run in parallel with it. If automation is well established in the project, with a clear purpose, a defined process, and sufficient automated test coverage, it can significantly accelerate testing, aid in bug detection, provide a better understanding of product quality, and reduce the risk of human error.
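
As a minimal, hypothetical example of the kind of automation that pays off under deadline pressure, here is a smoke-level check in pytest; the base URL and endpoint are placeholders, and it assumes the requests library is available.

```python
# test_smoke.py — a fast automated check to run before any deeper manual testing.
# Assumes pytest and requests; the URL and endpoint below are placeholders.
import pytest
import requests

BASE_URL = "https://staging.example.com"  # hypothetical test environment


@pytest.mark.smoke  # register the "smoke" marker in pytest.ini to avoid warnings
def test_service_is_up():
    """If this fails, there is no point starting the full manual test scope."""
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200
```

Even a handful of checks like this, run on every build, tells you within seconds whether the build is worth testing at all.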

Applying this strategy in practice

Now let’s look at an example of how this plan can work in different scenarios.

Scenario 1: How to conduct Acceptance Testing under tight deadlines.

Imagine a team of 18–25 testers. Using tags in the tests, one tester selects the necessary tests for the release. The resulting test scope is evaluated based on the established deadlines, and test priorities are set. Based on the scope, the number of testers needed for the task is determined. Then, Acceptance Testing is conducted.
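
If part of the acceptance suite lives in code rather than in a test management tool, the same tag-based selection might look like the pytest sketch below; the marker names release and p1 are illustrative, not a convention from any particular project.

```python
# Markers let one tester assemble a release scope quickly.
# Marker names are illustrative; register them in pytest.ini to avoid warnings.
import pytest


@pytest.mark.release
@pytest.mark.p1
def test_user_can_log_in():
    ...


@pytest.mark.p3
def test_profile_avatar_tooltip():
    ...
```

Running only the release-critical subset is then a single command, for example `pytest -m "release and p1"`.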

The result of this testing is a report containing information about:

  • Bugs found;
  • The number of passed and failed test cases;
  • An evaluation and conclusion from the tester on whether the product or functionality can be released.

It’s possible that the planned test scope may not be fully completed. Common reasons for this include:

  • Incorrect estimation of the number of testers needed;
  • Time consumed by reproducing bugs, analyzing their causes, and preparing reports;
  • Poor test selection (e.g., too many tests, irrelevant tests, or the person selecting the tests being unfamiliar with the product or functionality);
  • Tests being overly complex or time-consuming.

In situations like these, here’s what you can do:

  • Reevaluate the remaining tests. Remove low-priority tests from the scope if they are not essential to the task at hand.
  • Redistribute testers. Ideally, if more than one tester is involved in regression testing, avoid assigning tests that the engineer has recently or frequently executed. Familiarity can lead to oversights as they might perform the tests on autopilot. However, in extreme cases with tight deadlines, allowing engineers to focus on familiar tests can speed up the process.
  • Calculate the functional coverage. If the coverage is around 95% or higher, testing can be considered sufficient – provided there are no critical bugs. The acceptable coverage percentage may vary depending on project requirements and the criticality of the tested functionality.
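
The coverage check in the last point is simple arithmetic. Here is a small sketch with made-up numbers, reading coverage as the share of the planned scope that was actually executed.

```python
# Functional coverage of a test run (numbers are illustrative).
# "Executed" means test cases actually run, whether they passed or failed.
planned_tests = 120
executed_tests = 115

coverage = executed_tests / planned_tests * 100
print(f"Functional coverage: {coverage:.1f}%")  # -> Functional coverage: 95.8%

# Around 95% or more of the planned scope executed, with no critical bugs open,
# is usually enough to call the run sufficient for the release decision.
```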

Scenario 2: How to test new functionality under tight deadlines.

Suppose the team consists of 1–3 testers. During the planning phase, the test scope is defined based on requirements. Each check and test is assigned a priority. Then, the workload for each tester is assessed and matched with the release deadlines, often relying on prior experience.
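
As a rough illustration of that workload-versus-deadline check, here is a back-of-the-envelope sketch; every number in it is made up.

```python
# Does the planned scope fit into the time left before the release?
# All numbers below are illustrative.
estimated_hours = {"P1 checks": 20, "P2 checks": 14, "P3 checks": 8}
testers = 2
focus_hours_per_day = 6      # realistic testing time per person, not a full workday
days_until_release = 3

required = sum(estimated_hours.values())                        # 42 hours
available = testers * focus_hours_per_day * days_until_release  # 36 hours

print(f"Required: {required} h, available: {available} h")
if required > available:
    print("The scope does not fit: escalate early, ask for help, or cut low-priority checks.")
```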

When it’s clear that the workload doesn’t align with the available time, here are some steps that can help maintain testing quality:

  1. Notify your lead and team. Share the situation openly to ensure everyone is on the same page.
  2. Ask for help from other teams. If there are other teams working on the same product, request assistance from colleagues.
  3. Start with high-priority checks. This ensures the core functionality works and no critical bugs are present. After completing high-priority checks, testing can be deemed sufficient but not complete. You can continue testing after the release to address remaining cases.
    • If issues are found post-release, log them as bugs or tasks for improvement and include them in upcoming releases based on their importance and urgency.

Final tips 

When facing the challenge of testing functionality under tight deadlines, it’s crucial to remain proactive and goal-oriented.

Here are some tips:

  1. Stay calm and focused. Don’t panic. Recognize that this is a common issue, and many testers encounter similar situations.
  2. Set priorities. Identify the key areas to test first. These should be the most critical functions for the business and users, as well as areas most prone to risk.
  3. Communicate with your team. Explain the situation if needed. They might assist with automation or provide additional resources.
  4. Leverage tools. Use all available automation tools, including test management and bug-tracking tools, to streamline and simplify the process.
  5. Provide frequent status updates. Keep everyone informed about progress and discovered issues, enabling developers to start fixes sooner.
  6. Accept and think critically. Be prepared for the possibility that not all tests can be completed. Accepting a certain level of risk may be necessary.
  7. Engage in two-way feedback. Despite the tight deadlines, your feedback is valuable. If you encounter issues or have ideas for improving the process, share them.

Remember, your goal is to ensure the functionality and quality of the product, regardless of the time constraints. Effective prioritization, clear communication, and continuous feedback are key components of successful testing under tight deadlines.

Originally published Mar 25, 2025