What proportion of different types of software testing does your team use?
You’re asking a unicorn question.
Because you’ve skipped over the point of all that work. It doesn’t matter what proportion of different types of testing you do, what your organisation is, or what technologies you use, if you don’t know what information your stakeholders want to discover from your testing.
That’s it. That’s all that’s important. And if you don’t start from that, you’re doomed no matter what you do. You’ll probably provide some useful information, but it’ll be by accident rather than by design.
Find out what your stakeholders, what your product owners, what your internal consumers, what your developers want to know. What issues are important to them, not me, not Sam, not anyone else on the planet. What information you can provide to them that will cause them to make different decisions about what you are all building.
Testing does not make anything better in and of itself. It simply provides useful information to people who will make better things, and make things better. If you don’t know what kinds of issues most concern them, what kinds of errors would represent the greatest threat to the value of your product – then you can’t choose your mix of types of testing to give you the best chance of finding those kinds of issues. There isn’t a best practice for that, other than: do what works, iterate, keep thinking, keep talking to your stakeholders. None of which has numbers attached.
I’m afraid you’re starting from the wrong end. I agree with user246, nobody here can give you those answers. Except you.
I cannot tell whether you are looking for studies advocating a certain resource mix or just studies describing experiences with a certain resource mix. Aside from what I would find in a Google search, I do not know of any such studies.
In one of my previous jobs, there were several very talented developers on the QA staff. They tended to concentrate on things that were both hard to test manually and unlikely to be tested during development: for example, stress testing. We did not automate any of our UI testing because it changed so often that we did not believe automation was worth the investment. Developers sometimes wrote API-level unit tests but often did not. At that company, I believe our resource mix was closer to the current mix at your company.
In your question, you say you want to shift a majority of your testing resources from manual testing to automated testing. The right mix for your organization depends on considerations that no one in this forum can measure for you: for example, the skill set in your organization, the interest level in your organization, and the maintenance costs for your automation (something you did not mention in your question). I think you described a reasonable goal, but I advise you to approach it incrementally, beginning with what you judge to be easiest and/or most valuable. I believe this will improve your chances of success. It will also give you the opportunity to make mid-course corrections as problems and opportunities arise.
I’ll take a stab at an answer. Some of it I think doesn’t belong and some of it I think needs to be split up into a couple of questions:
What percentage of my testing should be 1) exploratory testing, 2) end-to-end testing, 3) integration testing?
The answer to this question, unfortunately, depends a lot on the product itself; it should be determined during the product planning phase and captured in a test plan.
A completely separate question is – What percentage of my tests should be automated?
The answer to that one is a little more straightforward. If automating a test case will get you better ROI than manual testing, then automate it. This means that if developing the automated test, maintaining it, and executing it will take less time than the total time to run the manual test over the product’s life cycle, it makes sense to automate it. Other factors that might come into play are confidence in automated versus manual tests. Humans can make errors: skipping steps, marking tests as passed even if they don’t really pass, interpreting instructions incorrectly, and so on. Similarly, automation can have bugs, and an automated test that you think is testing one thing could be completely skipping important validations.
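The ROI comparison above can be sketched as a simple break-even calculation. This is a rough illustration only; the hour figures and the `automation_pays_off` helper are assumptions for the example, not numbers from the thread.

```python
# Rough break-even sketch for the automate-vs-manual ROI comparison above.
# All hour figures are illustrative assumptions, not measured data.

def automation_pays_off(dev_hours, maint_hours_per_run, auto_run_hours,
                        manual_run_hours, expected_runs):
    """Return True if automating a test case costs fewer total hours than
    running it manually over the expected number of runs."""
    automated_cost = dev_hours + expected_runs * (maint_hours_per_run + auto_run_hours)
    manual_cost = expected_runs * manual_run_hours
    return automated_cost < manual_cost

# A regression case run every sprint for two years (~50 runs):
print(automation_pays_off(dev_hours=16, maint_hours_per_run=0.2,
                          auto_run_hours=0.1, manual_run_hours=1.0,
                          expected_runs=50))  # → True: 31 hours vs 50 hours
```

The same case run only ten times would not pay off (19 hours automated vs 10 manual), which is why expected lifetime run count dominates the decision.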
Two pieces that didn’t seem to fit for me are unit tests and exploratory testing. Pretty much anything that can have automated unit tests should have automated unit tests. The goal here is 100%, barring any major barriers. Manual exploratory testing is generally what I try to do more of once I have most of the rest of my tests automated. This is where you find most of your bugs and where you find additional test cases to add to your list to run build over build. New test cases found through exploratory testing can subsequently be automated. Regression testing, whether manual or automated, rarely finds new bugs; it mostly protects against regressions and gives you coverage and confidence.
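To make the unit-test point concrete, here is a minimal sketch of what “automated unit tests” means in practice, using pytest-style test functions. The `cart_total` helper is hypothetical, invented for this example; it is not code from the thread.

```python
# Hypothetical e-commerce helper plus pytest-style unit tests (illustrative only).

def cart_total(prices, discount=0.0):
    """Sum item prices, then apply a fractional discount (0.0-1.0)."""
    if not 0.0 <= discount <= 1.0:
        raise ValueError("discount must be between 0 and 1")
    return round(sum(prices) * (1 - discount), 2)

def test_plain_sum():
    assert cart_total([10.0, 5.5]) == 15.5

def test_discount_applied():
    assert cart_total([100.0], discount=0.25) == 75.0

def test_invalid_discount_rejected():
    try:
        cart_total([10.0], discount=1.5)
    except ValueError:
        return  # expected
    assert False, "expected ValueError for out-of-range discount"

# Run with: pytest this_file.py
```

Tests like these run on every build for near-zero cost, which is why aiming for full unit-test automation is usually uncontroversial.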
- There is no golden rule for deciding the percentage and distribution. It depends on the application, its complexity, the technology, and the test process followed in your organization.
- This has to be decided based on your current team mix, exposure, and experience.
- In your suggested distribution, Automated Unit Testing and Automated Integration Testing take a big leap forward, which shows your interest in automation.
- Have you explored free tools for automating your application? Before asking your organization to fund a tool or an automation resource, I would suggest automating regression cases. This will also help you demonstrate to management the need for, and benefit of, investing in automation.
- Reducing manual exploratory testing from 60% to 5% is a big goal. I would suggest breaking it into smaller milestones: try automating 5-10% every month and target that. You will need to identify which test cases to automate and invest time in learning and developing automation.
- You could also form a small team of testers interested in developing automation; working as a group and sharing your learnings, you can start this initiative together.
- Since it is a customer-facing ecommerce website, have you evaluated Selenium for automating it?
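The incremental-milestone suggestion above can be sanity-checked with a quick calculation: at a steady pace of a few percentage points per month, how long does it take to get from 60% manual down to 5%? This is a back-of-the-envelope sketch; the paces are the 5-10% figures suggested above, and `months_to_target` is a helper invented for the example.

```python
# Back-of-the-envelope sketch of the incremental milestone plan above:
# shrink the manual exploratory share from 60% to 5% by automating a
# fixed number of percentage points of coverage each month.

def months_to_target(start_pct, target_pct, points_per_month):
    """Monthly milestones needed to move from start_pct manual coverage
    down to target_pct at a steady automation pace."""
    months = 0
    current = start_pct
    while current > target_pct:
        current -= points_per_month
        months += 1
    return months

print(months_to_target(60, 5, 5))   # 11 months at 5 points/month
print(months_to_target(60, 5, 10))  # 6 months at 10 points/month
```

So even at the optimistic end of the suggested pace, this is roughly a half-year to one-year effort, which supports planning it as a series of milestones rather than one big switch.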
This thread reminded me of the funny story about Testivus, the grand master who answers the question, “How much test coverage is necessary?”
For every student that asked, Testivus had a different answer. The answer depended on the person asking the question.
Joking aside, for our apps (consumer facing web app, js/java/apache/tomcat/oracle) we target automated testing at 50% for UT, 30% functional/api, 20% system level regression. On top of that, we add manual regression tests to confirm the results from automation, and that actual look/feel is correct.
For each project (adding a new feature, for example), the mix changes over the lifecycle of that project. Early on, mostly exploratory manual testing. As the project firms up, more and more automation is added.
For your question, you are asking about resource allocation, and resource allocation is hard to answer. You probably need to “overstaff” automation for a while to get caught up to where you want to be. But this needs to be balanced against your need to deliver new content.