What proportion of different types of software testing does your team use?

Note: I've asked this over on SO and p.SE and it got closed. I'm not trying to game the system, just looking for a home :)

I'm seeking a little "wisdom of crowds" estimation, or a pointer to an authoritative reference, on best practices for allocating testing effort.
While there is a wide range of different types of testing, consider the following list:

Manual exploratory testing (random ad-hoc testing, either through the user interface or APIs)
Manual end-to-end testing (predefined user-interface driven tests)
Automated end-to-end testing
Manual integration testing
Automated integration testing
Automated unit testing

and given the following technologies and application:

some kind of LAMP stack
consumer-facing ecommerce website
REST-based business logic middle tier for use by the website and internal-facing back-office applications
back-office applications (inventory/order management, warehouse logistics, BI)

and given the following organization:

product management defining the user-facing website
everyone on the front end is a PHP coder, and some are also good at JavaScript/CSS
the middle tier is Java
back-office applications are currently PHP and migrating to Java
back-office apps are defined by the internal consumers (purchasing, finance, warehouse, etc.)

Let's think about the total number of hours spent performing testing activities (developers creating unit and integration tests, test engineers creating and executing test plans, test engineers creating tools and harnesses and running them).
THE QUESTION:
Does anyone have a pointer to an authoritative study regarding this kind of resource allocation?
At my company, for example, I'd guess this is our current mix:
60% Manual exploratory testing
10% Manual end-to-end testing
10% Automated end-to-end testing
 5% Manual integration testing
 5% Automated integration testing
 5% Automated unit testing

As for my suggestion of a better practice, I'd prefer to see something more like:
 5% Manual exploratory testing
 5% Manual end-to-end testing
10% Automated end-to-end testing
 5% Manual integration testing
20% Automated integration testing
55% Automated unit testing

I'd be most appreciative of any help in finding some shoulders to stand on to help guide my team.

Postscript: I'm confused about the whole "subjective" thing, and I attempted to craft this question in line with the six guidelines for subjective questions (note that this question was closed on Programmers, so that's not the answer). Anyway, given the six guidelines, I thought this satisfied them. Just to review:

inspire answers that explain “why” and “how” -- admittedly a little weak here, but I'm pretty clearly asking for "how"
require long not short answers -- again, someone could answer with a string of five numbers, but I would hope to see some insight into why they landed on that particular mix
have a constructive, fair, impartial tone -- I hope that providing the estimates of my current and hoped for situation don't torpedo this characteristic
encourage experiences over opinions -- I think this is pretty clearly asking for what folks are doing and what's working for them
insist opinion is backed up by fact -- I would hope that folks would tell the truth about their testing mix (analysis identifying why their mix is successful would be a bonus; see #1, #2, and #4)
not just mindless social fun -- I'm hoping to learn something here

Solutions/Answers:

Answer 1:

You’re asking a unicorn question.

Why?

Because you’ve skipped over the point of all that work. It doesn’t matter what proportion of different types of testing you do, it doesn’t matter what your organisation is, and it doesn’t matter what technologies you use, if you don’t know what information your stakeholders want to discover from your testing.

That’s it. That’s all that’s important. And if you don’t start from that, you’re doomed no matter what you do. You’ll probably provide some useful information, but it’ll be by accident rather than by design.

Find out what your stakeholders, what your product owners, what your internal consumers, what your developers want to know. What issues are important to them, not me, not Sam, not anyone else on the planet. What information you can provide to them that will cause them to make different decisions about what you are all building.

Testing does not make anything better in and of itself. It simply provides useful information to people who will make better things, and make things better. If you don’t know what kinds of issues most concern them, what kinds of errors would represent the greatest threat to the value of your product, then you can’t choose a mix of types of testing that gives you the best chance of finding those issues. There is no best practice for that, other than: do what works, iterate, keep thinking, keep talking to your stakeholders. None of which has numbers attached.

I’m afraid you’re starting from the wrong end. I agree with user246: nobody here can give you those answers. Except you.

Answer 2:

I cannot tell whether you are looking for studies advocating a certain resource mix or just studies describing experiences with a certain resource mix. Aside from what I would find in a Google search, I do not know of any such studies.

In one of my previous jobs, there were several very talented developers on the QA staff. They tended to concentrate on things that were both hard to test manually and unlikely to be tested during development: for example, stress testing. We did not automate any of our UI testing because it changed so often that we did not believe automation was worth the investment. Developers sometimes wrote API-level unit tests but often did not. At that company, I believe our resource mix was closer to the current mix at your company.

In your question, you say you want to shift a majority of your testing resources from manual testing to automated testing. The right mix for your organization depends on considerations that no one in this forum can measure for you: for example, the skill set in your organization, the interest level in your organization, and the maintenance costs of your automation (something you did not mention in your question). I think you described a reasonable goal, but I advise you to approach it incrementally, beginning with what you judge to be easiest and/or most valuable. I believe this will improve your chances of success. It will also give you the opportunity to make mid-course corrections as problems and opportunities arise.

Answer 3:

I’ll take a stab at an answer. Some of it I think doesn’t belong here, and some of it I think needs to be split into a couple of separate questions:
What percentage of my testing should be 1) exploratory testing, 2) end-to-end testing, and 3) integration testing?
The answer to this question, unfortunately, depends a lot on the product itself; it should be determined during the product planning phase and captured in a test plan.

A completely separate question is: what percentage of my tests should be automated?
The answer to that one is a little more straightforward: if automating a test case will get you a better ROI than testing it manually, then automate it. This means that if developing the automated test, maintaining it, and executing it will take less time in total than running the test manually over the product’s life cycle, then it makes sense to automate. Another factor that might come into play is confidence in automated vs. manual tests. Humans can make errors: skipping steps, marking tests as passed even when they don’t really pass, interpreting instructions incorrectly, and so on. Similarly, automation can have bugs, and an automated test that you think is testing one thing could be completely skipping important validations.
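To make that break-even reasoning concrete, here is a minimal sketch in Java; every hour figure in it is a hypothetical placeholder you would replace with your own estimates.

```java
// Break-even arithmetic for automating a single test case.
// All numbers are hypothetical placeholders; substitute your own estimates.
public class AutomationRoi {
    public static void main(String[] args) {
        double buildHours = 16.0;          // one-time effort to write the automated test
        double maintainHoursPerRun = 0.25; // upkeep, amortized per execution
        double manualHoursPerRun = 1.5;    // cost of one manual execution

        // Automation pays off once:
        //   manualHoursPerRun * runs > buildHours + maintainHoursPerRun * runs
        double breakEvenRuns = buildHours / (manualHoursPerRun - maintainHoursPerRun);
        System.out.printf("Break-even after %.0f runs%n", Math.ceil(breakEvenRuns));
        // With these numbers: 16 / (1.5 - 0.25) = 12.8, so the test is worth
        // automating if it will run roughly 13+ times over the product's life cycle.
    }
}
```

Note that if maintenance per run rivals the manual cost, the break-even point never arrives; that is the trap Answer 2 describes with a frequently changing UI, where automation was judged not worth the investment.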

Two pieces that didn’t seem to fit for me are unit tests and exploratory testing. Pretty much anything that can have automated unit tests should have automated unit tests; the goal here is 100%, barring any major barriers. Manual exploratory testing is generally what I try to do more of once I have most of the rest of my tests automated. This is where you find most of your bugs, and where you find additional test cases to add to the list you run build over build. New test cases found through exploratory testing can subsequently be automated. Regression testing, whether manual or automated, rarely finds new bugs; it mostly protects against regressions and gives you coverage and confidence.
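As an illustration of the kind of automated unit test advocated here, the following is a minimal sketch using JUnit 5 against a Java middle tier. The discount rule is a hypothetical stand-in for real business logic, inlined so the example is self-contained.

```java
// A hypothetical middle-tier pricing rule and its unit tests (JUnit 5).
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class DiscountTest {
    // Hypothetical rule, inlined for self-containment:
    // orders of $100 or more get 10% off.
    static double total(double subtotal) {
        return subtotal >= 100.0 ? subtotal * 0.90 : subtotal;
    }

    @Test
    void ordersOfOneHundredOrMoreGetTenPercentOff() {
        assertEquals(90.0, total(100.0), 0.001);
    }

    @Test
    void smallerOrdersPayFullPrice() {
        assertEquals(99.0, total(99.0), 0.001);
    }
}
```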

Answer 4:

  • There is no golden rule for deciding the percentage and distribution. It depends on the application, its complexity, the technology, and the test process your organization follows.
  • It has to be decided based on your current team’s mix, exposure, and experience.
  • In your suggested distribution, automated unit testing and automated integration testing take a good leap, which shows your interest in automation.
  • Did you explore free tools for automating your application before asking your organization to fund a commercial tool or an automation resource? I would suggest you automate regression cases first; this will also show management the need for, and the benefit of, investing in automation.
  • Reducing manual exploratory testing from 60% to 5% is a big goal. I would suggest working toward it in smaller milestones: try automating 5-10% every month and target that. Identifying the test cases to automate, and investing time to learn and develop the automation, will be required.
  • You can also form a small team of testers interested in developing automation; working as a group, you can pool your learnings and start this initiative together.
  • Since it is a customer-facing ecommerce website, did you try evaluating Selenium for automating it? (See the sketch below.)
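Here is a minimal sketch of what a Selenium WebDriver smoke test for such a site might look like in Java. The URL and element locators are hypothetical placeholders, and it assumes the selenium-java bindings and a ChromeDriver binary are available.

```java
// Minimal Selenium smoke test: load the storefront, run a search,
// and check that at least one result appears. Locators are hypothetical.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class StorefrontSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://shop.example.com");              // hypothetical URL
            driver.findElement(By.name("q")).sendKeys("widget"); // hypothetical search box
            driver.findElement(By.name("q")).submit();
            boolean hasResults = !driver.findElements(By.cssSelector(".result")).isEmpty();
            System.out.println(hasResults ? "PASS" : "FAIL");
        } finally {
            driver.quit();
        }
    }
}
```

A regression case like this, run build over build, is the kind the answer suggests automating first.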

Answer 5:

This thread reminded me of the funny story about Testivus, the grand master, answering the question “How much test coverage is necessary?”

For every student that asked, Testivus had a different answer. The answer depended on the person asking the question.

Joking aside, for our apps (a consumer-facing web app; JS/Java/Apache/Tomcat/Oracle) we target automated testing at 50% unit tests, 30% functional/API tests, and 20% system-level regression. On top of that, we add manual regression tests to confirm the results from the automation and to check that the actual look and feel is correct.

For each project (adding a new feature, for example), the mix changes over the lifecycle of that project. Early on, mostly exploratory manual testing. As the project firms up, more and more automation is added.

For your question, you are asking about resource allocation, which is hard to answer. You probably need to “overstaff” automation for a while to catch up to where you want to be, but this needs to be balanced against your need to deliver new content.

Good luck.
