
Keep ’em together

Today I found a very strange remark in an article called “Best Practices for Outsourcing Software Testing and Development”.

The question from the interview is “What are some of the mistakes that companies commonly make when outsourcing software testing?” Sashi Reddi, CEO of AppLabs Technology, a Philadelphia-based company that specializes in software testing and development with an emphasis on quality assurance, says:

The biggest mistake is that many times companies outsource both testing and development to the same vendor — thus they have a fox guarding the henhouse! These two activities need to be done by independent groups. The testing group must be able to provide independent, objective feedback on the development process and output.

I find this a very peculiar point of view. As is common with some best-practice people, it’s a set of universal generalizations that in fact apply only to a very small corner of the universe. I think Mr. Reddi is incorrect and misguided not only about many of the purposes of testing, but also about the purpose and dynamics of feedback and of software development.

I don’t believe that testers are guards; testing isn’t a supervisory activity. I agree with Cem Kaner‘s explanation of testing: “a process of technical, empirical investigation, done on behalf of stakeholders, with the intention of revealing quality-related information of the kind that they seek.”

Feedback is more likely to be useful if it’s rapid. If there’s a problem, we probably want to find it and start solving it as soon as possible. Separating testers and developers, as Mr. Reddi appears to promote, tends to lengthen the time in which a problem can be detected, discussed, demonstrated, understood, fixed, and retested. That’s not always the case, but it’s a general tendency: what’s faster, making a remark to the person next to you, or writing up an email and/or a bug report? Moreover, feedback based on understanding is typically more useful than feedback based on oblivion. Knowing how the product is intended to work and how it actually works helps me to find the problems in it more quickly. Knowing the developers and working with them helps immensely in learning about the product.

In fact, if someone wants good feedback from me, I generally need to be as close to them as I can get. If someone expects me to evaluate a product, my feedback will tend to have more value if I can see the product; if someone wants me to assess a process, they tend to get better results if they allow me to observe the process.

Now, if my job is to act as a tester who is trying to find fault with a product–say, in a context like a contract dispute or a lawsuit–that’s a different matter. In that case, the situation is one of confrontation, and keeping testers and developers separate might be necessary. But most software development isn’t like that; in most projects, testers and developers are on the same side, working collaboratively to develop the most useful and robust product possible. One can be cautious and skeptical while collaborating.

When I’m on a testing project, I do provide independent feedback on the development process and output. That is, I’m capable of thinking independently. My feedback is objective with respect to the developers, the project managers, and the other members of the project community–to the degree that anyone can be objective, which as any student of journalism will tell you is… well, impossible. But I don’t have to be kept sequestered from developers or anyone else to call ’em like I see ’em, and doing anything less than that sounds like incompetent testing to me.

Imagine someone saying, “Sales and marketing have to be done by two independent groups. We want the marketers to be kept as far away from the salespeople as possible; we don’t want the independence and objectivity of the marketing effort to be compromised by the salespeople.”

8 replies to “Keep ’em together”

  1. The assertion that we should hire an independent test outsourcer in order to protect the hens from the fox has been a “best practice” for decades. And for test labs who need the business, it IS a best practice, because it’s the practice that gets them hired.

    This is _sometimes_ also a good practice for the potential client of the test lab.

    Testing is often done to mitigate risk.

    If a project’s most significant risk is that the programming organization may deliver unacceptable code because it is operating in a self-interested way that conflicts with the needs of ITS clients, those clients need an organization that is independent of the programmers to assess whether the programmers are delivering an acceptable product or service.

    Such conflicts do arise. For example, when the programming is being done by an independent contractor, that contractor MAY be indifferent to the genuine needs of its client. There are certainly stories of this in military computing and I have seen it in some large commercial contracts.

    However, in the event of conflict between the programming staff and the project clients, would the clients be better able to protect their interests by assessing the quality of the code themselves or by retaining another contractor that might or might not understand or care about the client’s long-term needs?

    I think the client should seriously consider keeping testing in-house if it cannot trust the contracted programming group. However, if the client has no testing capability, it is stuck with the risk and inefficiency of hiring that additional contractor. (I’ll let you decide whether that would be a “best practice” or a practice that is less bad than some alternatives.)

    On the other hand, suppose that you are dealing with a trusted external programming organization. Everything I have heard is that they will do better development if they are accountable for testing as well as code delivery. Testing done throughout the project, by people who are readily available to demonstrate problems and explain their theory of risk, can have profound effects on the implementation. These testers are not merely finding bugs. They are also training the programmers and being trained by them. Additionally, these testers are probably gaining a deeper insight, over a longer period of time, into the product, its risks, its intent and its market. Gaining such information is often a big challenge for an external test organization.

    Sometimes there is a serious mistrust / conflict-of-interest problem with development done in-house. The engineering executives want to ship product in a state (quality, feature set, whatever) that serves their personal agendas but that might do harm to the broader corporation. The value of the independent test lab in this case is that, if properly briefed and trained by an in-house stakeholder, the test lab might “expose” problems that no one inside the company dares to mention, writing reports that would result in a firing if written by a member of the engineering staff. This might be a tactical necessity, but again, you can decide whether to label this a “best practice” or a poor second to the alternative of replacing the self-dealing executives. Even here, though, there is an often-preferable alternative to going to an external test lab–put together an in-house test team under the direction of other executives. These testers are independent of the engineering organization but they still have direct access to the key stakeholders and a better shot at gaining deeper knowledge of the product’s market, user community, usage patterns, risks (and the impacts of different types of failure), gossip about the implementation problems as they arise, as well as a better shot at effectively explaining the problems they find (with the product or the management of the larger product) to stakeholders who might have the authority to override the self-dealing engineering executives.

    — cem kaner

  2. How many different contexts might there be? As usual, Cem has exposed far more than I’m able to consider in a reactive little blog post. A more expertly written (and considered) version of my original post would have included a heavier emphasis on the large number of situations that _might_ or _might not_ warrant the use of an external test lab.

    Perhaps the most important factor in deciding what’s best in a given case is the level of trust between the client, the project sponsors (not always the same as the client), and the development group–along with the ability of each to think, test, and act in ways that are both critical and mutually supportive.

    Thanks, Cem.

    —Michael B.

  3. Agreed, Bolton. On another occasion, Mr. Sashi Reddi also commented as below.
    ———————————-
    The company sends its most labor-intensive “cookie-cutter” work to its 850-employee testing center in Hyderabad, a city in south-central India, Reddi said.

    “For that type of work, we like to take advantage of the cost structure in India,” he said.

    ———————————–

    I find these comments an insult to the testing community. It is very clear that he runs a sweatshop in India.

  4. Well, Anon, you could look at it that way… or maybe not.

    In a weird coincidence, I was invited to speak at AppLabs on my recent trip to India. I spoke for over an hour to a group of testers and AppLabs managers at the very centre to which you refer. I wasn’t there for long, but it sure didn’t look like a sweatshop to me. I could be wrong, but the building was spacious and well-appointed (in considerably better shape than the offices of the company where I started my testing career, as I recall), the people were warm and friendly, and the ones that I talked to could not be characterized as sweatshop thinkers–not by any means. Besides, it’s characteristic of a sweatshop to close access to the outside and to close minds. If AppLabs were really interested in running a sweatshop, I doubt that they’d have invited me to speak, and I doubt that I would have got so much out of the conversations that I had with their people. I could be wrong, but it didn’t feel like a sweatshop. (If anyone who actually works there has some evidence otherwise, I’d like to hear from them so that I can correct myself yet again on this thread; mb@michaelbolton.net–and I will utterly respect your privacy).

    I don’t think the cookie-cutter comment is insulting to the testing community per se. It might be an indictment of the way many people and businesses think about testing overall. I might like that to change, but I don’t feel I have to take it personally. There is a way out of that mindset, whether you live in Hyderabad or Houston: a skilled tester can almost always provide new information that the client will value, no matter how the client originally conceived of the mission–if that tester is motivated to do so.

    —Michael B.

  5. It seems to me that it all depends on why you’re testing – there’s not just one reason.

    If you’re testing as an integrated part of the development process (“Agile” development), then you want them closely coupled – not just within the same organization, but very possibly the same people. Test early, test often.

    Having said that, if your purpose in testing is to, say, satisfy auditing requirements, or risk mitigation as has been pointed out above, it makes huge sense to split it up. I know developers – and so do you – that will be more than happy to fudge their test results, or even gloss over the testing process entirely, to save work now – especially if someone else in charge of “operations” is going to be on the hook to clean up afterwards. I unfortunately speak from experience here. There’s nothing wrong with testing to keep people honest. I strongly suspect this is the sort of testing Reddi was thinking of in his quote.

    In an ideal world you do both – unit testing during development to keep things on track and ease QA, and independent testing afterwards (or at milestones) to make sure you have a real idea of the functional state of the project.

  6. Thanks for the comment.

    The thing that upset me about Mr. Reddi’s quote (and I’ve calmed down now 🙂 ) was the seemingly absolute nature of it. In my own defense, I at least acknowledged the possibility of other contexts. But I also wanted to emphasize the point that, just as there are different ways of thinking about “testing”, there are also different ways of thinking about “independence”.

    —Michael B.

  7. I look at it from two dimensions: first, the subjective goal of making the product better; and second, finding as many bugs as possible (functional or trivial issues, or enhancements; anything that matters). In the subjective sense, it’s important that developers and testers work together; in the objective sense, I feel they need to be isolated from each other. Isolation is necessary for many reasons: giving the test engineers a sense of independence, avoiding the natural collisions between developers and test engineers, and creating an environment where both developers and testers feel that their activity is purely independent.
    Having said that, it is also important that both teams have a clear sight of their vision, perhaps each in their own sense. It might be a good idea to bring them to a single dais to discuss the progress of the project, but isolation for sure makes testers more effective (though combining them with developers might make them more efficient).

  8. The term independence is indeed subjective; I was interested to read your views on the subject!

    My test team is internally under the “customer” management tree, compared to the “project” tree, to allegedly allow us to provide independent testing.

    However, I still have to agree my budgets and timescales with the projects. Let’s say I want to run tests A, B & C; the customer wants A & B tested and is willing to accept the risk of not doing C to save cost, but the project will only pay for B & C and wants the customer to accept a lower level of A… You can see how the “discussions” over this waste time and money, cause unnecessary stress, and can result in a level of independent testing that the customer would otherwise not have accepted.

    If I want evidence from an expensive test or need more time, the project manager is essentially able to block my test, and I have to fight tooth and nail to justify the test to somebody who is interested primarily in time and cost; testing is often on the critical path at a time when the project is running close to budget, which can have a direct impact on the project manager’s bonus pay for meeting milestones!

    My test report will reflect this of course, but essentially the project has determined the level of “independent” testing. The customer often just gets a report saying independent testing has been done. The report shows no observations from the areas I couldn’t test, only comments that some aspects were not tested.

    The customer doesn’t necessarily appreciate what is in the report; independent testing has been paid for and they got a report back with technically no observations… so, unless the report is written in goat’s blood, it’s easy to assume they have the level of testing they would expect from an independent tester, rather than a level of testing determined by the very person they wanted to check hadn’t cut any corners!

    If the “customer” paid for my testing from a separate budget to the project, with milestones that are outside the scope of the project manager, instead of giving the project the money to organise “independent” testing, then the customer would have control over the level and cost of the testing and more confidence that their independent evidence gives them the level of risk they expected; the customer won’t skimp on their own test assurance, unless they don’t want or need it, which is their own risk.

    I don’t take independent testing to mean I can’t work with the developers, though, which I believe is generally vital to understanding what you are testing; to me it means that the customer needs testers who are under separate management and have time/cost constraints under the customer’s control, so that the customer directly authorises all corner-cutting.

    With internal customers, if a project pays for your testing, then your independence is potentially compromised (to my knowledge, NASA have acknowledged this issue, by the way), and at best your process is simply inefficient, as you are arguing and agreeing the level of testing with the wrong people.

    If the customer was paying for the testing directly then I, as an independent tester, only have to tell the customer what I want to see, and they just need to work out what they can afford and what they will take at risk.

    There is no need for the project to get involved in determining the scope and scale of my testing (though there is still a need for me to work with the project). This is independence.

    Of course, creating the correct engineering cost structure means that the testing would be RevEx instead of CapEx, which is no good for the business 😉

