This is human nature, of course, just like the glee we feel when the test results come back positive. I’ve worked with developers who LOVE CapCal when it gives them the kind of results they expect but are quick to call it into question when it doesn’t. I’m happy to say that the cases where there really is a problem with CapCal are becoming more and more infrequent, but that doesn’t keep me from assuming there is one (or at least could be) until I can prove otherwise. In the court of testing, the tool is guilty of malfunction until proven innocent. And proving it innocent usually means finding and fixing a problem in the application instead.
Using software to test software is like using a diamond to cut diamonds (except for the word “soft”, which spoils the whole analogy if you dwell on it too much). If the drill bit breaks while you’re cutting a diamond, you just replace it with a harder one and keep working. With CapCal, this kind of breakage is usually due to an exotic combination of things that rarely occurs, precisely because it is so exotic. I’d love to give you an example, but you might work for a competitor, and if that’s the case you’ll just have to figure it out for yourself! :-)