Saturday, August 23, 2008
So the line item to change all the externals (see previous posts) to make them compatible between the two systems began. Although I was in a design and usability department, I was asked to do the testing, because I owned the externals and everyone knew about the test generator I had written.
(By the way, testers don't get the credit they should. That'll be another post someday.)
There were many line items being added: functions in the old system being ported to the new system and hardware. I would be ensuring not only that all the old externals were compatible, but also that all the externals for the new system matched. That's every variation and combination of every option on every old and new command, every error condition, and every operator info situation, under every possible condition.
It will mean nothing to anyone but another tester, but in one year, I wrote, ran, verified, and compared results on over 350,000 variations on TWO operating systems.
As new code for new functions was added to the system, I waited until it had been fully functionally tested, and then I ran the compatibility tests for that function.
If I found incompatibilities, I wrote them up and passed them to Peter's group, who then decided whether the code had to be changed to be compatible, or whether there were compelling reasons to allow the incompatibility. We had to keep track of those so we could warn customers later that if their applications depended on a particular message or return, they'd have to change them.
If I found an actual bug in new code, I'd write a PTM (Program Trouble Memo) which would go to the development group responsible for coding of that function.
If I found a bug in old code, code that had been in the system for ages, I'd write an APAR, which would go to the design group owning that function. (One of these days, I'll do a post on the origin of APARs.)
For that release of the operating system, there were 30-some testers testing new function, and 10 testing old function. Keep in mind that I didn't get the code to do my tests until it had been thoroughly tested and approved by those 40-odd people, and by then the programmers in the development groups had figured their work was done, passed test, and had moved on to other things.
I found bug after bug. Old code, new code. During development and testing of that release, something like 1,800 PTMs were written, and of those, over 1,000 were written by ME! After the programmers thought they were finished. Some functions looked like they hadn't been tested at all!
So besides all the testing, simply processing PTMs was a full-time job. Plus I wrote about 100 APARs.
By the time I was finished, that system was CLEAN! It had been a superhuman effort.
You'd think I ought to get some kind of recognition. I kinda thought so.
Nope.
By finding so many bugs so late in the cycle, I pissed off the programming groups by messing up their schedules, making them redo what they thought was finished.
By finding so many missed bugs after they had "thoroughly" tested and approved the new code, I pissed off the testing departments. I made them look like crap - and from what I saw, what they'd handed me, they DID do a crappy job.
Everybody hated me. Because I was right? Because I made them do things right? Because I exposed their inadequacies? Whatever. The release slipped schedule, and it was all my fault.
All my fault? Because I did my job well? Because I found a lot of bugs? You didn't have to fix them all, you know. Oh, I forgot, you did have to fix them to keep up the pretense that quality matters.
Yeah, I forgot Quality Rule 1. Schedules are always more important than quality.
Minor "quality" awards had always been given to the tester who found the most bugs. They skipped the award for that release. After all, I wasn't a tester. I was in design.
Lots of visibility, but all the wrong kind.