Chapter 6. Testing Perl Programs

“Run and find out.”

—Rudyard Kipling, “Rikki-Tikki-Tavi,” The Jungle Book

Hands up everyone who hates testing their code. Yes, we thought so. And who can blame you? The hidden beliefs that make testing so painful run deep:

  • I might find something so wrong that I have to make a radical change.

  • If I find something wrong, I'll have to delay delivery.

  • I'm a code writer, not a code tester.

  • If I find something wrong, I'll lose plausible deniability unless I take the time to fix it.

  • It's really elegant the way it is. I don't want to be bothered with special cases that no one's likely to hit anyway.

  • Users are the best testers. They have real test cases.

It's a rare programmer who doesn't have at least one of those going on (our hands are up too). Unfortunately, as dirty as the job is, it's yours and no one else's responsibility to make sure you deliver the very best code you can. So let's wade past the discomfort and the apathy and see what we can do.

Testing is getting more attention these days, thanks to a new methodology called Extreme Programming (XP), which states not only that everything shall be tested, but also that the tests will be written before the code they test. Here's a breakdown of different types of testing:

  • Inspection testing, or code walkthroughs. Nothing is executed; people simply read the code.

  • Unit testing. Each individual component is tested for the expected response under the design conditions and for correct error responses under all other conditions.

    Although there is no clear definition of how large a component may be, it should be small enough that you can test all, or nearly all, of its boundary conditions and execution paths, and that it can be tested easily in isolation. In most languages, the unit is a subroutine. (A minimal example of such a test appears after this list.)

  • Integration testing. This tests whether a subroutine or group of subroutines functions according to its design specification, particularly with regard to its API, when linked with the other subroutines in the project. Issues to consider at this stage include namespace conflicts and memory leaks.

  • System testing. The entire system (or whatever you call the thing you are delivering) is tested against the customer's requirements. (You did get requirements from the customer, didn't you?)

  • Regression testing. This is the same as system testing, but the term implies that it is performed whenever changes have been made to the system, to ensure that the system still performs correctly and has not regressed to an earlier, incorrect state. (A minimal test-suite runner is sketched after this list.)

  • Saturation testing, or load testing. For a system that handles arbitrary loads (such as a web server), it's important to find out where its limits are. This type of testing throws more and more simultaneous requests at the system until either it fails or you have exceeded the requirements by a vast margin.

  • Acceptance testing. This one isn't for you; it's for the customer. They need a way to tell whether what you've delivered to them meets their requirements. In an ideal world, they write the test themselves, and warrant that it matches the requirements they gave you. Don't be surprised if they ask you to write the tests for them, though. (Don't be surprised if they ask, “What's acceptance testing?” either.)

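To make the unit-testing entry concrete, here is a minimal sketch of what such a test might look like in Perl, using the standard Test::More module. The Stack class and its methods are invented for the example and not part of any real distribution; a test like this would normally live in a file such as t/stack.t:

    # t/stack.t -- a hypothetical unit test for an imagined Stack class
    use strict;
    use Test::More tests => 4;

    use_ok('Stack');                   # the module loads

    my $stack = Stack->new;
    ok(defined $stack, 'new() returns an object');

    $stack->push(42);
    is($stack->pop, 42, 'pop() returns what push() stored');

    is($stack->pop, undef, 'pop() on an empty stack returns undef');

Each ok() or is() reports a pass or fail in the plain-text format the standard test harness understands, so a test written this way can be run on its own or as part of a larger suite.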
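For regression testing, the usual Perl approach is simply to rerun the whole suite of such tests after every change. Here is a sketch of a runner, assuming the test scripts live under a t/ directory:

    # run_tests.pl -- rerun every test script under t/ and summarize
    use strict;
    use Test::Harness;

    runtests(glob 't/*.t');

Test::Harness's runtests() prints a summary of passes and failures; if a test that used to pass starts failing, the system has regressed.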
Few projects include all these types of testing; the distinctions among several of them are blurred anyway, and it will usually be acceptable to leave some out. XP requires only unit testing and regression testing; code walkthroughs of a sort happen because programmers are required to work in pairs.

Let's go through each type of testing and see what it means to a Perl programmer.
