A new testing methodology?
I try to always program using test-driven development, and just as I feel a little scared when I drive, even a short distance, without a seatbelt on, I now feel a little scared when I program without a full suite of automated unit tests to back me up. It's really a great way to program. Sometimes, when I get tired or uninspired about programming, I'll look at my test cases and make sure that I'm testing everything. And whenever I find a bug, the first thing I do is create a test case that shows the bug and, eventually, proves that the bug has been fixed.
I started noticing that I often did the same thing: I would create one little file for the input and another for the expected output. The test program would take the input, write its output to a temporary file, and then compare that output with the expected output. I'd leave the temp output around whenever it didn't match.
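That input/expected-output pattern can be sketched as a small helper. This is a minimal illustration of the general technique, not xml2ddl's actual test code; the function and file names are made up for the example.

```python
import filecmp
import os

def run_and_compare(convert, input_path, expected_path, temp_path):
    """Run `convert` on the contents of input_path, write the result
    to temp_path, and compare it against the expected-output file.
    On success the temp file is deleted; on failure it is left on
    disk so it can be inspected."""
    with open(input_path) as f:
        result = convert(f.read())
    with open(temp_path, "w") as f:
        f.write(result)
    if filecmp.cmp(temp_path, expected_path, shallow=False):
        os.remove(temp_path)   # matched: clean up the temp output
        return True
    return False               # mismatch: leave the temp output around
```

Each test case then costs at least two files on disk, plus the temp file when something breaks.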
Then one day I had to write some tests for xml2ddl. The test I wanted was to take one input XML file, compare it to a second input XML file, and produce the list of DDL (or SQL) statements that would transform XML1 into XML2. So every test would need at least three files (two for input and one for the expected output), plus the temp files. In fact it's worse than that, since the DDL output can differ for each type of database connection. I wasn't looking forward to it.
Then I came upon the idea of putting all three files into one XML file. Once I did this, I started seeing extra things I could do with this method. I could hang extra tidbits off the XML, such as:
- a long description of what the test does,
- an indicator that this test should fail,
- a marker that a test shouldn't fail but currently does,
- or that this is the expected response for one DBMS, and here is the different response for another DBMS.
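To make the idea concrete, here is one possible shape for such a combined test file and a loader for it. The element and attribute names here are illustrative assumptions, not xml2ddl's real schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical single-file test case: both schema versions, the
# per-DBMS expected output, and descriptive metadata all together.
TEST_CASE = """
<test name="add-column" failure="false">
  <description>Adding a column emits an ALTER TABLE statement.</description>
  <before><schema><table name="t"><col name="a"/></table></schema></before>
  <after><schema><table name="t"><col name="a"/><col name="b"/></table></schema></after>
  <expected dbms="postgres">ALTER TABLE t ADD b;</expected>
  <expected dbms="mysql">ALTER TABLE t ADD COLUMN b;</expected>
</test>
"""

def load_case(xml_text):
    """Parse one combined test case into a plain dict."""
    root = ET.fromstring(xml_text)
    return {
        "name": root.get("name"),
        "should_fail": root.get("failure") == "true",
        "description": root.findtext("description", "").strip(),
        # one expected output per DBMS, keyed by the dbms attribute
        "expected": {e.get("dbms"): e.text for e in root.findall("expected")},
    }
```

One file now carries everything the runner needs, and the description and expected-failure flags ride along with the data they describe.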
And then I realized that I could output all the results of the tests into an HTML file. Now this HTML file is not only a list of tests and whether they passed, but also a form of documentation. Just take a look at what xml2ddl outputs. It's fantastic: I've created documentation, a feature list, and tested my code all in one stroke.
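A minimal sketch of that report step, again under my own illustrative names rather than xml2ddl's real code: because each test case carries its own description, the results table doubles as a feature list.

```python
from html import escape

def report(results):
    """Render a list of (name, description, passed) tuples as an
    HTML table.  Descriptions come straight from the test cases,
    so the report reads as documentation as well as a pass/fail list."""
    rows = "\n".join(
        "<tr><td>{}</td><td>{}</td><td>{}</td></tr>".format(
            escape(name), escape(desc), "PASS" if ok else "FAIL")
        for name, desc, ok in results)
    return ("<table><tr><th>Test</th><th>Description</th><th>Result</th></tr>\n"
            + rows + "\n</table>")
```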
I think this is really a different way of testing. I'll call it the Kirkwood Testing Methodology (KTM).