Tuesday, September 21, 2004

The Importance of doing what you say you do.

I've worked in several organisations, in many different industries and styles. From blue chip insurance companies to telecommunications startups to crown corporations. Each organisation had a different style, be it XP/Agile, ISO-9000, or Wild West.

However, all of the successful ones had one thing in common. They did what they said they did. For example, if they said they used a waterfall development model, they made sure that that really was how they developed software. If they were XP, they followed the practices (all of them) in Kent Beck's book.

The organisations that struggled were the ones that either didn't say what they did, or worse, said one thing but actually worked entirely differently.

Everyone knows these groups. They don't understand why they fail, and they don't understand why they succeed. When they amazingly do succeed, it is usually because of sheer force of will by the team.

At first I thought it was certification that differentiated the organisations. The ISO 9000 certified teams developed better software than the others. But that wasn't it either. I've worked for organisations that didn't have ISO 9000 certification and developed good software, on time, and on budget. I've also worked for ISO certified companies that completely failed to actually deliver working products. It helped that the ISO training I had said that ISO didn't guarantee quality. :)

Certifications aren't key, although they do help. I feel that communication is the key. With good communication, the team can quickly come to a consensus on the state of the project and what the next step is.

I see this all over the place. As teams get larger, communication gets harder. It isn't just that the number of possible conversations grows quadratically (n people have n(n-1)/2 possible pairs), it's that each person finds it harder to communicate.

A small team develops a method of working through peer pressure and shared experience. They do things a certain way because they have always done it that way, and it works. They don't have to talk about it because the communication between them is constantly happening. The small team is able to monitor each other to keep each other out of trouble. For example, if someone decides to rewrite a piece of code, they will tend to avoid causing problems for others since they know what everyone else is working on.

As teams get larger, this communication becomes harder - it no longer happens as a side effect. New members don't share the same experiences, so they make the old mistakes all over again. Even worse, people start spending more and more time cleaning up after each other's changes.

I believe that the key is the shared metaphor -- the development culture. If developers in a team are able to say, "We do that", and everyone understands what "that" is, the team is better off. New staff understand what is expected of them, and what to expect of their teammates. If people share the same language, that saves time, allowing the communication of more important things like, "What's Bill doing today?"

This is where codification, standardisation and certification help. They give everyone a common understanding. It doesn't have to be CMM or RUP or ISO 9000, it just needs to be understood. If the process you are following is based on another one, it is important to state the divergences.

For example, I worked for a group that said they used XP. However, they didn't do continuous integration, pair programming, the planning game, or test driven development, have an onsite customer, or work a 40 hour week. They did refactor, have a coding standard, have an automated build system that did run unit tests (which were disabled because they failed too frequently), and they did have short cycles (albeit with fixed content). They then went from a 4 person team to 12 people over the course of 3 months. There was a lot of angst because new developers who were hired with the expectation of XP kept running into barriers as they tried to go against the flow of the team and do all of XP. Once the new hires learned how the team was really expected to work, the complaints faded into the background and were dealt with as expected problems instead of process failures.

I've always said that if you don't improve how you do things, you're going to get worse. If a team doesn't "do what it says it does", there's no real way to know where the holes are. Without a common understanding, people may think that something is covered when it isn't. If you've played any doubles sports (tennis, badminton, volleyball) you'll understand this. This is the point where everyone watches the ball head towards the ground, waiting for the other person to play it.

So, "Doing what you say you Do" is all about communication, honesty and self-improvement. This has nothing at all to do with which type of process you use, many are appropriate, including "Hack it until it works". What does matter is that everyone on the the team knows and understands what it is that the team does.

Sunday, September 12, 2004

Duff's Device, Coroutines and Continuations

I've been doing a lot of programming recently after a bit of a hiatus. It's not that I haven't been programming, it's more the type. Instead of writing the deep-down, think-about-it-for-10-minutes-and-then-press-a-key kind of software, I'm writing more bread and butter code.

The change in style has made me aware of several frustrations. When writing bread and butter code, I find that the vast majority of it is boilerplate. It doesn't change from class to class, and I even find myself performing cut-and-paste programming. I'm finding this frustrating because I feel that my productivity has dropped. I find that I have a certain number of lines of bug-free code per day in me. If I can do more by writing fewer lines, then I'm more productive. Having to write boilerplate saps my precious daily lines of code!

So, I'm on the lookout for new ideas to help me write more code faster. I look for ways to architect the code to avoid having to actually write code. This is the other side of Paul Graham's "Good Hackers Don't Use Java" argument. While I have to use C++, I'm looking for line saving ideas from other programming languages.

One of the more code-hungry constructs is the state machine. State machines are everywhere. They are frequently implemented as tables of function pointers, with each state function having a lot of boilerplate at the start and the end.
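
For contrast with what comes later, here is a rough sketch of the table-of-function-pointers style I mean (the class, the states and the handlers are all made up for illustration):

struct Msg;

class Widget {
public:
    Widget() : state(STATE_IDLE) {}
    void handle(Msg *msg) { (this->*handlers[state])(msg); }

private:
    enum State { STATE_IDLE, STATE_WAITING, STATE_DONE, NUM_STATES };

    // Every handler repeats the same unpack/validate/pick-next-state boilerplate.
    void onIdle(Msg *msg)    { /* unpack, validate, do work */ state = STATE_WAITING; }
    void onWaiting(Msg *msg) { /* unpack, validate, do work */ state = STATE_DONE; }
    void onDone(Msg *msg)    { /* unpack, validate, nothing left to do */ }

    typedef void (Widget::*Handler)(Msg *);
    static const Handler handlers[NUM_STATES];

    State state;
};

const Widget::Handler Widget::handlers[Widget::NUM_STATES] = {
    &Widget::onIdle, &Widget::onWaiting, &Widget::onDone
};

Every new state means another handler, another table entry, and another copy of the same prologue.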

My explorations for a better state machine implementation have taken me to Duff's Device, Coroutines and Continuations. When I started, I knew what Duff's Device and Coroutines were, but I only had a vague concept (and an incorrect one at that) of what a Continuation was.

First, an introduction

Everyone out there has probably heard of loop unrolling. This is when you take a long running loop in your program and attempt to perform more operations per iteration. The starting point looks something like this:

int j=100;
for (int i=0;i<j;i++) {
    printf("%d\n", i);
}

To speed up this type of loop, you attempt to perform more printf's per iteration. Simplistically, you will try something like this:

for (int i=0;i<j;i+=5) {
    printf("%d\n", i);
    printf("%d\n", i+1);
    printf("%d\n", i+2);
    printf("%d\n", i+3);
    printf("%d\n", i+4);
}

However, what happens if j = 3? The code will (now incorrectly) still output 0-4. There is a way to fix it...

Enter Duff's Device.

Essentially, Duff's Device makes use of some interesting behaviour of switch and case statements. Case labels act as targets that the switch jumps to, more like named goto's than if statements. This gives the switch the ability to jump straight into a nested scope, even into the middle of a loop!

So, let's change the code to use Duff's Device:

int i=0;
int j=3;

switch (j%5) {
case 0:
for (;i<j;) {
    printf("%d\n", i++);
case 4:
    printf("%d\n", i++);
case 3:
    printf("%d\n", i++);
case 2:
    printf("%d\n", i++);
case 1:
    printf("%d\n", i++);
}
}

This is how an unrolled loop handles a leftover, partial pass: the first trip through the body starts part-way in, and every later pass performs all five operations.

At this point, the audience yells out "But this doesn't save me any lines of code!", and they would be correct. It's Duff's Device combined with the next concept that can save us code. Enter Coroutines.

Coroutines

Coroutines are co-operative multitasking. At some point the task will indicate that it yields the processor, allowing another thread of control to be executed. If the task doesn't give up control, it is never interrupted. Most systems have moved on to using full blown independent threads, but there are still some classes of systems where threads aren't appropriate. Good examples are systems with several hundred thousand independent operations active at any time. It isn't possible to have a real thread for each operation, and managing a thread pool is just as hard as managing coroutines. Add to that the benefits of avoiding locking and resource sharing issues and the argument for coroutines becomes pretty compelling. On those systems, code can still be simplified by using coroutines. State machine code can be dramatically simplified, turning it into a linear top to bottom function rather than a set of states managed by an array or switch.

For example, consider the following (note, the code only works with GCC):

#include <cstdio>

class coroutine {
   public:
   coroutine() : state(0) {}
   virtual ~coroutine() {}
   virtual void run() = 0;
   protected:
   // state must be visible to the derived run() bodies, so it is protected.
   int state;
};

#define coBEGIN() switch (state) {case 0:
#define coRETURN() state = __LINE__;return;case __LINE__:
#define coEND()   }

class coBasic : public coroutine {
    public:
    virtual void run();
};

void coBasic::run() {
    coBEGIN();

    fprintf(stderr, "Here we are in state 1\n");
    coRETURN();

    fprintf(stderr, "Here we are in state 2\n");
    coRETURN();

    fprintf(stderr, "Here we are in state 3\n");

    coEND();
}

int main(void) {
   coBasic co;

   co.run();
   co.run();
   co.run();
}

Where does this save us code? We save code in several locations. First, we don't have to write a function header for each state; code simply flows from one state to the next. Next up, we avoid all of the setup code. Frequently a state will have to do at least a little bit of marshalling as it prepares to run. With this approach, that code is shared among all of the states, instead of being copied into each one or pulled out into a separate function.

There are some limitations to this implementation. First, it is very difficult to specify the next state to invoke, linear is the name of the game. It is possible to change the macros to use specific state names instead of a simple return, which would allow more complex state machines to be created. If we restrict ourselves to GCC, whose preprocessor allows macro overloading, we can even combine the two with both named and unnamed states.
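
Here is a rough sketch of what named states could look like, building on the unscoped macros above (the macro names, state values and usage are my own invention and untested):

// State values chosen to stay clear of case 0 (and of any __LINE__ values,
// if the two styles are mixed in one function).
enum { STATE_WAIT_REPLY = 1000, STATE_CLEANUP };

// Resume at a named state on the next call to run(), instead of at the next line.
#define coGOTO(name)   { state = name; return; }
// Mark a named state; it is reached via coGOTO, or by falling into it.
#define coLABEL(name)  case name:

class coNamed : public coroutine {
    public:
    virtual void run();
};

void coNamed::run() {
    coBEGIN();
    fprintf(stderr, "starting up\n");
    coGOTO(STATE_CLEANUP);              // skip straight past the waiting state

    coLABEL(STATE_WAIT_REPLY);
    fprintf(stderr, "still waiting\n");
    coGOTO(STATE_WAIT_REPLY);           // poll: come back here on the next run()

    coLABEL(STATE_CLEANUP);
    fprintf(stderr, "cleaning up\n");
    coEND();
}

In real code the coGOTO targets would of course be picked at runtime, which is what makes the more complex state machines possible.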

Also, there are some limitations imposed by the use of Duff's Device. C++ does not allow code to jump over constructors, so code like the following will result in a compilation error:

void coBasic::run() {
    coBEGIN();

    fprintf(stderr, "Here we are in state 1\n");
    std::map<int, int> foobar; // <-- ERROR
    coRETURN();

    fprintf(stderr, "Here we are in state 2\n");

    coEND();
}

To get around this, we can do two things. We can either move to C-style declarations and do all of our declarations at the start of the function, or we can modify the macros slightly. The modified macros would look like this:

#define coBEGIN() switch (state) {case 0:{
#define coRETURN() state = __LINE__;return;}case __LINE__:{
#define coEND()   }}

As you can see, we're adding an additional scope around each state. In that way, we avoid jumping over a constructor for an object we could conceivably use, but we do end up having some strange compiler problems. Consider the following code:

void coBasic::run() {
    coBEGIN();

    fprintf(stderr, "Here we are in state 1\n");
    std::map<int, int> foobar;
    coRETURN();

    fprintf(stderr, "Here we are in state 2 map size = %d\n",
            foobar.size()); // <-- ERROR

    coEND();
}

This will generate an undefined variable error for foobar, because foobar is only defined in the previous state. So, we get slightly more confusing code because scopes aren't necessarily clearly defined. However, this does allow state-local variables to be declared and used.

Continuations

When I originally started to write this article, I misunderstood what a continuation was. I thought it was simply a coroutine by another name. I was wrong. Continuations are about continuing execution at a stored location in the procedure, but they add a new wrinkle. Where the coroutine above returns, a continuation passes the execution location down the stack. Perhaps a picture will help:

Coroutine:

 A -> B (no state)
   <- B (point 1)
 A -> B (starts at point 1)
   <- B returns
...

Continuation
 A -> B (no state)
        -> C (B at point 1)
             -> D (B still at point 1)
                 -> B (starts at point 1)
   <--------------- B returns all the way back to A
A

It seems that continuations are a relatively new programming style, with few people understanding their benefits, or even being able to explain how they work. The easiest explanation I found was on c2.com (http://c2.com/cgi/wiki?ContinuationExplanation), especially the C code version (http://c2.com/cgi/wiki?ContinuationsInCee). The form of continuation presented in that code is called a "Single Use Continuation", meaning it can only be executed if it is still on the stack, and that it can only be invoked once. Some languages (Lisp/Scheme/Dylan?) fully support continuations allowing them to be executed multiple times at any point.

There is one more thing about continuations. They save the state of the stack at the point the continuation is created, allowing code to return to that location using the existing local variables. This is very useful for unraveling network and user interaction code. For this we really need a language that supports continuations, such as Scheme, Lisp and Dylan. In C/C++, we can mostly fake it with some limitations.

We can fake them using various methods. In C, we can use setjmp/longjmp or on x86, modify the stack base pointer as in the C2 example. In C++, we can do much of the same using exceptions. The use of exceptions allows us to properly clean up the parent stack frames, while returning to the previous stack frame. In C, we can use the speedier code and simply "Go There".
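
Here is a minimal sketch of the setjmp/longjmp flavour: a single use continuation that is only valid while the frame that created it is still on the stack (the function names are made up):

#include <setjmp.h>
#include <stdio.h>

static jmp_buf resume_point;            /* the stored "continuation" */

static void deep_down(void) {
    /* ... discover we can't make progress here ... */
    longjmp(resume_point, 1);           /* invoke it: unwind straight back to the setjmp */
}

int main(void) {
    if (setjmp(resume_point) == 0) {
        printf("first pass\n");
        deep_down();
        printf("never reached\n");
    } else {
        printf("continued after the bail-out\n");
    }
    return 0;
}

Note that this can only unwind upwards; it can't jump back down into deep_down later, which is why it is only a single use continuation.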

Using the previous example, the C/C++ continuations I'm presenting here would also invalidate any continuations in stack frames C or D when B is executed the second time.

For a basic implementation, consider the following:

// The signal type thrown to "continue" a myContinuation.
struct myContinuationContinueSignal {};

template <class X>
class continuation {
   public:
   typedef X myContinueSignal;
   continuation() : state(0) {}
   virtual ~continuation() {}
   virtual void run() = 0;
   // "continue" is a reserved word in C++, so the member is named resume().
   void resume() { throw X(); }
   protected:
   int state;
};

class myContinuation : public continuation<myContinuationContinueSignal> {
    public:
    virtual void run();
};

#define coBEGIN()  switch (state) {case 0:{
#define coRETURN() state = __LINE__;return;}case __LINE__:{
#define coCALL(x)  state = __LINE__;try {x;}catch (myContinuation::myContinueSignal&){}}case __LINE__:{
#define coEND()    }}

I hope I've got the code correct; I've never actually plugged any of this into a compiler, so you can be reasonably sure it won't work.

This allows us to take the original state machine and add some niftiness.

void get_some_more_information(myContinuation *cont) {
    // Do some stuff, like send a message and then get a response
    // Do some more stuff, like put the response into the class.
    cont->resume();
}

void myContinuation::run() {
    coBEGIN();

    fprintf(stderr, "Here we are in state 1\n");
    coRETURN();

    fprintf(stderr, "Here we are in state 2\n");
    coRETURN();

    coCALL(get_some_more_information(this));

    fprintf(stderr, "Here we are in state 3\n");

    coEND();
}

Here, if get_some_more_information decides it already has the requested information, it can continue immediately. If it decides that it will have to block, it can use its own continuation to throw back through myContinuation::run while still preserving the state for later execution.

Why is this useful? The alternative would be to pass the coroutine down and have the lower layer call run() again later. If we did that, when get_some_more_information eventually returned, the original invocation of run() would carry on executing as well, possibly at what is by then the wrong state. That results in hard to manage execution paths, with the code getting extra, undesired state transitions.

These continuations are still limited. Due to the try/catch blocks, we can't really interleave continuations of the same type. The destruction of the higher stack frames makes it difficult to have multiple continuations active at any one point in time. Finally, we can't return to the same continuation point more than once, meaning we can't store a state and then return to it at some future point to, for example, re-attempt a transaction.

Coroutines and Continuations. Both are great at decreasing the amount of bug-ridden, hard to read, unmaintainable C++ state machine code you have to write. I know it's helped me get more done in a day.

Wednesday, May 05, 2004

Development Processes

A recent employer has problems with the quality of software they are producing. They also have issues with cost and time to market. So, they're interested in changing their development procedures. A couple of years' worth of losses is a really good incentive to improve performance.

As with many groups, there are problems with their approach. The individuals steering the development of the methodology don't actually have development experience. Even if they have been developers in the past, it was with unrelated technologies. This leads to difficulties during methodology development. Since they don't really understand the work that their engineers are performing, it is more difficult to determine what the real problems are.

This lack of local knowledge leads to the search for the quick fix. They look at the books and say "We need to be CMM level 3!", or, "We need ISO 9000 certification!" - neither of which say much about the actual goal - quality.

I believe that methodologies need to be tailored for each organisation. While it is possible to take a methodology off of the shelf (for example RUP), such a big bang approach is highly risky. The team not only needs to learn the new process, they need to keep on producing software! In such a situation, it will be very difficult to identify the causes of a project failure. Is it because of the new process, poor engineering, or poor requirements? It isn't possible to tell.

Additionally, there may be problems with the methodology chosen. Some practices may not be appropriate for the field. For example, waterfall isn't appropriate for skunkworks teams. Skunkworks projects work in a prototyping, iterative way, while waterfall is all about up front design. While waterfall may result in more determinism, it dramatically hinders the goal of a skunkworks project - exploration.

If a methodology is grown, each new process can be implemented independently. That way, it is possible to reduce the risk to the team while still moving forward. It also allows the team to select only those practices which make sense for their team. Configuration management, nightly builds, automated regression testing, code inspections, design reviews, joint application development - all good practices - often considered "Best Practices". They may not, however, suit your organisation.

Before moving forward with practice changes, the drivers of the change need to communicate and get buy-in from the engineers. The set of "Best Practices" is context driven, and changes from team to team. The engineers will more than likely have ideas on what needs changing, or what doesn't make sense.

Thursday, April 15, 2004

Scoping Doesn't Save You.
or
Large projects and type names.

There seems to be a bit of a misconception out there. People act like namespaces mean they don't have to think too hard about type names anymore. Type names are local in scope, so you don't have the same collision problems that you used to get in large projects.

"Gone are the days when we need to carefully craft a symbol name" they cheer. "If each library has it's own namespace, we are in the clear!".

Not so.

Think about maintenance. We are 600 lines into a 1500 line file, one of 20 in this library, one of several hundred in the project. We are looking for code that is instantiating the class "Bank::Branch::Acct::BalanceInfo", which we have just modified. We want to be thorough and make sure we aren't messing anyone up. At line 600, we find something like this:

BalanceInfo newBalance(5,5,5);

Now, is this something we need to be worried about? A quick glance at the top of our file and we see,

namespace Bank {
namespace Branch {

So, we're close to the same scope, but is it the same one? To find out, we essentially need to track down the type that the compiler ends up using. At first glance, it doesn't look like ours: the BalanceInfo we modified is in the "Bank::Branch::Acct" namespace. However, there might be a magical "using namespace" lying around to cover for that. So, it's back to grepping through the code. Luckily we find a type that is "Bank::Branch::BalanceInfo", which would hopefully mean that the one we've got here isn't the one we modified. I say hopefully because we can't be sure about the using, and the compiler doesn't maintain a global symbol table for the namespace; it only sees the declarations it finds while compiling that specific file. In other words, it depends on which files are included in the compilation of that file.
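
To make the worry concrete, here is a small, made-up arrangement where the unqualified name quietly ends up meaning the Acct class:

namespace Bank { namespace Branch { namespace Acct {
    class BalanceInfo { public: BalanceInfo(int, int, int) {} };
}}}

namespace Bank { namespace Branch {
    using namespace Acct;            // the "magical" using-directive

    void audit() {
        // No Bank::Branch::BalanceInfo is visible here, so this quietly
        // resolves to Bank::Branch::Acct::BalanceInfo via the directive.
        // If Bank::Branch declared its own BalanceInfo as well, this same
        // line would instead be ambiguous and fail to compile.
        BalanceInfo newBalance(5, 5, 5);
        (void)newBalance;
    }
}}

Either way, the answer depends on which declarations happen to be visible in that particular translation unit.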

In this case, the developer wasn't completely deranged, and there wasn't a hidden "using", so we had the correct one.

I don't really see an easy way around this other than careful design. We need to ensure that types with the same name contain the same information. Even to the point that there is an implied inheritance structure.

For example instead of

class Bank::Branch::BalanceInfo;

and

class Bank::Branch::Acct::BalanceInfo;

we would have

class Bank::Branch::BalanceInfo;

and

class Bank::Branch::Acct::BalanceInfo : public Bank::Branch::BalanceInfo;

Might save some hassle later...

Thursday, March 11, 2004

How to Walk an STL Map in GDB

Sometimes, you need to walk a map in GDB. Without a process running, you aren't allowed to call functions on the structure, such as "find" or "[]". However, the data is still available, and can be retrieved if you know how to walk the tree structure.

This assumes that you are using GCC (may be version specific, I used 3.2.3).

The following code is used as an example:

#include <map>
#include <vector>
#include <iostream>

using namespace std;
map<int, int> map_to_walk;
vector<int> vector_to_walk;


int main() {

    cout << "Hello" << endl;

    map_to_walk[5] = 6;

    vector_to_walk.push_back(5);

   cout << "There" << endl;
}

Pretty basic stuff.

At the first cout, the map has been constructed. At that point, it looks like this:

(gdb) p map_to_walk 
$1 = {
    _M_t = {<_Rb_tree_base<std::pair<const int, int>,std::allocator<std::pair<const int, int> > >> = 
            {<_Rb_tree_alloc_base<std::pair<const int, int>,std::allocator<std::pair<const int, int> >,true>> = 
             {_M_header = 0x22e78}, 
             <No data fields>}, 
            _M_node_count = 0, 
            _M_key_compare = {<binary_function<int,int,bool>> = {<No data fields>}, <No data fields>}
    }
}

I've changed the formatting a little to make it easier to see some things.

I'm guessing, but I think the fields are:

  • _M_t - the implementation of the tree
  • _M_t._M_node_count - the number of nodes in the map.
  • _M_t._M_header - the root node in the tree.

(gdb) p map_to_walk._M_t._M_header
$3 = (_Rb_tree_node<std::pair<const int, int> > *) 0x22e78

The root node concept is borne out by the fact that it is a tree node, and it does exist. Looking at the node we see:

(gdb) p *map_to_walk._M_t._M_header
$4 = {<_Rb_tree_node_base> = {_M_color = _M_red, _M_parent = 0x0, 
    _M_left = 0x22e78, _M_right = 0x22e78}, _M_value_field = {first = 0, 
    second = 0}}

With both the left and right children pointing back to us.

Now, after inserting an element, we see the following:

(gdb) p *map_to_walk._M_t._M_header
$3 = {<_Rb_tree_node_base> = {_M_color = _M_red, _M_parent = 0x22e90, 
    _M_left = 0x22e90, _M_right = 0x22e90}, _M_value_field = {first = 0, 
    second = 0}}

The root node is still empty. The node count is now 1. What has changed is that the _M_parent, _M_left and _M_right pointers now point to something else.

If we look at left/right, we see that they are _Rb_tree_node_base pointers. GDB doesn't do downcasts, so we don't get anything else. However, if we force the cast, we get the following:

(gdb) p *(_Rb_tree_node<std::pair<const int, int> > *)map_to_walk._M_t._M_header._M_left
$14 = {<_Rb_tree_node_base> = {_M_color = _M_black, _M_parent = 0x22e78, 
    _M_left = 0x0, _M_right = 0x0}, _M_value_field = {first = 5, second = 6}}

Woohoo, we see our value sitting right there! From there, I'm hoping that it is easy to track down.

STL vectors in GDB

STL vectors are even easier. The vector contains a single buffer of the elements. It contains a pointer to the start and the end of the elements, as well as a pointer to the end of the allocated space.

  • _M_start : The 0th element
  • _M_finish : The last+1 element (equals end())
  • _M_end_of_storage: The end of the currently allocated buffer.

So, to get at a specific element in the vector, you start with the _M_start pointer (0th) and then add the index of the value you are looking for:

(gdb) p vector_to_walk
$1 = {<_Vector_base<int,std::allocator<int> >> = {<_Vector_alloc_base<int,std::allocator<int>,true>> = {_M_start = 0x24c18, _M_finish = 0x24c1c, 
      _M_end_of_storage = 0x24c1c}, <No data fields>}, <No data fields>}
(gdb) p vector_to_walk._M_start 
$2 = (int *) 0x24c18
(gdb) p *vector_to_walk._M_start
$3 = 5
(gdb) p vector_to_walk._M_finish - vector_to_walk._M_start
$4 = 1
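
Since _M_start is just a pointer, GDB will also let you subscript it directly, so element i of a bigger vector is simply _M_start[i]. With the single element above that looks like this (the $ number will vary):

(gdb) p vector_to_walk._M_start[0]
$5 = 5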

Saturday, March 06, 2004

OO Design for Unit Tests

Hmmm. I'm working a contract right now, so there hasn't been much progress on the requirements management system.

I'm currently fixing problems with some existing code. At the same time, I've been looking at writing some automated C-unit style unit tests for my fixes. The only problem is the amount of scaffolding that I would have to erect around the code to even test a single function.

It brought a lesson home to me. In a lot of object oriented code, you see people writing long chains of getters until they obtain the object they are looking for, and then calling a final function on it, e.g.:

  transaction->getConnection()->getStatus()

The only problem is that in order to test this code in isolation, you not only have to create a stub transaction, but you have to create a stub connection too! If it looked like this instead:

  transaction->getConnectionStatus()

We would only have to stub out transaction!

I have always read in articles and books that if you "have a" member that is itself a class, instead of simply exposing it in a getter, you are better off exposing the individual member functions of the enclosed member object. I guess I now understand why.
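
Here is a rough sketch of that in the transaction example (the class layout and the canned status value are invented for illustration):

class Connection {
public:
    virtual ~Connection() {}
    virtual int getStatus() const { return 0; }
};

class Transaction {
public:
    explicit Transaction(Connection *c) : connection(c) {}
    virtual ~Transaction() {}

    // Delegating accessor: callers never need to see the Connection itself.
    virtual int getConnectionStatus() const { return connection->getStatus(); }

private:
    Connection *connection;
};

// In a test, only Transaction needs a stub; no stub Connection is required.
class StubTransaction : public Transaction {
public:
    StubTransaction() : Transaction(0) {}
    virtual int getConnectionStatus() const { return 42; }   // canned status
};

Of course, the price is an extra delegating method on Transaction for every piece of Connection that callers need, which leads straight to the next question.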

The only question is how do you maintain the isolation of interfaces, and avoid writing a lot of code?

Wednesday, February 18, 2004

RPMS

The first thing I found was how annoyed I was getting when, during the debug/test cycle, I had to hand copy files around. That forced me to write a script to do the copying for me. From there, it was a quick jump to having a makefile and then RPMs.

As a packaging system, RPMs are pretty nifty. They are lightweight, and the creation process handles a lot of the gruntwork for you. Compared to HP/UX and Solaris, they are amazing! The only problem? The documentation. The documentation that is provided at rpm.org is incredibly out of date. So much so that the examples don't work anymore.

Once you get around that, it is very handy. The automatic dependency generation even found some bugs in my code!

I do wish it played better with CPAN. That it goes and finds all sorts of Perl dependencies is nice; that it isn't able to look at the already installed Perl modules (through CPAN) is a bit of a pain. I guess I'll have to figure out how to use cpan2rpm. :)

Tricks? Try to avoid having hardcoded versions in the spec file. For example, if you have:

Version: 0.4
Release: 1

in your spec file, you will have to edit it for each Version/Release that you do. Not a great way to spend your day. To get around it, you can specify things on the command line. First, you will need to add the following to the top of your specfile:

%if %{?rel:0}%{!?rel:1}
%define rel 1
%endif
%if %{?ver:0}%{!?ver:1}
%define ver 1
%endif

How it really works, I'm not sure. Once I hit the problem with the code examples in the RPM docs not working, I stopped reading. :) However, it seems to define a variable if it isn't already defined. Once you have that, you can change your Version/Release entries to:

Version: %{ver}
Release: %{rel}

Then, you invoke rpmbuild like this:

% rpmbuild --define "rel 2" --define "ver 1.0"

That sets those variables from the command line, a great time saver.

Thursday, February 12, 2004

Using XML for export/import of DB install values

In many systems, you find a bunch of SQL code that gets executed at install time. This code is usually brittle, hard to read, and generally nasty. My last employer had a lot of code like this, and it was just that, VERY nasty. I've just come up with, I hope, a better way. I'm using XML to store the data, and then reading it and using my existing database (see: Class::DBI) classes to handle the insertions/updates. This gives me the following benefits:

  1. I'm using the same code to do the initial insertion that I use in the regular running of the system, so I only have to fix a bug once
  2. When I migrate to a different database, I don't have to port the installation code
  3. My code handles changes, as well as additions. So, if you change the value of a column, the loader will handle that properly, instead of doing simple "inserts" - see find_or_create() on Class::DBI
  4. The XML file is generated from my test database (see: DBI::Generator::XML) so I don't have to edit it by hand
  5. I can generate test data programmatically and load that at any time
  6. I can dump test data from the system without hardcoded primary key id values!
  7. If I have to edit the file by hand, I can
All sorts of benefits. :) The ability to keep on executing the script without it deleting and re-inserting all of the rows is of special benefit. It lets me progressively add more information to the rows (such as comments) which isn't needed right away. Without the export, though, there would be much less benefit, because hand writing XML from scratch isn't really pleasant. With the export in place, hand modifying the XML is handy; you don't have to do too much typing. :)

First Post!

Wow, my first post. Never having used a blog before, this will be interesting. Essentially, I'm doing this because I find that I'm writing my friends a lot of emails on these subjects, and I don't really want them to be lost. :) As an introduction and scope: I'm writing a Project Information System. If you're interested, you can find out more information at Pollock.ca. I've got an early release of the Requirements Management System up there. I will be talking about Perl, XML and software development.