Wednesday, December 20, 2006

Real Time Billing is Dead

I've recently become convinced that real time billing is dead. Specifically, prepaid cell phones are dead. In the next 2-5 years, it will be near impossible to find a prepaid cell phone.

It's the carriers' own behaviour that has led me to this belief. Let's review:

  1. Cingular and Sprint have both increased their per-SMS charges from 10c to 15c
  2. Hutchison's 3 network in the UK is offering unlimited mobile 3G broadband for UKP5/month.
  3. Hutchison's 3 network's broadband plan includes a built-in skype client.
  4. Skype is offering skype-out to North American destinations for US$30/year.
  5. Cost of VoIP calls are rapidly approaching the price of termination.
  6. The number of transactions on the network is increasing, with a growth rate of 25%.
  7. Many vendors in this area charge per transaction (prepaid, SMSC, etc).
  8. Vodafone NZ and Telecom NZ are playing the "who can offer more SMS's for NZ$10" game.
  9. Vodafone NZ charges the same price for all long distance calls.

We have a situation where there are large downward pressures on charges (per minute and per SMS), along with a huge increase in the number of transactions. The marginal value of each transaction is dropping. I expect that some carriers may be getting close to crossing the line where it costs more to bill for an SMS than they make in revenue (see Cingular and Sprint).

The response? Don't bill for it. Charge the customer a flat rate for access to the network. Turn off the SCPs, account engines, rating engines and anything else whose only purpose is to rate transactions.

Save yourself some rather large maintenance bills! Simplify your network! Increase the handling capacity of your network at the same time!

Oh sure, per-unit real time billing will still exist in markets where there are large disparities in wealth, or be offered by carriers of last resort for people with poor credit histories. However, in tier 1/2 carriers? Dead, dead, dead.

Sunday, November 12, 2006

Is a refactoring project navel gazing?

If your project is refactoring code without adding new features, is that time well spent?

How do you tell when you're finished?

How do you tell if you were successful, or even if you utterly failed?

How do you decide what needs refactoring?

How do you know that what you are refactoring will make things easier in the future?

Are past successes indications of future performance? In other words, will your future projects have the same problems that your earlier ones did?

What happens if you choose the wrong piece of code to refactor?

Don't spend money refactoring code until you know that you will see a benefit from the refactor. You've got better things to do with your time - like inventing new products. Otherwise, you've got a high likelihood of refactoring dead code.

Technical and Architectural Debt are terms used by software engineers as an excuse to rewrite code they don't like. Don't fall for it.

Monday, October 23, 2006

Untested Releases, Curse or Blessing?

For years, I've been going on and on about untested releases. I hated them with a passion. Every opportunity, I would corner someone and tell them that we had to do something about the problem presented by the number of untested binaries we were releasing.

Recently, I've started to change my opinion. Unprecedented!

First off, the problems.

  1. The releases aren't packaged
  2. They aren't consistently versioned
  3. They aren't tested
  4. They can cause regressions
  5. When they go bad, they go _really_ bad
  6. They may not be reproducible
  7. They aren't tracked very well
  8. They result in a mish-mash of software versions on the customer's platform

Now, the benefits...

  1. They aren't tested! This saves the company a LOT of money!
  2. The customer only gets the one fix they are looking for. No side effects!
  3. The customer accepts the risk for failure!

In my current view, those benefits more than likely outweigh all of the problems! So what if the customer has a mish-mash of software versions? So what if they aren't tested - it's the customer's problem! The customer refuses to sign the "Risk Acceptance Form"? Doesn't change the fact that they have tacitly accepted the risk - they installed it, didn't they?

So, I'm changing my opinion. I'm coming to the point of view that the plethora of untested unpackaged releases isn't a procedural failure in the company, but a good way to get the job done quickly and cheaply.

Since untested releases are SOP, that means that when the process goes wrong, panic isn't required! It isn't a slight against my performance if it fails, it's an accepted risk. Ah, I can feel the stress fading away already.

Thursday, July 20, 2006

Language, Language

Have you ever noticed how easy it is to agree with someone and not commit to anything? This is very evident whenever you talk about change with people.

Here are two that I've heard this week.

"I can't make that decision. I'm waiting on the CEO to tell us if we are a product or services company. Once they do that, everything else becomes easy."

Classic avoidance of responsibility. It is important to notice that the person is claiming that they would like to help you, but someone else is stopping them.

Of course, if that decision is never fully made, or it's always a tension between the two poles, nothing will ever be done. It is even a ready made excuse to avoid doing anything.

The other one is even sneakier.

"We could try that."

This one is very, very stealthy. You feel that the person saying it has agreed with you. However, if you look at it, they've committed to nothing. Of course we could try many things, how many of them will actually happen?

Some of my most frustrating work experiences have been with people who appear to agree, but then do nothing. If you don't agree, say so, don't let your comments fester.

I challenge you to use different language the next time someone comes to you with an idea. How about trying some of these?

If you disagree, "Have you considered..."

Working towards a final result of, "Let's try that. You can start next week."

Trained to Complain

I had an epiphany the other day. The chairman of the board came and gave a presentation. He relayed some information to us that was amazing. He said that the CTO for one of our customers had changed companies, and that he had recommended our product to his new employers!

That shouldn't be surprising. You would hope your customers are making recommendations like that all of the time. The interesting thing was that the original customer didn't appear to like us! The relationship with that customer was stormy, and they eventually turned off our software.

The epiphany was this. Every single one of our customer relationships is dysfunctional. Even though our customers complain and say that our software is complete garbage, they secretly love us. That tells me one of two things is true:

  1. Our software really is terrible, it just happens to be slightly better than any other supplier's.
  2. Our software is fine, and something else is going on.

My guess is on 2. Otherwise, the CTO wouldn't have recommended us.

I think we've trained our customers. I think that they have learned that to get anything they need to complain loudly. Not only do they have to complain loudly, they have to threaten. They feel that the only time they get attention from us is when we feel they are going to take their business elsewhere.

The thing is, they're right. We've trained them to act this way.

We are completely reactionary. We keep moving staff from fire to fire. Customers don't get attention unless they are the current emergency. As surely as a dog can be made to salivate when hearing a bell, our customers have learned to scream when they want attention.

That means that not only do our customers feel ignored, we feel like we're always digging ourselves out of holes. We're not happy and our customers aren't happy.

This is something that needs to change. We need to convince them that they will get attention regardless of how bad the situation becomes. We do this by becoming more proactive.

Have you ever seen a team of 5 year olds playing a team sport? They're in a big cluster around the ball. That's what we look like right now. In kids, it's really cute. In a bunch of adults, it is sad.

How do you avoid the problem? As in sports, you start to play positions. Traditional maintenance contracts involve a vendor fixing bugs as they arise. If no bugs are found, the money is pure profit. The change is to spend a portion of that "profit" on the customer every month.

You give your customers a time budget every month. If your customer doesn't have any urgent problems to fix, spend that time proactively. Fix some of their less important issues, or even better, go looking for new ones. The trick is that your customer sees progress on what they want changed. You could even use that time to implement small changes that you would have previously charged for.

At first glance, it looks like this will result in lower profits. However, as with most faults, early detection is key to lower costs. It's amazing how often faults start out as minor annoyances and become serious or critical problems over the space of about 6 months. If you keep contact with your customers, and work steadily on their problems, you will find that you will avoid that severity escalation. If you consider just how much extra money you spend on a customer in an emergency, you will likely find that spending extra to keep them happy is more than worth it.

You'll enjoy your job more too.

Wednesday, May 17, 2006

Who vs What part 2

When I wrote the original posting, I talked about accountability. However, as I wrote it, I got it backwards. Accountability for failures doesn't mean diddly if you aren't also held accountable when things work properly. It is important to receive the credit!

So, who is still more important than what. However, it's not to know who to blame when it goes wrong, but who to reward when it goes right.

If you're being appreciated for the work you do, you'll make more of an effort on it. If you see it as a thankless task, you'll avoid it like a disease.

Essentially, I had the correct idea, but I was stuck in the blame side of the story. My bad.

Tuesday, May 16, 2006

Better STL Map Decoding

Way back when, I tried to decode an STL map in GDB. It didn't work very well. Well, a recent google trawl has turned up a better way!

It seems that GDB has a built-in scripting language! I never knew that, but I should have guessed.

I've reproduced it here just in case the university takes it down. I don't want to lose it!

# GDB stl functions, a set of scripts to help debug STL containers
# Copyright (C) 2000 Gilad Mishne
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA  02111-1307, USA.


# vector functions #

define p_stl_vector_size
  set $vec = ($arg0)
  set $vec_size = $vec->_M_finish - $vec->_M_start
  printf "Vector Size: %d\n", $vec_size
end

define p_stl_vector
  set $vec = ($arg0)
  set $vec_size = $vec->_M_finish - $vec->_M_start
  if ($vec_size != 0)
    set $i = 0
    while ($i < $vec_size)
      printf "Vector Element %d:  ", $i
      p *($vec->_M_start+$i)
      set $i++
    end
  end
end

# list functions                                                         #
# provides generic pointers that need to be cast back to the type        #

define p_stl_list_size
  set $list = ($arg0)
  set $list_size = 0
  set $firstNode = $list->_M_node
  set $curNode = $list->_M_node->_M_next
  while ($curNode != $firstNode)
    set $curNode = ((_List_node *)$curNode)->_M_next
    set $list_size++
  end
  printf "List Size: %d\n", $list_size
end

define p_stl_list
  set $list = ($arg0)
  set $list_size = 0
  set $firstNode = $list->_M_node
  set $curNode = $list->_M_node->_M_next
  while ($curNode != $firstNode)
    printf "List Element %d: ", $list_size
    p (void *) (& ((_List_node *)$curNode)->_M_data)
    set $curNode = ((_List_node *)$curNode)->_M_next
    set $list_size++
  end
end

# tree   functions #

define p_stl_tree_size
  set $tree = ($arg0)
  set $tree_size = $tree->_M_t->_M_node_count
  printf "Tree Size: %d\n", $tree_size
end

define p_stl_tree
  set $tree = ($arg0)
  set $i = 0
  set $node = $tree->_M_t->_M_header->_M_left
  set $end = $tree->_M_t->_M_header
  while ($node != $end)
    set $i++
    printf "NODE %d: ", $i
    set $value = (void *)($node + 1)
    p $value
    if ($node->_M_right != 0)
      set $node = (_Rb_tree_node_base *)$node->_M_right
      while ($node->_M_left != 0)
        set $node = (_Rb_tree_node_base *)$node->_M_left
      end
    else
      set $tmp_node = (_Rb_tree_node_base *)$node->_M_parent
      while ($node == $tmp_node->_M_right)
        set $node = $tmp_node
        set $tmp_node = $tmp_node->_M_parent
      end
      if ($node->_M_right != $tmp_node)
        set $node = $tmp_node
      end
    end
  end
end

# hash   functions #

define p_stl_hash_size
  set $hash = ($arg0)
  set $table = $hash->_M_ht
  set $table_size = $table->_M_num_elements
  set $num_buckets = $table->_M_buckets->_M_finish - $table->_M_buckets->_M_start
  printf "Table Size: %d  (in %d buckets)\n", $table_size, $num_buckets
end

define p_stl_hash
  set $i = 0
  set $hash = ($arg0)
  set $table = $hash->_M_ht
  set $table_size = $table->_M_num_elements
  set $cur_bucket = 0
  set $num_buckets = $table->_M_buckets->_M_finish - $table->_M_buckets->_M_start
  while ($cur_bucket < $num_buckets && $i < $table_size)
    if (*($table->_M_buckets->_M_start + $cur_bucket) != 0)
      printf "Bucket %d:\n--------\n", $cur_bucket
      set $cur_node = *($table->_M_buckets->_M_start + $cur_bucket)
      while ($cur_node != 0)
        set $cur_val = (void*)(&((_Hashtable_node *)$cur_node)->_M_val)
        set $cur_node = ((_Hashtable_node *)$cur_node)->_M_next
        p $cur_val
        set $i++
      end
      printf "\n"
    end
    set $cur_bucket++
  end
end

# Documentation for online gdb help #

document p_stl_list_size
p_stl_list_size : Print size of stl list
end
document p_stl_list
p_stl_list : Print contents of stl list as (void*) pointers. Cast back to actual template type to see the values.
end

document p_stl_vector_size
p_stl_vector_size : Print size of stl vector
end
document p_stl_vector
p_stl_vector : Print contents of stl vector as (void*) pointers. Cast back to actual template type to see the values.
end

document p_stl_tree_size
p_stl_tree_size : Print size of stl trees (sets and maps)
end
document p_stl_tree
p_stl_tree : Print contents of stl trees as (void*) pointers.
For sets, cast back to actual template type to see the values.
For maps, cast back to (pair*) to see the values.
end

document p_stl_hash_size
p_stl_hash_size : Print size of stl hashes (sets and maps)
end
document p_stl_hash
p_stl_hash : Print contents of stl hashes as (void*) pointers.
For sets, cast back to actual template type to see the values.
For maps, cast back to (pair*) to see the values.
end
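To use the functions, save the script to a file and source it from inside GDB. Note that the functions dereference their argument with ->, so pass a pointer to the container. A quick sketch (the file and variable names here are my own, not from the original script):

```gdb
(gdb) source stl-views.gdb
(gdb) p_stl_vector_size &myVector
(gdb) p_stl_vector &myVector
(gdb) p_stl_tree &myMap
```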

Saturday, May 13, 2006

Who is more important than What or How

Do you have an organisation that talks the talk but doesn't actually walk the walk? Do you always sit around and discuss how "it would be better if we just". Do improvements die on the vine?

I've found that who is always more important than what. Once you answer who, everything else becomes easy. Who is responsible for ensuring that the change happens? Who is responsible for the outcome of the task? Who is held accountable when the code (or any work item) fails?

That's the hard part. Once you decide who wants to be (or is to be made) accountable, then it becomes a matter of following through. It moves improvements from individual initiative into the realm of employee performance management.

Otherwise, I've found that everyone sits around waiting for someone else to make the move. Even if someone steps forward and does the work, it will frequently require the agreement of others to use it. If they don't agree, the improvement will die from neglect (see my outsourcing discussion). This increases the frustration for everyone.

Of course, there could be other reasons for the behaviour. It could just be that even if management says they consider something important, they don't really mean it. It could be that people are afraid to accept responsibility; perhaps there is a tendency to shoot the messenger. Perhaps your staff are like little dogs who have been beaten too many times and now flinch whenever someone waves.

It could be many things. Even if any of negative reasons are true, the key is still to make individuals accountable. Once they are accountable for both the good and bad things they do, you will quickly see them start to change how the job is done.

After all, people do like to take pride in their work.

Saturday, April 29, 2006

Build flags declared bad for your health

Too many evils...

We keep getting bitten by this at work. We have several customers using the same codebase, with the only difference being how they are compiled. This isn't a case of one customer needing optimisation, and another not (we have that too). It's completely different code behaviours defined through build flags.

So, let me say it right now. Changing the behaviour of code through compile-time options and then shipping those variants to different customers is bad. Take that compile-time option and make it run time. Otherwise, you can be sure that you will ship the wrong binary. We've done it so many times, it's becoming silly.

Your goal should be to have anyone check out, compile and ship your code to any customer using the same commands.

There are several ways you can get rid of these options. My first preference is to get rid of the option completely. Decide on a way the code will work and go with it. Why do you have the option at all? Is it faster in some way? Why isn't it good enough for all of your customers?

Next in order, I would get rid of the compile time flag completely and make it a run time configuration option. Make your system more configurable! You do have to be careful that you are providing something useful though. Why do you need to decide at all? Is it possible for the code to decide for you? (see previous post on configuration options).
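As a sketch of what that conversion can look like: the config key name ("legacy_format") and the std::map standing in for a real configuration reader are my own assumptions for illustration, not anything from a real product.

```cpp
// A sketch only: moving a per-customer compile-time variant to run time.
#include <map>
#include <string>

// Stand-in for however your product loads its configuration file.
using Config = std::map<std::string, std::string>;

// Before: the choice was baked in at build time, e.g.
//   #ifdef LEGACY_FORMAT
//     ... legacy code path ...
//   #endif
// and a different binary was shipped to each customer.

// After: one binary for everyone; each customer flips the option in their
// configuration instead of receiving a special build.
bool useLegacyFormat(const Config &cfg) {
    auto it = cfg.find("legacy_format");
    return it != cfg.end() && it->second == "true";
}
```

With this shape, every customer gets the identical binary, and shipping the wrong variant becomes impossible; the worst case is a wrong config line, which is visible and fixable on site.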

Finally, if you absolutely must have a compile time option (for performance reasons or otherwise), build and ship all variants to all customers. Delay decisions as long as possible. It may take you another 2-3 hours to build both variants, but it will save you a round trip to your customer.

All of these will save you the embarrassment of yet again having your customer come back to you and say, "Don't you know how to compile your code?".

I hate hearing that question.

Inlines are Evil!

We just ran into an interesting problem with inline functions and shared libraries. Consider the following piece of code:

// inlineContainer.h
class inlineContainer {
public:
    inline void setA(int a) {
        _a = a;
    }
    void setB(int b);

private:
    int _a;
    int _b;
};
Now, consider the following piece of code (test.cc):

#include "inlineContainer.h"

void testMe() {
    inlineContainer mine;
}

The implementation of setB lives in another file that is compiled into a shared library. Additionally, nothing in test.cc calls setA. Most people would expect that when you compile test.cc into test.o, it wouldn't have any symbols from inlineContainer defined.

They'd be wrong.

The C++ standard says that the compiler should include a non-inline version of a function, just in case it can't inline it all of the time. GCC doesn't wait for the function to be called before it creates a copy. It creates a copy as soon as the inline is seen. So, test.o ends up with a copy of setA, albeit a weak one.

Still, that wouldn't be too much of a problem in itself. However, the shared library will also have a weak copy. This leads us to the actual problem: what happens when we want to change the implementation of setA? Even though the shared library contains the new version, unless we recompile test.o, we will more than likely end up with an interesting, hard-to-find bug.

The problem is the linker. When there are multiple copies of a symbol of equal "weakness", the first one seen by the linker is used. Since the shared library is seen after test.o, the linker will use the old copy in test.o.

So, we end up with a very strange bug. It won't happen every time the function is called, only in the situations where the compiler decides the function shouldn't (or can't) be inlined. That means that invocations separated by a couple of lines can have vastly different behaviours.

Of course, this all breaks the C++ "one definition rule", which requires an inline function to be defined with identical meaning in every translation unit. Additionally, compilers and linkers are not required to detect violations of the rule. GCC is obviously one of the ones that can't.

How do we fix it?

First off, don't expose inlines to code that isn't going to be part of the same shared library. If it isn't part of your exposed interface, don't let clients see it. Do this through private members, friends, or any favourite mechanism you have. One way would be to put the inlines inside a #if that is only enabled for code that is allowed to see them.
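A minimal sketch of that advice, with the header/library split shown in comments. The class is the post's running example; keeping _a and _b public is my own shortcut so the behaviour is easy to observe.

```cpp
// inlineContainer.h -- what clients of the shared library are allowed
// to see.  Declarations only: no inline bodies escape into client
// object files, so clients never own a weak copy of setA.
class inlineContainer {
public:
    void setA(int a);   // body deliberately NOT in the header
    void setB(int b);

    int _a;   // public only so the example is easy to inspect
    int _b;
};

// inlineContainer.cpp -- compiled into the shared library only.  The
// body can change in a new library version without recompiling any
// client, because the library holds the one and only definition.
void inlineContainer::setA(int a) {
    _a = a;
}

void inlineContainer::setB(int b) {
    _b = b;
}
```

The cost is an out-of-line call for what used to be an inline accessor; the benefit is that a patched library actually changes the behaviour of already-shipped executables.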

However, that won't solve the problem for code that has already been delivered. That is harder. In fact, I'm not sure it's even possible. Testing with G++ produces interesting results:

Consider two files. The first, test1.cc, defines setA inline; it deliberately sets _a to 1 (rather than a) so we can tell which copy ends up being called:

// test1.cc
#include <iostream>

class inlineContainer {
public:
    void setA(int a);
    void setB(int b);

    int _a;
    int _b;
};

inline void inlineContainer::setA(int a) {
    _a = 1;
}

void inlineContainer::setB(int b) {
    _b = b;
}

void test_func(inlineContainer &a);

int main(int argc, char **argv) {
    inlineContainer funky;
    funky.setA(5);
    test_func(funky);
    std::cerr << funky._a << std::endl;
}

The second, test2.cc, defines setA as a normal (strong) symbol with the corrected behaviour:

// test2.cc
class inlineContainer {
public:
    void setA(int a);
    void setB(int b);

    int _a;
    int _b;
};

void inlineContainer::setA(int a) {
    _a = a;
}

void test_func(inlineContainer &a) {
}

# compilation instructions
g++ -fPIC -c test2.cc
g++ -shared test2.o -o libtest2.so
g++ test1.cc libtest2.so -o testA
g++ test1.cc test2.o -o testB

Now, what happens when we run testA?

[jpollock@pollock ~]$ ./testA

When we run testB?

[jpollock@pollock ~]$ ./testB

As you can see, the shared library version ALWAYS uses the copy produced in the main executable, even though it is defined as weak. The statically linked version uses the newer version in test2.o, since it is a normal, strong symbol.

It becomes even more interesting if we change test1.cc to remove the call to setA. When we recompile and re-run, we see that while setA is defined in testA, the version in the shared library is the one that is used.

That indicates that regardless of the weakness of the symbol, if it's in the main executable and called, it will always be used in preference to the one in the shared library.

Scary, huh? That means we can't override that symbol. Even if we ignore the problems presented by versions that are actually inlined, we still cannot override that symbol!

That indicates to me that the only way we can change setA is to either hack the name of setA to ensure that the new version of the library has a different name, or redeliver everything that contains setA as a weak (or otherwise) symbol.

Of course, you could be smart and not expose inline functions outside of your library.

Sunday, March 05, 2006

Offshoring, Smoffshoring, Who's Worried?

The company I'm currently employed by attempted to offshore (outsource) the majority of their support work. The experience has taught me a few things. Things about them, and things about us. It was all interesting to watch.

First, people become software engineers for different reasons. At one end of the spectrum you have the people who become software engineers because they love the work. These are the people who run servers at home, write their own software, and generally love computers. At the other, you have people who become software engineers because it's a job with great pay and good job security. They don't have computers at home. They are pure 9-5'ers, and actually have lives outside of work and World of Warcraft.

The engineers we dealt with in the other company were at the "good job" end of the continuum. This isn't necessarily a bad thing, but it did filter into their work product, when they regularly took the easy way out.

Next, outsourcers are out to make a profit! If you are expecting them to do the best thing for your company, you are in for a nasty surprise! For example, bug fixes were as small as possible. Not necessarily a bad thing, except that they were lazy small fixes instead of pretty small fixes. Yes, entirely subjective, but people out there will hopefully understand. They would consistently ignore other problems with the same piece of code, comment out code instead of removing it, put the fix in the easiest place, rather than the correct place, that sort of thing. Refactoring code to remove problems? Never going to happen. Making recommendations back up the chain on where problems were commonly happening? In your dreams! This is one of the reasons they are cheaper, they don't provide any free extras. Don't worry, even if you do decide to outsource, this can be good for your organisation, it will help flush out any hidden jobs that people used to "just do".

Another way they reduce costs to make a profit is through their staffing selections. The internal team that was being outsourced was staffed with senior engineers, the external team was staffed with predominantly junior engineers. The problem was that we had to keep the internal team around to both monitor the work of the external team, and to fix the really important faults. The external team simply wasn't ready to fix the nasty faults.

CMM Level 5 doesn't mean quality. They may have been CMM Level 5, but they couldn't ship a patch that worked. They would do all of the standard beginner mistakes. They shipped from dirty build trees. They failed to tag the code they shipped. They failed to document the tags, etc... All things that are already documented in my employer's existing processes.

Generally they worked differently to us. If they could ask a question to make the problem go away for a week, they would. If they thought no one was looking, they would skip steps. This is probably more a perception thing than anything else. If we had kept up the project, I feel that they would have figured out how we wanted them to work, and come around.

Finally, at the end of it all, they weren't any cheaper!

So, the work is being pulled back in. This seems to be very popular at the moment.

Don't think for a second that the failure of the project was entirely their fault, that we didn't fail here too. This could have been an extremely successful project.

There were several problems inside the company that caused the failure. First, senior management failed to communicate why the company was performing the outsourcing in the first place. At first, the reason was cost, then it was scalability, after that it was silence. Since they hadn't specified why they were outsourcing the team, they couldn't tell if it succeeded or not.

That lack of goals caused problems inside the company. Since the company is dominated by engineers, they saw it as a threat, and perceived the project as an effort by the senior management to reduce their power in the company. The relationship between support and engineering was already problematic (frequently adversarial), outsourcing just made it worse. The engineering team implemented what was effectively their own internal support team. They changed development practices that were (at the beginning) unworkable for support. Generally, they increased the costs for the support team.

People on the internal team weren't interested in seeing it succeed either. It is very easy to kill something when you are on the team. People don't realise that most projects can be killed simply by inaction. Here, we just assumed that the external team was doing their jobs properly, we didn't check. That way, when they inevitably failed in their deliveries, it was obviously their fault. At the first sign of trouble, we should have started checking all of their work, but we didn't, letting them sink or swim by their own abilities. Hardly behaviour you see when you want something to succeed.

Inevitably, political support evaporated. There was a merger with a third party, and most of the senior management were removed from their jobs. The new management team was more engineering focused than the last. This resulted in support reporting into the engineering team. The first thing the new manager did was cancel the outsourcing project.

Was outsourcing a good thing? I wasn't sure at first, but now I am. They gave easy scalability. They kept us honest - it's impossible to hand software to a third party without proper documentation, let alone code that doesn't compile. They exposed hidden costs, and made people accountable who previously weren't. As a shareholder, I had initial concerns about loss of institutional knowledge, but saw the final result as a necessary step to growing the company past its existing cash cows.

A final thought... If all of your engineers are busy providing ongoing support for your existing products, who is writing the new ones? To create a new product, you would have to hire and create a new team with all of the risks inherent in that. If a product isn't in active development, do you really need to worry about institutional knowledge loss?

I'm looking forward to seeing what happens next.

Monday, February 06, 2006

Broken Windows

Many researchers have discussed broken windows. This was first noted by psychology researchers who were discussing urban decay. It basically means that if you let a broken window or graffiti remain, it tends to lead to further social decline in the neighbourhood.

Look around the office. How many broken windows are there? Does it mirror the broken windows in the software? Are your clocks working? Do you have broken computer equipment sitting against a wall for years? How about the whiteboard with outdated project information on it? A door that doesn't properly shut? Urinals that leak? Chairs that are broken?

What message is being given to employees when the physical plant is in such a state? Is it a "Quality is Job #1" message? Or is it, a "Cheap as Possible" message?

So, what does it say about a company when the clocks don't work? That they don't have the money to fix it? Perhaps it's a reflection of what staff are seeing from further up the chain. Are either of those messages you want to pass on to your staff? Imagine the message that would be given by your CEO coming downstairs and changing the battery in the clocks. It would certainly be a powerful one.

I find it funny that management sends out these messages and then hounds staff for better quality. No wonder teams are always confused.

We expect high quality work from our software developers. To get that level of quality, we need to show them that we consider quality to be important.

Put batteries in the clock, find offsite storage for the unused equipment (hint, it's very, very cheap!), get rid of the equipment that is broken and generally fix things as they break.

Your staff is watching.

Friday, February 03, 2006

Teamwork and Coaches

I went and saw "Chicken Little" a couple of months ago. In the middle of the movie, Chicken Little joins a baseball team. It was the final game of the season, bottom of the ninth, two out. Chicken Little is finally up to bat, with their star player on deck. The coach tells him to take the pitch - no matter what. The coach explained that his strike zone was so small, the pitcher could never hit it. What does Chicken Little do?


After missing twice, he manages to get a base hit. The first base coach tells him to stop at first. Nope. The third base coach tells him to stop at second. Nope. Third? You guessed it, he doesn't stop. There is the standard huge pile-up at home plate, and luckily for him, he is safe.

I sat there very angry and disappointed with the writers. They had Chicken Little completely ignore how teams work. From the start of the movie to the end Chicken Little complains about not being listened to, that nobody takes him seriously. Yet, at the first real opportunity, he ignores the people he wants to impress. He goes his own way, regardless of what is best for the team.

In baseball you are frequently asked to sacrifice yourself for the team. You take a pitch to test the pitcher, you hit a sacrifice fly, you lay down a sacrifice bunt. It's all about getting people in position to score. Chicken Little was perfectly placed to get into position to score. It wouldn't have been as glamorous as the inside-the-park home run on errors, but it would have won the game.

Team sports require sacrifices. They require people to occasionally "take one for the team".

So, how does this relate to work? Teamwork is just as important in business. Probably more so. Yet again, it's a lesson that we should have learned when we were children, but we somehow manage to forget.

How do we forget it? We forget it by ignoring the coach. If we don't like how we're told to do the work, we ignore the instruction and do it our own way. We only look at how the request affects us, without considering how it affects the rest of the business.

This behaviour is very contagious. If one person starts to act in a greedy fashion, others will tend to follow, especially if the greedy individual manages to get away with it.

We see it again and again: the team is more important than the players. Consider what happens to All-Star teams at the Olympics. The team that plays as a team wins. Ignoring the USA Basketball team. :)

In high school I used to be on the swim team (among others). Swimming is usually considered an individual sport. You swim your events and succeed or fail on your own. However, in Colorado Springs it doesn't work that way. The schools have swim meets, and are ranked based on their performance. Our swim team consistently won meets that it wasn't expected to. We won for the simple reason that the coach would change what events we swam. Pretty sacrilegious in swimming. He would look at the prior performance of the opposing team, and put us in events where we would perform the best against them.

For example: you're a butterfly swimmer going up against a team that specialises in fly, but you have a freestyle time that will put you in the top 3? Tonight, you swim freestyle, end of story.

It was that flexibility, that sacrifice for the team that won us events. It will win in the business world too. Perhaps the jocks do have something right, and us nerds should learn from them.

Wednesday, January 11, 2006


You're a software developer. You've been given a piece of software that throws exceptions. You want to be a good developer and help out. What do you do?

int main() {
    try {
        [... do some stuff ...]
    } catch (ourExceptionType &e) {
        DBG("Ouch, we've got an unhandled exception!");
        return -1;
    } catch (...) {
        DBG("Ouch, not only is it an unhandled exception, but I don't know what it is!");
        return -1;
    }
    return 0;
}
You would think that you're being helpful. You've just stopped a core dump when the program manages to get an exception all the way back out to main. Of course you would be wrong, very, very wrong.

When a program crashes, it should be very loud. It should scream its death all the way down. It should give developers enough information to track down the cause and fix it. You've just managed to do several things with this "helpful" piece of code that make an already difficult job impossible.

First, when exiting, all you provide is a debug message. These are either turned off or compiled out in production, so it isn't going to be available. Thanks, man.

Second, you catch the exceptions! You aren't going to do anything about it, you're not going to clean up and continue! So, why catch it? Catching it completely destroys the stack information from the exception! Congratulations! You have no idea whatsoever where the exception originally came from.

Fine, you say, I'll re-throw the exception. Nope, doesn't help. Let's see what GCC does with that:

#include <stdio.h>
#include <stdexcept>

class MyException : public std::runtime_error {
public:
    MyException() : std::runtime_error("MyException") { }
};

int myalpha() {
    throw MyException();
    return -1;
}

int myjunk() {
    return myalpha();
}

int main(int argc, char *argv[]) {
    try {
        return myjunk();
    } catch (MyException &e) {
        fprintf(stderr, "Boom!\n");
        throw;
    }
}
Looking at the stack trace this produces we see:

#0  0xff21e2f4 in _libc_kill () from /usr/lib/
#1  0xff1b57b8 in abort () from /usr/lib/
#2  0xff353100 in __cxxabiv1::__terminate(void (*)()) (
    handler=0xff1b56b0 ) at
#3  0xff353150 in std::terminate() () at
#4  0xff353314 in __cxa_rethrow () at
#5  0x00010d80 in main (argc=1, argv=0xffbff834) at

Nothing at all about where the exception was originally thrown from. We only get information on where the re-throw that killed the application came from. So, do not be a Good Samaritan. Don't catch those exceptions unless you mean to do something about them!