Monday, November 30, 2009

Topic Relevant Comment Spam

On my previous post, "Streaming Audio to your iPhone", I received a very odd comment. It was related to the general subject, so it didn't immediately look like spam, but it didn't actually engage with the post.

Here's the comment (I've removed it from the original post):

Web casting, or broadcasting over the internet, is a media file (audio-video mostly) distributed over the internet using streaming media technology. Streaming implies media played as a continuous stream and received real time by the browser (end user). Streaming technology enables a single content source to be distributed to many simultaneous viewers. Streaming video bandwidth is typically calculated in gigabytes of data transferred. It is important to estimate how many viewers you can reach, for example in a live webcast, given your bandwidth constraints or conversely, if you are expecting a certain audience size, what bandwidth resources you need to deploy.

To estimate how many viewers you can reach during a webcast, consider some parlance:
One viewer: 1 click of a video player button at one location logged on
One viewer hour: 1 viewer connected for 1 hour
100 viewer hours: 100 viewers connected for 1 hour…

Typically webcasts will be offered at different bit rates or quality levels corresponding to different user’s internet connection speeds. Bit rate implies the rate at which bits (basic data units) are transferred. It denotes how much data is transmitted in a given amount of time. (bps / Kbps / Mbps…). Quality improves as more bits are used for each second of the playback. Video of 3000 Kbps will look better than one of say 1000Kbps. This is just like quality of a image is represented in resolution, for video (or audio) it is measured by the bit rate.

It was posted by "Andy". I was ready to post a comment thanking Andy for the additional information, but I decided to look to see if it was copied from somewhere else.

It seems that the exact same comment gets placed on just about any blog post that mentions streaming. A quick Google search turned up a lot of identical copies posted over the past couple of months.

Seems "Andy" has been a busy boy, and is actually a dirty, rotten spammer.

"Andy" (blogger), who also posts as "andylock", or "Andy Lock" (Facebook) is an automated spam program for vsworld.com, and the website is a flash-only website. I didn't stay there long enough to really figure out what they were selling, but it looked like some sort of contracting agency in India.

Still, I found it interesting that it wasn't immediately obvious that the comment was spam.

Monday, November 23, 2009

Audio Streaming to the iPhone, Take Two

I wasn't completely happy with the results of using M3U files. While it did allow me to specify all of the files, I wasn't able to skip forward and back through the song list.

It turns out that the M3U support is there primarily to support live streams. For example, if you have a live stream from a video source (like a TV tuner card), the server records it, splits it into small chunks and converts it to H.264. The iPhone will then use the M3U file as an index to the individual data files, refreshing the M3U file occasionally. I'll have to set that up next. :)
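
For the curious, that index is just a rolling plain-text playlist pointing at the most recent chunks; it looks roughly like this (Apple's HTTP Live Streaming index format, with made-up segment names):

#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:2680
#EXTINF:10,
http://10.10.10.5/live/segment2680.ts
#EXTINF:10,
http://10.10.10.5/live/segment2681.ts
#EXTINF:10,
http://10.10.10.5/live/segment2682.ts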

So I went back to Google to see what I could find. I had come across the <OBJECT> tag in my previous searches, but I had discarded it as too difficult. I didn't want to work in HTML, and object embedding just seemed dirty. However, if I wanted skip forward/back, it looked like I was going to have to use it. I went through and modified the Perl program from yesterday to produce an HTML page for each M3U file. It wasn't until I got to the end that I realized I had done it wrong.

Lots of things don't work with the OBJECT tag, primarily the GOTO command. It seems that the proper way to embed things into a document is with the EMBED tag. That isn't to say that the EMBED tag is prettier; it isn't, they are both nasty, and both are platform and player specific. Luckily, this only has to work on the iPhone/iPod Touch; I would hate to have to support it across multiple clients.

The first nasty surprise is that while you can have multiple songs in a list, you are limited to 255 of them. Even worse, the songs aren't referenced from elsewhere in the document; the entire list lives inside the single EMBED tag. That makes it harder to do dynamic, on-the-fly modification of the list. No small CGI to do shuffles here!

Still, 255 songs is enough to cover pretty much all of my artist directories.

It also seems that Mobile Safari ignores GOTO commands. In regular Safari, you are able to loop back around to the start of the playlist by putting qtnext255="GOTO0" into the embed tag. It looks like Apple doesn't want looping playlists on the iPhone.

Next, the iPhone ignores the "autohref", "autoplay" and "autostart" parameters. It always waits for user interaction. This is because the object is not really embedded; it takes full control of the screen, and if it did start automatically it would cause problems on many other sites. It's a small pain, but we'll survive.

I still wish it properly supported M3U files. The seamless transitions are nice. With the QTNEXT, there is a definite pause between tracks.
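
For reference, a generated PLAY_ALL.html ends up looking roughly like this, with one qtnextN attribute per remaining song (the artist directory and file names here are made up):

<html>
  <head>
    <title>Some Artist</title>
    <meta name="viewport" content="width=device-width; initial-scale=1.25"/>
  </head>
  <body>
    <p>To play all music in the "Some Artist" directory, click the button below</p>
    <embed src="http://10.10.10.5/mp3/Some%20Artist/01%20First%20Song.mp3"
      autoplay="true"
      controller="true"
      qtnext1="<http://10.10.10.5/mp3/Some%20Artist/02%20Second%20Song.mp3> T<myself>"
      qtnext255="GOTO0">
    </embed>
  </body>
</html>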

Here is the updated Perl code:

#!/usr/bin/perl

use File::Find;
use File::Basename;

use vars qw/*name *dir *prune/;

my $URL_BASE= "http://10.10.10.5/";

my @M3Us;

sub createPlaybackHTMLHeader {
    my ($filename, $target, $song) = @_;

    my $title = basename(dirname($filename));

    # Open the target HTML file for writing.
    open(HTML, ">$target") or die "Can't write $target: $!";

    # Write the HTML head and the start of the embed tag. The tag is
    # deliberately left open so that the qtnextN attributes can be
    # appended one song at a time; the footer closes it.
    print HTML <<END;
<html>
  <head>
    <title>$title</title>
    <meta name="viewport" content="width=device-width; initial-scale=1.25"/>
  </head>
  <body>
    <p>To play all music in the "$title" directory, click the button below</p>
    <embed src="$song"
      autoplay="true"
      controller="true"
END

    close(HTML);
}

sub createPlaybackHTMLFooter {
    my ($filename, $target) = @_;

    # Reopen the target HTML file for appending.
    open(HTML, ">>$target") or die "Can't append to $target: $!";

    # Close the embed tag (looping back to the first song) and the page.
    print HTML <<END;
      qtnext255="GOTO0">
    </embed>
  </body>
</html>
END

    close(HTML);
}

sub createPlaybackAddSong {
    my ($filename, $target, $song, $count) = @_;

    # Append one qtnextN attribute: play $song, then return to this tag.
    open(HTML, ">>$target") or die "Can't append to $target: $!";
    print HTML <<END;
      qtnext$count="<$song> T<myself>"
END

    close(HTML);
}

sub convertM3UToHTML {
    my ($m3ufile, $target) = @_;

    print "Converting $m3ufile to $target...";

    if ( ! -f $m3ufile ) {
        print "No m3ufile, returning\n";
        return;
    }

    # Read the M3U file; each line is the URL of one song.
    open(M3U, $m3ufile) or die "Can't open m3u file";
    my @songs = <M3U>;
    close(M3U);

    if (@songs > 255) {
        print "Too many songs! Oh well\n";
        @songs = @songs[0..254];
    } elsif (@songs == 0) {
        print "No songs, returning\n";
        return;
    } else {
        print "No problems!\n";
    }

    # The first song becomes the embed src; the rest become qtnextN entries.
    my $first_song = shift @songs;
    chomp $first_song;

    createPlaybackHTMLHeader($m3ufile, $target, $first_song);

    my $count = 1;
    foreach my $song (@songs) {
        chomp $song;
        createPlaybackAddSong($m3ufile, $target, $song, $count);
        $count += 1;
    }

    createPlaybackHTMLFooter($m3ufile, $target);
}

sub entering {
    print "entering Directory boundary ", $File::Find::name, "\n";

    push @M3Us, $File::Find::dir;

    # Don't descend into .AppleDouble metadata directories.
    if ($File::Find::name =~ /\.AppleDouble/) {
        return;
    }

    return sort(@_);
}

sub leaving {
    print "leaving Directory boundary ", $File::Find::name, "\n";

    my $directory = pop @M3Us;

    # Turn this directory's accumulated playlist into a playback page.
    my $source = $directory . "/PLAY.m3u";
    my $target = $directory . "/PLAY_ALL.html";

    convertM3UToHTML($source, $target);
}

sub wanted {
    print "Checking ", $File::Find::name, "\n";

    if ( $File::Find::name =~ /\.mp3/ ) {
        print "MP3 Found ", $File::Find::name, "\n";

        # Turn the file path into a URL on the web server.
        my $url = $File::Find::name;
        $url =~ s/ /%20/g;
        $url =~ s/\/export\///g;
        $url = $URL_BASE . $url;

        # Append the URL to the PLAY.m3u of every enclosing directory.
        foreach my $dir (@M3Us) {
            my $m3u_file = $dir . "/PLAY.m3u";
            open (M3U, ">>$m3u_file") or die "Boom!";
            print M3U "$url\n";
            close(M3U);
        }
    }
}

sub cleaner {
    if ($File::Find::name =~ /PLAY.m3u/ ) {
        print "Removing ", $File::Find::name, "\n";
        unlink($File::Find::name);
    }

    if ($File::Find::name =~ /PLAY_ALL.html/ ) {
        print "Removing ", $File::Find::name, "\n";
        unlink($File::Find::name);
    }
}

find ({ wanted => \&cleaner},"/export/mp3");

find ({ wanted => \&wanted , preprocess => \&entering, postprocess => \&leaving},"/export/mp3");

open(M3U, "/export/mp3/PLAY.m3u") or die "Unable to open play";

my @main_m3u = <M3U>;
close(M3U);

my @music = grep(!/AudioBooks/, @main_m3u);
my @random = sort { int(rand(3))-1 } @music;

open(M3U, ">/export/mp3/MUSIC.m3u") or die "Unable to open MUSIC.m3u";

print M3U @music;
close(M3U);

convertM3UToHTML("/export/mp3/MUSIC.m3u", "/export/mp3/MUSIC_PLAY.html");

open(M3U, ">/export/mp3/RANDOM.m3u") or die "Unable to open RANDOM.m3u";
print M3U @random;
close(M3U);

convertM3UToHTML("/export/mp3/RANDOM.m3u", "/export/mp3/RANDOM_PLAY.html");

Sunday, November 22, 2009

Streaming Audio to your iPhone

I'm trying to hide the CDs and DVDs that are steadily taking over my house. I've managed to get rid of all of the DVDs; they've all been ripped to the server and are now happily up in the attic. The last problem is the CDs.

Yes, I have them all ripped to the server. The problem is that I don't have any way of getting at them everywhere in the house. Specifically, my wife likes to listen to murder mysteries as she cooks (me, I like cooking to the news). Since I don't have a network enabled stereo, I have to put up with a stack of CDs sitting on the counter.

I have given up waiting for a network-enabled stereo system. The Squeezebox just doesn't do it for me. I want something small, with WiFi and wired networking, a radio, a clock and a streaming MP3 player. It should also have M3U support and a "shuffle" mode.

I decided to see what I could do with what we already have...

  • First realization. We both have iPhones.
  • Second realization. We have a WiFi network in the house.
  • Third realization. The iPhone will happily stream music off of a web page.

So the project was formed. First, I tried the SqueezeBox server (previously SlimServer), a server which serves up streams to network-attached stereo equipment. I've got a 1st generation SliMP3, and it was a great device, but the server just doesn't play well with the iPhone. So I decided to go the bare-bones route: set up Apache on my file server so that it serves up music to the iPhones in the house. Then the phones can be used to stream the music and books wherever anyone is in the house. I would finally be allowed to hide all of the CDs! Perfect!

Setting up Apache was pretty easy. I followed the instructions that are easily Googled. I didn't create any HTML files, but I did configure the autoindex module (/etc/apache2/mods-available/autoindex.conf). My changed settings were:

IndexOptions SuppressDescription SuppressSize SuppressLastModified
IndexOptions SuppressHTMLPreamble
HeaderName /include/iPhone_Header.html

NOTE: HeaderName is relative to the DocumentRoot, not the filesystem.

This allowed me to keep the listing to just the filenames, and replaced the standard HTML header with one of my own:

<html>
<head>
<meta name="viewport" content="width=device-width; initial-scale=1.25"/>
</head>
<body>

This header provides a hint to the iPhone about how to set the viewport. It seems to work for my listings, making the file list usable.

Once we have that, we've got a server with a directory listing that we can scroll through, and we can play individual MP3s. We don't have any playlists though, which is pretty unpleasant.

Bring in the Perl!

I've written a small piece of Perl code to iterate over my MP3 tree and create M3U files in each directory, containing URLs for all of the MP3s that are children of that directory. Because of how I store my MP3s, that gives me album, artist and full playlists. I then randomize the full playlist to give me a shuffled list.

The new iPhone release is able to play M3U files, with one problem. You can't skip to the next track, which is pretty poor. But now I've got a method of delivering music to any room in the house. I just have to get one of those 3rd party iPod speaker systems, and I can get rid of all of the CDs in the kitchen!

#!/usr/bin/perl

use File::Find;

use vars qw/*name *dir *prune/;

my $URL_BASE= "http://10.10.10.5/";

my @M3Us;

sub entering {
    print "entering Directory boundary ", $File::Find::name, "\n";

    push @M3Us, $File::Find::dir;
    return sort(@_);
}

sub leaving {
    print "leaving Directory boundary ", $File::Find::name, "\n";

    pop @M3Us;
}

sub wanted {
    print "Checking ", $File::Find::name, "\n";

    if ( $File::Find::name =~ /\.mp3/ ) {
        print "MP3 Found ", $File::Find::name, "\n";
        my $url = $File::Find::name;
        $url =~ s/ /%20/g;
        $url =~ s/\/export\///g;
        $url = $URL_BASE . $url;
        foreach my $dir (@M3Us) {
            my $m3u_file = $dir . "/PLAY.m3u";
            open (M3U, ">>$m3u_file") or die "Boom!";
            print M3U "$url\n";
            close(M3U);
        }
    }
}

sub cleaner {
    if ($File::Find::name =~ /PLAY.m3u/ ) {
        print "Removing ", $File::Find::name, "\n";
        unlink($File::Find::name);
    }
}

find ({ wanted => \&cleaner},"/export/mp3");

find ({ wanted => \&wanted , preprocess => \&entering, postprocess => \&leaving},"/export/mp3");

open(M3U, "/export/mp3/PLAY.m3u") or die "Unable to open play";

my @main_m3u = <M3U>;
close(M3U);

my @music = grep(!/AudioBooks/, @main_m3u);
my @random = sort { int(rand(3))-1 } @music;

open(M3U, ">/export/mp3/MUSIC.m3u") or die "Unable to open MUSIC.m3u";
print M3U "@music";
close(M3U);

open(M3U, ">/export/mp3/RANDOM.m3u") or die "Unable to open RANDOM.m3u";
print M3U "@random";
close(M3U);

Sunday, November 15, 2009

Oracle's BETWEEN keyword

I came across an Oracle keyword that was new to me today: BETWEEN.

At first, I thought it was pretty cool. I would be able to simplify the majority of the range checks that I perform. Before I really started using it though, I decided to look at what it actually did. Ouch!

Google, my ever-present documentation source, told me that BETWEEN doesn't work the way I thought it would. It's inclusive of both ends of the range. Who would want that? You almost never want a range that is inclusive of both ends! Otherwise, elements on the boundary between contiguous ranges end up belonging to both of them.

Let's try an example:

We want all rows whose my_date falls on today's date.

  select id from test_table where my_date between 
    trunc(sysdate) and trunc(sysdate) + 1;

Great, you would think. Almost too easy!

You would be correct, too: it was too easy, and it doesn't work. Since BETWEEN is inclusive of the end of the range, you also get any rows with a my_date of exactly midnight tomorrow (trunc(sysdate) + 1). What you really want is:

  select id from test_table where 
      (my_date >= trunc(sysdate)) and 
      (my_date < trunc(sysdate) + 1);

When using new keywords and abstractions, you should always know what they are doing.

Monday, November 02, 2009

std::auto_ptr and GDB.

I needed to gain access to the contents of an auto_ptr inside of GDB. However, GDB doesn't like the overloaded -> operator, so the simple foo->fnImInterestedIn() doesn't work. Here's the simple pattern:

#include <memory>

class bar_t {
  public:
     int fnImInterestedIn();
};

std::auto_ptr<bar_t> foo;

// In GDB, reach through the auto_ptr's _M_ptr member directly:
(gdb) p ((struct bar_t *)foo._M_ptr)->fnImInterestedIn()

Hulu Proxy Apocalypse

It seems that the great Hulu apocalypse has hit more than just Witopia.net; Amazon's EC2 instances also seem to be blocked.

Of course, there are a tonne of other cloud providers out there, several of them even cheaper than Amazon. Personally, I'm still with Witopia. Like any good company, they had a new address range up immediately.

Hulu is stuck in the same game as Apple: they're playing whack-a-mole with the hackers. The only problem is that every time they want to block an access method, it costs them money. However, it is free for us to invent a way around a block (it's a hobby), and there are a lot more of us than there are developers at Hulu.

This is yet another example of "Don't piss off the nerds".

Personally, here are the lessons that I would take from this.

  1. There is a market for international access. People are willing to _pay_ for it. I am currently paying US$12/month to access Hulu, Pandora, and TV.com, and I would pay that for unfettered access to Hulu.
  2. Hulu isn't going to win with blacklists. They're going to have to implement a whitelist, which is a lot more expensive to maintain.
  3. Not a single person who was using Hulu simply stopped watching TV when Witopia and Amazon were blocked; they just went back to BitTorrent.
  4. Don't piss off the nerds. They can out-spend you.

There are plenty of cloud providers out there. The same solution will probably work on other sites. Install Squid and give it a try!

Saturday, September 05, 2009

Using Amazon EC2 to access Hulu

As more and more content moves onto the Internet, it is frequently provided on a region-by-region basis. A lot of people want to have access to Hulu, Pandora, ABC, NBC, Netflix and the US Amazon and iTunes online stores.

I was one of them. S92A also provided an impetus to my research. If downloading US content was going to result in disconnection, I needed another way to get my North American TV fix. It would be even better if it was legal.

A little-known section of the NZ copyright law makes it legal to break DRM if the only purpose is to provide a region lock. To me, that indicates that if I can get around the geographic IP block on these web sites, I am no longer breaking NZ copyright law by watching the shows.

Perfect.

There are a couple of methods to do this. The ultimate method is using an OpenVPN server on EC2. I didn't start there. I started by using Squid.

First, you will need to learn how to construct an Amazon EC2 instance. This requires setting up an account, downloading the tools and starting an instance. Nothing too difficult, and all described by Robert Sosinski.

Since this will be a network proxy, we don't need a fast CPU, or a lot of memory. The smallest EC2 machine image is perfectly usable. I used a Fedora instance, since that was what I was familiar with at the time.

From the steps in Robert's instructions, I would leave out allowing access to port 80 (ec2-authorize default -p 80). We don't need it for this.

Now that you have a working image, we need to get Squid working. A funny aside: you can tell how mature the Internet is getting by the search results for open source project names. It used to be that if you Googled "Squid" you got the HTTP proxy. Now you get cephalopods.

I wanted a set of instructions that could be easily scripted so that I didn't have to leave the instance running, or store anything on S3, Amazon's storage system.

export EC2HOST=ec2-xx-xxx-xx-xx.compute-1.amazonaws.com

First we install and start squid.

ssh -i ec2-keypair root@$EC2HOST  "yum -y install squid"
ssh -i ec2-keypair root@$EC2HOST  "/etc/init.d/squid start"

Finally, we set up an SSH tunnel from our local machine to the Squid proxy.

ssh -i ec2-keypair -N -L3128:localhost:3128 root@$EC2HOST 

To make use of the proxy, all we need to do is point Firefox (or your preferred browser) at localhost:3128. Voila: we are now browsing from a US address.
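
Before pointing the browser at it, a quick way to confirm the tunnel is actually carrying traffic is to make a request through the local end from a script. A rough Perl sketch (the target URL is just an example; any page will do):

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;

# Send a request through the SSH tunnel's local end (the Squid proxy).
my $ua = LWP::UserAgent->new;
$ua->proxy('http', 'http://localhost:3128/');

my $res = $ua->get('http://www.hulu.com/');
print $res->is_success
    ? "Proxy works: "   . $res->status_line . "\n"
    : "Proxy problem: " . $res->status_line . "\n";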

However, if we try to use Hulu, only some of the videos work. We don't get the "not available in your region" error message anymore; instead we get an "unable to play the video at this time". Something else is going on.

Hulu is using multiple layers of security. They are not only checking the source of the HTTP stream, the actual RTMPE stream is protected as well. Time to add more stealth.

First, we tell Squid not to reveal to anyone downstream who it is proxying for:

ssh -i ec2-keypair root@$EC2HOST  "echo "forwarded_for off >> /etc/squid/squid.conf"
ssh -i ec2-keypair root@$EC2HOST  "/etc/init.d/squid restart"

However, that doesn't fix all of the problems. Reading up on the protocol that the Flash player uses (RTMP), we see that while it will tunnel over HTTP, it will first try to make a direct connection. It is that direct connection which is causing us problems, so we block it on the local machine, forcing the player to fall back to tunnelling over HTTP through the proxy.

sudo ipfw add 2000 deny tcp from any to any 1935 out

Now, when we try to use Hulu, we see that all of the videos are working, the RTMP stream is properly using the HTTP proxy, and Hulu is no longer restricting our access.

However, this isn't perfect. Amazon EC2 seems to rate-limit the instances. Even though you are paying per byte of transfer, EC2 doesn't let you have more than 1 Mbps per connection. That means that while we can watch Hulu, we can't get reliable access to the HD content.

So, how expensive is it?

Here's some math....
  • NZ Sky subscription, basic plan (no movies, no sports): NZ$11.74/week
  • Hulu video via the proxy:
    Amazon cost:
      Instance: US$0.10 * (NZ$1/US$0.67) = NZ$0.15/hr
      Traffic (based on 1 episode of Eureka):
        (304MB/43min) * (60min/hr) * (1GB/1024MB) * (US$0.27/GB) * (NZ$1/US$0.67) = NZ$0.17/hr
      Total: NZ$0.32/hr
    NZ bandwidth cost:
      (304MB/43min) * (60min/hr) * (1GB/1024MB) * (NZ$1.50/GB) = NZ$0.62/hr
    Total cost per hour: NZ$0.94/hr
  • Bittorrent cost per hour (SeedRatio = 1.0):
      (350MB/40min) * (60min/hr) * (2 - (1 - SeedRatio)) * (1GB/1024MB) * (NZ$1.50/GB) = NZ$1.53/hr

Break even point of Squid Proxy:

  • vs Bittorrent - instant
  • vs Sky - 11.74/0.94 ≈ 12.5 hours/week

In other words, you would need to watch more than about 12.5 hours a week of content that is on Sky TV but isn't available on FreeView before paying for Sky TV makes financial sense.
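
If you want to re-run the numbers with your own exchange rate and data prices, the arithmetic is easy to script; here's a quick Perl sketch with the figures above baked in as assumptions:

#!/usr/bin/perl
use strict;
use warnings;

# Assumptions from the post: episode size/length, EC2 pricing, NZ data cost.
my $nzd_per_usd  = 1 / 0.67;                     # exchange rate
my $mb_per_hour  = 304 / 43 * 60;                # one episode of Eureka, scaled to an hour
my $gb_per_hour  = $mb_per_hour / 1024;

my $instance_nzd = 0.10 * $nzd_per_usd;                  # EC2 small instance, per hour
my $ec2_traffic  = $gb_per_hour * 0.27 * $nzd_per_usd;   # EC2 transfer, per hour
my $nz_bandwidth = $gb_per_hour * 1.50;                  # local ISP data, per hour

my $hulu_per_hour = $instance_nzd + $ec2_traffic + $nz_bandwidth;
my $sky_per_week  = 11.74;

printf "Hulu via proxy: NZ\$%.2f/hr\n", $hulu_per_hour;
printf "Break even vs Sky: %.1f hours/week\n", $sky_per_week / $hulu_per_hour;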

So, in this case, going legal is cheaper.

Sunday, March 22, 2009

The Future of Cable-TV

There's a bit of a discussion going on between Mark Cuban and Avner Ronen. It started with someone writing an article about boxee, the company that Avner heads.

Mark seemed to take exception to the article, and posted, "Why Do Internet People Think Content People Are Stupid?" Avner then followed up on his blog.

I've been thinking about Cable and Broadcast TV for a while. I find myself agreeing with Avner, that the Internet will allow customers to have a la carte access to shows and channels.

First off, why would this happen? We only need to look at PVRs to see where things are going. PVRs are changing the relationship between broadcast networks, content creators and viewers. A PVR changes how a viewer makes use of the TV network: they can watch any show they are interested in, regardless of when it was broadcast, or even whether they know when it is broadcast. People with PVRs timeshift TV shows, frequently watching a show up to a week after broadcast. This change in viewing habits is showing up in the shows' ratings, with some shows seeing a drop of over 30% in live viewership as their audience records them for later viewing.

What happens when the majority of your customers have PVRs? They no longer watch "whatever is on". They will watch the shows that they want to watch, when they want to watch them. The broadcast schedule becomes less important. In fact, it doesn't matter when the show is broadcast anymore. Your viewers will automatically follow the show through scheduling changes.

Some consultants say that people will still want to watch the show during the broadcast slot so that they can talk about it the next day at work. That is true; there is some pressure to watch a show as it comes out, but it doesn't need to be watched immediately. It can be watched just as easily one, two, or six hours later. The person could even elect to watch the show the next morning over coffee, or on the train on the way to work.

So, if the timeslot doesn't matter, what does that do to a broadcast network? Frankly, it turns them into two things. First and foremost, they are aggregators: they decide what it is that you are going to watch. Second, they are a very efficient one-to-many data network. Because of their sunk costs in the form of spectrum and transmitters, they are able to deliver content to homes across the world cheaply.

Now, remember, in the world of the PVR it doesn't matter when a show is broadcast; it'll be found and recorded. This lowers the value of prime-time slots: everyone is watching TV at that time, but prime time isn't needed to deliver pre-recorded content to them. It also increases the value of the 2-6AM slots. The hours when a broadcaster would usually send out a test signal or infomercials are now perfectly positioned to show syndicated content.

Remember, it's all about feeding the PVR. You want to fill their viewing hours with _your_ content, not the other guy's.

So, picture a world where insanely popular shows are broadcast to people's PVRs at 2AM on a Monday morning. Just in case you think that's crazy, this is exactly how BitTorrent works with background downloaders like TED (torrent episode downloader). TV shows show up on the pirate networks in the middle of the night, and are available for watching on your local PVR a couple of hours later.

So, now that they're feeding your PVR, what happens next? Aggregation. The networks provide aggregation and editing services; they are tastemakers. The only problem? They demonstrate this taste through their line-up and schedule. As I've already shown, the schedule is unimportant, because people use PVRs. That leaves only the line-up. Those "up-next" teasers for the next show? Of little value, since the viewer can't watch the next show! So the line-up also becomes unimportant. There is no way to package shows to make viewers more "sticky"; they will choose shows from networks a la carte. Again, we can already see this behaviour, both on PVRs and on BitTorrent. There are also better "tastemakers" than networks: they're called bloggers. There are a lot more of them, and they are the reviewers of the Internet age. Even better, they are perceived as more trustworthy, because they don't review things for a living. True "tastemaking".

In the world of the PVR, syndication is dead. In the world of 500 channels, how many versions of CBS do you need? How many times a day do you need to watch the same episode of "The Simpsons"? If your viewers have a PVR, the answer is you only need 1 instance. All those cable networks that are syndicating your shows to fill in their schedules? If you put those shows on your own network, with your own ads, you can have that revenue too.

So, the value of syndication has just collapsed. There's only room for one copy of each episode per week (perhaps even per year). All that revenue broadcasters get from cable companies for retransmitting their channels? Gone; the cable company only needs one copy, and they'll take the local one, which they can frequently get for free.

Let's recap. We've killed the schedule, aggregation and syndication. We have just turned a local TV broadcaster into a broadband pipe that specialises in delivery of video and audio content supported by local advertising.

Does that sound familiar? It should. It's an ISP running a rewriting HTTP proxy, that inserts advertisements into pages that its customers view.

At that point, the model shifts. Content producers sell their own advertisements and purchase time on the broadcast networks. This is how I see it proceeding:

  1. Content producers sell their own advertisements.
  2. They put their back catalog up on the Internet as VoD.
  3. They make the content available on a P2P network.
  4. In markets where they have significant penetration, they purchase time from the local broadcasters, and feed that information to their customers.

Now that would be an interesting synergy. You watch a show on Hulu through Boxee. When you are finished, up pops a dialog box "Would you like to schedule this show for recording in your area?". If you select yes, it instructs your local PVR to record that show the next time it comes around.

All of a sudden, there is a continuous flow between Internet VoD, P2P and OTA (or cable) broadcast. Each with their own strengths.

That'll be cool.

Monday, February 23, 2009

New Zealand S92A

Just to make sure that my stance isn't lost now that the blackout's been lifted.

I would like to see several changes to the TCF, including the right to see the evidence of your accusers.

Tuesday, February 10, 2009

Holy Cow

Wow, the price plans are changing. Even the prepaid plans are getting into the whole "flat rate" thing. I love the idea of a plan where if you don't use the phone, you aren't charged for the plan. The addition of unlimited on-net calling? Perfect.

Verizon prepaid pricing changes coming February 11th

(from Engadget Mobile)

Risk Management, are you getting your money's worth?

For the first time in the better part of a decade, I'm tech lead on a new project for my current employer. During the intervening years I was working for other employers, on other projects.... I wasn't slacking, honest!

Anyways, it gives me a perfect opportunity to compare and contrast the organization of 10 years ago with today.

Holy Cow! It takes them way too long to start up a project. They spend too long deciding what to do, and not enough time actually doing it. Of course, this is all in the name of "risk mitigation".

I'm finding it both extremely frustrating and hilarious. It makes me want to throw things. The company is going to spend well over 30k to make sure that the 150k they spend on the project is a success, even before writing the SRS. They are going to spend that 30k on a Product Concept Document, Project Definition Workshops, a Project Initiation Gate Meeting, and a Product Scoping Document with estimates. I'm willing to bet that I am underestimating how much they've spent on this.

The funny thing? All that work will be thrown out as soon as the SRS is written.

The next shock was the testing cost. For every day of development, there is a day of testing (more risk mitigation). Then there's another day added for "overhead". So, that small 100-day project? It's actually 300 days.

So, I looked at it from a math point of view. First the project budget:

  1. Pre-SRS work - 45k
  2. SRS - 45k
  3. Development effort - 100k
  4. Testing effort - 100k
  5. Management overhead - 100k
Total Project cost: 390k
Risk of complete failure: 10%

Even worse, calendar time and effort are unrelated: steps 1+2, 3 and 4 each take about the same amount of calendar time.

Now, let's have a look at a riskier way of doing it:

  1. Pre-SRS work - 15k
  2. SRS - 7k
  3. Development effort - 75k
  4. Testing effort - 50k
  5. Management overhead - 50k
Total Project cost: 197k
Risk of complete failure: 50%

We've gotten rid of all of the risk mitigation. Not only that, we've shrunk the time in steps 1+2 by 75%! That's a huge time to market win.

Let's see if the risk reward makes sense.

The cost of a failure is the odds of failure * the cost of the project:
  Risky Way: 197k * 0.5 = 98.5k
  Safe Way:  390k * 0.1 = 39k

Therefore, the amortized cost of a project using the:

  Risky Way: 197k + 98.5k = 295.5k
  Safe Way:  390k + 39k   = 429k

What failure rate would be needed to justify the extra cost? 70%? 80%? 90%? To justify the extra money spent (assuming a 0% failure rate the safe way), the risky way's failure rate would have to be:

390k - 197k
----------- ≈ 98%
   197k

"Failure" here means total failure: the project has to be thrown away and started over.
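
Here's the same arithmetic as a quick Perl sanity check, with the cost estimates and failure-rate assumptions above baked in (all costs in $k):

#!/usr/bin/perl
use strict;
use warnings;

# Project cost estimates from above, in $k.
my $safe_cost  = 390;
my $risky_cost = 197;
my $safe_fail  = 0.10;   # assumed chance of total failure, "safe" process
my $risky_fail = 0.50;   # assumed chance of total failure, "risky" process

# Amortized cost: project cost plus the expected cost of a total failure.
my $safe_total  = $safe_cost  * (1 + $safe_fail);
my $risky_total = $risky_cost * (1 + $risky_fail);

printf "Safe way:  %.1fk amortized\n", $safe_total;
printf "Risky way: %.1fk amortized\n", $risky_total;

# Failure rate at which the risky way stops being cheaper,
# assuming the safe way never fails outright.
my $break_even = ($safe_cost - $risky_cost) / $risky_cost;
printf "Risky way breaks even at a %.0f%% failure rate\n", $break_even * 100;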

Add in the time to market benefits (on the order of 30% for these assumptions), and it starts to look pretty convincing.

Everyone wonders why so many businesses are CMM level 0/1. Have you considered that they might actually be right?

Thursday, January 15, 2009

Prepaid is Dead, redux.

I just saw this in the news today:

Boost sees $50 unlimited plan battling Leap, Metro

Unlimited calling and texting for US$50/month. It's a race to the bottom with all-you-can-eat plans.

If you're charging per minute, per SMS, per byte, the question is "why?". Save yourself a lot of money and quit billing for the core service! That Intec Billing Engine you're thinking of buying? Get rid of it, or use it for something other than charging for calls.