Archive for March, 2006


Friday, March 31st, 2006

There will be a day, not today, not tomorrow, but at a time when you do not expect it, when I will come into power. On that day, I will run a publisher. I will bestow glory and ruin on the lives of development teams as well as their owners. I will sign and tear up million-dollar deals as another man does toilet paper. And my decisions will be based not on the quality of the demos or the solidarity or experience of teams, but on the color of your shoes, the performance of my golf game, and whether you remembered to bring a pen to sign my forms.

Don’t forget to bring your pen.

I do my worst programming in front of the computer

Thursday, March 30th, 2006

This morning I was working on my new ReplicaManager for RakNet and was stuck on a bug where existing objects were not sent to new systems: I set the scope of each object in the constructor to be true for all players, but players that connected later would have the default scope of false and didn’t get data. So I quickly wrote down a few solutions:
My first idea was to indicate in the comments that the user needs to be aware of this and not set the scope in the constructor. Lame.
My second idea was to have a callback called for all new players, where users could do this. Annoying, because users have to then write a callback, which complicates the system and is out of context.
My third idea was to have a callback for all players and for objects, which is called once for every permutation. In this callback you could put your scoping code.

If I had been sitting in front of the computer at work I would have done just that. But since I was at home, and since I wasn’t happy with that solution, I went to go take a shower instead.

15 minutes later, I race out of the shower, re-open Visual Studio, and write down an even better idea: I already have a hook for the third solution in the object serialization that pushes to other systems! This has to happen once per object anyway, so if I just have the user put the code in there, I don’t have to write any new functions, and the user can make the call with full contextual information. Happy with myself, I program that in about 5 minutes and close Visual Studio again.

I go to wash dishes before going to work. Practically as soon as I turn on the water, another idea hits me, one I would never have thought of at work, in front of the computer. If I have a function to automatically set the scope for new objects, the users don’t have to do anything at all! I write down that solution and go back to washing dishes. No sooner is my first dish washed than I find a problem with my last solution, which would have been a bug had I been at the computer and implemented it.

So the final solution is to have a single function where users can automatically set scope, and when objects go in scope they are automatically serialized to remote systems. I get to work, program this in about 5 minutes, and it compiles the first time.

My current solution is very easy to use and fits very nicely with the existing architecture. In fact, I only had to write about 10 lines of code. I ‘programmed’ this entire solution while taking a shower and washing dishes. It only adds one optional function call to the manager, which only needs to be called once.

My third solution, which is what I would have programmed had I been at work, in front of the computer, would have taken a few dozen lines of code, would have required that the user write and register a callback, and that callback wouldn’t have any useful purpose other than to try to make the user’s code work within my manager’s architecture.

Seems like a good argument for telecommuting!

Why voluntary pricing makes sense for intellectual property

Tuesday, March 21st, 2006

I went to Fry’s last week and, as usual, made a bee-line to the games section. Again as usual, there were several games I wanted: D&D Online, RF Online, WW2 Tank Commander, Dungeon Lords, and others. Even though I really wanted a game, I walked out empty-handed. Why? I have a certain budget I’ve decided I can afford and a certain quality of game in mind. The prices were too high, so I got nothing. If the price on one or more games had been below my threshold, assuming that price was still above cost, the developer would have made a profit. A smaller profit is better than none at all, right?

x = the profit you made
y = the profit you lost because your price was higher than the customer’s price threshold
If x – y > 0 then you should sell the product, because some profit is better than none at all. If x – y < 0 you are losing money and should not sell. The problem for stores is that they don’t set a price threshold for each individual customer, and they don’t have salesmen for each product. As a result, they set one price. That price is more or less determined by a profit curve, where total profit = profit per customer * the number of customers who will buy at that price. You pick the point that will generate the maximum profit on the curve and sell for that much. The problem is that you lose the profit from the customers who would have bought your product more cheaply than the price you picked, but still at a profit.

How about a different approach: voluntary pricing. Let’s say that at point A you make 1 cent of profit and at point B you make maximum profit, such that if you listed a price higher than B, people would picket your office for being a greedy capitalist. What would happen if you let the customers themselves pick a price point between A and B, depending on how they feel about the value of your product?

In most cases this would simply destroy your profit margin. If you have a high cost per unit, you don’t have much room to move, so you lack the price flexibility to attract many bargain-bin hunters. Stores only make about $10 profit on the games they sell. Lowering a $50 game to $40 won’t generate many more sales, because there are other games that originally sold for $30, some of which are more appealing than your $50 game. Yet selling at $40 instead of $50 would have resulted in essentially no profit. So offering voluntary pricing doesn’t make sense when you have a high cost per unit.

People tend not to feel loyal to large corporations. They tend to feel, sometimes justifiably, that they are paying for the CEO’s personal jet to his private golf course in Hawaii. The products were made in China for 1/10th the price anyway, so why pay more than the minimum? Conversely, if people feel they are helping the environment or some other noble cause, they will be more loyal, and thus more likely to give a bit extra. This is why Whole Foods is in business: it’s not that organic necessarily costs twice as much to grow, but that people feel they are helping a small chain selling organics against pesticide-wielding megafarms.

If you have a monopoly, people will buy your product no matter what you charge, and offering voluntary pricing is just foolish. People will buy Windows whether it costs $99 or $69, so why charge less? There’s no viable alternative for the mass market yet, although Linux is coming close.

So suppose you have a small company, behind a noble cause, selling an item with virtually no incremental cost per sale? This is the case with most companies selling intellectual property. Magnatune does this, and it’s why I paid more than the minimum $5 when I bought a music CD from them. Half of the sale goes to the artist, whom I am much more inclined to support than a large publisher. I feel the music industry is evil, so Magnatune has a noble cause. And the cost of the download was only about 25 cents to Magnatune for the 600 full-quality .wav files.

If you think about it, this is the case with most intellectual property providers. Many are single artists or small groups of artists or programmers trying to sell online and get away from the man. Companies that sell solely through the internet tend to be small compared to companies that are able to sell through Walmart. When a small company uses a publisher, most of the profit goes to the publisher anyway. For example, if you buy a $50 game, only $8 of that is allocated to the developer. But the developer doesn’t see that money; it goes back to the publisher to offset costs such as marketing or FMVs. These costs are charged to the developer and simply written off by the publisher if the game turns out not to pay for them (which is part of the whole plan to not pay royalties). Developers never see royalties unless their game sells between 1 and 4 million units, unheard of except for the largest blockbusters.

Letting your customers choose the price they feel your product is worth makes sense for intellectual property with virtually no per-unit cost. Going back to our x – y equation, suppose we now lose no sales to price sensitivity, because customers choose their own price. The variables become:

x = the profit you made, which is whatever the customer chose to pay
y = the profit you lost by not charging a higher fixed price that the customer would have paid anyway

I’ll argue that in most cases x – y > 0, because 9 times out of 10 people decline your product not because they don’t want it at any price, but because of price sensitivity. If that’s the case, then even if you make 90% less profit per sale, you’d still break even. And if people feel you have a high-quality product and pay more than the minimum, you made more profit using voluntary pricing than you would have with fixed pricing.

Starting April 1, or slightly sooner, I’m trying this out with my network library RakNet by letting people choose what they feel the library is worth. In a future post we can see how it works out.

How to fix the patent system in 10 easy steps

Monday, March 20th, 2006

1. Software cannot be patented. If there is any doubt as to whether a patent is a software patent, it is.
2. Business methods cannot be patented. As before, if there is any doubt as to whether or not it is a business method patent, it is.
3. Obviousness and novelty should be decided by experts in the field who, as much as possible, have no stake in the matter.
4. You cannot re-patent a new application of the same idea solely in order to extend the life of the patent.
5. You must include a working model with your patent.
6. The patent office gets the same fees whether they grant patents or not.
7. The patent office employees get a bonus every year. For every overturned patent in court, the legal fees of the winner come out of the bonuses of those who were involved in granting the bad patent. (Bonus can be 0, but not negative).
8. Funding for the patent office stays with the patent office and cannot be diverted by congress or others for other purposes. Excess funding beyond a working threshold goes towards bonuses.
9. Patents are limited to 5 years.
10. You cannot patent life, or life processes.

Modular component programming

Wednesday, March 15th, 2006

The more I work on the engine at work, the more it reinforces my idea that modular component programming is the way to go. The problem I’m having is that every time I want to compile, it takes 5 minutes. Every time I want to load the game, it’s another 2 minutes. A morning update from source control might take 15 minutes in total to resolve all the files and everything else you need to do. Worse, these interruptions are just long enough to break my attention, causing me to go check email or something else, which is another 5 minutes lost.

If I do 500 iterations per month and lose an average of 5 minutes each time over what it would have otherwise cost me, that’s 41 hours or an entire week lost.

I think the problem is actually worse than that, though, simply based on experience. When I first wrote my distributed object system, I didn’t write a very comprehensive test bed. Instead, I wanted to try it out in the game in a live environment. Big mistake. Over two weeks I barely got anything done. So, fed up with how much time I was wasting, I spent a few hours writing a full test bed and fixed every problem in the system in one day.

I think the mistake with most game engine development is that people put too much in the game engine and not enough in the components (graphics, sound engine, network engine). As a result, when others have to work on the game, they are practically paralyzed by the load and build times. It’s understandable, because it’s very hard to program in general and easy to program in specifics: the more you put in your module, the harder it is to account for all the possible permutations. For example, my distributed object system only replicates object creation, destruction, and memory synchronization. Yet it’s a huge amount of code relative to what it does, and it is by far the hardest component in my engine to use. Still, in my opinion it was worthwhile, because now I have a module that can be used by any game, can be debugged outside the game, and doesn’t increase load or build times.

I would someday like to see what a game engine would look like if all the components were very powerful, modular, and totally game independent such as RakNet.

Newegg not so bad!

Wednesday, March 15th, 2006

I wrote an email to Newegg complaining that I was recharged a restocking fee for the defective part I got. I also suggested they pay back the $10 I had to pay to ship it back to them…

And they did it!

To say that’s surprising is an understatement, since most companies won’t even bother to reply to your mail, much less refund anything.

I take back what I said about Newegg and will shop there in the future.

Single producer single consumer optimization

Tuesday, March 14th, 2006

I wrote an article for Code Project: Single Producer Single Consumer with no Critical Sections. I think it’s a significant technique, because in the case where a single producer and a single consumer share a queue, you can roughly double throughput compared to the traditional method of using mutexes.

A couple of idiots gave the article a low rating because it didn’t have pretty pictures for them to look at, but everyone at work seemed to like it and rated it highly.

Next Windows Vista takes 800 MB of RAM

Friday, March 10th, 2006

I read yesterday that the next version of Windows, Vista, takes 800 MB of RAM. I read this on a mid-range XP computer that took 10 seconds to load Firefox, 5 seconds to load the page, and 2 seconds to close the page, because the system is so starved for memory to begin with.

Am I the only person that thinks there is something wrong when you need a high-end system just to run the OS? I thought the purpose of the OS was to provide a platform from which to run your programs. Why is it most of my computer resources go just towards running the OS?

Newegg sucks

Monday, March 6th, 2006

Newegg sucks. I just spent $920 on computer parts that I ordered on Monday and didn’t get until Friday, despite paying for 1-day shipping. The $120 motherboard arrived DOA. Now, to get my money back, I have to accept a $20 “Restocking Fee”, though I don’t know why you’d restock a defective motherboard. On top of that, I had to pay $10 to ship it back, plus my gas and time to get to the post office.

In my opinion, a retailer should be responsible for the parts they sell and if a part is defective the retailer should absorb the cost to correct the situation. What right do they have to charge people for defective goods? It would be like if I went to Fry’s and they told me “If this part is broken, you can have your money back, but we get to keep 15% to put it back on the shelf.” As it is, they put the defective parts back on the shelf for free 🙂

The cost of buying from NewEgg was about $200 less than if I had bought the same parts from Fry’s. But if you consider all the time and money I spent because of the defective parts, plus the headache of waiting for the parts, I would have been better off just paying the $200.