Probably the most eagerly anticipated game in years is upon us. Four years in development, Doom3 is the latest game from Mesquite-based id Software, a game that id says is the best it has ever made. While the reception to its gameplay is mixed (as is always the case for such a subjective topic), the majority agrees that it is the best-looking game ever released on the PC platform.

This reporter, finally getting his hands on a copy of the game after a torturous wait, has been corresponding with id's Technical Director, John Carmack, the man responsible for the engine powering Doom3, on a variety of topics concerning the game. Since the information generated by this correspondence could prove useful to the public, a request was made to turn it into an unofficial interview, and John gave the go-ahead (though he voiced his concern that this "interview" might incite lots of other people to send him more emails!).

Some of the following information has already been posted by this interviewer in Beyond3D's various forums, scattered as it is across the individual forums for discussing technologies and games. The hope is that this "interview" will serve as a single source for the matters raised with John. Let's not waste any more time.

It appears that benchmarking demos (using the "timedemo" command) produces higher performance figures than actual gameplay. Can you explain why this is so? Is it because "timedemo" does not calculate AI and physics? What else? Also, if I run in a straight line during normal gameplay and have that entire run logged, the total frame count is higher than for a recorded demo of that same run. Why is this?

Timedemo doesn't do any game logic.

Demos are always recorded at exactly 30Hz.
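To illustrate the consequence of those two answers: a demo stores one snapshot per tick at a fixed 30Hz, and timedemo replays every stored snapshot back-to-back with no game logic in between, so the reported score is simply frames divided by wall-clock render time. A minimal sketch (the numbers are made up for illustration; this is not id's code):

```python
# Sketch of how a timedemo score is derived (illustrative only).
# A demo stores one snapshot per tick at a fixed 30Hz; timedemo renders
# every stored snapshot back-to-back, skipping game logic entirely.

def timedemo_fps(recorded_frames: int, render_seconds: float) -> float:
    """Frames rendered divided by wall-clock time taken to render them."""
    return recorded_frames / render_seconds

# A 60-second gameplay recording contains 60 * 30 = 1800 snapshots.
frames = 60 * 30
# If the system replays those 1800 frames in, say, 36 seconds...
fps = timedemo_fps(frames, 36.0)
print(fps)  # 50.0 -- the score can exceed the 30Hz capture rate
```

This also hints at why a logged gameplay run shows more total frames than a recording of the same run: the recording is capped at 30 snapshots per second, while live play can render more frames than that.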

Okay, so timedemo only tests graphics rendering and ignores AI and physics. Even on a high-end CPU system, I have found that timedemo is still very CPU-dependent. In demos with many monsters and/or large, complex monsters, I have to assume this CPU dependency is a result of CPU skinning (since AI and physics are ignored in timedemo). Correct?

CPU skinning, shadow generation, and tangent space reconstruction are the parts of Doom that take up significant time in timedemos, but there is a lot of driver overhead as well.
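For readers unfamiliar with the term, CPU skinning means the processor, not the graphics card, blends each vertex by its bones' matrices every frame. A minimal linear-blend skinning sketch, purely as an illustration of the kind of per-vertex work involved (not code from Doom3):

```python
# Minimal linear-blend (CPU) skinning sketch -- an illustration of the
# per-vertex work Carmack refers to, not code from Doom3 itself.
# Each vertex is transformed by a weighted blend of its bones' matrices.

def transform(m, v):
    """Apply a 3x3 matrix (row-major nested lists) to a 3-vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def skin_vertex(vertex, bones, weights):
    """Blend the bone-transformed positions by their weights (weights sum to 1)."""
    out = [0.0, 0.0, 0.0]
    for m, w in zip(bones, weights):
        t = transform(m, vertex)
        for i in range(3):
            out[i] += w * t[i]
    return out

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
double = [[2, 0, 0], [0, 2, 0], [0, 0, 2]]
# A 50/50 blend of identity and a uniform 2x scale gives a 1.5x scale:
print(skin_vertex([1.0, 2.0, 3.0], [identity, double], [0.5, 0.5]))
# [1.5, 3.0, 4.5]
```

Doing this for every vertex of every animated monster each frame is why heavy monster scenes stay CPU-bound even in timedemos.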

"r_skipRenderContext 1" will unbind the gl context, so all driver calls become null functions. You can have some timings activated, then enable this, then disable it so things draw again, and compare to see how much time is spent in the driver, assuming the scene is not hardware limited.

What's the situation with regard to the depth bounds test implementation in the game? It doesn't appear to have any effect on an NV35 or NV40 when toggled via the cvar.

Nvidia claims some improvement, but it might require unreleased drivers. It's not a big deal one way or another.
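For context: the depth bounds test (the GL_EXT_depth_bounds_test extension) lets the GPU discard a fragment when the depth already stored in the framebuffer at that pixel lies outside a given [zmin, zmax] range, which can cut down stencil shadow-volume fill to the depth range a light actually reaches. A conceptual simulation of the test, with made-up values:

```python
# Conceptual sketch of the depth bounds test (GL_EXT_depth_bounds_test):
# a fragment is discarded when the depth already stored in the framebuffer
# at that pixel falls outside [zmin, zmax]. For stencil shadows this lets
# the GPU skip shadow-volume fill where the light cannot reach.
# Depth values below are made up for illustration.

def depth_bounds_pass(stored_depth, zmin, zmax):
    """True if the fragment survives the depth bounds test."""
    return zmin <= stored_depth <= zmax

depth_buffer = [0.10, 0.35, 0.50, 0.80, 0.95]
survivors = [d for d in depth_buffer if depth_bounds_pass(d, 0.30, 0.85)]
print(survivors)  # [0.35, 0.5, 0.8]
```

Since the test is purely a fill-rate optimization, a scene that is not fill-limited would show little benefit, which is consistent with Carmack's "not a big deal" verdict.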