16. To create realistic light effects on a wall using vertex lighting (hardware lighting) you need thousands of triangles, while with a lightmap and 2 polygons you can achieve the same result (or maybe even better). So which will be more efficient (read: faster)?


Again, it depends on the app. If your app is only drawing a few walls, but is drawing them at a very high resolution, the extra memory bandwidth required for fetching a second texture (or, even worse, the extra fill needed for a second rendering pass) may be your bottleneck, and so you can add some geometric complexity and save a bunch on fill. There are other cases where effects like lightmaps will remain the better visual choice, though, like some of the lighting done in current games.
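
To put that trade-off in rough numbers, here's a small back-of-envelope sketch in C. The tessellation density, screen coverage and per-pixel fetch counts are illustrative assumptions only, not figures from the answer above.

    /* Rough cost comparison: a finely tessellated, vertex-lit wall vs. a
       2-triangle wall with a lightmap as a second texture.  All numbers are
       made-up assumptions chosen only to show the trade-off. */
    #include <stdio.h>

    int main(void)
    {
        /* Assume the wall covers half of a 640x480 screen. */
        const double wall_pixels = 640.0 * 480.0 * 0.5;

        /* Option A: vertex lighting -- many small triangles, one texel fetch per pixel. */
        const double grid = 64.0;                          /* assumed 64x64 quad grid across the wall */
        const double tris_vertex_lit = grid * grid * 2.0;  /* two triangles per grid quad             */
        const double fetches_vertex_lit = wall_pixels * 1.0;

        /* Option B: lightmap -- 2 triangles, but two texel fetches per pixel
           (base texture + lightmap), or a second pass on single-texture hardware. */
        const double tris_lightmap = 2.0;
        const double fetches_lightmap = wall_pixels * 2.0;

        printf("vertex lit : %6.0f triangles, %9.0f texel fetches\n",
               tris_vertex_lit, fetches_vertex_lit);
        printf("lightmapped: %6.0f triangles, %9.0f texel fetches\n",
               tris_lightmap, fetches_lightmap);
        return 0;
    }

Which way the balance tips depends, as the answer says, on whether you're short on fill/bandwidth or on geometry throughput.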

17. How can we solve the timing difference between T&L and the render core? I mean, T&L has a rather fixed amount of math and thus cycles, while rendering depends on the size of the polygon (more or fewer cycles needed). Where would you locate the buffer to solve this?

I'm not really sure what you mean by this question, sorry.

18. Many people asked us to find out the texture cache size of the GeForce256. Also, do you have an output cache like the one ATI introduced with the Rage Fury?

I'm not the one to talk about how we've implemented various performance tweaks, like caches, on the GeForce. Can't make it too easy for our competitors to catch up… :)

19. A question about the future: more fill-rate requires more bandwidth... how do you plan to tackle this growing problem?

DDR memory will make a big difference, I suspect. In addition, NVIDIA's got some new techniques for more efficiently using whatever bandwidth is made available to the GeForce, be it SDR or DDR.
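
As a rough sketch of why memory speed matters so much here, the arithmetic below compares the bandwidth demanded at a given fill rate with what single- and double-data-rate memory can supply. The bus width, clocks and per-pixel byte counts are assumed, illustrative values, not NVIDIA-supplied figures.

    /* Fill rate vs. memory bandwidth, back of the envelope (illustrative values). */
    #include <stdio.h>

    int main(void)
    {
        /* Assumed traffic per textured, Z-buffered pixel:
           Z read + Z write + color write (16-bit each) + one 32-bit texel fetch. */
        const double bytes_per_pixel = 2.0 + 2.0 + 2.0 + 4.0;

        const double fill_rate = 480e6;               /* pixels/sec, assumed */
        const double demand    = fill_rate * bytes_per_pixel;

        /* Assumed 128-bit memory bus at 166 MHz, single vs. double data rate. */
        const double sdr = 166e6 * 16.0;              /* bytes/sec */
        const double ddr = sdr * 2.0;

        printf("bandwidth demanded at full fill rate: %.1f GB/s\n", demand / 1e9);
        printf("SDR supply: %.1f GB/s   DDR supply: %.1f GB/s\n", sdr / 1e9, ddr / 1e9);
        return 0;
    }

With these assumptions the chip wants roughly 4.8 GB/s while SDR supplies about 2.7 GB/s, which is why doubling the data rate makes such a difference.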

20. S3TC is finally seeing some support, years after it was introduced. How long do you expect it will take to see the first killer app that needs T&L? After all, Quake3 is only designed for about 10,000 triangles per frame... I mean games with "T&L required" printed on the box, not quick hacks.

I've seen some games that use it already, though they're not publicly shown yet. There's one that I can point people starved for screenshots at: Halo, by Bungie Software. Check it out at http://halo.bungie.com/. You should also check out http://www.wxp3d.com; that's another AMAZING-looking demo that I've roamed around in (I mentioned it in question 15, too).

21. How has Intel responded to the introduction of T&L? Positively, negatively... not at all?

Dunno. I don't really watch what they say about graphics terribly closely, to be honest. Intel's an important company to pay attention to, but I just don't have enough time in a day to follow what everyone else thinks.

22. Bump Mapping: Embossing, Dot3, Environment... which do you prefer and why? And which ones are supported in "hardware" by the GeForce256?

We've got a paper written by a guy in my group named Mark that shows some AMAZING bump mapping effects done on GeForce using some new techniques. I think that paper will be on our web site within the next couple of days, with a demo app and some screenshots. It blows away any other bump mapping I've ever seen before. Check out this screenshot:

[screenshot of the bump mapping demo]
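
For readers wondering what Dot3 actually computes, the sketch below shows the general technique in plain C: a per-texel dot product between a normal fetched from a normal map and the light direction. It is not taken from the paper or from GeForce's hardware path, and the texel values are made up.

    /* Dot3 bump mapping in a nutshell: a normal map stores a surface normal per
       texel, remapped from [-1,1] to [0,255]; the diffuse intensity is the dot
       product of that normal with the light direction. */
    #include <stdio.h>

    typedef struct { double x, y, z; } Vec3;

    /* Expand an 8-bit normal-map component back to [-1, 1]. */
    static double expand(unsigned char c) { return (c / 255.0) * 2.0 - 1.0; }

    static double dot3(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    int main(void)
    {
        /* One texel of a normal map: a normal tilted toward +x (values made up). */
        unsigned char texel[3] = { 200, 128, 230 };
        Vec3 n = { expand(texel[0]), expand(texel[1]), expand(texel[2]) };

        /* Light direction in the same (tangent) space, already normalized. */
        Vec3 l = { 0.0, 0.0, 1.0 };

        double intensity = dot3(n, l);
        if (intensity < 0.0) intensity = 0.0;   /* clamp back-facing texels to black */

        printf("diffuse intensity for this texel: %.2f\n", intensity);
        return 0;
    }

Embossing, by contrast, approximates the same effect with shifted copies of a height map, and environment-mapped bump mapping perturbs a reflection lookup instead of a light vector.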
23. Can T&L scale with the CPU? Will T&L on a P2-300 be the same as on a P3-500? Is it possible to balance the load as some companies (S3, 3Dlabs, ...) claim?

In general, things will still run (noticeably) faster on a P3-500 than on a P2-300. You've got faster memory, and a faster CPU feeding data to the GeForce. To hit a number like 15M vertices/sec, you need to feed a pretty large amount of data to the chip. On slower CPUs, the GeForce will still be great, but I suspect the GPU will spend some of its time waiting for the CPU to send data to it.
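
As a rough illustration of why the CPU and bus matter here, the sketch below works out how much data a 15M vertices/sec rate implies. The vertex layout and bus figures are assumptions chosen for illustration, not NVIDIA data.

    /* How much data the CPU has to push to sustain a given vertex rate. */
    #include <stdio.h>

    int main(void)
    {
        /* Assumed vertex: position (3 floats) + normal (3 floats) + one set of
           2D texture coordinates (2 floats) = 8 floats = 32 bytes. */
        const double bytes_per_vertex = 8.0 * 4.0;

        const double vertex_rate = 15e6;                  /* vertices/sec target */
        const double feed_rate   = vertex_rate * bytes_per_vertex;

        /* Illustrative bus ceiling: AGP 2x peaks around 533 MB/s. */
        printf("data the CPU must feed: %.0f MB/s\n", feed_rate / 1e6);
        printf("that is about %.0f%% of an AGP 2x bus (533 MB/s)\n",
               feed_rate / 533e6 * 100.0);
        return 0;
    }

On a slower CPU with slower system memory, just generating and moving that stream becomes the limit, and the GPU waits.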