8. Creative Labs was very honest about their upcoming product
in an interview (taken from http://www.3dhardware.net/reviews/cl_geforce/);
two extracts from it:
" With most of the demos, we saw a rather constant frame rate of 30FPS -
with the default features on and at default polygon rates. In the fire truck
demo, which contains 100k polygons, the GeForce256 tugged along at roughly
15FPS, not the most impressive score albeit anything else on the market
would do that demo much slower."
OK, some simple math based on that info reveals: 100k polygons x 15fps = 1.5M polygons per second...
Hmm, if this demo is an example of a real-world situation (which it isn't,
since there is zero AI, zero physics, zero sound, ...), then it's very
far away from your theoretical numbers... actually a factor of 10?! Are those
numbers incorrect, or is it just a poorly programmed sample?
I suspect that the demo now runs much faster, but to be honest, I haven't
seen the fire truck demo. Just run the stuff for yourself and see how well
you think it performs.
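
As a quick sanity check, the arithmetic in the question works out as follows
(the 100k-polygon and 15fps figures come from the quoted review; the 15M/s
figure is the theoretical peak discussed further down):

#include <cstdio>

int main() {
    // Figures quoted from the 3dhardware.net review of the fire truck demo.
    const double polys_per_frame = 100000.0;  // ~100k polygons in the demo
    const double frames_per_sec  = 15.0;      // observed frame rate

    // Effective triangle throughput actually achieved by the demo.
    const double achieved = polys_per_frame * frames_per_sec;   // 1.5M polys/s

    // Theoretical T&L peak quoted elsewhere in this interview.
    const double theoretical_peak = 15.0e6;                      // 15M/s

    printf("achieved:  %.1fM polygons/s\n", achieved / 1e6);
    printf("peak:      %.1fM polygons/s\n", theoretical_peak / 1e6);
    printf("shortfall: factor of %.0f\n", theoretical_peak / achieved);
    return 0;
}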
9."This time around, NVIDIA have not supplied manufacturers with
the solution on how to overclock the GeForce256 as they did with the TNT2.
What's the reason for this? They don't want the same thing to happen with
the GeForce256 as with the TNT2, meaning that many companies shipped TNT2
boards at ridiculous clock speeds while just getting a small yield.... One
thing is certain though, the GeForce256 runs very hot, and you barely survive
with the on-board fan and heatsink. So if you want to try and overclock
a GeForce256 card make sure you have efficient and powerful cooling."
Now this kind of sends me the message: don't count on overclocking the GeForce256...
it almost raises the fear of instability on some systems... Any comments
about this (why, how, ...)?
To be honest, I don't think overclocking is a good idea. However, I think
that people will try to do it, and many of them will be pleased to see higher
clock speeds work well.
10. The tree demo is nice but it also reveals an upcoming problem:
opaque overdraw... Traditional renderers like the GeForce256 (?) do "useless"
work: they draw invisible polygons... in the case of the tree demo, many
leaves hide each other, causing a drop in performance due to the fill-rate
limit. In the future this will only get worse: games will get more and more
detail, and thus more and more objects, and thus more and more opaque overdraw.
How will you tackle that problem?
There are technologies like scene graphs that perform several functions
to get around this problem. A good scene graph library can, for example,
cull away geometry that is definitely behind a non-blended closer object.
In addition, the scene graph can sort the data so that front-most non-blended
objects are drawn first, which helps reduce color- and depth-buffer overdraw
in any fill-bound rendering. Scene graphs are beyond
the scope of what I want to talk about here today, but you can find lots
of info about them on the web.
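
To make the front-to-back idea concrete, here is a minimal, purely illustrative
sketch of the kind of sort a scene graph might perform before submitting opaque
geometry; the types and names are hypothetical, not from any particular library:

#include <algorithm>
#include <vector>

// Hypothetical minimal types -- a real scene graph carries far more state.
struct Vec3 { float x, y, z; };

struct OpaqueObject {
    Vec3 center;   // rough world-space position used as the sort key
    // ... mesh, material, etc.
};

static float distanceSquared(const Vec3& a, const Vec3& b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Sort opaque objects front-to-back relative to the camera.  Drawing in this
// order lets the depth test reject occluded fragments before they are shaded,
// cutting down on color-buffer overdraw in fill-bound scenes.
void sortFrontToBack(std::vector<OpaqueObject>& objects, const Vec3& camera) {
    std::sort(objects.begin(), objects.end(),
              [&](const OpaqueObject& a, const OpaqueObject& b) {
                  return distanceSquared(a.center, camera) <
                         distanceSquared(b.center, camera);
              });
}

The same distance key can of course be combined with frustum and occlusion
culling before the sort, which is the other function mentioned in the answer.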
11. The theoretical peak of the T&L hardware is 15 million triangles;
are we assuming full 100% use of strips and fans here? If that test uses
strips, you really only have 15 million "vertices", which comes down to only
5 million "triangles" in reality if no strips/fans are used. Correct, incorrect?
The 15M number is 15M vertices. This is the number people should really
focus on when talking about transform and lighting limits, not the number
of triangles, since, as you point out, poorly designed datasets will inefficiently
use that budget of 15M vertices. Well designed datasets will create many
more polygons from that same pool of vertices.
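
As a rough illustration of why the vertex count is the number to watch
(back-of-the-envelope figures, using only the 15M vertices/s quoted above):

#include <cstdio>

int main() {
    const double vertex_budget = 15.0e6;  // 15M vertices/s, the number quoted above

    // Independent triangles: every triangle pays for 3 unshared vertices.
    const double independent_tris = vertex_budget / 3.0;   // ~5M triangles/s

    // One ideal long triangle strip: after the first 2 vertices, each extra
    // vertex emits a new triangle, so throughput approaches 1 triangle/vertex.
    const double long_strip_tris = vertex_budget - 2.0;    // ~15M triangles/s

    printf("independent triangles: %.1fM/s\n", independent_tris / 1e6);
    printf("ideal long strips:     %.1fM/s\n", long_strip_tris / 1e6);
    return 0;
}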
12. The developer support guys told me you have a true benchmark
that shows your hardware doing 11 million triangles per second. Can you
describe that benchmark/demo and maybe give us some screenshots of it?
Sure. It's a small app written by a guy named Steve who works in my group.
Basically it draws a bunch of rotating spheres, each of which is made of
125,000 vertices. It's actually hit higher numbers than 11M vertices per
second on some new cutting edge systems, where AGP4X is available to it.
I think NVIDIA will put that app (called SphereMark now, I think) on our
web site fairly soon, hopefully including source code and a built app.
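
Just to put those numbers in perspective, a back-of-the-envelope calculation
(only the 125,000-vertex and 11M vertices/s figures come from the answer; the
tessellation below is a guess, not the demo's actual mesh):

#include <cstdio>

int main() {
    // Figures quoted in the answer above.
    const double verts_per_sphere = 125000.0;  // vertices per sphere mesh
    const double verts_per_sec    = 11.0e6;    // measured vertex throughput

    // Hypothetical tessellation: a (stacks+1) x (slices+1) vertex grid.
    // 353 x 354 = 124,962 vertices, close to the quoted 125,000.
    const int stacks = 352, slices = 353;
    printf("grid vertices:             %d\n", (stacks + 1) * (slices + 1));

    // How many such spheres per second that throughput buys, and the
    // per-frame vertex budget at 60fps.
    printf("spheres/s:                 %.0f\n", verts_per_sec / verts_per_sphere);
    printf("vertices per frame @60fps: %.0f\n", verts_per_sec / 60.0);
    return 0;
}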
13. I have seen NVIDIA saying: "Right now, there is nothing that
beats GeForce256 in fill rate." Too often... while this might be true "today"
it will not be true "tomorrow"... what if competitors beat your GeForce256
at 1024x768 or above simply because they have more fill rate? 1024x768 is
the sweet spot today; can you maintain 60fps at that resolution? I ask since
I saw Quake3test running on the GeForce256 on the Creative Labs stand at
ECTS with the on-screen frame counter, and that counter hovered between 100 (nice)
and 30 (boo). In general, frame-rate jumps are seen as very unpleasant...
I can only assume that those drops are due to the fill rate, since the 10,000
polygons per frame in Q3test should be no real issue.
First of all, uneven frame rates are not a problem with the graphics card,
in my opinion. The way to fix that is within an app, by balancing the number
of vertices per frame, pixels drawn per frame, etc. Those are hard problems
to solve, but I think that we will see those kinds of changes come in the
upcoming couple of years. Now, as for people feeling that GeForce's fill
rate is low compared with specs of new parts some of our competitors have
announced, well, the best I can say is, GeForce 256 is shipping, those others
are not. We'll have to see how well they do with a real chip, instead of
with a spec on a piece of paper.
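
For context on why fill rate rather than geometry is the likely culprit in the
Q3test scenario, a back-of-the-envelope calculation (only the 10,000
polygons/frame and 1024x768 / 60fps figures come from the question; the
overdraw factor is purely an assumption):

#include <cstdio>

int main() {
    // Figures from the question.
    const double polys_per_frame = 10000.0;
    const double width = 1024.0, height = 768.0;
    const double target_fps = 60.0;

    // Geometry load: tiny next to a multi-million triangle/s T&L unit.
    printf("triangles/s needed: %.2fM\n", polys_per_frame * target_fps / 1e6);

    // Pixel load: assume an average overdraw factor (purely a guess here),
    // before even counting multiple texture passes.
    const double assumed_overdraw = 3.0;
    const double pixels_per_sec = width * height * target_fps * assumed_overdraw;
    printf("pixels/s needed:    %.0fM (assuming %.0fx overdraw)\n",
           pixels_per_sec / 1e6, assumed_overdraw);
    return 0;
}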
14. Elsa has reported stability problems with the reference design;
they talk about lowering the voltage, etc... Any comments? Will we get power
supply problems like with the TNT1 and the Asus motherboards?
Don't know anything about it. I'm a software guy. I do know that there are
many motherboards out there that do not supply the spec-required current
to the AGP slot, and my guess is that those motherboards will have problems
with any next-generation AGP card like the GeForce or even TNT2.
15. Lightmaps and dual texturing allow great effects; T&L now
introduces new types of lights... will vertex lighting replace lightmaps?
Can you achieve the same effects and quality? Or should they exist next
to each other and be used depending on the situation? One problem with vertex
lighting is that it doesn't take shadows into account...
I'm not sure how this will all pan out. Look at demos like Whole Experience
(http://www.wxp3d.com), which is a D3D title that uses GeForce's lighting
for its lighting effects. From the screenshots you can't really see how
amazingly cool this demo really is. I think there's a lot that game developers
will learn about how to use real OpenGL or Direct3D lighting, now that it's
available to them. This will be the same kind of learning curve that they
all hit when the first games to use lightmaps came out, like Quake1. Over
time, though, I think people should expect amazing visual effects that are
created using the lighting in the 3D APIs.
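
For readers who haven't used API lighting, a minimal sketch of what per-vertex
diffuse lighting computes (the textbook N-dot-L term; this is generic
illustration code, not taken from any of the demos mentioned):

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static Vec3 normalize(const Vec3& v) {
    const float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Classic per-vertex diffuse term: intensity falls off with the angle between
// the surface normal and the direction to the light.  Unlike a lightmap, this
// is evaluated per vertex, so it reacts to moving lights, but by itself it
// knows nothing about shadows cast by other geometry.
float diffuseIntensity(const Vec3& normal, const Vec3& vertexPos, const Vec3& lightPos) {
    const Vec3 toLight = normalize({ lightPos.x - vertexPos.x,
                                     lightPos.y - vertexPos.y,
                                     lightPos.z - vertexPos.z });
    return std::max(0.0f, dot(normalize(normal), toLight));
}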