Whatever your preference, would it actually matter, given that IHV drivers may not ultimately render a scene with the fidelity (overall or in specific instances) you specified, due to performance-driven driver optimizations?

Dean Calver
It depends. The problem with IHVs making these kinds of decisions is that they had better not lower the quality. If the output is truly 100% the same, the method they took to get there doesn't really matter: if they can produce exactly the same results using FX12 as FP32, that's fine. But even being 'visually' identical isn't enough; they have to be bitwise the same, or at least within some (extremely tight) error metric, because errors have a tendency to magnify. Let's say they have an automatic instruction identifier that notices a shader you're using is one they can do faster. It's n times faster and only differs in the lowest bit of the red channel on screen. So they apply it, and most games are happy with the extra speed. Unfortunately, you happen to use this shader as a pre-process for another shader that uses the red channel as some kind of non-colour data (material ID, depth, etc.). That one-bit error gets magnified and produces incorrect output (and in future, with dynamic loops, could even cause the hardware to stall).
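
As a concrete illustration of the scenario Dean outlines, here is a minimal C++ sketch. The encoding scheme, the size of the injected error and the truncating decode are all assumptions made for the demo, not any particular IHV's behaviour; the point is simply that a difference far too small to see as a colour shifts almost every decoded material ID.

// A toy version of the failure mode described above: a tiny error in the
// red channel (smaller than one 8-bit colour step) is invisible on screen,
// but fatal when a later pass treats that channel as a material ID.
#include <cstdio>

// Pass 1 writes a material ID (0-255) into the red channel as id / 255.
float encode_material(int id) { return id / 255.0f; }

// Pass 2 decodes the red channel back into an ID by truncation, as
// straightforward shader code often does.
int decode_material(float red) { return static_cast<int>(red * 255.0f); }

int main() {
    const float one_lsb = 1.0f / 1024.0f;  // lowest bit of an FX12-style value
    int mismatches = 0;
    for (int id = 0; id < 256; ++id) {
        float exact = encode_material(id);
        float lossy = exact - one_lsb;     // the "n times faster" path is off by one bit
        if (decode_material(lossy) != id) ++mismatches;
    }
    std::printf("%d of 256 material IDs decode incorrectly\n", mismatches);
    return 0;
}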

Chris Egerter
Yes it matters. I've spent a long time working on shaders, making things look as good as possible. I don't want drivers messing around with shaders I put so much hard work into.

Tom Forsyth
Yes. I expressed my preference for a reason. If the shader worked acceptably in a lower precision, I'd have written it that way. If the user could get faster performance by dropping precision and quality, I'd have given them a choice using a slider bar to switch between shaders of different precision. But that's my judgement based on what my app does. Some shaders will look abysmal and simply render the wrong colours if you drop the precision. I don't want a driver trying to second-guess me when it knows nothing about my app.

It's like auto-shrinking textures. Most games have texture quality sliders these days. If the user wants faster, lower-quality textures, they'll move that slider and make that choice. You don't want drivers doing the scaling themselves, or mad things happen like text becoming blurred and unreadable because the font texture has been shrunk!

The app makes a choice. The driver has to obey it. End of story.
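
A minimal sketch of the kind of app-side control Tom describes, with hypothetical shader file names and a made-up Effect struct; the point is that the user's quality slider only ever selects between variants the application has already judged acceptable.

// The application, not the driver, decides which precision variant runs.
#include <string>

enum class Quality { Fast, Best };

struct Effect {
    std::string full_precision;   // always looks correct
    std::string low_precision;    // faster, with artefacts the app deems acceptable
    bool precision_critical;      // true: never allow the low-precision path
};

// The quality slider only has an effect where the app has judged it safe.
const std::string& select_shader(const Effect& fx, Quality q) {
    if (fx.precision_critical || q == Quality::Best)
        return fx.full_precision;
    return fx.low_precision;
}

int main() {
    // e.g. a pass that encodes depth in a colour channel must never be
    // silently downgraded, while a simple glow can be.
    Effect depth_encode { "depth_fp32.psh", "depth_fp16.psh", true  };
    Effect glow         { "glow_fp32.psh",  "glow_fx12.psh",  false };

    Quality user_choice = Quality::Fast;             // set from the in-game slider
    (void)select_shader(depth_encode, user_choice);  // always the fp32 variant
    (void)select_shader(glow, user_choice);          // the fx12 variant when Fast
    return 0;
}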

Jake Simpson
I would prefer to control the fidelity. I guess it could possibly be done in certain areas without me knowing about it; it would kinda depend on what those areas are. Clip planes, color resolution and Z-buffer resolution are not areas I am comfortable having played with behind my code's back. We had enough of that with the 3DFX Z-buffer. I wouldn't mind having a "perform driver-defined optimisations" mode that I could turn on or off. I just need to know exactly what the optimisations are that may be performed.

Tim Sweeney
We can live with this in the 2003-2004 timeframe, but after that, if you don't do full 32-bit IEEE floating point everywhere, your hardware is toast.

"MrMezzMaster"
If I specify a given level of quality and/or precision, and the driver/card does not meet that, then the driver/card is buggy.

When considering coding for DX9-class hardware, do you code with multiple precisions in mind or just the one? Note the different DX9-class hardware we have at the moment.

Dean Calver
I think I answered that in a previous question, but I tend to think of floats and floats only. I can think about it, but it's easier if you don't have to. It's also nasty from a code maintenance point of view: having decided to use a low-precision variable, you later modify the shader and have forgotten the reasons why you could use low precision in the first place.

Tom Forsyth
Multiple precisions. Wherever possible, I use the lowest precision I can. If that produces unacceptable artefacts (e.g. sharp edges or incorrect colours), I switch to a higher precision where needed. If instead the lower precision produces acceptable but lower-quality artefacts (slight fuzziness or blurring), then I give the user a choice between the two precisions. This is done on a case-by-case basis.

Chris Egerter
I have fixed and floating point rendering paths. The engine was designed to run on older hardware so all of the shaders must work and look great with only PS 1.1. Most of the shaders rely on textures such as cube maps and attenuation maps, rather than a lot of math that can have precision problems. The floating point rendering path was implemented to judge the speed and quality difference (or lack of difference).
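
As a rough illustration of why texture-driven shaders are less sensitive to precision, here is a sketch (the table size and falloff formula are made up) of baking a light-falloff curve into an 8-bit attenuation map on the CPU, so the pixel shader only performs a texture fetch rather than per-pixel arithmetic.

// Evaluate the falloff once in full float precision and bake it into a
// small lookup texture; the shader indexes it by normalised distance.
#include <algorithm>
#include <cstdint>
#include <vector>

std::vector<std::uint8_t> build_attenuation_map(int n) {
    std::vector<std::uint8_t> texels(n);
    for (int i = 0; i < n; ++i) {
        float d = static_cast<float>(i) / (n - 1);           // normalised distance
        float a = std::clamp(1.0f - d * d, 0.0f, 1.0f);      // smooth quadratic falloff
        texels[i] = static_cast<std::uint8_t>(a * 255.0f + 0.5f);
    }
    return texels;   // upload as a 1D texture
}

int main() {
    auto map = build_attenuation_map(256);   // 256-texel 1D attenuation map
    return map.empty() ? 1 : 0;
}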

Jake Simpson
I'm not writing that code, so I can't comment. However, it's probably worth noting that if you are doing an Xbox/PC game then you are less likely to write for multiple precisions, simply because on the Xbox you don't need to.

Tim Sweeney
Let's distinguish games that are designed for a broad spectrum of hardware (say DirectX 7 to 9) from games designed from the ground up for DirectX 9 and later. If you're supporting DX7-9, then you're happy with whatever new features you can get out of the hardware and can work around the precision limitations, because you have to anyway for DX7 and DX8. If you're targeting DirectX 9 as the bare minimum, then you simply won't want to bother dealing with the lower-precision formats.

"MrMezzMaster"
When coding for ATI's R300 series and NVIDIA's NV30 series, I have a quality/performance trade-off that I understand and need to make. The graphics driver/card doesn't know what this trade-off is at the gross level of my game, so it's up to me to decide how best to code for both cards. So I guess the answer is "yes", I still need to worry about precision in terms of quality vs. performance. However, I should be able to code a single shader, use it on both series of cards, and get (within a little hand waving) the exact same quality of rendered output.
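
A sketch of what "the same quality within a little hand waving" could mean in practice, assuming 8-bit framebuffer read-backs from each card and an illustrative per-channel tolerance; the buffer layout and tolerance value are assumptions for the demo.

// Compare two 8-bit read-backs channel by channel; "the same within a
// little hand waving" here means no channel differs by more than `tolerance`.
#include <cstdint>
#include <cstdlib>
#include <vector>

bool outputs_match(const std::vector<std::uint8_t>& a,
                   const std::vector<std::uint8_t>& b,
                   int tolerance) {
    if (a.size() != b.size()) return false;
    for (std::size_t i = 0; i < a.size(); ++i)
        if (std::abs(static_cast<int>(a[i]) - static_cast<int>(b[i])) > tolerance)
            return false;
    return true;
}

int main() {
    // In practice the buffers would come from reading back the render target
    // on each card; here they are stand-in values.
    std::vector<std::uint8_t> card_a{10, 20, 30}, card_b{10, 21, 30};
    return outputs_match(card_a, card_b, /*tolerance=*/1) ? 0 : 1;
}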