While the introduction to our NVIDIA GeForce 6800 Ultra preview painted a relatively bleak picture of NVIDIA's GeForce FX product cycle, in NVIDIA's case business in this market is often relative. Had the market continued in the vein of the GeForce 3/4 cycle, NVIDIA would likely have weathered the period in a far less dramatic fashion; however, NVIDIA's issues stemmed from some very unexpected competition from a reorganised and resurgent ATI. The product that heralded ATI's newfound success was R300 (Radeon 9700) – the most complicated graphics chip of its time, the first to boast DirectX9 Shader 2.0 support, and one that offered users no less than twice the performance of any previous product. A tall order to live up to, evidently even for ATI themselves!

At the back end of '02 and in early '03, with Radeon 9700 just ramping into full availability, ATI's soon-to-be CEO, Dave Orton, was already making references to what was coming next in a few interviews and financial conference calls. He noted that R350, the faster successor to R300, would be ramping for the refresh cycle, and that their new "R400" architecture would be sampling soon, with references to an unveiling possibly as early as July '03. Not long after those comments, all fell silent on R400 – nary a peep was uttered for some time, despite numerous other conferences and opportunities. So, what had happened?

During late '02, when development of the R400 project was underway, primarily at ATI's Marlborough office, there were still numerous elements of uncertainty over some key "inflection points". Inflection points are the targets that trigger architectural or business shifts, and for the 3D industry, API and platform changes are among the most important of them. DirectX9's Pixel and Vertex Shader 2.0 model was a fairly key inflection point; however, what DirectX9 would turn out to be by the time it was released wasn't fully known until quite late into '02, and the inclusion of specifications for the Pixel and Vertex Shader 3.0 model began to indicate that DirectX9 might last for at least a few years. With R400 thought to be a significantly different architecture – one with a unified shader model, aimed more towards the next generation of DirectX – the question was: would R400 be too adventurous for its initial timescale? Graphics capabilities are all about what is profitable to fit on an area of silicon, and this limits the maximum die size – the more of that die that is taken up with feature capabilities, the less can be dedicated to pure performance. Hence it appears that performance estimates for R400 suggested it would be under-powered for its proposed introduction period, while its feature capabilities may have been beyond the current DirectX specification.

So, with "R400" removed from the roadmap late in the day, "R420" turned up in its place early in '03. With silicon cycle times as they are, roughly 6 months of development time is often spent purely at the fab, giving ATI around 10-12 months from inception to tape-out – so what could be developed in that period? And, while R300 clearly proved to be a groundbreaking part in ATI's history, the question still remains as to whether this was a one-off – especially in light of last-minute changes to the roadmap – or whether ATI could sustain that pace of development and performance increases.

Here we'll take a look at two implementations of R420, in its Radeon X800 PRO and X800 XT Platinum Edition configurations, and see if we can answer those questions.