Introduction

When the furore and chatter about NVIDIA G80 had died down a bit following its release in November '06, speculation -- at least on our forums -- slowly turned to when the mid-range variant of NVIDIA's brand new architecture would see the light of day. With G80 weighing in at nearly 700M transistors and built by TSMC on their 90nm node, imagining what a mid-range part based on that big base implementation would look like became the sport of the adventurous few.

And if you give yourself G80 as a starting point, then consider costs, yields and what NVIDIA's foundry partners had available for volume production, getting close to the reality of G84 wouldn't have been too hard if you had your head screwed on. Right? Well, maybe not. You see, we had a go at that same mental exercise in those same quiet days, and it turns out the G84 we came up with isn't quite the G84 you'll see explained today.

So if you'll join us, we'll explain what the heck we mean. It's not that the architecture is massively different; rather, the unit counts and the feature set might not be what people were expecting, for worse and for better. NVIDIA's G84 (and G86, but we'll get to that) breaks cover today with a launch SKU or two, and we've spent a bit of time with a pair of G84-based boards from XFX in the run-up to NDA expiry.

We start with our traditional look at the chip specification. After that, we'll analyse video decoding performance, then turn to basic synthetic benchmarks to confirm the architecture matches the specification.