The engineering of modern handheld chipsets
To explain how the chips, or integrated circuits (ICs), behind cutting-edge devices are engineered, and what makes one architecture more desirable or cost-effective than another, the easiest approach is probably to look at real-world examples. For completeness' sake, I'll take four of them: the 5G iPod, the 1G iPod Shuffle, the Nokia N93, and the Samsung P920. These devices are aimed at very different markets, and the technology behind each of them is distinct; in fact, even the approaches taken are sometimes very different, partly because of those intended markets.
The Dynamics of Integration and Systems-on-Chip
Integrated circuits can be classified into three categories: digital, analogue and mixed-signal. Most chips people think about are digital, or at least mixed-signal. In fact, when we referred to "handheld chipsets" in the last few pages, we were thinking specifically of digital chips. CPUs are nearly always digital-only, while GPUs are often mixed-signal (the NVIDIA G80 is a notable exception), because they have an on-chip DAC (Digital-to-Analogue Converter) to get the image into a form the display can understand. The sound chips on today's mainboards, meanwhile, are mostly analogue, with the processing done on the CPU; Creative Labs' sound cards, on the other hand, tend to include a digital signal processor (DSP) to offload that work from the CPU.
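To make the digital/analogue boundary a bit more concrete, here is a minimal sketch (in Python, purely for illustration and not tied to any of the chips mentioned here) of what an ideal DAC conceptually does: it maps an n-bit digital code onto a proportional output voltage. The 8-bit depth and 3.3 V reference are arbitrary example values.

```python
# Illustrative sketch of an ideal DAC. The bit depth and reference voltage
# are arbitrary example values, not taken from any specific chip.

def ideal_dac(code: int, bits: int = 8, v_ref: float = 3.3) -> float:
    """Map an n-bit digital code to a proportional analogue output voltage."""
    if not 0 <= code < 2 ** bits:
        raise ValueError("code out of range for the given bit depth")
    return v_ref * code / (2 ** bits - 1)

# A purely digital chip only ever sees the integer codes; the analogue side
# is what actually drives a display or a pair of headphones.
for code in (0, 64, 128, 255):
    print(f"code {code:3d} -> {ideal_dac(code):.3f} V")
```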
This brings us to the first difference between the 5G iPod and the 1G iPod Shuffle. The former uses a primarily digital chip designed by PortalPlayer for sound processing and decoding, plus a DAC from Wolfson Microelectronics. The latter uses a single mixed-signal chip from SigmaTel and, lacking any display whatsoever, doesn't need an LCD controller. This vast reduction in component count obviously permits a smaller form factor and lower costs.
The original iPod didn't have a very high level of integration, even though the technology of the time would most likely have permitted far more. There were two primary reasons for this: the first was time to market, and the second was quality. Excluding the audio DAC, there is no inherent reason why a single SoC with everything integrated cannot offer quality equivalent to (and lower costs than!) a multi-chip solution. The problem lies in the fact that many companies lack the in-house IP, expertise and/or budgets to deliver an integrated solution of excellent quality; they tend to be much more specialized. This is becoming less and less true, but remains an important factor nonetheless.
Higher levels of integration, if you have the capability to achieve them, are thus highly desirable, provided they don't mean your product comes out too late to still be meaningful. There are some pretty big catches to take into consideration, though. The first is the size of your addressable market for any given chip: a highly integrated chip is, in many cases, also a highly specialized one. The most obvious example is that if you add a 2.5G wideband transceiver to your application processor's design, the result is unlikely to be very competitive in anything but 2.5G mobile phones!
Thus, should you choose to pursue maximal levels of integration anyway, you've got two options. You can either pack in as many features as possible to hit as many market segments as possible with a single product (which reduces cost-competitiveness whenever a handheld manufacturer doesn't need some of those features), or make more distinct chips and target them at different markets. The latter might seem more desirable, but designing a variety of cutting-edge chips on the latest available manufacturing processes is going to cost you money, and lots of it, because of both the extra engineering/verification expenses and the one-time tape-out costs you pay to the fab. So this only makes sense if you've got large volumes to offset the extra costs.
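A quick back-of-the-envelope calculation shows why volume matters so much here. The sketch below amortises one-time engineering (NRE) and tape-out costs over shipped units; every figure in it is invented purely for illustration, not taken from any real chip programme.

```python
# Back-of-the-envelope sketch of "one big chip vs. several specialized chips".
# All figures are made up for illustration only.

def per_unit_cost(nre: float, tape_out: float, unit_cost: float, volume: int) -> float:
    """Amortise one-time engineering (NRE) and tape-out costs over shipped volume."""
    return unit_cost + (nre + tape_out) / volume

volume = 5_000_000  # units shipped per chip design (assumed)

# One highly integrated chip covering three market segments: a single set of
# one-time costs spread over 3x the volume, but extra silicon area (features
# some customers never use) inflates the per-unit manufacturing cost.
one_chip = per_unit_cost(nre=20e6, tape_out=2e6, unit_cost=9.0, volume=3 * volume)

# Three specialized chips: cheaper silicon each, but NRE and tape-out are paid
# separately for every design.
three_chips = per_unit_cost(nre=20e6, tape_out=2e6, unit_cost=7.5, volume=volume)

print(f"single integrated chip : ${one_chip:.2f}/unit")
print(f"specialized chip       : ${three_chips:.2f}/unit")
# With these made-up numbers, the integrated design wins at this volume; the
# specialized chips only pay off once each ships in much larger quantities.
```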
Finally, there is at least one other problem with integration: digital circuits follow Moore's Law, but analogue circuits do not. The only way to rapidly shrink analogue chips (or the analogue parts of a mixed-signal chip) is to be clever rather than to rely on manufacturing innovation. That means increasing efficiency through better design, but also progressively moving as much functionality as possible into the digital part of the chip. What used to be cheaper to do in analogue back in the day often isn't anymore; by properly applying these principles, some TI engineers have claimed to be able to imitate Moore's Law for the analogue parts of their designs.
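As an illustration of the "move it into the digital domain" principle, here is a small sketch of a first-order digital low-pass filter doing the job an analogue RC filter traditionally did. It is a generic textbook example, not a description of any actual TI design; the sample rate and cutoff frequency are arbitrary.

```python
# Sketch of replacing an analogue RC low-pass filter with its digital
# equivalent, a first-order IIR filter. Parameters are illustrative only.
import math

def rc_lowpass(samples, cutoff_hz: float, sample_rate_hz: float):
    """Digital equivalent of a single-pole analogue RC low-pass filter."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)   # time constant of the analogue circuit
    dt = 1.0 / sample_rate_hz
    alpha = dt / (rc + dt)                 # smoothing factor derived from RC and dt
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)               # y[n] = y[n-1] + alpha * (x[n] - y[n-1])
        out.append(y)
    return out

# 1 kHz tone plus high-frequency noise, filtered with a 2 kHz cutoff.
fs = 48_000
signal = [math.sin(2 * math.pi * 1_000 * n / fs) +
          0.3 * math.sin(2 * math.pi * 15_000 * n / fs) for n in range(256)]
filtered = rc_lowpass(signal, cutoff_hz=2_000, sample_rate_hz=fs)
```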