It is very difficult to overestimate the role Intel has played within the modern world. Barring the Ford Motor Company, no single commercial organisation can claim to have changed the industrial landscape quite so much. As creators of the first commercial microprocessor, their place in hi-tech history is assured - whatever the future might bring.

However, as with any king, leader or dictator, there are always those who want to overthrow the status quo. Today rival companies such as IBM, Cyrix and AMD are breathing down the Silicon Valley giant's neck, producing central processing units (CPUs) that are (in the view of many key industry voices) either comparable to Intel's models or else better priced.

The next few years seem certain to be among the most interesting periods in the whole history of affordable computing, as the playing field becomes more and more level.

In the last part of the series we looked at the various elements that dictate the speed at which a computer can operate. While I cannot simply repeat all that information here, it is worth emphasising again that the Central Processing Unit is not a complete computer in itself, merely the most important and most talked-about component.

Equally worth repeating is that hardware is nothing without software. If software happens to be very badly written - for example, using horrendously inefficient maths techniques - this alone could nullify the advantages of a "more advanced" processor.
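To make that concrete, here is a small C sketch (my own invented example, not taken from any real product) that evaluates the same polynomial two ways: a naive loop that rebuilds every power of x from scratch, and Horner's method, which needs just one multiplication per term. Run on identical hardware, the first can be many times slower - and no processor upgrade will rescue it.

    #include <stdio.h>

    /* Naive evaluation of a0 + a1*x + ... + an*x^n: rebuilds x^i from
       scratch for every term, costing roughly n*n/2 multiplications. */
    double poly_naive(const double *a, int n, double x)
    {
        double sum = 0.0;
        for (int i = 0; i <= n; i++) {
            double term = a[i];
            for (int j = 0; j < i; j++)
                term *= x;            /* recompute the power each time */
            sum += term;
        }
        return sum;
    }

    /* Horner's method: the same polynomial in only n multiplications. */
    double poly_horner(const double *a, int n, double x)
    {
        double sum = a[n];
        for (int i = n - 1; i >= 0; i--)
            sum = sum * x + a[i];
        return sum;
    }

    int main(void)
    {
        double a[] = { 1.0, -2.0, 0.5, 3.0 };  /* 1 - 2x + 0.5x^2 + 3x^3 */
        printf("%f %f\n", poly_naive(a, 3, 2.0), poly_horner(a, 3, 2.0));
        return 0;
    }

Both calls print the same answer (23.000000 at x = 2); only the work done to get there differs.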

Journalists are often impressed by cutting-edge products, at whatever price, but the public is becoming more and more seduced by price. The rise of the sub-$1,000 computer has played right into the hands of Intel's chief rivals, who thrive on providing far more bang for your buck. The $64,000 question - for Intel - has to be how to respond.

Sadly, Intel has not had happy experiences producing cut-price (non state-of-the-art) chips, a fact that they acknowledge even in interviews. However, if they simply ignore this growing market they will certainly see a drop in their market share; and if they do take part, they will have to find a way to cut costs if present profit margins are to be maintained.

The one policy change that they have set out is to become a more rounded high-technology company, even getting involved in the toy and easy-to-use computer markets. While a step away from the main thrust of this article, this is a clear indication that the CPU market is not the licence to print money it once was.

What Intel must secretly hope for today is a breakthrough application that only the very latest chips can handle: general speech recognition, for example. But once again the snag is clear: Intel are still predominantly a hardware company, and therefore tied to the breakthroughs of the software sector.

While Intel have started to struggle, IBM have started to find their feet in this tricky, politics-led field. In fact they are perhaps the first chip company to explain their products clearly and in detail - including their "next generation" 64-bit PowerPC (TM) processor.

The overall "big idea" is to streamline the delivery of the target software by dividing the mathematical workload. Their most prominent function is to have separate units for floating point mathematics called FPU's - by far the difficult maths that a processor has to deal with.

This is the headline complication that would-be low-level programmers or hardware designers have to grapple with: a CPU - which is essentially a bells-and-whistles calculator - takes vastly different amounts of time to deal with different maths functions. In fact it is not uncommon for programmers to avoid some maths techniques (such as floating point) in order to gain faster processing times.
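As an illustration of the sort of dodge involved, the C sketch below (the names and layout are my own) performs fractional arithmetic in "16.16" fixed point: the value lives in an ordinary 32-bit integer, with the top sixteen bits holding the whole part and the bottom sixteen the fraction, so everything runs through the fast integer hardware instead of the FPU.

    #include <stdio.h>
    #include <stdint.h>

    /* "16.16" fixed point: a 32-bit integer whose top 16 bits hold the
       whole part and whose bottom 16 bits hold the fraction, so 1.0 is
       stored as 65536.  All arithmetic stays in the integer unit. */
    typedef int32_t fix16;

    #define FIX_ONE 65536

    static fix16 fix_mul(fix16 a, fix16 b)
    {
        /* widen to 64 bits so the intermediate product cannot overflow */
        return (fix16)(((int64_t)a * b) >> 16);
    }

    int main(void)
    {
        fix16 a = 2 * FIX_ONE + FIX_ONE / 2;   /* 2.5  */
        fix16 b = FIX_ONE + FIX_ONE / 4;       /* 1.25 */
        fix16 c = fix_mul(a, b);               /* 3.125 */

        /* convert back to a double only for printing */
        printf("%f\n", (double)c / FIX_ONE);
        return 0;
    }

The price, of course, is limited range and precision - which is exactly the trade-off the programmer is making.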

While the full workings of the PowerPC processor can only be understood by designers or experts, the overall flowchart is clear and straightforward. The input instructions are delivered and modified through three separate built-in caches, and once the maths has been split into categories (and then performed), the final results are shipped out through yet another special cache.

(In abstract terms, this is a little like an efficient restaurant kitchen that has two doors for the waiters - "in" and "out" - and chefs who divide the dish components between themselves, the better chefs providing the more complicated parts.)
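For the curious, the kitchen analogy can even be pushed into code. The toy C model below is entirely my own invention - vastly simpler than any real pipeline - but it shows the arithmetic of the idea: dispatch a pretend instruction stream to two units, and the total time is set by the busier unit rather than by the sum of all the work.

    #include <stdio.h>

    /* A toy model of split execution units.  Each pretend instruction
       is dispatched to either the integer unit or the floating-point
       unit, and each unit keeps its own running tally of cycles. */
    enum op_kind { OP_INT, OP_FLOAT };

    struct op { enum op_kind kind; int cycles; };

    int main(void)
    {
        /* an invented instruction stream; FP work costs more cycles */
        struct op stream[] = {
            { OP_INT, 1 }, { OP_FLOAT, 4 }, { OP_INT, 1 },
            { OP_FLOAT, 4 }, { OP_INT, 2 }, { OP_INT, 1 },
        };
        int n = (int)(sizeof stream / sizeof stream[0]);
        int int_busy = 0, fp_busy = 0;

        for (int i = 0; i < n; i++) {
            if (stream[i].kind == OP_INT) int_busy += stream[i].cycles;
            else                          fp_busy  += stream[i].cycles;
        }

        /* split the work and the total time is the busier unit,
           not the sum of both */
        printf("integer unit: %d cycles, FPU: %d cycles\n",
               int_busy, fp_busy);
        printf("one unit: %d cycles, two units: %d cycles\n",
               int_busy + fp_busy,
               int_busy > fp_busy ? int_busy : fp_busy);
        return 0;
    }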

While not wishing to complicate matters too much further, chip designers do not live in a vacuum. They are trying to produce chips that run software, and they have to analyse software (and therefore software design) to find out what the common wait-states and hold-ups actually are.

Naturally they know the perennial hold-ups and the maths-based problems, but new and revolutionary software might create new and unique ones. In short, designers have to know (or at least guess at) what software the chip will be applied to.
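How is such guessing informed? Largely by measurement. The fragment below is a minimal timing harness using the standard C library's clock() routine; busy_work is simply a stand-in of my own for whatever hot spot is under suspicion, and the same technique scales up to whole benchmark suites.

    #include <stdio.h>
    #include <time.h>

    /* Crude profiling: time a suspected hot spot with the standard
       clock() call and report how much CPU time it consumed. */
    static double busy_work(long n)
    {
        double acc = 0.0;
        for (long i = 1; i <= n; i++)
            acc += 1.0 / (double)i;   /* floating-point divide: costly */
        return acc;
    }

    int main(void)
    {
        clock_t start = clock();
        double result = busy_work(10000000L);
        clock_t stop = clock();

        printf("result %f took %.3f seconds of CPU time\n",
               result, (double)(stop - start) / CLOCKS_PER_SEC);
        return 0;
    }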

Our PowerPC chip, for example, would not be much good in, say, a pocket calculator (although it could be used in one); it is far more likely to be used to control a graphics workstation or as part of an Internet server.

Equally, the designer has to know the limits of, and consider, the supporting chip market. If the P3 were used in a graphical application computer, its output would be tied to the co-processors (such as a graphics board) that it supplies.

If the processed data is mostly screen information (as it would be on a dedicated games computer), the chip only needs to provide the number of changes needed for each screen frame - what would be the use of a computer that provided screen changes faster than they could actually be displayed?
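A quick back-of-the-envelope sum makes the point. Assuming a display that refreshes sixty times a second (a figure chosen purely for illustration), the short C program below counts how many scene updates are actually shown and how many are simply thrown away.

    #include <stdio.h>

    /* If the display refreshes 60 times a second, any scene updates
       computed beyond the 60th in that second are never seen. */
    int main(void)
    {
        const double refresh_hz = 60.0;          /* illustrative figure */
        const double updates_hz[] = { 30.0, 60.0, 200.0 };

        for (int i = 0; i < 3; i++) {
            double shown = updates_hz[i] < refresh_hz
                         ? updates_hz[i] : refresh_hz;
            printf("%6.0f updates/s -> %5.0f displayed, %5.0f wasted\n",
                   updates_hz[i], shown, updates_hz[i] - shown);
        }
        return 0;
    }

At 200 updates a second, 140 of them are pure wasted effort - speed the user can never see.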

Once again we have to take a step back and realise that while there are always applications that cry out for speed, raw speed is not a benefit without end. In fact, in certain cases optimal speed may already have been reached: the processors that power television remote controls and microwave ovens, for example.

Next time - in the final part of this series - we will look ahead to the future of computing and some of the revolutionary new ideas for processing information.