
What happened to CPU clock rates?

From the 80's all throughout the 90's, and into the early 2000's, CPU technology not only advanced the processing speed of machine code (i.e. executing individual opcodes in fewer and fewer clock cycles on average, and providing new opcodes that could perform more operations in one go), but, more importantly and most visibly to the end consumer, kept pumping up the CPU's clock rate more and more.

In the 80's consumer CPUs usually had clock rates ranging from under 10 MHz up to about 20 MHz. During the 90's there was quite a race to pump up CPU clock rates, which kept rising and rising, first reaching tens of MHz, then hundreds, and eventually thousands. Perhaps some kind of pinnacle was reached with the Pentium 4, a single-core 32-bit processor that in 2004 reached a whopping 3.8 GHz.

During most of this time clock rates were in fact such a ubiquitous measure of speed that the vast majority of processors, and their variants, were sold with their clock speed in their names. (At some point AMD drew a bit of controversy for using a number in their processor names that looked a lot like a clock rate, but wasn't; the actual clock rate was always lower than that number. The number was supposed to represent an "equivalent" Intel CPU clock rate for that kind of processor.)

But then something happened. Many people were expecting clock rates to just keep rising and rising. But they didn't. (In fact, I once commented to a friend how processors seemed to have reached an upper limit in their clock rates, and he responded with a rather firm belief that no, clock rates would keep rising for quite some time. This was the time of the 3.8 GHz Pentium 4, i.e. the early 2000's. He was, as it turns out, quite wrong.)

Both Intel and AMD started producing their dual-core 64-bit CPUs... with clock rates that were significantly lower than those of the fastest Pentium 4s. Versions with faster clock rates were produced... but 4 GHz seemed to be some kind of upper limit. Both manufacturers created newer and newer models, at some point jumping to 4 cores (which became, and still is as of writing this, the most ubiquitous standard)... but clock rates didn't increase in any significant way. Some models have breached the 4 GHz limit even without overclocking, but not by much. More and more cores are being added to newer CPU models, but none of them gets close to 5 GHz in clock rate. The vast majority are in fact produced with rates below 4 GHz, and many brand new processors still ship with clock rates lower than 3 GHz.

This means that CPU clock rates have been stagnant for something like 15 years already, and there is no sign that they will ever go much higher (unless some kind of completely new technology is discovered in the future).

For some reason 4 GHz appears to be some kind of magical limit. You can go a bit over it, but not much. (As far as I know, no CPU has been officially released that reaches 5 GHz, at least not without serious, officially unsupported overclocking.)

The main reason why CPUs can't be overclocked indefinitely is heat production: the more you overclock them, the hotter they get, and the hotter they get, the quicker they start failing. If you overclock them enough, they simply break (unless they have a protection mechanism that throttles their clock speed to avoid breaking). This has always been the case, since the early days of microchips. But as history shows, advances in technology allowed clock rates to keep increasing without heat becoming a problem... up until that magical 4 GHz, it seems. I don't know what it is with that particular number, but it seems to be some kind of physical limit with current transistor technology. Perhaps transistors simply can't be physically built, with current materials, in a way that withstands clock rates much higher than that without getting too hot.
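
As a rough aside (this is a standard approximation from CMOS chip design, not something specific to any particular CPU model), the dynamic power a chip dissipates grows roughly as

P ≈ C · V² · f

where C is the switched capacitance, V the supply voltage and f the clock frequency. Since raising f usually also requires raising V to keep the transistors switching reliably, heat output grows much faster than linearly with clock rate, which at least fits the pattern of clock rates hitting a wall while core counts keep growing.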

For this reason, for the past 10+ years development has gone "laterally", so to speak, rather than "upwards". In other words, development has gone in the direction of adding more cores to the CPU, and making the CPU process more opcodes per clock cycle, rather than just amping up the clock rate itself.

Increasing the clock rate was a very easy way to make a computer faster: the exact same program would automatically, without any changes, become faster, for the simple reason that the CPU would process it faster. Making a single core process more opcodes per clock cycle also makes the same program run faster, but this is a much more limited technique. Major speedups are currently only possible by making the program use multiple cores, but this is not something an existing program can do automatically. Programs need to be written specifically to use multiple cores in order to benefit from them, and this is not a trivial thing to program, especially when it comes to using all the cores as efficiently as possible. (One would think that 4 cores would easily allow a program to run 4 times faster than a single core, but with very few exceptions it's actually far from trivial to make programs use multiple cores that efficiently. Quite often a typical program might only become e.g. 3 times, or even just 2 times, faster with 4 cores, no matter how well it's optimized.)
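
There's a classic back-of-the-envelope formula for this, Amdahl's law (a textbook result, not a measurement of any particular program): if a fraction p of a program's work can be spread across cores and the rest has to run on a single core, the best possible speedup on N cores is

speedup = 1 / ((1 - p) + p / N)

For example, if 80% of the work parallelizes (p = 0.8) and you have 4 cores, the ceiling is 1 / (0.2 + 0.8/4) = 2.5 times faster, which lands right in that "2 to 3 times" range even before counting any synchronization overhead.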

It took quite a long time for programs, especially video games, to catch up. When it became clear that multi-core processors were the future, and programs could no longer rely on the speed of an individual core, video game development started moving in this direction, but it took a long while before games were actually using multiple cores efficiently. (During the early years of multi-core processors, most games still used only one core, and even those that used several typically ran one core at 100% and the remaining ones at something like 20% or less, because they just weren't optimized for multiprocessing.)

Nowadays video games have become quite good at using multiple cores, and benefit greatly from them. In fact, there are benchmarks out there that test how a modern game, like GTA5, runs on a modern processor with only one core enabled vs. two cores, and the difference can be quite drastic. (For example, GTA5 becomes essentially unplayable, and even a buggy mess, with only one core.) Even just two cores can make an enormous difference compared to one.
