THE EARLY DAYS
In 1983, INTEL was an unseen player: it simply manufactured and supplied the processors IBM used in the IBM XT (eXtended Technology). Rather than being sold in component form, PCs of the day were sold as complete units, and IBM at the time held the position that Wintel (Windows/INTEL) holds today.
The INTEL 8088 processor was designed to run at up to 8MHz, and clones made by AMD (which held an x86 license) ran as fast as 10MHz. This arguably makes AMD one of the first companies to commercially sell overclocked hardware. Despite this, IBM kept its INTEL 8088 chips running no faster than 4.77MHz to ensure stability.
These processors could be overclocked by desoldering the clock crystal and replacing it with a faster after-market part, although doing so could make the system unstable.
Today we have various bus speeds and multipliers that can be changed, such as the BCLK (base clock), HyperTransport, and the RAM/CPU multipliers. The IBM XT, however, had only one: the Front Side Bus, or FSB. Every system bus ran at this frequency, from the CPU to the RAM to all other communication buses.
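The difference can be sketched in a few lines. This is a minimal illustration, not from the article; the 100MHz base clock and 35x multiplier are hypothetical modern values chosen only to contrast with the XT's single 4.77MHz frequency.

```python
def effective_clock(base_clock_mhz, multiplier):
    """Effective frequency in MHz = base clock x multiplier."""
    return base_clock_mhz * multiplier

# XT era: one FSB at 4.77MHz drove everything, so the
# "multiplier" for every bus was effectively 1.
xt_fsb = 4.77
cpu = ram = io_bus = effective_clock(xt_fsb, 1)  # all 4.77MHz

# Modern era (hypothetical figures): each domain derives its own
# speed from the base clock via its own multiplier.
bclk = 100  # MHz
cpu_mhz = effective_clock(bclk, 35)   # 3500MHz (3.5GHz)
ram_mhz = effective_clock(bclk, 16)   # 1600MHz
```

Raising the XT's one crystal therefore sped up everything at once, which is why instability followed so easily; a modern overclocker can raise one multiplier while leaving the others alone.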
The RAM of the time had only one specification: an access time of 210 nanoseconds. No other specification was needed, as it simply ran at the same 4.77MHz as the rest of the system. The RAM used in the XT came as a 40-pin Dual Inline Package (DIP) module with a default capacity of 64 KB.
Moving forward to 1984, IBM launched the AT (Advanced Technology), which used INTEL's new 80286 processor at either 6MHz or 8MHz; the revised 16-bit ISA bus was accordingly designed to run at either frequency. People soon caught on that the only difference between the two models was the clock crystal, and that by replacing it they could get the extra 2MHz offered by the more expensive model.
Once IBM caught on, it became the world's first company to block overclocking, introducing protection at the BIOS level. With this block in place, users had to replace not only the clock crystal but also the BIOS chip itself.
A big problem with overclocking at the time was that everything ran at the same clock speed, and most software took its timing from the system bus rather than from an internal timer. Applications would often run faster than intended, rendering games unplayable; other applications became unstable or refused to run at all. This is how the Turbo button seen on PCs from the 286 to the 486 came to be. Contrary to popular belief, the Turbo button was not intended to run your system at a higher speed but rather to underclock it to a speed usable by older applications. As its purpose was misunderstood, Turbo was left enabled by most users.
PERFORMANCE RATINGS
AMD released its first in-house chip, the K5, in 1996. Compared to the Pentium it was late to market, ran hot, and ran at a low frequency. To make its processors more appealing, AMD named them after their "performance ratings" rather than their actual clock speed.
These performance ratings were meant to show that the processor offered the same performance as a higher-clocked INTEL CPU. For example, the 133MHz K5 was sold as the K5 PR200, implying that it would match the speed of a Pentium running at 200MHz.
AMD was not the only company to use performance ratings: Cyrix used them for its 6x86 and MII CPU families, as did ST for its 6x86 and Rise Technology for its mP6 family.
Years later, AMD would once more
adopt performance ratings with the
launch of the Athlon XP in late 2001.
22 The OverClocker Issue 24 | 2013