The first chips that could be considered microprocessors were designed and manufactured in the late 1960s and early 1970s, including the MP944 used in the Central Air Data Computer (CADC) of the Grumman F-14.[1] Intel's 4004, released in 1971, is widely regarded as the first commercial microprocessor.[2]
Designers predominantly used MOSFET transistors with pMOS logic in the early 1970s, switching to nMOS logic after the mid-1970s. nMOS had the advantage that it could run on a single voltage, typically +5V, which simplified power supply requirements and allowed easy interfacing with the wide variety of +5V transistor-transistor logic (TTL) devices. nMOS had the disadvantage that it was more susceptible to electronic noise generated by slight impurities in the underlying silicon material, and it was not until the mid-1970s that these impurities, sodium in particular, could be removed to the required levels. At that point, around 1975, nMOS quickly took over the market.[3]
This corresponded with the introduction of new semiconductor masking systems, notably the Micralign system from Perkin-Elmer. Micralign projected an image of the mask onto the silicon wafer without ever touching it, which eliminated the earlier problem of the mask pulling away some of the photoresist when it was lifted off the surface, ruining the chips on that portion of the wafer.[4] By cutting the proportion of flawed chips from about 70% to about 10%, it reduced the cost of complex designs like early microprocessors correspondingly. Designs produced on contact aligners had cost on the order of $300 in single-unit quantities; the MOS 6502, designed specifically to take advantage of these improvements, cost only $25.[5]
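The economics follow directly from yield: a wafer costs roughly the same to process whether or not its chips work, so the cost per working chip scales with the reciprocal of the yield. The short sketch below illustrates the effect with purely hypothetical figures (the $100 wafer cost and 100-die count are assumptions chosen only to show the impact of cutting the flaw rate from 70% to 10%, not historical data):

    # Back-of-the-envelope yield economics (all figures are illustrative assumptions).
    def cost_per_working_chip(wafer_cost, dies_per_wafer, flaw_rate):
        """A processed wafer costs the same whether its chips work or not, so the
        cost of each good chip is the wafer cost spread over the good dies."""
        good_dies = dies_per_wafer * (1.0 - flaw_rate)
        return wafer_cost / good_dies

    # Hypothetical wafer: $100 to process, 100 candidate dies.
    contact_aligner = cost_per_working_chip(100.0, 100, 0.70)  # ~$3.33 per good chip
    micralign = cost_per_working_chip(100.0, 100, 0.10)        # ~$1.11 per good chip
    print(f"contact aligner: ${contact_aligner:.2f}  Micralign: ${micralign:.2f}")

Under these assumptions the cost per working chip falls by roughly a factor of three, the kind of reduction that helped make $25 parts like the 6502 feasible.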
This period also saw considerable experimentation with various word lengths. Early on, 4-bit processors like the Intel 4004 were common, simply because a wider word could not be implemented cost-effectively in the chip area available at the time, especially when the majority of chips on a wafer would be defective. As yields improved, wafer sizes grew, and feature sizes continued to shrink, more complex 8-bit designs emerged such as the Intel 8080 and 6502. 16-bit processors appeared early but were expensive; by the decade's end, lower-cost 16-bit designs like the Zilog Z8000 were becoming common. Some unusual word lengths were also produced, including 12-bit and 20-bit, often matching a design that had previously been implemented in multi-chip form in a minicomputer. These had largely disappeared by the end of the decade as minicomputers moved to 32-bit formats.
^ According to Ogdin 1975, the Fairchild PPS-25 was first delivered in 2Q 1971 and the Intel 4004 in 4Q 1971.
^ The Intel 8088 had an 8-bit external data bus, but internally used a 16-bit architecture.
^ The Motorola 68000 had a 16-bit external data bus, but internally used 32-bit registers.
1980s
As Moore's Law continued to drive the industry towards more complex chip designs, the expected widespread move from the 8-bit designs of the 1970s to 16-bit designs largely failed to materialize; instead, new 32-bit designs like the Motorola 68000 and National Semiconductor NS32000 emerged that offered far more performance. The only widespread use of 16-bit designs was in the IBM PC, which had selected the 1979 Intel 8088 before the new 32-bit designs had matured.
Another change was the move to CMOS gates as the primary method of building complex CPUs. CMOS had been available since the early 1970s; RCA introduced the COSMAC processor using CMOS in 1975.[37] Whereas earlier nMOS and pMOS logic built each "gate" from a single type of transistor, CMOS paired complementary n-type and p-type devices, essentially doubling the transistor count and making chips more expensive to build. Its advantage was that only one device in each complementary pair conducts at a time, so a CMOS gate draws significant current only while switching, allowing it to operate at much lower power levels.[citation needed] As processor complexity continued to grow, power dissipation had become a significant concern and chips were prone to overheating; CMOS greatly reduced this problem and quickly took over the market.[38] The transition was aided by the early uptake of CMOS among Japanese firms while US firms remained on nMOS, giving the Japanese industry a major advantage during the 1980s.[39]
Semiconductor fabrication techniques continued to improve throughout the decade. The Micralign, which had "created the modern IC industry", was obsolete by the early 1980s. It was replaced by the new steppers, which used reduction optics and extremely powerful light sources to project a larger mask onto the wafer at ever-smaller feature sizes, exposing one section of the wafer at a time. This technology allowed the industry to break below the former 1-micron limit.
Key home computers in the early part of the decade predominantly used processors developed in the 1970s. Versions of the 6502, first released in 1975, powered the Commodore 64, Apple II, BBC Micro, and Atari 8-bit computers. The 8-bit Zilog Z80 (1976) was at the core of the ZX Spectrum, MSX systems and many others. The 8088-based IBM PC, launched in 1981, started the move to 16-bit, but was soon surpassed by the 68000-based 16/32-bit Macintosh, then the Atari ST and Amiga. IBM PC compatibles moved to 32-bit with the introduction of the Intel 80386 in late 1985, although 386-based systems were still quite expensive at the time.
In addition to ever-growing word lengths, microprocessors began to incorporate functional units that had previously been optional external parts. By the middle of the decade, memory management units (MMUs) were becoming commonplace, appearing on designs like the Intel 80286 and Motorola 68030. By the end of the decade, floating-point units (FPUs) were being added as well, first appearing on the Intel 486 in 1989 and followed the next year by the Motorola 68040.
Another change that began during the 1980s involved overall design philosophy, with the emergence of the reduced instruction set computer, or RISC. Although the concept was first developed by IBM in the 1970s, the company did not introduce powerful systems based on it, largely for fear of cannibalizing its sales of larger mainframe systems. Market introduction was instead driven by smaller companies such as MIPS Computer Systems, Sun Microsystems with SPARC, and Acorn with ARM. These companies did not have access to high-end fabrication like Intel and Motorola, yet were able to introduce chips that were highly competitive with those companies' offerings at a fraction of the complexity. By the end of the decade, every major vendor was introducing a RISC design of its own, such as the IBM POWER, Intel i860 and Motorola 88000.
1990s
The 32-bit microprocessor dominated the consumer market in the 1990s. Processor clock speeds increased more than tenfold between 1990 and 1999, and 64-bit processors began to emerge later in the decade. During this period, processors also stopped sharing a single clock speed with their RAM: communication with RAM and other components took place over a front-side bus (FSB) running at one clock speed, while the processor core itself ran at a multiple of the FSB speed. Intel's Pentium III, for example, had an internal clock speed of 450–600 MHz and an FSB speed of 100–133 MHz. Only the processor's internal clock speed is shown here.
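As a minimal illustration of the multiplier relationship described above (the 4.5x multiplier below is an assumption chosen so the arithmetic matches the Pentium III figures quoted here, not a catalogue of actual part numbers):

    # Core clock = front-side bus (FSB) clock x multiplier (illustrative figures only).
    def core_clock_mhz(fsb_mhz, multiplier):
        return fsb_mhz * multiplier

    print(core_clock_mhz(100.0, 4.5))   # 450.0  -> a 450 MHz part on a 100 MHz FSB
    print(core_clock_mhz(133.33, 4.5))  # ~600   -> a 600 MHz part on a 133 MHz FSB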
2000s
64-bit processors became mainstream in the 2000s. Microprocessor clock speeds hit a ceiling imposed by heat dissipation[citation needed]. Instead of resorting to expensive and impractical cooling systems, manufacturers turned to parallel computing in the form of the multi-core processor. Overclocking had its roots in the 1990s, but came into its own in the 2000s, when off-the-shelf cooling systems designed for overclocked processors became common and the gaming PC emerged as a distinct market. Over the decade, transistor counts increased by about an order of magnitude, continuing the trend of previous decades. Process sizes decreased about fourfold, from 180 nm to 45 nm.
2020s
A newer trend is the multi-chip module built from several chiplets: multiple monolithic dies combined in a single package. This allows a higher overall level of integration using several smaller dies that are individually easier to manufacture than one large chip.
^ a b David Russell (February 1978). "Microprocessor survey". Microprocessors. 2 (1): 13–20, see p. 18. doi:10.1016/0308-5953(78)90071-5.
^ Allen Kent; James G. Williams, eds. (1990). "Evolution of Computerized Maintenance Management to Generation of Random Numbers". Encyclopedia of Microcomputers. Vol. 7. Marcel Dekker. p. 336. ISBN 0-8247-2706-1.
^ Hans Hoffman; John Nemec (April 1977). "A fast microprocessor for control applications". Euromicro Newsletter. 3 (3): 53–59. doi:10.1016/0303-1268(77)90010-4.
^ a b Kimura S, Komoto Y, Yano Y (1988). "Implementation of the V60/V70 and its FRM function". IEEE Micro. 8 (2): 22–36. doi:10.1109/40.527. S2CID 9507994.
^ C Green; P Gülzow; L Johnson; K Meinzer; J Miller (Mar–Apr 1999). "The Experimental IHU-2 Aboard P3D". Amsat Journal. 22 (2). "The first processor using these principles, called ARM-1, was fabricated by VLSI in April 1985, and gave startling performance for the time, whilst using barely 25,000 transistors."
^ Inayoshi H, Kawasaki I, Nishimukai T, Sakamura K (1988). "Realization of Gmicro/200". IEEE Micro. 8 (2): 12–21. doi:10.1109/40.526. S2CID 36938046.
^ Moore CR, Balser DM, Muhich JS, East RE (1992). "IBM Single Chip RISC Processor (RSC)" (PDF). Proceedings of the 1991 IEEE International Conference on Computer Design on VLSI in Computer & Processors. IEEE Computer Society. pp. 200–4. ISBN 0-8186-3110-4. Archived from the original (PDF) on 2013-10-04. Retrieved 2008-11-15.