UPGRADING AND REPAIRING PCS 20TH EDITION PDF


The Atanasoff-Berry Computer, called the ABC, was the first to use modern digital switching techniques and vacuum tubes as switches, and it introduced the concepts of binary arithmetic and logic circuits. This was made legally official on October 19, 1973, when, following a lengthy court trial, U.S. Federal Judge Earl R. Larson held the ENIAC patent invalid and credited Atanasoff as the inventor of the electronic digital computer.

Military needs during World War II caused a great thrust forward in the evolution of computers. In 1943, Tommy Flowers completed a secret British code-breaking computer called Colossus, which was used to decode German secret messages.

Unfortunately, that work went largely uncredited because Colossus was kept secret until many years after the war. Besides code-breaking, systems were needed to calculate weapons trajectory and other military functions.

In 1946, John P. Eckert, John W. Mauchly, and their associates at the Moore School of Electrical Engineering at the University of Pennsylvania built the first large-scale electronic computer for the military, known as ENIAC (Electronic Numerical Integrator and Computer). It operated on 10-digit numbers and could multiply two such numbers at the rate of 300 products per second by finding the value of each product from a multiplication table stored in its memory.

ENIAC was about 1,000 times faster than the previous generation of electromechanical relay computers. ENIAC used approximately 18,000 vacuum tubes, occupied 1,800 square feet (167 square meters) of floor space, and consumed around 180,000 watts of electrical power. The executable instructions composing a given program were created via specified wiring and switches that controlled the flow of computations through the machine.

Although Eckert and Mauchly were originally given a patent for the electronic computer, it was later voided and the patent awarded to John Atanasoff for creating the Atanasoff-Berry Computer. Earlier, in 1945, the mathematician John von Neumann demonstrated that a computer could have a simple, fixed physical structure and yet be capable of executing any kind of computation effectively by means of proper programmed control, without changes in hardware.

In other words, you could change the program without rewiring the system. The first generation of modern programmed electronic computers to take advantage of these improvements appeared in the late 1940s. These computers included, for the first time, the use of true random access memory (RAM) for storing the parts of the program and the data that were needed quickly.

Typically, they were programmed directly in machine language, although by the mid-1950s progress had been made in several aspects of advanced programming. The standout of the era is the UNIVAC (Universal Automatic Computer), which was the first true general-purpose computer designed for both alphabetical and numerical uses.

The first-generation computers were known for using vacuum tubes in their construction. The generation to follow would use the much smaller and more efficient transistor.

From Tubes to Transistors

Any modern digital computer is largely a collection of electronic switches.

These switches are used to represent and control the routing of data elements called binary digits, or bits. Because of the on-or-off nature of the binary information and signal routing the computer uses, an efficient electronic switch was required.
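The switch-as-bit idea can be sketched in a few lines of Python (my own illustration, not from the book): each on/off switch contributes one binary digit, and a row of switches encodes a number.

```python
# A row of on/off electronic switches encodes a binary number,
# one binary digit (bit) per switch.
switches = [True, False, True, True]  # on, off, on, on -> binary 1011

value = 0
for s in switches:
    value = value * 2 + (1 if s else 0)  # shift left, append the new bit

print(value)  # binary 1011 is decimal 11
```

Every register, memory cell, and data bus in a computer is, at bottom, a wider version of this row of switches.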

The first electronic computers used vacuum tubes as switches, and although the tubes worked, they had many problems. The type of tube used in early computers was called a triode and was invented by Lee De Forest in 1906 (see Figure 1.). It consists of a cathode and a plate, separated by a control grid, suspended in a glass vacuum tube.

The cathode is heated by a red-hot electric filament, which causes it to emit electrons that are attracted to the plate. The control grid in the middle can control this flow of electrons.

By making it negative, you cause the electrons to be repelled back to the cathode; by making it positive, you cause them to be attracted toward the plate.

Figure 1. The basic triode: plate, control grid, and heated cathode.

Unfortunately, the tube was inefficient as a switch. It consumed a great deal of electrical power and gave off enormous heat—a significant problem in the earlier systems. Primarily because of the heat they generated, tubes were notoriously unreliable—in larger systems, one failed every couple of hours or so.

The invention of the transistor was one of the most important developments leading to the personal computer revolution. The transistor was invented in 1947 and announced in 1948 by Bell Laboratory engineers John Bardeen and Walter Brattain.

Bell associate William Shockley invented the junction transistor a few months later, and all three jointly shared the Nobel Prize in Physics in 1956 for inventing the transistor. The transistor, which essentially functions as a solid-state electronic switch, replaced the less-suitable vacuum tube.

Because the transistor was so much smaller and consumed significantly less power, a computer system built with transistors was also much smaller, faster, and more efficient than a computer system built with vacuum tubes. A transistor is built from layers of silicon: some of the layers contain silicon with certain impurities added by a process called doping or ion bombardment, whereas other layers include silicon dioxide (which acts as an insulator), polysilicon (which acts as an electrode), and metal to act as the wires connecting the transistor to other components.

The composition and arrangement of the different types of doped silicon allow them to act as either a conductor or an insulator, which is why silicon is called a semiconductor. Silicon doped with boron is called P-type (positive) because it lacks electrons, whereas silicon doped with phosphorus is called N-type (negative) because it has an excess of free electrons.

The gate is positioned above the P-type silicon, separating the source and drain, and is separated from the P-type silicon by an insulating layer of silicon dioxide.

Normally, there is no current flow between N-type and P-type silicon, which prevents electron flow between the source and drain. When a positive voltage is placed on the gate, the gate electrode creates a field that attracts electrons to the P-type silicon between the source and drain, forming a conductive channel that allows current to flow.

A PMOS transistor works in a similar but opposite fashion.

P-type silicon is used for the source and drain, with N-type silicon positioned between them. When a negative voltage is placed on the gate, the gate electrode creates a field that repels electrons from the N-type silicon between the source and drain.
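The switching behavior just described can be captured in a toy model (a sketch of my own, not from the book): treat each MOSFET as an ideal voltage-controlled switch, with NMOS conducting on a high gate and PMOS on a low gate. Wiring one of each together is the classic CMOS inverter.

```python
# Toy model of MOSFETs as ideal voltage-controlled switches.
def nmos_conducts(gate_high: bool) -> bool:
    # A positive gate voltage attracts electrons into the P-type
    # region, forming a channel between source and drain.
    return gate_high

def pmos_conducts(gate_high: bool) -> bool:
    # A PMOS works the opposite way: it conducts when the gate is low.
    return not gate_high

def cmos_inverter(input_high: bool) -> bool:
    # The PMOS ties the output to the supply and the NMOS ties it to
    # ground; exactly one conducts at a time, so the output is the
    # complement of the input.
    return pmos_conducts(input_high)

print(cmos_inverter(True), cmos_inverter(False))  # False True
```

Because exactly one of the pair conducts in either state, an ideal CMOS gate draws almost no current except while switching, which is the property that made dense, low-power chips possible.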

Compared to a tube, a transistor is much more efficient as a switch and can be miniaturized to microscopic scale. Since the transistor was invented, engineers have strived to make it smaller and smaller.

In 2003, NEC researchers unveiled a silicon transistor only 5 nanometers (billionths of a meter) in size. Other technologies, such as graphene and carbon nanotubes, are being explored to produce even smaller transistors, down to the molecular or even atomic scale.

In 2008, British researchers unveiled a graphene-based transistor only 1 atom thick and 10 atoms (1nm) across, and in 2010, IBM researchers created graphene transistors switching at a rate of 100 gigahertz, thus paving the way for future chips denser and faster than is possible with silicon-based designs.

Integrated Circuits

The third generation of modern computers is known for using integrated circuits (ICs) instead of individual transistors. An IC is a semiconductor circuit that contains more than one component on the same base (or substrate) material, with the components usually interconnected without wires. The first IC, built by Jack Kilby at Texas Instruments in 1958, used tiny "flying wires" to interconnect its components. However, because the flying wires had to be individually attached, this type of design was not practical to manufacture.

In 1960, Fairchild constructed the first planar IC, consisting of a flip-flop circuit with four transistors and five resistors on a circular die only about 20mm2 in size. By comparison, the Intel Core i7 quad-core processor incorporates 731 million transistors and numerous other components on a single 263mm2 die!

History of the PC

The fourth generation of the modern computer includes those that incorporate microprocessors in their designs.

Of course, part of this fourth generation of computers is the personal computer, which itself was made possible by the advent of low-cost microprocessors and memory.

Birth of the Personal Computer

In 1973, some of the first microcomputer kits based on the 8008 chip were developed. In April 1974, Intel introduced the 8080 microprocessor, which was 10 times faster than the earlier 8008 chip and addressed 64KB of memory. This was the breakthrough that the personal computer industry had been waiting for.

The Altair 8800 kit, considered by many to be the first personal computer, included an 8080 processor, a power supply, a front panel with a large number of lights, and 256 bytes (not kilobytes) of memory. Assembly back then meant you got out your soldering iron to actually finish the circuit boards—not like today, where you can assemble a system of premade components with nothing more than a screwdriver. Ed Roberts became the sole owner of MITS, the company that sold the kit, in the early 1970s, after which he designed the Altair.

The Altair included an open-architecture system bus later called the S-100 bus, so named because it became an industry standard and had 100 pins per slot. The S-100 bus's open architecture meant that anybody could develop boards to fit in these slots and interface to the system, and it ensured a high level of cross-compatibility between different boards and systems.

IBM introduced what can be called its first personal computer, the Model 5100, in 1975. Priced far above hobbyist machines such as the Altair, the IBM system was obviously not in competition for this low-cost market and did not sell as well by comparison. The Apple I, introduced in 1976, took the opposite approach: it consisted of a main circuit board screwed to a piece of plywood; a case and power supply were not included.

The microcomputer world was dominated in 1980 by two types of computer systems. One type, the Apple II, claimed a large following of loyal users and a gigantic software base that was growing at a fantastic rate.

The other type, CP/M-based systems, were built by a variety of companies and sold under various names. For the most part, however, these systems used the same software and plug-in hardware. It is interesting to note that none of these systems was PC compatible or Macintosh compatible, the two primary standards in place today. A new competitor looming on the horizon was able to see that to be successful, a personal computer needed to have an open architecture, slots for expansion, a modular design, and healthy support from both hardware and software companies other than the original manufacturer of the system.

This competitor turned out to be IBM, which was quite surprising at the time because IBM was not known for systems with these open-architecture attributes. The open architecture of the forthcoming IBM PC and the closed architecture of the forthcoming Macintosh caused a complete turnaround in the industry. IBM considered its previous system to be an intelligent programmable terminal rather than a genuine computer, even though it truly was a computer.

Because the DataMaster's built-in display and keyboard were limiting, these features became external units on the PC, although the PC keyboard layout and electrical designs were copied from the DataMaster. This copying of the bus design was possible because the PC used the same interrupt controller as the DataMaster and a similar direct memory access (DMA) controller. Also, expansion cards already designed for the DataMaster could easily be redesigned to function in the PC. This arrangement prompted the PC design team to use the Intel 8088 CPU, which offered a much larger (1MB) memory address limit and an internal 16-bit data bus, but only an 8-bit external data bus.

The 8-bit external data bus and similar instruction set enabled the 8088 to be easily interfaced into the earlier DataMaster designs. IBM brought its system from idea to delivery of functioning systems in one year by using existing designs and buying as many components as possible from outside vendors. Rather than write its own operating system, IBM licensed DOS from Microsoft after talks with Digital Research (creator of CP/M) fell through; that decision was the major factor in establishing Microsoft as the dominant force in PC software. Microsoft jumped on the opportunity left open by Digital Research and, consequently, became the largest software company in the world.

Since then, hundreds of millions of PC-compatible systems have been sold, as the original PC has grown into an enormous family of computers and peripherals.

More software has been written for this computer family than for any other system on the market. The IBM-compatible computer, for example, advanced from the original 4.77MHz 8088-based machine to today's multi-gigahertz, multicore systems that are thousands of times faster. Since the beginning of the PC industry, this pattern has held steady and, if anything, seems to be accelerating.

In 1965, while preparing a speech about growth trends in computer memory, Intel cofounder Gordon Moore began to graph the data and realized a striking trend existed. Each new chip contained roughly twice as much capacity as its predecessor, and each chip was released within 18-24 months of the previous chip.

If this trend continued, he reasoned, computing power would rise exponentially over relatively brief periods. This observation, now known as Moore's Law, was found not only to describe memory chips, but also to accurately describe the growth of processor power and disk drive storage capacity.

It has become the basis for many industry performance forecasts. As an example, in less than 40 years the number of transistors on a processor chip increased more than half a million fold, from 2,300 transistors in the 4004 processor in 1971 to well over a billion in recent processors.
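That growth factor is easy to sanity-check numerically. The short Python sketch below is my own illustration (the 2,300-transistor starting count is the commonly cited figure for Intel's 4004; the function and its name are mine, not the book's):

```python
# Project transistor counts under Moore's Law: capacity doubles
# roughly every 18-24 months.
def projected_transistors(years: float, start: int = 2300,
                          months_per_doubling: float = 24.0) -> float:
    doublings = years * 12.0 / months_per_doubling
    return start * 2.0 ** doublings

# Nineteen doublings (38 years at the slower, 24-month pace) multiply
# the count by 2**19 = 524,288 -- "more than half a million fold".
growth = projected_transistors(38) / 2300
print(round(growth))  # 524288
```

Even at the conservative end of the 18-24 month range, exponential doubling turns a few thousand transistors into more than a billion within four decades, which matches the history described above.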

IBM originated the PC-compatible standard, of course, but today it no longer sets the standards for the system it originated. More often than not, new standards in the PC industry are developed by companies and organizations other than IBM.


PC-compatible systems have thrived not only because compatible hardware can be assembled easily, but also because the most popular OS was available not from IBM but from a third party (Microsoft). Later, with the success of Windows, even more reasons would exist for software developers to write programs for PC-compatible systems.

Although Apple has failed to adopt some of the industry-standard component form factors used in PCs (rendering major components such as motherboards noninterchangeable), the PC-based Macs truly are PCs from a hardware standpoint, using all the same processors, chipsets, memory, buses, and other system architectures that PCs have been using for years. Apple could even become a real contender in the OS arena (taking market share from Microsoft) if the company would only sell its OS in an unlocked version that would run on non-Apple PCs.

There are ways to work around the hardware check (see the OSx86 Project).

After 30 years, the PC continues to thrive and prosper. With far-reaching industry support and an architecture that is continuously evolving, I would say it is a safe bet that PC-compatible systems will continue to dominate the personal computer marketplace for the foreseeable future. Many people define a personal computer as any small computer system purchased and used by an individual, which is also true.

However, while it is true that all PCs are personal computers, not all personal computers are PCs. For the true definition of what a PC is, we must look deeper. Calling something a PC implies that it is something much more specific than just any personal computer. Clearly, IBM did not invent the personal computer; however, it did invent the type of personal computer that today we call the PC.

The plans for Simon were available for purchase from Berkeley Enterprises, as well as published in a series of 13 articles in Radio-Electronics magazine from 1950 to 1951.

The reality today is that although IBM clearly designed and created the PC in 1981 and controlled the development and evolution of the PC standard for several years thereafter, IBM is no longer in control of the PC standard; that is, it does not dictate what makes up a PC today.

First, who is in control of PC software? Second, who is in control of PC hardware?

Who Controls PC Software?

Microsoft has effectively used its control of the PC OSs as leverage to also control other types of PC software, such as drivers, utilities, and applications. For example, many utility programs originally offered by independent companies, such as disk caching, disk compression, file defragmentation, file structure repair, firewalls, and even simple applications such as calculator and notepad programs, are now bundled in Windows.


Microsoft has even bundled more comprehensive applications such as web browsers, word processors, and media players, ensuring an automatic installed base for these applications—much to the dismay of companies who produce competing versions. Microsoft has also leveraged its control of the OS to integrate its own networking software and applications suites more seamlessly into the OS than others.

What Is a PC?

In what was later viewed as perhaps the most costly business mistake in history, IBM failed to secure exclusive rights to the DOS it had contracted from Microsoft, either by purchasing it outright or by an exclusive license agreement. In retrospect, that single contractual error made Microsoft the dominant software company it is today and subsequently caused IBM to lose control of the very PC standard it had created.

As a writer (of words, not software), I can appreciate what an incredible oversight this was. Imagine that a book publisher comes up with a great idea for a popular book and then contracts with an author to write it.

Then, by virtue of a poorly written contract, the author discovers that he can legally sell the same book (perhaps with a different title) to all the competitors of the original publisher. Of course, no publisher I know would allow this to happen; yet that is exactly what IBM allowed Microsoft to do back in 1981. It is interesting to note that in the PC business, software enjoys copyright protection, whereas hardware can be protected only by patents, which are much more difficult, time-consuming, and expensive to obtain.

In the case of U.S. patent law, it was difficult to patent most aspects of the IBM PC because it was designed using previously existing parts that anybody could buy off the shelf. These off-the-shelf chips made up the heart and soul of the original PC motherboard. Because the design of the original PC was not wholly patented and virtually all the parts were readily available, almost anybody could duplicate the hardware of the IBM PC.

All one had to do was buy the same chips from the same manufacturers and suppliers IBM used and design a new motherboard with a similar circuit. IBM made it even easier by publishing complete schematic diagrams of its motherboards and adapter cards in detailed and easily available technical reference manuals.

I have several of these early IBM manuals and still refer to them for specific component-level PC design information. In fact, I highly recommend these original manuals to anybody who wants to delve deeply into PC hardware design. Although they are long out of print, they do turn up in the used book market and on online auction sites such as eBay. Both Compaq and Phoenix Software (today known as Phoenix Technologies) were among the first to develop a legal way around this problem, which enabled them to functionally duplicate (but not exactly copy) software such as the BIOS.

The BIOS is defined as the core set of control software that drives the hardware devices in the system directly. These types of programs are normally called device drivers, so in essence, the BIOS is a collection of all the core device drivers used to operate and control the system hardware.

The operating system (such as DOS or Windows) uses the drivers in the BIOS to control and communicate with the various hardware and peripherals in the system. To duplicate the BIOS legally, Phoenix used two groups of software engineers, the second of which was specially screened to consist only of people who had never before seen or studied the IBM BIOS code.

The first group studied the IBM BIOS and wrote a detailed description of what it did. The second group read the description written by the first group and set out to write from scratch a new BIOS that did everything the first group had described. Reverse-engineering DOS, even with the clean-room approach, seemed to be a daunting task at the time, because DOS is much larger than the BIOS and consists of many more programs and functions. This is where Microsoft came in.

Companies such as Sony, Power Computing, Radius, and even Motorola invested millions of dollars in developing Mac clones, but shortly after these first clones were sold, Apple canceled the licensing! By canceling these licenses, Apple virtually guaranteed that its systems would not be competitive with Windows-based PCs. Along with its smaller market share come much higher system costs, fewer available software applications, and fewer options for hardware repair, replacement, and upgrades as compared to PCs.

The proprietary form factors also ensure that major components such as motherboards, power supplies, and cases are available only from Apple at very high prices, making out-of-warranty repair, replacement, and upgrades of these components not cost effective. This means that the only thing keeping Mac systems unique is the ability to run OS X.

Who Controls PC Hardware?

Although it is clear that Microsoft has always had the majority control over PC software by virtue of its control over the dominant PC OSs, what about the hardware? To me, the real question is which company has been responsible for creating and inventing new PC hardware designs, interfaces, and standards.

Some, however, surmise the correct answer—Intel.


No, not just one that says "Intel Inside" on it (which refers only to the system having an Intel processor), but a system that was designed and built by, or even purchased through, Intel. Believe it or not, many people today do have Intel PCs! Certainly this does not mean that consumers have purchased their systems from Intel, because Intel does not sell complete PCs to end users. What I am talking about are the major components inside, especially the motherboard as well as the core of the motherboard: the chipset.

By controlling the processor, Intel naturally controlled the chips necessary to integrate its processors into system designs. This naturally led Intel into the chipset business. It started its chipset business in 1989 with the Extended Industry Standard Architecture (EISA) chipset, and by 1993 it had become—along with the debut of the Pentium processor—the largest-volume major motherboard chipset supplier.

Now I imagine Intel sitting there, thinking that it makes the processor and all the other chips necessary to produce a motherboard, so why not just eliminate the middleman and make the entire motherboard, too? The answer to this, and a real turning point in the industry, came about in 1994, when Intel became the largest-volume motherboard manufacturer in the world.


After an industry downturn in 2001, Intel concentrated on its core competency of chip making and began using contract manufacturers such as Foxconn to make Intel-branded motherboards.

Regardless of which company actually manufactures the boards, the main part of any motherboard is the chipset, which contains the majority of the motherboard circuitry.

Intel controls the PC hardware standard because it controls the PC motherboard and most of the components on it. It not only makes the majority of motherboards being used in systems today, but it also supplies the majority of processors and motherboard chipsets to other motherboard manufacturers.

Intel also has had a hand in setting several recent PC hardware standards, such as the ATX motherboard form factor. ATX is still the most popular, and beginning in 1996-1997 it replaced the somewhat long-in-the-tooth IBM-designed Baby-AT form factor, which had been used since the 1980s.

Intel dominates not only the PC, but the entire worldwide semiconductor industry. According to the sales figures compiled by iSuppli, Intel has nearly twice the sales revenue of the next closest semiconductor company (Samsung) and more than six times that of competitor AMD (see Table 2.).

Table 2. Semiconductor company ranking by revenue, in millions of U.S. dollars.

As you can see by these figures, it is no wonder that a popular industry news website called The Register refers to Intel as "Chipzilla."

White-Box Systems

Many of the top-selling system manufacturers do design and make their own motherboards, especially for their higher-end systems. These companies both design and manufacture their own motherboards and buy existing boards from motherboard manufacturers.

In rare cases, they even design their own chips and chipset components for their own boards.


Although sales are high for these individual companies, a large segment of the market consists of what those in the industry call white-box systems. The white-box designation comes from the fact that historically most of the chassis used by this type of system have been white (or ivory or beige).

The great thing about white-box systems is that they use industry-standard components that are interchangeable. This interchangeability is the key to future upgrades and repairs because it ensures a plethora of interchangeable replacement parts to choose from.

For many years, I have recommended avoiding proprietary systems in favor of more industry-standard white-box systems. Companies selling white-box systems do not usually manufacture the systems; they assemble them.

That is, they buy commercially available motherboards, cases, power supplies, disk drives, peripherals, and so on, and assemble and market everything together as complete systems. Some companies such as HP and Dell manufacture some of their own systems as well as assemble some from industry-standard parts.

In particular, the HP Pavilion and Dell Dimension lines are composed largely of mainstream systems made with mostly industry-standard parts.