By hardware, they generally mean the physical computer and all the specifications associated with it. By software, they generally mean the programs, whether operating systems like Android, ChromeOS, Linux, or Windows, or database systems like Access, MongoDB, Oracle, or DB-terrific, or application programs like Facebook, Chrome, Excel, or Word. The implication is that the person knows a whole lot about one of these two things and precious little about the other. Usually, there is the further implication that it is OK to be an expert at one of these (hardware OR software) and clueless about the other. It is as if there were a big wall between the hardware (the computer and how it actually works) and the software (the programs that direct the computer to do their bidding), and that one should be content to remain on one side of that wall or the other.

The power of abstraction allows us to "usually" operate at a level where we do not have to think about the underlying layers all the time. This is a good thing. It enables us to be more productive. But if we are clueless about the underlying layers, then we are not able to take advantage of the nuances of those underlying layers when it is very important to be able to. That is not to say that you must work at the lower level of abstraction and forgo the productivity enhancements that the abstractions provide. On the contrary, you are encouraged to work at the highest level of abstraction available to you. But in doing so, if you are able to keep the underlying levels in mind at the same time, you will find yourself able to do a much better job.

As you approach your study and practice of computing, we urge you to take the approach that hardware and software are names for components of two parts of a computing system that work best when they are designed by people who take into account the capabilities and limitations of both.

Microprocessor designers who understand the needs of the programs that will execute on the microprocessor they are designing can design much more effective microprocessors than those who don't. For example, Intel, AMD, ARM, and other major producers of microprocessors recognized a few years ago that a large fraction of future programs would contain video clips as part of e-mail, video games, and full-length movies. They recognized that it would be important for such programs to execute efficiently. The result: most microprocessors today contain special hardware capability to process those video clips. Intel defined additional instructions, initially called their MMX instruction set, and developed special hardware for it. Motorola, IBM, and Apple did essentially the same thing, resulting in the AltiVec instruction set and special hardware to support it.

A similar story can be told about software designers. The designer of a large computer program who understands the capabilities and limitations of the hardware that will carry out the tasks of that program can design the program so it executes more efficiently than the designer who does not understand the nature of the hardware. One important task that almost all large software systems need to carry out is called sorting, where a number of items have to be arranged in some order. The words in a dictionary are arranged in alphabetical order. Students in a class are often graded based on a numerical order, according to their scores on the final exam. There is a large number of fundamentally different programs one can write to arrange a collection of items in order. Donald Knuth, one of the top computer scientists in the world, devoted 391 pages to the task in The Art of Computer Programming, vol. 3. Which sorting program works best is often very dependent on how much the software designer is aware of the underlying characteristics of the hardware.
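To make the sorting point concrete, here is a minimal sketch in C of one of those many possible sorting programs, insertion sort. The example is ours, not the book's; the point is that whether this program beats an alternative such as quicksort or merge sort on a given machine depends on underlying hardware characteristics, for instance how its simple sequential memory accesses interact with the cache.

```c
#include <stdio.h>

/* Insertion sort: one of many fundamentally different ways to put
   items in order. Its simple, sequential memory accesses tend to
   work well with hardware caches on small arrays; other algorithms
   win in other circumstances. */
void insertion_sort(int a[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];          /* shift larger items right */
            j--;
        }
        a[j + 1] = key;               /* drop the item into place */
    }
}

int main(void)
{
    int scores[] = { 88, 42, 97, 63, 75 };
    int n = sizeof scores / sizeof scores[0];
    insertion_sort(scores, n);
    for (int i = 0; i < n; i++)
        printf("%d ", scores[i]);     /* prints: 42 63 75 88 97 */
    printf("\n");
    return 0;
}
```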
The Bottom Line

We believe that whether your inclinations are in the direction of a computer hardware career or a computer software career, you will be much more capable if you master both. This book is about getting you started on the path to mastering both hardware and software. Although we sometimes ignore making the point explicitly when we are in the trenches of working through a concept, it really is the case that each sheds light on the other.

When you study data types, a software concept, in C (Chapter 12), you will understand how the finite word length of the computer, a hardware concept, affects our notion of data types.

When you study functions in C (Chapter 14), you will be able to tie the rules of calling a function to the hardware implementation that makes those rules necessary.

When you study recursion, a powerful algorithmic device (initially in Chapter 8 and more extensively in Chapter 17), you will be able to tie it to the hardware. If you take the time to do that, you will better understand when the additional time to execute a procedure recursively is worth it.

When you study pointer variables in C (in Chapter 16), your knowledge of computer memory will provide a deeper understanding of what pointers provide, and very importantly, when they should be used and when they should be avoided.

When you study data structures in C (in Chapter 19), your knowledge of computer memory will help you better understand what must be done to manipulate the actual structures in memory efficiently.

We realize that most of the terms in the preceding five short paragraphs may not be familiar to you yet. That is OK; you can reread this page at the end of the semester. What is important to know right now is that there are important topics in the software that are very deeply interwoven with topics in the hardware. Our contention is that mastering either is easier if you pay attention to both. Most importantly, most computing problems yield better solutions when the problem solver has the capabilities of both at his or her disposal.

1.4 A Computer System

We have used the word computer more than two dozen times in the preceding pages, and although we did not say so explicitly, we used it to mean a system consisting of the software (i.e., computer programs) that directs and specifies the processing of information and the hardware that performs the actual processing of information in response to what the software asks the hardware to do. When we say "performing the actual processing," we mean doing the actual additions, multiplications, and so forth in the hardware that are necessary to get the job done. A more precise term for this hardware is a central processing unit (CPU), or simply a processor or microprocessor. This textbook is primarily about the processor and the programs that are executed by the processor.

1.4.1 A (Very) Little History for a (Lot) Better Perspective

Before we get into the detail of how the processor and the software associated with it work, we should take a moment and note the enormous and unparalleled leaps of performance that the computing industry has made in the relatively short time computers have been around.
After all, it wasn't until the 1940s that the first computers showed their faces. One of the first computers was the ENIAC (the Electronic Numerical Integrator and Calculator), a general-purpose electronic computer that could be reprogrammed for different tasks. It was designed and built in 1943–1945 at the University of Pennsylvania by Presper Eckert and his colleagues. It contained more than 17,000 vacuum tubes. It was approximately 8 feet high, more than 100 feet wide, and about 3 feet deep (about 300 square feet of floor space). It weighed 30 tons and required 140 kW to operate. Figure 1.1 shows three operators programming the ENIAC by plugging and unplugging cables and switches.

[Figure 1.1: The ENIAC, designed and built at the University of Pennsylvania, 1943–45.]

About 40 years and many computer companies and computers later, in the 1980s, the Burroughs A series was born. One of the dozen or so 18-inch boards that comprised that machine is shown in Figure 1.2. Each board contained 50 or more integrated circuit packages. Instead of 300 square feet, it took up around 50 to 60 square feet; instead of 30 tons, it weighed about 1 ton; and instead of 140 kW, it required approximately 25 kW to operate.

[Figure 1.2: A processor board, vintage 1980s.]

Fast forward another 30 or so years and we find many of today's computers on desktops (Figure 1.3), in laptops (Figure 1.4), and most recently in smartphones (Figure 1.5). Their relative weights and energy requirements have decreased enormously, and the speed at which they process information has increased enormously as well. We estimate that the computing power in a smartphone today (i.e., how fast we can compute with a smartphone) is more than four million times the computing power of the ENIAC!

[Figure 1.3: A desktop computer. Figure 1.4: A laptop. Figure 1.5: A smartphone.]

The integrated circuit packages that comprise modern digital computers have also seen phenomenal improvement. An example of one of today's microprocessors is shown in Figure 1.6. The first microprocessor, the Intel 4004 in 1971, contained 2300 transistors and operated at 106 kHz. By 1993, those numbers had jumped to 3.1 million transistors at a frequency of 66 MHz on the Intel Pentium microprocessor, an increase in both parameters of a factor of about 1000. Today's microprocessors contain upwards of five billion transistors and can operate at upwards of 4 GHz, another increase in both parameters of about a factor of 1000.

[Figure 1.6: A microprocessor.]

This factor of one million since 1971 in both the number of transistors and the frequency at which the microprocessor operates has had very important implications. The fact that each operation can be performed in one millionth of the time it took in 1971 means the microprocessor can do one million things today in the time it took to do one thing in 1971. The fact that there are more than a million times as many transistors on a chip means we can do a lot more things at the same time today than we could in 1971.

The result of all this is that we have today computers that seem able to understand the languages people speak – English, Spanish, Chinese, for example. We have computers that seem able to recognize faces. Many see this as the magic of artificial intelligence.
We will see as we get into the details of how a computer works that much of what appears to be magic is really due to how blazingly fast very simple, mindless operations (many at the same time) can be carried out.

1.4.2 The Parts of a Computer System

When most people use the word computer, they usually mean more than just the processor (i.e., CPU) that is in charge of doing what the software directs. They usually mean the collection of parts that in combination form their computer system. Today that computer system is often a laptop (see Figure 1.4), augmented with many additional devices.

A computer system generally includes, in addition to the processor, a keyboard for typing commands; a mouse, keypad, or joystick for positioning on menu entries; a monitor for displaying information that the computer system has produced; memory for temporarily storing information; disks and USB memory sticks of one sort or another for storing information for a very long time, even after the computer has been turned off; connections to other devices such as a printer for obtaining paper copies of that information; and the collection of programs (the software) that the user wishes to execute.

All these items help computer users to do their jobs. Without a printer, for example, the user would have to copy by hand what is displayed on the monitor. Without a mouse, keypad, or joystick, the user would have to type each command, rather than simply position the mouse, keypad, or joystick.

So, as we begin our journey, which focuses on the CPU that occupies a small fraction of 1 square inch of silicon and the software that makes the CPU do our bidding, we note that the computer systems we use contain a lot of additional components.

1.5 Two Very Important Ideas

Before we leave this first chapter, there are two very important ideas that we would like you to understand, ideas that are at the core of what computing is all about.

Idea 1: All computers (the biggest and the smallest, the fastest and the slowest, the most expensive and the cheapest) are capable of computing exactly the same things if they are given enough time and enough memory. That is, anything a fast computer can do, a slow computer can do also. The slow computer just does it more slowly. A more expensive computer cannot figure out something that a cheaper computer is unable to figure out, as long as the cheaper computer can access enough memory. (You may have to go to the store to buy more memory whenever it runs out, in order to keep increasing the memory available.) All computers can do exactly the same things. Some computers can do things faster, but none can do more than any other.

Idea 2: We describe our problems in English or some other language spoken by people. Yet the problems are solved by electrons running around inside the computer. It is necessary to transform our problem from the language of humans to the voltages that influence the flow of electrons. This transformation is really a sequence of systematic transformations, developed and improved over the last 70 years, which combine to give the computer the ability to carry out what appear to be some very complicated tasks. In reality, these tasks are simple and straightforward.

The rest of this chapter is devoted to discussing these two ideas.

1.6 Computers as Universal Computational Devices

It may seem strange that an introductory textbook begins by describing how computers work.
After all, mechanical engineering students begin by studying physics, not how car engines work. Chemical engineering students begin by studying chemistry, not oil refineries. Why should computing students begin by studying computers?

The answer is that computers are different. To learn the fundamental principles of computing, you must study computers or machines that can do what computers can do. The reason for this has to do with the notion that computers are universal computational devices. Let's see what that means.

Before modern computers, there were many kinds of calculating machines. Some were analog machines—machines that produced an answer by measuring some physical quantity such as distance or voltage. For example, a slide rule is an analog machine that multiplies numbers by sliding one logarithmically graduated ruler next to another. The user can read a logarithmic "distance" on the second ruler. Some early analog adding machines worked by dropping weights on a scale. The difficulty with analog machines is that it is very hard to increase their accuracy. This is why digital machines—machines that perform computations by manipulating a fixed finite set of digits or letters—came to dominate computing.

You are familiar with the distinction between analog and digital watches. An analog watch has hour and minute hands, and perhaps a second hand. It gives the time by the positions of its hands, which are really angular measures. Digital watches give the time in digits. You can increase accuracy just by adding more digits. For example, if it is important for you to measure time in hundredths of a second, you can buy a watch that gives a reading like 10:35.16 rather than just 10:35. How would you get an analog watch that would give you an accurate reading to one one-hundredth of a second? You could do it, but it would take a mighty long second hand! When we talk about computers in this book, we will always mean digital machines.

Before modern digital computers, the most common digital machines in the West were adding machines. In other parts of the world another digital machine, the abacus, was common. Digital adding machines were mechanical or electromechanical devices that could perform a specific kind of computation: adding integers. There were also digital machines that could multiply integers. There were digital machines that could put a stack of cards with punched names in alphabetical order. The main limitation of all these machines is that they could do only one specific kind of computation. If you owned only an adding machine and wanted to multiply two integers, you had some pencil-and-paper work to do.

This is why computers are different. You can tell a computer how to add numbers. You can tell it how to multiply. You can tell it how to alphabetize a list or perform any computation you like. When you think of a new kind of computation, you do not have to buy or design a new computer. You just give the old computer a new set of instructions (a program) to carry out the new computation. This is why we say the computer is a universal computational device. Computer scientists believe that anything that can be computed can be computed by a computer, provided it has enough time and enough memory. When we study computers, we study the fundamentals of all computing. We learn what computation is and what can be computed. The idea of a universal computational device is due to Alan Turing.
Turing proposed in 1937 that all computations could be carried out by a particular kind of machine, which is now called a Turing machine. He gave a mathematical description of this kind of machine, but did not actually build one. Digital computers were not operating until several years later. Turing was more interested in solving a philosophical problem: defining computation. He began by looking at the kinds of actions that people perform when they compute; these include making marks on paper, writing symbols according to certain rules when other symbols are present, and so on. He abstracted these actions and specified a mechanism that could carry them out. He gave some examples of the kinds of things that these machines could do. One Turing machine could add two integers; another could multiply two integers.

Figure 1.7 shows what we call "black box" models of Turing machines that add and multiply. In each case, the operation to be performed is described in the box. The data elements on which to operate are shown as inputs to the box. The result of the operation is shown as output from the box. A black box model provides no information as to exactly how the operation is performed, and indeed, there are many ways to add or multiply two numbers.

[Figure 1.7: Black box models of Turing machines: an adder that takes inputs a, b and produces a + b, and a multiplier that takes inputs a, b and produces a × b.]

Turing proposed that every computation can be performed by some Turing machine. We call this Turing's thesis. Although Turing's thesis has never been proved, there does exist a lot of evidence to suggest it is true. We know, for example, that various enhancements one can make to Turing machines do not result in machines that can compute more. Perhaps the best argument to support Turing's thesis was provided by Turing himself in his original paper. He said that one way to try to construct a machine more powerful than any particular Turing machine was to make a machine U that could simulate all Turing machines. You would simply describe to U the particular Turing machine you wanted it to simulate, say a machine to add two integers, give U the input data, and U would compute the appropriate output, in this case the sum of the inputs. Turing then showed that there was, in fact, a Turing machine that could do this, so even this attempt to find something that could not be computed by Turing machines failed.

Figure 1.8 further illustrates the point. Suppose you wanted to compute g · (e + f). You would simply provide to U descriptions of the Turing machines to add and to multiply, and the three inputs e, f, and g. U would do the rest.

[Figure 1.8: Black box model of a universal Turing machine: given the machine descriptions TADD and TMUL and the inputs e, f, and g, U produces g × (e + f).]

In specifying U, Turing had provided us with a deep insight: He had given us the first description of what computers do. In fact, both a computer (with as much memory as it wants) and a universal Turing machine can compute exactly the same things. In both cases, you give the machine a description of a computation and the data it needs, and the machine computes the appropriate answer. Computers and universal Turing machines can compute anything that can be computed because they are programmable.

This is the reason that a big or expensive computer cannot do more than a small, cheap computer. More money may buy you a faster computer, a monitor with higher resolution, or a nice sound system. But if you have a small, cheap computer, you already have a universal computational device.
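To see in miniature how "describing a machine to U" can work, here is a small sketch in C of a simulator whose rule table plays the role of the machine description. Everything about it (the rule format, the unary notation, the names) is our own illustrative choice, not something from Turing's paper or from this book.

```c
#include <stdio.h>

/* A transition: in state `state` reading `read`, write `write`,
   move the head (+1 right, -1 left), and enter state `next`.
   A table of these rules is the "description" of one machine. */
typedef struct {
    int state; char read; char write; int move; int next;
} Rule;

#define HALT -1

/* The universal part: simulate ANY machine, given its rule table. */
void run(const Rule *rules, int nrules, char *tape, int head, int state)
{
    while (state != HALT) {
        int i;
        for (i = 0; i < nrules; i++)
            if (rules[i].state == state && rules[i].read == tape[head])
                break;
        if (i == nrules) return;      /* no applicable rule: stop */
        tape[head] = rules[i].write;
        head += rules[i].move;
        state = rules[i].next;
    }
}

int main(void)
{
    /* Description of one particular machine: add two unary numbers
       written as 1s around a '+' (e.g., 111+11 means 3 + 2). */
    Rule add[] = {
        { 0, '1', '1', +1, 0 },    /* skip 1s moving right         */
        { 0, '+', '1', +1, 1 },    /* join the two groups          */
        { 1, '1', '1', +1, 1 },    /* keep moving to the right end */
        { 1, ' ', ' ', -1, 2 },    /* blank found: back up one     */
        { 2, '1', ' ',  0, HALT }, /* erase one extra 1 and halt   */
    };
    char tape[] = "111+11   ";     /* 3 + 2, blank cells at end    */
    run(add, 5, tape, 0, 0);
    printf("[%s]\n", tape);        /* 3 + 2 = 5: prints [11111    ] */
    return 0;
}
```

Handing run a different rule table makes the same program behave as a different machine; that is the sense in which a programmable machine is universal.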
1.7 How Do We Get the Electrons to Do the Work?

Figure 1.9 shows the process we must go through to get the electrons (which actually do the work) to do our bidding. We call the steps of this process the "Levels of Transformation." As we will see, at each level we have choices. If we ignore any of the levels, our ability to make the best use of our computing system can be very adversely affected.

[Figure 1.9: Levels of transformation: Problems, Algorithms, Language, Machine (ISA) Architecture, Microarchitecture, Circuits, Devices.]

1.7.1 The Statement of the Problem

We describe the problems we wish to solve in a "natural language." Natural languages are languages that people speak, like English, French, Japanese, Italian, and so on. They have evolved over centuries in accordance with their usage. They are fraught with a lot of things unacceptable for providing instructions to a computer. Most important of these unacceptable attributes is ambiguity. Natural language is filled with ambiguity. To infer the meaning of a sentence, a listener is often helped by the tone of voice of the speaker, or at the very least, the context of the sentence.

An example of ambiguity in English is the sentence, "Time flies like an arrow." At least three interpretations are possible, depending on whether (1) one is noticing how fast time passes, (2) one is at a track meet for insects, or (3) one is writing a letter to the Dear Abby of Insectville. In the first case, the sentence is a simile: one is comparing the speed of time passing to the speed of an arrow that has been released. In the second case, one is telling the timekeeper to do his/her job much like an arrow would. In the third case, one is relating that a particular group of flies (time flies, as opposed to fruit flies) are all in love with the same arrow.

Such ambiguity would be unacceptable in instructions provided to a computer. The computer, electronic idiot that it is, can only do as it is told. To tell it to do something that has multiple interpretations would leave the computer not knowing which interpretation to follow.

1.7.2 The Algorithm

The first step in the sequence of transformations is to transform the natural language description of the problem to an algorithm, and in so doing, get rid of the objectionable characteristics of the natural language. An algorithm is a step-by-step procedure that is guaranteed to terminate, such that each step is precisely stated and can be carried out by the computer. There are terms to describe each of these properties.

We use the term definiteness to describe the notion that each step is precisely stated. A recipe for excellent pancakes that instructs the preparer to "stir until lumpy" lacks definiteness, since the notion of lumpiness is not precise.

We use the term effective computability to describe the notion that each step can be carried out by a computer. A procedure that instructs the computer to "take the largest prime number" lacks effective computability, since there is no largest prime number.

We use the term finiteness to describe the notion that the procedure terminates.

For every problem there are usually many different algorithms for solving that problem. One algorithm may require the fewest steps. Another algorithm may allow some steps to be performed concurrently. A computer that allows more than one thing to be done at a time can often solve the problem in less time, even though it is likely that the total number of steps to be performed has increased.
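As a concrete illustration (our example, not the book's), Euclid's algorithm for finding the greatest common divisor of two integers has all three properties: every step is precisely stated (definiteness), every step is something a computer can do (effective computability), and the remainder shrinks on every iteration, so the procedure must stop (finiteness). Here it is written as a C program, anticipating the next step in the transformation.

```c
#include <stdio.h>

/* Euclid's algorithm: each step is precisely stated (definiteness),
   each step is computable (effective computability), and b strictly
   decreases each iteration, so the loop terminates (finiteness). */
unsigned gcd(unsigned a, unsigned b)
{
    while (b != 0) {
        unsigned r = a % b;   /* remainder is always less than b */
        a = b;
        b = r;
    }
    return a;
}

int main(void)
{
    printf("%u\n", gcd(91, 35));   /* prints 7 */
    return 0;
}
```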
1.7.3 The Program

The next step is to transform the algorithm into a computer program in one of the programming languages that are available. Programming languages are "mechanical languages." That is, unlike natural languages, mechanical languages did not evolve through human discourse. Rather, they were invented for use in specifying a sequence of instructions to a computer. Therefore, mechanical languages do not suffer from failings such as ambiguity that would make them unacceptable for specifying a computer program.

There are more than 1000 programming languages. Some have been designed for use with particular applications, such as Fortran for solving scientific calculations and COBOL for solving business data-processing problems. In the second half of this book, we will use C and C++, languages that were designed for manipulating low-level hardware structures. Other languages are useful for still other purposes. Prolog is the language of choice for many applications that require the design of an expert system. LISP was for years the language of choice of a substantial number of people working on problems dealing with artificial intelligence. Pascal is a language invented as a vehicle for teaching beginning students how to program.

There are two kinds of programming languages, high-level languages and low-level languages. High-level languages are at a distance (a high level) from the underlying computer. At their best, they are independent of the computer on which the programs will execute. We say the language is "machine independent." All the languages mentioned thus far are high-level languages. Low-level languages are tied to the computer on which the programs will execute. There is generally one such low-level language for each computer. That language is called the assembly language for that computer.

1.7.4 The ISA

The next step is to translate the program into the instruction set of the particular computer that will be used to carry out the work of the program. The instruction set architecture (ISA) is the complete specification of the interface between programs that have been written and the underlying computer hardware that must carry out the work of those programs.

An analogy that may be helpful in understanding the concept of an ISA is provided by the automobile. Corresponding to a computer program, represented as a sequence of 0s and 1s in the case of the computer, is the human sitting in the driver's seat of a car. Corresponding to the microprocessor hardware is the car itself. The "ISA" of the automobile is the specification of everything the human needs to know to tell the automobile what to do, and everything the automobile needs to know to carry out the tasks specified by the human driver. For example, one element of the automobile's "ISA" is the pedal on the floor known as the brake, and its function. The human knows that if he/she steps on the brake, the car will stop. The automobile knows that if it feels pressure from the human on that pedal, the hardware of the automobile must engage those elements necessary to stop the car. The full "ISA" of the car includes the specification of the other pedals, the steering wheel, the ignition key, the gears, windshield wipers, etc. For each, the "ISA" specifies (a) what the human has to do to tell the automobile what he/she wants done, and (b) correspondingly, what the automobile will interpret those actions to mean so it (the automobile) can carry out the specified task.
The ISA of a computer serves the same purpose as the "ISA" of an automobile, except instead of the driver and the car, the ISA of a computer specifies the interface between the computer program directing the computer hardware and the hardware carrying out those directions. For example, consider the set of instructions that the computer can carry out—that is, what operations the computer can perform and where to get the data needed to perform those operations. The term opcode is used to describe the operation. The term operand is used to describe individual data values. The ISA specifies the acceptable representations for operands. They are called data types. A data type is a representation of an operand such that the computer can perform operations on that representation. The ISA specifies the mechanisms that the computer can use to figure out where the operands are located. These mechanisms are called addressing modes.

The number of opcodes, data types, and addressing modes specified by an ISA varies among different ISAs. Some ISAs have as few as a half dozen opcodes, whereas others have as many as several hundred. Some ISAs have only one data type, while others have more than a dozen. Some ISAs have one or two addressing modes, whereas others have more than 20. The x86, the ISA used in the PC, has more than 200 opcodes, more than a dozen data types, and more than two dozen addressing modes. The ISA also specifies the number of unique locations that comprise the computer's memory and the number of individual 0s and 1s that are contained in each location.

Many ISAs are in use today. The most widely known example is the x86, introduced by Intel Corporation in 1978 and currently also manufactured by AMD and other companies. Other ISAs and the companies responsible for them include ARM and THUMB (ARM), POWER and z/Architecture (IBM), and SPARC (Oracle).

The translation from a high-level language (such as C) to the ISA of the computer on which the program will execute (such as x86) is usually done by a translating program called a compiler. To translate from a program written in C to the x86 ISA, one would need a C to x86 compiler. For each high-level language and each desired target ISA, one must provide a corresponding compiler. The translation from the unique assembly language of a computer to its ISA is done by an assembler.
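To make opcodes, operands, and addressing modes concrete, here is a small sketch in C that decodes one instruction of the LC-3, the ISA used later in this book. The bit layout shown follows the standard LC-3 encoding of ADD, but treat the program itself as our illustration rather than anything prescribed by the book: bit 5 is the addressing-mode choice between naming a second source register and embedding a small constant directly in the instruction.

```c
#include <stdio.h>

/* Decode one 16-bit LC-3 ADD instruction, to show how an ISA packs
   an opcode, operands, and an addressing mode into an instruction
   word. (Bit layout per the LC-3 ISA; shown as an illustration.) */
void decode_add(unsigned short instr)
{
    unsigned opcode = (instr >> 12) & 0xF;  /* bits 15-12: operation   */
    unsigned dr     = (instr >> 9) & 0x7;   /* bits 11-9: destination  */
    unsigned sr1    = (instr >> 6) & 0x7;   /* bits 8-6: first source  */
    if (opcode != 0x1) {                    /* 0001 is ADD's opcode    */
        printf("not an ADD\n");
        return;
    }
    if (instr & 0x20) {                     /* bit 5 picks the mode    */
        int imm5 = instr & 0x1F;            /* bits 4-0: immediate     */
        if (imm5 & 0x10) imm5 -= 32;        /* sign-extend 5 bits      */
        printf("ADD R%u, R%u, #%d\n", dr, sr1, imm5);
    } else {
        unsigned sr2 = instr & 0x7;         /* bits 2-0: second source */
        printf("ADD R%u, R%u, R%u\n", dr, sr1, sr2);
    }
}

int main(void)
{
    decode_add(0x1242);   /* prints: ADD R1, R1, R2 */
    decode_add(0x1265);   /* prints: ADD R1, R1, #5 */
    return 0;
}
```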
1.7.5 The Microarchitecture

The next step is the implementation of the ISA, referred to as its microarchitecture. The automobile analogy that we used in our discussion of the ISA is also useful in showing the relationship between an ISA and a microarchitecture that implements that ISA. The automobile's "ISA" describes what the driver needs to know as he/she sits inside the automobile to make the automobile carry out the driver's wishes. All automobiles have the same "ISA." If there are three pedals on the floor, it does not matter what manufacturer produced the car, the middle one is always the brake. The one on the right is always the accelerator, and the more it is depressed, the faster the car will move. Because there is only one "ISA" for automobiles, one does not need one driver's license for Buicks and a different driver's license for Hondas.

The microarchitecture (or implementation) of the automobile's "ISA," on the other hand, is about what goes on underneath the hood. Here all automobile makes and models can be different, depending on what cost/performance tradeoffs the automobile designer made before the car was manufactured. Some automobiles come with disc brakes, others (in the past, at least) with drums. Some automobiles have eight cylinders, others run on six cylinders, and still others have only four. Some are turbocharged, some are not. Some automobiles can travel 60 miles on one gallon of gasoline, others are lucky to travel from one gas station to the next without running out of gas. Some automobiles cost 6000 US dollars, others cost 200,000 US dollars. In each case, the "microarchitecture" of the specific automobile is a result of the automobile designers' decisions regarding the tradeoffs of cost and performance. The fact that the "microarchitecture" of every model or make is different is a good reason to take one's Honda, when it is malfunctioning, to a Honda repair person, and not to a Buick repair person.

In the previous section, we identified ISAs of several computer manufacturers, including the x86 (Intel), the PowerPC (IBM and Motorola), and THUMB (ARM). Each has been implemented by many different microarchitectures. For example, the x86's original implementation in 1978 was the 8086, followed by the 80286, 80386, and 80486 in the 1980s. More recently, in 2000, Intel introduced the Pentium 4 microprocessor. Even more recently, in 2015, Intel introduced Skylake. Each of these x86 microprocessors has its own microarchitecture. The story is the same for the PowerPC ISA, with more than a dozen different microprocessors, each having its own microarchitecture.

Each microarchitecture is an opportunity for computer designers to make different tradeoffs between the cost of the microprocessor, the performance that the microprocessor will provide, and the energy that is required to power the microprocessor. Computer design is always an exercise in tradeoffs, as the designer opts for higher (or lower) performance, more (or less) energy required, at greater (or lesser) cost.

1.7.6 The Logic Circuit

The next step is to implement each element of the microarchitecture out of simple logic circuits. Here also there are choices, as the logic designer decides how best to make the tradeoffs between cost and performance. So, for example, even for an operation as simple as addition, there are several choices of logic circuits to perform the operation at differing speeds and corresponding costs.
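As one illustration of such a choice, here is a sketch in C (ours, for illustration only) that simulates a ripple-carry adder, a design built from a chain of one-bit full adders. It is among the cheaper but slower circuit choices for addition; a carry-lookahead adder spends more gates to compute the carries faster.

```c
#include <stdio.h>

/* Simulate a ripple-carry adder: 16 one-bit full adders chained
   together, each waiting for the previous stage's carry. Cheap in
   gates, slow in the worst case; carry-lookahead trades gates for
   speed. */
unsigned short ripple_add(unsigned short a, unsigned short b)
{
    unsigned short sum = 0;
    unsigned carry = 0;
    for (int i = 0; i < 16; i++) {
        unsigned ai = (a >> i) & 1;
        unsigned bi = (b >> i) & 1;
        unsigned s  = ai ^ bi ^ carry;                   /* sum bit   */
        carry = (ai & bi) | (ai & carry) | (bi & carry); /* carry out */
        sum |= (unsigned short)(s << i);
    }
    return sum;
}

int main(void)
{
    printf("%u\n", ripple_add(1234, 4321));   /* prints 5555 */
    return 0;
}
```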
1.7.7 The Devices

Finally, each basic logic circuit is implemented in accordance with the requirements of the particular device technology used. So, CMOS circuits are different from NMOS circuits, which are different, in turn, from gallium arsenide circuits.

The Bottom Line

In summary, from the natural language description of a problem to the electrons that actually solve the problem by moving from one voltage potential to another, many transformations need to be performed. If we could speak electron, or if the electrons could understand English, perhaps we could just walk up to the computer and get the electrons to do our bidding. Since we can't speak electron and they can't speak English, the best we can do is this systematic sequence of transformations. At each level of transformation, there are choices as to how to proceed. Our handling of those choices determines the resulting cost and performance of our computer.

In this book, we describe each of these transformations. We show how transistors combine to form logic circuits, how logic circuits combine to form the microarchitecture, and how the microarchitecture implements a particular ISA. In our case, the ISA is the LC-3. We complete the process by going from the English-language description of a problem to a C or C++ program that solves the problem, and we show how that C or C++ program is translated (i.e., compiled) to the ISA of the LC-3.

We hope you enjoy the ride.