A computer is a device or machine for making calculations or controlling operations that are expressible in numerical or logical terms. Computers are made from components that perform simple, well-defined functions. The complex interactions of these components give computers the ability to process information. If correctly configured, a computer can be made to represent some aspect of a problem or part of a system.
If a computer configured in this way is given input data, it can automatically solve the problem or predict the behavior of the system.

1 General principles
Computers can work through the movement of mechanical parts, electrons, photons, quantum particles or any other well-understood physical phenomenon. Although computers have been built out of many different technologies, nearly all popular types of computers have electronic components. Computers may directly model the problem being solved, in the sense that the problem is mapped as closely as possible onto the physical phenomena being exploited. For example, electron flows might be used to model the flow of water in a dam. Such analog computers were common in the 1960's but are now practically obsolete. In most computers today, the problem is first translated into mathematical terms by rendering all relevant information into the binary (base-two) numeral system.
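As a rough sketch of what 'rendering information into binary' means, the following fragment converts numbers and character codes into base-two strings; the function name and the eight-bit width are illustrative choices, not part of any particular machine:

```python
def to_binary(n, width=8):
    """Render a non-negative integer as a fixed-width base-two string."""
    bits = []
    for _ in range(width):
        bits.append(str(n % 2))  # remainder gives the lowest-order bit
        n //= 2                  # shift the remaining value down
    return ''.join(reversed(bits))

print(to_binary(42))                      # 00101010
# Text can be rendered the same way, one character code at a time:
print([to_binary(ord(c)) for c in 'Hi'])  # ['01001000', '01101001']
```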
Next, every operation on that information is reduced to simple Boolean algebra. Electronic circuits are then used to represent Boolean operations. Since almost all of mathematics can be reduced to Boolean operations, a sufficiently fast electronic computer is capable of solving almost any mathematical problem (and the majority of information processing problems that can be translated into mathematical ones). This basic idea, which made modern digital computers possible, was formally identified and explored by Claude E. Shannon.
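To illustrate how arithmetic can be reduced to Boolean operations, here is a minimal sketch of binary addition built only from AND, OR and NOT; the function names and the bit-list representation are invented for illustration:

```python
def xor(a, b):
    # exclusive OR, itself built from AND, OR and NOT
    return (a or b) and not (a and b)

def full_adder(a, b, carry_in):
    """Add two bits plus an incoming carry using only Boolean operations."""
    partial = xor(a, b)
    total = xor(partial, carry_in)
    carry_out = (a and b) or (partial and carry_in)
    return total, carry_out

def add(x_bits, y_bits):
    """Ripple-carry addition of two equal-length bit lists, lowest bit first."""
    result, carry = [], False
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

# 3 (binary 011) + 5 (binary 101), lowest-order bit first:
print(add([True, True, False], [True, False, True]))
# [False, False, False, True]  ->  binary 1000  ->  8
```

A physical adder circuit is just this logic realized in electronics; chaining one full adder per bit position gives addition of arbitrarily wide binary numbers.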
Computers cannot solve all mathematical problems. Alan Turing identified which problems could and could not be solved by computers, and in doing so founded theoretical computer science. When the computer has finished calculating the problem, the result must be displayed to the user as output through output devices like light bulbs, LEDs, monitors, projectors and printers. Novice users, especially children, often have difficulty understanding the important idea that the computer is only a machine, and cannot 'think' or 'understand' the words it displays. The computer is simply performing a mechanical lookup on preprogrammed tables of lines and colors, which are then translated into arbitrary patterns of light by the output device. It is the human brain which recognizes that those patterns form letters and numbers, and attaches meaning to them.
All that existing computers do is manipulate electrons that are logically equivalent to ones and zeroes; there are no known ways to successfully emulate human comprehension or self-awareness.

2 Etymology (Where the word is from)
The word was originally used to describe a person who performed calculations, and this usage is still valid. The OED2 lists 1897 as the first year in which the word was used to refer to a mechanical calculating device. By 1946 several qualifiers had been introduced by the OED2 to differentiate between the different types of machine. These qualifiers included analogue, digital and electronic. However, from the context of the citation it is obvious these terms were in use prior to 1946.

3 The exponential progress of computer development
Computing devices have doubled in capacity every 18 to 24 months since 1900.
Gordon E. Moore, co-founder of Intel, first described this property of computer development in 1965. His observation has become known as Moore's Law, although it is of course not actually a law but rather a significant trend.
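To put rough numbers on the trend, the following is simple arithmetic from the stated doubling times, not measured data:

```python
# Growth in capacity implied by a fixed doubling time over one decade.
for months in (18, 24):
    doublings = 120 / months  # doublings in 120 months (one decade)
    print(f'{months}-month doubling: about {2 ** doublings:.0f}x per decade')
# 18-month doubling: about 102x per decade
# 24-month doubling: about 32x per decade
```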
Hand-in-hand with this increase in capacity per unit cost has been an equally dramatic process of miniaturization. The first electronic computers, such as the ENIAC (announced in 1946), were huge devices that weighed tons, occupied entire rooms, and required many operators to function successfully; they could work for only a few hours at a time without errors. They were also so expensive that only governments and large research organizations could buy and use them, and they were considered so exotic that only a handful would ever be required to satisfy global demand. By contrast, modern computers are more powerful, less expensive, smaller and available in many areas. The exponential progress of computer development makes classification of computers problematic, since modern computers are much more powerful than earlier devices.

4 Classification of computers
The following sections describe different approaches to classifying computers.

4.1 Classification by intended use
o Supercomputer
o Minisupercomputer
o Mainframe computer
o Enterprise application server
o Minicomputer
o Workstation
o Personal computer (PC)
o Desktop computer
o Laptop computer
o Tablet computer
o Personal Digital Assistant (PDA)
o Personal Video Recorder (PVR), e.g. TiVo
o Wearable computer
Because of the colloquial nature of this classification approach, the meanings are ambiguous.
It is usual for only current, commonly available devices to be included. The rapid pace of computer development means new uses for computers are frequently found, and current definitions quickly become outdated. Many classes of computer that are no longer used, such as differential analyzers, are not commonly included in such lists. Other classification schemes are required to unambiguously define the word 'computer'.

4.2 Classification by implementation technology
A less ambiguous approach for classifying computing machines is by their implementation technology. The earliest computers were purely mechanical.
In the 1930's electro-mechanical components were introduced from the telecommunications industry, and in the 1940's the first purely electronic computers were constructed from thermionic valves. In the 1950's and 1960's valves were gradually replaced with transistors, and in the late 1960's and early 1970's silicon chips were adopted; they have been the backbone of computing technology ever since. This description of implementation technologies is not complete; it only covers the mainstream of development. Historically many exotic technologies have been explored and abandoned. For example, economic models have been constructed using water flowing through multiple-constricted channels, and between 1903 and 1909 Percy E. Ludgate developed a design for a programmable analytical machine based on weaving technologies in which variables were carried in shuttles.
Efforts are currently underway to develop optical computers that use light rather than electricity. The possibility that DNA can be used for computing is also being explored. One radical new area of research that could lead to computers with dramatic new capabilities is the field of quantum computing; rudimentary quantum computers have been built, but they are far from market-ready and need a great deal of further research. With the exception of quantum computers, the implementation technology of a computer is not as important for classification purposes as the features that the machine implements.

4.3 Classification by design features
Modern computers combine fundamental design features that have been developed by various contributors over many years.
These features are often independent of implementation technology. Modern computers get their overall capabilities from the way these features interact. Some of the most important design features are listed below.

4.3.1 Digital versus analog
A fundamental decision in designing a computer is whether it should be digital or analog. Digital computers process discrete numeric or symbolic values, while analog computers process continuous data signals. Since the 1940's digital computers have become by far the most common, although analog computers are still used for some specialized purposes such as robotics and cyclotron control.
Other approaches, such as pulse computing and quantum computing, may be possible but are either used for special purposes or are still experimental.

4.3.2 Binary versus decimal
A significant design development in digital computing was the introduction of binary as the internal numeral system. This removed the need for the complex carry mechanisms required by computers based on other numeral systems, such as the decimal system. The adoption of binary resulted in simplified designs for implementing arithmetic functions and logic operations.

4.3.3 Programmability
The ability to program a computer - to provide it with a set of instructions for execution - without physically reconfiguring the machine is a fundamental design feature of most computers. This feature was significantly extended when machines were developed that could dynamically control the flow of execution of the program.
This allowed computers to control the order in which instructions were executed, based on data calculated by the program as it ran. This major design advance was dramatically simplified by the introduction of binary arithmetic, which can be used to represent various logic operations.

4.3.4 Storage
During the course of a calculation it is often necessary to store values for use in later calculations. The performance of many computers is largely dictated by the speed with which they can read and write values to and from memory, and by the overall capacity of that memory. Originally memory was used only for intermediate values, but in the 1940's it was suggested that the program itself could be stored in this way. This advance led to the development of the first stored-program computers, of the type used today.
4.4 Classification by capability
Perhaps the best way to classify the various types of computing device is by their true capabilities rather than their usage, implementation technology or design features. Computers can be subdivided into three main types based on capability: single-purpose devices that can compute only one function (e.g. the Antikythera Mechanism, 87 BC, and Lord Kelvin's tide predictor, 1876), special-purpose devices that can compute a limited range of functions (e.g. Charles Babbage's Difference Engine No. 1, 1832, and Vannevar Bush's differential analyser, 1932), and general-purpose devices of the type used today. Historically the word computer has been used to describe all these types of machine, but modern colloquial usage usually restricts the term to general-purpose machines.

4.4.1 General-purpose computers
By definition, a general-purpose computer can solve any problem that can be expressed as a program and executed within the practical limits set by the storage capacity of the computer, the size of the program, the speed of program execution, and the reliability of the machine.
In 1936 Alan Turing proved that, given the right program, any general-purpose computer could emulate the behavior of any other computer. This mathematical proof was purely theoretical, as no general-purpose computers existed at the time. The implications of this proof are profound; for example, any existing general-purpose computer is theoretically able to emulate, albeit slowly, any general-purpose computer that may be built in the future. Computers with general-purpose capabilities are called Turing-complete, and this status is often used as the threshold capability that defines modern computers. However, this definition is problematic. Several computing devices with simplistic designs have been shown to be Turing-complete. The Z3, developed by Konrad Zuse in 1941, is the earliest working computer that has so far been shown to be Turing-complete (the proof was developed in 1998).
While the Z3 and possibly other early devices may be theoretically Turing-complete, they are impractical as general-purpose computers. They lie in what is humorously known as the Turing Tar-Pit - 'a place where anything is possible but nothing of interest is practical'. Modern computers are more than theoretically general-purpose; they are also practical general-purpose tools. The modern, digital, electronic, general-purpose computer was developed by many contributors over an extended period from the mid 1930's to the late 1940's. During this period many experimental machines were built that were possibly Turing-complete (the ABC, ENIAC, Harvard Mk I, Colossus, etc.). All these machines have been claimed, at one time or another, as the first computer, but they all had limited utility as general-purpose problem-solving devices and their designs have been discarded.
4.4.1.1 Stored-program computers
During the late 1940's the first design for a stored-program computer was developed and documented at the Moore School of Electrical Engineering at the University of Pennsylvania. The approach described in that document has become known as the von Neumann architecture, after its only named author, John von Neumann, although others at the Moore School essentially invented the design. The von Neumann architecture solved problems inherent in the design of the ENIAC, which was then under construction, by storing the machine's program in its own memory. Von Neumann made the design available to other researchers shortly after the ENIAC was announced in 1946. Plans were developed to implement the design at the Moore School in a machine called the EDVAC, but the EDVAC was not operational until 1953 due to technical difficulties in implementing a reliable memory.
Other research institutes, which had obtained copies of the design, solved the considerable technical problems of implementing a working memory before the Moore School team did, and implemented their own stored-program computers. In order of first successful operation, the first five stored-program computers that implemented the main features of the von Neumann architecture were:
o Manchester Mk I prototype, University of Manchester, UK, June 21, 1948
o EDSAC, Cambridge University, UK, May 6, 1949
o BINAC, United States, April 1949 or August 1949
o CSIR Mk 1, Australia, November 1949
o SEAC, US, May 9, 1950
The stored-program design defined by the von Neumann architecture finally allowed computers to readily exploit their general-purpose potential. By storing the computer's program in its own memory it became possible to rapidly 'jump' from one instruction to another based on the result of evaluating a condition defined within the program. This condition usually evaluated data values calculated by the program, and allowed programs to become highly dynamic. The design also supported the ability of a program to automatically re-write itself as it executed - a powerful feature that must be used carefully. These features are fundamental to the way modern computers work.
To be precise, most modern computers are binary, electronic, stored-program, general-purpose computing devices.

4.4.2 Special-purpose computers
The special-purpose computers that were popular in the 1930's and early 1940's have not been completely replaced by general-purpose computers. As the cost and size of computers have fallen and their capabilities have increased, it has become cost-effective to use them for special-purpose applications. Many domestic and industrial devices, including mobile telephones, video recorders and automobile ignition systems, now contain special-purpose computers. In some cases these computers are Turing-complete (video games, PDAs), but many are programmed once in the factory and seldom, if ever, reprogrammed. The program that these devices execute is often contained in a read-only memory (ROM) chip, which would need to be replaced to change the operation of the machine.
Computers embedded inside other devices are commonly referred to as microcontrollers or embedded computers.

4.4.3 Single-purpose computers
Single-purpose computers were the earliest computing devices. Given some inputs, they could calculate the result of the single function that was implemented by their mechanism. General-purpose computers have almost completely replaced single-purpose computers, and in doing so have created a completely new field of human endeavor - software development. General-purpose computers must be programmed with a set of instructions specific to the task they are required to perform, and these instructions are collectively known as computer software. The design of single-purpose computing devices, and of many special-purpose computing devices, is now a conceptual exercise that consists only of designing software.
4.5 Classification by type of operation
Computers may be classified according to the way they are operated by their users. Two main types exist: batch processing and interactive processing.

5 Computer applications
The first electronic digital computers, with their large size and cost, mainly performed scientific calculations, often to support military objectives. The ENIAC was originally designed to calculate ballistics firing tables for artillery, but it was also used to calculate neutron cross-sectional densities to help in the design of the hydrogen bomb. This calculation, performed from December 1945 through January 1946 and involving over a million punch cards of data, showed that the design would fail. (Many of the most powerful supercomputers available today are also used for nuclear weapons simulations.) The CSIR Mk I, the first Australian stored-program computer, evaluated rainfall patterns for the catchment area of the Snowy Mountains Scheme, a large hydroelectric generation project.
Others were used in cryptanalysis; for example, the world's first programmable (though not general-purpose) digital electronic computer, Colossus, was built in 1943 during World War II. Despite this early focus on scientific applications, computers were quickly used in other areas. From the beginning, stored-program computers were applied to business problems. The LEO, a stored-program computer built by J. Lyons and Co. in the United Kingdom, was operational and being used for inventory management and other purposes three years before IBM built their first commercial stored-program computer. Continual reductions in the cost and size of computers saw them adopted by ever-smaller organizations, and with the invention of the microprocessor in the 1970's it became possible to produce ever cheaper computers.
In the 1980's, personal computers became popular for many tasks, including book-keeping, writing and printing documents, and calculating forecasts and other repetitive mathematical tasks involving spreadsheets.

5.1 The Internet
In the 1970's, computer engineers at research institutions throughout the US began to link their computers together using telecommunications technology. This effort was funded by ARPA, and the computer network that resulted was called the ARPANET. The technologies that made the ARPANET possible spread and evolved.
In time, the network spread beyond academic institutions and became known as the Internet. In the 1990's, the development of World Wide Web technologies enabled non-technical people to use the Internet, and it grew rapidly to become a global communications medium.

6 How computers work
While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940's, most still use the von Neumann architecture. The functioning of such a computer is in principle quite simple.
Typically, on each clock cycle, the computer fetches instructions and data from its memory. The instructions are executed, the results are stored, and the next instruction is fetched. This procedure repeats until a halt instruction is encountered. The von Neumann architecture describes a computer with four main sections: the arithmetic and logic unit (ALU), the control circuitry, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by a bundle of wires (a 'bus') and are usually driven by a timer or clock (although other events could drive the control circuitry).

6.1 Instructions
The computer's instructions are not nearly as rich as a human language.
A computer only has a limited number of well-defined, simple instructions, but they are not ambiguous. Typical sorts of instructions supported by most computers are 'copy the contents of memory cell 5 and place the copy in cell 10', 'add the contents of cell 7 to the contents of cell 13 and place the result in cell 20', 'if the contents of cell 999 are 0, the next instruction is at cell 30'. Instructions are represented within the computer as binary code - a base two system of counting. For example, the code for one kind of 'copy' operation in the Intel line of microprocessors is 10110000. The particular instruction set that a specific computer supports is known as that computer's machine language. In practice, people do not normally write the instructions for computers directly in machine language but rather use a 'high level' programming language which is then translated into the machine language automatically by special computer programs (interpreters and compilers).
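As a toy illustration of a machine language and the fetch-execute cycle, the following sketch interprets the three example instructions above. The opcode names (COPY, ADD, JZ, HALT), the memory layout and the instruction format are hypothetical, invented for this example; they do not correspond to any real processor's instruction set:

```python
memory = [0] * 32                 # numbered cells, as described in 6.2
memory[5], memory[7], memory[13] = 99, 2, 3

# The program: each instruction is a tuple of an opcode and cell numbers.
program = [
    ('COPY', 5, 10),     # copy the contents of cell 5 into cell 10
    ('ADD', 7, 13, 20),  # add cells 7 and 13, place the result in cell 20
    ('JZ', 20, 0),       # if cell 20 holds 0, jump back to instruction 0
    ('HALT',),
]

pc = 0                            # program counter: the current instruction
while True:
    op, *args = program[pc]
    if op == 'COPY':
        src, dst = args
        memory[dst] = memory[src]
    elif op == 'ADD':
        a, b, dst = args
        memory[dst] = memory[a] + memory[b]
    elif op == 'JZ':              # conditional jump: dynamic control flow
        cell, target = args
        if memory[cell] == 0:
            pc = target
            continue
    elif op == 'HALT':
        break
    pc += 1

print(memory[10], memory[20])     # 99 5
```

Because the program here is just data held by the machine, a conditional jump such as JZ is all that is needed to give a program the dynamic, data-dependent control flow described earlier for stored-program computers.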
Some programming languages map very closely to the machine language, such as assembly language (low-level languages); at the other end, languages like Prolog are based on abstract principles far removed from the details of the machine's actual operation (high-level languages).

6.2 Memory
The memory is a sequence of numbered cells, each containing a small piece of information. The information may be an instruction telling the computer what to do, or the cell may contain data that the computer needs in order to perform an instruction. Any cell may contain either, and indeed what is at one time data might be instructions later.
In general, the contents of a memory cell can be changed at any time - it is a scratchpad rather than a stone tablet. The size of each cell, and the number of cells, varies greatly from computer to computer, and the technologies used to implement memory have varied greatly - from electromechanical relays, to mercury-filled tubes (and later springs) in which acoustic pulses were formed, to matrices of permanent magnets, to individual transistors, to integrated circuits with millions of capacitors on a single chip.

6.3 Processing (Processor)
The arithmetic and logic unit, or ALU, is the device that performs elementary operations such as arithmetic operations (addition, subtraction, and so on), logical operations (AND, OR, NOT), and comparison operations (for example, comparing the contents of two bytes for equality). This unit is where the 'real work' is done.
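A software sketch of the ALU's role, dispatching from an operation name to an elementary function; the operation names here are chosen purely for illustration and are not any processor's actual opcodes:

```python
OPERATIONS = {
    'ADD': lambda a, b: a + b,
    'SUB': lambda a, b: a - b,
    'AND': lambda a, b: a & b,        # bitwise logical AND
    'OR':  lambda a, b: a | b,        # bitwise logical OR
    'EQ':  lambda a, b: int(a == b),  # comparison for equality
}

def alu(op, a, b):
    """Perform one elementary operation on two values."""
    return OPERATIONS[op](a, b)

print(alu('ADD', 7, 13))            # 20
print(alu('AND', 0b1100, 0b1010))   # 8 (binary 1000)
print(alu('EQ', 5, 5))              # 1
```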
Newer processors include security features that mark regions of memory as data and give them special access rights; these features help to reduce attacks based on buffer overflows.

6.4 Control (Control Unit)
The control unit keeps track of which bytes in memory contain the current instruction that the computer is performing, tells the ALU what operation to perform, retrieves the information (from memory) that the ALU needs to perform it, and transfers the result back to the appropriate memory location. Once that occurs, the control unit goes to the next instruction (typically located in the next slot (memory address), unless the instruction is a jump instruction informing the computer that the next instruction is located elsewhere). When referring to memory, the current instruction may use various addressing modes to determine the relevant address in memory. Some computer motherboards also support two or more processors.
Computer servers generally make use of dual or multiple processors.

6.5 Input and output
The I/O allows the computer to obtain information from the outside world and send the results of its work back there. There is a broad range of I/O devices, from the familiar keyboards, mice, monitors, floppy disk drives, CD/DVD drives and printers to the more unusual, such as scanners, webcams and projectors. What all input devices have in common is that they encode (convert) information of some type into data which can be further processed by the digital computer system.
Output devices, on the other hand, decode the data into information that can be understood by the computer user. In this sense, a digital computer system is an example of a data processing system.
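A small illustration of this encode/decode idea, using Python's built-in UTF-8 codec to stand in for the electronics of an input and an output device:

```python
text = 'Hello'
data = text.encode('utf-8')   # input side: information encoded into data
print(list(data))             # [72, 101, 108, 108, 111] - bytes the machine processes
print(data.decode('utf-8'))   # output side: data decoded back into information
```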
6.6 Architecture
Contemporary computers put the ALU and control unit into a single integrated circuit known as the Central Processing Unit or CPU. Typically, the computer's memory is located on a few small integrated circuits near the CPU. The overwhelming majority of the computer's mass is either ancillary systems (for instance, to supply electrical power) or I/O devices. Some larger computers differ from the above model in one major respect - they have multiple CPUs and control units working simultaneously. Additionally, a few computers, used mainly for research purposes and scientific computing, have differed significantly from the above model, but they have found little commercial application because their programming model has not yet been standardized.

6.7 Programs
Computer programs are simply large lists of instructions for the computer to execute, perhaps with tables of data. Many computer programs contain millions of instructions, and many of those instructions are executed repeatedly. A typical modern PC (in the year 2003) can execute around 2-3 billion instructions per second.
Computers do not gain their extraordinary capabilities through the ability to execute complex instructions. Rather, they execute millions of simple instructions arranged by people known as 'programmers'. Good programmers develop sets of instructions to do common tasks (for instance, draw a dot on screen) and then make those sets of instructions available to other programmers. Nowadays, most computers appear to execute several programs at the same time. This is usually referred to as multitasking. In reality, the CPU executes instructions from one program, then after a short period of time switches to a second program and executes some of its instructions.
This small interval of time is often referred to as a time slice. Sharing the CPU's time between the programs in this way creates the illusion that multiple programs are being executed simultaneously, much as a movie is simply a rapid succession of still frames. The operating system is the program that usually controls this time sharing.
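A minimal sketch of time-sliced multitasking; the round-robin scheduler, the generator-based 'programs' and the slice size are all invented for illustration, and are far simpler than a real operating system scheduler:

```python
def program(name, steps):
    # A stand-in for a running program: each yield is one unit of work.
    for i in range(steps):
        yield f'{name} step {i}'

def round_robin(programs, time_slice=2):
    """Run each program for time_slice steps, then switch to the next."""
    while programs:
        current = programs.pop(0)
        for _ in range(time_slice):
            try:
                print(next(current))
            except StopIteration:
                break                 # this program has finished
        else:
            programs.append(current)  # not finished: back of the queue

round_robin([program('A', 3), program('B', 3)])
# A step 0, A step 1, B step 0, B step 1, A step 2, B step 2
```

Run as written, the output interleaves the two programs, showing how rapid switching creates the appearance of simultaneous execution.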
Some hardware, by contrast, really does execute in parallel: newer generations of CPUs support hyper-threading technologies (for example, some Intel processors), and there are also processors with more than one processing unit on a single chip; AMD calls these dual-core processors.

6.7.1 Operating system
When a computer is running, it needs a program whether or not there is useful work to do. In a typical desktop computer, this program is the operating system (OS). The operating system decides which programs are run, when, and what resources (such as memory or input/output, I/O) the programs get to use. The operating system also provides a layer of abstraction over the hardware, giving access to it through services provided to other programs, such as driver code ('drivers') that allows programmers to write programs for a machine without needing to know the intimate details of all the attached electronic devices.
Most operating systems that have hardware abstraction layers also provide a standardized user interface. The most popular OS remains the Windows family of operating systems. Most computers are very small, very inexpensive computers embedded in other machinery. These embedded systems have programs, but often lack a recognizable operating system.