Learning Programming Languages And Computer Systems

Hard Disk Drive The mechanism that reads and writes data on a hard disk. Hard disk drives for PCs generally have seek times of about 12 milliseconds or less. Many disk drives improve their performance through a technique called caching. There are several interface standards for passing data between a hard disk and a computer. The most common are IDE and SCSI.

Hard disk drives are sometimes called Winchester drives, Winchester being the name of one of the first popular hard disk drive technologies developed by IBM in 1973. Operating System The most important program that runs on a computer. Every general-purpose computer must have an operating system to run other programs. Operating systems perform basic tasks, such as recognizing input from the keyboard, sending output to the display screen, keeping track of files and directories on the disk, and controlling peripheral devices such as disk drives and printers. For large systems, the operating system has even greater responsibilities and powers. It is like a traffic cop -- it makes sure that different programs and users running at the same time do not interfere with each other.

The operating system is also responsible for security, ensuring that unauthorized users do not access the system. Operating systems can be classified as follows:

- multi-user: Allows two or more users to run programs at the same time. Some operating systems permit hundreds or even thousands of concurrent users.
- multiprocessing: Supports running a program on more than one CPU.
- multitasking: Allows more than one program to run concurrently.
- multithreading: Allows different parts of a single program to run concurrently (a brief sketch follows this list).
- real time: Responds to input instantly. General-purpose operating systems, such as DOS and UNIX, are not real-time.
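As a small illustration of the multithreading entry above, here is a minimal Python sketch in which two parts of one program run concurrently; the thread names, step counts, and delays are arbitrary.

    # Minimal sketch: two threads of a single program running concurrently.
    import threading
    import time

    def worker(name, delay):
        # Each thread makes progress while the other is sleeping.
        for step in range(3):
            time.sleep(delay)
            print(f"{name}: step {step}")

    t1 = threading.Thread(target=worker, args=("thread-A", 0.10))
    t2 = threading.Thread(target=worker, args=("thread-B", 0.15))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
    print("both threads finished")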

Operating systems provide a software platform on top of which other programs, called application programs, can run. The application programs must be written to run on top of a particular operating system. Your choice of operating system, therefore, determines to a great extent the applications you can run. For PCs, the most popular operating systems are DOS, OS/2, and Windows, but others are available, such as Linux. As a user, you normally interact with the operating system through a set of commands. For example, the DOS operating system contains commands such as COPY and RENAME for copying files and changing the names of files, respectively.
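As a rough illustration, a program can ask the operating system to do the same work as COPY and RENAME through a programming interface. The Python sketch below uses made-up file names; it is an analogy, not the DOS commands themselves.

    # Copy and rename files through the operating system, much as the DOS
    # COPY and RENAME commands do. The file names here are hypothetical.
    import os
    import shutil

    shutil.copy("report.txt", "report_backup.txt")    # like: COPY report.txt report_backup.txt
    os.rename("report_backup.txt", "report_old.txt")  # like: RENAME report_backup.txt report_old.txt
    print(os.path.exists("report_old.txt"))           # True if the operations succeeded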

The commands are accepted and executed by a part of the operating system called the command processor or command line interpreter. Graphical user interfaces allow you to enter commands by pointing and clicking at objects that appear on the screen. Hardware Refers to objects that you can actually touch, like disks, disk drives, display screens, keyboards, printers, boards, and chips. In contrast, software is untouchable. Software exists as ideas, concepts, and symbols, but it has no substance. Books provide a useful analogy.

The pages and the ink are the hardware, while the words, sentences, paragraphs, and the overall meaning are the software. A computer without software is like a book full of blank pages -- you need software to make the computer useful just as you need words to make a book meaningful. Software Computer instructions or data. Anything that can be stored electronically is software. The storage devices and display devices are hardware.

The terms software and hardware are used as both nouns and adjectives. For example, you can say: "The problem lies in the software", meaning that there is a problem with the program or data, not with the computer itself. You can also say: "It's a software problem". The distinction between software and hardware is sometimes confusing because they are so integrally linked. Clearly, when you purchase a program, you are buying software. But to buy the software, you need to buy the disk (hardware) on which the software is recorded.

Software is often divided into two categories:

- systems software: Includes the operating system and all the utilities that enable the computer to function.
- applications software: Includes programs that do real work for users. For example, word processors, spreadsheets, and database management systems fall under the category of applications software.

Runtime Error An error that occurs during the execution of a program. In contrast, compile-time errors occur while a program is being compiled.

Runtime errors indicate bugs in the program or problems that the designers had anticipated but could do nothing about. For example, running out of memory will often cause a runtime error. Note that runtime errors differ from bombs or crashes in that you can often recover gracefully from a runtime error.
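A small Python sketch of that distinction: the code below is perfectly valid when it is compiled (parsed), but the second call fails at run time, and the program catches the error and recovers gracefully instead of crashing. The function name safe_divide is purely illustrative.

    # A runtime error the program recovers from gracefully.
    def safe_divide(a, b):
        try:
            return a / b
        except ZeroDivisionError as err:  # raised only when the code actually runs
            print(f"recovered from runtime error: {err}")
            return None

    print(safe_divide(10, 2))  # 5.0
    print(safe_divide(10, 0))  # prints the recovery message and returns None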

Firewall A system designed to prevent unauthorized access to or from a private network. Firewalls can be implemented in hardware, in software, or in a combination of the two. Firewalls are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. All messages entering or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria. There are several types of firewall techniques:

- Packet filter: Looks at each packet entering or leaving the network and accepts or rejects it based on user-defined rules (a toy sketch appears after this list). Packet filtering is fairly effective and transparent to users, but it is difficult to configure. In addition, it is susceptible to IP spoofing.

- Application gateway: Applies security mechanisms to specific applications, such as FTP and Telnet servers. This is very effective, but can impose a performance degradation.
- Circuit-level gateway: Applies security mechanisms when a TCP or UDP connection is established. Once the connection has been made, packets can flow between the hosts without further checking.

- Proxy server: Intercepts all messages entering and leaving the network. The proxy server effectively hides the true network addresses.

In practice, many firewalls use two or more of these techniques in concert. A firewall is considered a first line of defense in protecting private information. For greater security, data can be encrypted.
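As a toy sketch of the packet-filter technique listed above, the rules and packet dictionaries below are invented for illustration; a real firewall inspects raw network traffic, not Python data structures.

    # Toy packet filter: accept or reject each packet against user-defined rules.
    import ipaddress

    RULES = [
        {"action": "reject", "src": "10.0.0.0/8", "port": 23},  # block Telnet from this range
        {"action": "accept", "src": "any", "port": 80},         # allow web traffic from anywhere
    ]

    def matches(rule, packet):
        if rule["port"] not in ("any", packet["port"]):
            return False
        if rule["src"] == "any":
            return True
        return ipaddress.ip_address(packet["src"]) in ipaddress.ip_network(rule["src"])

    def filter_packet(packet, default="reject"):
        for rule in RULES:
            if matches(rule, packet):
                return rule["action"]
        return default  # unmatched packets fall through to the default policy

    print(filter_packet({"src": "10.1.2.3", "port": 23}))     # reject
    print(filter_packet({"src": "203.0.113.5", "port": 80}))  # accept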

Hacker A slang term for a computer enthusiast, i.e., a person who enjoys learning programming languages and computer systems and can often be considered an expert on the subject(s). Among professional programmers, depending on how it is used, the term can be either complimentary or derogatory, although it is developing an increasingly derogatory connotation. The pejorative sense of hacker is becoming more prominent largely because the popular press has co-opted the term to refer to individuals who gain unauthorized access to computer systems for the purpose of stealing and corrupting data. Hackers themselves maintain that the proper term for such individuals is cracker. Cookies A message given to a Web browser by a Web server. The browser stores the message in a text file. The message is then sent back to the server each time the browser requests a page from the server.

The main purpose of cookies is to identify users and possibly prepare customized Web pages for them. When you enter a Web site using cookies, you may be asked to fill out a form providing such information as your name and interests. This information is packaged into a cookie and sent to your Web browser, which stores it for later use. The next time you go to the same Web site, your browser will send the cookie to the Web server. The server can use this information to present you with custom Web pages.

So, for example, instead of seeing just a generic welcome page you might see a welcome page with your name on it. The name cookie derives from UNIX objects called magic cookies. These are tokens that are attached to a user or program and change depending on the areas entered by the user or program.
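A minimal sketch of the exchange described above, using Python's standard http.cookies module to build the Set-Cookie header a server would send and to parse the value a browser would return; the cookie name and value are invented.

    # Server side: package user information into a cookie header.
    from http.cookies import SimpleCookie

    outgoing = SimpleCookie()
    outgoing["username"] = "alice"
    print(outgoing.output())           # Set-Cookie: username=alice

    # On the next request, the browser sends the stored value back to the server.
    incoming = SimpleCookie()
    incoming.load("username=alice")
    print(incoming["username"].value)  # "alice" -- the server can now customize the page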

Cache Pronounced cash, a special high-speed storage mechanism. It can be either a reserved section of main memory or an independent high-speed storage device. Two types of caching are commonly used in personal computers: memory caching and disk caching. A memory cache, sometimes called a cache store or RAM cache, is a portion of memory made of high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used for main memory. Memory caching is effective because most programs access the same data or instructions over and over.

By keeping as much of this information as possible in SRAM, the computer avoids accessing the slower DRAM. Some memory caches are built into the architecture of microprocessors. The Intel 80486 microprocessor, for example, contains an 8K memory cache, and the Pentium has a 16K cache. Such internal caches are often called Level 1 (L1) caches.

Most modern PCs also come with external cache memory, called Level 2 (L2) caches. These caches sit between the CPU and the DRAM. Like L1 caches, L2 caches are composed of SRAM but they are much larger. Disk caching works under the same principle as memory caching, but instead of using high-speed SRAM, a disk cache uses conventional main memory. The most recently accessed data from the disk (as well as adjacent sectors) is stored in a memory buffer.

When a program needs to access data from the disk, it first checks the disk cache to see if the data is there. Disk caching can dramatically improve the performance of applications, because accessing a byte of data in RAM can be thousands of times faster than accessing a byte on a hard disk. When data is found in the cache, it is called a cache hit, and the effectiveness of a cache is judged by its hit rate. Many cache systems use a technique known as smart caching, in which the system can recognize certain types of frequently used data. The strategies for determining which information should be kept in the cache constitute some of the more interesting problems in computer science.
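One widely used strategy is to evict the least recently used (LRU) item. The Python sketch below is a simplified model of that policy, not how a hardware or operating-system cache is actually built, and it tracks the hit rate discussed above; the capacity and the stand-in for slow storage are arbitrary.

    # Simplified LRU cache model with a hit-rate counter.
    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.data = OrderedDict()
            self.hits = 0
            self.lookups = 0

        def get(self, key, load_from_slow_storage):
            self.lookups += 1
            if key in self.data:                 # cache hit
                self.hits += 1
                self.data.move_to_end(key)       # mark as most recently used
                return self.data[key]
            value = load_from_slow_storage(key)  # cache miss: go to the slower store
            self.data[key] = value
            if len(self.data) > self.capacity:
                self.data.popitem(last=False)    # evict the least recently used item
            return value

        def hit_rate(self):
            return self.hits / self.lookups if self.lookups else 0.0

    cache = LRUCache(capacity=2)
    slow = lambda k: k * 10                      # stands in for a slow storage device
    for key in [1, 2, 1, 3, 1]:
        cache.get(key, slow)
    print(f"hit rate: {cache.hit_rate():.0%}")   # 2 of 5 lookups were hits -> 40%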

Buffer A temporary storage area, usually in RAM. The purpose of most buffers is to act as a holding area, enabling the CPU to manipulate data before transferring it to a device. Because the processes of reading and writing data to a disk are relatively slow, many programs keep track of data changes in a buffer and then copy the buffer to a disk. For example, word processors employ a buffer to keep track of changes to files. Then when you save the file, the word processor updates the disk file with the contents of the buffer. This is much more efficient than accessing the file on the disk each time you make a change to the file. Note that because your changes are initially stored in a buffer, not on the disk, all of them will be lost if the computer fails during an editing session.

For this reason, it is a good idea to save your file periodically. Most word processors automatically save files at regular intervals. Another common use of buffers is for printing documents. When you enter a PRINT command, the operating system copies your document to a print buffer (a free area in memory or on a disk) from which the printer can draw characters at its own pace. This frees the computer to perform other tasks while the printer is running in the background. Print buffering is called spooling.
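The accumulate-then-flush pattern described above looks roughly like the Python sketch below; the class name, file name, and flush threshold are all arbitrary.

    # Hold changes in memory, then write them to disk in one pass.
    class EditBuffer:
        def __init__(self, path, flush_every=100):
            self.path = path
            self.flush_every = flush_every
            self.pending = []                # changes held in RAM, not yet on disk

        def add_change(self, text):
            self.pending.append(text)
            if len(self.pending) >= self.flush_every:
                self.flush()                 # periodic save, like a word processor's autosave

        def flush(self):
            with open(self.path, "a") as f:
                f.write("".join(self.pending))
            self.pending.clear()             # the changes are now safely on disk

    buf = EditBuffer("document.txt", flush_every=2)
    buf.add_change("Hello ")
    buf.add_change("world\n")                # the second change triggers a flush to disk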

Most keyboard drivers also contain a buffer so that you can edit typing mistakes before sending your command to a program. Many operating systems, including DOS, also use a disk buffer to temporarily hold data that they have read from a disk. The disk buffer is really a cache. Pixel Short for Picture Element, a pixel is a single point in a graphic image.

Graphics monitors display pictures by dividing the display screen into thousands (or millions) of pixels, arranged in rows and columns. The pixels are so close together that they appear connected. The number of bits used to represent each pixel determines how many colors or shades of gray can be displayed. For example, in 8-bit color mode, the color monitor uses 8 bits for each pixel, making it possible to display 2 to the 8th power (256) different colors or shades of gray.
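The arithmetic behind those figures is simple enough to spell out; the Python lines below compute the number of displayable colors for a few common color depths and the pixel counts for the VGA and SVGA modes cited later in this entry.

    # Colors available at a given color depth, and pixel counts for two display modes.
    for bits in (8, 16, 24):
        print(f"{bits}-bit color: {2 ** bits:,} colors")    # 8-bit -> 256, 24-bit -> 16,777,216

    for name, (width, height) in {"VGA": (640, 480), "SVGA": (800, 600)}.items():
        print(f"{name}: {width} x {height} = {width * height:,} pixels")  # 307,200 and 480,000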

On color monitors, each pixel is actually composed of three dots -- a red, a blue, and a green one. Ideally, the three dots should all converge at the same point, but all monitors have some convergence error that can make color pixels appear fuzzy. The quality of a display system largely depends on its resolution, how many pixels it can display, and how many bits are used to represent each pixel. VGA systems display 640 by 480, or about 300,000 pixels.

In contrast, SVGA systems display 800 by 600, or 480,000 pixels. True Color systems use 24 bits per pixel, allowing them to display more than 16 million different colors. Local Area Network A computer network that spans a relatively small area. Most LANs are confined to a single building or group of buildings.

However, one LAN can be connected to other LANs over any distance via telephone lines and radio waves. A system of LANs connected in this way is called a wide-area network (WAN). Most LANs connect workstations and personal computers. Each node (individual computer) in a LAN has its own CPU with which it executes programs, but it also is able to access data and devices anywhere on the LAN. This means that many users can share expensive devices, such as laser printers, as well as data. Users can also use the LAN to communicate with each other, by sending e-mail or engaging in chat sessions.

There are many different types of LANs, Ethernet being the most common for PCs. Most Apple Macintosh networks are based on Apple's AppleTalk network system, which is built into Macintosh computers. The following characteristics differentiate one LAN from another:

- topology: The geometric arrangement of devices on the network. For example, devices can be arranged in a ring or in a straight line.
- protocols: The rules and encoding specifications for sending data. The protocols also determine whether the network uses a peer-to-peer or client/server architecture.
- media: Devices can be connected by twisted-pair wire, coaxial cables, or fiber optic cables. Some networks do without connecting media altogether, communicating instead via radio waves.

LANs are capable of transmitting data at very fast rates, much faster than data can be transmitted over a telephone line; but the distances are limited, and there is also a limit on the number of computers that can be attached to a single LAN. The Five Generations of Computers The history of computer development is often discussed in terms of the different generations of computing devices. Each generation of computer is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful, and more efficient and reliable devices. Read about each generation and the developments that led to the current devices that we use today. First Generation - 1940-1956: Vacuum Tubes The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. They were very expensive to operate and, in addition to using a great deal of electricity, generated a lot of heat, which was often the cause of malfunctions.

First generation computers relied on machine language to perform operations, and they could only solve one problem at a time. Input was based on punched cards and paper tape, and output was displayed on printouts. The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercial computer, delivered to its first client, the U.S. Census Bureau, in 1951. Second Generation - 1956-1963: Transistors Transistors replaced vacuum tubes and ushered in the second generation of computers.

The transistor was invented in 1947 but did not see widespread use in computers until the late 1950s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output. Second-generation computers moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN.

These were also the first computers that stored their instructions in their memory, which moved from a magnetic drum to magnetic core technology. The first computers of this generation were developed for the atomic energy industry. Third Generation - 1964-1971: Integrated Circuits The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers. Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.

Fourth Generation - 1971-Present: Microprocessors The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer - from the central processing unit and memory to input/output controls - on a single chip. In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors. As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet.

Fourth generation computers also saw the development of GUIs, the mouse and handheld devices. Fifth Generation - Present and Beyond: Artificial Intelligence Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today. The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Quantum computation and molecular and nanotechnology will radically change the face of computers in years to come. The goal of fifth-generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization. Are Deleted Files Completely Erased?

A common misconception when deleting files is that they are completely removed from the hard drive. However, users should be aware that highly sensitive data can still be retrieved from a hard drive even after the files have been deleted because the data is not really gone. Files that are moved to the recycle bin (on PCs) or the trash can (on Macs) stay in those folders until the user empties the recycle bin or trash can. Once they have been deleted from those folders, they are still located on the hard drive and can be retrieved with the right software. Any time that a file is deleted from a hard drive, it is not erased. What is erased is the bit of information that points to the location of the file on the hard drive.

The operating system uses these pointers to build the directory tree structure (the file allocation table), which consists of the pointers for every other file on the hard drive. When the pointer is erased, the file essentially becomes invisible to the operating system. The file still exists; the operating system just doesn't know how to find it. It is, however, relatively easy to retrieve deleted files with the right software. The only way to completely erase a file with no trace is to overwrite the data. The operating system will eventually overwrite files that have no pointers in the directory tree structure, so the longer a deleted file remains on the hard drive, the greater the probability that it has been overwritten.
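A bare-bones sketch of the overwrite-before-delete approach follows; it is illustrative only, since real secure-erase tools make multiple overwrite passes and account for file-system and drive behavior (journaling, wear leveling) that this sketch ignores, and the file name is hypothetical.

    # Overwrite a file's bytes with random data before removing it, so the
    # original contents are no longer recoverable from that disk location.
    import os

    def overwrite_and_delete(path):
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            f.write(os.urandom(size))   # replace the original bytes in place
            f.flush()
            os.fsync(f.fileno())        # push the new bytes out to the disk
        os.remove(path)                 # only now remove the file-system pointer

    # overwrite_and_delete("secret_notes.txt")   # hypothetical file name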

There are also many "file erasing" software products currently on the market that will automatically permanently erase files by overwriting them. Who Owns the Internet? No one actually owns the Internet, and no single person or organization controls the Internet in its entirety. More of a concept than an actual tangible entity, the Internet relies on a physical infrastructure that connects networks to other networks. There are many organizations, corporations, governments, schools, private citizens and service providers that all own pieces of the infrastructure, but there is no one body that owns it all. There are, however, organizations that oversee and standardize what happens on the Internet and assign IP addresses and domain names, such as the National Science Foundation, the Internet Engineering Task Force, ICANN, InterNIC and the Internet Architecture Board.

The History of the @ Sign In 1972, Ray Tomlinson sent the first electronic message, now known as e-mail, using the @ symbol to indicate the location or institution of the e-mail recipient. Tomlinson, using a Model 33 Teletype device, understood that he needed to use a symbol that would not appear in anyone's name so that there was no confusion. The logical choice for Tomlinson was the "at sign", both because it was unlikely to appear in anyone's name and also because it represented the word "at", as in a particular user is sitting @ this specific computer. However, before the symbol became a standard key on typewriter keyboards in the 1880s and a standard on QWERTY keyboards in the 1940s, the @ sign had a long if somewhat sketchy history of use throughout the world.

Linguists are divided as to when the symbol first appeared. Some argue that the symbol dates back to the 6th or 7th centuries when Latin scribes adapted the symbol from the Latin word ad, meaning at, to or toward. The scribes, in an attempt to simplify the amount of pen strokes they were using, created the ligature (combination of two or more letters) by exaggerating the upstroke of the letter "d" and curving it to the left over the "a". Other linguists will argue that the @ sign is a more recent development, appearing sometime in the 18th century as a symbol used in commerce to indicate price per unit, as in 2 chickens @ 10 pence.

While these theories are largely speculative, in 2000 Giorgio Stabile, a professor of the history of science at La Sapienza University in Italy, discovered some original 14th-century documents clearly marked with the @ sign to indicate a measure of quantity - the amphora, meaning jar. The amphora was a standard-sized terra cotta vessel used to carry wine and grain among merchants, and, according to Stabile, the use of the @ symbol (the upper-case "A" embellished in the typical Florentine script) in trade led to its contemporary meaning of "at the price of". While in the English language @ is referred to as the "at sign", other countries have different names for the symbol that is now so commonly used in e-mail transmissions throughout the world. Many of these countries associate the symbol with either food or animal names. The Difference Between the Internet and the World Wide Web Many people use the terms Internet and World Wide Web (a.k.a. the Web) interchangeably, but in fact the two terms are not synonymous. The Internet and the Web are two separate but related things.

The Internet is a massive network of networks, a networking infrastructure. It connects millions of computers together globally, forming a network in which any computer can communicate with any other computer as long as they are both connected to the Internet. Information that travels over the Internet does so via a variety of languages known as protocols. The World Wide Web, or simply Web, is a way of accessing information over the medium of the Internet. It is an information-sharing model that is built on top of the Internet. The Web uses the HTTP protocol, only one of the languages spoken over the Internet, to transmit data.

Web services, which use HTTP to allow applications to communicate in order to exchange business logic, use the Web to share information. The Web also utilizes browsers, such as Internet Explorer or Netscape, to access Web documents called Web pages that are linked to each other via hyperlinks. Web documents also contain graphics, sounds, text and video. The Web is just one of the ways that information can be disseminated over the Internet.
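To make concrete the point that HTTP is just one of the languages spoken over the Internet, here is a minimal HTTP request sent over a plain Internet socket in Python; example.com is a placeholder host reserved for documentation.

    # A minimal HTTP GET spoken over an ordinary TCP connection.
    import socket

    host = "example.com"
    with socket.create_connection((host, 80)) as sock:
        request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        sock.sendall(request.encode("ascii"))
        response = b""
        while chunk := sock.recv(4096):
            response += chunk
    print(response.decode("utf-8", errors="replace").splitlines()[0])  # e.g. "HTTP/1.1 200 OK"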

The Internet, not the Web, is also used for e-mail (which relies on SMTP), Usenet newsgroups, instant messaging, and FTP. So the Web is just a portion of the Internet, albeit a large portion, but the two terms are not synonymous and should not be confused. The Birth of the Internet While computers were not a new concept in the 1950s, there were relatively few computers in existence and the field of computer science was still in its infancy. Most of the advances in technology at the time - cryptography, radar, and battlefield communications - were due to military operations during World War II, and it was, in fact, government activities that led to the development of the Internet. On October 4, 1957, the Soviets launched Sputnik, man's first foray into outer space, and the U.S. government under President Eisenhower subsequently launched an aggressive military campaign to compete with and surpass the Soviet activities. From the launch of Sputnik and the U.S.S.R. testing its first intercontinental ballistic missile, the Advanced Research Projects Agency (ARPA) was born.

ARPA was the U.S. government's research agency for all space and strategic missile research. In 1958, NASA was formed, and the activities of ARPA moved away from aeronautics and focused mainly on computer science and information processing. One of ARPA's goals was to connect mainframe computers at different universities around the country so that they would be able to communicate using a common language and a common protocol. Thus the ARPAnet -- the world's first multiple-site computer network -- was created in 1969. The original ARPAnet eventually grew into the Internet.

The Internet was based on the concept that there would be multiple independent networks that began with the ARPAnet as the pioneering packet-switching network but would soon include packet satellite networks and ground-based packet radio networks. Some Different Types of Computers Macintosh A popular model of computer made by Apple Computer. Introduced in 1984, the Macintosh features a graphical user interface (GUI) that utilizes windows, icons, and a mouse to make it relatively easy for novices to use the computer productively. Rather than learning a complex set of commands, you need only point to a selection on a menu and click a mouse button.

Moreover, the GUI is embedded into the operating system. This means that all applications that run on a Macintosh computer have a similar user interface. Once a user has become familiar with one application, he or she can learn new applications relatively easily. The success of the Macintosh GUI heralded a new age of graphics-based applications and operating systems. The Windows interface copies many features from the Mac. There are many different Macintosh models, with varying degrees of speed and power.

All models are available in many different configurations. All models since 1994 are based on the PowerPC microprocessor. Apple Computer A personal computer company founded in 1976 by Steven Jobs and Steve Wozniak. Throughout the history of personal computing, Apple has been one of the most innovative influences.

In fact, some analysts say that the entire evolution of the PC can be viewed as an effort to catch up with the Apple Macintosh. In addition to inventing new technologies, Apple has often been the first to bring sophisticated technologies to the personal computer. Apple's innovations include:

- Graphical user interface (GUI): First introduced in 1983 on its Lisa computer. Many components of the Macintosh GUI have become de facto standards and can be found in other operating systems, such as Microsoft Windows.
- Color: The Apple II, introduced in 1977, was the first personal computer to offer color monitors.
- Built-in networking: In 1985, Apple released a new version of the Macintosh with built-in support for networking (LocalTalk).
- Plug & play expansion: In 1987, the Mac II introduced a new expansion bus called NuBus that made it possible to add devices and configure them entirely with software.
- QuickTime: In 1991, Apple introduced QuickTime, a multi-platform standard for video, sound, and other multimedia applications.
- Integrated television: In 1993, Apple released the Macintosh TV, the first personal computer with built-in television and stereo CD.
- RISC: In 1994, Apple introduced the Power Mac, based on the PowerPC RISC microprocessor.

Mainframe A very large and expensive computer capable of supporting hundreds, or even thousands, of users simultaneously. In the hierarchy that starts with a simple microprocessor (in watches, for example) at the bottom and moves to supercomputers at the top, mainframes are just below supercomputers.

In some ways, mainframes are more powerful than supercomputers because they support more simultaneous programs. But supercomputers can execute a single program faster than a mainframe. The distinction between small mainframes and minicomputers is vague, depending really on how the manufacturer wants to market its machines. Compact Disc Known by its abbreviation, CD, a compact disc is a polycarbonate disc with one or more metal layers capable of storing digital information. The most prevalent types of compact discs are those used by the music industry to store digital recordings and CD-ROMs used to store computer data. Both of these types of compact disc are read-only, which means that once the data has been recorded onto them, they can only be read, or played.

Workstation (1) A type of computer used for engineering applications (CAD/CAM), desktop publishing, software development, and other types of applications that require a moderate amount of computing power and relatively high-quality graphics capabilities. Workstations generally come with a large, high-resolution graphics screen, at least 64 MB (megabytes) of RAM, built-in network support, and a graphical user interface. Most workstations also have a mass storage device such as a disk drive, but a special type of workstation, called a diskless workstation, comes without a disk drive. The most common operating systems for workstations are UNIX and Windows NT. In terms of computing power, workstations lie between personal computers and minicomputers, although the line is fuzzy on both ends. High-end personal computers are equivalent to low-end workstations.

And high-end workstations are equivalent to minicomputers. Like personal computers, most workstations are single-user computers. However, workstations are typically linked together to form a local-area network, although they can also be used as stand-alone systems. (2) In networking, workstation refers to any computer connected to a local-area network. It could be a workstation or a personal computer. Normalization (1) In relational database design, the process of organizing data to minimize redundancy.

Normalization usually involves dividing a database into two or more tables and defining relationships between the tables. The objective is to isolate data so that additions, deletions, and modifications of a field can be made in just one table and then propagated through the rest of the database via the defined relationships. There are three main normal forms, each with increasing levels of normalization:

- First Normal Form (1NF): Each field in a table contains different information. For example, in an employee list, each table would contain only one birthdate field.
- Second Normal Form (2NF): Each field in a table that is not a determiner of the contents of another field must itself be a function of the other fields in the table.
- Third Normal Form (3NF): No duplicate information is permitted. So, for example, if two tables both require a birthdate field, the birthdate information would be separated into a separate table, and the two other tables would then access the birthdate information via an index field in the birthdate table. Any change to a birthdate would automatically be reflected in all tables that link to the birthdate table.
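A rough illustration of the Third Normal Form example above, written in Python rather than SQL and using invented employee data: the duplicated birthdate moves into its own table, and the other tables refer to it by key.

    # Before normalization: the same birthdate is stored in two places.
    payroll  = [{"emp_id": 7, "name": "Ada", "birthdate": "1990-03-01"}]
    benefits = [{"emp_id": 7, "plan": "gold", "birthdate": "1990-03-01"}]

    # After normalization: birthdates live in one table, referenced by emp_id.
    birthdates = {7: "1990-03-01"}
    payroll    = [{"emp_id": 7, "name": "Ada"}]
    benefits   = [{"emp_id": 7, "plan": "gold"}]

    birthdates[7] = "1990-03-02"   # a single update is seen everywhere it is referenced
    for row in payroll:
        print(row["name"], birthdates[row["emp_id"]])   # Ada 1990-03-02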

There are additional normalization levels, such as Boyce-Codd Normal Form (BCNF), fourth normal form (4NF) and fifth normal form (5NF). While normalization makes databases more efficient to maintain, it can also make them more complex because data is separated into so many different tables. (2) In data processing, a process applied to all data in a set that produces a specific statistical property. For example, each expenditure for a month can be divided by the total of all expenditures to produce a percentage.

(3) In programming, changing the format of a floating-point number so the left-most digit in the mantissa is not a zero.