Unix and Windows NT
Which server software offers complete functionality with easy installation and management? Which one provides the highest value for the cost? What kind of support and performance can be expected from each system? And, most important of all, which one is more secure?
In this paper, Microsoft Windows NT Server is compared to UNIX in the large commercial environment. The comparison focuses on reliability, compatibility, administration, performance, and security. Which system is worth the money? What can you expect from Windows NT Server out of the box, and from UNIX out of the box?
NT can communicate with many different types of computers. So can UNIX. NT can secure sensitive data and keep unauthorized users off the network. So can UNIX. Essentially, both operating systems meet the minimum requirements for an operating system functioning in a networked environment. Put briefly, UNIX can do anything that NT can do, and more.
At over 25 years old, the UNIX design has matured further than that of any other operating system deployed on a large scale. NT is fairly new, and some say it is a cheap rip-off of UNIX. But it is not cheap at all. To purchase an NT server with 50 Client Access Licenses, one will spend $4,859.00. Not so bad. But it gets much more costly than this.
This price is just for the software, but everyone knows that building a network takes a lot more than this. E-mail has become an indispensable tool and is rapidly becoming the most popular form of communication. With Windows NT, you will have to buy a separate software package in order to set up an e-mail server. Many NT-based companies use Microsoft Exchange as their mail service. It is a nice tool, but an expensive solution without great success in the enterprise environment.
Microsoft Exchange Server Enterprise Edition with 25 Client Access Licenses costs $3,549.00. UNIX operating systems come with a program called Sendmail. Other mail server packages are available for UNIX, but Sendmail is the most widely used, and it is free. Some UNIX administrators prefer Exim or qmail, since they are not as difficult to configure as Sendmail. Like Sendmail, both Exim and qmail are free and very stable, though not especially user-friendly, so they may not be the best choice for a company with many users who are not computer oriented. Either way, a stock UNIX installation can accept and deliver mail out of the box, as the sketch below illustrates.
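As a minimal illustration of that out-of-the-box capability, the following Python sketch hands a message to the local mail system over SMTP. It assumes a stock UNIX host whose MTA (Sendmail, Exim, or qmail) is listening on localhost port 25; the addresses are hypothetical.

    import smtplib
    from email.message import EmailMessage

    # Hedged example: assumes the local MTA (Sendmail, Exim, or qmail)
    # is listening on localhost:25, the default for a stock UNIX mail
    # setup. The addresses are hypothetical.
    msg = EmailMessage()
    msg["From"] = "admin@example.com"
    msg["To"] = "user@example.com"
    msg["Subject"] = "Scheduled maintenance"
    msg.set_content("The file server will be down Saturday 02:00-04:00.")

    with smtplib.SMTP("localhost") as smtp:  # port 25 by default
        smtp.send_message(msg)

No extra license or server product is involved; the mail system is simply part of the operating system.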
So why do people choose NT? NT is often chosen because many customers are unwilling to pay for the more expensive hardware required by most commercial flavors of UNIX. More important, however, is the overall cost of implementation, which includes system administration along with several other factors, such as downtime, telephone support calls, and loss of data due to unreliability. Unlike UNIX, a Windows NT Server can handle only one task well, so more systems are needed to support the same users. What about manpower?
What is it going to cost to support these systems? Because NT 4.0 lacks an enterprise directory on the scale of other systems, it requires more administrators to manage it in large enterprises. UNIX-based networks require much less manpower to maintain than NT ones. Both systems can run automated tasks, but automation is only useful when the scripts, tasks, and executables can run without human intervention, as in the sketch below.
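For example, a UNIX administrator might schedule a job like the following from cron so that it runs with no human interaction. This is a hedged sketch: the path, threshold, and addresses are hypothetical, and it again assumes a local MTA on port 25.

    import shutil
    import smtplib
    from email.message import EmailMessage

    # Unattended check: warn the administrator when /var runs low on
    # space. Intended to run from cron; nothing here needs a GUI or a
    # logged-in operator.
    total, used, free = shutil.disk_usage("/var")
    if free / total < 0.10:  # less than 10% free (hypothetical threshold)
        msg = EmailMessage()
        msg["From"] = "cron@example.com"
        msg["To"] = "admin@example.com"
        msg["Subject"] = "Disk space warning on /var"
        msg.set_content(f"Only {free // 2**20} MiB free on /var.")
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)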
So much of what runs on NT is GUI-based, and thus requires interaction with a human administrator, which rather defeats the purpose. NT servers lack remote-control and scripting capabilities (these must be purchased from third-party vendors), and their instability requires rebooting once or twice per week. This means more monitoring and, most importantly, downtime. The estimated cost of setting up an NT network in a 1,000-user environment, including hardware, software, and network management, would total about $900,000 for the first year. The annual cost of management, maintenance, and salaries for a Windows NT Server network would be around $670,000.
Is there much difference in design? NT is often described as a 'multi-user' operating system, but this is very misleading. An NT server will validate an authorized user, but once the user is logged on to the NT network, all he or she can do is access files and printers; the NT user cannot run arbitrary applications on the NT server. When a user logs in to a UNIX server, he or she can then run any application they are authorized to run, which takes a major load off the workstation.
This includes graphics-based applications, since X server software is standard issue on all UNIX operating systems. Another big difference lies in disk-related design. A hallmark of Microsoft's suite of operating systems is its antiquated use of 'drive letters,' i.e. drive C:, drive D:, etc. This schema imposes hardware-specific limitations on system administrators and users alike, and it is highly inappropriate for client/server environments, where network shares and file systems should represent hierarchies meaningful to humans. UNIX allows shared network file systems to be mounted at any point in a directory structure.
A network share can also span multiple disk drives (or even different machines!) in UNIX, allowing administrators to preserve pre-existing directory structures that are well known to users while still expanding the available disk space on the server, making such changes transparent to users. This single difference between the UNIX and Windows operating systems underscores the original intentions of their respective designers. The transparency is easy to observe, as sketched below.
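A small Python sketch of the idea: two directories in one seamless tree may sit on different devices, and users never need to know. The paths here are hypothetical examples.

    import os

    # On UNIX, /home/projects can be a separate disk or NFS server
    # mounted inside /home; the path hierarchy stays the same for
    # users. A change in st_dev marks a mount boundary.
    for path in ("/home", "/home/projects"):
        st = os.stat(path)
        print(f"{path}: device id {st.st_dev}")

    # If the device ids differ, storage was expanded under a familiar
    # path without touching any drive letter or user-visible name.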
Which system is more stable? Reliability is one of the most important aspects of a system, if not the most important. System uptime is a major concern for administrators: every minute of server downtime means lost productivity for the users. Although Windows NT rarely locks up entirely, individual programs and subsystems crash often. NT also has to be rebooted after almost every settings change, which means additional downtime. People coming to NT from Windows 95 often reboot because it is 'easier to do that than figure out what happened.' Some vendors even recommend rebooting NT every week to get rid of the random junk that running the system leaves behind.
But the gap between NT and UNIX is hardly news to value-added resellers in the outside world. 'We have a Solaris box that hasn't been rebooted in two years,' said James Domengeaux, president of Comspace.com, a Houston-based Web reseller. In comparison, NT servers are rebooted often, he said. 'That's a problem, especially in e-commerce, if you're talking transactions per second, because how many orders do you miss?' The UNIX kernel and its included software are solidly integrated to provide a time-tested, reliable server that just won't quit.
UNIX users have reported uptimes of over one year, rebooting only to install new releases. Troubleshooting problems, as well as disaster recovery, can be a very time-consuming procedure. Anyone who has ever worked with NT is familiar with the 'Blue Screen of Death,' which is often difficult to troubleshoot due to cryptic or non-existent error reporting. In addition, NT is particularly prone to virus attacks on Intel-based hardware. For operating systems on Intel hardware that must be booted from a hard drive, i.e. NT Server, the Master Boot Record of the hard drive can be the death of the operating system.
Linux, along with several other UNIX operating systems that run on Intel-based hardware, can load a compressed kernel from a boot floppy, thus avoiding this problem. What this means is that an NT Server can theoretically be crashed by a virus written ten years ago, so anyone planning to deploy an NT Server in a mission-critical environment should definitely consider this fact. UNIX also has something similar to the 'Blue Screen of Death': it is called a 'kernel panic.' It does not happen very often, which is why not many people know about it.
Do not think that UNIX never crashes; it does, but it is an extremely rare event, and it is almost always due to a hardware failure of some sort. Software-induced problems in a UNIX environment generally make themselves known over a period of time, sometimes as gradual overall performance degradation, giving the administrator plenty of time to track down the source of the problem, correct it, and stop and restart the offending process (very rarely the entire machine!). In general, a UNIX server is halted only in the following situations:
- a hardware failure occurs, for instance, a hard drive fails;
- a hardware upgrade needs to be performed;
- a lengthy power outage has occurred and the backup power supply has been exhausted;
- the kernel is being upgraded;
- a beta kernel is being tested.
If none of the above occurs, a UNIX system's uptime can be measured in years.
NT, however, cannot deliver that kind of uninterrupted service. Even if one could eliminate the 'Blue Screen of Death,' NT is hampered by its own design and its use of difficult-to-recreate proprietary binary configuration files, for example the NT registry. Which one takes better tests? In a direct comparison with identical Web server system configurations, Sun Microsystems' Solaris operating environment (Intel Platform Edition) proved more reliable and scalable and had higher overall performance than Microsoft's NT operating system, according to Technovations, the company that conducted the independent WebStone benchmarking test.
Under high loads of up to 900 simulated Web clients, the Sun Solaris operating environment achieved a near 100 percent success rate servicing HTTP requests with a Lotus Domino 4.51 server, compared to NT's 20 percent success rate on the same system. The Sun Solaris 2.5.1 operating environment also outperformed Microsoft NT 4.0 in scaling by a factor of two to one: the test showed that upon adding a second central processing unit (CPU), the performance scaling improvement of Sun Solaris averaged 60 percent, while Microsoft NT averaged only 30 percent. Additionally, Sun Solaris proved to be up to 65 percent faster on data throughput than NT. With UNIX and Windows NT running on 133 MHz PCs, UNIX ran 27% faster than Windows NT when serving static HTML content, and with API-generated content, UNIX was between 47% and 197% faster.
For CGI content, UNIX is 77% faster than Windows NT. To bulk up NT for the enterprise, Microsoft will increase beta-testing of Service Pack updates and of key server applications such as Lotus Domino, SAP R/3, and even Microsoft BackOffice on the OS. Which system is more secure? Of all the aspects of a network, security has got to be the most important, especially in a large environment. It is hard to keep data secure and to keep intruders from hacking into your network systems to access private information.
Network administrators must choose a system that will provide the highest level of security, which is not at all an easy task. Security can be discussed in terms of three general concepts: risk, threat, and vulnerability. Risk. The risk is the possibility that an intruder may succeed in accessing your local-area network via your wide-area network connectivity.
There are many possible effects of such an occurrence. In general, the possibility exists for someone to:
- Read access: read or copy information from your network.
- Write access: write to or destroy data on your network (including planting Trojan horses, viruses, and back doors).
- Denial of service: deny normal use of your network resources by consuming all of your bandwidth, CPU, or memory.
Threat. The threat is anyone with the motivation to attempt to gain unauthorized access to your network, or anyone with authorized access to your network. Therefore the threat can be anyone. Your vulnerability to the threat depends on several factors:
- Motivation: how useful access to or destruction of your network might be to someone.
- Trust: how well you can trust your authorized users, and how well trained they are to understand what is and is not acceptable use of the network, including the consequences of unacceptable use.
Vulnerability. Vulnerability is essentially a measure of how well protected your network is from someone outside attempting to gain access to it, and from someone within intentionally or accidentally giving away access or otherwise damaging the network.
Let's compare how UNIX and NT deal with these issues. UNIX uses user IDs (UIDs) to identify users and group IDs (GIDs) to identify groups.
The UNIX permissions stored with each file consist of:
- the UID of the owner;
- the GID of the owner;
- user permissions (defining read, write, and execute for the owner);
- group permissions (defining read, write, and execute for the group);
- other permissions (defining read, write, and execute for anyone else).
When performing validation, UNIX first determines whether the request comes from the file's owner, someone in the file's group, or anyone else, and then applies the user, group, or other permissions, respectively, as sketched below.
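The following Python sketch models that validation order. It is a simplification for illustration: a real kernel also consults supplementary group memberships and grants root special treatment, and the function name and signature here are hypothetical.

    import os
    import stat

    def unix_access(path, uid, gid, want_read=False, want_write=False,
                    want_exec=False):
        # Classic UNIX check: pick exactly one class of bits -- owner,
        # group, or other, in that order -- then test the requested
        # permissions against that class alone.
        st = os.stat(path)
        if uid == st.st_uid:            # request from the file's owner
            r, w, x = stat.S_IRUSR, stat.S_IWUSR, stat.S_IXUSR
        elif gid == st.st_gid:          # request from the file's group
            r, w, x = stat.S_IRGRP, stat.S_IWGRP, stat.S_IXGRP
        else:                           # anyone else
            r, w, x = stat.S_IROTH, stat.S_IWOTH, stat.S_IXOTH
        mode = st.st_mode
        return ((not want_read or bool(mode & r)) and
                (not want_write or bool(mode & w)) and
                (not want_exec or bool(mode & x)))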
NT uses security IDs (SIDs) to identify both users and groups. The NT permissions for each file consist of:
- the SID of the owner;
- the SID of the owner's primary group;
- an ACL (Access Control List) for the file.
The ACL contains one or more access control entries (ACEs). Each ACE contains a SID, indicating the user or group to which the ACE applies, and a set of permission bits. NT permission bits include the three UNIX bits -- read, write, and execute -- as well as 'change permissions' (P), 'take ownership' (O), 'delete' (D), and others. An ACE can either allow the specified permissions or deny them. When performing validation, NT walks the list of permissions granted to the user by the ACEs until a deny (or the end of the list) is reached, as sketched below. NT also has its own file system, NTFS, which was designed for NT and provides file-level security: administrators can set permissions down to an individual file. UNIX is now able to support NTFS and likewise allows file-level security.
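A matching sketch of the ACL walk, again a simplified model for illustration rather than the actual Win32 API: ACE order matters, an applicable deny stops the walk, and reaching the end of the list without collecting every requested bit means access is refused. The names and bit values are hypothetical.

    from dataclasses import dataclass

    READ, WRITE, EXECUTE, DELETE = 0x1, 0x2, 0x4, 0x8  # toy permission bits

    @dataclass
    class ACE:
        sid: str     # user or group SID this entry applies to
        allow: bool  # True for an allow ACE, False for a deny ACE
        mask: int    # permission bits the entry covers

    def nt_access(acl, user_sids, wanted):
        granted = 0
        for ace in acl:
            if ace.sid not in user_sids:
                continue                 # ACE does not apply to this user
            if not ace.allow and (ace.mask & wanted):
                return False             # an applicable deny ends the walk
            granted |= ace.mask & wanted
            if granted == wanted:
                return True              # every requested bit is allowed
        return False                     # end of list: default deny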
"Since UNIX was designed as a multi-user, multi-process platform for interconnected computers, guaranteeing security has been an issue since the beginning. The Internet and UNIX community has forced the UNIX vendors to become more and more open about security leaks in their systems. This means that UNIX vendors nowadays have to publish fixes for their software whenever a security problem is found. Microsoft still manages to be secretive about security problems; they don't tell you when a problem is found, and fixes are hard to come by. Microsoft dismisses many problems as 'user errors': if you turn on a feature that was off by default, it was you who created that security hole; you shouldn't have done that.
On UNIX we think that this feature was there for the purpose of being used anyway, and it becomes a priority to fix the problem," stated John Kirsch, a network consultant and the author of Microsoft Windows NT Server 4.0 versus UNIX. This leads to the next question: which system is easier to administer? Many people believe that NT is easier to use than it actually is, scales better than it does, and is powerful enough to do what UNIX can do. Most of this perception, however, is due to great marketing by Microsoft, and not to reality.
UNIX was designed and implemented with remote management in mind, enabling system administrators to perform management operations from another building or from across the world. Windows NT is configured so that most of the administrative programs have to be run on the physical machine, without the ability to control it remotely; to administer an NT server remotely, third-party utilities must be purchased. A typical UNIX-style remote operation is sketched below.
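For instance, checking a server's load from anywhere reduces, on UNIX, to running one command over ssh. The host name here is hypothetical; any machine reachable by ssh would do.

    import subprocess

    # Remote administration the UNIX way: run a command on another
    # machine over ssh and read its output, no console visit required.
    result = subprocess.run(
        ["ssh", "admin@fileserver.example.com", "uptime"],
        capture_output=True, text=True, check=True)
    print(result.stdout.strip())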
Conclusion. The game these days is not so much either/or; rather, it is a matter of which platform is right for a particular application, where in an enterprise UNIX is appropriate versus where NT is the better choice, and when the alternatives collectively are strong enough to trigger a switch from one to the other. Both systems have their advantages as well as their disadvantages, and both are being improved on a daily basis. The trick is not to pick one over the other, but to take advantage of both systems and make them work in the way that best fits your environment.