Multiple Player Network Simulation

Improving Network Performance

Nowadays, it seems that everyone has a computer and is discovering that communication technologies are necessary. E-mail, the Internet, and file transfer have become part of the modern world. Networks allow people to connect their computers together and to share resources. They allow people to communicate and interact with each other. The days of the lone PC are diminishing. At the same time, computers are getting faster than ever.

The most powerful PC of five years ago couldn't be sold for half of its original price today. This poses some problems for the consumer. New technologies can't be driven by older technology. As innovations continue to appear, the computer of yesterday is quickly outpaced by the technology of today. While processor and memory speeds have been increasing astronomically, the speed of common networks has not. Networks tend to progress more slowly because of the large capital investment needed to implement one.

Tearing out the old network to install a new one is just not feasible in many cases. Because upgrading the hardware on the network is so difficult and uneconomical, many ways have been invented to increase the perceived speed of the network. Computer games have traditionally driven the market in regard to processor power and speed. Running a spreadsheet or a word processor on a 386 is not unthinkable, as these 'business' applications have few real-time processing requirements.

In contrast, the goal of a computer game is to immerse the player in an environment that is suitably realistic, so that it is intuitive and fun. Making a game 'suitably realistic' is what pushes technology to its limits. With a fast enough computer, modeling every aspect of our physical world is conceivably possible. Combining the two ideas of interactive computer games and interactive communications seems only natural. A multiple player simulation extends interaction beyond the single player simulation, but it introduces increased programming complexity.

Multiple programs on different machines have to be managed, and the participants must coordinate communication among themselves (Gossweiler 1). This paper will cover these aspects of network performance, specifically in regard to the computer game that is the accompanying senior project. The speed of the underlying communications hardware accounts for much of the speed concern. Different media can range from 9,600 bits per second to a hundred megabits per second. Modems, a popular class of communications device, typically have throughput rates from 9600 baud (see Glossary) to 28800 baud. Modems use an RS-232, or serial port, connection.

One solution to this transmission-speed limit is the parallel, or Centronics, port. This port transmits eight bits at a time instead of one, which is how the serial and parallel ports got their names. Because it sends eight bits at a time, a parallel port can achieve transfer rates of up to 40,000 bits, or 5,000 characters, a second (Seyer 64). Parallel ports are commonly used to connect to printers and external storage devices. The main problem with modems and parallel ports is that only two computers can be connected at one time. Networks were invented to solve this.

They allow a group of computers to communicate and to share resources such as hard disk space and printers. Different types of networks have different throughput rates, but all are higher than those of either serial or parallel ports. The network used at Banks High School is an Ethernet network with software from Novell. This network can support transfer rates of up to ten megabits per second (Bennett 1). Although this seems like a lot, remember that all of the computers on the network are using this connection at the same time, so each computer has to share it with every other computer. There are a few ways to speed up a slow Ethernet network, however.

Splitting the responsibility from one server to many is one way, and using more than one network card in a server is another (Hartman interview). Both techniques are aimed at reducing the number of packet collisions. Other, faster types of networks are available, though. Token Ring networks are capable of rates up to 16 megabits per second, but they are much more complicated and expensive. 'Most experts agree that, although Token-Ring networks offer slightly higher throughput (16 Mbps) compared to Ethernet's 10 Mbps rate, the performance improvement isn't worth the much higher cost' (qtd. in Bennett 1). According to Gary Hartman, Token Ring networks are efficient because their packets never collide, and collisions are one of the major causes of low throughput (Hartman interview).

Because these networks are so expensive, and because of the need for even greater speed, a group of companies banded together to form the Fast Ethernet Alliance. The Fast Ethernet Alliance is composed of fifteen large companies, including 3Com, Digital Equipment Corp., Intel, National Semiconductor, Sun Microsystems, and SynOptics (Bennett 1). These companies joined together to develop a standard for a 100 megabit per second Ethernet network. They are calling this specification '100BASE-X', sometimes also '4T+' (Bennett 1). Unfortunately, it is unlikely that companies will be able to upgrade their standard Ethernet networks to 100BASE-X, so we will probably not see it take over for some time to come. There are other ways to increase network throughput than simply buying faster hardware.

Although most methods of optimization depend on the application, there are a number of general ways to increase performance across a wide variety of applications. These methods range from simple and obvious to more advanced and internally complex. The simplest way to speed up an application is to reduce its network traffic: that is, to send as few packets as possible while the application still does its job. Speed and reliability are often at odds with each other, so a programmer must frequently decide between the two.

A protocol that guarantees that a packet will be delivered (and delivered in the same order it was sent) carries more overhead, and consequently runs much slower (Gossweiler 12). A protocol that does full error checking and synchronization typically runs as much as ten times slower than the equivalent 'best try' protocols that do not guarantee delivery. Another network characteristic is that sending two short packets takes longer than grouping the two pieces of information together and sending one larger packet. Because of this, a program should gather all of the data it needs to send into one place and transmit it as one packet.

The decoding program would then break the packet apart into its respective pieces and process each one. An example would be a database server that accepts queries and returns the matches. If a program requests two queries, the database server can look them both up and return them in the same packet, saving network bandwidth and reducing the time the transfer takes. A sketch of this kind of batching is shown below.
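As a concrete illustration (not taken from the project itself), here is a minimal C sketch of the batching idea: several small updates are packed into one buffer on the sending side, and the receiver walks the buffer and processes each piece. The record layout and field names are invented for the example.

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* A hypothetical update record; the real game's packet layout would differ. */
    struct update {
        uint16_t object_id;
        int16_t  x, y;                            /* position of the object     */
    };

    /* Sender: pack several updates into one buffer: [count][record][record]... */
    size_t pack_updates(const struct update *ups, uint8_t count, uint8_t *buf)
    {
        size_t off = 0;
        buf[off++] = count;                       /* how many records follow    */
        memcpy(buf + off, ups, count * sizeof *ups);
        return off + count * sizeof *ups;         /* total bytes to transmit    */
    }

    /* Receiver (the "decoding program"): walk the buffer, process each record. */
    void unpack_updates(const uint8_t *buf)
    {
        uint8_t count = buf[0];
        size_t off = 1;
        for (uint8_t i = 0; i < count; i++) {
            struct update u;
            memcpy(&u, buf + off, sizeof u);      /* copy to avoid unaligned reads */
            off += sizeof u;
            printf("object %d moved to (%d, %d)\n", u.object_id, u.x, u.y);
        }
    }

    int main(void)
    {
        struct update batch[2] = { { 1, 100, 200 }, { 2, -50, 75 } };
        uint8_t buf[256];
        size_t len = pack_updates(batch, 2, buf);
        printf("one packet of %zu bytes instead of two\n", len);
        unpack_updates(buf);    /* in the real game this runs on the receiver */
        return 0;
    }

Sending one buffer of a few dozen bytes costs one packet's worth of network overhead; sending each record by itself would cost that overhead once per record.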

The network-based game being developed with the techniques in this paper has a number of optimizations that apply to it but would not apply well to all projects. The program is based on a number of players at different computer stations on a network. Each player controls a robot space ship and shares a virtual world with the other players. This means that all of the ships of the participating players show up on each other's screens, and the network is used to send updates about objects that are created or moved on a computer. For example, when a player moves their ship, a packet is sent to notify all of the other players of the change. If the player shoots, a bullet object is created and the other computers are told to update their databases. To reduce the number of packets sent, a process called 'dead reckoning' can be applied (Gossweiler 5). The principle of this technique is that an update should only be sent when something changes state. Instead of sending just the position of the object during the update, dead reckoning sends out the position, the current velocity, and the current direction.

During the next frame, if the ship moves at the previously established velocity, then the other players already know where the ship is and an update does not need to be sent (Gossweiler 5). An extension to this is to allow a small amount of deviation before a packet is sent. If the ship moves close to the previously established velocity, then the other players already know approximately where it is. If the deviation gets too great, an update is sent and the course is corrected (Gossweiler 6). A sketch of this thresholded dead reckoning is shown below.
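The following is a minimal sketch of thresholded dead reckoning in C; the units, field names, and tolerance are assumptions for illustration, not the project's actual code. The ghost position is predicted from the last transmitted position and velocity, and a new update is sent only when the true position drifts past the tolerance.

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical state for one ship; units and threshold are invented. */
    struct ship {
        double x, y;              /* true position on the owning computer   */
        double vx, vy;            /* true velocity                          */
        double sent_x, sent_y;    /* position included in the last update   */
        double sent_vx, sent_vy;  /* velocity included in the last update   */
        double sent_time;         /* when that update was sent              */
    };

    #define TOLERANCE 2.0   /* how far the ghost may drift before resending */

    /* Position the other players are currently predicting (the "ghost"). */
    static void predict(const struct ship *s, double now, double *gx, double *gy)
    {
        double dt = now - s->sent_time;
        *gx = s->sent_x + s->sent_vx * dt;
        *gy = s->sent_y + s->sent_vy * dt;
    }

    /* Returns 1 if an update packet should be sent this frame. */
    int needs_update(struct ship *s, double now)
    {
        double gx, gy;
        predict(s, now, &gx, &gy);
        double drift = hypot(s->x - gx, s->y - gy);
        if (drift <= TOLERANCE)
            return 0;                  /* ghosts are still close enough */
        /* Record what we are about to send so future predictions match. */
        s->sent_x = s->x;   s->sent_y = s->y;
        s->sent_vx = s->vx; s->sent_vy = s->vy;
        s->sent_time = now;
        return 1;
    }

    int main(void)
    {
        struct ship s = { 0, 0, 10, 0, 0, 0, 10, 0, 0 };
        s.x = 9.5; s.y = 1.0;     /* where the ship really is at t = 1.0   */
        printf("update needed: %d\n", needs_update(&s, 1.0));  /* drift ~1.1, no */
        s.x = 15.0; s.y = 4.0;    /* the ship turned; the ghost is now wrong */
        printf("update needed: %d\n", needs_update(&s, 1.5));  /* drift 4.0, yes */
        return 0;
    }
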

With this system, any given computer has direct control over one object, called the live object, and uses dead reckoning to move all of the other objects, called ghost objects (Gossweiler 6). One problem that comes up is related to decision-making. It is important that each decision be made in only one place, to avoid confusion. Suppose, for example, that there are three players in the simulation. Player A and Player B both fire at Player C at the same time, and both bullets arrive at Player C at approximately the same time. With dead reckoning, both Players A and B would register the hit on Player C, when in fact only one could occur.

The solution is to let Player C decide whether it got hit, and if so, relay that fact to both Players A and B (Gossweiler 11). 'The general principle is that decision-making is distributed, and any given decision should be made in only one place' (qtd. in Gossweiler 12). A sketch of this single-point-of-decision idea appears below.
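As an illustration only (the message formats, names, and hit radius are invented, not taken from the project), here is a brief C sketch of keeping the hit decision in one place: the computer that owns the target checks incoming bullets itself and then announces the verdict.

    #include <stdio.h>

    /* Hypothetical messages; the real game's packet formats would differ. */
    struct bullet_msg { int shooter_id; double x, y; };
    struct hit_msg    { int victim_id; int shooter_id; };

    /* The target's own (live) state, known only to its owner. */
    static double my_x = 100.0, my_y = 100.0;
    static int    my_id = 3;               /* "Player C"                    */
    static int    already_destroyed = 0;

    /* Only the computer that owns this ship decides whether a bullet hit it. */
    void on_bullet(const struct bullet_msg *b, void (*announce)(struct hit_msg))
    {
        double dx = b->x - my_x, dy = b->y - my_y;
        int hit = (dx * dx + dy * dy) < 25.0;  /* within 5 units counts as a hit */
        if (hit && !already_destroyed) {
            already_destroyed = 1;             /* a second simultaneous bullet is ignored */
            struct hit_msg m = { my_id, b->shooter_id };
            announce(m);                       /* relay the verdict to A and B */
        }
    }

    /* Stand-in for the network send in this sketch. */
    static void print_hit(struct hit_msg m)
    {
        printf("player %d was hit by player %d\n", m.victim_id, m.shooter_id);
    }

    int main(void)
    {
        struct bullet_msg from_a = { 1, 101.0, 99.0 };   /* both arrive together */
        struct bullet_msg from_b = { 2, 100.5, 100.5 };
        on_bullet(&from_a, print_hit);   /* decided here, in one place          */
        on_bullet(&from_b, print_hit);   /* ignored: only one hit can occur     */
        return 0;
    }
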

Another source of network performance improvement is the architecture of the underlying communication system. There are two major kinds of architectures: centralized and distributed. With the centralized communications model, 'one computer collects all of the data from the different machines, stores the changes in some collection of data structures (the centralized database), and then sends the results back out to each participating machine' (qtd. in Gossweiler 3). The problem with the centralized model is that the server has to be the fastest computer, and as the number of players increases, performance drops and the number of packets sent rises exponentially. These problems led to the development of the distributed model, which moves the responsibility for maintaining the whole simulation from one server computer to all of the computers participating in it. This way, every computer maintains its own database of the state of the simulation.

When a change occurs on a local computer, a packet is sent to all of the other computers, instead of just to a server. That way, every computer is updated as soon as a change occurs. This 'changes the scalability problem from being a CPU bottleneck to one where there are many connections and messages' (qtd. in Gossweiler 3). Because of this, the network needs to be fast, but there is no need for a single supercomputer to act as the server. A technique called broadcasting can reduce the number of messages sent. It uses the same concept as radio broadcasting, which is where it gets its name.

With broadcasting, a computer sends one packet and all of the computers in the simulation receive it. Instead of establishing a connection to every other computer in the simulation, each computer needs only one connection to the network (Gossweiler 5). A complete update can thus be sent out in one packet, dramatically reducing the amount of network traffic generated. A sketch of a broadcast send appears below.
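Here is a minimal sketch of a broadcast send in C, using UDP over IP as a stand-in for the best-effort, broadcast-capable protocols (such as IPX) discussed in this paper; the port number and message contents are made up for the example.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* UDP socket: connectionless, best-effort delivery, much like IPX. */
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        /* Broadcasting must be explicitly enabled on the socket. */
        int yes = 1;
        if (setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &yes, sizeof yes) < 0) {
            perror("setsockopt");
            return 1;
        }

        /* One destination address reaches every machine on the local segment. */
        struct sockaddr_in dest;
        memset(&dest, 0, sizeof dest);
        dest.sin_family = AF_INET;
        dest.sin_port = htons(5000);                   /* arbitrary game port */
        dest.sin_addr.s_addr = inet_addr("255.255.255.255");

        /* A made-up update message; the real game would pack binary state here. */
        const char msg[] = "ship 1 pos 100 200 vel 5 0";
        if (sendto(sock, msg, sizeof msg, 0,
                   (struct sockaddr *)&dest, sizeof dest) < 0)
            perror("sendto");

        close(sock);
        return 0;
    }

One sendto call replaces a separate send to every other player, which is where the traffic savings comes from.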

As discussed earlier, a protocol such as IPX that does not guarantee delivery of a packet can typically run ten times as fast as a protocol that does. Since performance is critical in this simulation, taking advantage of this is crucial. The problem then becomes reliability. If an update to the simulation is lost on the network, the game must continue, even if it is incorrect for the moment. 'Instead of sending relative motion, the players send absolute position' (Gossweiler 3). That way, if a packet is lost, the simulation is only affected momentarily. Another reliability problem occurs when players disconnect from the simulation improperly. If their computer locks up or they press reset, the rest of the players have to know that they are not around anymore. Heartbeats were invented to solve this problem.

A heartbeat is a message periodically sent out to tell the other computers that a process is still running. With a heartbeat system, it becomes each player's responsibility to periodically issue heartbeats to the other players (Gossweiler 11). If, after a predetermined amount of time, a player hasn't been heard from, it is safe to assume that they have disconnected. A sketch of such a timeout check appears below.
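The following C sketch shows one way a heartbeat timeout might be checked; the time limit, player table, and function names are assumptions for illustration, not the project's actual design.

    #include <stdio.h>
    #include <time.h>

    #define MAX_PLAYERS 8
    #define TIMEOUT_SECONDS 10   /* assumed limit before a player is declared gone */

    struct player {
        int    in_use;           /* slot is occupied                     */
        int    id;
        time_t last_heartbeat;   /* when we last heard from this player  */
    };

    static struct player players[MAX_PLAYERS];

    /* Called whenever any packet (including a heartbeat) arrives from a player. */
    void note_heartbeat(int id)
    {
        for (int i = 0; i < MAX_PLAYERS; i++)
            if (players[i].in_use && players[i].id == id)
                players[i].last_heartbeat = time(NULL);
    }

    /* Called once per frame: drop anyone who has been silent too long. */
    void drop_silent_players(void)
    {
        time_t now = time(NULL);
        for (int i = 0; i < MAX_PLAYERS; i++) {
            if (!players[i].in_use)
                continue;
            if (now - players[i].last_heartbeat > TIMEOUT_SECONDS) {
                printf("player %d timed out, removing from simulation\n",
                       players[i].id);
                players[i].in_use = 0;   /* their ship disappears from the world */
            }
        }
    }

    int main(void)
    {
        players[0] = (struct player){ 1, 7, time(NULL) - 30 };  /* silent for 30 s */
        players[1] = (struct player){ 1, 8, time(NULL) };       /* just heard from */
        note_heartbeat(8);
        drop_silent_players();    /* only player 7 should be dropped */
        return 0;
    }
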

Synchronization is another complication of creating a multiple player network simulation. The databases of all of the computers must be the same at all times, and one reason computers lose synchronization is that they run at differing speeds (Gossweiler 6). If an 80386 computer is playing against a Pentium, per-frame velocities can cause the objects to move different distances on the two computers. For example, if the 80386 is drawing the screen at ten updates per second, and an object is moving at five centimeters per frame, then the 80386 would move the object 5 × 10 = 50 centimeters every second. The Pentium, though, could quite possibly be running the program at sixty frames per second, so on that computer the object would move 5 × 60 = 300 centimeters, or three meters. The solution to this problem is to base the movement on time instead of frames. Instead of moving five centimeters in a frame, the object would be specified to move at 50 centimeters per second (Gossweiler 6). That way, on a fast computer, the animation will be smoother, but the object will end up in the same place. A sketch of time-based movement is shown below.
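A minimal sketch of time-based movement, with invented units and function names: each frame measures how much real time has passed and moves objects by velocity multiplied by that elapsed time, so a slow and a fast machine end up agreeing on positions.

    #include <stdio.h>

    /* Advance a position by velocity * elapsed time, independent of frame rate. */
    double advance(double position, double cm_per_second, double elapsed_seconds)
    {
        return position + cm_per_second * elapsed_seconds;
    }

    /* Simulate one second of game time at a given frame rate. */
    double run_one_second(int frames_per_second, double cm_per_second)
    {
        double position = 0.0;
        double dt = 1.0 / frames_per_second;      /* time covered by each frame */
        for (int frame = 0; frame < frames_per_second; frame++)
            position = advance(position, cm_per_second, dt);
        return position;
    }

    int main(void)
    {
        /* An object specified to move at 50 centimeters per second. */
        printf("80386 at 10 fps:   %.1f cm\n", run_one_second(10, 50.0));
        printf("Pentium at 60 fps: %.1f cm\n", run_one_second(60, 50.0));
        /* Both print 50.0: the faster machine is smoother, not faster-moving. */
        return 0;
    }
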

Through all of this, the most important thing to remember is that optimization depends on the application. Increasing speed requires creative solutions to simple problems, and not all solutions work well. Optimizing performance takes a little trial and error, a little creative thinking, some common sense, and a lot of effort. When creating a program, the critical point is to make it work and exceed the expectations of its users. The speed of a simulation is not worth much if the game is not easy to use and fun to play.