INTRODUCTION

History of ATM

Asynchronous Transfer Mode (ATM) came about through the evolution of the Integrated Services Digital Network (ISDN) in the 1980s and the demand for high-speed packet communications. This evolution extended further into the higher-speed solution of Broadband ISDN (B-ISDN), with the intention of providing integrated broadband services such as high-speed telephone, data and video communications. In 1988, the International Telecommunications Union (ITU) defined ATM as the vehicle for B-ISDN, with a view to it becoming the universal network transport. It is the ability to serve different service types, together with fast transmission rates and low overheads, which is leading to the widespread use of B-ISDN today. More information may be obtained from [Ref. 6].

Efficiency Issues of a Transmission System

In the transfer of data, whether it be voice, video or computer data, it is common to use some form of compression technique in order to use the transmission medium efficiently.

These data compression techniques commonly include some form of Run Length Encoding (RLE) to remove redundancy within a signal and thus reduce the bandwidth consumed by the transmission. Although data compression is performed at a much higher level, ATM achieves its efficiency by making use of small, compact packets with a small header field, maximising the ratio of user information to system overhead and leading to higher data rates than other transmission mechanisms.

Image and Video Compression

Again in the late eighties, the MPEG standard for video compression was born, defined as a compression standard mainly for CD-ROM applications. This standard was closely followed by the MPEG-2 standard widely used today in the compression of pre-recorded video. Unlike M-JPEG compression, MPEG-2 works by predicting the movement of objects within a picture, producing a series of related frames each of which depends on a single start image. The relationships between frames can become very complex, unlike those in M-JPEG compression, where each frame may be decoded independently, whether it stands alone or forms part of a video stream.

As MPEG compression is beyond the scope of this project, more information on the mechanisms involved may be found in [Refs. 4, 5, 6].

Applications for ATM

ATM has enough flexibility to provide transport for a wide range of services including audio, video and raw data. Not only may it accommodate any one of these individually, it may also successfully accommodate a mixture of data types. Each type, however, has its own optimum transmission characteristics. Transfers such as video and audio transmissions are ideally served by a system offering low delays or, more importantly, a low delay variation.

One illustration of this is the transmission of human voice, which can tolerate a latency of only around 250 ms; delays of this order become noticeable over a long-distance satellite link. The human ear is also sensitive to the continuity of speech, implying that for audio information ATM needs to provide a transmission as close to continuous as possible. Due to the large cost of many ATM installations, it is ideally suited to large-scale transmissions such as long-haul telephone calls. The initial investment in expensive switching equipment and optical fibres makes ATM less feasible for a small-scale communications solution, which may be better served by a high-speed Local Area Network (LAN).

With the rapid growth of telecommunications and the Internet, applications for video transmission include services such as Movies on Demand (MoD), News on Demand (NoD), World Wide Web access, Video Telephony and Video Conferencing, all of which require transmission of at least one variety of data. ATM may also become crucial in the medical field, where remote diagnosis will become feasible due to the ability to perform high-speed transfers of medical images between different geographical locations. All of these applications require different network characteristics in order to transfer the information optimally, leading towards the need to characterise and budget for the expected traffic types. ATM was chosen as the transport for B-ISDN as it promised to be flexible enough to accommodate all of the features mentioned above whilst also providing high-speed access.

2 PREVIOUS WORK

2.1 Simulation

Over the past few years there have been several groups involved in the mathematical modelling of traffic flows through ATM and other networks. Much work has been done studying the MPEG compression standard, as it is a popular and widely used video compression system.

There has also been some effort put into the study of video traffic in the wavelet domain [Refs. 21, 22], with particular reference to broadcast video traffic, but so far little investigation has been made into the less popular Motion Joint Photographic Experts Group (M-JPEG) compression standard. A paper related to the subject of this report [Ref. 20] has theoretically studied the methods of bandwidth smoothing in M-JPEG video sequences needed to suppress the burstiness of VBR video streams, but with particular reference to Video on Demand. As a contrast, this report extends the scope of that paper by characterising various types of video streams for a variety of applications, not just the transmission of movies to an end user and its expected burden on the transmission network. Much of the research in the area of simulation involves the production and comparison of network algorithms to evaluate their cost, performance and quality trade-offs.

In comparison, experimental research allows data to be collected from a real-life network, often introducing phenomena that may not easily be replicated in a simulation.

2.2 Experimental

Previous work at the University Telecommunications Laboratory has included the practical application of statistical multiplexing algorithms and analysis of queue service priorities. These experiments have provided live results for the implementation of algorithms that help to guarantee the various Qualities of Service [Ref. 16]. In particular, this research utilises Traffic Profile Queuing to minimise the Transfer Delays and Delay Variations of cells in delay-sensitive services such as voice and video, and incorporates the use of multiple buffers for different queues. The order of service of these queues then becomes a major factor in guaranteeing a QoS in terms of its delay.

3 THEORY OF TELECOMMUNICATIONS

3.1 Modes of Data Transfer

In transmissions between two end-stations there are a number of possible link configurations. These vary in the cost of implementing the connections and in the type of link needed, whether unidirectional or bi-directional. For many interactive video conferencing situations it is taken for granted that a bi-directional link would be needed. For the application of Movies on Demand, however, a one-way link might be used from the movie provider to the end user.

The application to be used is often a deciding factor in choosing a transmission mode, and the possibilities are shown below.

Simplex

In the case of Simplex transmission, two end stations are linked by one physical link. This link allows the transfer of information in one direction only: A transmits and B receives.

Figure 3.1: Simplex Data Transfer

Half-Duplex

The half-duplex system allows either end station to transmit, but the physical limitation of having only one transmission medium means that only one station may transmit at any one time.

For the other station to transmit, the connection must switch over, therefore stopping the first station from transmitting.

Figure 3.2: Half-Duplex Data Transfer

Full Duplex

In a Full Duplex system, two physical links exist between the end stations, allowing simultaneous transmission and reception. This configuration is actually more of a 'dual simplex' connection, as information only passes down each wire in one direction.

Figure 3.3: Full-Duplex Data Transfer

In the experimental ATM network present in the Telecommunications Laboratory, it is possible to configure the network in a similar way to those listed above, depending on the applications required.

3.2 Multiplexing

Multiplexing is used in order to carry many different channels along one physical medium. This reduces the costs involved many times over and allows efficient use of a potentially costly medium.

Frequency Division Multiplexing (FDM)

By modulating a signal onto a higher frequency carrier signal, the information contained in the original signal may be transposed to a different range of the spectrum. This produces two sidebands centred round the carrier frequency, called upper and lower respectively, which equate to the sum and difference of the carrier and signal frequencies; the resulting bandwidth consumed is twice that of the original signal. By using suitable filters, one of the sidebands may be discarded while still allowing recovery of the original signal. The use of a different carrier frequency on each of a set of channels increases the efficiency of a medium by allowing each channel an allocated slot of its total bandwidth.

Wavelength Division Multiplexing (WDM)

This method of multiplexing is essentially similar to FDM, with the difference that it is performed on optical signals rather than electrical ones. By using prisms or diffraction gratings, various light inputs may be combined onto one transmission medium or optical fibre.

At the receiving end the process is reversed using the same equipment, and the various constituent light signals may be recovered.

Time Division Multiplexing (TDM)

For a set of input channels, the output is gained by sweeping through the set of channels in order and allocating each channel a portion in time of the output. The most common application of this is the telephone system. The output now has a set of signals that always occupy the same position with respect to the other signals being scanned. It is this aspect of TDM which leads to the term synchronous TDM. TDM may also occur non-synchronously, or asynchronously, one application of this being statistical multiplexing.

In this case each channel is scanned for data before being allocated a time to send. In this way idle channels are skipped and channels with information available can send their data. It is for this reason that statistical multiplexing is more efficient than other methods, as the data rate needed to support the same number of channels is reduced.

3.3 The OSI 7-Layer Model

Within the field of Telecommunications, a standardised model was developed by the International Organisation for Standardisation to define the functions required of the protocols in the communications architecture.

The model comprises a hierarchical structure with only the uppermost level apparent to the user, and each layer only interfaces with those surrounding it. The OSI model is separated into the seven layers as shown below.

Layer 7: Application Layer, which provides direct access to users.
Layer 6: Presentation Layer, deals with the format of data passed to Layer 7.
Layer 5: Session Layer, synchronises and controls connections or sessions.
Layer 4: Transport Layer, provides efficient transfer of data between end points, flow control and addressing.
Layer 3: Network Layer, provides independence for upper layers and is responsible for organising and maintaining connections.
Layer 2: Data Link Layer, which concerns the transfer of data across the medium and includes flow control and framing operations.
Layer 1: Physical Layer, responsible for the transmission of data over the physical connection. It is at this level that signal levels and timing are defined.

Figure 3.4: The OSI 7-Layer Model

The aim of this standard is to increase compatibility between different vendors' equipment to allow easy interfacing.

In many situations, however, the functions of a particular layer may be combined with a neighbouring layer, so this model defines a framework for the communication system rather than a rigid procedure. More information on the OSI model and the functions of each layer may be found in [Ref. 9]. ATM has a similar layer configuration, but with a few differences which shall be discussed later.

3.4 Network Topology and Switching Techniques

The end points or nodes of a network may be configured in a number of possible ways. In the simplest case the nodes may be connected in a line or bus, with each node directly connected to its two neighbours.

Alternatively, each node may be connected via a direct link to all other nodes, but not only is this method complex, it is also expensive to implement due to the large amount of wire or transmission medium required. Nodes may also be connected to each other in a ring structure, or by some switching element that routes the connections between nodes as they are needed. It is at this stage that the difference between a Connection Orientated and a Connectionless system becomes apparent. In a Connection Orientated network a direct physical link is established between two nodes and is held for the duration of the connection. One example of this is the telephone system, where a number is dialled and the network switches are set up to provide a physical link between the two handsets.

The converse of a Connection Orientated system is a Connectionless network, where a fixed connection is not required. In this case data is sent in the form of packets that have no set route between nodes and carry an overall destination in the header part of each packet. The route between two separate nodes may be different for each separate connection, and the methods of choosing such a route have been the subject of much research.

3.5 ATM Theory

In the late eighties, ATM or Asynchronous Transfer Mode was hailed as the carrier for the new Broadband Integrated Services Digital Network (B-ISDN). ATM was designed to handle multiple different data types, for example voice, data, video, sound and many more, with high-speed transmission through new optical networks whilst offering a guaranteed Quality of Service (QoS). In ATM, a short, fixed packet length (called a cell) was chosen as it gives a uniform transmission time for each packet, which helps switching nodes to function faster due to reduced processor usage.

The use of small cells with low delay times, rather than larger packet-based transfer, makes ATM attractive for the transmission of voice, as audio information can tolerate a latency of only around 250 ms [Ref. 2].

ATM Cell Configuration

The ATM cell was chosen to be 53 bytes (or octets) long, of which 48 bytes are user data and the remaining 5 bytes form a header field. This header field contains routing information known as the VPI and VCI, and other fields such as the Header Error Check (HEC) and Cell Loss Priority (CLP) [Ref. 32].
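As an illustration of the cell layout just described, the short sketch below (not part of the original report) packs the 5-byte header fields of a UNI cell in Python. The field widths (GFC 4 bits, VPI 8, VCI 16, PT 3, CLP 1, HEC 8) follow the standard UNI header layout rather than anything stated above, and the HEC is left as a placeholder instead of computing the real CRC-8.

```python
# Illustrative sketch only: packing the 5-byte UNI cell header.  Field widths
# follow the standard UNI layout (GFC 4, VPI 8, VCI 16, PT 3, CLP 1, HEC 8
# bits); the HEC is left as a placeholder rather than a real CRC-8.
def build_uni_header(gfc: int, vpi: int, vci: int, pt: int, clp: int,
                     hec: int = 0) -> bytes:
    assert 0 <= gfc < 16 and 0 <= vpi < 256 and 0 <= vci < 65536
    assert 0 <= pt < 8 and clp in (0, 1) and 0 <= hec < 256
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    return word.to_bytes(4, "big") + bytes([hec])

header = build_uni_header(gfc=0, vpi=0, vci=401, pt=0, clp=0)
cell = header + bytes(48)      # 5-byte header + 48-byte payload = 53 bytes
print(len(cell), header.hex())
```

The VPI/VCI values 0 and 401 are used here simply because they reappear later as the identifiers programmed into the laboratory ATM switch.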

ATM also has its own hierarchical structure, similar to that of the OSI 7-Layer Model but in 3-D form, which is shown below.

Figure 3.5: ATM Layer / Plane Model [Ref. 32]

Starting from the base of the model, the functions of the different layers are described as follows:

Physical Layer: Collects and organises the ATM cells before they are transmitted down the physical medium. This layer must also provide error checking and ensure correct mapping of cells into STM-1 frames.

ATM Layer: Responsible for cell switching and other switching issues at cell level within the network.

ATM Adaptation Layer: This is actually split into two sub-layers and lies between the ATM layer and the higher layers, ensuring the correct mapping of user requirements onto the available ATM services. It is at this layer that user information is transposed into ATM cells and important timing information may be carried. The Convergence sub-layer performs the function of putting the header and trailer information onto the user data packet, whilst the Segmentation and Reassembly sub-layer divides the Protocol Data Unit (PDU) into ATM cell-sized packets. The classes of the ATM Adaptation Layer are matched to the B-ISDN service classes, numbered AAL-1 through to AAL-5, chosen to be suitable for the particular type of traffic being transported. The correlation between the OSI 7-Layer Reference Model and the ATM Layer functions may be seen in the diagram below, courtesy of [Ref. 13].

Figure 3.6: ATM Layer Functions

3.6 Switching in ATM

The transmission mechanism of ATM provides two types of indexing for making a connection, these being Virtual Channels (VCs) and Virtual Paths (VPs). A VC may be thought of as a pipe in which information flows in only one direction. Encapsulating a group of channels is a path, or VP, usually given a number to distinguish it from other paths. The relationship between Virtual Paths and Virtual Channels is shown in the diagram below.

Figure 3.7: VPIs & VCIs [Ref. 32]

As numerous channels may be contained within a single path, they must each have their own unique identifier within that path. These channel identifiers may be duplicated outside of their particular channel group, as they are then distinguished by a different path number. It is for this reason that, in the diagram above, the identifier VCI 1 occurs numerous times. VCIs and VPIs may differ from the start to the end points of a network due to VP or VC switching occurring along the connection. The diagram below shows how VC and VP switching modifies the various identifiers in the ATM network.

Figure 3.8: VP & VC Switching [Ref. 32]

Virtual Paths and Channels may be set up or modified through a number of differing methods. Firstly, a VC or VP may be set up as permanent or semi-permanent through the network management software. A User-to-Network Interface (UNI) signalling procedure may set up a connection on initiation by the user, or channels may be set up between two network nodes by signalling between each other. The final mechanism for setting up a VC or VP is through the meta-signalling procedure, which uses pre-assigned path and channel identifiers.

3.7 ATM Service Criteria [Refs. 18, 34]

The Quality of Service (QoS) of an ATM link is described by two sub-sections, the first involving the traffic characteristics, the second the delays acceptable to the traffic. The traffic descriptors are defined as follows:

- Peak Cell Rate (PCR): the maximum cell rate for the particular connection
- Sustainable Cell Rate (SCR): the mean cell rate for a given application
- Maximum Burst Size (MBS): the maximum number of cells that may be sent at an increased cell rate
- Minimum Cell Rate (MCR): the lower bound on the cell rate

Various traffic types may be characterised or described using one or more of the service criteria listed above to gauge the load of the source on the network. Together with this, the acceptable tolerances in terms of delay and loss may be quantified by three parameters:

- Maximum Cell Transfer Delay (MCTD): the total end-to-end delay experienced in the system
- Cell Delay Variation Tolerance (CDVT): the 'clumping' of data due to erratic inter-arrival times
- Cell Loss Ratio (CLR): the proportion of cells that may be discarded if congestion is encountered

3.8 ATM Service Categories

In addition to the ATM Service Criteria listed above, traffic may be categorised according to its bit rate in one of five Service Categories:

- Constant Bit Rate (CBR): requires a constant bandwidth for the duration of the connection
- Variable Bit Rate (VBR): allows fluctuations in data rate due to changes in the input
- Available Bit Rate (ABR): negotiates with the network in real time for a portion of the bandwidth
- Unspecified Bit Rate (UBR): allows a 'best effort' scenario with no guarantees of success

The Variable Bit Rate category may also be split into two sub-divisions, these being real-time and non-real-time VBR.

Real-time VBR may be used for real-time traffic with stringent requirements in terms of cell delay and delay variation, while non-real-time VBR is more suitable for non-real-time traffic with 'bursty' characteristics. The Service Categories listed above descend in order of priority. At the top, the most demanding or costly service is CBR: a source assigned this category would receive the premier service, followed by the VBR option. The lower two categories are more of a 'hit or miss' scenario, where bandwidth is not guaranteed and the source data fits into the available bandwidth when higher-priority streams are not in use.

These ATM Service Categories, combined with the Service Criteria, provide a means for quantifying and describing source traffic and its subsequent Quality of Service as required by the end-user.

3.9 Buffer Theory

In many network systems buffers are used to queue data before it is served, as the serving mechanism may only serve one cell or packet at a time. By utilising a buffer, waiting cells may be temporarily held until the preceding cells have been processed, in a First-In First-Out (FIFO) manner. The diagram below shows a schematic model for the buffer set-up in an ATM switch.

Figure 3.9: Schematic Model of Buffer in ATM Switches [Ref. 10]

In this model of the ATM buffer, system parameters may be calculated or specified for each portion of the process.

The nomenclature used is as follows:

λ = mean cell arrival rate
w = mean number of customers (cells) waiting in the buffer
s = service time per customer
q = mean number of customers in the system
t_w = mean waiting time in the buffer
t_q = total time in the system

3.10 Kendall's Notation [Ref. 10]

Before progressing further into the analysis and performance of the buffers in an ATM switch, the relevant notation should be noted. Kendall's Notation classifies queue configurations in the form:

A / B / X / Y / Z          Equation 1

where:

A is the inter-arrival time distribution
B is the service time distribution
X is the number of service channels
Y is the system capacity
Z is the queue discipline

For example, an M/D/1 queue has a memory-less (or Poisson) arrival distribution, a deterministic service time distribution and one service channel.

3.11 Basic Buffer Equations

Using the schematic diagram shown in Figure 3.9, some elementary relationships may be derived for cells entering and queuing in an ATM switch. These are as follows:

ρ = λ·s          (Utilisation)                         Equation 2
w = λ·t_w        (Little's Formula)                    Equation 3
q = λ·t_q        (Total no. of cells in whole system)  Equation 4
t_q = t_w + s    (Total time in system)                Equation 5
q = w + ρ        (Total no. of cells in whole system (2))  Equation 6

In many of these cases, the number of cells in the system at a given time cannot be given exactly, and these equations merely represent an average or mean value at a particular instant in time.
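As a quick illustration of Equations 2-6, the following minimal sketch evaluates them for an assumed load. The 2.831 µs cell service time anticipates the value derived in Section 3.19, and the arrival rate is chosen arbitrarily to give 80% utilisation; the standard M/M/1 waiting-time result is used only to give the remaining quantities concrete values.

```python
# Minimal sketch (illustrative values, not measured data): evaluating the
# elementary buffer relationships of Equations 2-6.
s = 2.831e-6                 # cell service time in seconds (see Section 3.19)
lam = 0.8 / s                # mean cell arrival rate, chosen to give rho = 0.8

rho = lam * s                # Equation 2: utilisation
t_w = rho * s / (1 - rho)    # standard M/M/1 mean waiting time, assumed here
w   = lam * t_w              # Equation 3: Little's formula (cells waiting)
t_q = t_w + s                # Equation 5: total time in the system
q   = lam * t_q              # Equation 4: mean cells in the system
print(f"rho={rho:.2f}  w={w:.2f}  q={q:.2f}  (check Eq 6: w+rho={w + rho:.2f})")
```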

3.12 Analysis of an M/M/1 Queue [Ref. 10]

According to Kendall's Notation, the M/M/1 queue is assumed to have a memory-less input distribution followed by a memory-less service time distribution, with cells being routed to a single output port. The Poisson Distribution has been chosen as the simplest form of memory-less distribution to be used with this model. By assuming an infinite buffer capacity the calculations are simplified, and may be used as an approximation to any size of ATM buffer. The probability of the system holding a given number of cells x is given in Equation 7.

Prob{system size = x} = (1 − ρ)·ρ^x          Equation 7

3.13 Relationship Between ρ and q

The average number of cells in the system may also be given by Equation 8, relating the number of cells in the system to the utilisation ρ.

q = ρ / (1 − ρ)          Equation 8

When plotted, this function has an exponential form, with the number of cells in the system increasing dramatically above a utilisation of around 80%.

Figure 3.10: Graph showing Utilisation vs. Number of Cells in a System [Ref. 10]

3.14 Buffer Size and Cell Loss Probability

The probability that a cell is lost due to a full buffer may be approximated by calculating whether the system has reached a predetermined size in the infinite queue model. This may be calculated using Equation 7, subject to some assumptions. These assumptions are:

1) The infinite buffer model is a suitable approximation for finite buffer analysis.
2) Equation 7 is a suitable approximation to the loss from a finite queue of length x.

In past research it has been shown that the infinite queue length model is suitable for CLP approximations at utilisations up to around 90%. This model may break down at higher utilisations, where the network bandwidth is reaching its maximum capacity. The Cell Loss Probability may also be estimated by using a modified form of Equation 7: instead of assuming a full buffer exactly at value x, all states at or beyond this point may be considered.

Previous work has shown that both approximations give similar results up to utilisations of 90%, but differ slightly thereafter. Thus Equation 7 becomes:

Prob{system size ≥ x} = ρ^(x+1)          Equation 9

The graph below shows the difference between the two functions given a system size of 24 cells and variable utilisation ρ.

Figure 3.11: Graph showing Prob{X = x} and Prob{X ≥ x} for varying utilisation [Ref. 10]

Key notes on buffer analysis:

- As the utilisation of a system increases towards unity, the average number of cells in the system grows exponentially.

- As the utilisation tends to unity, the probability of there being x cells in the buffer increases.
- For utilisations below around 90%, Prob{X = x} is a good approximation to the more accurate Prob{X ≥ x} function.
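The comparison behind Figure 3.11 is easy to reproduce numerically. The short sketch below (illustrative only) evaluates Equations 7, 8 and 9 for a system size of 24 cells over a range of utilisations.

```python
# Sketch reproducing the comparison of Figure 3.11: the two infinite-queue
# estimates of cell loss for a system size of 24 cells, together with the
# mean number of cells in the system (Equation 8), for several utilisations.
x = 24
for rho in (0.5, 0.7, 0.8, 0.9, 0.95):
    p_eq   = (1 - rho) * rho ** x      # Equation 7: Prob{X = x}
    p_tail = rho ** (x + 1)            # Equation 9: Prob{X >= x}
    q_mean = rho / (1 - rho)           # Equation 8: mean cells in the system
    print(f"rho={rho:.2f}  P(X=x)={p_eq:.3e}  P(X>=x)={p_tail:.3e}  q={q_mean:5.1f}")
```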

3.15 Delays in the M/M/1 Queue

The average queueing delay in an M/M/1 queue is defined in Equation 10. The resultant value is an average, as some cells may be delayed more than others due to input stream variations. The effect of this variable delay manifests itself as 'delay jitter' or Cell Delay Variation (CDV). This effect is most noticeable in audio transmissions, causing speech to become distorted and in some cases unrecognisable.

t_w = ρ·s / (1 − ρ)          Equation 10

This equation is directly related to the average number of cells in the system, shown previously in Figure 3.10: the more cells present in the system, the greater the likelihood of delay due to the storage of cells in the buffer. The linking factor is the cell service time s, which on a 155 Mbps link is 2.831 µs.

Figure 3.12: Graph showing Average Delay in System for Varying Utilisation [Ref. 10]

It is interesting to note that for an M/D/1 queue, where the service time distribution is deterministic, the average queueing delay is half that of the M/M/1 queue. This is given by:

t_w = ρ·s / (2·(1 − ρ))          Equation 11

3.16 Input Distribution Models & Inter-Arrival Times

In the previous analysis the input distributions are said to be memory-less. A simple model of an arrival process obeying this rule is the Poisson Distribution, given by:

Prob{k arrivals in T} = ((λT)^k / k!) · e^(−λT)          Equation 12

or

Prob{inter-arrival time ≤ t} = 1 − e^(−λt)          Equation 13

Equation 13 gives a graph of the form shown below for the continuous-time model. There is also a discrete-time equivalent, where inter-arrivals occur in multiples of the cell slot rate or service time s.

Figure 3.13: Graph showing the Continuous Form of a Poisson Arrival Process [Ref. 10]

3.17 Finite Buffer Analysis

Using an extended version of Kendall's Notation, a finite buffer is described in the form M/M/1/Y or M/D/1/Y, where Y is the length of the buffer implemented in the system. From basic traffic theory, the lost traffic is the difference between the offered traffic and the carried traffic:

Lost Traffic (L) = Offered Traffic (A) − Carried Traffic (C)          Equation 14

In the infinite buffer model no loss is incurred, as the buffer is assumed to hold as many cells as required. However, the finite buffer will show loss under some high-load circumstances.

3.18 State Probability Distributions [Ref. 10]

State probability distributions are a useful way of analysing the state of a buffer in terms of its contents. For example, the system-empty state s(0) may be reached in two ways:

1) by starting empty and having no arrivals, therefore remaining empty
2) by starting with one item, having no arrivals, but making a transmission

The following shorthand notation expresses this:

s(0) = s(0)·a(0) + s(1)·a(0)          Equation 15

which may then be rearranged to give the required state of interest, for example s(1). By following this method through increasing state sizes, a generalised equation may be obtained for s(k−1):

s(k−1) = s(0)·a(k−1) + s(1)·a(k−1) + s(2)·a(k−2) + ... + s(k)·a(0)          Equation 16

Equation 16 may be rearranged to find s(k), the state of interest, leading to the probability that the buffer is holding k cells:

s(k) = [ s(k−1) − s(0)·a(k−1) − Σ_{i=1..k−1} s(i)·a(k−i) ] / a(0)          Equation 17

In this equation, a(k) is assumed to follow the arrival distribution discussed previously. Using this generalised equation, the buffer must receive X or more cells arriving in one time slot in order to reach its capacity X. This implies that:

s(X) = s(0)·a(X)          Equation 18

Since s(0) cannot be derived directly, a new variable u(k) = s(k)/s(0) is defined, which produces the generalised equation in the form:

u(k) = [ u(k−1) − a(k−1) − Σ_{i=1..k−1} u(i)·a(k−i) ] / a(0), with u(0) = 1          Equation 19

Given that the probabilities must sum to unity,

s(0) = 1 / Σ_{k=0..X} u(k)          Equation 20

By using s(0) from Equation 20, the lost traffic may now be calculated according to Equation 14.
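Rather than implementing the rearranged recursion of Equations 15-20, the sketch below estimates the state probabilities s(k) of a finite buffer numerically by iterating the slot-by-slot transitions directly (one cell served per slot, Poisson batch arrivals) and then applies Equation 14. It is an independent check of the same quantities under illustrative assumptions (a buffer of 24 cells at 85% load), not the method of [Ref. 10].

```python
import math

# Numerical sketch of the idea in Section 3.18 (not the recursion of [Ref. 10]):
# estimate the state probabilities s(k) of a finite buffer of capacity X that
# serves one cell per time slot and receives Poisson batch arrivals a(k), by
# iterating the slot-by-slot transitions to steady state.
X, rho, slots = 24, 0.85, 20000          # illustrative buffer size and load
a = [math.exp(-rho) * rho ** k / math.factorial(k) for k in range(X + 2)]
a[-1] = 1.0 - sum(a[:-1])                # lump the tail of the arrival pmf

s = [1.0] + [0.0] * X                    # state probabilities, start empty
for _ in range(slots):
    nxt = [0.0] * (X + 1)
    for n, pn in enumerate(s):           # serve one cell (if any), then add
        base = max(n - 1, 0)             # the arrivals, capping at capacity X
        for k, pk in enumerate(a):
            nxt[min(base + k, X)] += pn * pk
    s = nxt

carried = 1.0 - s[0]                     # a cell departs in every slot that
                                         # does not start empty
lost = rho - carried                     # Equation 14: offered minus carried
print(f"s(0) = {s[0]:.4f}   s(X) = {s[X]:.3e}   lost = {lost:.3e} cells/slot")
```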

3.19 ATM Line Calculations

The ATM line rate is defined as 155.52 Mbps, with cells 53 bytes (424 bits) long. This translates to a cell rate given by:

Cell Rate = 155.52 × 10^6 / 424 ≈ 366,792 cells/s          Equation 21

In actual fact, Equation 21 is modified because 1 in 27 cells is used by the network for control purposes [Ref. 10]. Including this, Equation 21 becomes:

Cell Rate = (26/27) × 366,792 ≈ 353,207 cells/s          Equation 22

The Cell Service Time (CST) may now be calculated from the cell rate for a 155.52 Mbps ATM line by taking the reciprocal:

CST = 1 / 353,207 ≈ 2.831 µs          Equation 23

In ATM analysis, each cell is assumed to enter service immediately at the start of a free slot, and the slots are spaced according to the CST calculated above.
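The numbers in Equations 21-23 can be checked directly with a few lines of arithmetic:

```python
# Quick verification of Equations 21-23 for a 155.52 Mbps ATM line.
LINE_RATE = 155.52e6                    # bits per second
CELL_BITS = 53 * 8                      # 424 bits per cell

raw_cell_rate    = LINE_RATE / CELL_BITS        # Equation 21
usable_cell_rate = raw_cell_rate * 26 / 27      # Equation 22 (1 in 27 for control)
cst              = 1.0 / usable_cell_rate       # Equation 23

print(f"raw rate    = {raw_cell_rate:,.0f} cells/s")
print(f"usable rate = {usable_cell_rate:,.0f} cells/s")
print(f"CST         = {cst * 1e6:.3f} microseconds")
```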

Together with the 1 in 27 cells used for monitoring and measurement, there is also a decrease in the overall line rate available to the user due to the SONET/ATM packing interface, as ATM cells are packed into SONET frames to be sent over the optical fibre. The basic SONET frame consists of 810 bytes, viewed as 90 columns of 9 bytes, of which 3 columns are pure overhead. It is this overhead which reduces the available user rate to around 149.760 Mbps for a single-stream, or concatenated, source. SDH is a comparable standard to SONET, with some differences in the configuration of the overhead bytes; the SDH STM-1 frame is the one used on a 155.52 Mbps link.
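For reference, the 149.760 Mbps figure can be reproduced from the frame structure, assuming the concatenated STS-3c/STM-1 frame carried on a 155.52 Mbps link (270 columns of 9 bytes every 125 µs, with 9 columns of transport overhead and 1 path overhead column). This frame layout comes from the SONET/SDH standards rather than from the description above.

```python
# Sketch of the overhead arithmetic behind the 149.760 Mbps figure, assuming
# the concatenated STS-3c/STM-1 frame used on a 155.52 Mbps link: 270 columns
# of 9 bytes every 125 microseconds, 9 columns of transport overhead plus
# 1 column of path overhead (frame structure from the SONET/SDH standards).
COLUMNS, ROWS, FRAMES_PER_S = 270, 9, 8000
OVERHEAD_COLUMNS = 9 + 1

line_rate    = COLUMNS * ROWS * 8 * FRAMES_PER_S
payload_rate = (COLUMNS - OVERHEAD_COLUMNS) * ROWS * 8 * FRAMES_PER_S
print(f"line rate    = {line_rate / 1e6:.2f} Mbps")     # 155.52 Mbps
print(f"payload rate = {payload_rate / 1e6:.3f} Mbps")  # 149.760 Mbps
```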

The diagram below shows the configuration of a typical SONET frame.

Figure 3.14: SONET / STM-1 Frame Configuration [Ref. Internet]

3.20 The Timescale Model

When characterising video source traffic behaviour, it is possible to look at variations in the source over a number of time scales. The largest time scale might be that which is responsible for seasonal variations on a monthly or yearly basis.

Taking a closer look, a pattern might then be identified on a weekly time scale that was not apparent when considering the full picture. By reducing the size of the analysis window, patterns and behaviour may be identified which can be used to characterise a source video sequence. The Timescale Model [Ref. 10] shows this relationship between dimensions and ATM behaviour levels.

Figure 3.15: The Timescale Model

At the highest level of dimensioning there is the calendar level, which includes weekly, monthly and seasonal variations. By reducing the dimensions by one step, it is possible to look at the characteristics of traffic over the duration of its connection. At this level, statistics averaged over the connection would give peak and mean values for the data, giving the upper and lower bounds on the data being transmitted. Within the connection, it is then possible to split the total duration of the data stream into zones and study each of these closely, to view behaviour too detailed to be noticed on a larger timescale. This timescale also allows identification of bursts in the data rate and on/off characteristics of the source data. At the lowest level and smallest timescale, the cells themselves are studied.

The mechanism of cell inter-arrival times can then be compared with theoretical models, where applicable.

4 EXPERIMENTAL SET-UP AND DATA ACQUISITION

4.1 M-JPEG Codec Attributes

Before connection to the ATM network, the characteristics of the video compression codec alone were studied. For this, an input stream was fed into the codec and the output stream fed directly into a TV. By using the facilities supplied within the codec software (Cellstack Desktop), it is possible to alter various parameters and track the results on the output bandwidth. For these experiments, a set of codec parameters was chosen for investigation, observing their effects on an arbitrary video camera input. For each test, all other variables were kept constant to ensure that only one effect was being investigated.

The figure below shows a schematic of the codec characterisation set-up.

Figure 4.1: M-JPEG Characteristics Experiment

Using the configuration of hardware shown above, the following parameters were chosen for investigation:

1) Quantisation Factor (Q)
2) Frame Rate
3) Packing Factor (Tiles per PDU)
4) Target Network Bandwidth

For each of these parameters, the effect on the codec's output bandwidth was recorded independently, providing a quantitative result. Simultaneously, the output picture was studied closely for any visual degradation, providing a qualitative aspect to the experiment.

4.2 Effects Across the ATM Network

4.2.1 Bi-directional Configuration

A bi-directional configuration was set up as shown in the diagram below.

This was intended to provide a service similar to that of video conferencing, with two end users both transmitting and receiving data simultaneously. In practice, however, the configuration did not allow data to be sent simultaneously, or in 'full duplex', due to a problem with the M-JPEG compression codecs. This problem caused interaction between the two opposing video streams, producing unintelligible video at each end. Even after extensive communications with the codec manufacturers a solution could not be found, and the bi-directional tests were abandoned.

Figure 4.2: Bi-directional network connection schematic

The Cell Blaster unit is inserted in series with the data being transmitted in order to monitor and capture cells as they travel down the link.

From this unit, information about the characteristics of the data being transmitted can be obtained.

4.2.2 Unidirectional Configuration

Due to the anomalies with the bi-directional connection, it was decided to set up a one-way connection in order to characterise the sample video streams and record the effects of the ATM network on the input data.

Figure 4.3: Unidirectional network connection schematic

In order for the connection to function correctly, the relevant VPI and VCI were programmed into the ATM switch. These were fixed at 0 and 401 respectively, and the bandwidth on the switch was set to its maximum in order to let all the video data through the network without incurring any loss.

4.2.3 Sample Sequence Selection

Three sample sequences from a variety of sources were chosen as test clips to use over the network. They were chosen to have differing sets of characteristics and to represent actual data that might be sent, by the public, over such a network in the future.

The sequences chosen, and their characteristics, are as follows:

i) Talking Head - a newsreader or videoconference simulation. This sequence comprises low motion combined with low detail.
ii) Leafy Tree - a low motion but high detail picture, chosen to simulate complex medical images or stills which may, in the future, be transmitted over ATM networks.
iii) Laser Disc - an action-packed baseball sequence with many scene changes and much movement. This clip covers the high motion coupled with high detail aspect of video sequences, and applications such as Movies on Demand (MoD).

All of the video sequences listed above were chosen to be repeatable and reusable for the tests required, whilst simulating characteristics which may appear in applications used over ATM in the future.

4.2.4 Data Acquisition and Conversion

As mentioned previously, the Cell Blaster units capture the incoming cells, logging their header content before allowing them on their way unchanged. By loading the header information file into the Cell Analysis software, statistics may be gathered on the cell sequence.

From this, a set of inter-arrival times is exported in text format for importing into any analysis package. In this case, Excel was used to manipulate and interrogate the data, which was presented in two columns: Cell Index Number and Inter-Arrival Time. Cell rates can then be found using the reciprocal relationship between rate and inter-arrival time.

5 EXPERIMENTAL RESULTS & DISCUSSION

5.1 M-JPEG Codec Analysis

a) Output bit rate for varying Quantisation Factor

The codec and video inputs were set up according to the diagram in Figure 4.1. The quantisation factor Q was varied in steps of 10 within the range 10 to 120, and the output raw bit rate recorded using the software supplied with the codec. Each output bit rate reading was taken three times and the average value recorded, to ensure consistency in the results and to smooth out fluctuations in the video input.

The graph of results is shown below.

Figure 5.1: Graph showing Output bit rate for varying Q-Factor for the M-JPEG codec

For increasing Q-Factor, the output bit rate can be seen to reduce in an exponential manner, levelling off at around 6000 kbps. As Q is increased there are also some visible trends evident on the TV output. Firstly, larger squares of solid colour can be seen, making the edges of items in the image rougher and less well defined. As Q increases yet further, the blocks appear to get larger, making the image more difficult to interpret.

These results imply that a modest increase in the Q-Factor has a major effect on the output bandwidth, but only up to the point at which the graph levels off. Over-use of the Q-Factor leads to an indistinguishable output sequence for no real reduction in bandwidth. As with many aspects of telecommunications, there is a trade-off between the image quality and the bandwidth consumed, and the optimum operating region in this case might be the first portion of the graph shown above.

b) Output bit rate for varying Frame Rate

Using an identical equipment configuration to the previous experiment, the frame rate of the codec was varied and the output bit rate recorded. This experiment was performed in two sets, for two widely differing values of Q-Factor, 10 and 120, and for frame rates between 10% and 100% of the maximum. Although the software implied that all fractions of the frame rate were available up to a maximum of 25 fps, in actual fact only discrete values in this range are used by the compression codec.

The values used were 25, 12, 8, 6, 5 and 2 fps, as can be seen in the graph on the following page.

Figure 5.2: Graph showing Output bit rate for varying Frame Rate

A linear relationship is shown between frame rate and bit rate, with higher bit rates for a lower Q-Factor. As the frame rate is reduced from the maximum of 25 fps to 12 fps there is a slight, but only slight, jerkiness visible. The major benefit of reducing the frame rate to 12 fps is to halve the bandwidth consumed. For lower values of the frame rate, motion and sequences become more disjointed and there appears to be a 'catch-up' time where motion does not seem to be updated fast enough. Again, by reducing the frame rate parameter as far as possible, large savings may be made on the output bit rate.

c) Output bit rate for varying Target Bandwidth

The M-JPEG compression codec provides a facility with which the user may try to control the output bandwidth of a video sequence to a certain extent.

For a set video sequence, the target bandwidth setting was varied and its effect on the output bandwidth recorded.

Figure 5.3: Graph showing Output bit rate for varying Target Bandwidth

The target bandwidth steps were chosen in 1000 kbps units. For low target bandwidths the raw bit rate is low but does not come close to the target. As the target bandwidth increases, the output remains constant up to a 'transition region', at which the output bit rate increases before reaching a maximum value. At this point the output bit rate remains constant for any further increase in the target.

From this it can be concluded that, for a certain set of parameters, a video sequence has a minimum and a maximum compressed bandwidth. Visually, the picture is of very poor quality in the lower region of the graph, before jumping to near-perfect picture quality around the transition region.

5.2 Characterisation of Video Sequences

Summary Statistics

From the capture file containing information on cell inter-arrival times, various statistics may be found using a pre-defined Excel function. A summary of the statistical results is shown in the table below.

Sequence       Mean Inter-Arrival   Minimum IA   Maximum IA
Talking Head   0.2007 ms            2.333 ms     6257.47 ms
Leafy Tree     0.1905 ms            2.333 ms     6090.00 ms
Laser Disc     0.1952 ms            2.333 ms     3080.06 ms

Figure 5.4: Table of Summary Statistics for Sample Video Sequences

The summary statistics by themselves do not show much about the behaviour of the three sample video sequences being tested on the ATM network.

To extract more information from the set of inter-arrival times obtained for the video sequences, behaviour must be analysed according to the Timescale Model mentioned previously in the Theory section. As discussed before, there is a reciprocal relationship between inter-arrival time and cell rate. Thus, by averaging the inter-arrival times over the entire sample and taking the reciprocal, the Sustained Cell Rate (SCR), or mean rate, may be found. By a similar procedure, the Peak Cell Rate (PCR) and Maximum Burst Size (MBS) may also be found. For the purposes of this experiment the MBS is defined to be the maximum difference between the SCR and PCR, i.e.

MBS = PCR − SCR          Equation 24

From these, values can be found corresponding to the ATM Service Descriptors [Ref. 34] used to identify parameters within the specific Qualities of Service.
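A minimal sketch of this procedure is given below. It assumes the inter-arrival times have been exported as one value per line in seconds (the file name is hypothetical), and it estimates the PCR as the highest localised rate over 1000-cell windows, which is one plausible reading of the method rather than the report's exact calculation.

```python
# Minimal sketch of the SCR/PCR/MBS estimation described above.  It assumes
# the capture has been saved as one inter-arrival time per line, in seconds
# ("talking_head_ia.txt" is a hypothetical file name).
WINDOW = 1000

with open("talking_head_ia.txt") as f:
    ia = [float(line) for line in f if line.strip()]

scr = len(ia) / sum(ia)                                 # Sustained Cell Rate
window_rates = [WINDOW / sum(ia[i:i + WINDOW])          # localised rates
                for i in range(0, len(ia) - WINDOW + 1, WINDOW)]
pcr = max(window_rates)                                 # Peak Cell Rate
mbs = pcr - scr                                         # Equation 24

print(f"SCR = {scr:,.0f} cells/s  PCR = {pcr:,.0f} cells/s  MBS = {mbs:,.0f} cells/s")
```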

The table below shows the basic ATM Service Descriptors obtained for the three sample video sequences.

Sequence       SCR                PCR                MBS
Talking Head   31,000 cells/sec   61,000 cells/sec   30,000 cells/sec
Leafy Tree     62,500 cells/sec   92,500 cells/sec   30,000 cells/sec
Laser Disc     54,000 cells/sec   61,000 cells/sec   7,000 cells/sec

Figure 5.5: Table of ATM Service Descriptors for Sample Video Sequences

The values shown in Figure 5.5 give the main bounds for the video samples being characterised across the ATM network. However, this information alone may not be sufficient for accurate budgeting of network resources. In particular, the burstiness of a video sequence must be studied further, using a different timescale to that used above.

Detailed Investigation into Sequence Burstiness

Travelling further down the Timescale Model, a smaller group of cells (10,000 of the 130,000 captured) was chosen for analysis.

Within this group, the arrival times of sets of 1000 cells were summed and the localised SCR found for each set. This procedure was repeated for the set of 10 groups and the results plotted. Some localised bursts were evident, so the procedure was extended to include localised SCR plotting for the entire video sample. The burstiness graphs can be seen on the next page.

Figure 5.6: Burstiness Plot for Leafy Tree Video Sequence
Figure 5.7: Burstiness Plot for Talking Head Video Sequence

The two graphs show the burstiness for the Leafy Tree and Talking Head sequences respectively. The graphs differ in two main ways:

a) the size of the burst or peak
b) the frequency of such bursts.

From the graphs, the following approximate numerical results can be obtained.

Sequence       Average Peak Size   Approx. Frequency of Bursts
Talking Head   25,000 units        every 10 units
Leafy Tree     35,000 units        every 4 to 5 units

Figure 5.8: Burstiness Graph Attributes for Leafy Tree and Talking Head

The burst size for the Leafy Tree analysis is greater than that of the Talking Head sequence by several thousand cells per second. Together with this, the bursts occur approximately twice as often, suggesting a far greater burden on the network resources needed to transmit the Leafy Tree sequence compared with the Talking Head one. This difference will be illustrated in later sections utilising bandwidth policing.
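The localised-SCR traces of Figures 5.6-5.10 can be produced with the same kind of windowing used in the earlier sketch; as before, the file name and units are assumptions.

```python
# Sketch of the localised-SCR (burstiness) plotting: group the inter-arrival
# times into 1000-cell windows, convert each window to a rate, and plot the
# resulting trace.  File name and units are assumptions.
import matplotlib.pyplot as plt

WINDOW = 1000
with open("leafy_tree_ia.txt") as f:
    ia = [float(line) for line in f if line.strip()]

rates = [WINDOW / sum(ia[i:i + WINDOW])
         for i in range(0, len(ia) - WINDOW + 1, WINDOW)]

plt.plot(rates)
plt.xlabel("window number (1000 cells per window)")
plt.ylabel("localised SCR (cells/s)")
plt.title("Burstiness plot")
plt.show()
```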

Laser Disc Sequence Characteristics

From the graphs shown below, the Laser Disc baseball sequence has a format completely different to that of the Talking Head or Leafy Tree.

Figures 5.9 & 5.10: Laser Disc Burstiness showing irregular & regular behaviour

Although some portions of the Laser Disc sample show regular behaviour with well-defined peaks and troughs, this phenomenon does not exist throughout the entire sample. As the left-hand graph shows, the near-regular sequences are joined by a set of steps between levels, making the overall behaviour of the sample much more difficult to predict. The jumping of behaviour between set levels could be due to the nature of movie clips, which have a tendency to flick between scenes every couple of seconds in order to keep the viewer's attention: the more fast-moving the picture, the greater the number of shot changes. This would account for the selection of average levels encountered, due to the average information content of each scene filmed, whether it be high or low detail overall.

The conclusion from this experiment is that it is much more difficult to characterise a movie sequence, due to bursts within bursts arising from rapid scene changes. A system that takes some of this behaviour into account is the Piecewise CBR model [Ref. 11]. For this, a pre-recorded video sequence is assumed to have sub-sections within its connection time. For each of these windows, a duration and a bandwidth allotment are specified, giving the Piecewise CBR schedule, defined as:

S = { (r_w, l_w) : w = 1, ..., W_T }          Equation 25

where r_w is the bandwidth allotment in cells/sec, l_w is the length of the window in units or seconds, and W_T is the total number of windows. If this model can be combined with the burstiness characterisation above, a more complete description of the source may be obtained.
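The form of such a schedule is easy to sketch: the snippet below splits a localised-rate trace into fixed-length windows and allots each window its peak rate. This only illustrates the shape of Equation 25, using toy numbers; it is not the allocation algorithm of [Ref. 11].

```python
# Illustrative sketch of a Piecewise CBR schedule in the sense of Equation 25:
# split a localised-rate trace into fixed-length windows and allot each window
# its peak rate.  Not the allocation algorithm of [Ref. 11].
def piecewise_cbr(rates, window_len):
    """Return the schedule as (allotment r_w in cells/s, length l_w) pairs."""
    return [(max(rates[i:i + window_len]), window_len)
            for i in range(0, len(rates), window_len)]

# toy localised-rate trace, one value per window of the burstiness plots
trace = [31000, 45000, 61000, 38000, 30000, 52000, 33000, 47000]
schedule = piecewise_cbr(trace, window_len=4)
print(schedule)    # [(61000, 4), (52000, 4)]
```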

5.3 Suitability of the Poisson Model

In much of the analysis discussed in the Theory section of this report, a Poisson distribution models the input stream of cells. This is a simple, memory-less model of an arrival process that may be assessed against the behaviour of an actual cell stream. Using the cell inter-arrival data, the values were first sorted to obtain the minimum and maximum values. Within these limits, ranges were allocated and the theoretical probability of a cell falling in each range was calculated according to Equation 13. The experimental distribution was found using a histogram function to find the frequencies of values contained within the ranges mentioned previously. From this, the actual probability of a cell lying within one of the ranges could be calculated.
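A sketch of this comparison is shown below: it bins the measured inter-arrival times, computes the empirical probability of each bin, and compares it with the probability predicted by Equation 13, using the measured mean rate for λ. The file name and the choice of 20 equal-width bins are assumptions.

```python
# Sketch of the Poisson-suitability check described above: compare the
# measured probability of an inter-arrival time falling in each bin with the
# probability predicted by Equation 13 (exponential inter-arrivals).
import math

with open("talking_head_ia.txt") as f:       # hypothetical file, seconds
    ia = [float(line) for line in f if line.strip()]

lam = len(ia) / sum(ia)                      # measured mean cell arrival rate
width = (max(ia) - min(ia)) / 20             # 20 equal-width bins (arbitrary)
edges = [min(ia) + k * width for k in range(21)]

for lo, hi in zip(edges[:-1], edges[1:]):
    measured = sum(lo <= t < hi for t in ia) / len(ia)
    model = math.exp(-lam * lo) - math.exp(-lam * hi)    # from Equation 13
    print(f"{lo*1e6:8.2f}-{hi*1e6:8.2f} us   measured={measured:.4f}   model={model:.4f}")
```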

By plotting these results side by side, the differences between the Poisson model and the actual arrival process can be viewed.

Figure 5.11: Spreadsheet Calculation of Probabilities for a Talking Head Sequence
Figure 5.12: Graphical plot of Theoretical & Actual Probability Distributions

The distributions for the arrival process differ in two main areas. Firstly, there is a large change in probability for the actual arrival process in the range 2.5