Throughput and Bottleneck Server Analysis

Introduction#

An important measure of the quality of a network is the maximum throughput available to an application process (which we will also call a flow) in the network. Throughput is commonly defined as the rate of transfer of application payload through the network and is often computed as

$$Throughput = \frac{application\ bytes\ transferred\ \times\ 8}{transfer\ duration}\ bps$$

A Single Flow Scenario#

Figure 3‑1: A flow $f$ passing through a link $l$ of fixed capacity $C_{l}$.

Application throughput depends on many factors, including the nature of the application, the transport protocol, the queueing and scheduling policies at the intermediate routers, the MAC protocol and PHY parameters of the links along the route, as well as the dynamic link and traffic profile in the network. A key and fundamental aspect of the network that limits or determines application throughput is the capacity of the constituent links (capacity may be defined at the MAC/PHY layer). Consider a flow $f$ passing through a link $l$ with fixed capacity $C_{l}$ bps. Trivially, the amount of application data transferred via the link over a duration of T seconds is upper bounded by $C_{l} \times T$ bits. Hence,

$$Throughput = \frac{application\ bytes\ transferred\ \times\ 8}{transfer\ duration} \leq C_{l}\ bps$$

The upper bound is nearly achievable if the flow can generate sufficient input traffic to the link. Here, we would like to note that the actual throughput may be slightly less than the link capacity due to overheads in the communication protocols.

Figure 3‑2: A single flow $f$ passing through a series of links. The link with the least capacity will be identified as the bottleneck link for the flow $f$.

If a flow $f$ passes through multiple links $l \in L_{f}$ (in series), then the application throughput will be limited by the link with the least capacity among them, i.e.,

$$throughput \leq \left( \min_{l \in L_{f}} C_{l} \right)\ bps$$

The link $l_{f}^{*} = \arg\min_{l \in L_{f}} C_{l}$ may be identified as the bottleneck link for the flow $f$. Typically, a server or a link that determines the performance of a flow is called the bottleneck server or bottleneck link for the flow. In the case where a single flow $f$ passes through multiple links $L_{f}$ in series, the link $l_{f}^{*}$ limits the maximum achievable throughput and is the bottleneck link for the flow $f$. A noticeable characteristic of the bottleneck link is the build-up of a queue (of packets of the flow) at the bottleneck server. The queue tends to increase with the input flow rate and is known to grow unbounded as the input flow rate matches or exceeds the bottleneck link capacity.

Figure 3‑3: Approximation of a network using bottleneck server technique

It is a common and useful technique to reduce a network into a bottleneck link (from the perspective of a flow or flows) to study throughput and queue buildup. For example, a network with two links (in series) can be approximated by a single link of capacity $\min(C_{1},\ C_{2})$ as illustrated in Figure 3‑3. Such analysis is commonly known as bottleneck server analysis. Single-server queueing models such as M/M/1, M/G/1, etc. can provide tremendous insights into flow and network performance with the bottleneck server analysis.
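
To make the reduction concrete, here is a minimal Python sketch (illustrative only, not part of NetSim; the function names are our own) that collapses a series of links to their bottleneck capacity and uses the textbook M/M/1 mean-queue formula for a quick estimate of queue build-up.

```python
# Minimal sketch (illustrative, not part of NetSim): reduce a series of links
# to a single bottleneck server and estimate queueing with an M/M/1 model.

def bottleneck_capacity(link_capacities_bps):
    """A flow through links in series is limited by the slowest link."""
    return min(link_capacities_bps)

def mm1_mean_queue(arrival_rate_pps, service_rate_pps):
    """Mean number of packets in an M/M/1 system: rho / (1 - rho), for rho < 1."""
    rho = arrival_rate_pps / service_rate_pps
    if rho >= 1:
        return float("inf")  # overloaded: the queue grows without bound
    return rho / (1 - rho)

# Example: the two-link network of Figure 3-3 reduces to a 10 Mbps bottleneck.
c_eff = bottleneck_capacity([1000e6, 10e6])
print(c_eff / 1e6, "Mbps")
print(mm1_mean_queue(700.0, 825.0), "packets (illustrative arrival/service rates)")
```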

Multiple Flow Scenario#

Figure 3‑4: Two flows $f_{1}$ and $f_{2}$ passing through a link $l$ of capacity $C_{l}$

Consider a scenario where multiple flows compete for the network resources. Suppose that the flows interact at some link buffer/server, say $\hat{l}$, and compete for capacity. In such scenarios, the link capacity $C_{\hat{l}}$ is shared among the competing flows and it is quite possible that the link becomes the bottleneck link for the flows (limiting throughput). Here again, the queue tends to increase with the combined input flow rate and will grow unbounded as the combined input flow rate matches or exceeds the bottleneck link capacity. Under fair-sharing assumptions on the competing flows, a plausible bound on the per-flow throughput is

$$throughput = \frac{C_{\hat{l}}}{number\ of\ flows\ competing\ for\ capacity\ at\ link\ \hat{l}}\ bps$$
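
A back-of-the-envelope sketch of this bound (fair sharing assumed, protocol overheads ignored; purely illustrative):

```python
# Illustrative sketch: per-flow throughput bound when n flows share a
# bottleneck link of capacity C, assuming fair sharing and ignoring overheads.

def fair_share_bound_bps(capacity_bps, num_flows):
    return capacity_bps / num_flows

print(fair_share_bound_bps(10e6, 2) / 1e6, "Mbps per flow")  # the two-flow setup of Part-2
```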

NetSim Simulation Setup#

Open NetSim and click on Experiments > Internetworks > Network Performance > Throughput and Bottleneck Server Analysis, then click on the tile in the middle panel to load the example, as shown in the screenshot below.


Figure 3‑5: List of scenarios for the example of Throughput and Bottleneck Server Analysis

Part-1: A Single Flow Scenario#

We will study a simple network setup with a single flow, illustrated in Figure 3‑6, to review the definition of a bottleneck link and the maximum application throughput achievable in the network. An application process at Wired_Node_1 seeks to transfer data to an application process at Wired_Node_2. We consider a custom traffic generation process (at the application) that generates data packets of constant length (say, L bits) with i.i.d. inter-arrival times (say, with average inter-arrival time $v$ seconds). The application traffic generation rate in this setup is $\frac{L}{v}$ bits per second. We prefer to minimize the communication overheads and hence will use UDP for data transfer between the application processes.

In this setup, we will vary the traffic generation rate by varying the average inter-arrival time $v$ and review the average queue at the different links, packet loss rate and the application throughput.

Procedure#

We will simulate the network setup illustrated in Figure 3‑6 with the configuration parameters listed in detail in Table 3‑1 to study the single flow scenario.

NetSim UI displays the configuration file corresponding to this experiment as shown below:

Figure 3‑6: Network set up for studying a single flow

The following steps were followed to generate this sample:

Step 1: Drop two wired nodes and two routers onto the simulation environment. The wired nodes and the routers are connected with wired links as shown in Figure 3‑6.

Step 2: Click the Application icon to configure a custom application between the two wired nodes. In the Application configuration dialog box (see Figure 3‑7), select Application Type as CUSTOM, Source ID as 1 (to indicate Wired_Node_1), Destination ID as 2 (to indicate Wired_Node_2) and Transport Protocol as UDP. In the PACKET SIZE tab, select Distribution as CONSTANT and Value as 1460 bytes. In the INTER ARRIVAL TIME tab, select Distribution as EXPONENTIAL and Mean as 11680 microseconds.

Figure 3‑7: Application configuration dialog box

Step 3: The properties of the wired nodes are left to the default values.

Step 4: Right-click the link ID (of a wired link) and select Properties to access the link's properties dialog box (see Figure 3‑8). Set Max Uplink Speed and Max Downlink Speed to 10 Mbps for link 2 (the backbone link connecting the routers) and to 1000 Mbps for links 1 and 3 (the access links connecting the Wired_Nodes to the routers). Set Uplink BER and Downlink BER to 0 for links 1, 2 and 3. Set Uplink Propagation Delay and Downlink Propagation Delay to 0 microseconds for the two access links 1 and 3 and to 100 microseconds for the backbone link 2.

Figure 3‑8: Link Properties dialog box

Step 5: Right-click the Router 3 icon and select Properties to access the router's properties dialog box (see Figure 3‑9). In the INTERFACE 2 (WAN) tab, under the NETWORK LAYER properties, set Buffer size (MB) to 8.

Figure 3‑9: Router Properties dialog box

Step 6: Click on the Packet Trace option and select the Enable Packet Trace check box. Packet Trace can be used for packet-level analysis. Also enable Plots in the GUI.

Step 7: Click on Run icon to access the Run Simulation dialog box (see Figure 3‑10) and set the Simulation Time to 100 seconds in the Simulation Configuration tab. Now, run the simulation.

Figure 3‑10: Run Simulation dialog box

Step 8: Now, repeat the simulation with different average inter-arrival times (such as 5840 µs, 3893 µs, 2920 µs, 2336 µs and so on). We vary the input flow rate by varying the average inter-arrival time. This should permit us to identify the bottleneck link and the maximum achievable throughput.
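
The inter-arrival times used in Step 8 follow directly from $v = L/TGR$ with $L = 1460 \times 8 = 11680$ bits. A short, illustrative sketch that reproduces these values:

```python
# Sketch: average inter-arrival time (in microseconds) needed for a target
# traffic generation rate, using v = L / TGR with L = 1460 bytes = 11680 bits.

L_BITS = 1460 * 8  # application payload per packet

def aiat_us(target_tgr_mbps):
    # Mbps is bits per microsecond, so bits / Mbps gives microseconds directly.
    return L_BITS / target_tgr_mbps

for tgr in (1, 2, 3, 4, 5):
    print(f"{tgr} Mbps -> {round(aiat_us(tgr))} us")
# 1 Mbps -> 11680 us, 2 -> 5840, 3 -> 3893, 4 -> 2920, 5 -> 2336
```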

The detailed list of network configuration parameters is presented in Table 3‑1.


| Parameter | Value |
|---|---|
| LINK PARAMETERS | |
| Wired Link Speed (access links) | 1000 Mbps |
| Wired Link Speed (backbone link) | 10 Mbps |
| Wired Link BER | 0 |
| Wired Link Propagation Delay (access links) | 0 µs |
| Wired Link Propagation Delay (backbone link) | 100 µs |
| APPLICATION PARAMETERS | |
| Application | Custom |
| Source ID | 1 |
| Destination ID | 2 |
| Transport Protocol | UDP |
| Packet Size: Value | 1460 bytes |
| Packet Size: Distribution | Constant |
| Inter Arrival Time: Mean AIAT (µs) | See Table 3‑2 |
| Inter Arrival Time: Distribution | Exponential |
| ROUTER PARAMETERS | |
| Buffer Size | 8 MB |
| MISCELLANEOUS | |
| Simulation Time | 100 s |
| Packet Trace | Enabled |
| Plots | Enabled |


Table 3‑1: Detailed Network Parameters

Performance Measure#

In Table 3‑2, we report the flow average inter-arrival time v and the corresponding application traffic generation rate, input flow rate (at the physical layer), average queue at the three buffers (of Wired_Node_1, Router_3 and Router_4), average throughput (over the simulation time) and packet loss rate (computed at the destination).

Given the average inter-arrival time v and the application payload size L bits (here, 1460×8 = 11680 bits), we have,

$$Traffic\ generation\ rate = \frac{L}{v} = \frac{11680}{v}bps$$

$$input\ flow\ rate = \frac{11680 + 54*8}{v} = \frac{12112}{v}bps$$

where the packet overhead of 54 bytes is computed as $54 = 8\ (UDP\ header) + 20\ (IP\ header) + 26\ (MAC + PHY\ header)$ bytes. Let $Q_{l}(u)$ denote the instantaneous queue at link $l$ at time $u$. Then, the average queue at link $l$ is computed as

$$average\ queue\ at\ link\ l = \frac{1}{T}\int_{0}^{T} Q_{l}(u)\ du\ \ packets$$

where, $T$ is the simulation time. The average throughput of the flow is computed as

$$throughput = \frac{application\ bytes\ transferred\ \times\ 8}{T}\ bps$$

The packet loss rate is defined as the fraction of application data lost (here, due to buffer overflow at the bottleneck server).

$$packet\ loss\ rate = \frac{application\ bytes\ not\ received\ at\ destination}{application\ bytes\ transmitted\ at\ source}$$
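
The measures above can be checked with a short, self-contained sketch; the byte counts passed in are illustrative placeholders that would normally come from the NetSim results window or the packet trace.

```python
# Sketch of the performance measures defined above (illustrative values).

PAYLOAD_BITS = 1460 * 8   # L = 11680 bits
OVERHEAD_BITS = 54 * 8    # UDP (8) + IP (20) + MAC/PHY (26) = 54 bytes

def traffic_generation_rate_bps(aiat_us):
    return PAYLOAD_BITS / (aiat_us * 1e-6)

def input_flow_rate_bps(aiat_us):
    return (PAYLOAD_BITS + OVERHEAD_BITS) / (aiat_us * 1e-6)

def throughput_bps(app_bytes_received, sim_time_s):
    return 8 * app_bytes_received / sim_time_s

def packet_loss_rate(bytes_generated, bytes_received):
    return (bytes_generated - bytes_received) / bytes_generated

print(traffic_generation_rate_bps(11680) / 1e6)  # 1.0 Mbps
print(input_flow_rate_bps(11680) / 1e6)          # ~1.037 Mbps
print(throughput_bps(12_500_000, 100) / 1e6)     # 1.0 Mbps for 12.5 MB in 100 s
print(packet_loss_rate(12_500_000, 12_400_000))  # 0.008, i.e. 0.8 percent
```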

Average Queue Computation from Packet Trace#

  • Open Packet Trace file using the Open Packet Trace option available in the Simulation Results window.

  • Click on the highlighted icon shown below to create a new Pivot Table.

Figure 3‑11: Packet Trace

  • Click on Insert on Top ribbon $\rightarrow$ Select Pivot Table.

Figure 3‑12: Top Ribbon

  • Then select the packet trace data (press Ctrl + A) $\rightarrow$ click OK.

Figure 3‑13: Packet Trace Pivot Table

  • A blank Pivot Table is created.

Figure 3‑14: Blank Pivot Table

  • Drag and drop PACKET ID into the Values field twice; drag CONTROL PACKET TYPE/APP NAME, TRANSMITTER ID and RECEIVER ID into the Filters field; and drag NW_LAYER_ARRIVAL_TIME(US) into the Rows field (see Figure 3‑15).

  • Change Sum of PACKET ID to Count of PACKET ID (Values Field Settings $\rightarrow$ select Count $\rightarrow$ OK) for both Values fields. Set the filter CONTROL PACKET TYPE/APP NAME to App1 CUSTOM, TRANSMITTER ID to Router_3 and RECEIVER ID to Router_4.

Figure 3‑15: Adding fields into Filter, Columns, Rows and Values

  • Right-click the first value under Row Labels $\rightarrow$ Group $\rightarrow$ set the By value to 1000000.

  • In the Values field, click Count of PACKET ID2 $\rightarrow$ Values Field Settings $\rightarrow$ Show Values As $\rightarrow$ Running Total In $\rightarrow$ OK.

  • Again, create one more Pivot Table, Click on Insert on Top ribbon $\rightarrow$ Select Pivot Table.

  • Then select the packet trace data (press Ctrl + A) $\rightarrow$ click OK.

  • A blank Pivot Table is created (see Figure 3‑16).

  • Drag and drop PACKET ID into the Values field twice; drag CONTROL PACKET TYPE/APP NAME, TRANSMITTER ID and RECEIVER ID into the Filters field; and drag PHY_LAYER_ARRIVAL_TIME(US) into the Rows field (see Figure 3‑16).

  • Change Sum of PACKET ID to Count of PACKET ID (Values Field Settings $\rightarrow$ select Count $\rightarrow$ OK) for both Values fields. Set the filter CONTROL PACKET TYPE/APP NAME to App1 CUSTOM, TRANSMITTER ID to Router_3 and RECEIVER ID to Router_4.

  • Right-click the first value under Row Labels of the second Pivot Table $\rightarrow$ Group $\rightarrow$ set the By value to 1000000.

Figure 3‑16: Create one more Pivot Table and Add All Fields

  • In the Values field, click Count of PACKET ID $\rightarrow$ Values Field Settings $\rightarrow$ Show Values As $\rightarrow$ Running Total In $\rightarrow$ OK.

  • Calculate the average queue by taking the mean of the number of packets in queue at every time interval during the simulation.

  • Compute the difference between the running count of PACKET ID2 (Column C) and the running count of PACKET ID2 (Column G), and note down the average value of this difference (see Figure 3‑17).

Figure 3‑17: Average Packets in Queue

$$\mathbf{Packet\ Loss\ Rate\ (in\ percent)}\mathbf{=}\frac{\mathbf{Packet\ Generated - Packet\ Received}}{\mathbf{Packet\ Generated}}\mathbf{\times 100}$$
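
If you prefer scripting over spreadsheet pivot tables, the same average-queue computation can be sketched with pandas as below. The file name and the column/filter strings are assumptions based on the fields named in the steps above; check them against the headers actually present in your exported packet trace before running.

```python
# Assumed sketch: estimate the average queue at Router_3's WAN buffer from the
# packet trace. Adjust the file name and column/filter strings to match your trace.
import pandas as pd

trace = pd.read_csv("Packet_Trace.csv")  # exported packet trace (name assumed)

# Keep only the App1 CUSTOM data packets on the Router_3 -> Router_4 hop.
hop = trace[
    trace["CONTROL_PACKET_TYPE/APP_NAME"].astype(str).str.contains("CUSTOM")
    & trace["TRANSMITTER_ID"].astype(str).str.contains("ROUTER_3|ROUTER-3", regex=True)
    & trace["RECEIVER_ID"].astype(str).str.contains("ROUTER_4|ROUTER-4", regex=True)
]

# Sample the queue once per second: packets that have reached the network layer
# of Router_3 but have not yet started PHY transmission are waiting in the buffer.
SIM_TIME_US = 100 * 1_000_000
samples = []
for t in range(1_000_000, SIM_TIME_US + 1, 1_000_000):
    arrived = (hop["NW_LAYER_ARRIVAL_TIME(US)"] <= t).sum()
    departed = (hop["PHY_LAYER_ARRIVAL_TIME(US)"] <= t).sum()
    samples.append(arrived - departed)

print("average queue (pkts):", sum(samples) / len(samples))
```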

Results#

In Table 3‑2, we report the flow average inter-arrival time (AIAT) and the corresponding application traffic generation rate (TGR), input flow rate, average queue at the three buffers (of Wired_Node_1, Router_3 and Router_4), average throughput and packet loss rate.

| AIAT $v$ (µs) | TGR $L/v$ (Mbps) | Input Flow Rate (Mbps) | Avg. queue (pkts): Wired Node 1 (Link 1) | Avg. queue (pkts): Router 3 (Link 2) | Avg. queue (pkts): Router 4 (Link 3) | Average Throughput (Mbps) | Packet Loss Rate (%) |
|---|---|---|---|---|---|---|---|
| 11680 | 1 | 1.037 | 0 | 0 | 0 | 0.999925 | 0 |
| 5840 | 2 | 2.074 | 0 | 0.02 | 0 | 1.998214 | 0 |
| 3893 | 3.0003 | 3.1112 | 0 | 0.04 | 0 | 2.999307 | 0 |
| 2920 | 4 | 4.1479 | 0 | 0.11 | 0 | 3.996429 | 0 |
| 2336 | 5 | 5.1849 | 0 | 0.26 | 0 | 5.009435 | 0 |
| 1947 | 5.999 | 6.2209 | 0 | 0.43 | 0 | 6.000016 | 0.01 |
| 1669 | 6.9982 | 7.257 | 0 | 0.9 | 0 | 7.004262 | 0 |
| 1460 | 8 | 8.2959 | 0 | 1.92 | 0 | 8.028131 | 0 |
| 1298 | 8.9985 | 9.3313 | 0 | 5.26 | 0 | 9.009718 | 0.01 |
| 1284 | 9.0966 | 9.433 | 0 | 6.92 | 0 | 9.107013 | 0.01 |
| 1270 | 9.1969 | 9.537 | 0 | 7.98 | 0 | 9.209563 | 0.01 |
| 1256 | 9.2994 | 9.6433 | 0 | 7.88 | 0 | 9.314683 | 0 |
| 1243 | 9.3966 | 9.7442 | 0 | 11.48 | 0 | 9.416182 | 0.01 |
| 1229 | 9.5037 | 9.8552 | 0 | 16.26 | 0 | 9.520718 | 0.02 |
| 1217 | 9.5974 | 9.9523 | 0 | 25.64 | 0 | 9.616027 | 0.01 |
| 1204 | 9.701 | 10.0598 | 0 | 42.88 | 0 | 9.717994 | 0.05 |
| 1192 | 9.7987 | 10.1611 | 0 | 90.86 | 0 | 9.796133 | 0.26 |
| 1180 | 9.8983 | 10.2644 | 0 | 436.41 | 0 | 9.807696 | 1.15 |
| 1168 | 10 | 10.3699 | 0 | 847.65 | 0 | 9.808981 | 2.09 |
| 1062 | 10.9981 | 11.4049 | 0 | 3876.87 | 0 | 9.811667 | 11.00 |
| 973 | 12.0041 | 12.4481 | 0 | 4593.67 | 0 | 9.811667 | 18.53 |
| 898 | 13.0067 | 13.4878 | 0 | 4859.68 | 0 | 9.811667 | 24.75 |
| 834 | 14.0048 | 14.5228 | 0 | 5000.57 | 0 | 9.811667 | 30.09 |
| 779 | 14.9936 | 15.5481 | 0 | 5085.05 | 0 | 9.811667 | 34.75 |

Table 3‑2: Average queue, throughput and loss rate as a function of traffic generation rate

We can infer the following from Table 3‑2.

  • The input flow rate is slightly larger than the application traffic generation rate. This is due to the overheads in communication.

  • There is queue buildup at Router 3 (Link 2) as the input flow rate increases. So, Link 2 is the bottleneck link for the flow.

  • As the input flow rate increases, the average queue increases at the (bottleneck) server at Router 3. The traffic generation rate matches the application throughput (with nearly zero packet loss rate) when the input flow rate is less than the capacity of the link.

  • As the input flow rate reaches or exceeds the link capacity, the average queue at the (bottleneck) server at Router 3 increases unbounded (limited by the buffer size) and the packet loss rate increases as well.

For clarity, we present the following plots. In Figure 3‑18, we plot the application throughput as a function of the traffic generation rate. We note that the application throughput saturates as the traffic generation rate (in fact, the input flow rate) approaches the link capacity. The maximum application throughput achievable in this setup is 9.81 Mbps (for a bottleneck link of capacity 10 Mbps).

Figure 3‑18: Application throughput as a function of the traffic generation rate

In Figure 3‑19, we plot the queue evolution at the buffers of Links 1 and 2 for two different input flow rates. We note that the buffer occupancy is a stochastic process and is a function of the input flow rate and the link capacity as well.

a) At Wired Node 1 for TGR = 8 Mbps b) At Router 3 for TGR = 8 Mbps

c) At Wired Node 1 for TGR = 9.5037 Mbps d) At Router 3 for TGR = 9.5037 Mbps

Figure 3‑19: Queue evolution at Wired Node 1 (Link 1) and Router 3 (Link 2) for two different traffic generation rates

In Figure 3‑20, we plot the average queue at the bottleneck link 2 (at Router 3) as a function of the traffic generation rate. We note that the average queue increases gradually before it increases unboundedly near the link capacity.

Figure 3‑20: Average queue (in packets) at the bottleneck link 2 (at Router 3) as a function of the traffic generation rate

Bottleneck Server Analysis as M/G/1 Queue#

Let us now analyse the network by focusing on the flow and the bottleneck link (Link 2). Consider a single flow (with average inter-arrival time $v$) into a bottleneck link (with capacity $C$). Let us denote the input flow rate in packet arrivals per second as $\lambda$, where $\lambda = 1/v$. Let us also denote the service rate of the bottleneck server in packets served per second as $\mu$, where $\mu = \frac{C}{L + 54 \times 8}$. Then,

$$\rho = \lambda \times \frac{1}{\mu} = \frac{\lambda}{\mu}$$

denotes the offered load to the server. When $\rho < 1$, $\rho$ also denotes (approximately) the fraction of time the server is busy serving packets (i.e., $\rho\ $denotes link utilization). When $\rho \ll 1$, then the link is barely utilized. When $\rho > 1\ $, then the link is said to be overloaded or saturated (and the buffer will grow unbounded). The interesting regime is when $0 < \rho < 1$.

Suppose that the application packet inter-arrival time is i.i.d. with exponential distribution. From the M/G/1 queue analysis (in fact, M/D/1 queue analysis), we know that the average queue at the link buffer (assuming large buffer size) must be

$$average\ queue = \frac{1}{2}\left( \frac{\rho^{2}}{1 - \rho} \right),\ \ 0 < \rho < 1$$

where $\rho$ is the offered load. In Figure 3‑20, we also plot the average queue predicted by this formula (from the bottleneck analysis) and compare it with the average queue from the simulation. You will notice that the bottleneck link analysis predicts the average queue (from simulation) very well.

An interesting fact is that the average queue depends on $\lambda$ and $\mu$ only through the ratio $\rho = \frac{\lambda}{\mu}$.
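
A small, illustrative sketch of this comparison for the present setup ($C$ = 10 Mbps, $11680 + 432 = 12112$ bits on the wire per packet), using the M/D/1 expression above:

```python
# Sketch: M/D/1 average-queue prediction as a function of the offered load
# for this setup (C = 10 Mbps, 12112 bits per packet on the wire).

C_BPS = 10e6
PKT_BITS = 1460 * 8 + 54 * 8          # payload + overheads = 12112 bits
MU = C_BPS / PKT_BITS                 # service rate in packets per second (~825.6)

def md1_avg_queue(aiat_us):
    lam = 1e6 / aiat_us               # arrival rate in packets per second
    rho = lam / MU                    # offered load
    if rho >= 1:
        return float("inf")           # saturated: queue limited only by the buffer
    return rho * rho / (2 * (1 - rho))

for v_us in (2336, 1460, 1298, 1229):
    print(f"v = {v_us} us: rho = {1e6 / v_us / MU:.3f}, "
          f"predicted queue = {md1_avg_queue(v_us):.2f} pkts")
```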

Part - 2: Two Flow Scenario#

We will consider a simple network setup with two flows, illustrated in Figure 3‑21, to review the definition of a bottleneck link and the maximum application throughput achievable in the network. An application process at Wired_Node_1 seeks to transfer data to an application process at Wired_Node_2. Also, an application process at Wired_Node_3 seeks to transfer data to an application process at Wired_Node_4. The two flows interact at the buffer of Router_5 (Link 3) and compete for link capacity. We will again consider custom traffic generation processes (at the application processes) that generate data packets of constant length (L bits) with i.i.d. inter-arrival times (with average inter-arrival time $v$ seconds) with a common distribution. The application traffic generation rate in this setup is $\frac{L}{v}$ bits per second (for either application).

In this setup, we will vary the traffic generation rate of the two sources (by identically varying the average inter-arrival time $v$) and review the average queue at the different links, the application throughput(s) and the packet loss rate(s).

Procedure#

We will simulate the network setup illustrated in Figure 3‑21 with the configuration parameters listed in detail in Table 3‑1 to study the two-flow scenario. We will assume identical configuration parameters for the access links and the two application processes.

Figure 3‑21: Network set up for studying two flows

Step 1: Right-click the link ID (of a wired link) and select Properties to access the link's properties dialog box. Set Max Uplink Speed and Max Downlink Speed to 10 Mbps for link 3 (the backbone link connecting the routers) and to 1000 Mbps for links 1, 2, 4 and 5 (the access links connecting the Wired Nodes to the routers). Set Uplink BER and Downlink BER to 0 for all links. Set Uplink Propagation Delay and Downlink Propagation Delay to 0 microseconds for links 1, 2, 4 and 5 and to 100 microseconds for the backbone link 3.

Step 2: Enable Plots and Packet trace in NetSim GUI.

Step 3: Set the Simulation Time to 100 seconds for all samples.

Results#

In Table 3‑3, we report the common flow average inter-arrival time (AIAT) and the corresponding application traffic generation rate (TGR), input flow rate, combined input flow rate, average queue at the buffers (of Wired_Node_1, Wired_Node_3 and Router_5), average throughput(s) and packet loss rate(s).

| AIAT $v$ (µs) | TGR $L/v$ (Mbps) | Input Flow Rate (Mbps) | Combined Input Flow Rate (Mbps) | Avg. queue (pkts): Wired Node 1 | Avg. queue (pkts): Wired Node 3 | Avg. queue (pkts): Router 5 | Throughput (Mbps): App1 Custom | Throughput (Mbps): App2 Custom | Loss Rate (%): App1 Custom | Loss Rate (%): App2 Custom |
|---|---|---|---|---|---|---|---|---|---|---|
| 11680 | 1 | 1.037 | 2.074 | 0 | 0 | 0.03 | 0.999925 | 1.002728 | 0 | 0 |
| 5840 | 2 | 2.074 | 4.148 | 0 | 0 | 0.16 | 1.998214 | 2.006624 | 0 | 0 |
| 3893 | 3.0003 | 3.1112 | 6.2224 | 0 | 0 | 0.32 | 2.999307 | 3.001410 | 0 | 0 |
| 2920 | 4 | 4.1479 | 8.2958 | 0 | 0 | 1.99 | 3.996312 | 4.018504 | 0 | 0 |
| 2336 | 5 | 5.1849 | 10.3698 | 0 | 0 | 847.19 | 4.903614 | 4.907469 | 2.12 | 2.10 |
| 1947 | 5.999 | 6.2209 | 12.4418 | 0 | 0 | 4607.12 | 4.896606 | 4.915061 | 18.38 | 18.38 |
| 1669 | 6.9982 | 7.257 | 14.514 | 0 | 0 | 5009.33 | 4.896373 | 4.915294 | 30.10 | 30.00 |
| 1460 | 8 | 8.2959 | 16.5918 | 0 | 0 | 5150.91 | 4.906418 | 4.905250 | 38.88 | 38.78 |
| 1298 | 8.9985 | 9.3313 | 18.6626 | 0 | 0 | 5222.86 | 4.904782 | 4.906885 | 45.56 | 45.52 |
| 1168 | 10 | 10.3699 | 20.7398 | 0 | 0 | 5265.95 | 4.920317 | 4.891350 | 50.88 | 51.16 |

Table 3‑3: Average queue, throughput(s) and packet loss rate(s) as a function of the traffic generation rate

We can infer the following from Table 3‑3.

  1. There is queue buildup at Router_5 (Link 3) as the combined input flow rate increases. So, Link 3 is the bottleneck link for the two flows.

  2. The traffic generation rate matches the application throughput(s) (with nearly zero packet loss rate) when the combined input flow rate is less than the capacity of the bottleneck link.

  3. As the combined input flow rate reaches or exceeds the bottleneck link capacity, the average queue at the (bottleneck) server at Router 5 increases unbounded (limited by the buffer size) and the packet loss rate increases as well.

  4. The two flows share the available link capacity and each sees a maximum application throughput of about 4.9 Mbps (roughly half of the bottleneck link capacity of 10 Mbps).

Useful Exercises#

  1. Redo the single flow experiment with constant inter-arrival time for the application process. Comment on average queue evolution and maximum throughput at the links.

  2. Redo the single flow experiment with a small buffer size (8 KB) at the bottleneck link 2. Compute the average queue evolution at the bottleneck link and the average throughput of the flow as a function of the traffic generation rate. Compare it with the case when the buffer size is 8 MB.

  3. Redo the single flow experiment with a bottleneck link capacity of 100 Mbps. Evaluate the average queue as a function of the traffic generation rate. Now, plot the average queue as a function of the offered load and compare it with the case of bottleneck link with 10 Mbps capacity (studied in the report). Comment.